Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.12-rc6).

Conflicts:

drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
cbe84e9ad5e2 ("wifi: iwlwifi: mvm: really send iwl_txpower_constraints_cmd")
188a1bf89432 ("wifi: mac80211: re-order assigning channel in activate links")
https://lore.kernel.org/all/20241028123621.7bbb131b@canb.auug.org.au/

net/mac80211/cfg.c
c4382d5ca1af ("wifi: mac80211: update the right link for tx power")
8dd0498983ee ("wifi: mac80211: Fix setting txpower with emulate_chanctx")

drivers/net/ethernet/intel/ice/ice_ptp_hw.h
6e58c3310622 ("ice: fix crash on probe for DPLL enabled E810 LOM")
e4291b64e118 ("ice: Align E810T GPIO to other products")
ebb2693f8fbd ("ice: Read SDP section from NVM for pin definitions")
ac532f4f4251 ("ice: Cleanup unused declarations")
https://lore.kernel.org/all/20241030120524.1ee1af18@canb.auug.org.au/

No adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+3568 -3245
+10 -10
Documentation/admin-guide/pm/cpufreq.rst
···
 
 ``rate_limit_us``
     Minimum time (in microseconds) that has to pass between two consecutive
-    runs of governor computations (default: 1000 times the scaling driver's
-    transition latency).
+    runs of governor computations (default: 1.5 times the scaling driver's
+    transition latency or the maximum 2ms).
 
     The purpose of this tunable is to reduce the scheduler context overhead
     of the governor which might be excessive without it.
···
     This is how often the governor's worker routine should run, in
     microseconds.
 
-    Typically, it is set to values of the order of 10000 (10 ms). Its
-    default value is equal to the value of ``cpuinfo_transition_latency``
-    for each policy this governor is attached to (but since the unit here
-    is greater by 1000, this means that the time represented by
-    ``sampling_rate`` is 1000 times greater than the transition latency by
-    default).
+    Typically, it is set to values of the order of 2000 (2 ms). Its
+    default value is to add a 50% breathing room
+    to ``cpuinfo_transition_latency`` on each policy this governor is
+    attached to. The minimum is typically the length of two scheduler
+    ticks.
 
     If this tunable is per-policy, the following shell command sets the time
-    represented by it to be 750 times as high as the transition latency::
+    represented by it to be 1.5 times as high as the transition latency
+    (the default)::
 
-    # echo `$(($(cat cpuinfo_transition_latency) * 750 / 1000)) > ondemand/sampling_rate
+    # echo `$(($(cat cpuinfo_transition_latency) * 3 / 2)) > ondemand/sampling_rate
 
 ``up_threshold``
     If the estimated CPU load is above this value (in percent), the governor
+9 -9
Documentation/devicetree/bindings/sound/davinci-mcasp-audio.yaml
···
       default: 2
 
   interrupts:
-    oneOf:
-      - minItems: 1
-        items:
-          - description: TX interrupt
-          - description: RX interrupt
-      - items:
-          - description: common/combined interrupt
+    minItems: 1
+    maxItems: 2
 
   interrupt-names:
     oneOf:
-      - minItems: 1
+      - description: TX interrupt
+        const: tx
+      - description: RX interrupt
+        const: rx
+      - description: TX and RX interrupts
         items:
           - const: tx
           - const: rx
-          - const: common
+      - description: Common/combined interrupt
+        const: common
 
   fck_parent:
     $ref: /schemas/types.yaml#/definitions/string
+4
Documentation/devicetree/bindings/sound/rockchip,rk3308-codec.yaml
···
       - const: mclk_rx
       - const: hclk
 
+  port:
+    $ref: audio-graph-port.yaml#
+    unevaluatedProperties: false
+
   resets:
     maxItems: 1
 
+3 -2
Documentation/networking/packet_mmap.rst
···
 Howto can be found at:
 
-    https://sites.google.com/site/packetmmap/
+    https://web.archive.org/web/20220404160947/https://sites.google.com/site/packetmmap/
 
 Please send your comments to
     - Ulisses Alonso Camaró <uaca@i.hate.spam.alumni.uv.es>
···
     /* bind socket to eth0 */
     bind(this->socket, (struct sockaddr *)&my_addr, sizeof(struct sockaddr_ll));
 
-A complete tutorial is available at: https://sites.google.com/site/packetmmap/
+A complete tutorial is available at:
+https://web.archive.org/web/20220404160947/https://sites.google.com/site/packetmmap/
 
 By default, the user should put data at::
+125 -136
Documentation/userspace-api/mseal.rst
···
 A similar feature already exists in the XNU kernel with the
 VM_FLAGS_PERMANENT flag [1] and on OpenBSD with the mimmutable syscall [2].
 
-User API
-========
-mseal()
------------
-The mseal() syscall has the following signature:
+SYSCALL
+=======
+mseal syscall signature
+-----------------------
+``int mseal(void \* addr, size_t len, unsigned long flags)``
 
-``int mseal(void addr, size_t len, unsigned long flags)``
+**addr**/**len**: virtual memory address range.
+The address range set by **addr**/**len** must meet:
+   - The start address must be in an allocated VMA.
+   - The start address must be page aligned.
+   - The end address (**addr** + **len**) must be in an allocated VMA.
+   - no gap (unallocated memory) between start and end address.
 
-**addr/len**: virtual memory address range.
+The ``len`` will be paged aligned implicitly by the kernel.
 
-The address range set by ``addr``/``len`` must meet:
-   - The start address must be in an allocated VMA.
-   - The start address must be page aligned.
-   - The end address (``addr`` + ``len``) must be in an allocated VMA.
-   - no gap (unallocated memory) between start and end address.
+**flags**: reserved for future use.
 
-The ``len`` will be paged aligned implicitly by the kernel.
+**Return values**:
+   - **0**: Success.
+   - **-EINVAL**:
+      * Invalid input ``flags``.
+      * The start address (``addr``) is not page aligned.
+      * Address range (``addr`` + ``len``) overflow.
+   - **-ENOMEM**:
+      * The start address (``addr``) is not allocated.
+      * The end address (``addr`` + ``len``) is not allocated.
+      * A gap (unallocated memory) between start and end address.
+   - **-EPERM**:
+      * sealing is supported only on 64-bit CPUs, 32-bit is not supported.
 
-**flags**: reserved for future use.
+**Note about error return**:
+   - For above error cases, users can expect the given memory range is
+     unmodified, i.e. no partial update.
+   - There might be other internal errors/cases not listed here, e.g.
+     error during merging/splitting VMAs, or the process reaching the maximum
+     number of supported VMAs. In those cases, partial updates to the given
+     memory range could happen. However, those cases should be rare.
 
-**return values**:
+**Architecture support**:
+   mseal only works on 64-bit CPUs, not 32-bit CPUs.
 
-- ``0``: Success.
+**Idempotent**:
+   users can call mseal multiple times. mseal on an already sealed memory
+   is a no-action (not error).
 
-- ``-EINVAL``:
-  - Invalid input ``flags``.
-  - The start address (``addr``) is not page aligned.
-  - Address range (``addr`` + ``len``) overflow.
+**no munseal**
+   Once mapping is sealed, it can't be unsealed. The kernel should never
+   have munseal, this is consistent with other sealing feature, e.g.
+   F_SEAL_SEAL for file.
 
-- ``-ENOMEM``:
-  - The start address (``addr``) is not allocated.
-  - The end address (``addr`` + ``len``) is not allocated.
-  - A gap (unallocated memory) between start and end address.
+Blocked mm syscall for sealed mapping
+-------------------------------------
+It might be important to note: **once the mapping is sealed, it will
+stay in the process's memory until the process terminates**.
 
-- ``-EPERM``:
-  - sealing is supported only on 64-bit CPUs, 32-bit is not supported.
+Example::
 
-- For above error cases, users can expect the given memory range is
-  unmodified, i.e. no partial update.
+        *ptr = mmap(0, 4096, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, 0, 0);
+        rc = mseal(ptr, 4096, 0);
+        /* munmap will fail */
+        rc = munmap(ptr, 4096);
+        assert(rc < 0);
 
-- There might be other internal errors/cases not listed here, e.g.
-  error during merging/splitting VMAs, or the process reaching the max
-  number of supported VMAs. In those cases, partial updates to the given
-  memory range could happen. However, those cases should be rare.
+Blocked mm syscall:
+   - munmap
+   - mmap
+   - mremap
+   - mprotect and pkey_mprotect
+   - some destructive madvise behaviors: MADV_DONTNEED, MADV_FREE,
+     MADV_DONTNEED_LOCKED, MADV_FREE, MADV_DONTFORK, MADV_WIPEONFORK
 
-**Blocked operations after sealing**:
-   Unmapping, moving to another location, and shrinking the size,
-   via munmap() and mremap(), can leave an empty space, therefore
-   can be replaced with a VMA with a new set of attributes.
+The first set of syscalls to block is munmap, mremap, mmap. They can
+either leave an empty space in the address space, therefore allowing
+replacement with a new mapping with new set of attributes, or can
+overwrite the existing mapping with another mapping.
 
-   Moving or expanding a different VMA into the current location,
-   via mremap().
+mprotect and pkey_mprotect are blocked because they changes the
+protection bits (RWX) of the mapping.
 
-   Modifying a VMA via mmap(MAP_FIXED).
+Certain destructive madvise behaviors, specifically MADV_DONTNEED,
+MADV_FREE, MADV_DONTNEED_LOCKED, and MADV_WIPEONFORK, can introduce
+risks when applied to anonymous memory by threads lacking write
+permissions. Consequently, these operations are prohibited under such
+conditions. The aforementioned behaviors have the potential to modify
+region contents by discarding pages, effectively performing a memset(0)
+operation on the anonymous memory.
 
-   Size expansion, via mremap(), does not appear to pose any
-   specific risks to sealed VMAs. It is included anyway because
-   the use case is unclear. In any case, users can rely on
-   merging to expand a sealed VMA.
+Kernel will return -EPERM for blocked syscalls.
 
-   mprotect() and pkey_mprotect().
+When blocked syscall return -EPERM due to sealing, the memory regions may
+or may not be changed, depends on the syscall being blocked:
 
-   Some destructive madvice() behaviors (e.g. MADV_DONTNEED)
-   for anonymous memory, when users don't have write permission to the
-   memory. Those behaviors can alter region contents by discarding pages,
-   effectively a memset(0) for anonymous memory.
+   - munmap: munmap is atomic. If one of VMAs in the given range is
+     sealed, none of VMAs are updated.
+   - mprotect, pkey_mprotect, madvise: partial update might happen, e.g.
+     when mprotect over multiple VMAs, mprotect might update the beginning
+     VMAs before reaching the sealed VMA and return -EPERM.
+   - mmap and mremap: undefined behavior.
 
-   Kernel will return -EPERM for blocked operations.
-
-   For blocked operations, one can expect the given address is unmodified,
-   i.e. no partial update. Note, this is different from existing mm
-   system call behaviors, where partial updates are made till an error is
-   found and returned to userspace. To give an example:
-
-   Assume following code sequence:
-
-   - ptr = mmap(null, 8192, PROT_NONE);
-   - munmap(ptr + 4096, 4096);
-   - ret1 = mprotect(ptr, 8192, PROT_READ);
-   - mseal(ptr, 4096);
-   - ret2 = mprotect(ptr, 8192, PROT_NONE);
-
-   ret1 will be -ENOMEM, the page from ptr is updated to PROT_READ.
-
-   ret2 will be -EPERM, the page remains to be PROT_READ.
-
-**Note**:
-
-- mseal() only works on 64-bit CPUs, not 32-bit CPU.
-
-- users can call mseal() multiple times, mseal() on an already sealed memory
-  is a no-action (not error).
-
-- munseal() is not supported.
-
-Use cases:
-==========
+Use cases
+=========
 - glibc:
   The dynamic linker, during loading ELF executables, can apply sealing to
-  non-writable memory segments.
+  mapping segments.
 
-- Chrome browser: protect some security sensitive data-structures.
+- Chrome browser: protect some security sensitive data structures.
 
-Notes on which memory to seal:
-==============================
-
-It might be important to note that sealing changes the lifetime of a mapping,
-i.e. the sealed mapping won’t be unmapped till the process terminates or the
-exec system call is invoked. Applications can apply sealing to any virtual
-memory region from userspace, but it is crucial to thoroughly analyze the
-mapping's lifetime prior to apply the sealing.
+When not to use mseal
+=====================
+Applications can apply sealing to any virtual memory region from userspace,
+but it is *crucial to thoroughly analyze the mapping's lifetime* prior to
+apply the sealing. This is because the sealed mapping *won’t be unmapped*
+until the process terminates or the exec system call is invoked.
 
 For example:
+   - aio/shm
+     aio/shm can call mmap and munmap on behalf of userspace, e.g.
+     ksys_shmdt() in shm.c. The lifetimes of those mapping are not tied to
+     the lifetime of the process. If those memories are sealed from userspace,
+     then munmap will fail, causing leaks in VMA address space during the
+     lifetime of the process.
 
-- aio/shm
+   - ptr allocated by malloc (heap)
+     Don't use mseal on the memory ptr return from malloc().
+     malloc() is implemented by allocator, e.g. by glibc. Heap manager might
+     allocate a ptr from brk or mapping created by mmap.
+     If an app calls mseal on a ptr returned from malloc(), this can affect
+     the heap manager's ability to manage the mappings; the outcome is
+     non-deterministic.
 
-  aio/shm can call mmap()/munmap() on behalf of userspace, e.g. ksys_shmdt() in
-  shm.c. The lifetime of those mapping are not tied to the lifetime of the
-  process. If those memories are sealed from userspace, then munmap() will fail,
-  causing leaks in VMA address space during the lifetime of the process.
+     Example::
 
-- Brk (heap)
+        ptr = malloc(size);
+        /* don't call mseal on ptr return from malloc. */
+        mseal(ptr, size);
+        /* free will success, allocator can't shrink heap lower than ptr */
+        free(ptr);
 
-  Currently, userspace applications can seal parts of the heap by calling
-  malloc() and mseal().
-  let's assume following calls from user space:
+mseal doesn't block
+===================
+In a nutshell, mseal blocks certain mm syscall from modifying some of VMA's
+attributes, such as protection bits (RWX). Sealed mappings doesn't mean the
+memory is immutable.
 
-  - ptr = malloc(size);
-  - mprotect(ptr, size, RO);
-  - mseal(ptr, size);
-  - free(ptr);
-
-  Technically, before mseal() is added, the user can change the protection of
-  the heap by calling mprotect(RO). As long as the user changes the protection
-  back to RW before free(), the memory range can be reused.
-
-  Adding mseal() into the picture, however, the heap is then sealed partially,
-  the user can still free it, but the memory remains to be RO. If the address
-  is re-used by the heap manager for another malloc, the process might crash
-  soon after. Therefore, it is important not to apply sealing to any memory
-  that might get recycled.
-
-  Furthermore, even if the application never calls the free() for the ptr,
-  the heap manager may invoke the brk system call to shrink the size of the
-  heap. In the kernel, the brk-shrink will call munmap(). Consequently,
-  depending on the location of the ptr, the outcome of brk-shrink is
-  nondeterministic.
-
-
-Additional notes:
-=================
 As Jann Horn pointed out in [3], there are still a few ways to write
-to RO memory, which is, in a way, by design. Those cases are not covered
-by mseal(). If applications want to block such cases, sandbox tools (such as
-seccomp, LSM, etc) might be considered.
+to RO memory, which is, in a way, by design. And those could be blocked
+by different security measures.
 
 Those cases are:
 
-- Write to read-only memory through /proc/self/mem interface.
-- Write to read-only memory through ptrace (such as PTRACE_POKETEXT).
-- userfaultfd.
+- Write to read-only memory through /proc/self/mem interface (FOLL_FORCE).
+- Write to read-only memory through ptrace (such as PTRACE_POKETEXT).
+- userfaultfd.
 
 The idea that inspired this patch comes from Stephen Röttger’s work in V8
 CFI [4]. Chrome browser in ChromeOS will be the first user of this API.
 
-Reference:
-==========
-[1] https://github.com/apple-oss-distributions/xnu/blob/1031c584a5e37aff177559b9f69dbd3c8c3fd30a/osfmk/mach/vm_statistics.h#L274
-
-[2] https://man.openbsd.org/mimmutable.2
-
-[3] https://lore.kernel.org/lkml/CAG48ez3ShUYey+ZAFsU2i1RpQn0a5eOs2hzQ426FkcgnfUGLvA@mail.gmail.com
-
-[4] https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyXgeaRHo/edit#heading=h.bvaojj9fu6hc
+Reference
+=========
+- [1] https://github.com/apple-oss-distributions/xnu/blob/1031c584a5e37aff177559b9f69dbd3c8c3fd30a/osfmk/mach/vm_statistics.h#L274
+- [2] https://man.openbsd.org/mimmutable.2
+- [3] https://lore.kernel.org/lkml/CAG48ez3ShUYey+ZAFsU2i1RpQn0a5eOs2hzQ426FkcgnfUGLvA@mail.gmail.com
+- [4] https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyXgeaRHo/edit#heading=h.bvaojj9fu6hc
+5 -1
MAINTAINERS
···
 F:	include/linux/gpio.h
 F:	include/linux/gpio/
 F:	include/linux/of_gpio.h
+K:	(devm_)?gpio_(request|free|direction|get|set)
 
 GPIO UAPI
 M:	Bartosz Golaszewski <brgl@bgdev.pl>
···
 MICROCHIP AUDIO ASOC DRIVERS
 M:	Claudiu Beznea <claudiu.beznea@tuxon.dev>
+M:	Andrei Simion <andrei.simion@microchip.com>
 L:	linux-sound@vger.kernel.org
 S:	Supported
 F:	Documentation/devicetree/bindings/sound/atmel*
···
 MICROCHIP MCP16502 PMIC DRIVER
 M:	Claudiu Beznea <claudiu.beznea@tuxon.dev>
+M:	Andrei Simion <andrei.simion@microchip.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Supported
 F:	Documentation/devicetree/bindings/regulator/microchip,mcp16502.yaml
···
 MICROCHIP SSC DRIVER
 M:	Claudiu Beznea <claudiu.beznea@tuxon.dev>
+M:	Andrei Simion <andrei.simion@microchip.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Supported
 F:	Documentation/devicetree/bindings/misc/atmel-ssc.txt
···
 F:	drivers/iio/adc/ti-lmp92064.c
 
 TI PCM3060 ASoC CODEC DRIVER
-M:	Kirill Marinushkin <kmarinushkin@birdec.com>
+M:	Kirill Marinushkin <k.marinushkin@gmail.com>
 L:	linux-sound@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/sound/pcm3060.txt
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 12
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = Baby Opossum Posse
 
 # *DOCUMENTATION*
+10 -2
arch/arm64/net/bpf_jit_comp.c
···
 	emit(A64_STR64I(A64_R(20), A64_SP, regs_off + 8), ctx);
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
-		emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
+		/* for the first pass, assume the worst case */
+		if (!ctx->image)
+			ctx->idx += 4;
+		else
+			emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
 		emit_call((const u64)__bpf_tramp_enter, ctx);
 	}
 
···
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
 		im->ip_epilogue = ctx->ro_image + ctx->idx;
-		emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
+		/* for the first pass, assume the worst case */
+		if (!ctx->image)
+			ctx->idx += 4;
+		else
+			emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
 		emit_call((const u64)__bpf_tramp_exit, ctx);
 	}
 
+1
arch/x86/Kconfig
···
 config ADDRESS_MASKING
 	bool "Linear Address Masking support"
 	depends on X86_64
+	depends on COMPILE_TEST || !CPU_MITIGATIONS # wait for LASS
 	help
 	  Linear Address Masking (LAM) modifies the checking that is applied
 	  to 64-bit linear addresses, allowing software to use of the
+2 -2
arch/x86/include/asm/runtime-const.h
···
 	typeof(sym) __ret; \
 	asm_inline("mov %1,%0\n1:\n" \
 		".pushsection runtime_ptr_" #sym ",\"a\"\n\t" \
-		".long 1b - %c2 - .\n\t" \
+		".long 1b - %c2 - .\n" \
 		".popsection" \
 		:"=r" (__ret) \
 		:"i" ((unsigned long)0x0123456789abcdefull), \
···
 	typeof(0u+(val)) __ret = (val); \
 	asm_inline("shrl $12,%k0\n1:\n" \
 		".pushsection runtime_shift_" #sym ",\"a\"\n\t" \
-		".long 1b - 1 - .\n\t" \
+		".long 1b - 1 - .\n" \
 		".popsection" \
 		:"+r" (__ret)); \
 	__ret; })
+24 -19
arch/x86/include/asm/uaccess_64.h
···
 #include <asm/cpufeatures.h>
 #include <asm/page.h>
 #include <asm/percpu.h>
+#include <asm/runtime-const.h>
+
+/*
+ * Virtual variable: there's no actual backing store for this,
+ * it can purely be used as 'runtime_const_ptr(USER_PTR_MAX)'
+ */
+extern unsigned long USER_PTR_MAX;
 
 #ifdef CONFIG_ADDRESS_MASKING
 /*
···
 
 #endif
 
-/*
- * The virtual address space space is logically divided into a kernel
- * half and a user half. When cast to a signed type, user pointers
- * are positive and kernel pointers are negative.
- */
-#define valid_user_address(x) ((__force long)(x) >= 0)
+#define valid_user_address(x) \
+	((__force unsigned long)(x) <= runtime_const_ptr(USER_PTR_MAX))
 
 /*
  * Masking the user address is an alternative to a conditional
  * user_access_begin that can avoid the fencing. This only works
  * for dense accesses starting at the address.
  */
-#define mask_user_address(x) ((typeof(x))((long)(x)|((long)(x)>>63)))
+static inline void __user *mask_user_address(const void __user *ptr)
+{
+	unsigned long mask;
+	asm("cmp %1,%0\n\t"
+	    "sbb %0,%0"
+		:"=r" (mask)
+		:"r" (ptr),
+		 "0" (runtime_const_ptr(USER_PTR_MAX)));
+	return (__force void __user *)(mask | (__force unsigned long)ptr);
+}
 #define masked_user_access_begin(x) ({ \
 	__auto_type __masked_ptr = (x); \
 	__masked_ptr = mask_user_address(__masked_ptr); \
···
  * arbitrary values in those bits rather then masking them off.
  *
  * Enforce two rules:
- * 1. 'ptr' must be in the user half of the address space
+ * 1. 'ptr' must be in the user part of the address space
  * 2. 'ptr+size' must not overflow into kernel addresses
  *
- * Note that addresses around the sign change are not valid addresses,
- * and will GP-fault even with LAM enabled if the sign bit is set (see
- * "CR3.LAM_SUP" that can narrow the canonicality check if we ever
- * enable it, but not remove it entirely).
- *
- * So the "overflow into kernel addresses" does not imply some sudden
- * exact boundary at the sign bit, and we can allow a lot of slop on the
- * size check.
+ * Note that we always have at least one guard page between the
+ * max user address and the non-canonical gap, allowing us to
+ * ignore small sizes entirely.
  *
  * In fact, we could probably remove the size check entirely, since
  * any kernel accesses will be in increasing address order starting
- * at 'ptr', and even if the end might be in kernel space, we'll
- * hit the GP faults for non-canonical accesses before we ever get
- * there.
+ * at 'ptr'.
  *
  * That's a separate optimization, for now just handle the small
  * constant case.
+10
arch/x86/kernel/cpu/common.c
···
 #include <asm/sev.h>
 #include <asm/tdx.h>
 #include <asm/posted_intr.h>
+#include <asm/runtime-const.h>
 
 #include "cpu.h"
 
···
 	alternative_instructions();
 
 	if (IS_ENABLED(CONFIG_X86_64)) {
+		unsigned long USER_PTR_MAX = TASK_SIZE_MAX-1;
+
+		/*
+		 * Enable this when LAM is gated on LASS support
+		if (cpu_feature_enabled(X86_FEATURE_LAM))
+			USER_PTR_MAX = (1ul << 63) - PAGE_SIZE - 1;
+		 */
+		runtime_const_init(ptr, USER_PTR_MAX);
+
 		/*
 		 * Make sure the first 2MB area is not mapped by huge pages
 		 * There are typically fixed size MTRRs in there and overlapping
+36 -17
arch/x86/kernel/cpu/microcode/amd.c
···
 	native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy);
 }
 
-static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size);
+static enum ucode_state _load_microcode_amd(u8 family, const u8 *data, size_t size);
 
 static int __init save_microcode_in_initrd(void)
 {
···
 	if (!desc.mc)
 		return -EINVAL;
 
-	ret = load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
+	ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
 	if (ret > UCODE_UPDATED)
 		return -EINVAL;
 
···
 }
 early_initcall(save_microcode_in_initrd);
 
-static inline bool patch_cpus_equivalent(struct ucode_patch *p, struct ucode_patch *n)
+static inline bool patch_cpus_equivalent(struct ucode_patch *p,
+					 struct ucode_patch *n,
+					 bool ignore_stepping)
 {
 	/* Zen and newer hardcode the f/m/s in the patch ID */
 	if (x86_family(bsp_cpuid_1_eax) >= 0x17) {
 		union cpuid_1_eax p_cid = ucode_rev_to_cpuid(p->patch_id);
 		union cpuid_1_eax n_cid = ucode_rev_to_cpuid(n->patch_id);
 
-		/* Zap stepping */
-		p_cid.stepping = 0;
-		n_cid.stepping = 0;
+		if (ignore_stepping) {
+			p_cid.stepping = 0;
+			n_cid.stepping = 0;
+		}
 
 		return p_cid.full == n_cid.full;
 	} else {
···
 	WARN_ON_ONCE(!n.patch_id);
 
 	list_for_each_entry(p, &microcode_cache, plist)
-		if (patch_cpus_equivalent(p, &n))
+		if (patch_cpus_equivalent(p, &n, false))
 			return p;
 
 	return NULL;
 }
 
-static inline bool patch_newer(struct ucode_patch *p, struct ucode_patch *n)
+static inline int patch_newer(struct ucode_patch *p, struct ucode_patch *n)
 {
 	/* Zen and newer hardcode the f/m/s in the patch ID */
 	if (x86_family(bsp_cpuid_1_eax) >= 0x17) {
···
 		zp.ucode_rev = p->patch_id;
 		zn.ucode_rev = n->patch_id;
+
+		if (zn.stepping != zp.stepping)
+			return -1;
 
 		return zn.rev > zp.rev;
 	} else {
···
 static void update_cache(struct ucode_patch *new_patch)
 {
 	struct ucode_patch *p;
+	int ret;
 
 	list_for_each_entry(p, &microcode_cache, plist) {
-		if (patch_cpus_equivalent(p, new_patch)) {
-			if (!patch_newer(p, new_patch)) {
+		if (patch_cpus_equivalent(p, new_patch, true)) {
+			ret = patch_newer(p, new_patch);
+			if (ret < 0)
+				continue;
+			else if (!ret) {
 				/* we already have the latest patch */
 				kfree(new_patch->data);
 				kfree(new_patch);
···
 	return UCODE_OK;
 }
 
+static enum ucode_state _load_microcode_amd(u8 family, const u8 *data, size_t size)
+{
+	enum ucode_state ret;
+
+	/* free old equiv table */
+	free_equiv_cpu_table();
+
+	ret = __load_microcode_amd(family, data, size);
+	if (ret != UCODE_OK)
+		cleanup();
+
+	return ret;
+}
+
 static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size)
 {
 	struct cpuinfo_x86 *c;
···
 	struct ucode_patch *p;
 	enum ucode_state ret;
 
-	/* free old equiv table */
-	free_equiv_cpu_table();
-
-	ret = __load_microcode_amd(family, data, size);
-	if (ret != UCODE_OK) {
-		cleanup();
+	ret = _load_microcode_amd(family, data, size);
+	if (ret != UCODE_OK)
 		return ret;
-	}
 
 	for_each_node(nid) {
 		cpu = cpumask_first(cpumask_of_node(nid));
+6 -6
arch/x86/kernel/traps.c
···
 	int ud_type;
 	u32 imm;
 
-	/*
-	 * Normally @regs are unpoisoned by irqentry_enter(), but handle_bug()
-	 * is a rare case that uses @regs without passing them to
-	 * irqentry_enter().
-	 */
-	kmsan_unpoison_entry_regs(regs);
 	ud_type = decode_bug(regs->ip, &imm);
 	if (ud_type == BUG_NONE)
 		return handled;
···
 	 * All lies, just get the WARN/BUG out.
 	 */
 	instrumentation_begin();
+	/*
+	 * Normally @regs are unpoisoned by irqentry_enter(), but handle_bug()
+	 * is a rare case that uses @regs without passing them to
+	 * irqentry_enter().
+	 */
+	kmsan_unpoison_entry_regs(regs);
 	/*
 	 * Since we're emulating a CALL with exceptions, restore the interrupt
 	 * state to what it was at the exception site.
+1
arch/x86/kernel/vmlinux.lds.S
···
 #endif
 
 	RUNTIME_CONST_VARIABLES
+	RUNTIME_CONST(ptr, USER_PTR_MAX)
 
 	. = ALIGN(PAGE_SIZE);
 
+7 -2
arch/x86/lib/getuser.S
···
 
 .macro check_range size:req
 .if IS_ENABLED(CONFIG_X86_64)
-	mov %rax, %rdx
-	sar $63, %rdx
+	movq $0x0123456789abcdef,%rdx
+1:
+	.pushsection runtime_ptr_USER_PTR_MAX,"a"
+	.long 1b - 8 - .
+	.popsection
+	cmp %rax, %rdx
+	sbb %rdx, %rdx
 	or %rdx, %rax
 .else
 	cmp $TASK_SIZE_MAX-\size+1, %eax
+2
arch/x86/virt/svm/sev.c
···
 		e820__range_update(pa, PMD_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED);
 		e820__range_update_table(e820_table_kexec, pa, PMD_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED);
 		e820__range_update_table(e820_table_firmware, pa, PMD_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED);
+		if (!memblock_is_region_reserved(pa, PMD_SIZE))
+			memblock_reserve(pa, PMD_SIZE);
 	}
 }
 
+1 -3
block/blk-map.c
···
 		if (nsegs >= nr_segs || bytes > UINT_MAX - bv->bv_len)
 			goto put_bio;
 		if (bytes + bv->bv_len > nr_iter)
-			goto put_bio;
-		if (bv->bv_offset + bv->bv_len > PAGE_SIZE)
-			goto put_bio;
+			break;
 
 		nsegs++;
 		bytes += bv->bv_len;
+11
drivers/acpi/button.c
···
 		},
 		.driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_OPEN,
 	},
+	{
+		/*
+		 * Samsung galaxybook2 ,initial _LID device notification returns
+		 * lid closed.
+		 */
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "750XED"),
+		},
+		.driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_OPEN,
+	},
 	{}
 };
 
+17 -5
drivers/acpi/cppc_acpi.c
··· 1916 1916 u64 mul, div; 1917 1917 1918 1918 if (caps->lowest_freq && caps->nominal_freq) { 1919 - mul = caps->nominal_freq - caps->lowest_freq; 1919 + /* Avoid special case when nominal_freq is equal to lowest_freq */ 1920 + if (caps->lowest_freq == caps->nominal_freq) { 1921 + mul = caps->nominal_freq; 1922 + div = caps->nominal_perf; 1923 + } else { 1924 + mul = caps->nominal_freq - caps->lowest_freq; 1925 + div = caps->nominal_perf - caps->lowest_perf; 1926 + } 1920 1927 mul *= KHZ_PER_MHZ; 1921 - div = caps->nominal_perf - caps->lowest_perf; 1922 1928 offset = caps->nominal_freq * KHZ_PER_MHZ - 1923 1929 div64_u64(caps->nominal_perf * mul, div); 1924 1930 } else { ··· 1945 1939 { 1946 1940 s64 retval, offset = 0; 1947 1941 static u64 max_khz; 1948 - u64 mul, div; 1942 + u64 mul, div; 1949 1943 1950 1944 if (caps->lowest_freq && caps->nominal_freq) { 1951 - mul = caps->nominal_perf - caps->lowest_perf; 1952 - div = caps->nominal_freq - caps->lowest_freq; 1945 + /* Avoid special case when nominal_freq is equal to lowest_freq */ 1946 + if (caps->lowest_freq == caps->nominal_freq) { 1947 + mul = caps->nominal_perf; 1948 + div = caps->nominal_freq; 1949 + } else { 1950 + mul = caps->nominal_perf - caps->lowest_perf; 1951 + div = caps->nominal_freq - caps->lowest_freq; 1952 + } 1953 1953 /* 1954 1954 * We don't need to convert to kHz for computing offset and can 1955 1955 * directly use nominal_freq and lowest_freq as the div64_u64
+23 -6
drivers/acpi/prmt.c
··· 52 52 static LIST_HEAD(prm_module_list); 53 53 
54 54 struct prm_handler_info { 
55 - guid_t guid; 
55 + efi_guid_t guid; 
56 56 efi_status_t (__efiapi *handler_addr)(u64, void *); 
57 57 u64 static_data_buffer_addr; 
58 58 u64 acpi_param_buffer_addr; 
··· 72 72 struct prm_handler_info handlers[] __counted_by(handler_count); 
73 73 }; 
74 74 
75 - static u64 efi_pa_va_lookup(u64 pa) 
75 + static u64 efi_pa_va_lookup(efi_guid_t *guid, u64 pa) 
76 76 { 
77 77 efi_memory_desc_t *md; 
78 78 u64 pa_offset = pa & ~PAGE_MASK; 
79 79 u64 page = pa & PAGE_MASK; 
80 80 
81 81 for_each_efi_memory_desc(md) { 
82 - if (md->phys_addr < pa && pa < md->phys_addr + PAGE_SIZE * md->num_pages) 
82 + if ((md->attribute & EFI_MEMORY_RUNTIME) && 
83 + (md->phys_addr < pa && pa < md->phys_addr + PAGE_SIZE * md->num_pages)) { 
83 84 return pa_offset + md->virt_addr + page - md->phys_addr; 
85 + } 
84 86 } 
87 + 
88 + pr_warn("Failed to find VA for GUID: %pUL, PA: 0x%llx", guid, pa); 
85 89 
86 90 return 0; 
87 91 } 
··· 152 148 th = &tm->handlers[cur_handler]; 
153 149 
154 150 guid_copy(&th->guid, (guid_t *)handler_info->handler_guid); 
155 - th->handler_addr = (void *)efi_pa_va_lookup(handler_info->handler_address); 
156 - th->static_data_buffer_addr = efi_pa_va_lookup(handler_info->static_data_buffer_address); 
157 - th->acpi_param_buffer_addr = efi_pa_va_lookup(handler_info->acpi_param_buffer_address); 
151 + th->handler_addr = 
152 + (void *)efi_pa_va_lookup(&th->guid, handler_info->handler_address); 
153 + 
154 + th->static_data_buffer_addr = 
155 + efi_pa_va_lookup(&th->guid, handler_info->static_data_buffer_address); 
156 + 
157 + th->acpi_param_buffer_addr = 
158 + efi_pa_va_lookup(&th->guid, handler_info->acpi_param_buffer_address); 
159 + 
158 160 } while (++cur_handler < tm->handler_count && (handler_info = get_next_handler(handler_info))); 
159 161 
160 162 return 0; 
··· 286 276 module = find_prm_module(&buffer->handler_guid); 
287 277 if (!handler || !module) 
288 278 goto invalid_guid; 
279 + 
280 + if (!handler->handler_addr || 
281 + !handler->static_data_buffer_addr || 
282 + !handler->acpi_param_buffer_addr) { 
283 + buffer->prm_status = PRM_HANDLER_ERROR; 
284 + return AE_OK; 
285 + } 
289 286 
290 287 ACPI_COPY_NAMESEG(context.signature, "PRMC"); 
291 288 context.revision = 0x0;
+7
drivers/acpi/resource.c
··· 503 503 DMI_MATCH(DMI_BOARD_NAME, "17U70P"), 504 504 }, 505 505 }, 506 + { 507 + /* LG Electronics 16T90SP */ 508 + .matches = { 509 + DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"), 510 + DMI_MATCH(DMI_BOARD_NAME, "16T90SP"), 511 + }, 512 + }, 506 513 { } 507 514 }; 508 515
+1
drivers/ata/libata-eh.c
··· 651 651 /* the scmd has an associated qc */ 652 652 if (!(qc->flags & ATA_QCFLAG_EH)) { 653 653 /* which hasn't failed yet, timeout */ 654 + set_host_byte(scmd, DID_TIME_OUT); 654 655 qc->err_mask |= AC_ERR_TIMEOUT; 655 656 qc->flags |= ATA_QCFLAG_EH; 656 657 nr_timedout++;
+10
drivers/char/tpm/tpm-chip.c
··· 674 674 */ 675 675 void tpm_chip_unregister(struct tpm_chip *chip) 676 676 { 677 + #ifdef CONFIG_TCG_TPM2_HMAC 678 + int rc; 679 + 680 + rc = tpm_try_get_ops(chip); 681 + if (!rc) { 682 + tpm2_end_auth_session(chip); 683 + tpm_put_ops(chip); 684 + } 685 + #endif 686 + 677 687 tpm_del_legacy_sysfs(chip); 678 688 if (tpm_is_hwrng_enabled(chip)) 679 689 hwrng_unregister(&chip->hwrng);
+3
drivers/char/tpm/tpm-dev-common.c
··· 27 27 struct tpm_header *header = (void *)buf; 28 28 ssize_t ret, len; 29 29 30 + if (chip->flags & TPM_CHIP_FLAG_TPM2) 31 + tpm2_end_auth_session(chip); 32 + 30 33 ret = tpm2_prepare_space(chip, space, buf, bufsiz); 31 34 /* If the command is not implemented by the TPM, synthesize a 32 35 * response with a TPM2_RC_COMMAND_CODE return for user-space.
+4 -2
drivers/char/tpm/tpm-interface.c
··· 379 379 380 380 rc = tpm_try_get_ops(chip); 381 381 if (!rc) { 382 - if (chip->flags & TPM_CHIP_FLAG_TPM2) 382 + if (chip->flags & TPM_CHIP_FLAG_TPM2) { 383 + tpm2_end_auth_session(chip); 383 384 tpm2_shutdown(chip, TPM2_SU_STATE); 384 - else 385 + } else { 385 386 rc = tpm1_pm_suspend(chip, tpm_suspend_pcr); 387 + } 386 388 387 389 tpm_put_ops(chip); 388 390 }
+60 -40
drivers/char/tpm/tpm2-sessions.c
··· 333 333 } 334 334 
335 335 #ifdef CONFIG_TCG_TPM2_HMAC 
336 + /* The first write to /dev/tpm{rm0} will flush the session. */ 
337 + attributes |= TPM2_SA_CONTINUE_SESSION; 
338 + 
336 339 /* 
337 340 * The Architecture Guide requires us to strip trailing zeros 
338 341 * before computing the HMAC 
··· 487 484 sha256_final(&sctx, out); 
488 485 } 
489 486 
490 - static void tpm_buf_append_salt(struct tpm_buf *buf, struct tpm_chip *chip) 
487 + static void tpm_buf_append_salt(struct tpm_buf *buf, struct tpm_chip *chip, 
488 + struct tpm2_auth *auth) 
491 489 { 
492 490 struct crypto_kpp *kpp; 
493 491 struct kpp_request *req; 
··· 547 543 sg_set_buf(&s[0], chip->null_ec_key_x, EC_PT_SZ); 
548 544 sg_set_buf(&s[1], chip->null_ec_key_y, EC_PT_SZ); 
549 545 kpp_request_set_input(req, s, EC_PT_SZ*2); 
550 - sg_init_one(d, chip->auth->salt, EC_PT_SZ); 
546 + sg_init_one(d, auth->salt, EC_PT_SZ); 
551 547 kpp_request_set_output(req, d, EC_PT_SZ); 
552 548 crypto_kpp_compute_shared_secret(req); 
553 549 kpp_request_free(req); 
··· 558 554 * This works because KDFe fully consumes the secret before it 
559 555 * writes the salt 
560 556 */ 
561 - tpm2_KDFe(chip->auth->salt, "SECRET", x, chip->null_ec_key_x, 
562 - chip->auth->salt); 
557 + tpm2_KDFe(auth->salt, "SECRET", x, chip->null_ec_key_x, auth->salt); 
563 558 
564 559 out: 
565 560 crypto_free_kpp(kpp); 
··· 856 853 if (rc) 
857 854 /* manually close the session if it wasn't consumed */ 
858 855 tpm2_flush_context(chip, auth->handle); 
859 - memzero_explicit(auth, sizeof(*auth)); 
856 + 
857 + kfree_sensitive(auth); 
858 + chip->auth = NULL; 
860 859 } else { 
861 860 /* reset for next use */ 
862 861 auth->session = TPM_HEADER_SIZE; 
··· 886 881 return; 
887 882 
888 883 tpm2_flush_context(chip, auth->handle); 
889 - memzero_explicit(auth, sizeof(*auth)); 
884 + kfree_sensitive(auth); 
885 + chip->auth = NULL; 
890 886 } 
891 887 EXPORT_SYMBOL(tpm2_end_auth_session); 
892 888 
··· 921 915 
922 916 static int tpm2_load_null(struct tpm_chip *chip, u32 *null_key) 
923 917 { 
924 - int rc; 
925 918 unsigned int offset = 0; /* dummy offset for null seed context */ 
926 919 u8 name[SHA256_DIGEST_SIZE + 2]; 
920 + u32 tmp_null_key; 
921 + int rc; 
927 922 
928 923 rc = tpm2_load_context(chip, chip->null_key_context, &offset, 
929 - null_key); 
930 - if (rc != -EINVAL) 
931 - return rc; 
924 + &tmp_null_key); 
925 + if (rc != -EINVAL) { 
926 + if (!rc) 
927 + *null_key = tmp_null_key; 
928 + goto err; 
929 + } 
932 930 
933 - /* an integrity failure may mean the TPM has been reset */ 
934 - dev_err(&chip->dev, "NULL key integrity failure!\n"); 
935 - /* check the null name against what we know */ 
936 - tpm2_create_primary(chip, TPM2_RH_NULL, NULL, name); 
937 - if (memcmp(name, chip->null_key_name, sizeof(name)) == 0) 
938 - /* name unchanged, assume transient integrity failure */ 
939 - return rc; 
940 - /* 
941 - * Fatal TPM failure: the NULL seed has actually changed, so 
942 - * the TPM must have been illegally reset. All in-kernel TPM 
943 - * operations will fail because the NULL primary can't be 
944 - * loaded to salt the sessions, but disable the TPM anyway so 
945 - * userspace programmes can't be compromised by it. 
946 - */ 
947 - dev_err(&chip->dev, "NULL name has changed, disabling TPM due to interference\n"); 
931 + /* Try to re-create null key, given the integrity failure: */ 
932 + rc = tpm2_create_primary(chip, TPM2_RH_NULL, &tmp_null_key, name); 
933 + if (rc) 
934 + goto err; 
935 + 
936 + /* Return null key if the name has not been changed: */ 
937 + if (!memcmp(name, chip->null_key_name, sizeof(name))) { 
938 + *null_key = tmp_null_key; 
939 + return 0; 
940 + } 
941 + 
942 + /* Deduce from the name change TPM interference: */ 
943 + dev_err(&chip->dev, "null key integrity check failed\n"); 
944 + tpm2_flush_context(chip, tmp_null_key); 
948 945 chip->flags |= TPM_CHIP_FLAG_DISABLE; 
949 946 
950 - return rc; 
947 + err: 
948 + return rc ? -ENODEV : 0; 
951 949 } 
952 950 
953 951 /** 
··· 968 958 */ 
969 959 int tpm2_start_auth_session(struct tpm_chip *chip) 
970 960 { 
961 + struct tpm2_auth *auth; 
971 962 struct tpm_buf buf; 
972 - struct tpm2_auth *auth = chip->auth; 
973 - int rc; 
974 963 u32 null_key; 
964 + int rc; 
975 965 
976 - if (!auth) { 
977 - dev_warn_once(&chip->dev, "auth session is not active\n"); 
966 + if (chip->auth) { 
967 + dev_warn_once(&chip->dev, "auth session is active\n"); 
978 968 return 0; 
979 969 } 
970 + 
971 + auth = kzalloc(sizeof(*auth), GFP_KERNEL); 
972 + if (!auth) 
973 + return -ENOMEM; 
980 974 
981 975 rc = tpm2_load_null(chip, &null_key); 
982 976 if (rc) 
··· 1002 988 tpm_buf_append(&buf, auth->our_nonce, sizeof(auth->our_nonce)); 
1003 989 
1004 990 /* append encrypted salt and squirrel away unencrypted in auth */ 
1005 - tpm_buf_append_salt(&buf, chip); 
991 + tpm_buf_append_salt(&buf, chip, auth); 
1006 992 /* session type (HMAC, audit or policy) */ 
1007 993 tpm_buf_append_u8(&buf, TPM2_SE_HMAC); 
1008 994 
··· 1024 1010 
1025 1011 tpm_buf_destroy(&buf); 
1026 1012 
1027 - if (rc) 
1028 - goto out; 
1013 + if (rc == TPM2_RC_SUCCESS) { 
1014 + chip->auth = auth; 
1015 + return 0; 
1016 + } 
1029 1017 
1030 - out: 
1018 + out: 
1019 + kfree_sensitive(auth); 
1031 1020 return rc; 
1032 1021 } 
1033 1022 EXPORT_SYMBOL(tpm2_start_auth_session); 
··· 1364 1347 * 
1365 1348 * Derive and context save the null primary and allocate memory in the 
1366 1349 * struct tpm_chip for the authorizations. 
1350 + * 1351 + * Return: 1352 + * * 0 - OK 1353 + * * -errno - A system error 1354 + * * TPM_RC - A TPM error 1367 1355 */ 1368 1356 int tpm2_sessions_init(struct tpm_chip *chip) 1369 1357 { 1370 1358 int rc; 1371 1359 1372 1360 rc = tpm2_create_null_primary(chip); 1373 - if (rc) 1374 - dev_err(&chip->dev, "TPM: security failed (NULL seed derivation): %d\n", rc); 1375 - 1376 - chip->auth = kmalloc(sizeof(*chip->auth), GFP_KERNEL); 1377 - if (!chip->auth) 1378 - return -ENOMEM; 1361 + if (rc) { 1362 + dev_err(&chip->dev, "null key creation failed with %d\n", rc); 1363 + return rc; 1364 + } 1379 1365 1380 1366 return rc; 1381 1367 }
+1 -1
drivers/firewire/core-topology.c
··· 204 204 // the node->ports array where the parent node should be. Later, 205 205 // when we handle the parent node, we fix up the reference. 206 206 ++parent_count; 207 - node->color = i; 207 + node->color = port_index; 208 208 break; 209 209 210 210 case PHY_PACKET_SELF_ID_PORT_STATUS_CHILD:
+12 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
··· 147 147 struct acpi_buffer *params) 148 148 { 149 149 acpi_status status; 150 + union acpi_object *obj; 150 151 union acpi_object atif_arg_elements[2]; 151 152 struct acpi_object_list atif_arg; 152 153 struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; ··· 170 169 171 170 status = acpi_evaluate_object(atif->handle, NULL, &atif_arg, 172 171 &buffer); 172 + obj = (union acpi_object *)buffer.pointer; 173 173 174 - /* Fail only if calling the method fails and ATIF is supported */ 174 + /* Fail if calling the method fails and ATIF is supported */ 175 175 if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) { 176 176 DRM_DEBUG_DRIVER("failed to evaluate ATIF got %s\n", 177 177 acpi_format_exception(status)); 178 - kfree(buffer.pointer); 178 + kfree(obj); 179 179 return NULL; 180 180 } 181 181 182 - return buffer.pointer; 182 + if (obj->type != ACPI_TYPE_BUFFER) { 183 + DRM_DEBUG_DRIVER("bad object returned from ATIF: %d\n", 184 + obj->type); 185 + kfree(obj); 186 + return NULL; 187 + } 188 + 189 + return obj; 183 190 } 184 191 185 192 /**
+8 -1
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
··· 51 51 #define SDMA0_HYP_DEC_REG_END 0x589a 52 52 #define SDMA1_HYP_DEC_REG_OFFSET 0x20 53 53 54 + /*define for compression field for sdma7*/ 55 + #define SDMA_PKT_CONSTANT_FILL_HEADER_compress_offset 0 56 + #define SDMA_PKT_CONSTANT_FILL_HEADER_compress_mask 0x00000001 57 + #define SDMA_PKT_CONSTANT_FILL_HEADER_compress_shift 16 58 + #define SDMA_PKT_CONSTANT_FILL_HEADER_COMPRESS(x) (((x) & SDMA_PKT_CONSTANT_FILL_HEADER_compress_mask) << SDMA_PKT_CONSTANT_FILL_HEADER_compress_shift) 59 + 54 60 static const struct amdgpu_hwip_reg_entry sdma_reg_list_7_0[] = { 55 61 SOC15_REG_ENTRY_STR(GC, 0, regSDMA0_STATUS_REG), 56 62 SOC15_REG_ENTRY_STR(GC, 0, regSDMA0_STATUS1_REG), ··· 1730 1724 uint64_t dst_offset, 1731 1725 uint32_t byte_count) 1732 1726 { 1733 - ib->ptr[ib->length_dw++] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_CONST_FILL); 1727 + ib->ptr[ib->length_dw++] = SDMA_PKT_CONSTANT_FILL_HEADER_OP(SDMA_OP_CONST_FILL) | 1728 + SDMA_PKT_CONSTANT_FILL_HEADER_COMPRESS(1); 1734 1729 ib->ptr[ib->length_dw++] = lower_32_bits(dst_offset); 1735 1730 ib->ptr[ib->length_dw++] = upper_32_bits(dst_offset); 1736 1731 ib->ptr[ib->length_dw++] = src_data;
+2 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 8374 8374 if (amdgpu_ip_version(adev, DCE_HWIP, 0) < 8375 8375 IP_VERSION(3, 5, 0) || 8376 8376 acrtc_state->stream->link->psr_settings.psr_version < 8377 - DC_PSR_VERSION_UNSUPPORTED) { 8377 + DC_PSR_VERSION_UNSUPPORTED || 8378 + !(adev->flags & AMD_IS_APU)) { 8378 8379 timing = &acrtc_state->stream->timing; 8379 8380 8380 8381 /* at least 2 frames */
+13
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
··· 44 44 45 45 #include "dm_helpers.h" 46 46 #include "ddc_service_types.h" 47 + #include "clk_mgr.h" 47 48 48 49 static u32 edid_extract_panel_id(struct edid *edid) 49 50 { ··· 1122 1121 struct pipe_ctx *pipe_ctx = NULL; 1123 1122 struct amdgpu_dm_connector *aconnector = link->priv; 1124 1123 struct drm_device *dev = aconnector->base.dev; 1124 + struct dc_state *dc_state = ctx->dc->current_state; 1125 + struct clk_mgr *clk_mgr = ctx->dc->clk_mgr; 1125 1126 int i; 1126 1127 1127 1128 for (i = 0; i < MAX_PIPES; i++) { ··· 1223 1220 1224 1221 pipe_ctx->stream->test_pattern.type = test_pattern; 1225 1222 pipe_ctx->stream->test_pattern.color_space = test_pattern_color_space; 1223 + 1224 + /* Temp W/A for compliance test failure */ 1225 + dc_state->bw_ctx.bw.dcn.clk.p_state_change_support = false; 1226 + dc_state->bw_ctx.bw.dcn.clk.dramclk_khz = clk_mgr->dc_mode_softmax_enabled ? 1227 + clk_mgr->bw_params->dc_mode_softmax_memclk : clk_mgr->bw_params->max_memclk_mhz; 1228 + dc_state->bw_ctx.bw.dcn.clk.idle_dramclk_khz = dc_state->bw_ctx.bw.dcn.clk.dramclk_khz; 1229 + ctx->dc->clk_mgr->funcs->update_clocks( 1230 + ctx->dc->clk_mgr, 1231 + dc_state, 1232 + false); 1226 1233 1227 1234 dc_link_dp_set_test_pattern( 1228 1235 (struct dc_link *) link,
+2
drivers/gpu/drm/amd/display/modules/power/power_helpers.c
··· 841 841 isPSRSUSupported = false; 842 842 else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03) 843 843 isPSRSUSupported = false; 844 + else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x01) 845 + isPSRSUSupported = false; 844 846 else if (dpcd_caps->psr_info.force_psrsu_cap == 0x1) 845 847 isPSRSUSupported = true; 846 848 }
+10 -1
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1234 1234 } 1235 1235 } 1236 1236 1237 + static bool smu_is_workload_profile_available(struct smu_context *smu, 1238 + u32 profile) 1239 + { 1240 + if (profile >= PP_SMC_POWER_PROFILE_COUNT) 1241 + return false; 1242 + return smu->workload_map && smu->workload_map[profile].valid_mapping; 1243 + } 1244 + 1237 1245 static int smu_sw_init(void *handle) 1238 1246 { 1239 1247 struct amdgpu_device *adev = (struct amdgpu_device *)handle; ··· 1273 1265 smu->workload_prority[PP_SMC_POWER_PROFILE_COMPUTE] = 5; 1274 1266 smu->workload_prority[PP_SMC_POWER_PROFILE_CUSTOM] = 6; 1275 1267 1276 - if (smu->is_apu) 1268 + if (smu->is_apu || 1269 + !smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D)) 1277 1270 smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT]; 1278 1271 else 1279 1272 smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D];
+84 -48
drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu14_driver_if_v14_0.h
··· 25 25 #define SMU14_DRIVER_IF_V14_0_H 26 26 
27 27 //Increment this version if SkuTable_t or BoardTable_t change 
28 - #define PPTABLE_VERSION 0x18 
28 + #define PPTABLE_VERSION 0x1B 
29 29 
30 30 #define NUM_GFXCLK_DPM_LEVELS 16 
31 31 #define NUM_SOCCLK_DPM_LEVELS 8 
··· 145 145 } FEATURE_BTC_e; 
146 146 
147 147 // Debug Overrides Bitmask 
148 - #define DEBUG_OVERRIDE_DISABLE_VOLT_LINK_VCN_FCLK 0x00000001 
148 + #define DEBUG_OVERRIDE_NOT_USE 0x00000001 
149 149 #define DEBUG_OVERRIDE_DISABLE_VOLT_LINK_DCN_FCLK 0x00000002 
150 150 #define DEBUG_OVERRIDE_DISABLE_VOLT_LINK_MP0_FCLK 0x00000004 
151 151 #define DEBUG_OVERRIDE_DISABLE_VOLT_LINK_VCN_DCFCLK 0x00000008 
··· 161 161 #define DEBUG_OVERRIDE_ENABLE_SOC_VF_BRINGUP_MODE 0x00002000 
162 162 #define DEBUG_OVERRIDE_ENABLE_PER_WGP_RESIENCY 0x00004000 
163 163 #define DEBUG_OVERRIDE_DISABLE_MEMORY_VOLTAGE_SCALING 0x00008000 
164 + #define DEBUG_OVERRIDE_DFLL_BTC_FCW_LOG 0x00010000 
164 165 
165 166 // VR Mapping Bit Defines 
166 167 #define VR_MAPPING_VR_SELECT_MASK 0x01 
··· 391 390 typedef struct { 
392 391 EccInfo_t EccInfo[24]; 
393 392 } EccInfoTable_t; 
393 + 
394 + #define EPCS_HIGH_POWER 600 
395 + #define EPCS_NORMAL_POWER 450 
396 + #define EPCS_LOW_POWER 300 
397 + #define EPCS_SHORTED_POWER 150 
398 + #define EPCS_NO_BOOTUP 0 
399 + 
400 + typedef enum{ 
401 + EPCS_SHORTED_LIMIT, 
402 + EPCS_LOW_POWER_LIMIT, 
403 + EPCS_NORMAL_POWER_LIMIT, 
404 + EPCS_HIGH_POWER_LIMIT, 
405 + EPCS_NOT_CONFIGURED, 
406 + EPCS_STATUS_COUNT, 
407 + } EPCS_STATUS_e; 
394 408 
395 409 //D3HOT sequences 
396 410 typedef enum { 
··· 678 662 } PP_GRTAVFS_FW_SEP_FUSE_e; 
679 663 
680 664 #define PP_NUM_RTAVFS_PWL_ZONES 5 
681 - 
665 + #define PP_NUM_PSM_DIDT_PWL_ZONES 3 
682 666 
683 667 // VBIOS or PPLIB configures telemetry slope and offset. Only slope expected to be set for SVI3 
684 668 // Slope Q1.7, Offset Q1.2 
··· 762 746 uint16_t Padding; 
763 747 
764 748 //Frequency changes 
765 - int16_t GfxclkFmin; // MHz 
766 - int16_t GfxclkFmax; // MHz 
767 - uint16_t UclkFmin; // MHz 
768 - uint16_t UclkFmax; // MHz 
749 + int16_t GfxclkFoffset; 
750 + uint16_t Padding1; 
751 + uint16_t UclkFmin; 
752 + uint16_t UclkFmax; 
769 753 uint16_t FclkFmin; 
770 754 uint16_t FclkFmax; 
771 755 
··· 786 770 uint8_t MaxOpTemp; 
787 771 
788 772 uint8_t AdvancedOdModeEnabled; 
789 - uint8_t Padding1[3]; 
773 + uint8_t Padding2[3]; 
790 774 
791 775 uint16_t GfxVoltageFullCtrlMode; 
792 776 uint16_t SocVoltageFullCtrlMode; 
793 777 uint16_t GfxclkFullCtrlMode; 
794 778 uint16_t UclkFullCtrlMode; 
795 779 uint16_t FclkFullCtrlMode; 
796 - uint16_t Padding2; 
780 + uint16_t Padding3; 
797 781 
798 782 int16_t GfxEdc; 
799 783 int16_t GfxPccLimitControl; 
800 784 
801 - uint32_t Spare[10]; 
785 + uint16_t GfxclkFmaxVmax; 
786 + uint8_t GfxclkFmaxVmaxTemperature; 
787 + uint8_t Padding4[1]; 
788 + 
789 + uint32_t Spare[9]; 
802 790 uint32_t MmHubPadding[8]; // SMU internal use. Adding here instead of external as a workaround 
803 791 } OverDriveTable_t; 
··· 822 802 uint16_t VddSocVmax; 
823 803 
824 804 //gfxclk 
825 - int16_t GfxclkFmin; // MHz 
826 - int16_t GfxclkFmax; // MHz 
805 + int16_t GfxclkFoffset; 
806 + uint16_t Padding; 
827 807 //uclk 
828 808 uint16_t UclkFmin; // MHz 
829 809 uint16_t UclkFmax; // MHz 
··· 848 828 uint8_t FanZeroRpmEnable; 
849 829 //temperature 
850 830 uint8_t MaxOpTemp; 
851 - uint8_t Padding[2]; 
831 + uint8_t Padding1[2]; 
852 832 
853 833 //Full Ctrl 
854 834 uint16_t GfxVoltageFullCtrlMode; 
··· 859 839 //EDC 
860 840 int16_t GfxEdc; 
861 841 int16_t GfxPccLimitControl; 
862 - int16_t Padding1; 
842 + int16_t Padding2; 
863 843 
864 844 uint32_t Spare[5]; 
865 845 } OverDriveLimits_t; 
··· 1007 987 uint16_t BaseClockDc; 
1008 988 uint16_t GameClockDc; 
1009 989 uint16_t BoostClockDc; 
1010 - 
1011 - uint32_t Reserved[4]; 
990 + uint16_t MaxReportedClock; 
991 + uint16_t Padding; 
992 + uint32_t Reserved[3]; 
1012 993 } DriverReportedClocks_t; 
1013 994 
1014 995 typedef struct { 
··· 1153 1132 uint32_t DcModeMaxFreq [PPCLK_COUNT ]; // In MHz 
1154 1133 
1155 1134 uint16_t GfxclkAibFmax; 
1156 - uint16_t GfxclkFreqCap; 
1135 + uint16_t GfxDpmPadding; 
1157 1136 
1158 1137 //GFX Idle Power Settings 
1159 1138 uint16_t GfxclkFgfxoffEntry; // Entry in RLC stage (PLL), in Mhz 
··· 1193 1172 uint32_t DvoFmaxLowScaler; //Unitless float 
1194 1173 
1195 1174 // GFX DCS 
1196 - uint16_t DcsGfxOffVoltage; //Voltage in mV(Q2) applied to VDDGFX when entering DCS GFXOFF phase 
1197 - uint16_t PaddingDcs; 
1175 + uint32_t PaddingDcs; 
1198 1176 
1199 1177 uint16_t DcsMinGfxOffTime; //Minimum amount of time PMFW shuts GFX OFF as part of GFX DCS phase 
1200 1178 uint16_t DcsMaxGfxOffTime; //Maximum amount of time PMFW can shut GFX OFF as part of GFX DCS phase at a stretch. 
··· 1225 1205 uint16_t DalDcModeMaxUclkFreq; 
1226 1206 uint8_t PaddingsMem[2]; 
1227 1207 //FCLK Section 
1228 - uint16_t FclkDpmDisallowPstateFreq; //Frequency which FW will target when indicated that display config cannot support P-state. Set to 0 use FW calculated value 
1229 - uint16_t PaddingFclk; 
1208 + uint32_t PaddingFclk; 
1230 1209 
1231 1210 // Link DPM Settings 
1232 1211 uint8_t PcieGenSpeed[NUM_LINK_LEVELS]; ///< 0:PciE-gen1 1:PciE-gen2 2:PciE-gen3 3:PciE-gen4 4:PciE-gen5 
··· 1234 1215 
1235 1216 // SECTION: VDD_GFX AVFS 
1236 1217 uint8_t OverrideGfxAvfsFuses; 
1237 - uint8_t GfxAvfsPadding[3]; 
1218 + uint8_t GfxAvfsPadding[1]; 
1219 + uint16_t DroopGBStDev; 
1238 1220 
1239 1221 uint32_t SocHwRtAvfsFuses[PP_GRTAVFS_HW_FUSE_COUNT]; //new added for Soc domain 
1240 1222 uint32_t GfxL2HwRtAvfsFuses[PP_GRTAVFS_HW_FUSE_COUNT]; //see fusedoc for encoding 
1241 1223 //uint32_t GfxSeHwRtAvfsFuses[PP_GRTAVFS_HW_FUSE_COUNT]; 
1242 - uint32_t spare_HwRtAvfsFuses[PP_GRTAVFS_HW_FUSE_COUNT]; 
1224 + 
1225 + uint16_t PsmDidt_Vcross[PP_NUM_PSM_DIDT_PWL_ZONES-1]; 
1226 + uint32_t PsmDidt_StaticDroop_A[PP_NUM_PSM_DIDT_PWL_ZONES]; 
1227 + uint32_t PsmDidt_StaticDroop_B[PP_NUM_PSM_DIDT_PWL_ZONES]; 
1228 + uint32_t PsmDidt_DynDroop_A[PP_NUM_PSM_DIDT_PWL_ZONES]; 
1229 + uint32_t PsmDidt_DynDroop_B[PP_NUM_PSM_DIDT_PWL_ZONES]; 
1230 + uint32_t spare_HwRtAvfsFuses[19]; 
1243 1231 
1244 1232 uint32_t SocCommonRtAvfs[PP_GRTAVFS_FW_COMMON_FUSE_COUNT]; 
1245 1233 uint32_t GfxCommonRtAvfs[PP_GRTAVFS_FW_COMMON_FUSE_COUNT]; 
··· 1272 1246 uint32_t dGbV_dT_vmin; 
1273 1247 uint32_t dGbV_dT_vmax; 
1274 1248 
1275 - //Unused: PMFW-9370 
1276 - uint32_t V2F_vmin_range_low; 
1277 - uint32_t V2F_vmin_range_high; 
1278 - uint32_t V2F_vmax_range_low; 
1279 - uint32_t V2F_vmax_range_high; 
1249 + uint32_t PaddingV2F[4]; 
1280 1250 
1281 1251 AvfsDcBtcParams_t DcBtcGfxParams; 
1282 1252 QuadraticInt_t SSCurve_GFX; 
··· 1349 1327 uint16_t PsmDidtReleaseTimer; 
1350 1328 uint32_t PsmDidtStallPattern; //Will be written to both pattern 1 and didt_static_level_prog 
1351 1329 // CAC EDC 
1352 - uint32_t Leakage_C0; // in IEEE float 
1353 - uint32_t Leakage_C1; // in IEEE float 
1354 - uint32_t Leakage_C2; // in IEEE float 
1355 - uint32_t Leakage_C3; // in IEEE float 
1356 - uint32_t Leakage_C4; // in IEEE float 
1357 - uint32_t Leakage_C5; // in IEEE float 
1358 - uint32_t GFX_CLK_SCALAR; // in IEEE float 
1359 - uint32_t GFX_CLK_INTERCEPT; // in IEEE float 
1360 - uint32_t GFX_CAC_M; // in IEEE float 
1361 - uint32_t GFX_CAC_B; // in IEEE float 
1362 - uint32_t VDD_GFX_CurrentLimitGuardband; // in IEEE float 
1363 - uint32_t DynToTotalCacScalar; // in IEEE 
1330 + uint32_t CacEdcCacLeakageC0; 
1331 + uint32_t CacEdcCacLeakageC1; 
1332 + uint32_t CacEdcCacLeakageC2; 
1333 + uint32_t CacEdcCacLeakageC3; 
1334 + uint32_t CacEdcCacLeakageC4; 
1335 + uint32_t CacEdcCacLeakageC5; 
1336 + uint32_t CacEdcGfxClkScalar; 
1337 + uint32_t CacEdcGfxClkIntercept; 
1338 + uint32_t CacEdcCac_m; 
1339 + uint32_t CacEdcCac_b; 
1340 + uint32_t CacEdcCurrLimitGuardband; 
1341 + uint32_t CacEdcDynToTotalCacRatio; 
1364 1342 // GFX EDC XVMIN 
1365 1343 uint32_t XVmin_Gfx_EdcThreshScalar; 
1366 1344 uint32_t XVmin_Gfx_EdcEnableFreq; 
··· 1489 1467 uint8_t VddqOffEnabled; 
1490 1468 uint8_t PaddingUmcFlags[2]; 
1491 1469 
1492 - uint32_t PostVoltageSetBacoDelay; // in microseconds. Amount of time FW will wait after power good is established or PSI0 command is issued 
1470 + uint32_t Paddign1; 
1493 1471 uint32_t BacoEntryDelay; // in milliseconds. Amount of time FW will wait to trigger BACO entry after receiving entry notification from OS 
1494 1472 
1495 1473 uint8_t FuseWritePowerMuxPresent; 
··· 1552 1530 int16_t FuzzyFan_ErrorSetDelta; 
1553 1531 int16_t FuzzyFan_ErrorRateSetDelta; 
1554 1532 int16_t FuzzyFan_PwmSetDelta; 
1555 - uint16_t FuzzyFan_Reserved; 
1533 + uint16_t FanPadding2; 
1556 1534 
1557 1535 uint16_t FwCtfLimit[TEMP_COUNT]; 
1558 1536 
··· 1569 1547 uint16_t FanSpare[1]; 
1570 1548 uint8_t FanIntakeSensorSupport; 
1571 1549 uint8_t FanIntakePadding; 
1572 - uint32_t FanAmbientPerfBoostThreshold; 
1573 1550 uint32_t FanSpare2[12]; 
1551 + 
1552 + uint32_t ODFeatureCtrlMask; 
1574 1553 
1575 1554 uint16_t TemperatureLimit_Hynix; // In degrees Celsius. Memory temperature limit associated with Hynix 
1576 1555 uint16_t TemperatureLimit_Micron; // In degrees Celsius. Memory temperature limit associated with Micron 
··· 1660 1637 uint16_t AverageDclk0Frequency ; 
1661 1638 uint16_t AverageVclk1Frequency ; 
1662 1639 uint16_t AverageDclk1Frequency ; 
1663 - uint16_t PCIeBusy ; 
1640 + uint16_t AveragePCIeBusy ; 
1664 1641 uint16_t dGPU_W_MAX ; 
1665 1642 uint16_t padding ; 
1666 1643 
··· 1688 1665 
1689 1666 uint16_t AverageGfxActivity ; 
1690 1667 uint16_t AverageUclkActivity ; 
1691 - uint16_t Vcn0ActivityPercentage ; 
1668 + uint16_t AverageVcn0ActivityPercentage; 
1692 1669 uint16_t Vcn1ActivityPercentage ; 
1693 1670 
1694 1671 uint32_t EnergyAccumulator; 
1695 1672 uint16_t AverageSocketPower; 
1696 - uint16_t MovingAverageTotalBoardPower; 
1673 + uint16_t AverageTotalBoardPower; 
1697 1674 
1698 1675 uint16_t AvgTemperature[TEMP_COUNT]; 
1699 1676 uint16_t AvgTemperatureFanIntake; 
··· 1707 1684 
1708 1685 
1709 1686 uint8_t ThrottlingPercentage[THROTTLER_COUNT]; 
1710 - uint8_t padding1[3]; 
1687 + uint8_t VmaxThrottlingPercentage; 
1688 + uint8_t padding1[2]; 
1711 1689 
1712 1690 //metrics for D3hot entry/exit and driver ARM msgs 
1713 1691 uint32_t D3HotEntryCountPerMode[D3HOT_SEQUENCE_COUNT]; 
··· 1717 1693 
1718 1694 uint16_t ApuSTAPMSmartShiftLimit; 
1719 1695 uint16_t ApuSTAPMLimit; 
1720 - uint16_t MovingAvgApuSocketPower; 
1696 + uint16_t AvgApuSocketPower; 
1721 1697 
1722 1698 uint16_t AverageUclkActivity_MAX; 
1723 1699 
··· 1847 1823 #define TABLE_TRANSFER_FAILED 0xFF 
1848 1824 #define TABLE_TRANSFER_PENDING 0xAB 
1849 1825 
1826 + #define TABLE_PPT_FAILED 0x100 
1827 + #define TABLE_TDC_FAILED 0x200 
1828 + #define TABLE_TEMP_FAILED 0x400 
1829 + #define TABLE_FAN_TARGET_TEMP_FAILED 0x800 
1830 + #define TABLE_FAN_STOP_TEMP_FAILED 0x1000 
1831 + #define TABLE_FAN_START_TEMP_FAILED 0x2000 
1832 + #define TABLE_FAN_PWM_MIN_FAILED 0x4000 
1833 + #define TABLE_ACOUSTIC_TARGET_RPM_FAILED 0x8000 
1834 + #define TABLE_ACOUSTIC_LIMIT_RPM_FAILED 0x10000 
1835 + #define TABLE_MGPU_ACOUSTIC_TARGET_RPM_FAILED 0x20000 
1836 + 
1850 1837 // Table types 
1851 1838 #define TABLE_PPTABLE 0 
1852 1839 #define TABLE_COMBO_PPTABLE 1 
··· 1884 1849 #define IH_INTERRUPT_CONTEXT_ID_THERMAL_THROTTLING 0x7 
1885 1850 #define IH_INTERRUPT_CONTEXT_ID_FAN_ABNORMAL 0x8 
1886 1851 #define IH_INTERRUPT_CONTEXT_ID_FAN_RECOVERY 0x9 
1852 + #define IH_INTERRUPT_CONTEXT_ID_DYNAMIC_TABLE 0xA 
1887 1853 
1888 1854 #endif
+1 -1
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v14_0.h
··· 28 28 #define SMU14_DRIVER_IF_VERSION_INV 0xFFFFFFFF 29 29 #define SMU14_DRIVER_IF_VERSION_SMU_V14_0_0 0x7 30 30 #define SMU14_DRIVER_IF_VERSION_SMU_V14_0_1 0x6 31 - #define SMU14_DRIVER_IF_VERSION_SMU_V14_0_2 0x26 31 + #define SMU14_DRIVER_IF_VERSION_SMU_V14_0_2 0x2E 32 32 33 33 #define FEATURE_MASK(feature) (1ULL << feature) 34 34
+24 -42
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
··· 1077 1077 
1078 1078 switch (od_feature_bit) { 
1079 1079 case PP_OD_FEATURE_GFXCLK_FMIN: 
1080 - od_min_setting = overdrive_lowerlimits->GfxclkFmin; 
1081 - od_max_setting = overdrive_upperlimits->GfxclkFmin; 
1082 - break; 
1083 1080 case PP_OD_FEATURE_GFXCLK_FMAX: 
1084 - od_min_setting = overdrive_lowerlimits->GfxclkFmax; 
1085 - od_max_setting = overdrive_upperlimits->GfxclkFmax; 
1081 + od_min_setting = overdrive_lowerlimits->GfxclkFoffset; 
1082 + od_max_setting = overdrive_upperlimits->GfxclkFoffset; 
1086 1083 break; 
1087 1084 case PP_OD_FEATURE_UCLK_FMIN: 
1088 1085 od_min_setting = overdrive_lowerlimits->UclkFmin; 
··· 1266 1269 PP_OD_FEATURE_GFXCLK_BIT)) 
1267 1270 break; 
1268 1271 
1269 - size += sysfs_emit_at(buf, size, "OD_SCLK:\n"); 
1270 - size += sysfs_emit_at(buf, size, "0: %uMhz\n1: %uMhz\n", 
1271 - od_table->OverDriveTable.GfxclkFmin, 
1272 - od_table->OverDriveTable.GfxclkFmax); 
1272 + PPTable_t *pptable = smu->smu_table.driver_pptable; 
1273 + const OverDriveLimits_t * const overdrive_upperlimits = 
1274 + &pptable->SkuTable.OverDriveLimitsBasicMax; 
1275 + const OverDriveLimits_t * const overdrive_lowerlimits = 
1276 + &pptable->SkuTable.OverDriveLimitsBasicMin; 
1277 + 
1278 + size += sysfs_emit_at(buf, size, "OD_SCLK_OFFSET:\n"); 
1279 + size += sysfs_emit_at(buf, size, "0: %dMhz\n1: %uMhz\n", 
1280 + overdrive_lowerlimits->GfxclkFoffset, 
1281 + overdrive_upperlimits->GfxclkFoffset); 
1273 1282 break; 
1274 1283 
1275 1284 case SMU_OD_MCLK: 
··· 1417 1414 PP_OD_FEATURE_GFXCLK_FMAX, 
1418 1415 NULL, 
1419 1416 &max_value); 
1420 - size += sysfs_emit_at(buf, size, "SCLK: %7uMhz %10uMhz\n", 
1417 + size += sysfs_emit_at(buf, size, "SCLK_OFFSET: %7dMhz %10uMhz\n", 
1421 1418 min_value, max_value); 
1422 1419 } 
1423 1420 
··· 1799 1796 DpmActivityMonitorCoeffInt_t *activity_monitor = 
1800 1797 &(activity_monitor_external.DpmActivityMonitorCoeffInt); 
1801 1798 int workload_type, ret = 0; 
1802 - 
1799 + uint32_t current_profile_mode = smu->power_profile_mode; 
1803 1800 smu->power_profile_mode = input[size]; 
1804 1801 
1805 1802 if (smu->power_profile_mode >= PP_SMC_POWER_PROFILE_COUNT) { 
··· 1856 1853 return ret; 
1857 1854 } 
1858 1855 } 
1856 + 
1857 + if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE) 
1858 + smu_v14_0_deep_sleep_control(smu, false); 
1859 + else if (current_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE) 
1860 + smu_v14_0_deep_sleep_control(smu, true); 
1859 1861 
1860 1862 /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */ 
1861 1863 workload_type = smu_cmn_to_asic_specific_index(smu, 
··· 2166 2158 
2167 2159 gpu_metrics->average_gfx_activity = metrics->AverageGfxActivity; 
2168 2160 gpu_metrics->average_umc_activity = metrics->AverageUclkActivity; 
2169 - gpu_metrics->average_mm_activity = max(metrics->Vcn0ActivityPercentage, 
2161 + gpu_metrics->average_mm_activity = max(metrics->AverageVcn0ActivityPercentage, 
2170 2162 metrics->Vcn1ActivityPercentage); 
2171 2163 
2172 2164 gpu_metrics->average_socket_power = metrics->AverageSocketPower; 
··· 2225 2217 { 
2226 2218 struct amdgpu_device *adev = smu->adev; 
2227 2219 
2228 - dev_dbg(adev->dev, "OD: Gfxclk: (%d, %d)\n", od_table->OverDriveTable.GfxclkFmin, 
2229 - od_table->OverDriveTable.GfxclkFmax); 
2220 + dev_dbg(adev->dev, "OD: Gfxclk offset: (%d)\n", od_table->OverDriveTable.GfxclkFoffset); 
2230 2221 dev_dbg(adev->dev, "OD: Uclk: (%d, %d)\n", od_table->OverDriveTable.UclkFmin, 
2231 2222 od_table->OverDriveTable.UclkFmax); 
2232 2223 } 
··· 2316 2309 memcpy(user_od_table, 
2317 2310 boot_od_table, 
2318 2311 sizeof(OverDriveTableExternal_t)); 
2319 - user_od_table->OverDriveTable.GfxclkFmin = 
2320 - user_od_table_bak.OverDriveTable.GfxclkFmin; 
2321 - user_od_table->OverDriveTable.GfxclkFmax = 
2322 - user_od_table_bak.OverDriveTable.GfxclkFmax; 
2312 + user_od_table->OverDriveTable.GfxclkFoffset = 
2313 + user_od_table_bak.OverDriveTable.GfxclkFoffset; 
2323 2314 user_od_table->OverDriveTable.UclkFmin = 
2324 2315 user_od_table_bak.OverDriveTable.UclkFmin; 
2325 2316 user_od_table->OverDriveTable.UclkFmax = 
··· 2446 2441 } 
2447 2442 
2448 2443 switch (input[i]) { 
2449 - case 0: 
2450 - smu_v14_0_2_get_od_setting_limits(smu, 
2451 - PP_OD_FEATURE_GFXCLK_FMIN, 
2452 - &minimum, 
2453 - &maximum); 
2454 - if (input[i + 1] < minimum || 
2455 - input[i + 1] > maximum) { 
2456 - dev_info(adev->dev, "GfxclkFmin (%ld) must be within [%u, %u]!\n", 
2457 - input[i + 1], minimum, maximum); 
2458 - return -EINVAL; 
2459 - } 
2460 - 
2461 - od_table->OverDriveTable.GfxclkFmin = input[i + 1]; 
2462 - od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_GFXCLK_BIT; 
2463 - break; 
2464 - 
2465 2444 case 1: 
2466 2445 smu_v14_0_2_get_od_setting_limits(smu, 
2467 2446 PP_OD_FEATURE_GFXCLK_FMAX, 
··· 2458 2469 return -EINVAL; 
2459 2470 } 
2460 2471 
2461 - od_table->OverDriveTable.GfxclkFmax = input[i + 1]; 
2472 + od_table->OverDriveTable.GfxclkFoffset = input[i + 1]; 
2462 2473 od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_GFXCLK_BIT; 
2463 2474 break; 
··· 2469 2480 } 
2470 2481 } 
2471 2482 
2472 - if (od_table->OverDriveTable.GfxclkFmin > od_table->OverDriveTable.GfxclkFmax) { 
2473 - dev_err(adev->dev, 
2474 - "Invalid setting: GfxclkFmin(%u) is bigger than GfxclkFmax(%u)\n", 
2475 - (uint32_t)od_table->OverDriveTable.GfxclkFmin, 
2476 - (uint32_t)od_table->OverDriveTable.GfxclkFmax); 
2477 - return -EINVAL; 
2478 - } 
2479 2483 break; 
2480 2484 
2481 2485 case PP_OD_EDIT_MCLK_VDDC_TABLE:
+2 -1
drivers/gpu/drm/bridge/aux-bridge.c
··· 58 58 adev->id = ret; 59 59 adev->name = "aux_bridge"; 60 60 adev->dev.parent = parent; 61 - adev->dev.of_node = of_node_get(parent->of_node); 62 61 adev->dev.release = drm_aux_bridge_release; 62 + 63 + device_set_of_node_from_dev(&adev->dev, parent); 63 64 64 65 ret = auxiliary_device_init(adev); 65 66 if (ret) {
+1
drivers/gpu/drm/bridge/tc358767.c
··· 2391 2391 if (tc->pre_emphasis[0] < 0 || tc->pre_emphasis[0] > 2 || 2392 2392 tc->pre_emphasis[1] < 0 || tc->pre_emphasis[1] > 2) { 2393 2393 dev_err(dev, "Incorrect Pre-Emphasis setting, use either 0=0dB 1=3.5dB 2=6dB\n"); 2394 + of_node_put(node); 2394 2395 return -EINVAL; 2395 2396 } 2396 2397 }
+1 -2
drivers/gpu/drm/i915/Kconfig
··· 123 123 config DRM_I915_GVT_KVMGT 124 124 tristate "Enable KVM host support Intel GVT-g graphics virtualization" 125 125 depends on DRM_I915 126 - depends on X86 126 + depends on KVM_X86 127 127 depends on 64BIT 128 - depends on KVM 129 128 depends on VFIO 130 129 select DRM_I915_GVT 131 130 select KVM_EXTERNAL_WRITE_TRACKING
+1 -1
drivers/gpu/drm/xe/xe_device.c
··· 890 890 spin_lock(&gt->global_invl_lock); 891 891 xe_mmio_write32(gt, XE2_GLOBAL_INVAL, 0x1); 892 892 893 - if (xe_mmio_wait32(gt, XE2_GLOBAL_INVAL, 0x1, 0x0, 150, NULL, true)) 893 + if (xe_mmio_wait32(gt, XE2_GLOBAL_INVAL, 0x1, 0x0, 500, NULL, true)) 894 894 xe_gt_err_once(gt, "Global invalidation timeout\n"); 895 895 spin_unlock(&gt->global_invl_lock); 896 896
+9 -3
drivers/gpu/drm/xe/xe_force_wake.c
··· 115 115 XE_FORCE_WAKE_ACK_TIMEOUT_MS * USEC_PER_MSEC, 116 116 &value, true); 117 117 if (ret) 118 - xe_gt_notice(gt, "Force wake domain %d failed to ack %s (%pe) reg[%#x] = %#x\n", 119 - domain->id, str_wake_sleep(wake), ERR_PTR(ret), 120 - domain->reg_ack.addr, value); 118 + xe_gt_err(gt, "Force wake domain %d failed to ack %s (%pe) reg[%#x] = %#x\n", 119 + domain->id, str_wake_sleep(wake), ERR_PTR(ret), 120 + domain->reg_ack.addr, value); 121 + if (value == ~0) { 122 + xe_gt_err(gt, 123 + "Force wake domain %d: %s. MMIO unreliable (forcewake register returns 0xFFFFFFFF)!\n", 124 + domain->id, str_wake_sleep(wake)); 125 + ret = -EIO; 126 + } 121 127 122 128 return ret; 123 129 }
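The xe_force_wake.c hunk above treats an all-ones ack readback as a sign that MMIO itself is gone, since PCIe reads from a dropped-off device complete as 0xFFFFFFFF. A minimal toy sketch of that check (hypothetical helper, -5 standing in for -EIO; not the driver's actual code):

```c
#include <assert.h>
#include <stdint.h>

/* Toy mirror of the new forcewake ack check: an all-ones readback
 * overrides whatever the timeout wait reported, because the register
 * value itself can no longer be trusted. */
static int toy_check_ack(uint32_t ack_value, int wait_ret)
{
	if (ack_value == UINT32_MAX)
		return -5;	/* stand-in for -EIO: MMIO unreliable */
	return wait_ret;	/* 0 on success, or the wait's own error */
}
```

The driver keeps the timeout error message and only escalates to -EIO when the readback is all-ones, so a slow-but-alive domain and a dead bus are reported differently.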
+18
drivers/gpu/drm/xe/xe_guc_ct.c
··· 898 898 ret = wait_event_timeout(ct->g2h_fence_wq, g2h_fence.done, HZ); 899 899 900 900 /* 901 + * Occasionally it is seen that the G2H worker starts running after a delay of more than 902 + * a second even after being queued and activated by the Linux workqueue subsystem. This 903 + * leads to a G2H timeout error. The root cause of the issue lies with the scheduling latency of 904 + * the Lunarlake Hybrid CPU. The issue disappears if we disable the Lunarlake atom cores from BIOS 905 + * and this is beyond xe kmd. 906 + * 907 + * TODO: Drop this change once the workqueue scheduling delay issue is fixed on LNL Hybrid CPU. 908 + */ 909 + if (!ret) { 910 + flush_work(&ct->g2h_worker); 911 + if (g2h_fence.done) { 912 + xe_gt_warn(gt, "G2H fence %u, action %04x, done\n", 913 + g2h_fence.seqno, action[0]); 914 + ret = 1; 915 + } 916 + } 917 + 918 + /* 901 919 * Ensure we serialize with completion side to prevent UAF with fence going out of scope on 902 920 * the stack, since we have no clue if it will fire after the timeout before we can erase 903 921 * from the xa. Also we have some dependent loads and stores below for which we need the
+12 -2
drivers/gpu/drm/xe/xe_guc_submit.c
··· 1726 1726 1727 1727 mutex_lock(&guc->submission_state.lock); 1728 1728 1729 - xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) 1729 + xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) { 1730 + /* Prevent redundant attempts to stop parallel queues */ 1731 + if (q->guc->id != index) 1732 + continue; 1733 + 1730 1734 guc_exec_queue_stop(guc, q); 1735 + } 1731 1736 1732 1737 mutex_unlock(&guc->submission_state.lock); 1733 1738 ··· 1770 1765 1771 1766 mutex_lock(&guc->submission_state.lock); 1772 1767 atomic_dec(&guc->submission_state.stopped); 1773 - xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) 1768 + xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) { 1769 + /* Prevent redundant attempts to start parallel queues */ 1770 + if (q->guc->id != index) 1771 + continue; 1772 + 1774 1773 guc_exec_queue_start(q); 1774 + } 1775 1775 mutex_unlock(&guc->submission_state.lock); 1776 1776 1777 1777 wake_up_all(&guc->ct.wq);
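The xe_guc_submit.c hunk above skips lookup entries whose index differs from the queue's own id, because a parallel exec queue occupies several consecutive slots that all point at the same object. A toy model of that dedup guard, using a plain array in place of the driver's xarray (names are hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Toy exec queue: a parallel queue with N children occupies N
 * consecutive lookup slots, and its canonical id is its first slot. */
struct toy_queue {
	int id;		/* canonical (first) index of the queue */
	int stopped;	/* how many times it was processed      */
};

static int toy_stop_all(struct toy_queue **lookup, int n)
{
	int index, calls = 0;

	for (index = 0; index < n; index++) {
		struct toy_queue *q = lookup[index];

		if (!q)
			continue;
		/* Prevent redundant attempts to stop parallel queues */
		if (q->id != index)
			continue;
		q->stopped++;
		calls++;
	}
	return calls;
}
```

Without the guard, a three-wide parallel queue would be stopped three times per pass; with it, each queue is acted on exactly once.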
+2 -1
drivers/gpu/drm/xe/xe_sync.c
··· 54 54 { 55 55 struct xe_user_fence *ufence; 56 56 u64 __user *ptr = u64_to_user_ptr(addr); 57 + u64 __maybe_unused prefetch_val; 57 58 58 - if (!access_ok(ptr, sizeof(*ptr))) 59 + if (get_user(prefetch_val, ptr)) 59 60 return ERR_PTR(-EFAULT); 60 61 61 62 ufence = kzalloc(sizeof(*ufence), GFP_KERNEL);
+23 -1
drivers/md/md.c
··· 546 546 return 0; 547 547 } 548 548 549 + /* 550 + * The only difference from bio_chain_endio() is that the current 551 + * bi_status of bio does not affect the bi_status of parent. 552 + */ 553 + static void md_end_flush(struct bio *bio) 554 + { 555 + struct bio *parent = bio->bi_private; 556 + 557 + /* 558 + * If any flush I/O fails before a power failure, 559 + * disk data may be lost. 560 + */ 561 + if (bio->bi_status) 562 + pr_err("md: %pg flush io error %d\n", bio->bi_bdev, 563 + blk_status_to_errno(bio->bi_status)); 564 + 565 + bio_put(bio); 566 + bio_endio(parent); 567 + } 568 + 549 569 bool md_flush_request(struct mddev *mddev, struct bio *bio) 550 570 { 551 571 struct md_rdev *rdev; ··· 585 565 new = bio_alloc_bioset(rdev->bdev, 0, 586 566 REQ_OP_WRITE | REQ_PREFLUSH, GFP_NOIO, 587 567 &mddev->bio_set); 588 - bio_chain(new, bio); 568 + new->bi_private = bio; 569 + new->bi_end_io = md_end_flush; 570 + bio_inc_remaining(bio); 589 571 submit_bio(new); 590 572 
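The md.c hunk above replaces bio_chain() with a manual remaining-counter so a failed flush is logged but no longer clobbers the parent's bi_status. A toy sketch of that completion pattern with plain integers instead of real bios (all names hypothetical):

```c
#include <assert.h>

/* Toy stand-ins for the bio fields the pattern relies on. */
struct toy_bio {
	int remaining;	/* like bio->__bi_remaining */
	int status;	/* like bio->bi_status      */
	int completed;	/* set once the bio ends    */
};

/* Mirrors bio_inc_remaining(): one more completion must arrive. */
static void toy_inc_remaining(struct toy_bio *parent)
{
	parent->remaining++;
}

/* Mirrors bio_endio(): complete only when every holder has finished. */
static void toy_endio(struct toy_bio *bio)
{
	if (--bio->remaining == 0)
		bio->completed = 1;
}

/* Mirrors md_end_flush(): drop one reference on the parent while
 * deliberately NOT copying the child's status into it. */
static void toy_end_flush(struct toy_bio *child, struct toy_bio *parent)
{
	(void)child;	/* child->status is only logged, never propagated */
	toy_endio(parent);
}
```

The submitter holds the initial reference, each cloned flush adds one, and the parent completes with its own status once all of them (plus the submitter) have dropped theirs.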
+5 -2
drivers/md/raid10.c
··· 4061 4061 } 4062 4062 4063 4063 if (!mddev_is_dm(conf->mddev)) { 4064 - ret = raid10_set_queue_limits(mddev); 4065 - if (ret) 4064 + int err = raid10_set_queue_limits(mddev); 4065 + 4066 + if (err) { 4067 + ret = err; 4066 4068 goto out_free_conf; 4069 + } 4067 4070 } 4068 4071 4069 4072 /* need to check that every block has at least one working mirror */
+3 -1
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
··· 1293 1293 1294 1294 /* save the buffer addr until the last read operation */ 1295 1295 *save_buf = read_buf; 1296 + } 1296 1297 1297 - /* get data ready for the first time to read */ 1298 + /* get data ready for the first time to read */ 1299 + if (!*ppos) { 1298 1300 ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd, 1299 1301 read_buf, hns3_dbg_cmd[index].buf_len); 1300 1302 if (ret)
+58 -1
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 11 11 #include <linux/irq.h> 12 12 #include <linux/ip.h> 13 13 #include <linux/ipv6.h> 14 + #include <linux/iommu.h> 14 15 #include <linux/module.h> 15 16 #include <linux/pci.h> 16 17 #include <linux/skbuff.h> ··· 380 379 381 380 #define HNS3_INVALID_PTYPE \ 382 381 ARRAY_SIZE(hns3_rx_ptype_tbl) 382 + 383 + static void hns3_dma_map_sync(struct device *dev, unsigned long iova) 384 + { 385 + struct iommu_domain *domain = iommu_get_domain_for_dev(dev); 386 + struct iommu_iotlb_gather iotlb_gather; 387 + size_t granule; 388 + 389 + if (!domain || !iommu_is_dma_domain(domain)) 390 + return; 391 + 392 + granule = 1 << __ffs(domain->pgsize_bitmap); 393 + iova = ALIGN_DOWN(iova, granule); 394 + iotlb_gather.start = iova; 395 + iotlb_gather.end = iova + granule - 1; 396 + iotlb_gather.pgsize = granule; 397 + 398 + iommu_iotlb_sync(domain, &iotlb_gather); 399 + } 383 400 384 401 static irqreturn_t hns3_irq_handle(int irq, void *vector) 385 402 { ··· 1051 1032 static void hns3_init_tx_spare_buffer(struct hns3_enet_ring *ring) 1052 1033 { 1053 1034 u32 alloc_size = ring->tqp->handle->kinfo.tx_spare_buf_size; 1035 + struct net_device *netdev = ring_to_netdev(ring); 1036 + struct hns3_nic_priv *priv = netdev_priv(netdev); 1054 1037 struct hns3_tx_spare *tx_spare; 1055 1038 struct page *page; 1056 1039 dma_addr_t dma; ··· 1094 1073 tx_spare->buf = page_address(page); 1095 1074 tx_spare->len = PAGE_SIZE << order; 1096 1075 ring->tx_spare = tx_spare; 1076 + ring->tx_copybreak = priv->tx_copybreak; 1097 1077 return; 1098 1078 1099 1079 dma_mapping_error: ··· 1746 1724 unsigned int type) 1747 1725 { 1748 1726 struct hns3_desc_cb *desc_cb = &ring->desc_cb[ring->next_to_use]; 1727 + struct hnae3_handle *handle = ring->tqp->handle; 1749 1728 struct device *dev = ring_to_dev(ring); 1729 + struct hnae3_ae_dev *ae_dev; 1750 1730 unsigned int size; 1751 1731 dma_addr_t dma; 1752 1732 ··· 1779 1755 hns3_ring_stats_update(ring, sw_err_cnt); 1780 1756 return -ENOMEM; 1781 1757 } 1758 + 
1759 + /* Add a SYNC command to sync io-pgtable to avoid errors in pgtable 1760 + * prefetch 1761 + */ 1762 + ae_dev = hns3_get_ae_dev(handle); 1763 + if (ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V3) 1764 + hns3_dma_map_sync(dev, dma); 1782 1765 1783 1766 desc_cb->priv = priv; 1784 1767 desc_cb->length = size; ··· 2483 2452 return ret; 2484 2453 } 2485 2454 2486 - netdev->features = features; 2487 2455 return 0; 2488 2456 } 2489 2457 ··· 4898 4868 devm_kfree(&pdev->dev, priv->tqp_vector); 4899 4869 } 4900 4870 4871 + static void hns3_update_tx_spare_buf_config(struct hns3_nic_priv *priv) 4872 + { 4873 + #define HNS3_MIN_SPARE_BUF_SIZE (2 * 1024 * 1024) 4874 + #define HNS3_MAX_PACKET_SIZE (64 * 1024) 4875 + 4876 + struct iommu_domain *domain = iommu_get_domain_for_dev(priv->dev); 4877 + struct hnae3_ae_dev *ae_dev = hns3_get_ae_dev(priv->ae_handle); 4878 + struct hnae3_handle *handle = priv->ae_handle; 4879 + 4880 + if (ae_dev->dev_version < HNAE3_DEVICE_VERSION_V3) 4881 + return; 4882 + 4883 + if (!(domain && iommu_is_dma_domain(domain))) 4884 + return; 4885 + 4886 + priv->min_tx_copybreak = HNS3_MAX_PACKET_SIZE; 4887 + priv->min_tx_spare_buf_size = HNS3_MIN_SPARE_BUF_SIZE; 4888 + 4889 + if (priv->tx_copybreak < priv->min_tx_copybreak) 4890 + priv->tx_copybreak = priv->min_tx_copybreak; 4891 + if (handle->kinfo.tx_spare_buf_size < priv->min_tx_spare_buf_size) 4892 + handle->kinfo.tx_spare_buf_size = priv->min_tx_spare_buf_size; 4893 + } 4894 + 4901 4895 static void hns3_ring_get_cfg(struct hnae3_queue *q, struct hns3_nic_priv *priv, 4902 4896 unsigned int ring_type) 4903 4897 { ··· 5155 5101 int i, j; 5156 5102 int ret; 5157 5103 5104 + hns3_update_tx_spare_buf_config(priv); 5158 5105 for (i = 0; i < ring_num; i++) { 5159 5106 ret = hns3_alloc_ring_memory(&priv->ring[i]); 5160 5107 if (ret) { ··· 5360 5305 priv->ae_handle = handle; 5361 5306 priv->tx_timeout_count = 0; 5362 5307 priv->max_non_tso_bd_num = ae_dev->dev_specs.max_non_tso_bd_num; 5308 + 
priv->min_tx_copybreak = 0; 5309 + priv->min_tx_spare_buf_size = 0; 5363 5310 set_bit(HNS3_NIC_STATE_DOWN, &priv->state); 5364 5311 5365 5312 handle->msg_enable = netif_msg_init(debug, DEFAULT_MSG_LEVEL);
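The hns3_dma_map_sync() helper added above derives the IOMMU granule from the lowest set bit of the domain's pgsize_bitmap and syncs the single granule containing the mapped iova. A toy sketch of just that address math (hypothetical helpers, no IOMMU calls):

```c
#include <assert.h>
#include <stdint.h>

/* The smallest supported IOMMU page size is the lowest set bit of
 * pgsize_bitmap; isolating it is equivalent to 1 << __ffs(bitmap). */
static uint64_t toy_granule(uint64_t pgsize_bitmap)
{
	return pgsize_bitmap & -pgsize_bitmap;
}

/* Mirrors the gather-range setup: align the iova down to a granule
 * boundary and cover exactly one granule, inclusive of the end byte. */
static void toy_sync_range(uint64_t pgsize_bitmap, uint64_t iova,
			   uint64_t *start, uint64_t *end)
{
	uint64_t granule = toy_granule(pgsize_bitmap);

	iova &= ~(granule - 1);		/* ALIGN_DOWN(iova, granule) */
	*start = iova;
	*end = iova + granule - 1;
}
```

With a 4 KiB minimum page, any iova inside a page syncs that whole page and nothing beyond it.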
+2
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
··· 596 596 struct hns3_enet_coalesce rx_coal; 597 597 u32 tx_copybreak; 598 598 u32 rx_copybreak; 599 + u32 min_tx_copybreak; 600 + u32 min_tx_spare_buf_size; 599 601 }; 600 602 601 603 union l3_hdr_info {
+33
drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
··· 1933 1933 return ret; 1934 1934 } 1935 1935 1936 + static int hns3_check_tx_copybreak(struct net_device *netdev, u32 copybreak) 1937 + { 1938 + struct hns3_nic_priv *priv = netdev_priv(netdev); 1939 + 1940 + if (copybreak < priv->min_tx_copybreak) { 1941 + netdev_err(netdev, "tx copybreak %u should be no less than %u!\n", 1942 + copybreak, priv->min_tx_copybreak); 1943 + return -EINVAL; 1944 + } 1945 + return 0; 1946 + } 1947 + 1948 + static int hns3_check_tx_spare_buf_size(struct net_device *netdev, u32 buf_size) 1949 + { 1950 + struct hns3_nic_priv *priv = netdev_priv(netdev); 1951 + 1952 + if (buf_size < priv->min_tx_spare_buf_size) { 1953 + netdev_err(netdev, 1954 + "tx spare buf size %u should be no less than %u!\n", 1955 + buf_size, priv->min_tx_spare_buf_size); 1956 + return -EINVAL; 1957 + } 1958 + return 0; 1959 + } 1960 + 1936 1961 static int hns3_set_tunable(struct net_device *netdev, 1937 1962 const struct ethtool_tunable *tuna, 1938 1963 const void *data) ··· 1974 1949 1975 1950 switch (tuna->id) { 1976 1951 case ETHTOOL_TX_COPYBREAK: 1952 + ret = hns3_check_tx_copybreak(netdev, *(u32 *)data); 1953 + if (ret) 1954 + return ret; 1955 + 1977 1956 priv->tx_copybreak = *(u32 *)data; 1978 1957 1979 1958 for (i = 0; i < h->kinfo.num_tqps; i++) ··· 1992 1963 1993 1964 break; 1994 1965 case ETHTOOL_TX_COPYBREAK_BUF_SIZE: 1966 + ret = hns3_check_tx_spare_buf_size(netdev, *(u32 *)data); 1967 + if (ret) 1968 + return ret; 1969 + 1995 1970 old_tx_spare_buf_size = h->kinfo.tx_spare_buf_size; 1996 1971 new_tx_spare_buf_size = *(u32 *)data; 1997 1972 netdev_info(netdev, "request to set tx spare buf size from %u to %u\n",
+36 -9
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 6 6 #include <linux/etherdevice.h> 7 7 #include <linux/init.h> 8 8 #include <linux/interrupt.h> 9 + #include <linux/irq.h> 9 10 #include <linux/kernel.h> 10 11 #include <linux/module.h> 11 12 #include <linux/netdevice.h> ··· 3585 3584 return ret; 3586 3585 } 3587 3586 3587 + static void hclge_set_reset_pending(struct hclge_dev *hdev, 3588 + enum hnae3_reset_type reset_type) 3589 + { 3590 + /* When an incorrect reset type is executed, the get_reset_level 3591 + * function generates the HNAE3_NONE_RESET flag. As a result, this 3592 + * type does not need to be set pending. 3593 + */ 3594 + if (reset_type != HNAE3_NONE_RESET) 3595 + set_bit(reset_type, &hdev->reset_pending); 3596 + } 3597 + 3588 3598 static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval) 3589 3599 { 3590 3600 u32 cmdq_src_reg, msix_src_reg, hw_err_src_reg; ··· 3616 3604 */ 3617 3605 if (BIT(HCLGE_VECTOR0_IMPRESET_INT_B) & msix_src_reg) { 3618 3606 dev_info(&hdev->pdev->dev, "IMP reset interrupt\n"); 3619 - set_bit(HNAE3_IMP_RESET, &hdev->reset_pending); 3607 + hclge_set_reset_pending(hdev, HNAE3_IMP_RESET); 3620 3608 set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); 3621 3609 *clearval = BIT(HCLGE_VECTOR0_IMPRESET_INT_B); 3622 3610 hdev->rst_stats.imp_rst_cnt++; ··· 3626 3614 if (BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B) & msix_src_reg) { 3627 3615 dev_info(&hdev->pdev->dev, "global reset interrupt\n"); 3628 3616 set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); 3629 - set_bit(HNAE3_GLOBAL_RESET, &hdev->reset_pending); 3617 + hclge_set_reset_pending(hdev, HNAE3_GLOBAL_RESET); 3630 3618 *clearval = BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B); 3631 3619 hdev->rst_stats.global_rst_cnt++; 3632 3620 return HCLGE_VECTOR0_EVENT_RST; ··· 3781 3769 snprintf(hdev->misc_vector.name, HNAE3_INT_NAME_LEN, "%s-misc-%s", 3782 3770 HCLGE_NAME, pci_name(hdev->pdev)); 3783 3771 ret = request_irq(hdev->misc_vector.vector_irq, hclge_misc_irq_handle, 3784 3770 - 0, hdev->misc_vector.name, hdev); 
3772 + IRQ_NOAUTOEN, hdev->misc_vector.name, hdev); 3785 3773 if (ret) { 3786 3774 hclge_free_vector(hdev, 0); 3787 3775 dev_err(&hdev->pdev->dev, "request misc irq(%d) fail\n", ··· 4074 4062 case HNAE3_FUNC_RESET: 4075 4063 dev_info(&pdev->dev, "PF reset requested\n"); 4076 4064 /* schedule again to check later */ 4077 - set_bit(HNAE3_FUNC_RESET, &hdev->reset_pending); 4065 + hclge_set_reset_pending(hdev, HNAE3_FUNC_RESET); 4078 4066 hclge_reset_task_schedule(hdev); 4079 4067 break; 4080 4068 default: ··· 4107 4095 rst_level = HNAE3_FLR_RESET; 4108 4096 clear_bit(HNAE3_FLR_RESET, addr); 4109 4097 } 4098 + 4099 + clear_bit(HNAE3_NONE_RESET, addr); 4110 4100 4111 4101 if (hdev->reset_type != HNAE3_NONE_RESET && 4112 4102 rst_level < hdev->reset_type) ··· 4251 4237 return false; 4252 4238 } else if (hdev->rst_stats.reset_fail_cnt < MAX_RESET_FAIL_CNT) { 4253 4239 hdev->rst_stats.reset_fail_cnt++; 4254 - set_bit(hdev->reset_type, &hdev->reset_pending); 4240 + hclge_set_reset_pending(hdev, hdev->reset_type); 4255 4241 dev_info(&hdev->pdev->dev, 4256 4242 "re-schedule reset task(%u)\n", 4257 4243 hdev->rst_stats.reset_fail_cnt); ··· 4494 4480 static void hclge_set_def_reset_request(struct hnae3_ae_dev *ae_dev, 4495 4481 enum hnae3_reset_type rst_type) 4496 4482 { 4483 + #define HCLGE_SUPPORT_RESET_TYPE \ 4484 + (BIT(HNAE3_FLR_RESET) | BIT(HNAE3_FUNC_RESET) | \ 4485 + BIT(HNAE3_GLOBAL_RESET) | BIT(HNAE3_IMP_RESET)) 4486 + 4497 4487 struct hclge_dev *hdev = ae_dev->priv; 4488 + 4489 + if (!(BIT(rst_type) & HCLGE_SUPPORT_RESET_TYPE)) { 4490 + /* To prevent reset triggered by hclge_reset_event */ 4491 + set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request); 4492 + dev_warn(&hdev->pdev->dev, "unsupported reset type %d\n", 4493 + rst_type); 4494 + return; 4495 + } 4498 4496 4499 4497 set_bit(rst_type, &hdev->default_reset_request); 4500 4498 } ··· 11917 11891 11918 11892 hclge_init_rxd_adv_layout(hdev); 11919 11893 11920 - /* Enable MISC vector(vector0) */ 11921 - 
hclge_enable_vector(&hdev->misc_vector, true); 11922 - 11923 11894 ret = hclge_init_wol(hdev); 11924 11895 if (ret) 11925 11896 dev_warn(&pdev->dev, ··· 11928 11905 11929 11906 hclge_state_init(hdev); 11930 11907 hdev->last_reset_time = jiffies; 11908 + 11909 + /* Enable MISC vector(vector0) */ 11910 + enable_irq(hdev->misc_vector.vector_irq); 11911 + hclge_enable_vector(&hdev->misc_vector, true); 11931 11912 11932 11913 dev_info(&hdev->pdev->dev, "%s driver initialization finished.\n", 11933 11914 HCLGE_DRIVER_NAME); ··· 12338 12311 12339 12312 /* Disable MISC vector(vector0) */ 12340 12313 hclge_enable_vector(&hdev->misc_vector, false); 12341 - synchronize_irq(hdev->misc_vector.vector_irq); 12314 + disable_irq(hdev->misc_vector.vector_irq); 12342 12315 12343 12316 /* Disable all hw interrupts */ 12344 12317 hclge_config_mac_tnl_int(hdev, false);
+3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
··· 58 58 struct hclge_dev *hdev = vport->back; 59 59 struct hclge_ptp *ptp = hdev->ptp; 60 60 61 + if (!ptp) 62 + return false; 63 + 61 64 if (!test_bit(HCLGE_PTP_FLAG_TX_EN, &ptp->flags) || 62 65 test_and_set_bit(HCLGE_STATE_PTP_TX_HANDLING, &hdev->state)) { 63 66 ptp->tx_skipped++;
+5 -4
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
··· 510 510 static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data, 511 511 struct hnae3_knic_private_info *kinfo) 512 512 { 513 - #define HCLGE_RING_REG_OFFSET 0x200 514 513 #define HCLGE_RING_INT_REG_OFFSET 0x4 515 514 515 + struct hnae3_queue *tqp; 516 516 int i, j, reg_num; 517 517 int data_num_sum; 518 518 u32 *reg = data; ··· 533 533 reg_num = ARRAY_SIZE(ring_reg_addr_list); 534 534 for (j = 0; j < kinfo->num_tqps; j++) { 535 535 reg += hclge_reg_get_tlv(HCLGE_REG_TAG_RING, reg_num, reg); 536 + tqp = kinfo->tqp[j]; 536 537 for (i = 0; i < reg_num; i++) 537 - *reg++ = hclge_read_dev(&hdev->hw, 538 - ring_reg_addr_list[i] + 539 - HCLGE_RING_REG_OFFSET * j); 538 + *reg++ = readl_relaxed(tqp->io_base - 539 + HCLGE_TQP_REG_OFFSET + 540 + ring_reg_addr_list[i]); 540 541 } 541 542 data_num_sum += (reg_num + HCLGE_REG_TLV_SPACE) * kinfo->num_tqps; 542 543
+33 -7
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
··· 1395 1395 return ret; 1396 1396 } 1397 1397 1398 + static void hclgevf_set_reset_pending(struct hclgevf_dev *hdev, 1399 + enum hnae3_reset_type reset_type) 1400 + { 1401 + /* When an incorrect reset type is executed, the get_reset_level 1402 + * function generates the HNAE3_NONE_RESET flag. As a result, this 1403 + * type does not need to be set pending. 1404 + */ 1405 + if (reset_type != HNAE3_NONE_RESET) 1406 + set_bit(reset_type, &hdev->reset_pending); 1407 + } 1408 + 1398 1409 static int hclgevf_reset_wait(struct hclgevf_dev *hdev) 1399 1410 { 1400 1411 #define HCLGEVF_RESET_WAIT_US 20000 ··· 1555 1544 hdev->rst_stats.rst_fail_cnt); 1556 1545 1557 1546 if (hdev->rst_stats.rst_fail_cnt < HCLGEVF_RESET_MAX_FAIL_CNT) 1558 - set_bit(hdev->reset_type, &hdev->reset_pending); 1547 + hclgevf_set_reset_pending(hdev, hdev->reset_type); 1559 1548 1560 1549 if (hclgevf_is_reset_pending(hdev)) { 1561 1550 set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); ··· 1675 1664 clear_bit(HNAE3_FLR_RESET, addr); 1676 1665 } 1677 1666 1667 + clear_bit(HNAE3_NONE_RESET, addr); 1668 + 1678 1669 return rst_level; 1679 1670 } 1680 1671 ··· 1686 1673 struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev); 1687 1674 struct hclgevf_dev *hdev = ae_dev->priv; 1688 1675 1689 - dev_info(&hdev->pdev->dev, "received reset request from VF enet\n"); 1690 - 1691 1676 if (hdev->default_reset_request) 1692 1677 hdev->reset_level = 1693 1678 hclgevf_get_reset_level(&hdev->default_reset_request); 1694 1679 else 1695 1680 hdev->reset_level = HNAE3_VF_FUNC_RESET; 1681 + 1682 + dev_info(&hdev->pdev->dev, "received reset request from VF enet, reset level is %d\n", 1683 + hdev->reset_level); 1696 1684 1697 1685 /* reset of this VF requested */ 1698 1686 set_bit(HCLGEVF_RESET_REQUESTED, &hdev->reset_state); ··· 1705 1691 static void hclgevf_set_def_reset_request(struct hnae3_ae_dev *ae_dev, 1706 1692 enum hnae3_reset_type rst_type) 1707 1693 { 1694 + #define HCLGEVF_SUPPORT_RESET_TYPE \ 1695 + (BIT(HNAE3_VF_RESET) | 
BIT(HNAE3_VF_FUNC_RESET) | \ 1696 + BIT(HNAE3_VF_PF_FUNC_RESET) | BIT(HNAE3_VF_FULL_RESET) | \ 1697 + BIT(HNAE3_FLR_RESET) | BIT(HNAE3_VF_EXP_RESET)) 1698 + 1708 1699 struct hclgevf_dev *hdev = ae_dev->priv; 1709 1700 1701 + if (!(BIT(rst_type) & HCLGEVF_SUPPORT_RESET_TYPE)) { 1702 + /* To prevent reset triggered by hclge_reset_event */ 1703 + set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request); 1704 + dev_info(&hdev->pdev->dev, "unsupported reset type %d\n", 1705 + rst_type); 1706 + return; 1707 + } 1710 1708 set_bit(rst_type, &hdev->default_reset_request); 1711 1709 } 1712 1710 ··· 1875 1849 */ 1876 1850 if (hdev->reset_attempts > HCLGEVF_MAX_RESET_ATTEMPTS_CNT) { 1877 1851 /* prepare for full reset of stack + pcie interface */ 1878 - set_bit(HNAE3_VF_FULL_RESET, &hdev->reset_pending); 1852 + hclgevf_set_reset_pending(hdev, HNAE3_VF_FULL_RESET); 1879 1853 1880 1854 /* "defer" schedule the reset task again */ 1881 1855 set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); 1882 1856 } else { 1883 1857 hdev->reset_attempts++; 1884 1858 1885 - set_bit(hdev->reset_level, &hdev->reset_pending); 1859 + hclgevf_set_reset_pending(hdev, hdev->reset_level); 1886 1860 set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); 1887 1861 } 1888 1862 hclgevf_reset_task_schedule(hdev); ··· 2005 1979 rst_ing_reg = hclgevf_read_dev(&hdev->hw, HCLGEVF_RST_ING); 2006 1980 dev_info(&hdev->pdev->dev, 2007 1981 "receive reset interrupt 0x%x!\n", rst_ing_reg); 2008 - set_bit(HNAE3_VF_RESET, &hdev->reset_pending); 1982 + hclgevf_set_reset_pending(hdev, HNAE3_VF_RESET); 2009 1983 set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); 2010 1984 set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); 2011 1985 *clearval = ~(1U << HCLGEVF_VECTOR0_RST_INT_B); ··· 2315 2289 clear_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state); 2316 2290 2317 2291 INIT_DELAYED_WORK(&hdev->service_task, hclgevf_service_task); 2292 + timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0); 2318 2293 2319 2294 
mutex_init(&hdev->mbx_resp.mbx_mutex); 2320 2295 sema_init(&hdev->reset_sem, 1); ··· 3015 2988 HCLGEVF_DRIVER_NAME); 3016 2989 3017 2990 hclgevf_task_schedule(hdev, round_jiffies_relative(HZ)); 3018 - timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0); 3019 2991 3020 2992 return 0; 3021 2993
+5 -4
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
··· 123 123 void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version, 124 124 void *data) 125 125 { 126 - #define HCLGEVF_RING_REG_OFFSET 0x200 127 126 #define HCLGEVF_RING_INT_REG_OFFSET 0x4 128 127 129 128 struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 129 + struct hnae3_queue *tqp; 130 130 int i, j, reg_um; 131 131 u32 *reg = data; 132 132 ··· 147 147 reg_um = ARRAY_SIZE(ring_reg_addr_list); 148 148 for (j = 0; j < hdev->num_tqps; j++) { 149 149 reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_RING, reg_um, reg); 150 + tqp = &hdev->htqp[j].q; 150 151 for (i = 0; i < reg_um; i++) 151 - *reg++ = hclgevf_read_dev(&hdev->hw, 152 - ring_reg_addr_list[i] + 153 - HCLGEVF_RING_REG_OFFSET * j); 152 + *reg++ = readl_relaxed(tqp->io_base - 153 + HCLGEVF_TQP_REG_OFFSET + 154 + ring_reg_addr_list[i]); 154 155 } 155 156 156 157 reg_um = ARRAY_SIZE(tqp_intr_reg_addr_list);
+70
drivers/net/ethernet/intel/ice/ice_dpll.c
··· 10 10 #define ICE_DPLL_PIN_IDX_INVALID 0xff 11 11 #define ICE_DPLL_RCLK_NUM_PER_PF 1 12 12 #define ICE_DPLL_PIN_ESYNC_PULSE_HIGH_PERCENT 25 13 + #define ICE_DPLL_PIN_GEN_RCLK_FREQ 1953125 13 14 14 15 /** 15 16 * enum ice_dpll_pin_type - enumerate ice pin types: ··· 2065 2064 } 2066 2065 2067 2066 /** 2067 + * ice_dpll_init_info_pins_generic - initializes generic pins info 2068 + * @pf: board private structure 2069 + * @input: if input pins initialized 2070 + * 2071 + * Init information for generic pins, cache them in PF's pins structures. 2072 + * 2073 + * Return: 2074 + * * 0 - success 2075 + * * negative - init failure reason 2076 + */ 2077 + static int ice_dpll_init_info_pins_generic(struct ice_pf *pf, bool input) 2078 + { 2079 + struct ice_dpll *de = &pf->dplls.eec, *dp = &pf->dplls.pps; 2080 + static const char labels[][sizeof("99")] = { 2081 + "0", "1", "2", "3", "4", "5", "6", "7", "8", 2082 + "9", "10", "11", "12", "13", "14", "15" }; 2083 + u32 cap = DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE; 2084 + enum ice_dpll_pin_type pin_type; 2085 + int i, pin_num, ret = -EINVAL; 2086 + struct ice_dpll_pin *pins; 2087 + u32 phase_adj_max; 2088 + 2089 + if (input) { 2090 + pin_num = pf->dplls.num_inputs; 2091 + pins = pf->dplls.inputs; 2092 + phase_adj_max = pf->dplls.input_phase_adj_max; 2093 + pin_type = ICE_DPLL_PIN_TYPE_INPUT; 2094 + cap |= DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE; 2095 + } else { 2096 + pin_num = pf->dplls.num_outputs; 2097 + pins = pf->dplls.outputs; 2098 + phase_adj_max = pf->dplls.output_phase_adj_max; 2099 + pin_type = ICE_DPLL_PIN_TYPE_OUTPUT; 2100 + } 2101 + if (pin_num > ARRAY_SIZE(labels)) 2102 + return ret; 2103 + 2104 + for (i = 0; i < pin_num; i++) { 2105 + pins[i].idx = i; 2106 + pins[i].prop.board_label = labels[i]; 2107 + pins[i].prop.phase_range.min = phase_adj_max; 2108 + pins[i].prop.phase_range.max = -phase_adj_max; 2109 + pins[i].prop.capabilities = cap; 2110 + pins[i].pf = pf; 2111 + ret = ice_dpll_pin_state_update(pf, 
&pins[i], pin_type, NULL); 2112 + if (ret) 2113 + break; 2114 + if (input && pins[i].freq == ICE_DPLL_PIN_GEN_RCLK_FREQ) 2115 + pins[i].prop.type = DPLL_PIN_TYPE_MUX; 2116 + else 2117 + pins[i].prop.type = DPLL_PIN_TYPE_EXT; 2118 + if (!input) 2119 + continue; 2120 + ret = ice_aq_get_cgu_ref_prio(&pf->hw, de->dpll_idx, i, 2121 + &de->input_prio[i]); 2122 + if (ret) 2123 + break; 2124 + ret = ice_aq_get_cgu_ref_prio(&pf->hw, dp->dpll_idx, i, 2125 + &dp->input_prio[i]); 2126 + if (ret) 2127 + break; 2128 + } 2129 + 2130 + return ret; 2131 + } 2132 + 2133 + /** 2068 2134 * ice_dpll_init_info_direct_pins - initializes direct pins info 2069 2135 * @pf: board private structure 2070 2136 * @pin_type: type of pins being initialized ··· 2169 2101 default: 2170 2102 return -EINVAL; 2171 2103 } 2104 + if (num_pins != ice_cgu_get_num_pins(hw, input)) 2105 + return ice_dpll_init_info_pins_generic(pf, input); 2172 2106 2173 2107 for (i = 0; i < num_pins; i++) { 2174 2108 caps = 0;
+19 -2
drivers/net/ethernet/intel/ice/ice_ptp_hw.c
··· 34 34 ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common }, 35 35 { "GNSS-1PPS", ZL_REF4P, DPLL_PIN_TYPE_GNSS, 36 36 ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz }, 37 - { "OCXO", ZL_REF4N, DPLL_PIN_TYPE_INT_OSCILLATOR, 0, }, 38 37 }; 39 38 40 39 static const struct ice_cgu_pin_desc ice_e810t_qsfp_cgu_inputs[] = { ··· 51 52 ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common }, 52 53 { "GNSS-1PPS", ZL_REF4P, DPLL_PIN_TYPE_GNSS, 53 54 ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz }, 54 - { "OCXO", ZL_REF4N, DPLL_PIN_TYPE_INT_OSCILLATOR, }, 55 55 }; 56 56 57 57 static const struct ice_cgu_pin_desc ice_e810t_sfp_cgu_outputs[] = { ··· 6043 6045 } 6044 6046 6045 6047 return t; 6048 + } 6049 + 6050 + /** 6051 + * ice_cgu_get_num_pins - get pin description array size 6052 + * @hw: pointer to the hw struct 6053 + * @input: if request is done against input or output pins 6054 + * 6055 + * Return: size of pin description array for given hw. 6056 + */ 6057 + int ice_cgu_get_num_pins(struct ice_hw *hw, bool input) 6058 + { 6059 + const struct ice_cgu_pin_desc *t; 6060 + int size; 6061 + 6062 + t = ice_cgu_get_pin_desc(hw, input, &size); 6063 + if (t) 6064 + return size; 6065 + 6066 + return 0; 6046 6067 } 6047 6068 6048 6069 /**
+1
drivers/net/ethernet/intel/ice/ice_ptp_hw.h
··· 406 406 int ice_write_sma_ctrl(struct ice_hw *hw, u8 data); 407 407 int ice_read_pca9575_reg(struct ice_hw *hw, u8 offset, u8 *data); 408 408 int ice_ptp_read_sdp_ac(struct ice_hw *hw, __le16 *entries, uint *num_entries); 409 + int ice_cgu_get_num_pins(struct ice_hw *hw, bool input); 409 410 enum dpll_pin_type ice_cgu_get_pin_type(struct ice_hw *hw, u8 pin, bool input); 410 411 struct dpll_pin_frequency * 411 412 ice_cgu_get_pin_freq_supp(struct ice_hw *hw, u8 pin, bool input, u8 *num);
+1 -1
drivers/net/ethernet/intel/igb/igb_main.c
··· 907 907 int i, err = 0, vector = 0, free_vector = 0; 908 908 909 909 err = request_irq(adapter->msix_entries[vector].vector, 910 - igb_msix_other, 0, netdev->name, adapter); 910 + igb_msix_other, IRQF_NO_THREAD, netdev->name, adapter); 911 911 if (err) 912 912 goto err_out; 913 913
+2 -2
drivers/net/ethernet/mediatek/mtk_wed_wo.h
··· 91 91 #define MT7981_FIRMWARE_WO "mediatek/mt7981_wo.bin" 92 92 #define MT7986_FIRMWARE_WO0 "mediatek/mt7986_wo_0.bin" 93 93 #define MT7986_FIRMWARE_WO1 "mediatek/mt7986_wo_1.bin" 94 - #define MT7988_FIRMWARE_WO0 "mediatek/mt7988_wo_0.bin" 95 - #define MT7988_FIRMWARE_WO1 "mediatek/mt7988_wo_1.bin" 94 + #define MT7988_FIRMWARE_WO0 "mediatek/mt7988/mt7988_wo_0.bin" 95 + #define MT7988_FIRMWARE_WO1 "mediatek/mt7988/mt7988_wo_1.bin" 96 96 97 97 #define MTK_WO_MCU_CFG_LS_BASE 0 98 98 #define MTK_WO_MCU_CFG_LS_HW_VER_ADDR (MTK_WO_MCU_CFG_LS_BASE + 0x000)
+17 -8
drivers/net/ethernet/mellanox/mlxsw/pci.c
··· 389 389 dma_unmap_single(&pdev->dev, mapaddr, frag_len, direction); 390 390 } 391 391 392 - static struct sk_buff *mlxsw_pci_rdq_build_skb(struct page *pages[], 392 + static struct sk_buff *mlxsw_pci_rdq_build_skb(struct mlxsw_pci_queue *q, 393 + struct page *pages[], 393 394 u16 byte_count) 394 395 { 396 + struct mlxsw_pci_queue *cq = q->u.rdq.cq; 395 397 unsigned int linear_data_size; 398 + struct page_pool *page_pool; 396 399 struct sk_buff *skb; 397 400 int page_index = 0; 398 401 bool linear_only; 399 402 void *data; 403 + 404 + linear_only = byte_count + MLXSW_PCI_RX_BUF_SW_OVERHEAD <= PAGE_SIZE; 405 + linear_data_size = linear_only ? byte_count : 406 + PAGE_SIZE - 407 + MLXSW_PCI_RX_BUF_SW_OVERHEAD; 408 + 409 + page_pool = cq->u.cq.page_pool; 410 + page_pool_dma_sync_for_cpu(page_pool, pages[page_index], 411 + MLXSW_PCI_SKB_HEADROOM, linear_data_size); 400 412 401 413 data = page_address(pages[page_index]); 402 414 net_prefetch(data); ··· 416 404 skb = napi_build_skb(data, PAGE_SIZE); 417 405 if (unlikely(!skb)) 418 406 return ERR_PTR(-ENOMEM); 419 - 420 - linear_only = byte_count + MLXSW_PCI_RX_BUF_SW_OVERHEAD <= PAGE_SIZE; 421 - linear_data_size = linear_only ? byte_count : 422 - PAGE_SIZE - 423 - MLXSW_PCI_RX_BUF_SW_OVERHEAD; 424 407 425 408 skb_reserve(skb, MLXSW_PCI_SKB_HEADROOM); 426 409 skb_put(skb, linear_data_size); ··· 432 425 433 426 page = pages[page_index]; 434 427 frag_size = min(byte_count, PAGE_SIZE); 428 + page_pool_dma_sync_for_cpu(page_pool, page, 0, frag_size); 435 429 skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, 436 430 page, 0, frag_size, PAGE_SIZE); 437 431 byte_count -= frag_size; ··· 768 760 if (err) 769 761 goto out; 770 762 771 - skb = mlxsw_pci_rdq_build_skb(pages, byte_count); 763 + skb = mlxsw_pci_rdq_build_skb(q, pages, byte_count); 772 764 if (IS_ERR(skb)) { 773 765 dev_err_ratelimited(&pdev->dev, "Failed to build skb for RDQ\n"); 774 766 mlxsw_pci_rdq_pages_recycle(q, pages, num_sg_entries); ··· 996 988 if (cq_type != MLXSW_PCI_CQ_RDQ) 997 989 return 0; 998 990 999 - pp_params.flags = PP_FLAG_DMA_MAP; 991 + pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV; 1000 992 pp_params.pool_size = MLXSW_PCI_WQE_COUNT * mlxsw_pci->num_sg_entries; 1001 993 pp_params.nid = dev_to_node(&mlxsw_pci->pdev->dev); 1002 994 pp_params.dev = &mlxsw_pci->pdev->dev; 1003 995 pp_params.napi = &q->u.cq.napi; 1004 996 pp_params.dma_dir = DMA_FROM_DEVICE; 997 + pp_params.max_len = PAGE_SIZE; 1005 998 1006 999 page_pool = page_pool_create(&pp_params); 1007 1000 if (IS_ERR(page_pool))
+24 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
··· 481 481 struct mlxsw_sp_ipip_entry *ipip_entry, 482 482 struct netlink_ext_ack *extack) 483 483 { 484 + u32 new_kvdl_index, old_kvdl_index = ipip_entry->dip_kvdl_index; 485 + struct in6_addr old_addr6 = ipip_entry->parms.daddr.addr6; 484 486 struct mlxsw_sp_ipip_parms new_parms; 487 + int err; 485 488 486 489 new_parms = mlxsw_sp_ipip_netdev_parms_init_gre6(ipip_entry->ol_dev); 487 - return mlxsw_sp_ipip_ol_netdev_change_gre(mlxsw_sp, ipip_entry, 488 - &new_parms, extack); 490 + 491 + err = mlxsw_sp_ipv6_addr_kvdl_index_get(mlxsw_sp, 492 + &new_parms.daddr.addr6, 493 + &new_kvdl_index); 494 + if (err) 495 + return err; 496 + ipip_entry->dip_kvdl_index = new_kvdl_index; 497 + 498 + err = mlxsw_sp_ipip_ol_netdev_change_gre(mlxsw_sp, ipip_entry, 499 + &new_parms, extack); 500 + if (err) 501 + goto err_change_gre; 502 + 503 + mlxsw_sp_ipv6_addr_put(mlxsw_sp, &old_addr6); 504 + 505 + return 0; 506 + 507 + err_change_gre: 508 + ipip_entry->dip_kvdl_index = old_kvdl_index; 509 + mlxsw_sp_ipv6_addr_put(mlxsw_sp, &new_parms.daddr.addr6); 510 + return err; 489 511 } 490 512 491 513 static int
+7
drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
··· 16 16 #include "spectrum.h" 17 17 #include "spectrum_ptp.h" 18 18 #include "core.h" 19 + #include "txheader.h" 19 20 20 21 #define MLXSW_SP1_PTP_CLOCK_CYCLES_SHIFT 29 21 22 #define MLXSW_SP1_PTP_CLOCK_FREQ_KHZ 156257 /* 6.4nSec */ ··· 1685 1684 struct sk_buff *skb, 1686 1685 const struct mlxsw_tx_info *tx_info) 1687 1686 { 1687 + if (skb_cow_head(skb, MLXSW_TXHDR_LEN)) { 1688 + this_cpu_inc(mlxsw_sp_port->pcpu_stats->tx_dropped); 1689 + dev_kfree_skb_any(skb); 1690 + return -ENOMEM; 1691 + } 1692 + 1688 1693 mlxsw_sp_txhdr_construct(skb, tx_info); 1689 1694 return 0; 1690 1695 }
+8
drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
··· 203 203 readl(ioaddr + DMA_CHAN_TX_CONTROL(dwmac4_addrs, channel)); 204 204 reg_space[DMA_CHAN_RX_CONTROL(default_addrs, channel) / 4] = 205 205 readl(ioaddr + DMA_CHAN_RX_CONTROL(dwmac4_addrs, channel)); 206 + reg_space[DMA_CHAN_TX_BASE_ADDR_HI(default_addrs, channel) / 4] = 207 + readl(ioaddr + DMA_CHAN_TX_BASE_ADDR_HI(dwmac4_addrs, channel)); 206 208 reg_space[DMA_CHAN_TX_BASE_ADDR(default_addrs, channel) / 4] = 207 209 readl(ioaddr + DMA_CHAN_TX_BASE_ADDR(dwmac4_addrs, channel)); 210 + reg_space[DMA_CHAN_RX_BASE_ADDR_HI(default_addrs, channel) / 4] = 211 + readl(ioaddr + DMA_CHAN_RX_BASE_ADDR_HI(dwmac4_addrs, channel)); 208 212 reg_space[DMA_CHAN_RX_BASE_ADDR(default_addrs, channel) / 4] = 209 213 readl(ioaddr + DMA_CHAN_RX_BASE_ADDR(dwmac4_addrs, channel)); 210 214 reg_space[DMA_CHAN_TX_END_ADDR(default_addrs, channel) / 4] = ··· 229 225 readl(ioaddr + DMA_CHAN_CUR_TX_DESC(dwmac4_addrs, channel)); 230 226 reg_space[DMA_CHAN_CUR_RX_DESC(default_addrs, channel) / 4] = 231 227 readl(ioaddr + DMA_CHAN_CUR_RX_DESC(dwmac4_addrs, channel)); 228 + reg_space[DMA_CHAN_CUR_TX_BUF_ADDR_HI(default_addrs, channel) / 4] = 229 + readl(ioaddr + DMA_CHAN_CUR_TX_BUF_ADDR_HI(dwmac4_addrs, channel)); 232 230 reg_space[DMA_CHAN_CUR_TX_BUF_ADDR(default_addrs, channel) / 4] = 233 231 readl(ioaddr + DMA_CHAN_CUR_TX_BUF_ADDR(dwmac4_addrs, channel)); 232 + reg_space[DMA_CHAN_CUR_RX_BUF_ADDR_HI(default_addrs, channel) / 4] = 233 + readl(ioaddr + DMA_CHAN_CUR_RX_BUF_ADDR_HI(dwmac4_addrs, channel)); 234 234 reg_space[DMA_CHAN_CUR_RX_BUF_ADDR(default_addrs, channel) / 4] = 235 235 readl(ioaddr + DMA_CHAN_CUR_RX_BUF_ADDR(dwmac4_addrs, channel)); 236 236 reg_space[DMA_CHAN_STATUS(default_addrs, channel) / 4] =
+2
drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.h
··· 127 127 #define DMA_CHAN_SLOT_CTRL_STATUS(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x3c) 128 128 #define DMA_CHAN_CUR_TX_DESC(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x44) 129 129 #define DMA_CHAN_CUR_RX_DESC(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x4c) 130 + #define DMA_CHAN_CUR_TX_BUF_ADDR_HI(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x50) 130 131 #define DMA_CHAN_CUR_TX_BUF_ADDR(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x54) 132 + #define DMA_CHAN_CUR_RX_BUF_ADDR_HI(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x58) 131 133 #define DMA_CHAN_CUR_RX_BUF_ADDR(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x5c) 132 134 #define DMA_CHAN_STATUS(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x60) 133 135
+17 -5
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 4304 4304 if (dma_mapping_error(priv->device, des)) 4305 4305 goto dma_map_err; 4306 4306 4307 - tx_q->tx_skbuff_dma[first_entry].buf = des; 4308 - tx_q->tx_skbuff_dma[first_entry].len = skb_headlen(skb); 4309 - tx_q->tx_skbuff_dma[first_entry].map_as_page = false; 4310 - tx_q->tx_skbuff_dma[first_entry].buf_type = STMMAC_TXBUF_T_SKB; 4311 - 4312 4307 if (priv->dma_cap.addr64 <= 32) { 4313 4308 first->des0 = cpu_to_le32(des); 4314 4309 ··· 4321 4326 } 4322 4327 4323 4328 stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue); 4329 + 4330 + /* In case two or more DMA transmit descriptors are allocated for this 4331 + * non-paged SKB data, the DMA buffer address should be saved to 4332 + * tx_q->tx_skbuff_dma[].buf corresponding to the last descriptor, 4333 + * and leave the other tx_q->tx_skbuff_dma[].buf as NULL to guarantee 4334 + * that stmmac_tx_clean() does not unmap the entire DMA buffer too early 4335 + * since the tail areas of the DMA buffer can be accessed by DMA engine 4336 + * sooner or later. 4337 + * By saving the DMA buffer address to tx_q->tx_skbuff_dma[].buf 4338 + * corresponding to the last descriptor, stmmac_tx_clean() will unmap 4339 + * this DMA buffer right after the DMA engine completely finishes the 4340 + * full buffer transmission. 4341 + */ 4342 + tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des; 4343 + tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_headlen(skb); 4344 + tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = false; 4345 + tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB; 4324 4346 4325 4347 /* Prepare fragments */ 4326 4348 for (i = 0; i < nfrags; i++) {
+13 -9
drivers/net/gtp.c
··· 1702 1702 return -EINVAL; 1703 1703 1704 1704 if (data[IFLA_GTP_FD0]) { 1705 - u32 fd0 = nla_get_u32(data[IFLA_GTP_FD0]); 1705 + int fd0 = nla_get_u32(data[IFLA_GTP_FD0]); 1706 1706 1707 - sk0 = gtp_encap_enable_socket(fd0, UDP_ENCAP_GTP0, gtp); 1708 - if (IS_ERR(sk0)) 1709 - return PTR_ERR(sk0); 1707 + if (fd0 >= 0) { 1708 + sk0 = gtp_encap_enable_socket(fd0, UDP_ENCAP_GTP0, gtp); 1709 + if (IS_ERR(sk0)) 1710 + return PTR_ERR(sk0); 1711 + } 1710 1712 } 1711 1713 1712 1714 if (data[IFLA_GTP_FD1]) { 1713 - u32 fd1 = nla_get_u32(data[IFLA_GTP_FD1]); 1715 + int fd1 = nla_get_u32(data[IFLA_GTP_FD1]); 1714 1716 1715 - sk1u = gtp_encap_enable_socket(fd1, UDP_ENCAP_GTP1U, gtp); 1716 - if (IS_ERR(sk1u)) { 1717 - gtp_encap_disable_sock(sk0); 1718 - return PTR_ERR(sk1u); 1717 + if (fd1 >= 0) { 1718 + sk1u = gtp_encap_enable_socket(fd1, UDP_ENCAP_GTP1U, gtp); 1719 + if (IS_ERR(sk1u)) { 1720 + gtp_encap_disable_sock(sk0); 1721 + return PTR_ERR(sk1u); 1722 + } 1719 1723 } 1720 1724 } 1721 1725
+1 -2
drivers/net/macsec.c
··· 3798 3798 { 3799 3799 struct macsec_dev *macsec = macsec_priv(dev); 3800 3800 3801 - if (macsec->secy.tx_sc.md_dst) 3802 - metadata_dst_free(macsec->secy.tx_sc.md_dst); 3801 + dst_release(&macsec->secy.tx_sc.md_dst->dst); 3803 3802 free_percpu(macsec->stats); 3804 3803 free_percpu(macsec->secy.tx_sc.stats); 3805 3804
+3
drivers/net/mctp/mctp-i2c.c
··· 588 588 if (len > MCTP_I2C_MAXMTU) 589 589 return -EMSGSIZE; 590 590 591 + if (!daddr || !saddr) 592 + return -EINVAL; 593 + 591 594 lldst = *((u8 *)daddr); 592 595 llsrc = *((u8 *)saddr); 593 596
+3 -1
drivers/net/netdevsim/fib.c
··· 1377 1377 1378 1378 if (pos != 0) 1379 1379 return -EINVAL; 1380 - if (size > sizeof(buf)) 1380 + if (size > sizeof(buf) - 1) 1381 1381 return -EINVAL; 1382 1382 if (copy_from_user(buf, user_buf, size)) 1383 1383 return -EFAULT; 1384 + buf[size] = 0; 1385 + 1384 1386 if (sscanf(buf, "%u %hu", &nhid, &bucket_index) != 2) 1385 1387 return -EINVAL; 1386 1388
+1
drivers/net/usb/qmi_wwan.c
··· 1076 1076 USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x581d, USB_CLASS_VENDOR_SPEC, 1, 7), 1077 1077 .driver_info = (unsigned long)&qmi_wwan_info, 1078 1078 }, 1079 + {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0122)}, /* Quectel RG650V */ 1079 1080 {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0125)}, /* Quectel EC25, EC20 R2.0 Mini PCIe */ 1080 1081 {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0306)}, /* Quectel EP06/EG06/EM06 */ 1081 1082 {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0512)}, /* Quectel EG12/EM12 */
+1
drivers/net/usb/r8152.c
··· 10069 10069 { USB_DEVICE(VENDOR_ID_LENOVO, 0x3062) }, 10070 10070 { USB_DEVICE(VENDOR_ID_LENOVO, 0x3069) }, 10071 10071 { USB_DEVICE(VENDOR_ID_LENOVO, 0x3082) }, 10072 + { USB_DEVICE(VENDOR_ID_LENOVO, 0x3098) }, 10072 10073 { USB_DEVICE(VENDOR_ID_LENOVO, 0x7205) }, 10073 10074 { USB_DEVICE(VENDOR_ID_LENOVO, 0x720c) }, 10074 10075 { USB_DEVICE(VENDOR_ID_LENOVO, 0x7214) },
+6 -1
drivers/net/wireless/ath/ath10k/wmi-tlv.c
··· 3043 3043 struct sk_buff *msdu) 3044 3044 { 3045 3045 struct ath10k_skb_cb *cb = ATH10K_SKB_CB(msdu); 3046 + struct ath10k_mgmt_tx_pkt_addr *pkt_addr; 3046 3047 struct ath10k_wmi *wmi = &ar->wmi; 3047 3048 3048 - idr_remove(&wmi->mgmt_pending_tx, cb->msdu_id); 3049 + spin_lock_bh(&ar->data_lock); 3050 + pkt_addr = idr_remove(&wmi->mgmt_pending_tx, cb->msdu_id); 3051 + spin_unlock_bh(&ar->data_lock); 3052 + 3053 + kfree(pkt_addr); 3049 3054 3050 3055 return 0; 3051 3056 }
+2
drivers/net/wireless/ath/ath10k/wmi.c
··· 2441 2441 dma_unmap_single(ar->dev, pkt_addr->paddr, 2442 2442 msdu->len, DMA_TO_DEVICE); 2443 2443 info = IEEE80211_SKB_CB(msdu); 2444 + kfree(pkt_addr); 2444 2445 2445 2446 if (param->status) { 2446 2447 info->flags &= ~IEEE80211_TX_STAT_ACK; ··· 9613 9612 dma_unmap_single(ar->dev, pkt_addr->paddr, 9614 9613 msdu->len, DMA_TO_DEVICE); 9615 9614 ieee80211_free_txskb(ar->hw, msdu); 9615 + kfree(pkt_addr); 9616 9616 9617 9617 return 0; 9618 9618 }
+5 -2
drivers/net/wireless/ath/ath11k/dp_rx.c
··· 5291 5291 hal_status == HAL_TLV_STATUS_PPDU_DONE) { 5292 5292 rx_mon_stats->status_ppdu_done++; 5293 5293 pmon->mon_ppdu_status = DP_PPDU_STATUS_DONE; 5294 - ath11k_dp_rx_mon_dest_process(ar, mac_id, budget, napi); 5295 - pmon->mon_ppdu_status = DP_PPDU_STATUS_START; 5294 + if (!ab->hw_params.full_monitor_mode) { 5295 + ath11k_dp_rx_mon_dest_process(ar, mac_id, 5296 + budget, napi); 5297 + pmon->mon_ppdu_status = DP_PPDU_STATUS_START; 5298 + } 5296 5299 } 5297 5300 5298 5301 if (ppdu_info->peer_id == HAL_INVALID_PEERID ||
+1 -1
drivers/net/wireless/ath/wil6210/txrx.c
··· 306 306 struct sk_buff *skb) 307 307 { 308 308 struct wil6210_rtap { 309 - struct ieee80211_radiotap_header rthdr; 309 + struct ieee80211_radiotap_header_fixed rthdr; 310 310 /* fields should be in the order of bits in rthdr.it_present */ 311 311 /* flags */ 312 312 u8 flags;
+1
drivers/net/wireless/broadcom/brcm80211/Kconfig
··· 27 27 config BRCM_TRACING 28 28 bool "Broadcom device tracing" 29 29 depends on BRCMSMAC || BRCMFMAC 30 + depends on TRACING 30 31 help 31 32 If you say Y here, the Broadcom wireless drivers will register 32 33 with ftrace to dump event information into the trace ringbuffer.
+1 -1
drivers/net/wireless/intel/ipw2x00/ipw2100.c
··· 2515 2515 * to build this manually element by element, we can write it much 2516 2516 * more efficiently than we can parse it. ORDER MATTERS HERE */ 2517 2517 struct ipw_rt_hdr { 2518 - struct ieee80211_radiotap_header rt_hdr; 2518 + struct ieee80211_radiotap_header_fixed rt_hdr; 2519 2519 s8 rt_dbmsignal; /* signal in dbM, kluged to signed */ 2520 2520 } *ipw_rt; 2521 2521
+1 -1
drivers/net/wireless/intel/ipw2x00/ipw2200.h
··· 1141 1141 * structure is provided regardless of any bits unset. 1142 1142 */ 1143 1143 struct ipw_rt_hdr { 1144 - struct ieee80211_radiotap_header rt_hdr; 1144 + struct ieee80211_radiotap_header_fixed rt_hdr; 1145 1145 u64 rt_tsf; /* TSF */ /* XXX */ 1146 1146 u8 rt_flags; /* radiotap packet flags */ 1147 1147 u8 rt_rate; /* rate in 500kb/s */
+14 -1
drivers/net/wireless/intel/iwlegacy/common.c
··· 3122 3122 struct il_cmd_meta *out_meta; 3123 3123 dma_addr_t phys_addr; 3124 3124 unsigned long flags; 3125 + u8 *out_payload; 3125 3126 u32 idx; 3126 3127 u16 fix_size; 3127 3128 ··· 3158 3157 out_cmd = txq->cmd[idx]; 3159 3158 out_meta = &txq->meta[idx]; 3160 3159 3160 + /* The payload is in the same place in regular and huge 3161 + * command buffers, but we need to let the compiler know when 3162 + * we're using a larger payload buffer to avoid "field- 3163 + * spanning write" warnings at run-time for huge commands. 3164 + */ 3165 + if (cmd->flags & CMD_SIZE_HUGE) 3166 + out_payload = ((struct il_device_cmd_huge *)out_cmd)->cmd.payload; 3167 + else 3168 + out_payload = out_cmd->cmd.payload; 3169 + 3161 3170 if (WARN_ON(out_meta->flags & CMD_MAPPED)) { 3162 3171 spin_unlock_irqrestore(&il->hcmd_lock, flags); 3163 3172 return -ENOSPC; ··· 3181 3170 out_meta->callback = cmd->callback; 3182 3171 3183 3172 out_cmd->hdr.cmd = cmd->id; 3184 - memcpy(&out_cmd->cmd.payload, cmd->data, cmd->len); 3173 + memcpy(out_payload, cmd->data, cmd->len); 3185 3174 3186 3175 /* At this point, the out_cmd now has all of the incoming cmd 3187 3176 * information */ ··· 4973 4962 */ 4974 4963 pci_write_config_byte(pdev, PCI_CFG_RETRY_TIMEOUT, 0x00); 4975 4964 4965 + _il_wr(il, CSR_INT, 0xffffffff); 4966 + _il_wr(il, CSR_FH_INT_STATUS, 0xffffffff); 4976 4967 il_enable_interrupts(il); 4977 4968 4978 4969 if (!(_il_rd(il, CSR_GP_CNTRL) & CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW))
+12
drivers/net/wireless/intel/iwlegacy/common.h
··· 560 560 561 561 #define TFD_MAX_PAYLOAD_SIZE (sizeof(struct il_device_cmd)) 562 562 563 + /** 564 + * struct il_device_cmd_huge 565 + * 566 + * For use when sending huge commands. 567 + */ 568 + struct il_device_cmd_huge { 569 + struct il_cmd_header hdr; /* uCode API */ 570 + union { 571 + u8 payload[IL_MAX_CMD_SIZE - sizeof(struct il_cmd_header)]; 572 + } __packed cmd; 573 + } __packed; 574 + 563 575 struct il_host_cmd { 564 576 const void *data; 565 577 unsigned long reply_page;
+58 -38
drivers/net/wireless/intel/iwlwifi/fw/acpi.c
··· 429 429 return ret; 430 430 } 431 431 432 - static int iwl_acpi_sar_set_profile(union acpi_object *table, 433 - struct iwl_sar_profile *profile, 434 - bool enabled, u8 num_chains, 435 - u8 num_sub_bands) 432 + static int 433 + iwl_acpi_parse_chains_table(union acpi_object *table, 434 + struct iwl_sar_profile_chain *chains, 435 + u8 num_chains, u8 num_sub_bands) 436 436 { 437 - int i, j, idx = 0; 438 - 439 - /* 440 - * The table from ACPI is flat, but we store it in a 441 - * structured array. 442 - */ 443 - for (i = 0; i < BIOS_SAR_MAX_CHAINS_PER_PROFILE; i++) { 444 - for (j = 0; j < BIOS_SAR_MAX_SUB_BANDS_NUM; j++) { 437 + for (u8 chain = 0; chain < num_chains; chain++) { 438 + for (u8 subband = 0; subband < BIOS_SAR_MAX_SUB_BANDS_NUM; 439 + subband++) { 445 440 /* if we don't have the values, use the default */ 446 - if (i >= num_chains || j >= num_sub_bands) { 447 - profile->chains[i].subbands[j] = 0; 441 + if (subband >= num_sub_bands) { 442 + chains[chain].subbands[subband] = 0; 443 + } else if (table->type != ACPI_TYPE_INTEGER || 444 + table->integer.value > U8_MAX) { 445 + return -EINVAL; 448 446 } else { 449 - if (table[idx].type != ACPI_TYPE_INTEGER || 450 - table[idx].integer.value > U8_MAX) 451 - return -EINVAL; 452 - 453 - profile->chains[i].subbands[j] = 454 - table[idx].integer.value; 455 - 456 - idx++; 447 + chains[chain].subbands[subband] = 448 + table->integer.value; 449 + table++; 457 450 } 458 451 } 459 452 } 460 - 461 - /* Only if all values were valid can the profile be enabled */ 462 - profile->enabled = enabled; 463 453 464 454 return 0; 465 455 } ··· 533 543 /* The profile from WRDS is officially profile 1, but goes 534 544 * into sar_profiles[0] (because we don't have a profile 0). 
535 545 */ 536 - ret = iwl_acpi_sar_set_profile(table, &fwrt->sar_profiles[0], 537 - flags & IWL_SAR_ENABLE_MSK, 538 - num_chains, num_sub_bands); 546 + ret = iwl_acpi_parse_chains_table(table, fwrt->sar_profiles[0].chains, 547 + num_chains, num_sub_bands); 548 + if (!ret && flags & IWL_SAR_ENABLE_MSK) 549 + fwrt->sar_profiles[0].enabled = true; 550 + 539 551 out_free: 540 552 kfree(data); 541 553 return ret; ··· 549 557 bool enabled; 550 558 int i, n_profiles, tbl_rev, pos; 551 559 int ret = 0; 552 - u8 num_chains, num_sub_bands; 560 + u8 num_sub_bands; 553 561 554 562 data = iwl_acpi_get_object(fwrt->dev, ACPI_EWRD_METHOD); 555 563 if (IS_ERR(data)) ··· 565 573 goto out_free; 566 574 } 567 575 568 - num_chains = ACPI_SAR_NUM_CHAINS_REV2; 569 576 num_sub_bands = ACPI_SAR_NUM_SUB_BANDS_REV2; 570 577 571 578 goto read_table; ··· 580 589 goto out_free; 581 590 } 582 591 583 - num_chains = ACPI_SAR_NUM_CHAINS_REV1; 584 592 num_sub_bands = ACPI_SAR_NUM_SUB_BANDS_REV1; 585 593 586 594 goto read_table; ··· 595 605 goto out_free; 596 606 } 597 607 598 - num_chains = ACPI_SAR_NUM_CHAINS_REV0; 599 608 num_sub_bands = ACPI_SAR_NUM_SUB_BANDS_REV0; 600 609 601 610 goto read_table; ··· 626 637 /* the tables start at element 3 */ 627 638 pos = 3; 628 639 640 + BUILD_BUG_ON(ACPI_SAR_NUM_CHAINS_REV0 != ACPI_SAR_NUM_CHAINS_REV1); 641 + BUILD_BUG_ON(ACPI_SAR_NUM_CHAINS_REV2 != 2 * ACPI_SAR_NUM_CHAINS_REV0); 642 + 643 + /* parse non-cdb chains for all profiles */ 629 644 for (i = 0; i < n_profiles; i++) { 630 645 union acpi_object *table = &wifi_pkg->package.elements[pos]; 646 + 631 647 /* The EWRD profiles officially go from 2 to 4, but we 632 648 * save them in sar_profiles[1-3] (because we don't 633 649 * have profile 0). So in the array we start from 1. 
634 650 */ 635 - ret = iwl_acpi_sar_set_profile(table, 636 - &fwrt->sar_profiles[i + 1], 637 - enabled, num_chains, 638 - num_sub_bands); 651 + ret = iwl_acpi_parse_chains_table(table, 652 + fwrt->sar_profiles[i + 1].chains, 653 + ACPI_SAR_NUM_CHAINS_REV0, 654 + num_sub_bands); 639 655 if (ret < 0) 640 - break; 656 + goto out_free; 641 657 642 658 /* go to the next table */ 643 - pos += num_chains * num_sub_bands; 659 + pos += ACPI_SAR_NUM_CHAINS_REV0 * num_sub_bands; 644 660 } 661 + 662 + /* non-cdb table revisions */ 663 + if (tbl_rev < 2) 664 + goto set_enabled; 665 + 666 + /* parse cdb chains for all profiles */ 667 + for (i = 0; i < n_profiles; i++) { 668 + struct iwl_sar_profile_chain *chains; 669 + union acpi_object *table; 670 + 671 + table = &wifi_pkg->package.elements[pos]; 672 + chains = &fwrt->sar_profiles[i + 1].chains[ACPI_SAR_NUM_CHAINS_REV0]; 673 + ret = iwl_acpi_parse_chains_table(table, 674 + chains, 675 + ACPI_SAR_NUM_CHAINS_REV0, 676 + num_sub_bands); 677 + if (ret < 0) 678 + goto out_free; 679 + 680 + /* go to the next table */ 681 + pos += ACPI_SAR_NUM_CHAINS_REV0 * num_sub_bands; 682 + } 683 + 684 + set_enabled: 685 + for (i = 0; i < n_profiles; i++) 686 + fwrt->sar_profiles[i + 1].enabled = enabled; 645 687 646 688 out_free: 647 689 kfree(data);
+3 -1
drivers/net/wireless/intel/iwlwifi/fw/init.c
··· 39 39 } 40 40 IWL_EXPORT_SYMBOL(iwl_fw_runtime_init); 41 41 42 + /* Assumes the appropriate lock is held by the caller */ 42 43 void iwl_fw_runtime_suspend(struct iwl_fw_runtime *fwrt) 43 44 { 44 45 iwl_fw_suspend_timestamp(fwrt); 45 - iwl_dbg_tlv_time_point(fwrt, IWL_FW_INI_TIME_POINT_HOST_D3_START, NULL); 46 + iwl_dbg_tlv_time_point_sync(fwrt, IWL_FW_INI_TIME_POINT_HOST_D3_START, 47 + NULL); 46 48 } 47 49 IWL_EXPORT_SYMBOL(iwl_fw_runtime_suspend); 48 50
+22 -12
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
··· 1413 1413 const struct iwl_op_mode_ops *ops = op->ops; 1414 1414 struct dentry *dbgfs_dir = NULL; 1415 1415 struct iwl_op_mode *op_mode = NULL; 1416 + int retry, max_retry = !!iwlwifi_mod_params.fw_restart * IWL_MAX_INIT_RETRY; 1416 1417 1417 1418 /* also protects start/stop from racing against each other */ 1418 1419 lockdep_assert_held(&iwlwifi_opmode_table_mtx); 1419 1420 1420 - #ifdef CONFIG_IWLWIFI_DEBUGFS 1421 - drv->dbgfs_op_mode = debugfs_create_dir(op->name, 1422 - drv->dbgfs_drv); 1423 - dbgfs_dir = drv->dbgfs_op_mode; 1424 - #endif 1425 - 1426 - op_mode = ops->start(drv->trans, drv->trans->cfg, 1427 - &drv->fw, dbgfs_dir); 1428 - if (op_mode) 1429 - return op_mode; 1421 + for (retry = 0; retry <= max_retry; retry++) { 1430 1422 1431 1423 #ifdef CONFIG_IWLWIFI_DEBUGFS 1432 - debugfs_remove_recursive(drv->dbgfs_op_mode); 1433 - drv->dbgfs_op_mode = NULL; 1424 + drv->dbgfs_op_mode = debugfs_create_dir(op->name, 1425 + drv->dbgfs_drv); 1426 + dbgfs_dir = drv->dbgfs_op_mode; 1434 1427 #endif 1428 + 1429 + op_mode = ops->start(drv->trans, drv->trans->cfg, 1430 + &drv->fw, dbgfs_dir); 1431 + 1432 + if (op_mode) 1433 + return op_mode; 1434 + 1435 + if (test_bit(STATUS_TRANS_DEAD, &drv->trans->status)) 1436 + break; 1437 + 1438 + IWL_ERR(drv, "retry init count %d\n", retry); 1439 + 1440 + #ifdef CONFIG_IWLWIFI_DEBUGFS 1441 + debugfs_remove_recursive(drv->dbgfs_op_mode); 1442 + drv->dbgfs_op_mode = NULL; 1443 + #endif 1444 + } 1435 1445 1436 1446 return NULL; 1437 1447 }
+3
drivers/net/wireless/intel/iwlwifi/iwl-drv.h
··· 98 98 #define VISIBLE_IF_IWLWIFI_KUNIT static 99 99 #endif 100 100 101 + /* max retry for init flow */ 102 + #define IWL_MAX_INIT_RETRY 2 103 + 101 104 #define FW_NAME_PRE_BUFSIZE 64 102 105 struct iwl_trans; 103 106 const char *iwl_drv_get_fwname_pre(struct iwl_trans *trans, char *buf);
+2
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
··· 1398 1398 1399 1399 iwl_mvm_pause_tcm(mvm, true); 1400 1400 1401 + mutex_lock(&mvm->mutex); 1401 1402 iwl_fw_runtime_suspend(&mvm->fwrt); 1403 + mutex_unlock(&mvm->mutex); 1402 1404 1403 1405 return __iwl_mvm_suspend(hw, wowlan, false); 1404 1406 }
+4 -6
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 1307 1307 void iwl_mvm_send_recovery_cmd(struct iwl_mvm *mvm, u32 flags) 1308 1308 { 1309 1309 u32 error_log_size = mvm->fw->ucode_capa.error_log_size; 1310 + u32 status = 0; 1310 1311 int ret; 1311 - u32 resp; 1312 1312 1313 1313 struct iwl_fw_error_recovery_cmd recovery_cmd = { 1314 1314 .flags = cpu_to_le32(flags), ··· 1316 1316 }; 1317 1317 struct iwl_host_cmd host_cmd = { 1318 1318 .id = WIDE_ID(SYSTEM_GROUP, FW_ERROR_RECOVERY_CMD), 1319 - .flags = CMD_WANT_SKB, 1320 1319 .data = {&recovery_cmd, }, 1321 1320 .len = {sizeof(recovery_cmd), }, 1322 1321 }; ··· 1335 1336 recovery_cmd.buf_size = cpu_to_le32(error_log_size); 1336 1337 } 1337 1338 1338 - ret = iwl_mvm_send_cmd(mvm, &host_cmd); 1339 + ret = iwl_mvm_send_cmd_status(mvm, &host_cmd, &status); 1339 1340 kfree(mvm->error_recovery_buf); 1340 1341 mvm->error_recovery_buf = NULL; 1341 1342 ··· 1346 1347 1347 1348 /* skb respond is only relevant in ERROR_RECOVERY_UPDATE_DB */ 1348 1349 if (flags & ERROR_RECOVERY_UPDATE_DB) { 1349 - resp = le32_to_cpu(*(__le32 *)host_cmd.resp_pkt->data); 1350 - if (resp) { 1350 + if (status) { 1351 1351 IWL_ERR(mvm, 1352 1352 "Failed to send recovery cmd blob was invalid %d\n", 1353 - resp); 1353 + status); 1354 1354 1355 1355 ieee80211_iterate_interfaces(mvm->hw, 0, 1356 1356 iwl_mvm_disconnect_iterator,
+10 -2
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 1293 1293 { 1294 1294 struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); 1295 1295 int ret; 1296 + int retry, max_retry = 0; 1296 1297 1297 1298 mutex_lock(&mvm->mutex); 1298 1299 1299 1300 /* we are starting the mac not in error flow, and restart is enabled */ 1300 1301 if (!test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status) && 1301 1302 iwlwifi_mod_params.fw_restart) { 1303 + max_retry = IWL_MAX_INIT_RETRY; 1302 1304 /* 1303 1305 * This will prevent mac80211 recovery flows to trigger during 1304 1306 * init failures ··· 1308 1306 set_bit(IWL_MVM_STATUS_STARTING, &mvm->status); 1309 1307 } 1310 1308 1311 - ret = __iwl_mvm_mac_start(mvm); 1309 + for (retry = 0; retry <= max_retry; retry++) { 1310 + ret = __iwl_mvm_mac_start(mvm); 1311 + if (!ret) 1312 + break; 1313 + 1314 + IWL_ERR(mvm, "mac start retry %d\n", retry); 1315 + } 1312 1316 clear_bit(IWL_MVM_STATUS_STARTING, &mvm->status); 1313 1317 1314 1318 mutex_unlock(&mvm->mutex); ··· 2000 1992 mvm->p2p_device_vif = NULL; 2001 1993 } 2002 1994 2003 - iwl_mvm_unset_link_mapping(mvm, vif, &vif->bss_conf); 2004 1995 iwl_mvm_mac_ctxt_remove(mvm, vif); 2005 1996 2006 1997 RCU_INIT_POINTER(mvm->vif_id_to_mac[mvmvif->id], NULL); ··· 2008 2001 mvm->monitor_on = false; 2009 2002 2010 2003 out: 2004 + iwl_mvm_unset_link_mapping(mvm, vif, &vif->bss_conf); 2011 2005 if (vif->type == NL80211_IFTYPE_AP || 2012 2006 vif->type == NL80211_IFTYPE_ADHOC) { 2013 2007 iwl_mvm_dealloc_int_sta(mvm, &mvmvif->deflink.mcast_sta);
+18 -6
drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
··· 41 41 /* reset deflink MLO parameters */ 42 42 mvmvif->deflink.fw_link_id = IWL_MVM_FW_LINK_ID_INVALID; 43 43 mvmvif->deflink.active = 0; 44 - /* the first link always points to the default one */ 45 - mvmvif->link[0] = &mvmvif->deflink; 46 44 47 45 ret = iwl_mvm_mld_mac_ctxt_add(mvm, vif); 48 46 if (ret) ··· 58 60 IEEE80211_VIF_SUPPORTS_CQM_RSSI; 59 61 } 60 62 61 - ret = iwl_mvm_add_link(mvm, vif, &vif->bss_conf); 62 - if (ret) 63 - goto out_free_bf; 63 + /* We want link[0] to point to the default link, unless we have MLO and 64 + * in this case this will be modified later by .change_vif_links() 65 + * If we are in the restart flow with an MLD connection, we will wait 66 + * to .change_vif_links() to setup the links. 67 + */ 68 + if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) || 69 + !ieee80211_vif_is_mld(vif)) { 70 + mvmvif->link[0] = &mvmvif->deflink; 71 + 72 + ret = iwl_mvm_add_link(mvm, vif, &vif->bss_conf); 73 + if (ret) 74 + goto out_free_bf; 75 + } 64 76 65 77 /* Save a pointer to p2p device vif, so it can later be used to 66 78 * update the p2p device MAC when a GO is started/stopped ··· 1194 1186 1195 1187 mutex_lock(&mvm->mutex); 1196 1188 1197 - if (old_links == 0) { 1189 + /* If we're in RESTART flow, the default link wasn't added in 1190 + * drv_add_interface(), and link[0] doesn't point to it. 1191 + */ 1192 + if (old_links == 0 && !test_bit(IWL_MVM_STATUS_IN_HW_RESTART, 1193 + &mvm->status)) { 1198 1194 err = iwl_mvm_disable_link(mvm, vif, &vif->bss_conf); 1199 1195 if (err) 1200 1196 goto out_err;
+3 -3
drivers/net/wireless/intel/iwlwifi/mvm/scan.c
··· 1774 1774 &cp->channel_config[ch_cnt]; 1775 1775 1776 1776 u32 s_ssid_bitmap = 0, bssid_bitmap = 0, flags = 0; 1777 - u8 j, k, n_s_ssids = 0, n_bssids = 0; 1777 + u8 k, n_s_ssids = 0, n_bssids = 0; 1778 1778 u8 max_s_ssids, max_bssids; 1779 1779 bool force_passive = false, found = false, allow_passive = true, 1780 1780 unsolicited_probe_on_chan = false, psc_no_listen = false; ··· 1799 1799 cfg->v5.iter_count = 1; 1800 1800 cfg->v5.iter_interval = 0; 1801 1801 1802 - for (j = 0; j < params->n_6ghz_params; j++) { 1802 + for (u32 j = 0; j < params->n_6ghz_params; j++) { 1803 1803 s8 tmp_psd_20; 1804 1804 1805 1805 if (!(scan_6ghz_params[j].channel_idx == i)) ··· 1873 1873 * SSID. 1874 1874 * TODO: improve this logic 1875 1875 */ 1876 - for (j = 0; j < params->n_6ghz_params; j++) { 1876 + for (u32 j = 0; j < params->n_6ghz_params; j++) { 1877 1877 if (!(scan_6ghz_params[j].channel_idx == i)) 1878 1878 continue; 1879 1879
+2 -2
drivers/net/wireless/marvell/libertas/radiotap.h
··· 2 2 #include <net/ieee80211_radiotap.h> 3 3 4 4 struct tx_radiotap_hdr { 5 - struct ieee80211_radiotap_header hdr; 5 + struct ieee80211_radiotap_header_fixed hdr; 6 6 u8 rate; 7 7 u8 txpower; 8 8 u8 rts_retries; ··· 31 31 #define IEEE80211_FC_DSTODS 0x0300 32 32 33 33 struct rx_radiotap_hdr { 34 - struct ieee80211_radiotap_header hdr; 34 + struct ieee80211_radiotap_header_fixed hdr; 35 35 u8 flags; 36 36 u8 rate; 37 37 u8 antsignal;
+5 -2
drivers/net/wireless/mediatek/mt76/mcu.c
··· 84 84 mutex_lock(&dev->mcu.mutex); 85 85 86 86 if (dev->mcu_ops->mcu_skb_prepare_msg) { 87 + orig_skb = skb; 87 88 ret = dev->mcu_ops->mcu_skb_prepare_msg(dev, skb, cmd, &seq); 88 89 if (ret < 0) 89 90 goto out; 90 91 } 91 92 92 93 retry: 93 - orig_skb = skb_get(skb); 94 + /* orig skb might be needed for retry, mcu_skb_send_msg consumes it */ 95 + if (orig_skb) 96 + skb_get(orig_skb); 94 97 ret = dev->mcu_ops->mcu_skb_send_msg(dev, skb, cmd, &seq); 95 98 if (ret < 0) 96 99 goto out; ··· 108 105 do { 109 106 skb = mt76_mcu_get_response(dev, expires); 110 107 if (!skb && !test_bit(MT76_MCU_RESET, &dev->phy.state) && 111 - retry++ < dev->mcu_ops->max_retry) { 108 + orig_skb && retry++ < dev->mcu_ops->max_retry) { 112 109 dev_err(dev->dev, "Retry message %08x (seq %d)\n", 113 110 cmd, seq); 114 111 skb = orig_skb;
+2 -2
drivers/net/wireless/microchip/wilc1000/mon.c
··· 7 7 #include "cfg80211.h" 8 8 9 9 struct wilc_wfi_radiotap_hdr { 10 - struct ieee80211_radiotap_header hdr; 10 + struct ieee80211_radiotap_header_fixed hdr; 11 11 u8 rate; 12 12 } __packed; 13 13 14 14 struct wilc_wfi_radiotap_cb_hdr { 15 - struct ieee80211_radiotap_header hdr; 15 + struct ieee80211_radiotap_header_fixed hdr; 16 16 u8 rate; 17 17 u8 dump; 18 18 u16 tx_flags;
-1
drivers/net/wireless/realtek/rtlwifi/rtl8192du/sw.c
··· 352 352 {RTL_USB_DEVICE(USB_VENDOR_ID_REALTEK, 0x8194, rtl92du_hal_cfg)}, 353 353 {RTL_USB_DEVICE(USB_VENDOR_ID_REALTEK, 0x8111, rtl92du_hal_cfg)}, 354 354 {RTL_USB_DEVICE(USB_VENDOR_ID_REALTEK, 0x0193, rtl92du_hal_cfg)}, 355 - {RTL_USB_DEVICE(USB_VENDOR_ID_REALTEK, 0x8171, rtl92du_hal_cfg)}, 356 355 {RTL_USB_DEVICE(USB_VENDOR_ID_REALTEK, 0xe194, rtl92du_hal_cfg)}, 357 356 {RTL_USB_DEVICE(0x2019, 0xab2c, rtl92du_hal_cfg)}, 358 357 {RTL_USB_DEVICE(0x2019, 0xab2d, rtl92du_hal_cfg)},
-1
drivers/net/wireless/realtek/rtw88/usb.c
··· 772 772 u8 size, timeout; 773 773 u16 val16; 774 774 775 - rtw_write32_set(rtwdev, REG_RXDMA_AGG_PG_TH, BIT_EN_PRE_CALC); 776 775 rtw_write8_set(rtwdev, REG_TXDMA_PQ_MAP, BIT_RXDMA_AGG_EN); 777 776 rtw_write8_clr(rtwdev, REG_RXDMA_AGG_PG_TH + 3, BIT(7)); 778 777
+2
drivers/net/wireless/realtek/rtw89/coex.c
··· 6506 6506 6507 6507 /* todo DBCC related event */ 6508 6508 rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC] wl_info phy_now=%d\n", phy_now); 6509 + rtw89_debug(rtwdev, RTW89_DBG_BTC, 6510 + "[BTC] rlink cnt_2g=%d cnt_5g=%d\n", cnt_2g, cnt_5g); 6509 6511 6510 6512 if (wl_rinfo->dbcc_en != rtwdev->dbcc_en) { 6511 6513 wl_rinfo->dbcc_chg = 1;
+41 -7
drivers/net/wireless/realtek/rtw89/pci.c
··· 3050 3050 pci_disable_device(pdev); 3051 3051 } 3052 3052 3053 - static void rtw89_pci_cfg_dac(struct rtw89_dev *rtwdev) 3053 + static bool rtw89_pci_chip_is_manual_dac(struct rtw89_dev *rtwdev) 3054 3054 { 3055 - struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv; 3056 3055 const struct rtw89_chip_info *chip = rtwdev->chip; 3057 - 3058 - if (!rtwpci->enable_dac) 3059 - return; 3060 3056 3061 3057 switch (chip->chip_id) { 3062 3058 case RTL8852A: 3063 3059 case RTL8852B: 3064 3060 case RTL8851B: 3065 3061 case RTL8852BT: 3066 - break; 3062 + return true; 3067 3063 default: 3068 - return; 3064 + return false; 3069 3065 } 3066 + } 3067 + 3068 + static bool rtw89_pci_is_dac_compatible_bridge(struct rtw89_dev *rtwdev) 3069 + { 3070 + struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv; 3071 + struct pci_dev *bridge = pci_upstream_bridge(rtwpci->pdev); 3072 + 3073 + if (!rtw89_pci_chip_is_manual_dac(rtwdev)) 3074 + return true; 3075 + 3076 + if (!bridge) 3077 + return false; 3078 + 3079 + switch (bridge->vendor) { 3080 + case PCI_VENDOR_ID_INTEL: 3081 + return true; 3082 + case PCI_VENDOR_ID_ASMEDIA: 3083 + if (bridge->device == 0x2806) 3084 + return true; 3085 + break; 3086 + } 3087 + 3088 + return false; 3089 + } 3090 + 3091 + static void rtw89_pci_cfg_dac(struct rtw89_dev *rtwdev) 3092 + { 3093 + struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv; 3094 + 3095 + if (!rtwpci->enable_dac) 3096 + return; 3097 + 3098 + if (!rtw89_pci_chip_is_manual_dac(rtwdev)) 3099 + return; 3070 3100 3071 3101 rtw89_pci_config_byte_set(rtwdev, RTW89_PCIE_L1_CTRL, RTW89_PCIE_BIT_EN_64BITS); 3072 3102 } ··· 3115 3085 goto err; 3116 3086 } 3117 3087 3088 + if (!rtw89_pci_is_dac_compatible_bridge(rtwdev)) 3089 + goto no_dac; 3090 + 3118 3091 ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(36)); 3119 3092 if (!ret) { 3120 3093 rtwpci->enable_dac = true; ··· 3130 3097 goto err_release_regions; 3131 3098 } 3132 3099 } 3100 + no_dac: 3133 3101 3134 3102 
resource_len = pci_resource_len(pdev, bar_id); 3135 3103 rtwpci->mmap = pci_iomap(pdev, bar_id, resource_len);
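The rtw89 hunk above splits the 36-bit DMA (DAC) decision into two predicates: a per-chip check and an upstream-bridge allowlist, falling back to the plain 32-bit mask when the bridge is not known-good. A minimal userspace sketch of the allowlist shape, assuming illustrative helper names (only the Intel/ASMedia vendor IDs and the 0x2806 device ID come from the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VENDOR_INTEL   0x8086
#define VENDOR_ASMEDIA 0x1b21

/* Sketch of the bridge allowlist from the patch: Intel upstream bridges
 * are accepted outright, ASMedia only for the one known-good device ID,
 * and everything else falls back to the 32-bit DMA mask. */
static bool bridge_allows_dac(uint16_t vendor, uint16_t device)
{
	switch (vendor) {
	case VENDOR_INTEL:
		return true;
	case VENDOR_ASMEDIA:
		return device == 0x2806;
	}
	return false;
}
```

The driver only consults this predicate for the chip generations that need manual DAC configuration; newer chips skip the bridge check entirely.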
+2 -2
drivers/net/wireless/virtual/mac80211_hwsim.c
··· 763 763 }; 764 764 765 765 struct hwsim_radiotap_hdr { 766 - struct ieee80211_radiotap_header hdr; 766 + struct ieee80211_radiotap_header_fixed hdr; 767 767 __le64 rt_tsft; 768 768 u8 rt_flags; 769 769 u8 rt_rate; ··· 772 772 } __packed; 773 773 774 774 struct hwsim_radiotap_ack_hdr { 775 - struct ieee80211_radiotap_header hdr; 775 + struct ieee80211_radiotap_header_fixed hdr; 776 776 u8 rt_flags; 777 777 u8 pad; 778 778 __le16 rt_channel;
+2
drivers/pci/probe.c
··· 3105 3105 list_for_each_entry(child, &bus->children, node) 3106 3106 pcie_bus_configure_settings(child); 3107 3107 3108 + pci_lock_rescan_remove(); 3108 3109 pci_bus_add_devices(bus); 3110 + pci_unlock_rescan_remove(); 3109 3111 return 0; 3110 3112 } 3111 3113 EXPORT_SYMBOL_GPL(pci_host_probe);
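The probe.c change wraps `pci_bus_add_devices()` in `pci_lock_rescan_remove()`/`pci_unlock_rescan_remove()` so that adding devices during host probe is serialized against concurrent rescan or hot-remove. A hedged userspace sketch of the same serialize-the-topology-mutation pattern, with a pthread mutex standing in for the rescan/remove lock (all names illustrative):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t rescan_remove_lock = PTHREAD_MUTEX_INITIALIZER;
static int devices_added;

/* Any mutation of the (fake) bus topology happens only while the
 * rescan/remove lock is held, mirroring the probe.c fix. */
static int bus_add_devices_locked(int count)
{
	int total;

	pthread_mutex_lock(&rescan_remove_lock);
	devices_added += count;
	total = devices_added;
	pthread_mutex_unlock(&rescan_remove_lock);
	return total;
}
```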
+50 -5
drivers/pci/pwrctl/pci-pwrctl-pwrseq.c
··· 6 6 #include <linux/device.h> 7 7 #include <linux/mod_devicetable.h> 8 8 #include <linux/module.h> 9 - #include <linux/of.h> 10 9 #include <linux/pci-pwrctl.h> 11 10 #include <linux/platform_device.h> 11 + #include <linux/property.h> 12 12 #include <linux/pwrseq/consumer.h> 13 13 #include <linux/slab.h> 14 14 #include <linux/types.h> ··· 16 16 struct pci_pwrctl_pwrseq_data { 17 17 struct pci_pwrctl ctx; 18 18 struct pwrseq_desc *pwrseq; 19 + }; 20 + 21 + struct pci_pwrctl_pwrseq_pdata { 22 + const char *target; 23 + /* 24 + * Called before doing anything else to perform device-specific 25 + * verification between requesting the power sequencing handle. 26 + */ 27 + int (*validate_device)(struct device *dev); 28 + }; 29 + 30 + static int pci_pwrctl_pwrseq_qcm_wcn_validate_device(struct device *dev) 31 + { 32 + /* 33 + * Old device trees for some platforms already define wifi nodes for 34 + * the WCN family of chips since before power sequencing was added 35 + * upstream. 36 + * 37 + * These nodes don't consume the regulator outputs from the PMU, and 38 + * if we allow this driver to bind to one of such "incomplete" nodes, 39 + * we'll see a kernel log error about the indefinite probe deferral. 40 + * 41 + * Check the existence of the regulator supply that exists on all 42 + * WCN models before moving forward. 
43 + */ 44 + if (!device_property_present(dev, "vddaon-supply")) 45 + return -ENODEV; 46 + 47 + return 0; 48 + } 49 + 50 + static const struct pci_pwrctl_pwrseq_pdata pci_pwrctl_pwrseq_qcom_wcn_pdata = { 51 + .target = "wlan", 52 + .validate_device = pci_pwrctl_pwrseq_qcm_wcn_validate_device, 19 53 }; 20 54 21 55 static void devm_pci_pwrctl_pwrseq_power_off(void *data) ··· 61 27 62 28 static int pci_pwrctl_pwrseq_probe(struct platform_device *pdev) 63 29 { 30 + const struct pci_pwrctl_pwrseq_pdata *pdata; 64 31 struct pci_pwrctl_pwrseq_data *data; 65 32 struct device *dev = &pdev->dev; 66 33 int ret; 34 + 35 + pdata = device_get_match_data(dev); 36 + if (!pdata || !pdata->target) 37 + return -EINVAL; 38 + 39 + if (pdata->validate_device) { 40 + ret = pdata->validate_device(dev); 41 + if (ret) 42 + return ret; 43 + } 67 44 68 45 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 69 46 if (!data) 70 47 return -ENOMEM; 71 48 72 - data->pwrseq = devm_pwrseq_get(dev, of_device_get_match_data(dev)); 49 + data->pwrseq = devm_pwrseq_get(dev, pdata->target); 73 50 if (IS_ERR(data->pwrseq)) 74 51 return dev_err_probe(dev, PTR_ERR(data->pwrseq), 75 52 "Failed to get the power sequencer\n"); ··· 109 64 { 110 65 /* ATH11K in QCA6390 package. */ 111 66 .compatible = "pci17cb,1101", 112 - .data = "wlan", 67 + .data = &pci_pwrctl_pwrseq_qcom_wcn_pdata, 113 68 }, 114 69 { 115 70 /* ATH11K in WCN6855 package. */ 116 71 .compatible = "pci17cb,1103", 117 - .data = "wlan", 72 + .data = &pci_pwrctl_pwrseq_qcom_wcn_pdata, 118 73 }, 119 74 { 120 75 /* ATH12K in WCN7850 package. */ 121 76 .compatible = "pci17cb,1107", 122 - .data = "wlan", 77 + .data = &pci_pwrctl_pwrseq_qcom_wcn_pdata, 123 78 }, 124 79 { } 125 80 };
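The pwrctl-pwrseq hunk replaces a bare match-data string with a per-compatible pdata struct carrying an optional `validate_device()` hook, so "incomplete" firmware nodes can be rejected with `-ENODEV` before the driver requests the sequencer and defers forever. A self-contained sketch of that pattern, assuming stand-in types (a `bool` plays the role of the `vddaon-supply` property check):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for the per-compatible platform data: a target
 * name for the power sequencer plus an optional validation hook. */
struct pwrseq_pdata {
	const char *target;
	int (*validate_device)(const void *dev);
};

/* Mimics the vddaon-supply presence check from the driver; the "device"
 * here is just a bool flagging whether the supply property exists. */
static int wcn_validate(const void *dev)
{
	const bool *has_vddaon = dev;

	return *has_vddaon ? 0 : -ENODEV;
}

static const struct pwrseq_pdata wcn_pdata = {
	.target = "wlan",
	.validate_device = wcn_validate,
};

static bool dev_with_supply = true;
static bool dev_without_supply = false;

static int probe(const struct pwrseq_pdata *pdata, const void *dev)
{
	if (!pdata || !pdata->target)
		return -EINVAL;

	if (pdata->validate_device) {
		int ret = pdata->validate_device(dev);

		if (ret)
			return ret;
	}
	return 0; /* would go on to request pdata->target */
}
```

Keeping the hook optional means compatibles without device-specific quirks can supply pdata with only a target name.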
+10
drivers/platform/x86/asus-wmi.c
··· 3908 3908 if (!asus->throttle_thermal_policy_dev) 3909 3909 return 0; 3910 3910 3911 + /* 3912 + * We need to set the default thermal profile during probe or otherwise 3913 + * the system will often remain in silent mode, causing low performance. 3914 + */ 3915 + err = throttle_thermal_policy_set_default(asus); 3916 + if (err < 0) { 3917 + pr_warn("Failed to set default thermal profile\n"); 3918 + return err; 3919 + } 3920 + 3911 3921 dev_info(dev, "Using throttle_thermal_policy for platform_profile support\n"); 3912 3922 3913 3923 asus->platform_profile_handler.profile_get = asus_wmi_platform_profile_get;
+9
drivers/platform/x86/dell/dell-wmi-base.c
··· 264 264 /*Speaker Mute*/ 265 265 { KE_KEY, 0x109, { KEY_MUTE} }, 266 266 267 + /* S2Idle screen off */ 268 + { KE_IGNORE, 0x120, { KEY_RESERVED }}, 269 + 270 + /* Leaving S4 or S2Idle suspend */ 271 + { KE_IGNORE, 0x130, { KEY_RESERVED }}, 272 + 273 + /* Entering S2Idle suspend */ 274 + { KE_IGNORE, 0x140, { KEY_RESERVED }}, 275 + 267 276 /* Mic mute */ 268 277 { KE_KEY, 0x150, { KEY_MICMUTE } }, 269 278
-2
drivers/platform/x86/intel/pmc/adl.c
··· 295 295 .ppfear_buckets = CNP_PPFEAR_NUM_ENTRIES, 296 296 .pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET, 297 297 .pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT, 298 - .acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET, 299 - .acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE, 300 298 .ltr_ignore_max = ADL_NUM_IP_IGN_ALLOWED, 301 299 .lpm_num_modes = ADL_LPM_NUM_MODES, 302 300 .lpm_num_maps = ADL_LPM_NUM_MAPS,
-2
drivers/platform/x86/intel/pmc/cnp.c
··· 200 200 .ppfear_buckets = CNP_PPFEAR_NUM_ENTRIES, 201 201 .pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET, 202 202 .pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT, 203 - .acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET, 204 - .acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE, 205 203 .ltr_ignore_max = CNP_NUM_IP_IGN_ALLOWED, 206 204 .etr3_offset = ETR3_OFFSET, 207 205 };
-46
drivers/platform/x86/intel/pmc/core.c
··· 11 11 12 12 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 13 13 14 - #include <linux/acpi_pmtmr.h> 15 14 #include <linux/bitfield.h> 16 15 #include <linux/debugfs.h> 17 16 #include <linux/delay.h> ··· 1257 1258 return val == 1; 1258 1259 } 1259 1260 1260 - /* 1261 - * Enable or disable ACPI PM Timer 1262 - * 1263 - * This function is intended to be a callback for ACPI PM suspend/resume event. 1264 - * The ACPI PM Timer is enabled on resume only if it was enabled during suspend. 1265 - */ 1266 - static void pmc_core_acpi_pm_timer_suspend_resume(void *data, bool suspend) 1267 - { 1268 - struct pmc_dev *pmcdev = data; 1269 - struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN]; 1270 - const struct pmc_reg_map *map = pmc->map; 1271 - bool enabled; 1272 - u32 reg; 1273 - 1274 - if (!map->acpi_pm_tmr_ctl_offset) 1275 - return; 1276 - 1277 - guard(mutex)(&pmcdev->lock); 1278 - 1279 - if (!suspend && !pmcdev->enable_acpi_pm_timer_on_resume) 1280 - return; 1281 - 1282 - reg = pmc_core_reg_read(pmc, map->acpi_pm_tmr_ctl_offset); 1283 - enabled = !(reg & map->acpi_pm_tmr_disable_bit); 1284 - if (suspend) 1285 - reg |= map->acpi_pm_tmr_disable_bit; 1286 - else 1287 - reg &= ~map->acpi_pm_tmr_disable_bit; 1288 - pmc_core_reg_write(pmc, map->acpi_pm_tmr_ctl_offset, reg); 1289 - 1290 - pmcdev->enable_acpi_pm_timer_on_resume = suspend && enabled; 1291 - } 1292 - 1293 1261 static void pmc_core_dbgfs_unregister(struct pmc_dev *pmcdev) 1294 1262 { 1295 1263 debugfs_remove_recursive(pmcdev->dbgfs_dir); ··· 1452 1486 struct pmc_dev *pmcdev; 1453 1487 const struct x86_cpu_id *cpu_id; 1454 1488 int (*core_init)(struct pmc_dev *pmcdev); 1455 - const struct pmc_reg_map *map; 1456 1489 struct pmc *primary_pmc; 1457 1490 int ret; 1458 1491 ··· 1510 1545 pm_report_max_hw_sleep(FIELD_MAX(SLP_S0_RES_COUNTER_MASK) * 1511 1546 pmc_core_adjust_slp_s0_step(primary_pmc, 1)); 1512 1547 1513 - map = primary_pmc->map; 1514 - if (map->acpi_pm_tmr_ctl_offset) 1515 - 
acpi_pmtmr_register_suspend_resume_callback(pmc_core_acpi_pm_timer_suspend_resume, 1516 - pmcdev); 1517 - 1518 1548 device_initialized = true; 1519 1549 dev_info(&pdev->dev, " initialized\n"); 1520 1550 ··· 1519 1559 static void pmc_core_remove(struct platform_device *pdev) 1520 1560 { 1521 1561 struct pmc_dev *pmcdev = platform_get_drvdata(pdev); 1522 - const struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN]; 1523 - const struct pmc_reg_map *map = pmc->map; 1524 - 1525 - if (map->acpi_pm_tmr_ctl_offset) 1526 - acpi_pmtmr_unregister_suspend_resume_callback(); 1527 - 1528 1562 pmc_core_dbgfs_unregister(pmcdev); 1529 1563 pmc_core_clean_structure(pdev); 1530 1564 }
-8
drivers/platform/x86/intel/pmc/core.h
··· 68 68 #define SPT_PMC_LTR_SCC 0x3A0 69 69 #define SPT_PMC_LTR_ISH 0x3A4 70 70 71 - #define SPT_PMC_ACPI_PM_TMR_CTL_OFFSET 0x18FC 72 - 73 71 /* Sunrise Point: PGD PFET Enable Ack Status Registers */ 74 72 enum ppfear_regs { 75 73 SPT_PMC_XRAM_PPFEAR0A = 0x590, ··· 147 149 148 150 #define SPT_PMC_VRIC1_SLPS0LVEN BIT(13) 149 151 #define SPT_PMC_VRIC1_XTALSDQDIS BIT(22) 150 - 151 - #define SPT_PMC_BIT_ACPI_PM_TMR_DISABLE BIT(1) 152 152 153 153 /* Cannonlake Power Management Controller register offsets */ 154 154 #define CNP_PMC_SLPS0_DBG_OFFSET 0x10B4 ··· 351 355 const u8 *lpm_reg_index; 352 356 const u32 pson_residency_offset; 353 357 const u32 pson_residency_counter_step; 354 - const u32 acpi_pm_tmr_ctl_offset; 355 - const u32 acpi_pm_tmr_disable_bit; 356 358 }; 357 359 358 360 /** ··· 426 432 u32 die_c6_offset; 427 433 struct telem_endpoint *punit_ep; 428 434 struct pmc_info *regmap_list; 429 - 430 - bool enable_acpi_pm_timer_on_resume; 431 435 }; 432 436 433 437 enum pmc_index {
+3 -1
drivers/platform/x86/intel/pmc/core_ssram.c
··· 29 29 #define LPM_REG_COUNT 28 30 30 #define LPM_MODE_OFFSET 1 31 31 32 - DEFINE_FREE(pmc_core_iounmap, void __iomem *, iounmap(_T)); 32 + DEFINE_FREE(pmc_core_iounmap, void __iomem *, if (_T) iounmap(_T)) 33 33 34 34 static u32 pmc_core_find_guid(struct pmc_info *list, const struct pmc_reg_map *map) 35 35 { ··· 262 262 263 263 ssram_base = ssram_pcidev->resource[0].start; 264 264 tmp_ssram = ioremap(ssram_base, SSRAM_HDR_SIZE); 265 + if (!tmp_ssram) 266 + return -ENOMEM; 265 267 266 268 if (pmc_idx != PMC_IDX_MAIN) { 267 269 /*
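The core_ssram.c fix makes the scope-based cleanup NULL-safe (`if (_T) iounmap(_T)`) and checks the `ioremap()` result, because an error return taken before the pointer is assigned must not unmap garbage. A userspace sketch of the same pattern using the GCC/clang `cleanup` attribute that `DEFINE_FREE()` is built on, with `free()` standing in for `iounmap()` (the counter is only there so the behavior is observable):

```c
#include <assert.h>
#include <stdlib.h>

static int frees;

/* NULL-safe release, mirroring the "if (_T)" guard added by the patch. */
static void free_tracked(char **p)
{
	if (*p) {
		free(*p);
		frees++;
	}
}

#define CLEANUP __attribute__((cleanup(free_tracked)))

static int use_buffer(int fail_early)
{
	CLEANUP char *buf = NULL;

	if (fail_early)
		return -1;	/* buf still NULL: cleanup must not free it */

	buf = malloc(16);
	if (!buf)
		return -1;
	return 0;		/* cleanup frees buf exactly once on exit */
}
```

Without the NULL guard, the early-return path would hand an unset (or failed-map) pointer to the release routine, which is exactly the crash class the kernel patch closes.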
-2
drivers/platform/x86/intel/pmc/icl.c
··· 46 46 .ppfear_buckets = ICL_PPFEAR_NUM_ENTRIES, 47 47 .pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET, 48 48 .pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT, 49 - .acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET, 50 - .acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE, 51 49 .ltr_ignore_max = ICL_NUM_IP_IGN_ALLOWED, 52 50 .etr3_offset = ETR3_OFFSET, 53 51 };
-2
drivers/platform/x86/intel/pmc/mtl.c
··· 462 462 .ppfear_buckets = MTL_SOCM_PPFEAR_NUM_ENTRIES, 463 463 .pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET, 464 464 .pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT, 465 - .acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET, 466 - .acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE, 467 465 .lpm_num_maps = ADL_LPM_NUM_MAPS, 468 466 .ltr_ignore_max = MTL_SOCM_NUM_IP_IGN_ALLOWED, 469 467 .lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
-2
drivers/platform/x86/intel/pmc/tgl.c
··· 197 197 .ppfear_buckets = ICL_PPFEAR_NUM_ENTRIES, 198 198 .pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET, 199 199 .pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT, 200 - .acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET, 201 - .acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE, 202 200 .ltr_ignore_max = TGL_NUM_IP_IGN_ALLOWED, 203 201 .lpm_num_maps = TGL_LPM_NUM_MAPS, 204 202 .lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
+1 -1
drivers/powercap/dtpm_devfreq.c
··· 178 178 ret = dev_pm_qos_add_request(dev, &dtpm_devfreq->qos_req, 179 179 DEV_PM_QOS_MAX_FREQUENCY, 180 180 PM_QOS_MAX_FREQUENCY_DEFAULT_VALUE); 181 - if (ret) { 181 + if (ret < 0) { 182 182 pr_err("Failed to add QoS request: %d\n", ret); 183 183 goto out_dtpm_unregister; 184 184 }
+4 -6
drivers/scsi/scsi_debug.c
··· 3651 3651 enum dma_data_direction dir; 3652 3652 struct scsi_data_buffer *sdb = &scp->sdb; 3653 3653 u8 *fsp; 3654 - int i; 3654 + int i, total = 0; 3655 3655 3656 3656 /* 3657 3657 * Even though reads are inherently atomic (in this driver), we expect ··· 3688 3688 fsp + (block * sdebug_sector_size), 3689 3689 sdebug_sector_size, sg_skip, do_write); 3690 3690 sdeb_data_sector_unlock(sip, do_write); 3691 - if (ret != sdebug_sector_size) { 3692 - ret += (i * sdebug_sector_size); 3691 + total += ret; 3692 + if (ret != sdebug_sector_size) 3693 3693 break; 3694 - } 3695 3694 sg_skip += sdebug_sector_size; 3696 3695 if (++block >= sdebug_store_sectors) 3697 3696 block = 0; 3698 3697 } 3699 - ret = num * sdebug_sector_size; 3700 3698 sdeb_data_unlock(sip, atomic); 3701 3699 3702 - return ret; 3700 + return total; 3703 3701 } 3704 3702 3705 3703 /* Returns number of bytes copied or -1 if error. */
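The scsi_debug hunk replaces the "assume every sector transferred" result (`ret = num * sdebug_sector_size`) with a running total that stops at the first short copy. A small sketch of the corrected accounting, assuming a stand-in `copy_sector()` in place of `sg_copy_buffer()`:

```c
#include <assert.h>

#define SECTOR_SIZE 512

/* Corrected accounting from the patch: accumulate each sector's actual
 * byte count and return the total, breaking out on a short copy instead
 * of reporting the full num * SECTOR_SIZE. */
static int copy_sectors(int num, int short_at,
			int (*copy_sector)(int idx, int short_at))
{
	int i, total = 0;

	for (i = 0; i < num; i++) {
		int ret = copy_sector(i, short_at);

		total += ret;
		if (ret != SECTOR_SIZE)
			break;
	}
	return total;
}

/* Fake copier: full sectors until short_at, then a 100-byte short copy. */
static int fake_copy(int idx, int short_at)
{
	return idx == short_at ? 100 : SECTOR_SIZE;
}
```

The key difference from the pre-patch code is that a short copy's partial bytes are still counted, and completed sectors before it are not over- or under-reported.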
+6 -13
drivers/soundwire/intel_ace2x.c
··· 376 376 static int intel_prepare(struct snd_pcm_substream *substream, 377 377 struct snd_soc_dai *dai) 378 378 { 379 + struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream); 379 380 struct sdw_cdns *cdns = snd_soc_dai_get_drvdata(dai); 380 381 struct sdw_intel *sdw = cdns_to_intel(cdns); 381 382 struct sdw_cdns_dai_runtime *dai_runtime; 383 + struct snd_pcm_hw_params *hw_params; 382 384 int ch, dir; 383 - int ret = 0; 384 385 385 386 dai_runtime = cdns->dai_runtime_array[dai->id]; 386 387 if (!dai_runtime) { ··· 390 389 return -EIO; 391 390 } 392 391 392 + hw_params = &rtd->dpcm[substream->stream].hw_params; 393 393 if (dai_runtime->suspended) { 394 - struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream); 395 - struct snd_pcm_hw_params *hw_params; 396 - 397 - hw_params = &rtd->dpcm[substream->stream].hw_params; 398 - 399 394 dai_runtime->suspended = false; 400 395 401 396 /* ··· 412 415 /* the SHIM will be configured in the callback functions */ 413 416 414 417 sdw_cdns_config_stream(cdns, ch, dir, dai_runtime->pdi); 415 - 416 - /* Inform DSP about PDI stream number */ 417 - ret = intel_params_stream(sdw, substream, dai, 418 - hw_params, 419 - sdw->instance, 420 - dai_runtime->pdi->intel_alh_id); 421 418 } 422 419 423 - return ret; 420 + /* Inform DSP about PDI stream number */ 421 + return intel_params_stream(sdw, substream, dai, hw_params, sdw->instance, 422 + dai_runtime->pdi->intel_alh_id); 424 423 } 425 424 426 425 static int
+5 -1
drivers/spi/spi-fsl-dspi.c
··· 1003 1003 u32 cs_sck_delay = 0, sck_cs_delay = 0; 1004 1004 struct fsl_dspi_platform_data *pdata; 1005 1005 unsigned char pasc = 0, asc = 0; 1006 + struct gpio_desc *gpio_cs; 1006 1007 struct chip_data *chip; 1007 1008 unsigned long clkrate; 1008 1009 bool cs = true; ··· 1078 1077 chip->ctar_val |= SPI_CTAR_LSBFE; 1079 1078 } 1080 1079 1081 - gpiod_direction_output(spi_get_csgpiod(spi, 0), false); 1080 + gpio_cs = spi_get_csgpiod(spi, 0); 1081 + if (gpio_cs) 1082 + gpiod_direction_output(gpio_cs, false); 1083 + 1082 1084 dspi_deassert_cs(spi, &cs); 1083 1085 1084 1086 spi_set_ctldata(spi, chip);
+5 -3
drivers/spi/spi-geni-qcom.c
··· 1116 1116 init_completion(&mas->tx_reset_done); 1117 1117 init_completion(&mas->rx_reset_done); 1118 1118 spin_lock_init(&mas->lock); 1119 + 1120 + ret = geni_icc_get(&mas->se, NULL); 1121 + if (ret) 1122 + return ret; 1123 + 1119 1124 pm_runtime_use_autosuspend(&pdev->dev); 1120 1125 pm_runtime_set_autosuspend_delay(&pdev->dev, 250); 1121 1126 ret = devm_pm_runtime_enable(dev); ··· 1130 1125 if (device_property_read_bool(&pdev->dev, "spi-slave")) 1131 1126 spi->target = true; 1132 1127 1133 - ret = geni_icc_get(&mas->se, NULL); 1134 - if (ret) 1135 - return ret; 1136 1128 /* Set the bus quota to a reasonable value for register access */ 1137 1129 mas->se.icc_paths[GENI_TO_CORE].avg_bw = Bps_to_icc(CORE_2X_50_MHZ); 1138 1130 mas->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW;
+1 -1
drivers/spi/spi-mtk-snfi.c
··· 1187 1187 1188 1188 /** 1189 1189 * mtk_snand_is_page_ops() - check if the op is a controller supported page op. 1190 - * @op spi-mem op to check 1190 + * @op: spi-mem op to check 1191 1191 * 1192 1192 * Check whether op can be executed with read_from_cache or program_load 1193 1193 * mode in the controller.
+1
drivers/spi/spi-stm32.c
··· 2044 2044 .baud_rate_div_max = STM32H7_SPI_MBR_DIV_MAX, 2045 2045 .has_fifo = true, 2046 2046 .prevent_dma_burst = true, 2047 + .has_device_mode = true, 2047 2048 }; 2048 2049 2049 2050 static const struct of_device_id stm32_spi_of_match[] = {
+1 -1
drivers/ufs/core/ufshcd.c
··· 8219 8219 8220 8220 err = ufshcd_query_attr(hba, UPIU_QUERY_OPCODE_WRITE_ATTR, QUERY_ATTR_IDN_SECONDS_PASSED, 8221 8221 0, 0, &val); 8222 - ufshcd_rpm_put_sync(hba); 8222 + ufshcd_rpm_put(hba); 8223 8223 8224 8224 if (err) 8225 8225 dev_err(hba->dev, "%s: Failed to update rtc %d\n", __func__, err);
+1 -14
drivers/video/fbdev/Kconfig
··· 1236 1236 config FB_VOODOO1 1237 1237 tristate "3Dfx Voodoo Graphics (sst1) support" 1238 1238 depends on FB && PCI 1239 - depends on FB_DEVICE 1240 1239 select FB_IOMEM_HELPERS 1241 1240 help 1242 1241 Say Y here if you have a 3Dfx Voodoo Graphics (Voodoo1/sst1) or ··· 1373 1374 config FB_WM8505 1374 1375 bool "Wondermedia WM8xxx-series frame buffer support" 1375 1376 depends on (FB = y) && HAS_IOMEM && (ARCH_VT8500 || COMPILE_TEST) 1377 + select FB_IOMEM_FOPS 1376 1378 select FB_SYS_FILLRECT if (!FB_WMT_GE_ROPS) 1377 1379 select FB_SYS_COPYAREA if (!FB_WMT_GE_ROPS) 1378 1380 select FB_SYS_IMAGEBLIT ··· 1659 1659 color operation, with depths ranging from 1 bpp to 8 bpp monochrome 1660 1660 and 8, 15 or 16 bpp color; 90 degrees clockwise display rotation for 1661 1661 panels <= 320 pixel horizontal resolution. 1662 - 1663 - config FB_DA8XX 1664 - tristate "DA8xx/OMAP-L1xx/AM335x Framebuffer support" 1665 - depends on FB && HAVE_CLK && HAS_IOMEM 1666 - depends on ARCH_DAVINCI_DA8XX || SOC_AM33XX || COMPILE_TEST 1667 - select FB_CFB_REV_PIXELS_IN_BYTE 1668 - select FB_IOMEM_HELPERS 1669 - select FB_MODE_HELPERS 1670 - select VIDEOMODE_HELPERS 1671 - help 1672 - This is the frame buffer device driver for the TI LCD controller 1673 - found on DA8xx/OMAP-L1xx/AM335x SoCs. 1674 - If unsure, say N. 1675 1662 1676 1663 config FB_VIRTUAL 1677 1664 tristate "Virtual Frame Buffer support (ONLY FOR TESTING!)"
-1
drivers/video/fbdev/Makefile
··· 121 121 obj-$(CONFIG_FB_EFI) += efifb.o 122 122 obj-$(CONFIG_FB_VGA16) += vga16fb.o 123 123 obj-$(CONFIG_FB_OF) += offb.o 124 - obj-$(CONFIG_FB_DA8XX) += da8xx-fb.o 125 124 obj-$(CONFIG_FB_SSD1307) += ssd1307fb.o 126 125 obj-$(CONFIG_FB_SIMPLE) += simplefb.o 127 126
+1 -1
drivers/video/fbdev/bw2.c
··· 147 147 return 0; 148 148 } 149 149 150 - static struct sbus_mmap_map bw2_mmap_map[] = { 150 + static const struct sbus_mmap_map bw2_mmap_map[] = { 151 151 { 152 152 .size = SBUS_MMAP_FBSIZE(1) 153 153 },
+1 -1
drivers/video/fbdev/cg14.c
··· 360 360 info->fix.accel = FB_ACCEL_SUN_CG14; 361 361 } 362 362 363 - static struct sbus_mmap_map __cg14_mmap_map[CG14_MMAP_ENTRIES] = { 363 + static const struct sbus_mmap_map __cg14_mmap_map[CG14_MMAP_ENTRIES] = { 364 364 { 365 365 .voff = CG14_REGS, 366 366 .poff = 0x80000000,
+1 -1
drivers/video/fbdev/cg3.c
··· 209 209 return 0; 210 210 } 211 211 212 - static struct sbus_mmap_map cg3_mmap_map[] = { 212 + static const struct sbus_mmap_map cg3_mmap_map[] = { 213 213 { 214 214 .voff = CG3_MMAP_OFFSET, 215 215 .poff = CG3_RAM_OFFSET,
+1 -1
drivers/video/fbdev/cg6.c
··· 545 545 return 0; 546 546 } 547 547 548 - static struct sbus_mmap_map cg6_mmap_map[] = { 548 + static const struct sbus_mmap_map cg6_mmap_map[] = { 549 549 { 550 550 .voff = CG6_FBC, 551 551 .poff = CG6_FBC_OFFSET,
-1665
drivers/video/fbdev/da8xx-fb.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * Copyright (C) 2008-2009 MontaVista Software Inc. 4 - * Copyright (C) 2008-2009 Texas Instruments Inc 5 - * 6 - * Based on the LCD driver for TI Avalanche processors written by 7 - * Ajay Singh and Shalom Hai. 8 - */ 9 - #include <linux/module.h> 10 - #include <linux/kernel.h> 11 - #include <linux/fb.h> 12 - #include <linux/dma-mapping.h> 13 - #include <linux/device.h> 14 - #include <linux/platform_device.h> 15 - #include <linux/uaccess.h> 16 - #include <linux/pm_runtime.h> 17 - #include <linux/interrupt.h> 18 - #include <linux/wait.h> 19 - #include <linux/clk.h> 20 - #include <linux/cpufreq.h> 21 - #include <linux/console.h> 22 - #include <linux/regulator/consumer.h> 23 - #include <linux/spinlock.h> 24 - #include <linux/slab.h> 25 - #include <linux/delay.h> 26 - #include <linux/lcm.h> 27 - #include <video/da8xx-fb.h> 28 - #include <asm/div64.h> 29 - 30 - #define DRIVER_NAME "da8xx_lcdc" 31 - 32 - #define LCD_VERSION_1 1 33 - #define LCD_VERSION_2 2 34 - 35 - /* LCD Status Register */ 36 - #define LCD_END_OF_FRAME1 BIT(9) 37 - #define LCD_END_OF_FRAME0 BIT(8) 38 - #define LCD_PL_LOAD_DONE BIT(6) 39 - #define LCD_FIFO_UNDERFLOW BIT(5) 40 - #define LCD_SYNC_LOST BIT(2) 41 - #define LCD_FRAME_DONE BIT(0) 42 - 43 - /* LCD DMA Control Register */ 44 - #define LCD_DMA_BURST_SIZE(x) ((x) << 4) 45 - #define LCD_DMA_BURST_1 0x0 46 - #define LCD_DMA_BURST_2 0x1 47 - #define LCD_DMA_BURST_4 0x2 48 - #define LCD_DMA_BURST_8 0x3 49 - #define LCD_DMA_BURST_16 0x4 50 - #define LCD_V1_END_OF_FRAME_INT_ENA BIT(2) 51 - #define LCD_V2_END_OF_FRAME0_INT_ENA BIT(8) 52 - #define LCD_V2_END_OF_FRAME1_INT_ENA BIT(9) 53 - #define LCD_DUAL_FRAME_BUFFER_ENABLE BIT(0) 54 - 55 - /* LCD Control Register */ 56 - #define LCD_CLK_DIVISOR(x) ((x) << 8) 57 - #define LCD_RASTER_MODE 0x01 58 - 59 - /* LCD Raster Control Register */ 60 - #define LCD_PALETTE_LOAD_MODE(x) ((x) << 20) 61 - #define PALETTE_AND_DATA 0x00 62 - #define 
PALETTE_ONLY 0x01 63 - #define DATA_ONLY 0x02 64 - 65 - #define LCD_MONO_8BIT_MODE BIT(9) 66 - #define LCD_RASTER_ORDER BIT(8) 67 - #define LCD_TFT_MODE BIT(7) 68 - #define LCD_V1_UNDERFLOW_INT_ENA BIT(6) 69 - #define LCD_V2_UNDERFLOW_INT_ENA BIT(5) 70 - #define LCD_V1_PL_INT_ENA BIT(4) 71 - #define LCD_V2_PL_INT_ENA BIT(6) 72 - #define LCD_MONOCHROME_MODE BIT(1) 73 - #define LCD_RASTER_ENABLE BIT(0) 74 - #define LCD_TFT_ALT_ENABLE BIT(23) 75 - #define LCD_STN_565_ENABLE BIT(24) 76 - #define LCD_V2_DMA_CLK_EN BIT(2) 77 - #define LCD_V2_LIDD_CLK_EN BIT(1) 78 - #define LCD_V2_CORE_CLK_EN BIT(0) 79 - #define LCD_V2_LPP_B10 26 80 - #define LCD_V2_TFT_24BPP_MODE BIT(25) 81 - #define LCD_V2_TFT_24BPP_UNPACK BIT(26) 82 - 83 - /* LCD Raster Timing 2 Register */ 84 - #define LCD_AC_BIAS_TRANSITIONS_PER_INT(x) ((x) << 16) 85 - #define LCD_AC_BIAS_FREQUENCY(x) ((x) << 8) 86 - #define LCD_SYNC_CTRL BIT(25) 87 - #define LCD_SYNC_EDGE BIT(24) 88 - #define LCD_INVERT_PIXEL_CLOCK BIT(22) 89 - #define LCD_INVERT_LINE_CLOCK BIT(21) 90 - #define LCD_INVERT_FRAME_CLOCK BIT(20) 91 - 92 - /* LCD Block */ 93 - #define LCD_PID_REG 0x0 94 - #define LCD_CTRL_REG 0x4 95 - #define LCD_STAT_REG 0x8 96 - #define LCD_RASTER_CTRL_REG 0x28 97 - #define LCD_RASTER_TIMING_0_REG 0x2C 98 - #define LCD_RASTER_TIMING_1_REG 0x30 99 - #define LCD_RASTER_TIMING_2_REG 0x34 100 - #define LCD_DMA_CTRL_REG 0x40 101 - #define LCD_DMA_FRM_BUF_BASE_ADDR_0_REG 0x44 102 - #define LCD_DMA_FRM_BUF_CEILING_ADDR_0_REG 0x48 103 - #define LCD_DMA_FRM_BUF_BASE_ADDR_1_REG 0x4C 104 - #define LCD_DMA_FRM_BUF_CEILING_ADDR_1_REG 0x50 105 - 106 - /* Interrupt Registers available only in Version 2 */ 107 - #define LCD_RAW_STAT_REG 0x58 108 - #define LCD_MASKED_STAT_REG 0x5c 109 - #define LCD_INT_ENABLE_SET_REG 0x60 110 - #define LCD_INT_ENABLE_CLR_REG 0x64 111 - #define LCD_END_OF_INT_IND_REG 0x68 112 - 113 - /* Clock registers available only on Version 2 */ 114 - #define LCD_CLK_ENABLE_REG 0x6c 115 - #define LCD_CLK_RESET_REG 
0x70 116 - #define LCD_CLK_MAIN_RESET BIT(3) 117 - 118 - #define LCD_NUM_BUFFERS 2 119 - 120 - #define PALETTE_SIZE 256 121 - 122 - #define CLK_MIN_DIV 2 123 - #define CLK_MAX_DIV 255 124 - 125 - static void __iomem *da8xx_fb_reg_base; 126 - static unsigned int lcd_revision; 127 - static irq_handler_t lcdc_irq_handler; 128 - static wait_queue_head_t frame_done_wq; 129 - static int frame_done_flag; 130 - 131 - static unsigned int lcdc_read(unsigned int addr) 132 - { 133 - return (unsigned int)__raw_readl(da8xx_fb_reg_base + (addr)); 134 - } 135 - 136 - static void lcdc_write(unsigned int val, unsigned int addr) 137 - { 138 - __raw_writel(val, da8xx_fb_reg_base + (addr)); 139 - } 140 - 141 - struct da8xx_fb_par { 142 - struct device *dev; 143 - dma_addr_t p_palette_base; 144 - unsigned char *v_palette_base; 145 - dma_addr_t vram_phys; 146 - unsigned long vram_size; 147 - void *vram_virt; 148 - unsigned int dma_start; 149 - unsigned int dma_end; 150 - struct clk *lcdc_clk; 151 - int irq; 152 - unsigned int palette_sz; 153 - int blank; 154 - wait_queue_head_t vsync_wait; 155 - int vsync_flag; 156 - int vsync_timeout; 157 - spinlock_t lock_for_chan_update; 158 - 159 - /* 160 - * LCDC has 2 ping pong DMA channels, channel 0 161 - * and channel 1. 
162 - */ 163 - unsigned int which_dma_channel_done; 164 - #ifdef CONFIG_CPU_FREQ 165 - struct notifier_block freq_transition; 166 - #endif 167 - unsigned int lcdc_clk_rate; 168 - struct regulator *lcd_supply; 169 - u32 pseudo_palette[16]; 170 - struct fb_videomode mode; 171 - struct lcd_ctrl_config cfg; 172 - }; 173 - 174 - static struct fb_var_screeninfo da8xx_fb_var; 175 - 176 - static struct fb_fix_screeninfo da8xx_fb_fix = { 177 - .id = "DA8xx FB Drv", 178 - .type = FB_TYPE_PACKED_PIXELS, 179 - .type_aux = 0, 180 - .visual = FB_VISUAL_PSEUDOCOLOR, 181 - .xpanstep = 0, 182 - .ypanstep = 1, 183 - .ywrapstep = 0, 184 - .accel = FB_ACCEL_NONE 185 - }; 186 - 187 - static struct fb_videomode known_lcd_panels[] = { 188 - /* Sharp LCD035Q3DG01 */ 189 - [0] = { 190 - .name = "Sharp_LCD035Q3DG01", 191 - .xres = 320, 192 - .yres = 240, 193 - .pixclock = KHZ2PICOS(4607), 194 - .left_margin = 6, 195 - .right_margin = 8, 196 - .upper_margin = 2, 197 - .lower_margin = 2, 198 - .hsync_len = 0, 199 - .vsync_len = 0, 200 - .sync = FB_SYNC_CLK_INVERT, 201 - }, 202 - /* Sharp LK043T1DG01 */ 203 - [1] = { 204 - .name = "Sharp_LK043T1DG01", 205 - .xres = 480, 206 - .yres = 272, 207 - .pixclock = KHZ2PICOS(7833), 208 - .left_margin = 2, 209 - .right_margin = 2, 210 - .upper_margin = 2, 211 - .lower_margin = 2, 212 - .hsync_len = 41, 213 - .vsync_len = 10, 214 - .sync = 0, 215 - .flag = 0, 216 - }, 217 - [2] = { 218 - /* Hitachi SP10Q010 */ 219 - .name = "SP10Q010", 220 - .xres = 320, 221 - .yres = 240, 222 - .pixclock = KHZ2PICOS(7833), 223 - .left_margin = 10, 224 - .right_margin = 10, 225 - .upper_margin = 10, 226 - .lower_margin = 10, 227 - .hsync_len = 10, 228 - .vsync_len = 10, 229 - .sync = 0, 230 - .flag = 0, 231 - }, 232 - [3] = { 233 - /* Densitron 84-0023-001T */ 234 - .name = "Densitron_84-0023-001T", 235 - .xres = 320, 236 - .yres = 240, 237 - .pixclock = KHZ2PICOS(6400), 238 - .left_margin = 0, 239 - .right_margin = 0, 240 - .upper_margin = 0, 241 - .lower_margin = 0, 
242 - .hsync_len = 30, 243 - .vsync_len = 3, 244 - .sync = 0, 245 - }, 246 - }; 247 - 248 - static bool da8xx_fb_is_raster_enabled(void) 249 - { 250 - return !!(lcdc_read(LCD_RASTER_CTRL_REG) & LCD_RASTER_ENABLE); 251 - } 252 - 253 - /* Enable the Raster Engine of the LCD Controller */ 254 - static void lcd_enable_raster(void) 255 - { 256 - u32 reg; 257 - 258 - /* Put LCDC in reset for several cycles */ 259 - if (lcd_revision == LCD_VERSION_2) 260 - /* Write 1 to reset LCDC */ 261 - lcdc_write(LCD_CLK_MAIN_RESET, LCD_CLK_RESET_REG); 262 - mdelay(1); 263 - 264 - /* Bring LCDC out of reset */ 265 - if (lcd_revision == LCD_VERSION_2) 266 - lcdc_write(0, LCD_CLK_RESET_REG); 267 - mdelay(1); 268 - 269 - /* Above reset sequence doesnot reset register context */ 270 - reg = lcdc_read(LCD_RASTER_CTRL_REG); 271 - if (!(reg & LCD_RASTER_ENABLE)) 272 - lcdc_write(reg | LCD_RASTER_ENABLE, LCD_RASTER_CTRL_REG); 273 - } 274 - 275 - /* Disable the Raster Engine of the LCD Controller */ 276 - static void lcd_disable_raster(enum da8xx_frame_complete wait_for_frame_done) 277 - { 278 - u32 reg; 279 - int ret; 280 - 281 - reg = lcdc_read(LCD_RASTER_CTRL_REG); 282 - if (reg & LCD_RASTER_ENABLE) 283 - lcdc_write(reg & ~LCD_RASTER_ENABLE, LCD_RASTER_CTRL_REG); 284 - else 285 - /* return if already disabled */ 286 - return; 287 - 288 - if ((wait_for_frame_done == DA8XX_FRAME_WAIT) && 289 - (lcd_revision == LCD_VERSION_2)) { 290 - frame_done_flag = 0; 291 - ret = wait_event_interruptible_timeout(frame_done_wq, 292 - frame_done_flag != 0, 293 - msecs_to_jiffies(50)); 294 - if (ret == 0) 295 - pr_err("LCD Controller timed out\n"); 296 - } 297 - } 298 - 299 - static void lcd_blit(int load_mode, struct da8xx_fb_par *par) 300 - { 301 - u32 start; 302 - u32 end; 303 - u32 reg_ras; 304 - u32 reg_dma; 305 - u32 reg_int; 306 - 307 - /* init reg to clear PLM (loading mode) fields */ 308 - reg_ras = lcdc_read(LCD_RASTER_CTRL_REG); 309 - reg_ras &= ~(3 << 20); 310 - 311 - reg_dma = 
lcdc_read(LCD_DMA_CTRL_REG);
312 -
313 -	if (load_mode == LOAD_DATA) {
314 -		start = par->dma_start;
315 -		end = par->dma_end;
316 -
317 -		reg_ras |= LCD_PALETTE_LOAD_MODE(DATA_ONLY);
318 -		if (lcd_revision == LCD_VERSION_1) {
319 -			reg_dma |= LCD_V1_END_OF_FRAME_INT_ENA;
320 -		} else {
321 -			reg_int = lcdc_read(LCD_INT_ENABLE_SET_REG) |
322 -				LCD_V2_END_OF_FRAME0_INT_ENA |
323 -				LCD_V2_END_OF_FRAME1_INT_ENA |
324 -				LCD_FRAME_DONE | LCD_SYNC_LOST;
325 -			lcdc_write(reg_int, LCD_INT_ENABLE_SET_REG);
326 -		}
327 -		reg_dma |= LCD_DUAL_FRAME_BUFFER_ENABLE;
328 -
329 -		lcdc_write(start, LCD_DMA_FRM_BUF_BASE_ADDR_0_REG);
330 -		lcdc_write(end, LCD_DMA_FRM_BUF_CEILING_ADDR_0_REG);
331 -		lcdc_write(start, LCD_DMA_FRM_BUF_BASE_ADDR_1_REG);
332 -		lcdc_write(end, LCD_DMA_FRM_BUF_CEILING_ADDR_1_REG);
333 -	} else if (load_mode == LOAD_PALETTE) {
334 -		start = par->p_palette_base;
335 -		end = start + par->palette_sz - 1;
336 -
337 -		reg_ras |= LCD_PALETTE_LOAD_MODE(PALETTE_ONLY);
338 -
339 -		if (lcd_revision == LCD_VERSION_1) {
340 -			reg_ras |= LCD_V1_PL_INT_ENA;
341 -		} else {
342 -			reg_int = lcdc_read(LCD_INT_ENABLE_SET_REG) |
343 -				LCD_V2_PL_INT_ENA;
344 -			lcdc_write(reg_int, LCD_INT_ENABLE_SET_REG);
345 -		}
346 -
347 -		lcdc_write(start, LCD_DMA_FRM_BUF_BASE_ADDR_0_REG);
348 -		lcdc_write(end, LCD_DMA_FRM_BUF_CEILING_ADDR_0_REG);
349 -	}
350 -
351 -	lcdc_write(reg_dma, LCD_DMA_CTRL_REG);
352 -	lcdc_write(reg_ras, LCD_RASTER_CTRL_REG);
353 -
354 -	/*
355 -	 * The Raster enable bit must be set after all other control fields are
356 -	 * set.
357 -	 */
358 -	lcd_enable_raster();
359 - }
360 -
361 - /* Configure the Burst Size and fifo threshold of DMA */
362 - static int lcd_cfg_dma(int burst_size, int fifo_th)
363 - {
364 -	u32 reg;
365 -
366 -	reg = lcdc_read(LCD_DMA_CTRL_REG) & 0x00000001;
367 -	switch (burst_size) {
368 -	case 1:
369 -		reg |= LCD_DMA_BURST_SIZE(LCD_DMA_BURST_1);
370 -		break;
371 -	case 2:
372 -		reg |= LCD_DMA_BURST_SIZE(LCD_DMA_BURST_2);
373 -		break;
374 -	case 4:
375 -		reg |= LCD_DMA_BURST_SIZE(LCD_DMA_BURST_4);
376 -		break;
377 -	case 8:
378 -		reg |= LCD_DMA_BURST_SIZE(LCD_DMA_BURST_8);
379 -		break;
380 -	case 16:
381 -	default:
382 -		reg |= LCD_DMA_BURST_SIZE(LCD_DMA_BURST_16);
383 -		break;
384 -	}
385 -
386 -	reg |= (fifo_th << 8);
387 -
388 -	lcdc_write(reg, LCD_DMA_CTRL_REG);
389 -
390 -	return 0;
391 - }
392 -
393 - static void lcd_cfg_ac_bias(int period, int transitions_per_int)
394 - {
395 -	u32 reg;
396 -
397 -	/* Set the AC Bias Period and Number of Transitions per Interrupt */
398 -	reg = lcdc_read(LCD_RASTER_TIMING_2_REG) & 0xFFF00000;
399 -	reg |= LCD_AC_BIAS_FREQUENCY(period) |
400 -		LCD_AC_BIAS_TRANSITIONS_PER_INT(transitions_per_int);
401 -	lcdc_write(reg, LCD_RASTER_TIMING_2_REG);
402 - }
403 -
404 - static void lcd_cfg_horizontal_sync(int back_porch, int pulse_width,
405 -		int front_porch)
406 - {
407 -	u32 reg;
408 -
409 -	reg = lcdc_read(LCD_RASTER_TIMING_0_REG) & 0x3ff;
410 -	reg |= (((back_porch-1) & 0xff) << 24)
411 -		| (((front_porch-1) & 0xff) << 16)
412 -		| (((pulse_width-1) & 0x3f) << 10);
413 -	lcdc_write(reg, LCD_RASTER_TIMING_0_REG);
414 -
415 -	/*
416 -	 * LCDC Version 2 adds some extra bits that increase the allowable
417 -	 * size of the horizontal timing registers.
418 -	 * remember that the registers use 0 to represent 1 so all values
419 -	 * that get set into register need to be decremented by 1
420 -	 */
421 -	if (lcd_revision == LCD_VERSION_2) {
422 -		/* Mask off the bits we want to change */
423 -		reg = lcdc_read(LCD_RASTER_TIMING_2_REG) & ~0x780000ff;
424 -		reg |= ((front_porch-1) & 0x300) >> 8;
425 -		reg |= ((back_porch-1) & 0x300) >> 4;
426 -		reg |= ((pulse_width-1) & 0x3c0) << 21;
427 -		lcdc_write(reg, LCD_RASTER_TIMING_2_REG);
428 -	}
429 - }
430 -
431 - static void lcd_cfg_vertical_sync(int back_porch, int pulse_width,
432 -		int front_porch)
433 - {
434 -	u32 reg;
435 -
436 -	reg = lcdc_read(LCD_RASTER_TIMING_1_REG) & 0x3ff;
437 -	reg |= ((back_porch & 0xff) << 24)
438 -		| ((front_porch & 0xff) << 16)
439 -		| (((pulse_width-1) & 0x3f) << 10);
440 -	lcdc_write(reg, LCD_RASTER_TIMING_1_REG);
441 - }
442 -
443 - static int lcd_cfg_display(const struct lcd_ctrl_config *cfg,
444 -		struct fb_videomode *panel)
445 - {
446 -	u32 reg;
447 -	u32 reg_int;
448 -
449 -	reg = lcdc_read(LCD_RASTER_CTRL_REG) & ~(LCD_TFT_MODE |
450 -		LCD_MONO_8BIT_MODE |
451 -		LCD_MONOCHROME_MODE);
452 -
453 -	switch (cfg->panel_shade) {
454 -	case MONOCHROME:
455 -		reg |= LCD_MONOCHROME_MODE;
456 -		if (cfg->mono_8bit_mode)
457 -			reg |= LCD_MONO_8BIT_MODE;
458 -		break;
459 -	case COLOR_ACTIVE:
460 -		reg |= LCD_TFT_MODE;
461 -		if (cfg->tft_alt_mode)
462 -			reg |= LCD_TFT_ALT_ENABLE;
463 -		break;
464 -
465 -	case COLOR_PASSIVE:
466 -		/* AC bias applicable only for Passive panels */
467 -		lcd_cfg_ac_bias(cfg->ac_bias, cfg->ac_bias_intrpt);
468 -		if (cfg->bpp == 12 && cfg->stn_565_mode)
469 -			reg |= LCD_STN_565_ENABLE;
470 -		break;
471 -
472 -	default:
473 -		return -EINVAL;
474 -	}
475 -
476 -	/* enable additional interrupts here */
477 -	if (lcd_revision == LCD_VERSION_1) {
478 -		reg |= LCD_V1_UNDERFLOW_INT_ENA;
479 -	} else {
480 -		reg_int = lcdc_read(LCD_INT_ENABLE_SET_REG) |
481 -			LCD_V2_UNDERFLOW_INT_ENA;
482 -		lcdc_write(reg_int, LCD_INT_ENABLE_SET_REG);
483 -	}
484 -
485 -	lcdc_write(reg, LCD_RASTER_CTRL_REG);
486 -
487 -	reg = lcdc_read(LCD_RASTER_TIMING_2_REG);
488 -
489 -	reg |= LCD_SYNC_CTRL;
490 -
491 -	if (cfg->sync_edge)
492 -		reg |= LCD_SYNC_EDGE;
493 -	else
494 -		reg &= ~LCD_SYNC_EDGE;
495 -
496 -	if ((panel->sync & FB_SYNC_HOR_HIGH_ACT) == 0)
497 -		reg |= LCD_INVERT_LINE_CLOCK;
498 -	else
499 -		reg &= ~LCD_INVERT_LINE_CLOCK;
500 -
501 -	if ((panel->sync & FB_SYNC_VERT_HIGH_ACT) == 0)
502 -		reg |= LCD_INVERT_FRAME_CLOCK;
503 -	else
504 -		reg &= ~LCD_INVERT_FRAME_CLOCK;
505 -
506 -	lcdc_write(reg, LCD_RASTER_TIMING_2_REG);
507 -
508 -	return 0;
509 - }
510 -
511 - static int lcd_cfg_frame_buffer(struct da8xx_fb_par *par, u32 width, u32 height,
512 -		u32 bpp, u32 raster_order)
513 - {
514 -	u32 reg;
515 -
516 -	if (bpp > 16 && lcd_revision == LCD_VERSION_1)
517 -		return -EINVAL;
518 -
519 -	/* Set the Panel Width */
520 -	/* Pixels per line = (PPL + 1)*16 */
521 -	if (lcd_revision == LCD_VERSION_1) {
522 -		/*
523 -		 * 0x3F in bits 4..9 gives max horizontal resolution = 1024
524 -		 * pixels.
525 -		 */
526 -		width &= 0x3f0;
527 -	} else {
528 -		/*
529 -		 * 0x7F in bits 4..10 gives max horizontal resolution = 2048
530 -		 * pixels.
531 -		 */
532 -		width &= 0x7f0;
533 -	}
534 -
535 -	reg = lcdc_read(LCD_RASTER_TIMING_0_REG);
536 -	reg &= 0xfffffc00;
537 -	if (lcd_revision == LCD_VERSION_1) {
538 -		reg |= ((width >> 4) - 1) << 4;
539 -	} else {
540 -		width = (width >> 4) - 1;
541 -		reg |= ((width & 0x3f) << 4) | ((width & 0x40) >> 3);
542 -	}
543 -	lcdc_write(reg, LCD_RASTER_TIMING_0_REG);
544 -
545 -	/* Set the Panel Height */
546 -	/* Set bits 9:0 of Lines Per Pixel */
547 -	reg = lcdc_read(LCD_RASTER_TIMING_1_REG);
548 -	reg = ((height - 1) & 0x3ff) | (reg & 0xfffffc00);
549 -	lcdc_write(reg, LCD_RASTER_TIMING_1_REG);
550 -
551 -	/* Set bit 10 of Lines Per Pixel */
552 -	if (lcd_revision == LCD_VERSION_2) {
553 -		reg = lcdc_read(LCD_RASTER_TIMING_2_REG);
554 -		reg |= ((height - 1) & 0x400) << 16;
555 -		lcdc_write(reg, LCD_RASTER_TIMING_2_REG);
556 -	}
557 -
558 -	/* Set the Raster Order of the Frame Buffer */
559 -	reg = lcdc_read(LCD_RASTER_CTRL_REG) & ~(1 << 8);
560 -	if (raster_order)
561 -		reg |= LCD_RASTER_ORDER;
562 -
563 -	par->palette_sz = 16 * 2;
564 -
565 -	switch (bpp) {
566 -	case 1:
567 -	case 2:
568 -	case 4:
569 -	case 16:
570 -		break;
571 -	case 24:
572 -		reg |= LCD_V2_TFT_24BPP_MODE;
573 -		break;
574 -	case 32:
575 -		reg |= LCD_V2_TFT_24BPP_MODE;
576 -		reg |= LCD_V2_TFT_24BPP_UNPACK;
577 -		break;
578 -	case 8:
579 -		par->palette_sz = 256 * 2;
580 -		break;
581 -
582 -	default:
583 -		return -EINVAL;
584 -	}
585 -
586 -	lcdc_write(reg, LCD_RASTER_CTRL_REG);
587 -
588 -	return 0;
589 - }
590 -
591 - #define CNVT_TOHW(val, width) ((((val) << (width)) + 0x7FFF - (val)) >> 16)
592 - static int fb_setcolreg(unsigned regno, unsigned red, unsigned green,
593 -		unsigned blue, unsigned transp,
594 -		struct fb_info *info)
595 - {
596 -	struct da8xx_fb_par *par = info->par;
597 -	unsigned short *palette = (unsigned short *) par->v_palette_base;
598 -	u_short pal;
599 -	int update_hw = 0;
600 -
601 -	if (regno > 255)
602 -		return 1;
603 -
604 -	if (info->fix.visual == FB_VISUAL_DIRECTCOLOR)
605 -		return 1;
606 -
607 -	if (info->var.bits_per_pixel > 16 && lcd_revision == LCD_VERSION_1)
608 -		return -EINVAL;
609 -
610 -	switch (info->fix.visual) {
611 -	case FB_VISUAL_TRUECOLOR:
612 -		red = CNVT_TOHW(red, info->var.red.length);
613 -		green = CNVT_TOHW(green, info->var.green.length);
614 -		blue = CNVT_TOHW(blue, info->var.blue.length);
615 -		break;
616 -	case FB_VISUAL_PSEUDOCOLOR:
617 -		switch (info->var.bits_per_pixel) {
618 -		case 4:
619 -			if (regno > 15)
620 -				return -EINVAL;
621 -
622 -			if (info->var.grayscale) {
623 -				pal = regno;
624 -			} else {
625 -				red >>= 4;
626 -				green >>= 8;
627 -				blue >>= 12;
628 -
629 -				pal = red & 0x0f00;
630 -				pal |= green & 0x00f0;
631 -				pal |= blue & 0x000f;
632 -			}
633 -			if (regno == 0)
634 -				pal |= 0x2000;
635 -			palette[regno] = pal;
636 -			break;
637 -
638 -		case 8:
639 -			red >>= 4;
640 -			green >>= 8;
641 -			blue >>= 12;
642 -
643 -			pal = (red & 0x0f00);
644 -			pal |= (green & 0x00f0);
645 -			pal |= (blue & 0x000f);
646 -
647 -			if (palette[regno] != pal) {
648 -				update_hw = 1;
649 -				palette[regno] = pal;
650 -			}
651 -			break;
652 -		}
653 -		break;
654 -	}
655 -
656 -	/* Truecolor has hardware independent palette */
657 -	if (info->fix.visual == FB_VISUAL_TRUECOLOR) {
658 -		u32 v;
659 -
660 -		if (regno > 15)
661 -			return -EINVAL;
662 -
663 -		v = (red << info->var.red.offset) |
664 -			(green << info->var.green.offset) |
665 -			(blue << info->var.blue.offset);
666 -
667 -		((u32 *) (info->pseudo_palette))[regno] = v;
668 -		if (palette[0] != 0x4000) {
669 -			update_hw = 1;
670 -			palette[0] = 0x4000;
671 -		}
672 -	}
673 -
674 -	/* Update the palette in the h/w as needed. */
675 -	if (update_hw)
676 -		lcd_blit(LOAD_PALETTE, par);
677 -
678 -	return 0;
679 - }
680 - #undef CNVT_TOHW
681 -
682 - static void da8xx_fb_lcd_reset(void)
683 - {
684 -	/* DMA has to be disabled */
685 -	lcdc_write(0, LCD_DMA_CTRL_REG);
686 -	lcdc_write(0, LCD_RASTER_CTRL_REG);
687 -
688 -	if (lcd_revision == LCD_VERSION_2) {
689 -		lcdc_write(0, LCD_INT_ENABLE_SET_REG);
690 -		/* Write 1 to reset */
691 -		lcdc_write(LCD_CLK_MAIN_RESET, LCD_CLK_RESET_REG);
692 -		lcdc_write(0, LCD_CLK_RESET_REG);
693 -	}
694 - }
695 -
696 - static int da8xx_fb_config_clk_divider(struct da8xx_fb_par *par,
697 -		unsigned lcdc_clk_div,
698 -		unsigned lcdc_clk_rate)
699 - {
700 -	int ret;
701 -
702 -	if (par->lcdc_clk_rate != lcdc_clk_rate) {
703 -		ret = clk_set_rate(par->lcdc_clk, lcdc_clk_rate);
704 -		if (ret) {
705 -			dev_err(par->dev,
706 -				"unable to set clock rate at %u\n",
707 -				lcdc_clk_rate);
708 -			return ret;
709 -		}
710 -		par->lcdc_clk_rate = clk_get_rate(par->lcdc_clk);
711 -	}
712 -
713 -	/* Configure the LCD clock divisor. */
714 -	lcdc_write(LCD_CLK_DIVISOR(lcdc_clk_div) |
715 -			(LCD_RASTER_MODE & 0x1), LCD_CTRL_REG);
716 -
717 -	if (lcd_revision == LCD_VERSION_2)
718 -		lcdc_write(LCD_V2_DMA_CLK_EN | LCD_V2_LIDD_CLK_EN |
719 -				LCD_V2_CORE_CLK_EN, LCD_CLK_ENABLE_REG);
720 -
721 -	return 0;
722 - }
723 -
724 - static unsigned int da8xx_fb_calc_clk_divider(struct da8xx_fb_par *par,
725 -		unsigned pixclock,
726 -		unsigned *lcdc_clk_rate)
727 - {
728 -	unsigned lcdc_clk_div;
729 -
730 -	pixclock = PICOS2KHZ(pixclock) * 1000;
731 -
732 -	*lcdc_clk_rate = par->lcdc_clk_rate;
733 -
734 -	if (pixclock < (*lcdc_clk_rate / CLK_MAX_DIV)) {
735 -		*lcdc_clk_rate = clk_round_rate(par->lcdc_clk,
736 -				pixclock * CLK_MAX_DIV);
737 -		lcdc_clk_div = CLK_MAX_DIV;
738 -	} else if (pixclock > (*lcdc_clk_rate / CLK_MIN_DIV)) {
739 -		*lcdc_clk_rate = clk_round_rate(par->lcdc_clk,
740 -				pixclock * CLK_MIN_DIV);
741 -		lcdc_clk_div = CLK_MIN_DIV;
742 -	} else {
743 -		lcdc_clk_div = *lcdc_clk_rate / pixclock;
744 -	}
745 -
746 -	return lcdc_clk_div;
747 - }
748 -
749 - static int da8xx_fb_calc_config_clk_divider(struct da8xx_fb_par *par,
750 -		struct fb_videomode *mode)
751 - {
752 -	unsigned lcdc_clk_rate;
753 -	unsigned lcdc_clk_div = da8xx_fb_calc_clk_divider(par, mode->pixclock,
754 -			&lcdc_clk_rate);
755 -
756 -	return da8xx_fb_config_clk_divider(par, lcdc_clk_div, lcdc_clk_rate);
757 - }
758 -
759 - static unsigned da8xx_fb_round_clk(struct da8xx_fb_par *par,
760 -		unsigned pixclock)
761 - {
762 -	unsigned lcdc_clk_div, lcdc_clk_rate;
763 -
764 -	lcdc_clk_div = da8xx_fb_calc_clk_divider(par, pixclock, &lcdc_clk_rate);
765 -	return KHZ2PICOS(lcdc_clk_rate / (1000 * lcdc_clk_div));
766 - }
767 -
768 - static int lcd_init(struct da8xx_fb_par *par, const struct lcd_ctrl_config *cfg,
769 -		struct fb_videomode *panel)
770 - {
771 -	u32 bpp;
772 -	int ret = 0;
773 -
774 -	ret = da8xx_fb_calc_config_clk_divider(par, panel);
775 -	if (ret) {
776 -		dev_err(par->dev, "unable to configure clock\n");
777 -		return ret;
778 -	}
779 -
780 -	if (panel->sync & FB_SYNC_CLK_INVERT)
781 -		lcdc_write((lcdc_read(LCD_RASTER_TIMING_2_REG) |
782 -			LCD_INVERT_PIXEL_CLOCK), LCD_RASTER_TIMING_2_REG);
783 -	else
784 -		lcdc_write((lcdc_read(LCD_RASTER_TIMING_2_REG) &
785 -			~LCD_INVERT_PIXEL_CLOCK), LCD_RASTER_TIMING_2_REG);
786 -
787 -	/* Configure the DMA burst size and fifo threshold. */
788 -	ret = lcd_cfg_dma(cfg->dma_burst_sz, cfg->fifo_th);
789 -	if (ret < 0)
790 -		return ret;
791 -
792 -	/* Configure the vertical and horizontal sync properties. */
793 -	lcd_cfg_vertical_sync(panel->upper_margin, panel->vsync_len,
794 -			panel->lower_margin);
795 -	lcd_cfg_horizontal_sync(panel->left_margin, panel->hsync_len,
796 -			panel->right_margin);
797 -
798 -	/* Configure for display */
799 -	ret = lcd_cfg_display(cfg, panel);
800 -	if (ret < 0)
801 -		return ret;
802 -
803 -	bpp = cfg->bpp;
804 -
805 -	if (bpp == 12)
806 -		bpp = 16;
807 -	ret = lcd_cfg_frame_buffer(par, (unsigned int)panel->xres,
808 -			(unsigned int)panel->yres, bpp,
809 -			cfg->raster_order);
810 -	if (ret < 0)
811 -		return ret;
812 -
813 -	/* Configure FDD */
814 -	lcdc_write((lcdc_read(LCD_RASTER_CTRL_REG) & 0xfff00fff) |
815 -		       (cfg->fdd << 12), LCD_RASTER_CTRL_REG);
816 -
817 -	return 0;
818 - }
819 -
820 - /* IRQ handler for version 2 of LCDC */
821 - static irqreturn_t lcdc_irq_handler_rev02(int irq, void *arg)
822 - {
823 -	struct da8xx_fb_par *par = arg;
824 -	u32 stat = lcdc_read(LCD_MASKED_STAT_REG);
825 -
826 -	if ((stat & LCD_SYNC_LOST) && (stat & LCD_FIFO_UNDERFLOW)) {
827 -		lcd_disable_raster(DA8XX_FRAME_NOWAIT);
828 -		lcdc_write(stat, LCD_MASKED_STAT_REG);
829 -		lcd_enable_raster();
830 -	} else if (stat & LCD_PL_LOAD_DONE) {
831 -		/*
832 -		 * Must disable raster before changing state of any control bit.
833 -		 * And also must be disabled before clearing the PL loading
834 -		 * interrupt via the following write to the status register. If
835 -		 * this is done after then one gets multiple PL done interrupts.
836 -		 */
837 -		lcd_disable_raster(DA8XX_FRAME_NOWAIT);
838 -
839 -		lcdc_write(stat, LCD_MASKED_STAT_REG);
840 -
841 -		/* Disable PL completion interrupt */
842 -		lcdc_write(LCD_V2_PL_INT_ENA, LCD_INT_ENABLE_CLR_REG);
843 -
844 -		/* Setup and start data loading mode */
845 -		lcd_blit(LOAD_DATA, par);
846 -	} else {
847 -		lcdc_write(stat, LCD_MASKED_STAT_REG);
848 -
849 -		if (stat & LCD_END_OF_FRAME0) {
850 -			par->which_dma_channel_done = 0;
851 -			lcdc_write(par->dma_start,
852 -				   LCD_DMA_FRM_BUF_BASE_ADDR_0_REG);
853 -			lcdc_write(par->dma_end,
854 -				   LCD_DMA_FRM_BUF_CEILING_ADDR_0_REG);
855 -			par->vsync_flag = 1;
856 -			wake_up_interruptible(&par->vsync_wait);
857 -		}
858 -
859 -		if (stat & LCD_END_OF_FRAME1) {
860 -			par->which_dma_channel_done = 1;
861 -			lcdc_write(par->dma_start,
862 -				   LCD_DMA_FRM_BUF_BASE_ADDR_1_REG);
863 -			lcdc_write(par->dma_end,
864 -				   LCD_DMA_FRM_BUF_CEILING_ADDR_1_REG);
865 -			par->vsync_flag = 1;
866 -			wake_up_interruptible(&par->vsync_wait);
867 -		}
868 -
869 -		/* Set only when controller is disabled and at the end of
870 -		 * active frame
871 -		 */
872 -		if (stat & BIT(0)) {
873 -			frame_done_flag = 1;
874 -			wake_up_interruptible(&frame_done_wq);
875 -		}
876 -	}
877 -
878 -	lcdc_write(0, LCD_END_OF_INT_IND_REG);
879 -	return IRQ_HANDLED;
880 - }
881 -
882 - /* IRQ handler for version 1 LCDC */
883 - static irqreturn_t lcdc_irq_handler_rev01(int irq, void *arg)
884 - {
885 -	struct da8xx_fb_par *par = arg;
886 -	u32 stat = lcdc_read(LCD_STAT_REG);
887 -	u32 reg_ras;
888 -
889 -	if ((stat & LCD_SYNC_LOST) && (stat & LCD_FIFO_UNDERFLOW)) {
890 -		lcd_disable_raster(DA8XX_FRAME_NOWAIT);
891 -		lcdc_write(stat, LCD_STAT_REG);
892 -		lcd_enable_raster();
893 -	} else if (stat & LCD_PL_LOAD_DONE) {
894 -		/*
895 -		 * Must disable raster before changing state of any control bit.
896 -		 * And also must be disabled before clearing the PL loading
897 -		 * interrupt via the following write to the status register. If
898 -		 * this is done after then one gets multiple PL done interrupts.
899 -		 */
900 -		lcd_disable_raster(DA8XX_FRAME_NOWAIT);
901 -
902 -		lcdc_write(stat, LCD_STAT_REG);
903 -
904 -		/* Disable PL completion interrupt */
905 -		reg_ras = lcdc_read(LCD_RASTER_CTRL_REG);
906 -		reg_ras &= ~LCD_V1_PL_INT_ENA;
907 -		lcdc_write(reg_ras, LCD_RASTER_CTRL_REG);
908 -
909 -		/* Setup and start data loading mode */
910 -		lcd_blit(LOAD_DATA, par);
911 -	} else {
912 -		lcdc_write(stat, LCD_STAT_REG);
913 -
914 -		if (stat & LCD_END_OF_FRAME0) {
915 -			par->which_dma_channel_done = 0;
916 -			lcdc_write(par->dma_start,
917 -				   LCD_DMA_FRM_BUF_BASE_ADDR_0_REG);
918 -			lcdc_write(par->dma_end,
919 -				   LCD_DMA_FRM_BUF_CEILING_ADDR_0_REG);
920 -			par->vsync_flag = 1;
921 -			wake_up_interruptible(&par->vsync_wait);
922 -		}
923 -
924 -		if (stat & LCD_END_OF_FRAME1) {
925 -			par->which_dma_channel_done = 1;
926 -			lcdc_write(par->dma_start,
927 -				   LCD_DMA_FRM_BUF_BASE_ADDR_1_REG);
928 -			lcdc_write(par->dma_end,
929 -				   LCD_DMA_FRM_BUF_CEILING_ADDR_1_REG);
930 -			par->vsync_flag = 1;
931 -			wake_up_interruptible(&par->vsync_wait);
932 -		}
933 -	}
934 -
935 -	return IRQ_HANDLED;
936 - }
937 -
938 - static int fb_check_var(struct fb_var_screeninfo *var,
939 -		struct fb_info *info)
940 - {
941 -	int err = 0;
942 -	struct da8xx_fb_par *par = info->par;
943 -	int bpp = var->bits_per_pixel >> 3;
944 -	unsigned long line_size = var->xres_virtual * bpp;
945 -
946 -	if (var->bits_per_pixel > 16 && lcd_revision == LCD_VERSION_1)
947 -		return -EINVAL;
948 -
949 -	switch (var->bits_per_pixel) {
950 -	case 1:
951 -	case 8:
952 -		var->red.offset = 0;
953 -		var->red.length = 8;
954 -		var->green.offset = 0;
955 -		var->green.length = 8;
956 -		var->blue.offset = 0;
957 -		var->blue.length = 8;
958 -		var->transp.offset = 0;
959 -		var->transp.length = 0;
960 -		var->nonstd = 0;
961 -		break;
962 -	case 4:
963 -		var->red.offset = 0;
964 -		var->red.length = 4;
965 -		var->green.offset = 0;
966 -		var->green.length = 4;
967 -		var->blue.offset = 0;
968 -		var->blue.length = 4;
969 -		var->transp.offset = 0;
970 -		var->transp.length = 0;
971 -		var->nonstd = FB_NONSTD_REV_PIX_IN_B;
972 -		break;
973 -	case 16:	/* RGB 565 */
974 -		var->red.offset = 11;
975 -		var->red.length = 5;
976 -		var->green.offset = 5;
977 -		var->green.length = 6;
978 -		var->blue.offset = 0;
979 -		var->blue.length = 5;
980 -		var->transp.offset = 0;
981 -		var->transp.length = 0;
982 -		var->nonstd = 0;
983 -		break;
984 -	case 24:
985 -		var->red.offset = 16;
986 -		var->red.length = 8;
987 -		var->green.offset = 8;
988 -		var->green.length = 8;
989 -		var->blue.offset = 0;
990 -		var->blue.length = 8;
991 -		var->nonstd = 0;
992 -		break;
993 -	case 32:
994 -		var->transp.offset = 24;
995 -		var->transp.length = 8;
996 -		var->red.offset = 16;
997 -		var->red.length = 8;
998 -		var->green.offset = 8;
999 -		var->green.length = 8;
1000 -		var->blue.offset = 0;
1001 -		var->blue.length = 8;
1002 -		var->nonstd = 0;
1003 -		break;
1004 -	default:
1005 -		err = -EINVAL;
1006 -	}
1007 -
1008 -	var->red.msb_right = 0;
1009 -	var->green.msb_right = 0;
1010 -	var->blue.msb_right = 0;
1011 -	var->transp.msb_right = 0;
1012 -
1013 -	if (line_size * var->yres_virtual > par->vram_size)
1014 -		var->yres_virtual = par->vram_size / line_size;
1015 -
1016 -	if (var->yres > var->yres_virtual)
1017 -		var->yres = var->yres_virtual;
1018 -
1019 -	if (var->xres > var->xres_virtual)
1020 -		var->xres = var->xres_virtual;
1021 -
1022 -	if (var->xres + var->xoffset > var->xres_virtual)
1023 -		var->xoffset = var->xres_virtual - var->xres;
1024 -	if (var->yres + var->yoffset > var->yres_virtual)
1025 -		var->yoffset = var->yres_virtual - var->yres;
1026 -
1027 -	var->pixclock = da8xx_fb_round_clk(par, var->pixclock);
1028 -
1029 -	return err;
1030 - }
1031 -
1032 - #ifdef CONFIG_CPU_FREQ
1033 - static int lcd_da8xx_cpufreq_transition(struct notifier_block *nb,
1034 -		unsigned long val, void *data)
1035 - {
1036 -	struct da8xx_fb_par *par;
1037 -
1038 -	par = container_of(nb, struct da8xx_fb_par, freq_transition);
1039 -	if (val == CPUFREQ_POSTCHANGE) {
1040 -		if (par->lcdc_clk_rate != clk_get_rate(par->lcdc_clk)) {
1041 -			par->lcdc_clk_rate = clk_get_rate(par->lcdc_clk);
1042 -			lcd_disable_raster(DA8XX_FRAME_WAIT);
1043 -			da8xx_fb_calc_config_clk_divider(par, &par->mode);
1044 -			if (par->blank == FB_BLANK_UNBLANK)
1045 -				lcd_enable_raster();
1046 -		}
1047 -	}
1048 -
1049 -	return 0;
1050 - }
1051 -
1052 - static int lcd_da8xx_cpufreq_register(struct da8xx_fb_par *par)
1053 - {
1054 -	par->freq_transition.notifier_call = lcd_da8xx_cpufreq_transition;
1055 -
1056 -	return cpufreq_register_notifier(&par->freq_transition,
1057 -			CPUFREQ_TRANSITION_NOTIFIER);
1058 - }
1059 -
1060 - static void lcd_da8xx_cpufreq_deregister(struct da8xx_fb_par *par)
1061 - {
1062 -	cpufreq_unregister_notifier(&par->freq_transition,
1063 -			CPUFREQ_TRANSITION_NOTIFIER);
1064 - }
1065 - #endif
1066 -
1067 - static void fb_remove(struct platform_device *dev)
1068 - {
1069 -	struct fb_info *info = platform_get_drvdata(dev);
1070 -	struct da8xx_fb_par *par = info->par;
1071 -	int ret;
1072 -
1073 - #ifdef CONFIG_CPU_FREQ
1074 -	lcd_da8xx_cpufreq_deregister(par);
1075 - #endif
1076 -	if (par->lcd_supply) {
1077 -		ret = regulator_disable(par->lcd_supply);
1078 -		if (ret)
1079 -			dev_warn(&dev->dev, "Failed to disable regulator (%pe)\n",
1080 -				 ERR_PTR(ret));
1081 -	}
1082 -
1083 -	lcd_disable_raster(DA8XX_FRAME_WAIT);
1084 -	lcdc_write(0, LCD_RASTER_CTRL_REG);
1085 -
1086 -	/* disable DMA */
1087 -	lcdc_write(0, LCD_DMA_CTRL_REG);
1088 -
1089 -	unregister_framebuffer(info);
1090 -	fb_dealloc_cmap(&info->cmap);
1091 -	pm_runtime_put_sync(&dev->dev);
1092 -	pm_runtime_disable(&dev->dev);
1093 -	framebuffer_release(info);
1094 - }
1095 -
1096 - /*
1097 -  * Function to wait for vertical sync which for this LCD peripheral
1098 -  * translates into waiting for the current raster frame to complete.
1099 -  */
1100 - static int fb_wait_for_vsync(struct fb_info *info)
1101 - {
1102 -	struct da8xx_fb_par *par = info->par;
1103 -	int ret;
1104 -
1105 -	/*
1106 -	 * Set flag to 0 and wait for isr to set to 1. It would seem there is a
1107 -	 * race condition here where the ISR could have occurred just before or
1108 -	 * just after this set. But since we are just coarsely waiting for
1109 -	 * a frame to complete then that's OK. i.e. if the frame completed
1110 -	 * just before this code executed then we have to wait another full
1111 -	 * frame time but there is no way to avoid such a situation. On the
1112 -	 * other hand if the frame completed just after then we don't need
1113 -	 * to wait long at all. Either way we are guaranteed to return to the
1114 -	 * user immediately after a frame completion which is all that is
1115 -	 * required.
1116 -	 */
1117 -	par->vsync_flag = 0;
1118 -	ret = wait_event_interruptible_timeout(par->vsync_wait,
1119 -			par->vsync_flag != 0,
1120 -			par->vsync_timeout);
1121 -	if (ret < 0)
1122 -		return ret;
1123 -	if (ret == 0)
1124 -		return -ETIMEDOUT;
1125 -
1126 -	return 0;
1127 - }
1128 -
1129 - static int fb_ioctl(struct fb_info *info, unsigned int cmd,
1130 -		unsigned long arg)
1131 - {
1132 -	struct lcd_sync_arg sync_arg;
1133 -
1134 -	switch (cmd) {
1135 -	case FBIOGET_CONTRAST:
1136 -	case FBIOPUT_CONTRAST:
1137 -	case FBIGET_BRIGHTNESS:
1138 -	case FBIPUT_BRIGHTNESS:
1139 -	case FBIGET_COLOR:
1140 -	case FBIPUT_COLOR:
1141 -		return -ENOTTY;
1142 -	case FBIPUT_HSYNC:
1143 -		if (copy_from_user(&sync_arg, (char *)arg,
1144 -				sizeof(struct lcd_sync_arg)))
1145 -			return -EFAULT;
1146 -		lcd_cfg_horizontal_sync(sync_arg.back_porch,
1147 -					sync_arg.pulse_width,
1148 -					sync_arg.front_porch);
1149 -		break;
1150 -	case FBIPUT_VSYNC:
1151 -		if (copy_from_user(&sync_arg, (char *)arg,
1152 -				sizeof(struct lcd_sync_arg)))
1153 -			return -EFAULT;
1154 -		lcd_cfg_vertical_sync(sync_arg.back_porch,
1155 -				      sync_arg.pulse_width,
1156 -				      sync_arg.front_porch);
1157 -		break;
1158 -	case FBIO_WAITFORVSYNC:
1159 -		return fb_wait_for_vsync(info);
1160 -	default:
1161 -		return -EINVAL;
1162 -	}
1163 -	return 0;
1164 - }
1165 -
1166 - static int cfb_blank(int blank, struct fb_info *info)
1167 - {
1168 -	struct da8xx_fb_par *par = info->par;
1169 -	int ret = 0;
1170 -
1171 -	if (par->blank == blank)
1172 -		return 0;
1173 -
1174 -	par->blank = blank;
1175 -	switch (blank) {
1176 -	case FB_BLANK_UNBLANK:
1177 -		lcd_enable_raster();
1178 -
1179 -		if (par->lcd_supply) {
1180 -			ret = regulator_enable(par->lcd_supply);
1181 -			if (ret)
1182 -				return ret;
1183 -		}
1184 -		break;
1185 -	case FB_BLANK_NORMAL:
1186 -	case FB_BLANK_VSYNC_SUSPEND:
1187 -	case FB_BLANK_HSYNC_SUSPEND:
1188 -	case FB_BLANK_POWERDOWN:
1189 -		if (par->lcd_supply) {
1190 -			ret = regulator_disable(par->lcd_supply);
1191 -			if (ret)
1192 -				return ret;
1193 -		}
1194 -
1195 -		lcd_disable_raster(DA8XX_FRAME_WAIT);
1196 -		break;
1197 -	default:
1198 -		ret = -EINVAL;
1199 -	}
1200 -
1201 -	return ret;
1202 - }
1203 -
1204 - /*
1205 -  * Set new x,y offsets in the virtual display for the visible area and switch
1206 -  * to the new mode.
1207 -  */
1208 - static int da8xx_pan_display(struct fb_var_screeninfo *var,
1209 -		struct fb_info *fbi)
1210 - {
1211 -	int ret = 0;
1212 -	struct fb_var_screeninfo new_var;
1213 -	struct da8xx_fb_par *par = fbi->par;
1214 -	struct fb_fix_screeninfo *fix = &fbi->fix;
1215 -	unsigned int end;
1216 -	unsigned int start;
1217 -	unsigned long irq_flags;
1218 -
1219 -	if (var->xoffset != fbi->var.xoffset ||
1220 -	    var->yoffset != fbi->var.yoffset) {
1221 -		memcpy(&new_var, &fbi->var, sizeof(new_var));
1222 -		new_var.xoffset = var->xoffset;
1223 -		new_var.yoffset = var->yoffset;
1224 -		if (fb_check_var(&new_var, fbi))
1225 -			ret = -EINVAL;
1226 -		else {
1227 -			memcpy(&fbi->var, &new_var, sizeof(new_var));
1228 -
1229 -			start = fix->smem_start +
1230 -				new_var.yoffset * fix->line_length +
1231 -				new_var.xoffset * fbi->var.bits_per_pixel / 8;
1232 -			end = start + fbi->var.yres * fix->line_length - 1;
1233 -			par->dma_start = start;
1234 -			par->dma_end = end;
1235 -			spin_lock_irqsave(&par->lock_for_chan_update,
1236 -					  irq_flags);
1237 -			if (par->which_dma_channel_done == 0) {
1238 -				lcdc_write(par->dma_start,
1239 -					   LCD_DMA_FRM_BUF_BASE_ADDR_0_REG);
1240 -				lcdc_write(par->dma_end,
1241 -					   LCD_DMA_FRM_BUF_CEILING_ADDR_0_REG);
1242 -			} else if (par->which_dma_channel_done == 1) {
1243 -				lcdc_write(par->dma_start,
1244 -					   LCD_DMA_FRM_BUF_BASE_ADDR_1_REG);
1245 -				lcdc_write(par->dma_end,
1246 -					   LCD_DMA_FRM_BUF_CEILING_ADDR_1_REG);
1247 -			}
1248 -			spin_unlock_irqrestore(&par->lock_for_chan_update,
1249 -					       irq_flags);
1250 -		}
1251 -	}
1252 -
1253 -	return ret;
1254 - }
1255 -
1256 - static int da8xxfb_set_par(struct fb_info *info)
1257 - {
1258 -	struct da8xx_fb_par *par = info->par;
1259 -	int ret;
1260 -	bool raster = da8xx_fb_is_raster_enabled();
1261 -
1262 -	if (raster)
1263 -		lcd_disable_raster(DA8XX_FRAME_WAIT);
1264 -
1265 -	fb_var_to_videomode(&par->mode, &info->var);
1266 -
1267 -	par->cfg.bpp = info->var.bits_per_pixel;
1268 -
1269 -	info->fix.visual = (par->cfg.bpp <= 8) ?
1270 -		FB_VISUAL_PSEUDOCOLOR : FB_VISUAL_TRUECOLOR;
1271 -	info->fix.line_length = (par->mode.xres * par->cfg.bpp) / 8;
1272 -
1273 -	ret = lcd_init(par, &par->cfg, &par->mode);
1274 -	if (ret < 0) {
1275 -		dev_err(par->dev, "lcd init failed\n");
1276 -		return ret;
1277 -	}
1278 -
1279 -	par->dma_start = info->fix.smem_start +
1280 -			 info->var.yoffset * info->fix.line_length +
1281 -			 info->var.xoffset * info->var.bits_per_pixel / 8;
1282 -	par->dma_end = par->dma_start +
1283 -			 info->var.yres * info->fix.line_length - 1;
1284 -
1285 -	lcdc_write(par->dma_start, LCD_DMA_FRM_BUF_BASE_ADDR_0_REG);
1286 -	lcdc_write(par->dma_end, LCD_DMA_FRM_BUF_CEILING_ADDR_0_REG);
1287 -	lcdc_write(par->dma_start, LCD_DMA_FRM_BUF_BASE_ADDR_1_REG);
1288 -	lcdc_write(par->dma_end, LCD_DMA_FRM_BUF_CEILING_ADDR_1_REG);
1289 -
1290 -	if (raster)
1291 -		lcd_enable_raster();
1292 -
1293 -	return 0;
1294 - }
1295 -
1296 - static const struct fb_ops da8xx_fb_ops = {
1297 -	.owner = THIS_MODULE,
1298 -	FB_DEFAULT_IOMEM_OPS,
1299 -	.fb_check_var = fb_check_var,
1300 -	.fb_set_par = da8xxfb_set_par,
1301 -	.fb_setcolreg = fb_setcolreg,
1302 -	.fb_pan_display = da8xx_pan_display,
1303 -	.fb_ioctl = fb_ioctl,
1304 -	.fb_blank = cfb_blank,
1305 - };
1306 -
1307 - static struct fb_videomode *da8xx_fb_get_videomode(struct platform_device *dev)
1308 - {
1309 -	struct da8xx_lcdc_platform_data *fb_pdata = dev_get_platdata(&dev->dev);
1310 -	struct fb_videomode *lcdc_info;
1311 -	int i;
1312 -
1313 -	for (i = 0, lcdc_info = known_lcd_panels;
1314 -	     i < ARRAY_SIZE(known_lcd_panels); i++, lcdc_info++) {
1315 -		if (strcmp(fb_pdata->type, lcdc_info->name) == 0)
1316 -			break;
1317 -	}
1318 -
1319 -	if (i == ARRAY_SIZE(known_lcd_panels)) {
1320 -		dev_err(&dev->dev, "no panel found\n");
1321 -		return NULL;
1322 -	}
1323 -	dev_info(&dev->dev, "found %s panel\n", lcdc_info->name);
1324 -
1325 -	return lcdc_info;
1326 - }
1327 -
1328 - static int fb_probe(struct platform_device *device)
1329 - {
1330 -	struct da8xx_lcdc_platform_data *fb_pdata =
1331 -		dev_get_platdata(&device->dev);
1332 -	struct lcd_ctrl_config *lcd_cfg;
1333 -	struct fb_videomode *lcdc_info;
1334 -	struct fb_info *da8xx_fb_info;
1335 -	struct da8xx_fb_par *par;
1336 -	struct clk *tmp_lcdc_clk;
1337 -	int ret;
1338 -	unsigned long ulcm;
1339 -
1340 -	if (fb_pdata == NULL) {
1341 -		dev_err(&device->dev, "Can not get platform data\n");
1342 -		return -ENOENT;
1343 -	}
1344 -
1345 -	lcdc_info = da8xx_fb_get_videomode(device);
1346 -	if (lcdc_info == NULL)
1347 -		return -ENODEV;
1348 -
1349 -	da8xx_fb_reg_base = devm_platform_ioremap_resource(device, 0);
1350 -	if (IS_ERR(da8xx_fb_reg_base))
1351 -		return PTR_ERR(da8xx_fb_reg_base);
1352 -
1353 -	tmp_lcdc_clk = devm_clk_get(&device->dev, "fck");
1354 -	if (IS_ERR(tmp_lcdc_clk))
1355 -		return dev_err_probe(&device->dev, PTR_ERR(tmp_lcdc_clk),
1356 -				     "Can not get device clock\n");
1357 -
1358 -	pm_runtime_enable(&device->dev);
1359 -	pm_runtime_get_sync(&device->dev);
1360 -
1361 -	/* Determine LCD IP Version */
1362 -	switch (lcdc_read(LCD_PID_REG)) {
1363 -	case 0x4C100102:
1364 -		lcd_revision = LCD_VERSION_1;
1365 -		break;
1366 -	case 0x4F200800:
1367 -	case 0x4F201000:
1368 -		lcd_revision = LCD_VERSION_2;
1369 -		break;
1370 -	default:
1371 -		dev_warn(&device->dev, "Unknown PID Reg value 0x%x, "
1372 -			 "defaulting to LCD revision 1\n",
1373 -			 lcdc_read(LCD_PID_REG));
1374 -		lcd_revision = LCD_VERSION_1;
1375 -		break;
1376 -	}
1377 -
1378 -	lcd_cfg = (struct lcd_ctrl_config *)fb_pdata->controller_data;
1379 -
1380 -	if (!lcd_cfg) {
1381 -		ret = -EINVAL;
1382 -		goto err_pm_runtime_disable;
1383 -	}
1384 -
1385 -	da8xx_fb_info = framebuffer_alloc(sizeof(struct da8xx_fb_par),
1386 -					  &device->dev);
1387 -	if (!da8xx_fb_info) {
1388 -		ret = -ENOMEM;
1389 -		goto err_pm_runtime_disable;
1390 -	}
1391 -
1392 -	par = da8xx_fb_info->par;
1393 -	par->dev = &device->dev;
1394 -	par->lcdc_clk = tmp_lcdc_clk;
1395 -	par->lcdc_clk_rate = clk_get_rate(par->lcdc_clk);
1396 -
1397 -	par->lcd_supply = devm_regulator_get_optional(&device->dev, "lcd");
1398 -	if (IS_ERR(par->lcd_supply)) {
1399 -		if (PTR_ERR(par->lcd_supply) == -EPROBE_DEFER) {
1400 -			ret = -EPROBE_DEFER;
1401 -			goto err_release_fb;
1402 -		}
1403 -
1404 -		par->lcd_supply = NULL;
1405 -	} else {
1406 -		ret = regulator_enable(par->lcd_supply);
1407 -		if (ret)
1408 -			goto err_release_fb;
1409 -	}
1410 -
1411 -	fb_videomode_to_var(&da8xx_fb_var, lcdc_info);
1412 -	par->cfg = *lcd_cfg;
1413 -
1414 -	da8xx_fb_lcd_reset();
1415 -
1416 -	/* allocate frame buffer */
1417 -	par->vram_size = lcdc_info->xres * lcdc_info->yres * lcd_cfg->bpp;
1418 -	ulcm = lcm((lcdc_info->xres * lcd_cfg->bpp)/8, PAGE_SIZE);
1419 -	par->vram_size = roundup(par->vram_size/8, ulcm);
1420 -	par->vram_size = par->vram_size * LCD_NUM_BUFFERS;
1421 -
1422 -	par->vram_virt = dmam_alloc_coherent(par->dev,
1423 -					     par->vram_size,
1424 -					     &par->vram_phys,
1425 -					     GFP_KERNEL | GFP_DMA);
1426 -	if (!par->vram_virt) {
1427 -		dev_err(&device->dev,
1428 -			"GLCD: kmalloc for frame buffer failed\n");
1429 -		ret = -EINVAL;
1430 -		goto err_disable_reg;
1431 -	}
1432 -
1433 -	da8xx_fb_info->screen_base = (char __iomem *) par->vram_virt;
1434 -	da8xx_fb_fix.smem_start = par->vram_phys;
1435 -	da8xx_fb_fix.smem_len = par->vram_size;
1436 -	da8xx_fb_fix.line_length = (lcdc_info->xres * lcd_cfg->bpp) / 8;
1437 -
1438 -	par->dma_start = par->vram_phys;
1439 -	par->dma_end = par->dma_start + lcdc_info->yres *
1440 -		da8xx_fb_fix.line_length - 1;
1441 -
1442 -	/* allocate palette buffer */
1443 -	par->v_palette_base = dmam_alloc_coherent(par->dev, PALETTE_SIZE,
1444 -						  &par->p_palette_base,
1445 -						  GFP_KERNEL | GFP_DMA);
1446 -	if (!par->v_palette_base) {
1447 -		dev_err(&device->dev,
1448 -			"GLCD: kmalloc for palette buffer failed\n");
1449 -		ret = -EINVAL;
1450 -		goto err_release_fb;
1451 -	}
1452 -
1453 -	par->irq = platform_get_irq(device, 0);
1454 -	if (par->irq < 0) {
1455 -		ret = -ENOENT;
1456 -		goto err_release_fb;
1457 -	}
1458 -
1459 -	da8xx_fb_var.grayscale =
1460 -		lcd_cfg->panel_shade == MONOCHROME ? 1 : 0;
1461 -	da8xx_fb_var.bits_per_pixel = lcd_cfg->bpp;
1462 -
1463 -	/* Initialize fbinfo */
1464 -	da8xx_fb_info->fix = da8xx_fb_fix;
1465 -	da8xx_fb_info->var = da8xx_fb_var;
1466 -	da8xx_fb_info->fbops = &da8xx_fb_ops;
1467 -	da8xx_fb_info->pseudo_palette = par->pseudo_palette;
1468 -	da8xx_fb_info->fix.visual = (da8xx_fb_info->var.bits_per_pixel <= 8) ?
1469 -				FB_VISUAL_PSEUDOCOLOR : FB_VISUAL_TRUECOLOR;
1470 -
1471 -	ret = fb_alloc_cmap(&da8xx_fb_info->cmap, PALETTE_SIZE, 0);
1472 -	if (ret)
1473 -		goto err_disable_reg;
1474 -	da8xx_fb_info->cmap.len = par->palette_sz;
1475 -
1476 -	/* initialize var_screeninfo */
1477 -	da8xx_fb_var.activate = FB_ACTIVATE_FORCE;
1478 -	fb_set_var(da8xx_fb_info, &da8xx_fb_var);
1479 -
1480 -	platform_set_drvdata(device, da8xx_fb_info);
1481 -
1482 -	/* initialize the vsync wait queue */
1483 -	init_waitqueue_head(&par->vsync_wait);
1484 -	par->vsync_timeout = HZ / 5;
1485 -	par->which_dma_channel_done = -1;
1486 -	spin_lock_init(&par->lock_for_chan_update);
1487 -
1488 -	/* Register the Frame Buffer */
1489 -	if (register_framebuffer(da8xx_fb_info) < 0) {
1490 -		dev_err(&device->dev,
1491 -			"GLCD: Frame Buffer Registration Failed!\n");
1492 -		ret = -EINVAL;
1493 -		goto err_dealloc_cmap;
1494 -	}
1495 -
1496 - #ifdef CONFIG_CPU_FREQ
1497 -	ret = lcd_da8xx_cpufreq_register(par);
1498 -	if (ret) {
1499 -		dev_err(&device->dev, "failed to register cpufreq\n");
1500 -		goto err_cpu_freq;
1501 -	}
1502 - #endif
1503 -
1504 -	if (lcd_revision == LCD_VERSION_1)
1505 -		lcdc_irq_handler = lcdc_irq_handler_rev01;
1506 -	else {
1507 -		init_waitqueue_head(&frame_done_wq);
1508 -		lcdc_irq_handler = lcdc_irq_handler_rev02;
1509 -	}
1510 -
1511 -	ret = devm_request_irq(&device->dev, par->irq, lcdc_irq_handler, 0,
1512 -			       DRIVER_NAME, par);
1513 -	if (ret)
1514 -		goto irq_freq;
1515 -	return 0;
1516 -
1517 - irq_freq:
1518 - #ifdef CONFIG_CPU_FREQ
1519 -	lcd_da8xx_cpufreq_deregister(par);
1520 - err_cpu_freq:
1521 - #endif
1522 -	unregister_framebuffer(da8xx_fb_info);
1523 -
1524 - err_dealloc_cmap:
1525 -	fb_dealloc_cmap(&da8xx_fb_info->cmap);
1526 -
1527 - err_disable_reg:
1528 -	if (par->lcd_supply)
1529 -		regulator_disable(par->lcd_supply);
1530 - err_release_fb:
1531 -	framebuffer_release(da8xx_fb_info);
1532 -
1533 - err_pm_runtime_disable:
1534 -	pm_runtime_put_sync(&device->dev);
1535 -	pm_runtime_disable(&device->dev);
1536 -
1537 -	return ret;
1538 - }
1539 -
1540 - #ifdef CONFIG_PM_SLEEP
1541 - static struct lcdc_context {
1542 -	u32 clk_enable;
1543 -	u32 ctrl;
1544 -	u32 dma_ctrl;
1545 -	u32 raster_timing_0;
1546 -	u32 raster_timing_1;
1547 -	u32 raster_timing_2;
1548 -	u32 int_enable_set;
1549 -	u32 dma_frm_buf_base_addr_0;
1550 -	u32 dma_frm_buf_ceiling_addr_0;
1551 -	u32 dma_frm_buf_base_addr_1;
1552 -	u32 dma_frm_buf_ceiling_addr_1;
1553 -	u32 raster_ctrl;
1554 - } reg_context;
1555 -
1556 - static void lcd_context_save(void)
1557 - {
1558 -	if (lcd_revision == LCD_VERSION_2) {
1559 -		reg_context.clk_enable = lcdc_read(LCD_CLK_ENABLE_REG);
1560 -		reg_context.int_enable_set = lcdc_read(LCD_INT_ENABLE_SET_REG);
1561 -	}
1562 -
1563 -	reg_context.ctrl = lcdc_read(LCD_CTRL_REG);
1564 -	reg_context.dma_ctrl = lcdc_read(LCD_DMA_CTRL_REG);
1565 -	reg_context.raster_timing_0 = lcdc_read(LCD_RASTER_TIMING_0_REG);
1566 -	reg_context.raster_timing_1 = lcdc_read(LCD_RASTER_TIMING_1_REG);
1567 -	reg_context.raster_timing_2 = lcdc_read(LCD_RASTER_TIMING_2_REG);
1568 -	reg_context.dma_frm_buf_base_addr_0 =
1569 -		lcdc_read(LCD_DMA_FRM_BUF_BASE_ADDR_0_REG);
1570 -	reg_context.dma_frm_buf_ceiling_addr_0 =
1571 -		lcdc_read(LCD_DMA_FRM_BUF_CEILING_ADDR_0_REG);
1572 -	reg_context.dma_frm_buf_base_addr_1 =
1573 -		lcdc_read(LCD_DMA_FRM_BUF_BASE_ADDR_1_REG);
1574 -	reg_context.dma_frm_buf_ceiling_addr_1 =
1575 -		lcdc_read(LCD_DMA_FRM_BUF_CEILING_ADDR_1_REG);
1576 -	reg_context.raster_ctrl = lcdc_read(LCD_RASTER_CTRL_REG);
1577 -	return;
1578 - }
1579 -
1580 - static void lcd_context_restore(void)
1581 - {
1582 -	if (lcd_revision == LCD_VERSION_2) {
1583 -		lcdc_write(reg_context.clk_enable, LCD_CLK_ENABLE_REG);
1584 -		lcdc_write(reg_context.int_enable_set, LCD_INT_ENABLE_SET_REG);
1585 -	}
1586 -
1587 -	lcdc_write(reg_context.ctrl, LCD_CTRL_REG);
1588 -	lcdc_write(reg_context.dma_ctrl, LCD_DMA_CTRL_REG);
1589 -	lcdc_write(reg_context.raster_timing_0, LCD_RASTER_TIMING_0_REG);
1590 -	lcdc_write(reg_context.raster_timing_1, LCD_RASTER_TIMING_1_REG);
1591 -	lcdc_write(reg_context.raster_timing_2, LCD_RASTER_TIMING_2_REG);
1592 -	lcdc_write(reg_context.dma_frm_buf_base_addr_0,
1593 -			LCD_DMA_FRM_BUF_BASE_ADDR_0_REG);
1594 -	lcdc_write(reg_context.dma_frm_buf_ceiling_addr_0,
1595 -			LCD_DMA_FRM_BUF_CEILING_ADDR_0_REG);
1596 -	lcdc_write(reg_context.dma_frm_buf_base_addr_1,
1597 -			LCD_DMA_FRM_BUF_BASE_ADDR_1_REG);
1598 -	lcdc_write(reg_context.dma_frm_buf_ceiling_addr_1,
1599 -			LCD_DMA_FRM_BUF_CEILING_ADDR_1_REG);
1600 -	lcdc_write(reg_context.raster_ctrl, LCD_RASTER_CTRL_REG);
1601 -	return;
1602 - }
1603 -
1604 - static int fb_suspend(struct device *dev)
1605 - {
1606 -	struct fb_info *info = dev_get_drvdata(dev);
1607 -	struct da8xx_fb_par *par = info->par;
1608 -	int ret;
1609 -
1610 -	console_lock();
1611 -	if (par->lcd_supply) {
1612 -		ret = regulator_disable(par->lcd_supply);
1613 -		if (ret)
1614 -			return ret;
1615 -	}
1616 -
1617 -	fb_set_suspend(info, 1);
1618 -	lcd_disable_raster(DA8XX_FRAME_WAIT);
1619 -	lcd_context_save();
1620 -	pm_runtime_put_sync(dev);
1621 -	console_unlock();
1622 -
1623 -	return 0;
1624 - }
1625 - static int fb_resume(struct device *dev)
1626 - {
1627 -	struct fb_info *info = dev_get_drvdata(dev);
1628 -	struct da8xx_fb_par *par = info->par;
1629 -	int ret;
1630 -
1631 -	console_lock();
1632 -	pm_runtime_get_sync(dev);
1633 -	lcd_context_restore();
1634 -	if (par->blank == FB_BLANK_UNBLANK) {
1635 -		lcd_enable_raster();
1636
- 1637 - if (par->lcd_supply) { 1638 - ret = regulator_enable(par->lcd_supply); 1639 - if (ret) 1640 - return ret; 1641 - } 1642 - } 1643 - 1644 - fb_set_suspend(info, 0); 1645 - console_unlock(); 1646 - 1647 - return 0; 1648 - } 1649 - #endif 1650 - 1651 - static SIMPLE_DEV_PM_OPS(fb_pm_ops, fb_suspend, fb_resume); 1652 - 1653 - static struct platform_driver da8xx_fb_driver = { 1654 - .probe = fb_probe, 1655 - .remove = fb_remove, 1656 - .driver = { 1657 - .name = DRIVER_NAME, 1658 - .pm = &fb_pm_ops, 1659 - }, 1660 - }; 1661 - module_platform_driver(da8xx_fb_driver); 1662 - 1663 - MODULE_DESCRIPTION("Framebuffer driver for TI da8xx/omap-l1xx"); 1664 - MODULE_AUTHOR("Texas Instruments"); 1665 - MODULE_LICENSE("GPL");
+1 -1
drivers/video/fbdev/ffb.c
··· 710 710 return 0; 711 711 } 712 712 713 - static struct sbus_mmap_map ffb_mmap_map[] = { 713 + static const struct sbus_mmap_map ffb_mmap_map[] = { 714 714 { 715 715 .voff = FFB_SFB8R_VOFF, 716 716 .poff = FFB_SFB8R_POFF,
+1 -1
drivers/video/fbdev/leo.c
··· 338 338 return 0; 339 339 } 340 340 341 - static struct sbus_mmap_map leo_mmap_map[] = { 341 + static const struct sbus_mmap_map leo_mmap_map[] = { 342 342 { 343 343 .voff = LEO_SS0_MAP, 344 344 .poff = LEO_OFF_SS0,
+4 -4
drivers/video/fbdev/nvidia/nv_hw.c
··· 1509 1509 NV_WR32(par->PFIFO, 0x0495 * 4, 0x00000001); 1510 1510 NV_WR32(par->PFIFO, 0x0140 * 4, 0x00000001); 1511 1511 1512 - if (!state) { 1513 - par->CurrentState = NULL; 1514 - return; 1515 - } 1512 + if (!state) { 1513 + par->CurrentState = NULL; 1514 + return; 1515 + } 1516 1516 1517 1517 if (par->Architecture >= NV_ARCH_10) { 1518 1518 if (par->twoHeads) {
+1 -1
drivers/video/fbdev/p9100.c
··· 206 206 return 0; 207 207 } 208 208 209 - static struct sbus_mmap_map p9100_mmap_map[] = { 209 + static const struct sbus_mmap_map p9100_mmap_map[] = { 210 210 { CG3_MMAP_OFFSET, 0, SBUS_MMAP_FBSIZE(1) }, 211 211 { 0, 0, 0 } 212 212 };
+1 -1
drivers/video/fbdev/sbuslib.c
··· 38 38 return fbsize * (-size); 39 39 } 40 40 41 - int sbusfb_mmap_helper(struct sbus_mmap_map *map, 41 + int sbusfb_mmap_helper(const struct sbus_mmap_map *map, 42 42 unsigned long physbase, 43 43 unsigned long fbsize, 44 44 unsigned long iospace,
+1 -1
drivers/video/fbdev/sbuslib.h
··· 19 19 20 20 extern void sbusfb_fill_var(struct fb_var_screeninfo *var, 21 21 struct device_node *dp, int bpp); 22 - extern int sbusfb_mmap_helper(struct sbus_mmap_map *map, 22 + extern int sbusfb_mmap_helper(const struct sbus_mmap_map *map, 23 23 unsigned long physbase, unsigned long fbsize, 24 24 unsigned long iospace, 25 25 struct vm_area_struct *vma);
+7 -2
drivers/video/fbdev/sstfb.c
··· 716 716 pci_write_config_dword(sst_dev, PCI_INIT_ENABLE, tmp); 717 717 } 718 718 719 + #ifdef CONFIG_FB_DEVICE 719 720 static ssize_t store_vgapass(struct device *device, struct device_attribute *attr, 720 721 const char *buf, size_t count) 721 722 { ··· 740 739 741 740 static struct device_attribute device_attrs[] = { 742 741 __ATTR(vgapass, S_IRUGO|S_IWUSR, show_vgapass, store_vgapass) 743 - }; 742 + }; 743 + #endif 744 744 745 745 static int sstfb_ioctl(struct fb_info *info, unsigned int cmd, 746 746 unsigned long arg) ··· 1438 1436 1439 1437 sstfb_clear_screen(info); 1440 1438 1439 + #ifdef CONFIG_FB_DEVICE 1441 1440 if (device_create_file(info->dev, &device_attrs[0])) 1442 1441 printk(KERN_WARNING "sstfb: can't create sysfs entry.\n"); 1443 - 1442 + #endif 1444 1443 1445 1444 fb_info(info, "%s frame buffer device at 0x%p\n", 1446 1445 fix->id, info->screen_base); ··· 1471 1468 info = pci_get_drvdata(pdev); 1472 1469 par = info->par; 1473 1470 1471 + #ifdef CONFIG_FB_DEVICE 1474 1472 device_remove_file(info->dev, &device_attrs[0]); 1473 + #endif 1475 1474 sst_shutdown(info); 1476 1475 iounmap(info->screen_base); 1477 1476 iounmap(par->mmio_vbase);
+1 -1
drivers/video/fbdev/tcx.c
··· 236 236 return 0; 237 237 } 238 238 239 - static struct sbus_mmap_map __tcx_mmap_map[TCX_MMAP_ENTRIES] = { 239 + static const struct sbus_mmap_map __tcx_mmap_map[TCX_MMAP_ENTRIES] = { 240 240 { 241 241 .voff = TCX_RAM8BIT, 242 242 .size = SBUS_MMAP_FBSIZE(1)
+27 -7
fs/9p/v9fs.h
··· 179 179 struct inode *old_dir, struct dentry *old_dentry, 180 180 struct inode *new_dir, struct dentry *new_dentry, 181 181 unsigned int flags); 182 - extern struct inode *v9fs_fid_iget(struct super_block *sb, struct p9_fid *fid, 183 - bool new); 182 + extern struct inode *v9fs_inode_from_fid(struct v9fs_session_info *v9ses, 183 + struct p9_fid *fid, 184 + struct super_block *sb, int new); 184 185 extern const struct inode_operations v9fs_dir_inode_operations_dotl; 185 186 extern const struct inode_operations v9fs_file_inode_operations_dotl; 186 187 extern const struct inode_operations v9fs_symlink_inode_operations_dotl; 187 188 extern const struct netfs_request_ops v9fs_req_ops; 188 - extern struct inode *v9fs_fid_iget_dotl(struct super_block *sb, 189 - struct p9_fid *fid, bool new); 189 + extern struct inode *v9fs_inode_from_fid_dotl(struct v9fs_session_info *v9ses, 190 + struct p9_fid *fid, 191 + struct super_block *sb, int new); 190 192 191 193 /* other default globals */ 192 194 #define V9FS_PORT 564 ··· 227 225 */ 228 226 static inline struct inode * 229 227 v9fs_get_inode_from_fid(struct v9fs_session_info *v9ses, struct p9_fid *fid, 230 - struct super_block *sb, bool new) 228 + struct super_block *sb) 231 229 { 232 230 if (v9fs_proto_dotl(v9ses)) 233 - return v9fs_fid_iget_dotl(sb, fid, new); 231 + return v9fs_inode_from_fid_dotl(v9ses, fid, sb, 0); 234 232 else 235 - return v9fs_fid_iget(sb, fid, new); 233 + return v9fs_inode_from_fid(v9ses, fid, sb, 0); 234 + } 235 + 236 + /** 237 + * v9fs_get_new_inode_from_fid - Helper routine to populate an inode by 238 + * issuing an attribute request 239 + * @v9ses: session information 240 + * @fid: fid to issue attribute request for 241 + * @sb: superblock on which to create inode 242 + * 243 + */ 244 + static inline struct inode * 245 + v9fs_get_new_inode_from_fid(struct v9fs_session_info *v9ses, struct p9_fid *fid, 246 + struct super_block *sb) 247 + { 248 + if (v9fs_proto_dotl(v9ses)) 249 + return 
v9fs_inode_from_fid_dotl(v9ses, fid, sb, 1); 250 + else 251 + return v9fs_inode_from_fid(v9ses, fid, sb, 1); 236 252 } 237 253 238 254 #endif
+1 -1
fs/9p/v9fs_vfs.h
··· 42 42 void v9fs_free_inode(struct inode *inode); 43 43 void v9fs_set_netfs_context(struct inode *inode); 44 44 int v9fs_init_inode(struct v9fs_session_info *v9ses, 45 - struct inode *inode, struct p9_qid *qid, umode_t mode, dev_t rdev); 45 + struct inode *inode, umode_t mode, dev_t rdev); 46 46 void v9fs_evict_inode(struct inode *inode); 47 47 #if (BITS_PER_LONG == 32) 48 48 #define QID2INO(q) ((ino_t) (((q)->path+2) ^ (((q)->path) >> 32)))
+82 -47
fs/9p/vfs_inode.c
··· 256 256 } 257 257 258 258 int v9fs_init_inode(struct v9fs_session_info *v9ses, 259 - struct inode *inode, struct p9_qid *qid, umode_t mode, dev_t rdev) 259 + struct inode *inode, umode_t mode, dev_t rdev) 260 260 { 261 261 int err = 0; 262 - struct v9fs_inode *v9inode = V9FS_I(inode); 263 - 264 - memcpy(&v9inode->qid, qid, sizeof(struct p9_qid)); 265 262 266 263 inode_init_owner(&nop_mnt_idmap, inode, NULL, mode); 267 264 inode->i_blocks = 0; ··· 362 365 clear_inode(inode); 363 366 } 364 367 365 - struct inode * 366 - v9fs_fid_iget(struct super_block *sb, struct p9_fid *fid, bool new) 368 + static int v9fs_test_inode(struct inode *inode, void *data) 369 + { 370 + int umode; 371 + dev_t rdev; 372 + struct v9fs_inode *v9inode = V9FS_I(inode); 373 + struct p9_wstat *st = (struct p9_wstat *)data; 374 + struct v9fs_session_info *v9ses = v9fs_inode2v9ses(inode); 375 + 376 + umode = p9mode2unixmode(v9ses, st, &rdev); 377 + /* don't match inode of different type */ 378 + if (inode_wrong_type(inode, umode)) 379 + return 0; 380 + 381 + /* compare qid details */ 382 + if (memcmp(&v9inode->qid.version, 383 + &st->qid.version, sizeof(v9inode->qid.version))) 384 + return 0; 385 + 386 + if (v9inode->qid.type != st->qid.type) 387 + return 0; 388 + 389 + if (v9inode->qid.path != st->qid.path) 390 + return 0; 391 + return 1; 392 + } 393 + 394 + static int v9fs_test_new_inode(struct inode *inode, void *data) 395 + { 396 + return 0; 397 + } 398 + 399 + static int v9fs_set_inode(struct inode *inode, void *data) 400 + { 401 + struct v9fs_inode *v9inode = V9FS_I(inode); 402 + struct p9_wstat *st = (struct p9_wstat *)data; 403 + 404 + memcpy(&v9inode->qid, &st->qid, sizeof(st->qid)); 405 + return 0; 406 + } 407 + 408 + static struct inode *v9fs_qid_iget(struct super_block *sb, 409 + struct p9_qid *qid, 410 + struct p9_wstat *st, 411 + int new) 367 412 { 368 413 dev_t rdev; 369 414 int retval; 370 415 umode_t umode; 371 416 struct inode *inode; 372 - struct p9_wstat *st; 373 417 struct 
v9fs_session_info *v9ses = sb->s_fs_info; 418 + int (*test)(struct inode *inode, void *data); 374 419 375 - inode = iget_locked(sb, QID2INO(&fid->qid)); 376 - if (unlikely(!inode)) 420 + if (new) 421 + test = v9fs_test_new_inode; 422 + else 423 + test = v9fs_test_inode; 424 + 425 + inode = iget5_locked(sb, QID2INO(qid), test, v9fs_set_inode, st); 426 + if (!inode) 377 427 return ERR_PTR(-ENOMEM); 378 - if (!(inode->i_state & I_NEW)) { 379 - if (!new) { 380 - goto done; 381 - } else { 382 - p9_debug(P9_DEBUG_VFS, "WARNING: Inode collision %ld\n", 383 - inode->i_ino); 384 - iput(inode); 385 - remove_inode_hash(inode); 386 - inode = iget_locked(sb, QID2INO(&fid->qid)); 387 - WARN_ON(!(inode->i_state & I_NEW)); 388 - } 389 - } 390 - 428 + if (!(inode->i_state & I_NEW)) 429 + return inode; 391 430 /* 392 431 * initialize the inode with the stat info 393 432 * FIXME!! we may need support for stale inodes 394 433 * later. 395 434 */ 396 - st = p9_client_stat(fid); 397 - if (IS_ERR(st)) { 398 - retval = PTR_ERR(st); 399 - goto error; 400 - } 401 - 435 + inode->i_ino = QID2INO(qid); 402 436 umode = p9mode2unixmode(v9ses, st, &rdev); 403 - retval = v9fs_init_inode(v9ses, inode, &fid->qid, umode, rdev); 404 - v9fs_stat2inode(st, inode, sb, 0); 405 - p9stat_free(st); 406 - kfree(st); 437 + retval = v9fs_init_inode(v9ses, inode, umode, rdev); 407 438 if (retval) 408 439 goto error; 409 440 441 + v9fs_stat2inode(st, inode, sb, 0); 410 442 v9fs_set_netfs_context(inode); 411 443 v9fs_cache_inode_get_cookie(inode); 412 444 unlock_new_inode(inode); 413 - done: 414 445 return inode; 415 446 error: 416 447 iget_failed(inode); 417 448 return ERR_PTR(retval); 449 + 450 + } 451 + 452 + struct inode * 453 + v9fs_inode_from_fid(struct v9fs_session_info *v9ses, struct p9_fid *fid, 454 + struct super_block *sb, int new) 455 + { 456 + struct p9_wstat *st; 457 + struct inode *inode = NULL; 458 + 459 + st = p9_client_stat(fid); 460 + if (IS_ERR(st)) 461 + return ERR_CAST(st); 462 + 463 + inode 
= v9fs_qid_iget(sb, &st->qid, st, new); 464 + p9stat_free(st); 465 + kfree(st); 466 + return inode; 418 467 } 419 468 420 469 /** ··· 492 449 */ 493 450 static void v9fs_dec_count(struct inode *inode) 494 451 { 495 - if (!S_ISDIR(inode->i_mode) || inode->i_nlink > 2) { 496 - if (inode->i_nlink) { 497 - drop_nlink(inode); 498 - } else { 499 - p9_debug(P9_DEBUG_VFS, 500 - "WARNING: unexpected i_nlink zero %d inode %ld\n", 501 - inode->i_nlink, inode->i_ino); 502 - } 503 - } 452 + if (!S_ISDIR(inode->i_mode) || inode->i_nlink > 2) 453 + drop_nlink(inode); 504 454 } 505 455 506 456 /** ··· 543 507 v9fs_dec_count(dir); 544 508 } else 545 509 v9fs_dec_count(inode); 546 - 547 - if (inode->i_nlink <= 0) /* no more refs unhash it */ 548 - remove_inode_hash(inode); 549 510 550 511 v9fs_invalidate_inode_attr(inode); 551 512 v9fs_invalidate_inode_attr(dir); ··· 609 576 /* 610 577 * instantiate inode and assign the unopened fid to the dentry 611 578 */ 612 - inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb, true); 579 + inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb); 613 580 if (IS_ERR(inode)) { 614 581 err = PTR_ERR(inode); 615 582 p9_debug(P9_DEBUG_VFS, ··· 737 704 inode = NULL; 738 705 else if (IS_ERR(fid)) 739 706 inode = ERR_CAST(fid); 707 + else if (v9ses->cache & (CACHE_META|CACHE_LOOSE)) 708 + inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb); 740 709 else 741 - inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb, false); 710 + inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb); 742 711 /* 743 712 * If we had a rename on the server and a parallel lookup 744 713 * for the new name, then make sure we instantiate with
+81 -31
fs/9p/vfs_inode_dotl.c
··· 52 52 return current_fsgid(); 53 53 } 54 54 55 + static int v9fs_test_inode_dotl(struct inode *inode, void *data) 56 + { 57 + struct v9fs_inode *v9inode = V9FS_I(inode); 58 + struct p9_stat_dotl *st = (struct p9_stat_dotl *)data; 55 59 60 + /* don't match inode of different type */ 61 + if (inode_wrong_type(inode, st->st_mode)) 62 + return 0; 56 63 57 - struct inode * 58 - v9fs_fid_iget_dotl(struct super_block *sb, struct p9_fid *fid, bool new) 64 + if (inode->i_generation != st->st_gen) 65 + return 0; 66 + 67 + /* compare qid details */ 68 + if (memcmp(&v9inode->qid.version, 69 + &st->qid.version, sizeof(v9inode->qid.version))) 70 + return 0; 71 + 72 + if (v9inode->qid.type != st->qid.type) 73 + return 0; 74 + 75 + if (v9inode->qid.path != st->qid.path) 76 + return 0; 77 + return 1; 78 + } 79 + 80 + /* Always get a new inode */ 81 + static int v9fs_test_new_inode_dotl(struct inode *inode, void *data) 82 + { 83 + return 0; 84 + } 85 + 86 + static int v9fs_set_inode_dotl(struct inode *inode, void *data) 87 + { 88 + struct v9fs_inode *v9inode = V9FS_I(inode); 89 + struct p9_stat_dotl *st = (struct p9_stat_dotl *)data; 90 + 91 + memcpy(&v9inode->qid, &st->qid, sizeof(st->qid)); 92 + inode->i_generation = st->st_gen; 93 + return 0; 94 + } 95 + 96 + static struct inode *v9fs_qid_iget_dotl(struct super_block *sb, 97 + struct p9_qid *qid, 98 + struct p9_fid *fid, 99 + struct p9_stat_dotl *st, 100 + int new) 59 101 { 60 102 int retval; 61 103 struct inode *inode; 62 - struct p9_stat_dotl *st; 63 104 struct v9fs_session_info *v9ses = sb->s_fs_info; 105 + int (*test)(struct inode *inode, void *data); 64 106 65 - inode = iget_locked(sb, QID2INO(&fid->qid)); 66 - if (unlikely(!inode)) 107 + if (new) 108 + test = v9fs_test_new_inode_dotl; 109 + else 110 + test = v9fs_test_inode_dotl; 111 + 112 + inode = iget5_locked(sb, QID2INO(qid), test, v9fs_set_inode_dotl, st); 113 + if (!inode) 67 114 return ERR_PTR(-ENOMEM); 68 - if (!(inode->i_state & I_NEW)) { 69 - if (!new) { 70 - 
goto done; 71 - } else { /* deal with race condition in inode number reuse */ 72 - p9_debug(P9_DEBUG_ERROR, "WARNING: Inode collision %lx\n", 73 - inode->i_ino); 74 - iput(inode); 75 - remove_inode_hash(inode); 76 - inode = iget_locked(sb, QID2INO(&fid->qid)); 77 - WARN_ON(!(inode->i_state & I_NEW)); 78 - } 79 - } 80 - 115 + if (!(inode->i_state & I_NEW)) 116 + return inode; 81 117 /* 82 118 * initialize the inode with the stat info 83 119 * FIXME!! we may need support for stale inodes 84 120 * later. 85 121 */ 86 - st = p9_client_getattr_dotl(fid, P9_STATS_BASIC | P9_STATS_GEN); 87 - if (IS_ERR(st)) { 88 - retval = PTR_ERR(st); 89 - goto error; 90 - } 91 - 92 - retval = v9fs_init_inode(v9ses, inode, &fid->qid, 122 + inode->i_ino = QID2INO(qid); 123 + retval = v9fs_init_inode(v9ses, inode, 93 124 st->st_mode, new_decode_dev(st->st_rdev)); 94 - v9fs_stat2inode_dotl(st, inode, 0); 95 - kfree(st); 96 125 if (retval) 97 126 goto error; 98 127 128 + v9fs_stat2inode_dotl(st, inode, 0); 99 129 v9fs_set_netfs_context(inode); 100 130 v9fs_cache_inode_get_cookie(inode); 101 131 retval = v9fs_get_acl(inode, fid); ··· 133 103 goto error; 134 104 135 105 unlock_new_inode(inode); 136 - done: 137 106 return inode; 138 107 error: 139 108 iget_failed(inode); 140 109 return ERR_PTR(retval); 110 + 111 + } 112 + 113 + struct inode * 114 + v9fs_inode_from_fid_dotl(struct v9fs_session_info *v9ses, struct p9_fid *fid, 115 + struct super_block *sb, int new) 116 + { 117 + struct p9_stat_dotl *st; 118 + struct inode *inode = NULL; 119 + 120 + st = p9_client_getattr_dotl(fid, P9_STATS_BASIC | P9_STATS_GEN); 121 + if (IS_ERR(st)) 122 + return ERR_CAST(st); 123 + 124 + inode = v9fs_qid_iget_dotl(sb, &st->qid, fid, st, new); 125 + kfree(st); 126 + return inode; 141 127 } 142 128 143 129 struct dotl_openflag_map { ··· 305 259 p9_debug(P9_DEBUG_VFS, "p9_client_walk failed %d\n", err); 306 260 goto out; 307 261 } 308 - inode = v9fs_fid_iget_dotl(dir->i_sb, fid, true); 262 + inode = 
v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb); 309 263 if (IS_ERR(inode)) { 310 264 err = PTR_ERR(inode); 311 265 p9_debug(P9_DEBUG_VFS, "inode creation failed %d\n", err); ··· 355 309 umode_t omode) 356 310 { 357 311 int err; 312 + struct v9fs_session_info *v9ses; 358 313 struct p9_fid *fid = NULL, *dfid = NULL; 359 314 kgid_t gid; 360 315 const unsigned char *name; ··· 365 318 struct posix_acl *dacl = NULL, *pacl = NULL; 366 319 367 320 p9_debug(P9_DEBUG_VFS, "name %pd\n", dentry); 321 + v9ses = v9fs_inode2v9ses(dir); 368 322 369 323 omode |= S_IFDIR; 370 324 if (dir->i_mode & S_ISGID) ··· 400 352 } 401 353 402 354 /* instantiate inode and assign the unopened fid to the dentry */ 403 - inode = v9fs_fid_iget_dotl(dir->i_sb, fid, true); 355 + inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb); 404 356 if (IS_ERR(inode)) { 405 357 err = PTR_ERR(inode); 406 358 p9_debug(P9_DEBUG_VFS, "inode creation failed %d\n", ··· 797 749 kgid_t gid; 798 750 const unsigned char *name; 799 751 umode_t mode; 752 + struct v9fs_session_info *v9ses; 800 753 struct p9_fid *fid = NULL, *dfid = NULL; 801 754 struct inode *inode; 802 755 struct p9_qid qid; ··· 807 758 dir->i_ino, dentry, omode, 808 759 MAJOR(rdev), MINOR(rdev)); 809 760 761 + v9ses = v9fs_inode2v9ses(dir); 810 762 dfid = v9fs_parent_fid(dentry); 811 763 if (IS_ERR(dfid)) { 812 764 err = PTR_ERR(dfid); ··· 838 788 err); 839 789 goto error; 840 790 } 841 - inode = v9fs_fid_iget_dotl(dir->i_sb, fid, true); 791 + inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb); 842 792 if (IS_ERR(inode)) { 843 793 err = PTR_ERR(inode); 844 794 p9_debug(P9_DEBUG_VFS, "inode creation failed %d\n",
+1 -1
fs/9p/vfs_super.c
··· 139 139 else 140 140 sb->s_d_op = &v9fs_dentry_operations; 141 141 142 - inode = v9fs_get_inode_from_fid(v9ses, fid, sb, true); 142 + inode = v9fs_get_new_inode_from_fid(v9ses, fid, sb); 143 143 if (IS_ERR(inode)) { 144 144 retval = PTR_ERR(inode); 145 145 goto release_sb;
+4 -4
fs/backing-file.c
··· 80 80 refcount_t ref; 81 81 struct kiocb *orig_iocb; 82 82 /* used for aio completion */ 83 - void (*end_write)(struct file *); 83 + void (*end_write)(struct file *, loff_t, ssize_t); 84 84 struct work_struct work; 85 85 long res; 86 86 }; ··· 109 109 struct kiocb *orig_iocb = aio->orig_iocb; 110 110 111 111 if (aio->end_write) 112 - aio->end_write(orig_iocb->ki_filp); 112 + aio->end_write(orig_iocb->ki_filp, iocb->ki_pos, res); 113 113 114 114 orig_iocb->ki_pos = iocb->ki_pos; 115 115 backing_aio_put(aio); ··· 239 239 240 240 ret = vfs_iter_write(file, iter, &iocb->ki_pos, rwf); 241 241 if (ctx->end_write) 242 - ctx->end_write(ctx->user_file); 242 + ctx->end_write(ctx->user_file, iocb->ki_pos, ret); 243 243 } else { 244 244 struct backing_aio *aio; 245 245 ··· 317 317 revert_creds(old_cred); 318 318 319 319 if (ctx->end_write) 320 - ctx->end_write(ctx->user_file); 320 + ctx->end_write(ctx->user_file, ppos ? *ppos : 0, ret); 321 321 322 322 return ret; 323 323 }
+12 -6
fs/fuse/file.c
··· 2288 2288 struct folio *tmp_folio; 2289 2289 int err; 2290 2290 2291 + if (!data->ff) { 2292 + err = -EIO; 2293 + data->ff = fuse_write_file_get(fi); 2294 + if (!data->ff) 2295 + goto out_unlock; 2296 + } 2297 + 2291 2298 if (wpa && fuse_writepage_need_send(fc, &folio->page, ap, data)) { 2292 2299 fuse_writepages_send(data); 2293 2300 data->wpa = NULL; ··· 2358 2351 struct writeback_control *wbc) 2359 2352 { 2360 2353 struct inode *inode = mapping->host; 2361 - struct fuse_inode *fi = get_fuse_inode(inode); 2362 2354 struct fuse_conn *fc = get_fuse_conn(inode); 2363 2355 struct fuse_fill_wb_data data; 2364 2356 int err; 2365 2357 2358 + err = -EIO; 2366 2359 if (fuse_is_bad(inode)) 2367 - return -EIO; 2360 + goto out; 2368 2361 2369 2362 if (wbc->sync_mode == WB_SYNC_NONE && 2370 2363 fc->num_background >= fc->congestion_threshold) ··· 2372 2365 2373 2366 data.inode = inode; 2374 2367 data.wpa = NULL; 2375 - data.ff = fuse_write_file_get(fi); 2376 - if (!data.ff) 2377 - return -EIO; 2368 + data.ff = NULL; 2378 2369 2379 2370 err = -ENOMEM; 2380 2371 data.orig_pages = kcalloc(fc->max_pages, ··· 2386 2381 WARN_ON(!data.wpa->ia.ap.num_pages); 2387 2382 fuse_writepages_send(&data); 2388 2383 } 2384 + if (data.ff) 2385 + fuse_file_put(data.ff, false); 2389 2386 2390 2387 kfree(data.orig_pages); 2391 2388 out: 2392 - fuse_file_put(data.ff, false); 2393 2389 return err; 2394 2390 } 2395 2391
+4 -5
fs/fuse/passthrough.c
··· 18 18 fuse_invalidate_atime(inode); 19 19 } 20 20 21 - static void fuse_file_modified(struct file *file) 21 + static void fuse_passthrough_end_write(struct file *file, loff_t pos, ssize_t ret) 22 22 { 23 23 struct inode *inode = file_inode(file); 24 24 25 - fuse_invalidate_attr_mask(inode, FUSE_STATX_MODSIZE); 25 + fuse_write_update_attr(inode, pos, ret); 26 26 } 27 27 28 28 ssize_t fuse_passthrough_read_iter(struct kiocb *iocb, struct iov_iter *iter) ··· 63 63 struct backing_file_ctx ctx = { 64 64 .cred = ff->cred, 65 65 .user_file = file, 66 - .end_write = fuse_file_modified, 66 + .end_write = fuse_passthrough_end_write, 67 67 }; 68 68 69 69 pr_debug("%s: backing_file=0x%p, pos=%lld, len=%zu\n", __func__, ··· 110 110 struct backing_file_ctx ctx = { 111 111 .cred = ff->cred, 112 112 .user_file = out, 113 - .end_write = fuse_file_modified, 113 + .end_write = fuse_passthrough_end_write, 114 114 }; 115 115 116 116 pr_debug("%s: backing_file=0x%p, pos=%lld, len=%zu, flags=0x%x\n", __func__, ··· 234 234 goto out; 235 235 236 236 backing_sb = file_inode(file)->i_sb; 237 - pr_info("%s: %x:%pD %i\n", __func__, backing_sb->s_dev, file, backing_sb->s_stack_depth); 238 237 res = -ELOOP; 239 238 if (backing_sb->s_stack_depth >= fc->max_stack_depth) 240 239 goto out_fput;
+41 -9
fs/nfsd/nfs4state.c
··· 1359 1359 destroy_unhashed_deleg(dp); 1360 1360 } 1361 1361 1362 + /** 1363 + * revoke_delegation - perform nfs4 delegation structure cleanup 1364 + * @dp: pointer to the delegation 1365 + * 1366 + * This function assumes that it's called either from the administrative 1367 + * interface (nfsd4_revoke_states()) that's revoking a specific delegation 1368 + * stateid or it's called from a laundromat thread (nfsd4_laundromat()) that 1369 + * determined that this specific state has expired and needs to be revoked 1370 + * (both mark state with the appropriate stid sc_status mode). It is also 1371 + * assumed that a reference was taken on the @dp state. 1372 + * 1373 + * If this function finds that the @dp state is SC_STATUS_FREED it means 1374 + * that a FREE_STATEID operation for this stateid has been processed and 1375 + * we can proceed to removing it from the recalled list. However, if @dp state 1376 + * isn't marked SC_STATUS_FREED, it means we need to place it on the cl_revoked 1377 + * list and wait for the FREE_STATEID to arrive from the client. At the same 1378 + * time, we need to mark it as SC_STATUS_FREEABLE to indicate to the 1379 + * nfsd4_free_stateid() function that this stateid has already been added 1380 + * to the cl_revoked list and that nfsd4_free_stateid() is now responsible 1381 + * for removing it from the list. Inspection of where the delegation state 1382 + * in the revocation process is protected by the clp->cl_lock. 
1383 + */ 1362 1384 static void revoke_delegation(struct nfs4_delegation *dp) 1363 1385 { 1364 1386 struct nfs4_client *clp = dp->dl_stid.sc_client; 1365 1387 1366 1388 WARN_ON(!list_empty(&dp->dl_recall_lru)); 1389 + WARN_ON_ONCE(!(dp->dl_stid.sc_status & 1390 + (SC_STATUS_REVOKED | SC_STATUS_ADMIN_REVOKED))); 1367 1391 1368 1392 trace_nfsd_stid_revoke(&dp->dl_stid); 1369 1393 1370 - if (dp->dl_stid.sc_status & 1371 - (SC_STATUS_REVOKED | SC_STATUS_ADMIN_REVOKED)) { 1372 - spin_lock(&clp->cl_lock); 1373 - refcount_inc(&dp->dl_stid.sc_count); 1374 - list_add(&dp->dl_recall_lru, &clp->cl_revoked); 1375 - spin_unlock(&clp->cl_lock); 1394 + spin_lock(&clp->cl_lock); 1395 + if (dp->dl_stid.sc_status & SC_STATUS_FREED) { 1396 + list_del_init(&dp->dl_recall_lru); 1397 + goto out; 1376 1398 } 1399 + list_add(&dp->dl_recall_lru, &clp->cl_revoked); 1400 + dp->dl_stid.sc_status |= SC_STATUS_FREEABLE; 1401 + out: 1402 + spin_unlock(&clp->cl_lock); 1377 1403 destroy_unhashed_deleg(dp); 1378 1404 } 1379 1405 ··· 1806 1780 mutex_unlock(&stp->st_mutex); 1807 1781 break; 1808 1782 case SC_TYPE_DELEG: 1783 + refcount_inc(&stid->sc_count); 1809 1784 dp = delegstateid(stid); 1810 1785 spin_lock(&state_lock); 1811 1786 if (!unhash_delegation_locked( ··· 6572 6545 dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru); 6573 6546 if (!state_expired(&lt, dp->dl_time)) 6574 6547 break; 6548 + refcount_inc(&dp->dl_stid.sc_count); 6575 6549 unhash_delegation_locked(dp, SC_STATUS_REVOKED); 6576 6550 list_add(&dp->dl_recall_lru, &reaplist); 6577 6551 } ··· 7185 7157 s->sc_status |= SC_STATUS_CLOSED; 7186 7158 spin_unlock(&s->sc_lock); 7187 7159 dp = delegstateid(s); 7188 - list_del_init(&dp->dl_recall_lru); 7160 + if (s->sc_status & SC_STATUS_FREEABLE) 7161 + list_del_init(&dp->dl_recall_lru); 7162 + s->sc_status |= SC_STATUS_FREED; 7189 7163 spin_unlock(&cl->cl_lock); 7190 7164 nfs4_put_stid(s); 7191 7165 ret = nfs_ok; ··· 7517 7487 if ((status = fh_verify(rqstp, &cstate->current_fh, 
S_IFREG, 0))) 7518 7488 return status; 7519 7489 7520 - status = nfsd4_lookup_stateid(cstate, stateid, SC_TYPE_DELEG, 0, &s, nn); 7490 + status = nfsd4_lookup_stateid(cstate, stateid, SC_TYPE_DELEG, 7491 + SC_STATUS_REVOKED | SC_STATUS_FREEABLE, 7492 + &s, nn); 7521 7493 if (status) 7522 7494 goto out; 7523 7495 dp = delegstateid(s); ··· 8716 8684 struct nfsd_net *nn = net_generic(net, nfsd_net_id); 8717 8685 8718 8686 shrinker_free(nn->nfsd_client_shrinker); 8719 - cancel_work(&nn->nfsd_shrinker_work); 8687 + cancel_work_sync(&nn->nfsd_shrinker_work); 8720 8688 cancel_delayed_work_sync(&nn->laundromat_work); 8721 8689 locks_end_grace(&nn->nfsd4_manager); 8722 8690
+2
fs/nfsd/state.h
··· 114 114 /* For a deleg stateid kept around only to process free_stateid's: */ 115 115 #define SC_STATUS_REVOKED BIT(1) 116 116 #define SC_STATUS_ADMIN_REVOKED BIT(2) 117 + #define SC_STATUS_FREEABLE BIT(3) 118 + #define SC_STATUS_FREED BIT(4) 117 119 unsigned short sc_status; 118 120 119 121 struct list_head sc_cp_list;
+1
fs/nilfs2/page.c
··· 401 401 402 402 folio_clear_uptodate(folio); 403 403 folio_clear_mappedtodisk(folio); 404 + folio_clear_checked(folio); 404 405 405 406 head = folio_buffers(folio); 406 407 if (head) {
+8
fs/ocfs2/file.c
··· 1787 1787 return 0; 1788 1788 1789 1789 if (OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL) { 1790 + int id_count = ocfs2_max_inline_data_with_xattr(inode->i_sb, di); 1791 + 1792 + if (byte_start > id_count || byte_start + byte_len > id_count) { 1793 + ret = -EINVAL; 1794 + mlog_errno(ret); 1795 + goto out; 1796 + } 1797 + 1790 1798 ret = ocfs2_truncate_inline(inode, di_bh, byte_start, 1791 1799 byte_start + byte_len, 0); 1792 1800 if (ret) {
+7 -2
fs/overlayfs/file.c
··· 231 231 ovl_copyattr(file_inode(file)); 232 232 } 233 233 234 + static void ovl_file_end_write(struct file *file, loff_t pos, ssize_t ret) 235 + { 236 + ovl_file_modified(file); 237 + } 238 + 234 239 static void ovl_file_accessed(struct file *file) 235 240 { 236 241 struct inode *inode, *upperinode; ··· 299 294 struct backing_file_ctx ctx = { 300 295 .cred = ovl_creds(inode->i_sb), 301 296 .user_file = file, 302 - .end_write = ovl_file_modified, 297 + .end_write = ovl_file_end_write, 303 298 }; 304 299 305 300 if (!iov_iter_count(iter)) ··· 369 364 struct backing_file_ctx ctx = { 370 365 .cred = ovl_creds(inode->i_sb), 371 366 .user_file = out, 372 - .end_write = ovl_file_modified, 367 + .end_write = ovl_file_end_write, 373 368 }; 374 369 375 370 inode_lock(inode);
+1 -1
fs/smb/client/cifsfs.c
··· 1780 1780 nomem_subreqpool: 1781 1781 kmem_cache_destroy(cifs_io_subrequest_cachep); 1782 1782 nomem_subreq: 1783 - mempool_destroy(&cifs_io_request_pool); 1783 + mempool_exit(&cifs_io_request_pool); 1784 1784 nomem_reqpool: 1785 1785 kmem_cache_destroy(cifs_io_request_cachep); 1786 1786 nomem_req:
+7
fs/smb/client/fs_context.c
··· 920 920 else { 921 921 kfree_sensitive(ses->password); 922 922 ses->password = kstrdup(ctx->password, GFP_KERNEL); 923 + if (!ses->password) 924 + return -ENOMEM; 923 925 kfree_sensitive(ses->password2); 924 926 ses->password2 = kstrdup(ctx->password2, GFP_KERNEL); 927 + if (!ses->password2) { 928 + kfree_sensitive(ses->password); 929 + ses->password = NULL; 930 + return -ENOMEM; 931 + } 925 932 } 926 933 STEAL_STRING(cifs_sb, ctx, domainname); 927 934 STEAL_STRING(cifs_sb, ctx, nodename);
+28
fs/userfaultfd.c
··· 692 692 } 693 693 } 694 694 695 + void dup_userfaultfd_fail(struct list_head *fcs) 696 + { 697 + struct userfaultfd_fork_ctx *fctx, *n; 698 + 699 + /* 700 + * An error has occurred on fork, we will tear memory down, but have 701 + * allocated memory for fctx's and raised reference counts for both the 702 + * original and child contexts (and on the mm for each as a result). 703 + * 704 + * These would ordinarily be taken care of by a user handling the event, 705 + * but we are no longer doing so, so manually clean up here. 706 + * 707 + * mm tear down will take care of cleaning up VMA contexts. 708 + */ 709 + list_for_each_entry_safe(fctx, n, fcs, list) { 710 + struct userfaultfd_ctx *octx = fctx->orig; 711 + struct userfaultfd_ctx *ctx = fctx->new; 712 + 713 + atomic_dec(&octx->mmap_changing); 714 + VM_BUG_ON(atomic_read(&octx->mmap_changing) < 0); 715 + userfaultfd_ctx_put(octx); 716 + userfaultfd_ctx_put(ctx); 717 + 718 + list_del(&fctx->list); 719 + kfree(fctx); 720 + } 721 + } 722 + 695 723 void mremap_userfaultfd_prep(struct vm_area_struct *vma, 696 724 struct vm_userfaultfd_ctx *vm_ctx) 697 725 {
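The `dup_userfaultfd_fail()` teardown above walks the fork-context list with `list_for_each_entry_safe()`, which is safe against deletion because it caches the successor before the current entry is freed. A standalone sketch of that walk-and-free pattern on a plain singly linked list (no kernel list API; `fctx`/`refs` are illustrative stand-ins for the contexts and the refcounts being dropped):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy analogue of the fork-context entries torn down above. */
struct fctx {
    struct fctx *next;
    int refs;            /* stand-in for the context references dropped */
};

/* Free every entry while iterating: the successor pointer is saved
 * before free(), which is exactly what makes the "_safe" list walk
 * legal when entries are unlinked mid-traversal. Returns the number
 * of entries freed. */
static int teardown(struct fctx **head)
{
    struct fctx *cur = *head, *n;
    int freed = 0;

    while (cur) {
        n = cur->next;   /* cache successor before freeing */
        cur->refs--;     /* mirrors the reference drops per entry */
        free(cur);
        freed++;
        cur = n;
    }
    *head = NULL;
    return freed;
}
```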
+28 -47
fs/xfs/libxfs/xfs_ag.c
··· 185 185 } 186 186 187 187 /* 188 - * Free up the per-ag resources associated with the mount structure. 188 + * Free up the per-ag resources within the specified AG range. 189 189 */ 190 190 void 191 - xfs_free_perag( 192 - struct xfs_mount *mp) 191 + xfs_free_perag_range( 192 + struct xfs_mount *mp, 193 + xfs_agnumber_t first_agno, 194 + xfs_agnumber_t end_agno) 195 + 193 196 { 194 - struct xfs_perag *pag; 195 197 xfs_agnumber_t agno; 196 198 197 - for (agno = 0; agno < mp->m_sb.sb_agcount; agno++) { 198 - pag = xa_erase(&mp->m_perags, agno); 199 + for (agno = first_agno; agno < end_agno; agno++) { 200 + struct xfs_perag *pag = xa_erase(&mp->m_perags, agno); 201 + 199 202 ASSERT(pag); 200 203 XFS_IS_CORRUPT(pag->pag_mount, atomic_read(&pag->pag_ref) != 0); 201 204 xfs_defer_drain_free(&pag->pag_intents_drain); ··· 273 270 return __xfs_agino_range(mp, xfs_ag_block_count(mp, agno), first, last); 274 271 } 275 272 276 - /* 277 - * Free perag within the specified AG range, it is only used to free unused 278 - * perags under the error handling path. 
279 - */ 280 - void 281 - xfs_free_unused_perag_range( 273 + int 274 + xfs_update_last_ag_size( 282 275 struct xfs_mount *mp, 283 - xfs_agnumber_t agstart, 284 - xfs_agnumber_t agend) 276 + xfs_agnumber_t prev_agcount) 285 277 { 286 - struct xfs_perag *pag; 287 - xfs_agnumber_t index; 278 + struct xfs_perag *pag = xfs_perag_grab(mp, prev_agcount - 1); 288 279 289 - for (index = agstart; index < agend; index++) { 290 - pag = xa_erase(&mp->m_perags, index); 291 - if (!pag) 292 - break; 293 - xfs_buf_cache_destroy(&pag->pag_bcache); 294 - xfs_defer_drain_free(&pag->pag_intents_drain); 295 - kfree(pag); 296 - } 280 + if (!pag) 281 + return -EFSCORRUPTED; 282 + pag->block_count = __xfs_ag_block_count(mp, prev_agcount - 1, 283 + mp->m_sb.sb_agcount, mp->m_sb.sb_dblocks); 284 + __xfs_agino_range(mp, pag->block_count, &pag->agino_min, 285 + &pag->agino_max); 286 + xfs_perag_rele(pag); 287 + return 0; 297 288 } 298 289 299 290 int 300 291 xfs_initialize_perag( 301 292 struct xfs_mount *mp, 302 - xfs_agnumber_t agcount, 293 + xfs_agnumber_t old_agcount, 294 + xfs_agnumber_t new_agcount, 303 295 xfs_rfsblock_t dblocks, 304 296 xfs_agnumber_t *maxagi) 305 297 { 306 298 struct xfs_perag *pag; 307 299 xfs_agnumber_t index; 308 - xfs_agnumber_t first_initialised = NULLAGNUMBER; 309 300 int error; 310 301 311 - /* 312 - * Walk the current per-ag tree so we don't try to initialise AGs 313 - * that already exist (growfs case). Allocate and insert all the 314 - * AGs we don't find ready for initialisation. 
315 - */ 316 - for (index = 0; index < agcount; index++) { 317 - pag = xfs_perag_get(mp, index); 318 - if (pag) { 319 - xfs_perag_put(pag); 320 - continue; 321 - } 322 - 323 - pag = kzalloc(sizeof(*pag), GFP_KERNEL | __GFP_RETRY_MAYFAIL); 302 + for (index = old_agcount; index < new_agcount; index++) { 303 + pag = kzalloc(sizeof(*pag), GFP_KERNEL); 324 304 if (!pag) { 325 305 error = -ENOMEM; 326 306 goto out_unwind_new_pags; ··· 339 353 /* Active ref owned by mount indicates AG is online. */ 340 354 atomic_set(&pag->pag_active_ref, 1); 341 355 342 - /* first new pag is fully initialized */ 343 - if (first_initialised == NULLAGNUMBER) 344 - first_initialised = index; 345 - 346 356 /* 347 357 * Pre-calculated geometry 348 358 */ 349 - pag->block_count = __xfs_ag_block_count(mp, index, agcount, 359 + pag->block_count = __xfs_ag_block_count(mp, index, new_agcount, 350 360 dblocks); 351 361 pag->min_block = XFS_AGFL_BLOCK(mp); 352 362 __xfs_agino_range(mp, pag->block_count, &pag->agino_min, 353 363 &pag->agino_max); 354 364 } 355 365 356 - index = xfs_set_inode_alloc(mp, agcount); 366 + index = xfs_set_inode_alloc(mp, new_agcount); 357 367 358 368 if (maxagi) 359 369 *maxagi = index; ··· 363 381 out_free_pag: 364 382 kfree(pag); 365 383 out_unwind_new_pags: 366 - /* unwind any prior newly initialized pags */ 367 - xfs_free_unused_perag_range(mp, first_initialised, agcount); 384 + xfs_free_perag_range(mp, old_agcount, index); 368 385 return error; 369 386 } 370 387
+6 -5
fs/xfs/libxfs/xfs_ag.h
··· 144 144 __XFS_AG_OPSTATE(allows_inodes, ALLOWS_INODES) 145 145 __XFS_AG_OPSTATE(agfl_needs_reset, AGFL_NEEDS_RESET) 146 146 147 - void xfs_free_unused_perag_range(struct xfs_mount *mp, xfs_agnumber_t agstart, 148 - xfs_agnumber_t agend); 149 - int xfs_initialize_perag(struct xfs_mount *mp, xfs_agnumber_t agcount, 150 - xfs_rfsblock_t dcount, xfs_agnumber_t *maxagi); 147 + int xfs_initialize_perag(struct xfs_mount *mp, xfs_agnumber_t old_agcount, 148 + xfs_agnumber_t agcount, xfs_rfsblock_t dcount, 149 + xfs_agnumber_t *maxagi); 150 + void xfs_free_perag_range(struct xfs_mount *mp, xfs_agnumber_t first_agno, 151 + xfs_agnumber_t end_agno); 151 152 int xfs_initialize_perag_data(struct xfs_mount *mp, xfs_agnumber_t agno); 152 - void xfs_free_perag(struct xfs_mount *mp); 153 + int xfs_update_last_ag_size(struct xfs_mount *mp, xfs_agnumber_t prev_agcount); 153 154 154 155 /* Passive AG references */ 155 156 struct xfs_perag *xfs_perag_get(struct xfs_mount *mp, xfs_agnumber_t agno);
+5 -3
fs/xfs/scrub/repair.c
··· 1084 1084 return error; 1085 1085 1086 1086 /* Make sure the attr fork looks ok before we delete it. */ 1087 - error = xrep_metadata_inode_subtype(sc, XFS_SCRUB_TYPE_BMBTA); 1088 - if (error) 1089 - return error; 1087 + if (xfs_inode_hasattr(sc->ip)) { 1088 + error = xrep_metadata_inode_subtype(sc, XFS_SCRUB_TYPE_BMBTA); 1089 + if (error) 1090 + return error; 1091 + } 1090 1092 1091 1093 /* Clear the reflink flag since metadata never shares. */ 1092 1094 if (xfs_is_reflink_inode(sc->ip)) {
+70
fs/xfs/xfs_buf_item_recover.c
··· 22 22 #include "xfs_inode.h" 23 23 #include "xfs_dir2.h" 24 24 #include "xfs_quota.h" 25 + #include "xfs_alloc.h" 26 + #include "xfs_ag.h" 27 + #include "xfs_sb.h" 25 28 26 29 /* 27 30 * This is the number of entries in the l_buf_cancel_table used during ··· 688 685 } 689 686 690 687 /* 688 + * Update the in-memory superblock and perag structures from the primary SB 689 + * buffer. 690 + * 691 + * This is required because transactions running after growfs may require the 692 + * updated values to be set in a previous fully committed transaction. 693 + */ 694 + static int 695 + xlog_recover_do_primary_sb_buffer( 696 + struct xfs_mount *mp, 697 + struct xlog_recover_item *item, 698 + struct xfs_buf *bp, 699 + struct xfs_buf_log_format *buf_f, 700 + xfs_lsn_t current_lsn) 701 + { 702 + struct xfs_dsb *dsb = bp->b_addr; 703 + xfs_agnumber_t orig_agcount = mp->m_sb.sb_agcount; 704 + int error; 705 + 706 + xlog_recover_do_reg_buffer(mp, item, bp, buf_f, current_lsn); 707 + 708 + if (orig_agcount == 0) { 709 + xfs_alert(mp, "Trying to grow file system without AGs"); 710 + return -EFSCORRUPTED; 711 + } 712 + 713 + /* 714 + * Update the in-core super block from the freshly recovered on-disk one. 715 + */ 716 + xfs_sb_from_disk(&mp->m_sb, dsb); 717 + 718 + if (mp->m_sb.sb_agcount < orig_agcount) { 719 + xfs_alert(mp, "Shrinking AG count in log recovery not supported"); 720 + return -EFSCORRUPTED; 721 + } 722 + 723 + /* 724 + * Growfs can also grow the last existing AG. In this case we also need 725 + * to update the length in the in-core perag structure and values 726 + * depending on it. 727 + */ 728 + error = xfs_update_last_ag_size(mp, orig_agcount); 729 + if (error) 730 + return error; 731 + 732 + /* 733 + * Initialize the new perags, and also update various block and inode 734 + * allocator settings based on the number of AGs or total blocks. 735 + * Because of the latter this also needs to happen if the agcount did 736 + * not change. 
737 + */ 738 + error = xfs_initialize_perag(mp, orig_agcount, mp->m_sb.sb_agcount, 739 + mp->m_sb.sb_dblocks, &mp->m_maxagi); 740 + if (error) { 741 + xfs_warn(mp, "Failed recovery per-ag init: %d", error); 742 + return error; 743 + } 744 + mp->m_alloc_set_aside = xfs_alloc_set_aside(mp); 745 + return 0; 746 + } 747 + 748 + /* 691 749 * V5 filesystems know the age of the buffer on disk being recovered. We can 692 750 * have newer objects on disk than we are replaying, and so for these cases we 693 751 * don't want to replay the current change as that will make the buffer contents ··· 1030 966 1031 967 dirty = xlog_recover_do_dquot_buffer(mp, log, item, bp, buf_f); 1032 968 if (!dirty) 969 + goto out_release; 970 + } else if ((xfs_blft_from_flags(buf_f) & XFS_BLFT_SB_BUF) && 971 + xfs_buf_daddr(bp) == 0) { 972 + error = xlog_recover_do_primary_sb_buffer(mp, item, bp, buf_f, 973 + current_lsn); 974 + if (error) 1033 975 goto out_release; 1034 976 } else { 1035 977 xlog_recover_do_reg_buffer(mp, item, bp, buf_f, current_lsn);
+9 -11
fs/xfs/xfs_fsops.c
··· 87 87 struct xfs_mount *mp, /* mount point for filesystem */ 88 88 struct xfs_growfs_data *in) /* growfs data input struct */ 89 89 { 90 + xfs_agnumber_t oagcount = mp->m_sb.sb_agcount; 90 91 struct xfs_buf *bp; 91 92 int error; 92 93 xfs_agnumber_t nagcount; ··· 95 94 xfs_rfsblock_t nb, nb_div, nb_mod; 96 95 int64_t delta; 97 96 bool lastag_extended = false; 98 - xfs_agnumber_t oagcount; 99 97 struct xfs_trans *tp; 100 98 struct aghdr_init_data id = {}; 101 99 struct xfs_perag *last_pag; ··· 138 138 if (delta == 0) 139 139 return 0; 140 140 141 - oagcount = mp->m_sb.sb_agcount; 142 - /* allocate the new per-ag structures */ 143 - if (nagcount > oagcount) { 144 - error = xfs_initialize_perag(mp, nagcount, nb, &nagimax); 145 - if (error) 146 - return error; 147 - } else if (nagcount < oagcount) { 148 - /* TODO: shrinking the entire AGs hasn't yet completed */ 141 + /* TODO: shrinking the entire AGs hasn't yet completed */ 142 + if (nagcount < oagcount) 149 143 return -EINVAL; 150 - } 144 + 145 + /* allocate the new per-ag structures */ 146 + error = xfs_initialize_perag(mp, oagcount, nagcount, nb, &nagimax); 147 + if (error) 148 + return error; 151 149 152 150 if (delta > 0) 153 151 error = xfs_trans_alloc(mp, &M_RES(mp)->tr_growdata, ··· 229 231 xfs_trans_cancel(tp); 230 232 out_free_unused_perag: 231 233 if (nagcount > oagcount) 232 - xfs_free_unused_perag_range(mp, oagcount, nagcount); 234 + xfs_free_perag_range(mp, oagcount, nagcount); 233 235 return error; 234 236 } 235 237
-7
fs/xfs/xfs_log_recover.c
··· 3393 3393 /* re-initialise in-core superblock and geometry structures */ 3394 3394 mp->m_features |= xfs_sb_version_to_features(sbp); 3395 3395 xfs_reinit_percpu_counters(mp); 3396 - error = xfs_initialize_perag(mp, sbp->sb_agcount, sbp->sb_dblocks, 3397 - &mp->m_maxagi); 3398 - if (error) { 3399 - xfs_warn(mp, "Failed post-recovery per-ag init: %d", error); 3400 - return error; 3401 - } 3402 - mp->m_alloc_set_aside = xfs_alloc_set_aside(mp); 3403 3396 3404 3397 /* Normal transactions can now occur */ 3405 3398 clear_bit(XLOG_ACTIVE_RECOVERY, &log->l_opstate);
+4 -5
fs/xfs/xfs_mount.c
··· 810 810 /* 811 811 * Allocate and initialize the per-ag data. 812 812 */ 813 - error = xfs_initialize_perag(mp, sbp->sb_agcount, mp->m_sb.sb_dblocks, 814 - &mp->m_maxagi); 813 + error = xfs_initialize_perag(mp, 0, sbp->sb_agcount, 814 + mp->m_sb.sb_dblocks, &mp->m_maxagi); 815 815 if (error) { 816 816 xfs_warn(mp, "Failed per-ag init: %d", error); 817 817 goto out_free_dir; ··· 1048 1048 xfs_buftarg_drain(mp->m_logdev_targp); 1049 1049 xfs_buftarg_drain(mp->m_ddev_targp); 1050 1050 out_free_perag: 1051 - xfs_free_perag(mp); 1051 + xfs_free_perag_range(mp, 0, mp->m_sb.sb_agcount); 1052 1052 out_free_dir: 1053 1053 xfs_da_unmount(mp); 1054 1054 out_remove_uuid: ··· 1129 1129 xfs_errortag_clearall(mp); 1130 1130 #endif 1131 1131 shrinker_free(mp->m_inodegc_shrinker); 1132 - xfs_free_perag(mp); 1133 - 1132 + xfs_free_perag_range(mp, 0, mp->m_sb.sb_agcount); 1134 1133 xfs_errortag_del(mp); 1135 1134 xfs_error_sysfs_del(mp); 1136 1135 xchk_stats_unregister(mp->m_scrub_stats);
+1 -1
include/linux/backing-file.h
··· 16 16 const struct cred *cred; 17 17 struct file *user_file; 18 18 void (*accessed)(struct file *); 19 - void (*end_write)(struct file *); 19 + void (*end_write)(struct file *, loff_t, ssize_t); 20 20 }; 21 21 22 22 struct file *backing_file_open(const struct path *user_path, int flags,
+11 -3
include/linux/bpf.h
··· 635 635 */ 636 636 PTR_UNTRUSTED = BIT(6 + BPF_BASE_TYPE_BITS), 637 637 638 + /* MEM can be uninitialized. */ 638 639 MEM_UNINIT = BIT(7 + BPF_BASE_TYPE_BITS), 639 640 640 641 /* DYNPTR points to memory local to the bpf program. */ ··· 701 700 */ 702 701 MEM_ALIGNED = BIT(17 + BPF_BASE_TYPE_BITS), 703 702 703 + /* MEM is being written to, often combined with MEM_UNINIT. Non-presence 704 + * of MEM_WRITE means that MEM is only being read. MEM_WRITE without the 705 + * MEM_UNINIT means that memory needs to be initialized since it is also 706 + * read. 707 + */ 708 + MEM_WRITE = BIT(18 + BPF_BASE_TYPE_BITS), 709 + 704 710 __BPF_TYPE_FLAG_MAX, 705 711 __BPF_TYPE_LAST_FLAG = __BPF_TYPE_FLAG_MAX - 1, 706 712 }; ··· 766 758 ARG_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_SOCKET, 767 759 ARG_PTR_TO_STACK_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_STACK, 768 760 ARG_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_BTF_ID, 769 - /* pointer to memory does not need to be initialized, helper function must fill 770 - * all bytes or clear them in error case. 761 + /* Pointer to memory does not need to be initialized, since helper function 762 + * fills all bytes or clears them in error case. 771 763 */ 772 - ARG_PTR_TO_UNINIT_MEM = MEM_UNINIT | ARG_PTR_TO_MEM, 764 + ARG_PTR_TO_UNINIT_MEM = MEM_UNINIT | MEM_WRITE | ARG_PTR_TO_MEM, 773 765 /* Pointer to valid memory of size known at compile time. */ 774 766 ARG_PTR_TO_FIXED_SIZE_MEM = MEM_FIXED_SIZE | ARG_PTR_TO_MEM, 775 767
+3
include/linux/bpf_mem_alloc.h
··· 33 33 int bpf_mem_alloc_percpu_unit_init(struct bpf_mem_alloc *ma, int size); 34 34 void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma); 35 35 36 + /* Check the allocation size for kmalloc equivalent allocator */ 37 + int bpf_mem_alloc_check_size(bool percpu, size_t size); 38 + 36 39 /* kmalloc/kfree equivalent: */ 37 40 void *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size); 38 41 void bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr);
+1
include/linux/bpf_types.h
··· 146 146 BPF_LINK_TYPE(BPF_LINK_TYPE_NETFILTER, netfilter) 147 147 BPF_LINK_TYPE(BPF_LINK_TYPE_TCX, tcx) 148 148 BPF_LINK_TYPE(BPF_LINK_TYPE_NETKIT, netkit) 149 + BPF_LINK_TYPE(BPF_LINK_TYPE_SOCKMAP, sockmap) 149 150 #endif 150 151 #ifdef CONFIG_PERF_EVENTS 151 152 BPF_LINK_TYPE(BPF_LINK_TYPE_PERF_EVENT, perf)
+4 -6
include/linux/ksm.h
··· 54 54 return atomic_long_read(&mm->ksm_zero_pages); 55 55 } 56 56 57 - static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm) 57 + static inline void ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm) 58 58 { 59 + /* Adding mm to ksm is best effort on fork. */ 59 60 if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags)) 60 - return __ksm_enter(mm); 61 - 62 - return 0; 61 + __ksm_enter(mm); 63 62 } 64 63 65 64 static inline int ksm_execve(struct mm_struct *mm) ··· 106 107 return 0; 107 108 } 108 109 109 - static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm) 110 + static inline void ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm) 110 111 { 111 - return 0; 112 112 } 113 113 114 114 static inline int ksm_execve(struct mm_struct *mm)
+15 -6
include/linux/uaccess.h
··· 38 38 #else 39 39 #define can_do_masked_user_access() 0 40 40 #define masked_user_access_begin(src) NULL 41 + #define mask_user_address(src) (src) 41 42 #endif 42 43 43 44 /* ··· 160 159 { 161 160 unsigned long res = n; 162 161 might_fault(); 163 - if (!should_fail_usercopy() && likely(access_ok(from, n))) { 162 + if (should_fail_usercopy()) 163 + goto fail; 164 + if (can_do_masked_user_access()) 165 + from = mask_user_address(from); 166 + else { 167 + if (!access_ok(from, n)) 168 + goto fail; 164 169 /* 165 170 * Ensure that bad access_ok() speculation will not 166 171 * lead to nasty side effects *after* the copy is 167 172 * finished: 168 173 */ 169 174 barrier_nospec(); 170 - instrument_copy_from_user_before(to, from, n); 171 - res = raw_copy_from_user(to, from, n); 172 - instrument_copy_from_user_after(to, from, n, res); 173 175 } 174 - if (unlikely(res)) 175 - memset(to + (n - res), 0, res); 176 + instrument_copy_from_user_before(to, from, n); 177 + res = raw_copy_from_user(to, from, n); 178 + instrument_copy_from_user_after(to, from, n, res); 179 + if (likely(!res)) 180 + return 0; 181 + fail: 182 + memset(to + (n - res), 0, res); 176 183 return res; 177 184 } 178 185 extern __must_check unsigned long
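The uaccess.h change above prefers `mask_user_address()` over an `access_ok()` branch plus `barrier_nospec()`. The underlying idea is a branchless clamp: compute an all-ones or all-zeroes mask from the range check itself, so even a mispredicted speculative path cannot dereference out of range. A userspace model in the style of the kernel's generic `array_index_mask_nospec()` (this is the array-index analogue of the idea, not the actual `mask_user_address()` implementation, which clamps against the user address limit):

```c
#include <assert.h>
#include <stddef.h>

/* All-ones mask when index < size, all-zeroes otherwise, with no branch.
 * Valid for size <= SIZE_MAX/2: when index is in range, both index and
 * (size - index - 1) have a clear top bit, so the inverted OR is negative
 * and the arithmetic shift smears the sign bit across the word. Signed
 * right shift of a negative value is arithmetic on mainstream compilers. */
static size_t index_mask_nospec(size_t index, size_t size)
{
    return (size_t)(~(long)(index | (size - index - 1)) >>
                    (sizeof(long) * 8 - 1));
}

/* Clamp an index to 0 when out of range, branch-free. */
static size_t clamp_index(size_t index, size_t size)
{
    return index & index_mask_nospec(index, size);
}
```

Because the mask is data-dependent rather than control-dependent, no `barrier_nospec()` is needed on that path, which is the performance point of the patch.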
+5
include/linux/userfaultfd_k.h
··· 249 249 250 250 extern int dup_userfaultfd(struct vm_area_struct *, struct list_head *); 251 251 extern void dup_userfaultfd_complete(struct list_head *); 252 + void dup_userfaultfd_fail(struct list_head *); 252 253 253 254 extern void mremap_userfaultfd_prep(struct vm_area_struct *, 254 255 struct vm_userfaultfd_ctx *); ··· 349 348 } 350 349 351 350 static inline void dup_userfaultfd_complete(struct list_head *l) 351 + { 352 + } 353 + 354 + static inline void dup_userfaultfd_fail(struct list_head *l) 352 355 { 353 356 } 354 357
+44
include/net/cfg80211.h
··· 6135 6135 struct wiphy_delayed_work *dwork); 6136 6136 6137 6137 /** 6138 + * wiphy_delayed_work_pending - Find out whether a wiphy delayable 6139 + * work item is currently pending. 6140 + * 6141 + * @wiphy: the wiphy, for debug purposes 6142 + * @dwork: the delayed work in question 6143 + * 6144 + * Return: true if timer is pending, false otherwise 6145 + * 6146 + * How wiphy_delayed_work_queue() works is by setting a timer which 6147 + * when it expires calls wiphy_work_queue() to queue the wiphy work. 6148 + * Because wiphy_delayed_work_queue() uses mod_timer(), if it is 6149 + * called twice and the second call happens before the first call 6150 + * deadline, the work will be rescheduled for the second deadline and 6151 + * won't run before that. 6152 + * 6153 + * wiphy_delayed_work_pending() can be used to detect if calling 6154 + * wiphy_delayed_work_queue() would start a new work schedule 6155 + * or delay a previous one. As seen below, it cannot be used to 6156 + * detect precisely if the work has finished executing nor if it 6157 + * is currently executing. 6158 + * 6159 + * CPU0 CPU1 6160 + * wiphy_delayed_work_queue(wk) 6161 + * mod_timer(wk->timer) 6162 + * wiphy_delayed_work_pending(wk) -> true 6163 + * 6164 + * [...] 6165 + * expire_timers(wk->timer) 6166 + * detach_timer(wk->timer) 6167 + * wiphy_delayed_work_pending(wk) -> false 6168 + * wk->timer->function() | 6169 + * wiphy_work_queue(wk) | delayed work pending 6170 + * list_add_tail() | returns false but 6171 + * queue_work(cfg80211_wiphy_work) | wk->func() has not 6172 + * | been run yet 6173 + * [...] | 6174 + * cfg80211_wiphy_work() | 6175 + * wk->func() V 6176 + * 6177 + */ 6178 + bool wiphy_delayed_work_pending(struct wiphy *wiphy, 6179 + struct wiphy_delayed_work *dwork); 6180 + 6181 + /** 6138 6182 * enum ieee80211_ap_reg_power - regulatory power for an Access Point 6139 6183 * 6140 6184 * @IEEE80211_REG_UNSET_AP: Access Point has no regulatory power mode
+21 -16
include/net/ieee80211_radiotap.h
··· 24 24 * struct ieee80211_radiotap_header - base radiotap header 25 25 */ 26 26 struct ieee80211_radiotap_header { 27 - /** 28 - * @it_version: radiotap version, always 0 29 - */ 30 - uint8_t it_version; 27 + __struct_group(ieee80211_radiotap_header_fixed, hdr, __packed, 28 + /** 29 + * @it_version: radiotap version, always 0 30 + */ 31 + uint8_t it_version; 31 32 32 - /** 33 - * @it_pad: padding (or alignment) 34 - */ 35 - uint8_t it_pad; 33 + /** 34 + * @it_pad: padding (or alignment) 35 + */ 36 + uint8_t it_pad; 36 37 37 - /** 38 - * @it_len: overall radiotap header length 39 - */ 40 - __le16 it_len; 38 + /** 39 + * @it_len: overall radiotap header length 40 + */ 41 + __le16 it_len; 41 42 42 - /** 43 - * @it_present: (first) present word 44 - */ 45 - __le32 it_present; 43 + /** 44 + * @it_present: (first) present word 45 + */ 46 + __le32 it_present; 47 + ); 46 48 47 49 /** 48 50 * @it_optional: all remaining presence bitmaps 49 51 */ 50 52 __le32 it_optional[]; 51 53 } __packed; 54 + 55 + static_assert(offsetof(struct ieee80211_radiotap_header, it_optional) == sizeof(struct ieee80211_radiotap_header_fixed), 56 + "struct member likely outside of __struct_group()"); 52 57 53 58 /* version is always 0 */ 54 59 #define PKTHDR_RADIOTAP_VERSION 0
+1 -1
include/net/ip_tunnels.h
··· 354 354 memset(fl4, 0, sizeof(*fl4)); 355 355 356 356 if (oif) { 357 - fl4->flowi4_l3mdev = l3mdev_master_upper_ifindex_by_index_rcu(net, oif); 357 + fl4->flowi4_l3mdev = l3mdev_master_upper_ifindex_by_index(net, oif); 358 358 /* Legacy VRF/l3mdev use case */ 359 359 fl4->flowi4_oif = fl4->flowi4_l3mdev ? 0 : oif; 360 360 }
+3
include/uapi/linux/bpf.h
··· 1121 1121 1122 1122 #define MAX_BPF_ATTACH_TYPE __MAX_BPF_ATTACH_TYPE 1123 1123 1124 + /* Add BPF_LINK_TYPE(type, name) in bpf_types.h to keep bpf_link_type_strs[] 1125 + * in sync with the definitions below. 1126 + */ 1124 1127 enum bpf_link_type { 1125 1128 BPF_LINK_TYPE_UNSPEC = 0, 1126 1129 BPF_LINK_TYPE_RAW_TRACEPOINT = 1,
+1 -1
include/uapi/sound/asoc.h
··· 88 88 89 89 /* ABI version */ 90 90 #define SND_SOC_TPLG_ABI_VERSION 0x5 /* current version */ 91 - #define SND_SOC_TPLG_ABI_VERSION_MIN 0x4 /* oldest version supported */ 91 + #define SND_SOC_TPLG_ABI_VERSION_MIN 0x5 /* oldest version supported */ 92 92 93 93 /* Max size of TLV data */ 94 94 #define SND_SOC_TPLG_TLV_SIZE 32
-94
include/video/da8xx-fb.h
··· 1 - /* 2 - * Header file for TI DA8XX LCD controller platform data. 3 - * 4 - * Copyright (C) 2008-2009 MontaVista Software Inc. 5 - * Copyright (C) 2008-2009 Texas Instruments Inc 6 - * 7 - * This file is licensed under the terms of the GNU General Public License 8 - * version 2. This program is licensed "as is" without any warranty of any 9 - * kind, whether express or implied. 10 - */ 11 - 12 - #ifndef DA8XX_FB_H 13 - #define DA8XX_FB_H 14 - 15 - enum panel_shade { 16 - MONOCHROME = 0, 17 - COLOR_ACTIVE, 18 - COLOR_PASSIVE, 19 - }; 20 - 21 - enum raster_load_mode { 22 - LOAD_DATA = 1, 23 - LOAD_PALETTE, 24 - }; 25 - 26 - enum da8xx_frame_complete { 27 - DA8XX_FRAME_WAIT, 28 - DA8XX_FRAME_NOWAIT, 29 - }; 30 - 31 - struct da8xx_lcdc_platform_data { 32 - const char manu_name[10]; 33 - void *controller_data; 34 - const char type[25]; 35 - }; 36 - 37 - struct lcd_ctrl_config { 38 - enum panel_shade panel_shade; 39 - 40 - /* AC Bias Pin Frequency */ 41 - int ac_bias; 42 - 43 - /* AC Bias Pin Transitions per Interrupt */ 44 - int ac_bias_intrpt; 45 - 46 - /* DMA burst size */ 47 - int dma_burst_sz; 48 - 49 - /* Bits per pixel */ 50 - int bpp; 51 - 52 - /* FIFO DMA Request Delay */ 53 - int fdd; 54 - 55 - /* TFT Alternative Signal Mapping (Only for active) */ 56 - unsigned char tft_alt_mode; 57 - 58 - /* 12 Bit Per Pixel (5-6-5) Mode (Only for passive) */ 59 - unsigned char stn_565_mode; 60 - 61 - /* Mono 8-bit Mode: 1=D0-D7 or 0=D0-D3 */ 62 - unsigned char mono_8bit_mode; 63 - 64 - /* Horizontal and Vertical Sync Edge: 0=rising 1=falling */ 65 - unsigned char sync_edge; 66 - 67 - /* Raster Data Order Select: 1=Most-to-least 0=Least-to-most */ 68 - unsigned char raster_order; 69 - 70 - /* DMA FIFO threshold */ 71 - int fifo_th; 72 - }; 73 - 74 - struct lcd_sync_arg { 75 - int back_porch; 76 - int front_porch; 77 - int pulse_width; 78 - }; 79 - 80 - /* ioctls */ 81 - #define FBIOGET_CONTRAST _IOR('F', 1, int) 82 - #define FBIOPUT_CONTRAST _IOW('F', 2, int) 83 - 
#define FBIGET_BRIGHTNESS _IOR('F', 3, int) 84 - #define FBIPUT_BRIGHTNESS _IOW('F', 3, int) 85 - #define FBIGET_COLOR _IOR('F', 5, int) 86 - #define FBIPUT_COLOR _IOW('F', 6, int) 87 - #define FBIPUT_HSYNC _IOW('F', 9, int) 88 - #define FBIPUT_VSYNC _IOW('F', 10, int) 89 - 90 - /* Proprietary FB_SYNC_ flags */ 91 - #define FB_SYNC_CLK_INVERT 0x40000000 92 - 93 - #endif /* ifndef DA8XX_FB_H */ 94 -
+18 -1
kernel/bpf/cgroup.c
··· 24 24 DEFINE_STATIC_KEY_ARRAY_FALSE(cgroup_bpf_enabled_key, MAX_CGROUP_BPF_ATTACH_TYPE); 25 25 EXPORT_SYMBOL(cgroup_bpf_enabled_key); 26 26 27 + /* 28 + * cgroup bpf destruction makes heavy use of work items and there can be a lot 29 + * of concurrent destructions. Use a separate workqueue so that cgroup bpf 30 + * destruction work items don't end up filling up max_active of system_wq 31 + * which may lead to deadlock. 32 + */ 33 + static struct workqueue_struct *cgroup_bpf_destroy_wq; 34 + 35 + static int __init cgroup_bpf_wq_init(void) 36 + { 37 + cgroup_bpf_destroy_wq = alloc_workqueue("cgroup_bpf_destroy", 0, 1); 38 + if (!cgroup_bpf_destroy_wq) 39 + panic("Failed to alloc workqueue for cgroup bpf destroy.\n"); 40 + return 0; 41 + } 42 + core_initcall(cgroup_bpf_wq_init); 43 + 27 44 /* __always_inline is necessary to prevent indirect call through run_prog 28 45 * function pointer. 29 46 */ ··· 351 334 struct cgroup *cgrp = container_of(ref, struct cgroup, bpf.refcnt); 352 335 353 336 INIT_WORK(&cgrp->bpf.release_work, cgroup_bpf_release); 354 - queue_work(system_wq, &cgrp->bpf.release_work); 337 + queue_work(cgroup_bpf_destroy_wq, &cgrp->bpf.release_work); 355 338 } 356 339 357 340 /* Get underlying bpf_prog of bpf_prog_list entry, regardless if it's through
+49 -15
kernel/bpf/helpers.c
··· 111 111 .gpl_only = false, 112 112 .ret_type = RET_INTEGER, 113 113 .arg1_type = ARG_CONST_MAP_PTR, 114 - .arg2_type = ARG_PTR_TO_MAP_VALUE | MEM_UNINIT, 114 + .arg2_type = ARG_PTR_TO_MAP_VALUE | MEM_UNINIT | MEM_WRITE, 115 115 }; 116 116 117 117 BPF_CALL_2(bpf_map_peek_elem, struct bpf_map *, map, void *, value) ··· 124 124 .gpl_only = false, 125 125 .ret_type = RET_INTEGER, 126 126 .arg1_type = ARG_CONST_MAP_PTR, 127 - .arg2_type = ARG_PTR_TO_MAP_VALUE | MEM_UNINIT, 127 + .arg2_type = ARG_PTR_TO_MAP_VALUE | MEM_UNINIT | MEM_WRITE, 128 128 }; 129 129 130 130 BPF_CALL_3(bpf_map_lookup_percpu_elem, struct bpf_map *, map, void *, key, u32, cpu) ··· 538 538 .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY, 539 539 .arg2_type = ARG_CONST_SIZE, 540 540 .arg3_type = ARG_ANYTHING, 541 - .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED, 541 + .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_WRITE | MEM_ALIGNED, 542 542 .arg4_size = sizeof(s64), 543 543 }; 544 544 ··· 566 566 .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY, 567 567 .arg2_type = ARG_CONST_SIZE, 568 568 .arg3_type = ARG_ANYTHING, 569 - .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED, 569 + .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_WRITE | MEM_ALIGNED, 570 570 .arg4_size = sizeof(u64), 571 571 }; 572 572 ··· 1742 1742 .arg1_type = ARG_PTR_TO_UNINIT_MEM, 1743 1743 .arg2_type = ARG_CONST_SIZE_OR_ZERO, 1744 1744 .arg3_type = ARG_ANYTHING, 1745 - .arg4_type = ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_LOCAL | MEM_UNINIT, 1745 + .arg4_type = ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_LOCAL | MEM_UNINIT | MEM_WRITE, 1746 1746 }; 1747 1747 1748 1748 BPF_CALL_5(bpf_dynptr_read, void *, dst, u32, len, const struct bpf_dynptr_kern *, src, ··· 2851 2851 __u64 __opaque[2]; 2852 2852 } __aligned(8); 2853 2853 2854 + #define BITS_ITER_NR_WORDS_MAX 511 2855 + 2854 2856 struct bpf_iter_bits_kern { 2855 2857 union { 2856 - unsigned long *bits; 2857 - unsigned long bits_copy; 2858 + __u64 *bits; 2859 
+ __u64 bits_copy; 2858 2860 }; 2859 - u32 nr_bits; 2861 + int nr_bits; 2860 2862 int bit; 2861 2863 } __aligned(8); 2864 + 2865 + /* On 64-bit hosts, unsigned long and u64 have the same size, so passing 2866 + * a u64 pointer and an unsigned long pointer to find_next_bit() will 2867 + * return the same result, as both point to the same 8-byte area. 2868 + * 2869 + * For 32-bit little-endian hosts, using a u64 pointer or unsigned long 2870 + * pointer also makes no difference. This is because the first iterated 2871 + * unsigned long is composed of bits 0-31 of the u64 and the second unsigned 2872 + * long is composed of bits 32-63 of the u64. 2873 + * 2874 + * However, for 32-bit big-endian hosts, this is not the case. The first 2875 + * iterated unsigned long will be bits 32-63 of the u64, so swap these two 2876 + * ulong values within the u64. 2877 + */ 2878 + static void swap_ulong_in_u64(u64 *bits, unsigned int nr) 2879 + { 2880 + #if (BITS_PER_LONG == 32) && defined(__BIG_ENDIAN) 2881 + unsigned int i; 2882 + 2883 + for (i = 0; i < nr; i++) 2884 + bits[i] = (bits[i] >> 32) | ((u64)(u32)bits[i] << 32); 2885 + #endif 2886 + } 2862 2887 2863 2888 /** 2864 2889 * bpf_iter_bits_new() - Initialize a new bits iterator for a given memory area 2865 2890 * @it: The new bpf_iter_bits to be created 2866 2891 * @unsafe_ptr__ign: A pointer pointing to a memory area to be iterated over 2867 2892 * @nr_words: The size of the specified memory area, measured in 8-byte units. 2868 - * Due to the limitation of memalloc, it can't be greater than 512. 2893 + * The maximum value of @nr_words is @BITS_ITER_NR_WORDS_MAX. This limit may be 2894 + * further reduced by the BPF memory allocator implementation. 2869 2895 * 2870 2896 * This function initializes a new bpf_iter_bits structure for iterating over 2871 2897 * a memory area which is specified by the @unsafe_ptr__ign and @nr_words. 
It ··· 2918 2892 2919 2893 if (!unsafe_ptr__ign || !nr_words) 2920 2894 return -EINVAL; 2895 + if (nr_words > BITS_ITER_NR_WORDS_MAX) 2896 + return -E2BIG; 2921 2897 2922 2898 /* Optimization for u64 mask */ 2923 2899 if (nr_bits == 64) { ··· 2927 2899 if (err) 2928 2900 return -EFAULT; 2929 2901 2902 + swap_ulong_in_u64(&kit->bits_copy, nr_words); 2903 + 2930 2904 kit->nr_bits = nr_bits; 2931 2905 return 0; 2932 2906 } 2907 + 2908 + if (bpf_mem_alloc_check_size(false, nr_bytes)) 2909 + return -E2BIG; 2933 2910 2934 2911 /* Fallback to memalloc */ 2935 2912 kit->bits = bpf_mem_alloc(&bpf_global_ma, nr_bytes); ··· 2946 2913 bpf_mem_free(&bpf_global_ma, kit->bits); 2947 2914 return err; 2948 2915 } 2916 + 2917 + swap_ulong_in_u64(kit->bits, nr_words); 2949 2918 2950 2919 kit->nr_bits = nr_bits; 2951 2920 return 0; ··· 2965 2930 __bpf_kfunc int *bpf_iter_bits_next(struct bpf_iter_bits *it) 2966 2931 { 2967 2932 struct bpf_iter_bits_kern *kit = (void *)it; 2968 - u32 nr_bits = kit->nr_bits; 2969 - const unsigned long *bits; 2970 - int bit; 2933 + int bit = kit->bit, nr_bits = kit->nr_bits; 2934 + const void *bits; 2971 2935 2972 - if (nr_bits == 0) 2936 + if (!nr_bits || bit >= nr_bits) 2973 2937 return NULL; 2974 2938 2975 2939 bits = nr_bits == 64 ? &kit->bits_copy : kit->bits; 2976 - bit = find_next_bit(bits, nr_bits, kit->bit + 1); 2940 + bit = find_next_bit(bits, nr_bits, bit + 1); 2977 2941 if (bit >= nr_bits) { 2978 - kit->nr_bits = 0; 2942 + kit->bit = bit; 2979 2943 return NULL; 2980 2944 } 2981 2945
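The `swap_ulong_in_u64()` fixup in the helpers.c hunk above is just a 32-bit rotation of each 64-bit word, so that on 32-bit big-endian hosts the first `unsigned long` seen by `find_next_bit()` holds bits 0-31. A standalone sketch of that swap (plain C, no kernel headers, applied unconditionally here rather than behind the endianness `#if`):

```c
#include <assert.h>
#include <stdint.h>

/* Exchange the two 32-bit halves of each 64-bit word. Applying it twice
 * restores the original values, so the operation is its own inverse. */
static void swap_u64_halves(uint64_t *bits, unsigned int nr)
{
    for (unsigned int i = 0; i < nr; i++)
        bits[i] = (bits[i] >> 32) | ((uint64_t)(uint32_t)bits[i] << 32);
}
```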
+3 -2
kernel/bpf/inode.c
··· 880 880 const struct btf_type *enum_t; 881 881 const char *enum_pfx; 882 882 u64 *delegate_msk, msk = 0; 883 - char *p; 883 + char *p, *str; 884 884 int val; 885 885 886 886 /* ignore errors, fallback to hex */ ··· 911 911 return -EINVAL; 912 912 } 913 913 914 - while ((p = strsep(&param->string, ":"))) { 914 + str = param->string; 915 + while ((p = strsep(&str, ":"))) { 915 916 if (strcmp(p, "any") == 0) { 916 917 msk |= ~0ULL; 917 918 } else if (find_btf_enum_const(info.btf, enum_t, enum_pfx, p, &val)) {
+1 -1
kernel/bpf/lpm_trie.c
··· 655 655 if (!key || key->prefixlen > trie->max_prefixlen) 656 656 goto find_leftmost; 657 657 658 - node_stack = kmalloc_array(trie->max_prefixlen, 658 + node_stack = kmalloc_array(trie->max_prefixlen + 1, 659 659 sizeof(struct lpm_trie_node *), 660 660 GFP_ATOMIC | __GFP_NOWARN); 661 661 if (!node_stack)
+13 -1
kernel/bpf/memalloc.c
··· 35 35 */ 36 36 #define LLIST_NODE_SZ sizeof(struct llist_node) 37 37 38 + #define BPF_MEM_ALLOC_SIZE_MAX 4096 39 + 38 40 /* similar to kmalloc, but sizeof == 8 bucket is gone */ 39 41 static u8 size_index[24] __ro_after_init = { 40 42 3, /* 8 */ ··· 67 65 68 66 static int bpf_mem_cache_idx(size_t size) 69 67 { 70 - if (!size || size > 4096) 68 + if (!size || size > BPF_MEM_ALLOC_SIZE_MAX) 71 69 return -1; 72 70 73 71 if (size <= 192) ··· 1006 1004 } 1007 1005 1008 1006 return !ret ? NULL : ret + LLIST_NODE_SZ; 1007 + } 1008 + 1009 + int bpf_mem_alloc_check_size(bool percpu, size_t size) 1010 + { 1011 + /* The size of percpu allocation doesn't have LLIST_NODE_SZ overhead */ 1012 + if ((percpu && size > BPF_MEM_ALLOC_SIZE_MAX) || 1013 + (!percpu && size > BPF_MEM_ALLOC_SIZE_MAX - LLIST_NODE_SZ)) 1014 + return -E2BIG; 1015 + 1016 + return 0; 1009 1017 }
+1 -1
kernel/bpf/ringbuf.c
··· 632 632 .arg1_type = ARG_CONST_MAP_PTR, 633 633 .arg2_type = ARG_ANYTHING, 634 634 .arg3_type = ARG_ANYTHING, 635 - .arg4_type = ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_RINGBUF | MEM_UNINIT, 635 + .arg4_type = ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_RINGBUF | MEM_UNINIT | MEM_WRITE, 636 636 }; 637 637 638 638 BPF_CALL_2(bpf_ringbuf_submit_dynptr, struct bpf_dynptr_kern *, ptr, u64, flags)
+10 -6
kernel/bpf/syscall.c
··· 3069 3069 { 3070 3070 const struct bpf_link *link = filp->private_data; 3071 3071 const struct bpf_prog *prog = link->prog; 3072 + enum bpf_link_type type = link->type; 3072 3073 char prog_tag[sizeof(prog->tag) * 2 + 1] = { }; 3073 3074 3074 - seq_printf(m, 3075 - "link_type:\t%s\n" 3076 - "link_id:\t%u\n", 3077 - bpf_link_type_strs[link->type], 3078 - link->id); 3075 + if (type < ARRAY_SIZE(bpf_link_type_strs) && bpf_link_type_strs[type]) { 3076 + seq_printf(m, "link_type:\t%s\n", bpf_link_type_strs[type]); 3077 + } else { 3078 + WARN_ONCE(1, "missing BPF_LINK_TYPE(...) for link type %u\n", type); 3079 + seq_printf(m, "link_type:\t<%u>\n", type); 3080 + } 3081 + seq_printf(m, "link_id:\t%u\n", link->id); 3082 + 3079 3083 if (prog) { 3080 3084 bin2hex(prog_tag, prog->tag, sizeof(prog->tag)); 3081 3085 seq_printf(m, ··· 5896 5892 .arg1_type = ARG_PTR_TO_MEM, 5897 5893 .arg2_type = ARG_CONST_SIZE_OR_ZERO, 5898 5894 .arg3_type = ARG_ANYTHING, 5899 - .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED, 5895 + .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_WRITE | MEM_ALIGNED, 5900 5896 .arg4_size = sizeof(u64), 5901 5897 }; 5902 5898
+44 -54
kernel/bpf/verifier.c
··· 6804 6804 struct bpf_func_state *state, 6805 6805 enum bpf_access_type t) 6806 6806 { 6807 - struct bpf_insn_aux_data *aux = &env->insn_aux_data[env->insn_idx]; 6808 - int min_valid_off, max_bpf_stack; 6809 - 6810 - /* If accessing instruction is a spill/fill from bpf_fastcall pattern, 6811 - * add room for all caller saved registers below MAX_BPF_STACK. 6812 - * In case if bpf_fastcall rewrite won't happen maximal stack depth 6813 - * would be checked by check_max_stack_depth_subprog(). 6814 - */ 6815 - max_bpf_stack = MAX_BPF_STACK; 6816 - if (aux->fastcall_pattern) 6817 - max_bpf_stack += CALLER_SAVED_REGS * BPF_REG_SIZE; 6807 + int min_valid_off; 6818 6808 6819 6809 if (t == BPF_WRITE || env->allow_uninit_stack) 6820 - min_valid_off = -max_bpf_stack; 6810 + min_valid_off = -MAX_BPF_STACK; 6821 6811 else 6822 6812 min_valid_off = -state->allocated_stack; 6823 6813 ··· 7428 7438 } 7429 7439 7430 7440 static int check_helper_mem_access(struct bpf_verifier_env *env, int regno, 7431 - int access_size, bool zero_size_allowed, 7441 + int access_size, enum bpf_access_type access_type, 7442 + bool zero_size_allowed, 7432 7443 struct bpf_call_arg_meta *meta) 7433 7444 { 7434 7445 struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno]; ··· 7441 7450 return check_packet_access(env, regno, reg->off, access_size, 7442 7451 zero_size_allowed); 7443 7452 case PTR_TO_MAP_KEY: 7444 - if (meta && meta->raw_mode) { 7453 + if (access_type == BPF_WRITE) { 7445 7454 verbose(env, "R%d cannot write into %s\n", regno, 7446 7455 reg_type_str(env, reg->type)); 7447 7456 return -EACCES; ··· 7449 7458 return check_mem_region_access(env, regno, reg->off, access_size, 7450 7459 reg->map_ptr->key_size, false); 7451 7460 case PTR_TO_MAP_VALUE: 7452 - if (check_map_access_type(env, regno, reg->off, access_size, 7453 - meta && meta->raw_mode ? 
BPF_WRITE : 7454 - BPF_READ)) 7461 + if (check_map_access_type(env, regno, reg->off, access_size, access_type)) 7455 7462 return -EACCES; 7456 7463 return check_map_access(env, regno, reg->off, access_size, 7457 7464 zero_size_allowed, ACCESS_HELPER); 7458 7465 case PTR_TO_MEM: 7459 7466 if (type_is_rdonly_mem(reg->type)) { 7460 - if (meta && meta->raw_mode) { 7467 + if (access_type == BPF_WRITE) { 7461 7468 verbose(env, "R%d cannot write into %s\n", regno, 7462 7469 reg_type_str(env, reg->type)); 7463 7470 return -EACCES; ··· 7466 7477 zero_size_allowed); 7467 7478 case PTR_TO_BUF: 7468 7479 if (type_is_rdonly_mem(reg->type)) { 7469 - if (meta && meta->raw_mode) { 7480 + if (access_type == BPF_WRITE) { 7470 7481 verbose(env, "R%d cannot write into %s\n", regno, 7471 7482 reg_type_str(env, reg->type)); 7472 7483 return -EACCES; ··· 7494 7505 * Dynamically check it now. 7495 7506 */ 7496 7507 if (!env->ops->convert_ctx_access) { 7497 - enum bpf_access_type atype = meta && meta->raw_mode ? BPF_WRITE : BPF_READ; 7498 7508 int offset = access_size - 1; 7499 7509 7500 7510 /* Allow zero-byte read from PTR_TO_CTX */ ··· 7501 7513 return zero_size_allowed ? 0 : -EACCES; 7502 7514 7503 7515 return check_mem_access(env, env->insn_idx, regno, offset, BPF_B, 7504 - atype, -1, false, false); 7516 + access_type, -1, false, false); 7505 7517 } 7506 7518 7507 7519 fallthrough; ··· 7526 7538 */ 7527 7539 static int check_mem_size_reg(struct bpf_verifier_env *env, 7528 7540 struct bpf_reg_state *reg, u32 regno, 7541 + enum bpf_access_type access_type, 7529 7542 bool zero_size_allowed, 7530 7543 struct bpf_call_arg_meta *meta) 7531 7544 { ··· 7542 7553 */ 7543 7554 meta->msize_max_value = reg->umax_value; 7544 7555 7545 - /* The register is SCALAR_VALUE; the access check 7546 - * happens using its boundaries. 7556 + /* The register is SCALAR_VALUE; the access check happens using 7557 + * its boundaries. 
For unprivileged variable accesses, disable 7558 + * raw mode so that the program is required to initialize all 7559 + * the memory that the helper could just partially fill up. 7547 7560 */ 7548 7561 if (!tnum_is_const(reg->var_off)) 7549 - /* For unprivileged variable accesses, disable raw 7550 - * mode so that the program is required to 7551 - * initialize all the memory that the helper could 7552 - * just partially fill up. 7553 - */ 7554 7562 meta = NULL; 7555 7563 7556 7564 if (reg->smin_value < 0) { ··· 7567 7581 regno); 7568 7582 return -EACCES; 7569 7583 } 7570 - err = check_helper_mem_access(env, regno - 1, 7571 - reg->umax_value, 7572 - zero_size_allowed, meta); 7584 + err = check_helper_mem_access(env, regno - 1, reg->umax_value, 7585 + access_type, zero_size_allowed, meta); 7573 7586 if (!err) 7574 7587 err = mark_chain_precision(env, regno); 7575 7588 return err; ··· 7579 7594 { 7580 7595 bool may_be_null = type_may_be_null(reg->type); 7581 7596 struct bpf_reg_state saved_reg; 7582 - struct bpf_call_arg_meta meta; 7583 7597 int err; 7584 7598 7585 7599 if (register_is_null(reg)) 7586 7600 return 0; 7587 7601 7588 - memset(&meta, 0, sizeof(meta)); 7589 7602 /* Assuming that the register contains a value check if the memory 7590 7603 * access is safe. Temporarily save and restore the register's state as 7591 7604 * the conversion shouldn't be visible to a caller. 
··· 7593 7610 mark_ptr_not_null_reg(reg); 7594 7611 } 7595 7612 7596 - err = check_helper_mem_access(env, regno, mem_size, true, &meta); 7597 - /* Check access for BPF_WRITE */ 7598 - meta.raw_mode = true; 7599 - err = err ?: check_helper_mem_access(env, regno, mem_size, true, &meta); 7613 + err = check_helper_mem_access(env, regno, mem_size, BPF_READ, true, NULL); 7614 + err = err ?: check_helper_mem_access(env, regno, mem_size, BPF_WRITE, true, NULL); 7600 7615 7601 7616 if (may_be_null) 7602 7617 *reg = saved_reg; ··· 7620 7639 mark_ptr_not_null_reg(mem_reg); 7621 7640 } 7622 7641 7623 - err = check_mem_size_reg(env, reg, regno, true, &meta); 7624 - /* Check access for BPF_WRITE */ 7625 - meta.raw_mode = true; 7626 - err = err ?: check_mem_size_reg(env, reg, regno, true, &meta); 7642 + err = check_mem_size_reg(env, reg, regno, BPF_READ, true, &meta); 7643 + err = err ?: check_mem_size_reg(env, reg, regno, BPF_WRITE, true, &meta); 7627 7644 7628 7645 if (may_be_null) 7629 7646 *mem_reg = saved_reg; 7647 + 7630 7648 return err; 7631 7649 } 7632 7650 ··· 8928 8948 verbose(env, "invalid map_ptr to access map->key\n"); 8929 8949 return -EACCES; 8930 8950 } 8931 - err = check_helper_mem_access(env, regno, 8932 - meta->map_ptr->key_size, false, 8933 - NULL); 8951 + err = check_helper_mem_access(env, regno, meta->map_ptr->key_size, 8952 + BPF_READ, false, NULL); 8934 8953 break; 8935 8954 case ARG_PTR_TO_MAP_VALUE: 8936 8955 if (type_may_be_null(arg_type) && register_is_null(reg)) ··· 8944 8965 return -EACCES; 8945 8966 } 8946 8967 meta->raw_mode = arg_type & MEM_UNINIT; 8947 - err = check_helper_mem_access(env, regno, 8948 - meta->map_ptr->value_size, false, 8949 - meta); 8968 + err = check_helper_mem_access(env, regno, meta->map_ptr->value_size, 8969 + arg_type & MEM_WRITE ? 
BPF_WRITE : BPF_READ, 8970 + false, meta); 8950 8971 break; 8951 8972 case ARG_PTR_TO_PERCPU_BTF_ID: 8952 8973 if (!reg->btf_id) { ··· 8988 9009 */ 8989 9010 meta->raw_mode = arg_type & MEM_UNINIT; 8990 9011 if (arg_type & MEM_FIXED_SIZE) { 8991 - err = check_helper_mem_access(env, regno, fn->arg_size[arg], false, meta); 9012 + err = check_helper_mem_access(env, regno, fn->arg_size[arg], 9013 + arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ, 9014 + false, meta); 8992 9015 if (err) 8993 9016 return err; 8994 9017 if (arg_type & MEM_ALIGNED) ··· 8998 9017 } 8999 9018 break; 9000 9019 case ARG_CONST_SIZE: 9001 - err = check_mem_size_reg(env, reg, regno, false, meta); 9020 + err = check_mem_size_reg(env, reg, regno, 9021 + fn->arg_type[arg - 1] & MEM_WRITE ? 9022 + BPF_WRITE : BPF_READ, 9023 + false, meta); 9002 9024 break; 9003 9025 case ARG_CONST_SIZE_OR_ZERO: 9004 - err = check_mem_size_reg(env, reg, regno, true, meta); 9026 + err = check_mem_size_reg(env, reg, regno, 9027 + fn->arg_type[arg - 1] & MEM_WRITE ? 
9028 + BPF_WRITE : BPF_READ, 9029 + true, meta); 9005 9030 break; 9006 9031 case ARG_PTR_TO_DYNPTR: 9007 9032 err = process_dynptr_func(env, regno, insn_idx, arg_type, 0); ··· 17876 17889 struct bpf_verifier_state_list *sl, **pprev; 17877 17890 struct bpf_verifier_state *cur = env->cur_state, *new, *loop_entry; 17878 17891 int i, j, n, err, states_cnt = 0; 17879 - bool force_new_state = env->test_state_freq || is_force_checkpoint(env, insn_idx); 17880 - bool add_new_state = force_new_state; 17881 - bool force_exact; 17892 + bool force_new_state, add_new_state, force_exact; 17893 + 17894 + force_new_state = env->test_state_freq || is_force_checkpoint(env, insn_idx) || 17895 + /* Avoid accumulating infinitely long jmp history */ 17896 + cur->jmp_history_cnt > 40; 17882 17897 17883 17898 /* bpf progs typically have pruning point every 4 instructions 17884 17899 * http://vger.kernel.org/bpfconf2019.html#session-1 ··· 17890 17901 * In tests that amounts to up to 50% reduction into total verifier 17891 17902 * memory consumption and 20% verifier time speedup. 17892 17903 */ 17904 + add_new_state = force_new_state; 17893 17905 if (env->jmps_processed - env->prev_jmps_processed >= 2 && 17894 17906 env->insn_processed - env->prev_insn_processed >= 8) 17895 17907 add_new_state = true; ··· 21203 21213 delta += cnt - 1; 21204 21214 env->prog = prog = new_prog; 21205 21215 insn = new_prog->insnsi + i + delta; 21206 - continue; 21216 + goto next_insn; 21207 21217 } 21208 21218 21209 21219 /* Implement bpf_kptr_xchg inline */
+2 -2
kernel/cgroup/cgroup.c
··· 5789 5789 { 5790 5790 struct cgroup *cgroup; 5791 5791 int ret = false; 5792 - int level = 1; 5792 + int level = 0; 5793 5793 5794 5794 lockdep_assert_held(&cgroup_mutex); 5795 5795 ··· 5797 5797 if (cgroup->nr_descendants >= cgroup->max_descendants) 5798 5798 goto fail; 5799 5799 5800 - if (level > cgroup->max_depth) 5800 + if (level >= cgroup->max_depth) 5801 5801 goto fail; 5802 5802 5803 5803 level++;
+6 -6
kernel/fork.c
··· 653 653 mm->exec_vm = oldmm->exec_vm; 654 654 mm->stack_vm = oldmm->stack_vm; 655 655 656 - retval = ksm_fork(mm, oldmm); 657 - if (retval) 658 - goto out; 659 - khugepaged_fork(mm, oldmm); 660 - 661 656 /* Use __mt_dup() to efficiently build an identical maple tree. */ 662 657 retval = __mt_dup(&oldmm->mm_mt, &mm->mm_mt, GFP_KERNEL); 663 658 if (unlikely(retval)) ··· 755 760 vma_iter_free(&vmi); 756 761 if (!retval) { 757 762 mt_set_in_rcu(vmi.mas.tree); 763 + ksm_fork(mm, oldmm); 764 + khugepaged_fork(mm, oldmm); 758 765 } else if (mpnt) { 759 766 /* 760 767 * The entire maple tree has already been duplicated. If the ··· 772 775 mmap_write_unlock(mm); 773 776 flush_tlb_mm(oldmm); 774 777 mmap_write_unlock(oldmm); 775 - dup_userfaultfd_complete(&uf); 778 + if (!retval) 779 + dup_userfaultfd_complete(&uf); 780 + else 781 + dup_userfaultfd_fail(&uf); 776 782 fail_uprobe_end: 777 783 uprobe_end_dup_mmap(); 778 784 return retval;
+1 -3
kernel/resource.c
··· 459 459 rams_size += 16; 460 460 } 461 461 462 - rams[i].start = res.start; 463 - rams[i++].end = res.end; 464 - 462 + rams[i++] = res; 465 463 start = res.end + 1; 466 464 } 467 465
+17 -12
kernel/sched/ext.c
··· 862 862 DEFINE_STATIC_KEY_FALSE(__scx_ops_enabled); 863 863 DEFINE_STATIC_PERCPU_RWSEM(scx_fork_rwsem); 864 864 static atomic_t scx_ops_enable_state_var = ATOMIC_INIT(SCX_OPS_DISABLED); 865 - static atomic_t scx_ops_bypass_depth = ATOMIC_INIT(0); 865 + static int scx_ops_bypass_depth; 866 + static DEFINE_RAW_SPINLOCK(__scx_ops_bypass_lock); 866 867 static bool scx_ops_init_task_enabled; 867 868 static bool scx_switching_all; 868 869 DEFINE_STATIC_KEY_FALSE(__scx_switched_all); ··· 4299 4298 */ 4300 4299 static void scx_ops_bypass(bool bypass) 4301 4300 { 4302 - int depth, cpu; 4301 + int cpu; 4302 + unsigned long flags; 4303 4303 4304 + raw_spin_lock_irqsave(&__scx_ops_bypass_lock, flags); 4304 4305 if (bypass) { 4305 - depth = atomic_inc_return(&scx_ops_bypass_depth); 4306 - WARN_ON_ONCE(depth <= 0); 4307 - if (depth != 1) 4308 - return; 4306 + scx_ops_bypass_depth++; 4307 + WARN_ON_ONCE(scx_ops_bypass_depth <= 0); 4308 + if (scx_ops_bypass_depth != 1) 4309 + goto unlock; 4309 4310 } else { 4310 - depth = atomic_dec_return(&scx_ops_bypass_depth); 4311 - WARN_ON_ONCE(depth < 0); 4312 - if (depth != 0) 4313 - return; 4311 + scx_ops_bypass_depth--; 4312 + WARN_ON_ONCE(scx_ops_bypass_depth < 0); 4313 + if (scx_ops_bypass_depth != 0) 4314 + goto unlock; 4314 4315 } 4315 4316 4316 4317 /* ··· 4329 4326 struct rq_flags rf; 4330 4327 struct task_struct *p, *n; 4331 4328 4332 - rq_lock_irqsave(rq, &rf); 4329 + rq_lock(rq, &rf); 4333 4330 4334 4331 if (bypass) { 4335 4332 WARN_ON_ONCE(rq->scx.flags & SCX_RQ_BYPASSING); ··· 4365 4362 sched_enq_and_set_task(&ctx); 4366 4363 } 4367 4364 4368 - rq_unlock_irqrestore(rq, &rf); 4365 + rq_unlock(rq, &rf); 4369 4366 4370 4367 /* resched to restore ticks and idle state */ 4371 4368 resched_cpu(cpu); 4372 4369 } 4370 + unlock: 4371 + raw_spin_unlock_irqrestore(&__scx_ops_bypass_lock, flags); 4373 4372 } 4374 4373 4375 4374 static void free_exit_info(struct scx_exit_info *ei)
+2 -4
kernel/trace/bpf_trace.c
··· 1202 1202 .ret_type = RET_INTEGER, 1203 1203 .arg1_type = ARG_PTR_TO_CTX, 1204 1204 .arg2_type = ARG_ANYTHING, 1205 - .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED, 1205 + .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_WRITE | MEM_ALIGNED, 1206 1206 .arg3_size = sizeof(u64), 1207 1207 }; 1208 1208 ··· 1219 1219 .func = get_func_ret, 1220 1220 .ret_type = RET_INTEGER, 1221 1221 .arg1_type = ARG_PTR_TO_CTX, 1222 - .arg2_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED, 1222 + .arg2_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_WRITE | MEM_ALIGNED, 1223 1223 .arg2_size = sizeof(u64), 1224 1224 }; 1225 1225 ··· 2216 2216 2217 2217 old_array = bpf_event_rcu_dereference(event->tp_event->prog_array); 2218 2218 ret = bpf_prog_array_copy(old_array, event->prog, NULL, 0, &new_array); 2219 - if (ret == -ENOENT) 2220 - goto unlock; 2221 2219 if (ret < 0) { 2222 2220 bpf_prog_array_delete_safe(old_array, event->prog); 2223 2221 } else {
+4 -8
kernel/trace/fgraph.c
··· 1252 1252 int ret = 0; 1253 1253 int i = -1; 1254 1254 1255 - mutex_lock(&ftrace_lock); 1255 + guard(mutex)(&ftrace_lock); 1256 1256 1257 1257 if (!fgraph_initialized) { 1258 - ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "fgraph_idle_init", 1258 + ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "fgraph:online", 1259 1259 fgraph_cpu_init, NULL); 1260 1260 if (ret < 0) { 1261 1261 pr_warn("fgraph: Error to init cpu hotplug support\n"); ··· 1273 1273 } 1274 1274 1275 1275 i = fgraph_lru_alloc_index(); 1276 - if (i < 0 || WARN_ON_ONCE(fgraph_array[i] != &fgraph_stub)) { 1277 - ret = -ENOSPC; 1278 - goto out; 1279 - } 1276 + if (i < 0 || WARN_ON_ONCE(fgraph_array[i] != &fgraph_stub)) 1277 + return -ENOSPC; 1280 1278 gops->idx = i; 1281 1279 1282 1280 ftrace_graph_active++; ··· 1311 1313 gops->saved_func = NULL; 1312 1314 fgraph_lru_release_index(i); 1313 1315 } 1314 - out: 1315 - mutex_unlock(&ftrace_lock); 1316 1316 return ret; 1317 1317 } 1318 1318
+1 -1
lib/slub_kunit.c
··· 141 141 { 142 142 struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_kmalloc", 32, 143 143 SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE); 144 - u8 *p = __kmalloc_cache_noprof(s, GFP_KERNEL, 18); 144 + u8 *p = alloc_hooks(__kmalloc_cache_noprof(s, GFP_KERNEL, 18)); 145 145 146 146 kasan_disable_current(); 147 147
-1
mm/Kconfig
··· 1085 1085 depends on MMU 1086 1086 1087 1087 config GET_FREE_REGION 1088 - depends on SPARSEMEM 1089 1088 bool 1090 1089 1091 1090 config DEVICE_PRIVATE
+13 -2
mm/memory.c
··· 4187 4187 } 4188 4188 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 4189 4189 4190 + static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq); 4191 + 4190 4192 /* 4191 4193 * We enter with non-exclusive mmap_lock (to exclude vma changes, 4192 4194 * but allow concurrent faults), and pte mapped but not yet locked. ··· 4201 4199 { 4202 4200 struct vm_area_struct *vma = vmf->vma; 4203 4201 struct folio *swapcache, *folio = NULL; 4202 + DECLARE_WAITQUEUE(wait, current); 4204 4203 struct page *page; 4205 4204 struct swap_info_struct *si = NULL; 4206 4205 rmap_t rmap_flags = RMAP_NONE; ··· 4300 4297 * Relax a bit to prevent rapid 4301 4298 * repeated page faults. 4302 4299 */ 4300 + add_wait_queue(&swapcache_wq, &wait); 4303 4301 schedule_timeout_uninterruptible(1); 4302 + remove_wait_queue(&swapcache_wq, &wait); 4304 4303 goto out_page; 4305 4304 } 4306 4305 need_clear_cache = true; ··· 4609 4604 pte_unmap_unlock(vmf->pte, vmf->ptl); 4610 4605 out: 4611 4606 /* Clear the swap cache pin for direct swapin after PTL unlock */ 4612 - if (need_clear_cache) 4607 + if (need_clear_cache) { 4613 4608 swapcache_clear(si, entry, nr_pages); 4609 + if (waitqueue_active(&swapcache_wq)) 4610 + wake_up(&swapcache_wq); 4611 + } 4614 4612 if (si) 4615 4613 put_swap_device(si); 4616 4614 return ret; ··· 4628 4620 folio_unlock(swapcache); 4629 4621 folio_put(swapcache); 4630 4622 } 4631 - if (need_clear_cache) 4623 + if (need_clear_cache) { 4632 4624 swapcache_clear(si, entry, nr_pages); 4625 + if (waitqueue_active(&swapcache_wq)) 4626 + wake_up(&swapcache_wq); 4627 + } 4633 4628 if (si) 4634 4629 put_swap_device(si); 4635 4630 return ret;
+61 -23
mm/mmap.c
··· 1418 1418 vmg.flags = vm_flags; 1419 1419 } 1420 1420 1421 + /* 1422 + * clear PTEs while the vma is still in the tree so that rmap 1423 + * cannot race with the freeing later in the truncate scenario. 1424 + * This is also needed for call_mmap(), which is why vm_ops 1425 + * close function is called. 1426 + */ 1427 + vms_clean_up_area(&vms, &mas_detach); 1421 1428 vma = vma_merge_new_range(&vmg); 1422 1429 if (vma) 1423 1430 goto expanded; ··· 1446 1439 1447 1440 if (file) { 1448 1441 vma->vm_file = get_file(file); 1449 - /* 1450 - * call_mmap() may map PTE, so ensure there are no existing PTEs 1451 - * and call the vm_ops close function if one exists. 1452 - */ 1453 - vms_clean_up_area(&vms, &mas_detach); 1454 1442 error = call_mmap(file, vma); 1455 1443 if (error) 1456 1444 goto unmap_and_free_vma; ··· 1642 1640 unsigned long populate = 0; 1643 1641 unsigned long ret = -EINVAL; 1644 1642 struct file *file; 1643 + vm_flags_t vm_flags; 1645 1644 1646 1645 pr_warn_once("%s (%d) uses deprecated remap_file_pages() syscall. See Documentation/mm/remap_file_pages.rst.\n", 1647 1646 current->comm, current->pid); ··· 1659 1656 if (pgoff + (size >> PAGE_SHIFT) < pgoff) 1660 1657 return ret; 1661 1658 1662 - if (mmap_write_lock_killable(mm)) 1659 + if (mmap_read_lock_killable(mm)) 1663 1660 return -EINTR; 1661 + 1662 + /* 1663 + * Look up VMA under read lock first so we can perform the security 1664 + * without holding locks (which can be problematic). We reacquire a 1665 + * write lock later and check nothing changed underneath us. 1666 + */ 1667 + vma = vma_lookup(mm, start); 1668 + 1669 + if (!vma || !(vma->vm_flags & VM_SHARED)) { 1670 + mmap_read_unlock(mm); 1671 + return -EINVAL; 1672 + } 1673 + 1674 + prot |= vma->vm_flags & VM_READ ? PROT_READ : 0; 1675 + prot |= vma->vm_flags & VM_WRITE ? PROT_WRITE : 0; 1676 + prot |= vma->vm_flags & VM_EXEC ? 
PROT_EXEC : 0; 1677 + 1678 + flags &= MAP_NONBLOCK; 1679 + flags |= MAP_SHARED | MAP_FIXED | MAP_POPULATE; 1680 + if (vma->vm_flags & VM_LOCKED) 1681 + flags |= MAP_LOCKED; 1682 + 1683 + /* Save vm_flags used to calculate prot and flags, and recheck later. */ 1684 + vm_flags = vma->vm_flags; 1685 + file = get_file(vma->vm_file); 1686 + 1687 + mmap_read_unlock(mm); 1688 + 1689 + /* Call outside mmap_lock to be consistent with other callers. */ 1690 + ret = security_mmap_file(file, prot, flags); 1691 + if (ret) { 1692 + fput(file); 1693 + return ret; 1694 + } 1695 + 1696 + ret = -EINVAL; 1697 + 1698 + /* OK security check passed, take write lock + let it rip. */ 1699 + if (mmap_write_lock_killable(mm)) { 1700 + fput(file); 1701 + return -EINTR; 1702 + } 1664 1703 1665 1704 vma = vma_lookup(mm, start); 1666 1705 1667 - if (!vma || !(vma->vm_flags & VM_SHARED)) 1706 + if (!vma) 1707 + goto out; 1708 + 1709 + /* Make sure things didn't change under us. */ 1710 + if (vma->vm_flags != vm_flags) 1711 + goto out; 1712 + if (vma->vm_file != file) 1668 1713 goto out; 1669 1714 1670 1715 if (start + size > vma->vm_end) { ··· 1740 1689 goto out; 1741 1690 } 1742 1691 1743 - prot |= vma->vm_flags & VM_READ ? PROT_READ : 0; 1744 - prot |= vma->vm_flags & VM_WRITE ? PROT_WRITE : 0; 1745 - prot |= vma->vm_flags & VM_EXEC ? 
PROT_EXEC : 0; 1677 + 1678 + flags &= MAP_NONBLOCK; 1679 + flags |= MAP_SHARED | MAP_FIXED | MAP_POPULATE; 1680 + if (vma->vm_flags & VM_LOCKED) 1681 + flags |= MAP_LOCKED; 1682 + 1683 + /* Save vm_flags used to calculate prot and flags, and recheck later. */ 1684 + vm_flags = vma->vm_flags; 1685 + file = get_file(vma->vm_file); 1686 + 1687 + mmap_read_unlock(mm); 1688 + 1689 + /* Call outside mmap_lock to be consistent with other callers. */ 1690 + ret = security_mmap_file(file, prot, flags); 1691 + if (ret) { 1692 + fput(file); 1693 + return ret; 1694 + } 1695 + 1696 + ret = -EINVAL; 1697 + 1698 + /* OK security check passed, take write lock + let it rip. */ 1699 + if (mmap_write_lock_killable(mm)) { 1700 + fput(file); 1701 + return -EINTR; 1702 + } 1664 1703 1665 1704 vma = vma_lookup(mm, start); 1666 1705 1667 - if (!vma || !(vma->vm_flags & VM_SHARED)) 1706 + if (!vma) 1707 + goto out; 1708 + 1709 + /* Make sure things didn't change under us. */ 1710 + if (vma->vm_flags != vm_flags) 1711 + goto out; 1712 + if (vma->vm_file != file) 1668 1713 goto out; 1669 1714 1670 1715 if (start + size > vma->vm_end) { ··· 1740 1689 goto out; 1741 1690 } 1742 1691 1743 - prot |= vma->vm_flags & VM_READ ? PROT_READ : 0; 1744 - prot |= vma->vm_flags & VM_WRITE ? PROT_WRITE : 0; 1745 - prot |= vma->vm_flags & VM_EXEC ?
PROT_EXEC : 0; 1746 - 1747 - flags &= MAP_NONBLOCK; 1748 - flags |= MAP_SHARED | MAP_FIXED | MAP_POPULATE; 1749 - if (vma->vm_flags & VM_LOCKED) 1750 - flags |= MAP_LOCKED; 1751 - 1752 - file = get_file(vma->vm_file); 1753 - ret = security_mmap_file(vma->vm_file, prot, flags); 1754 - if (ret) 1755 - goto out_fput; 1756 1692 ret = do_mmap(vma->vm_file, start, size, 1757 1693 prot, flags, 0, pgoff, &populate, NULL); 1758 - out_fput: 1759 - fput(file); 1760 1694 out: 1761 1695 mmap_write_unlock(mm); 1696 + fput(file); 1762 1697 if (populate) 1763 1698 mm_populate(ret, populate); 1764 1699 if (!IS_ERR_VALUE(ret)) ··· 1791 1754 VMG_STATE(vmg, mm, vmi, addr, addr + len, flags, PHYS_PFN(addr)); 1792 1755 1793 1756 vmg.prev = vma; 1794 - vma_iter_next_range(vmi); 1757 + /* vmi is positioned at prev, which this mode expects. */ 1758 + vmg.merge_flags = VMG_FLAG_JUST_EXPAND; 1795 1759 1796 1760 if (vma_merge_new_range(&vmg)) 1797 1761 goto out;
+1 -1
mm/numa_memblks.c
··· 349 349 for_each_reserved_mem_region(mb_region) { 350 350 int nid = memblock_get_region_node(mb_region); 351 351 352 - if (nid != MAX_NUMNODES) 352 + if (numa_valid_node(nid)) 353 353 node_set(nid, reserved_nodemask); 354 354 } 355 355
+5 -5
mm/page_alloc.c
··· 2893 2893 page = __rmqueue(zone, order, migratetype, alloc_flags); 2894 2894 2895 2895 /* 2896 - * If the allocation fails, allow OOM handling access 2897 - * to HIGHATOMIC reserves as failing now is worse than 2898 - * failing a high-order atomic allocation in the 2899 - * future. 2896 + * If the allocation fails, allow OOM handling and 2897 + * order-0 (atomic) allocs access to HIGHATOMIC 2898 + * reserves as failing now is worse than failing a 2899 + * high-order atomic allocation in the future. 2900 2900 */ 2901 - if (!page && (alloc_flags & ALLOC_OOM)) 2901 + if (!page && (alloc_flags & (ALLOC_OOM|ALLOC_NON_BLOCK))) 2902 2902 page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC); 2903 2903 2904 2904 if (!page) {
+11 -5
mm/pagewalk.c
··· 744 744 pud = pudp_get(pudp); 745 745 if (pud_none(pud)) 746 746 goto not_found; 747 - if (IS_ENABLED(CONFIG_PGTABLE_HAS_HUGE_LEAVES) && pud_leaf(pud)) { 747 + if (IS_ENABLED(CONFIG_PGTABLE_HAS_HUGE_LEAVES) && 748 + (!pud_present(pud) || pud_leaf(pud))) { 748 749 ptl = pud_lock(vma->vm_mm, pudp); 749 750 pud = pudp_get(pudp); 750 751 ··· 754 753 fw->pudp = pudp; 755 754 fw->pud = pud; 756 755 756 + /* 757 + * TODO: FW_MIGRATION support for PUD migration entries 758 + * once there are relevant users. 759 + */ 757 760 if (!pud_present(pud) || pud_devmap(pud) || pud_special(pud)) { 758 761 spin_unlock(ptl); 759 762 goto not_found; ··· 774 769 } 775 770 776 771 pmd_table: 777 - VM_WARN_ON_ONCE(pud_leaf(*pudp)); 772 + VM_WARN_ON_ONCE(!pud_present(pud) || pud_leaf(pud)); 778 773 pmdp = pmd_offset(pudp, addr); 779 774 pmd = pmdp_get_lockless(pmdp); 780 775 if (pmd_none(pmd)) 781 776 goto not_found; 782 - if (IS_ENABLED(CONFIG_PGTABLE_HAS_HUGE_LEAVES) && pmd_leaf(pmd)) { 777 + if (IS_ENABLED(CONFIG_PGTABLE_HAS_HUGE_LEAVES) && 778 + (!pmd_present(pmd) || pmd_leaf(pmd))) { 783 779 ptl = pmd_lock(vma->vm_mm, pmdp); 784 780 pmd = pmdp_get(pmdp); 785 781 ··· 792 786 if (pmd_none(pmd)) { 793 787 spin_unlock(ptl); 794 788 goto not_found; 795 - } else if (!pmd_leaf(pmd)) { 789 + } else if (pmd_present(pmd) && !pmd_leaf(pmd)) { 796 790 spin_unlock(ptl); 797 791 goto pte_table; 798 792 } else if (pmd_present(pmd)) { ··· 818 812 } 819 813 820 814 pte_table: 821 - VM_WARN_ON_ONCE(pmd_leaf(pmdp_get_lockless(pmdp))); 815 + VM_WARN_ON_ONCE(!pmd_present(pmd) || pmd_leaf(pmd)); 822 816 ptep = pte_offset_map_lock(vma->vm_mm, pmdp, addr, &ptl); 823 817 if (!ptep) 824 818 goto not_found;
+2
mm/shmem.c
··· 1166 1166 stat->attributes_mask |= (STATX_ATTR_APPEND | 1167 1167 STATX_ATTR_IMMUTABLE | 1168 1168 STATX_ATTR_NODUMP); 1169 + inode_lock_shared(inode); 1169 1170 generic_fillattr(idmap, request_mask, inode, stat); 1171 + inode_unlock_shared(inode); 1170 1172 1171 1173 if (shmem_huge_global_enabled(inode, 0, 0, false, NULL, 0)) 1172 1174 stat->blksize = HPAGE_PMD_SIZE;
+1 -1
mm/slab_common.c
··· 1209 1209 /* Zero out spare memory. */ 1210 1210 if (want_init_on_alloc(flags)) { 1211 1211 kasan_disable_current(); 1212 - memset((void *)p + new_size, 0, ks - new_size); 1212 + memset(kasan_reset_tag(p) + new_size, 0, ks - new_size); 1213 1213 kasan_enable_current(); 1214 1214 } 1215 1215
+15 -8
mm/vma.c
··· 917 917 pgoff_t pgoff = vmg->pgoff; 918 918 pgoff_t pglen = PHYS_PFN(end - start); 919 919 bool can_merge_left, can_merge_right; 920 + bool just_expand = vmg->merge_flags & VMG_FLAG_JUST_EXPAND; 920 921 921 922 mmap_assert_write_locked(vmg->mm); 922 923 VM_WARN_ON(vmg->vma); ··· 931 930 return NULL; 932 931 933 932 can_merge_left = can_vma_merge_left(vmg); 934 - can_merge_right = can_vma_merge_right(vmg, can_merge_left); 933 + can_merge_right = !just_expand && can_vma_merge_right(vmg, can_merge_left); 935 934 936 935 /* If we can merge with the next VMA, adjust vmg accordingly. */ 937 936 if (can_merge_right) { ··· 954 953 if (can_merge_right && !can_merge_remove_vma(next)) 955 954 vmg->end = end; 956 955 957 - vma_prev(vmg->vmi); /* Equivalent to going to the previous range */ 956 + /* In expand-only case we are already positioned at prev. */ 957 + if (!just_expand) { 958 + /* Equivalent to going to the previous range. */ 959 + vma_prev(vmg->vmi); 960 + } 958 961 } 959 962 960 963 /* ··· 972 967 } 973 968 974 969 /* If expansion failed, reset state. Allows us to retry merge later. */ 975 - vmg->vma = NULL; 976 - vmg->start = start; 977 - vmg->end = end; 978 - vmg->pgoff = pgoff; 979 - if (vmg->vma == prev) 980 - vma_iter_set(vmg->vmi, start); 970 + if (!just_expand) { 971 + vmg->vma = NULL; 972 + vmg->start = start; 973 + vmg->end = end; 974 + vmg->pgoff = pgoff; 975 + if (vmg->vma == prev) 976 + vma_iter_set(vmg->vmi, start); 977 + } 981 978 982 979 return NULL; 983 980 }
+17 -9
mm/vma.h
··· 59 59 VMA_MERGE_SUCCESS, 60 60 }; 61 61 62 + enum vma_merge_flags { 63 + VMG_FLAG_DEFAULT = 0, 64 + /* 65 + * If we can expand, simply do so. We know there is nothing to merge to 66 + * the right. Does not reset state upon failure to merge. The VMA 67 + * iterator is assumed to be positioned at the previous VMA, rather than 68 + * at the gap. 69 + */ 70 + VMG_FLAG_JUST_EXPAND = 1 << 0, 71 + }; 72 + 62 73 /* Represents a VMA merge operation. */ 63 74 struct vma_merge_struct { 64 75 struct mm_struct *mm; ··· 86 75 struct mempolicy *policy; 87 76 struct vm_userfaultfd_ctx uffd_ctx; 88 77 struct anon_vma_name *anon_name; 78 + enum vma_merge_flags merge_flags; 89 79 enum vma_merge_state state; 90 80 }; 91 81 ··· 111 99 .flags = flags_, \ 112 100 .pgoff = pgoff_, \ 113 101 .state = VMA_MERGE_START, \ 102 + .merge_flags = VMG_FLAG_DEFAULT, \ 114 103 } 115 104 116 105 #define VMG_VMA_STATE(name, vmi_, prev_, vma_, start_, end_) \ ··· 131 118 .uffd_ctx = vma_->vm_userfaultfd_ctx, \ 132 119 .anon_name = anon_vma_name(vma_), \ 133 120 .state = VMA_MERGE_START, \ 121 + .merge_flags = VMG_FLAG_DEFAULT, \ 134 122 } 135 123 136 124 #ifdef CONFIG_DEBUG_VM_MAPLE_TREE ··· 255 241 * failure method of leaving a gap where the MAP_FIXED mapping failed. 256 242 */ 257 243 mas_set_range(mas, vms->start, vms->end - 1); 258 - if (unlikely(mas_store_gfp(mas, NULL, GFP_KERNEL))) { 259 - pr_warn_once("%s: (%d) Unable to abort munmap() operation\n", 260 - current->comm, current->pid); 261 - /* Leaving vmas detached and in-tree may hamper recovery */ 262 - reattach_vmas(mas_detach); 263 - } else { 264 - /* Clean up the insertion of the unfortunate gap */ 265 - vms_complete_munmap_vmas(vms, mas_detach); 266 - } 244 + mas_store_gfp(mas, NULL, GFP_KERNEL|__GFP_NOFAIL); 245 + /* Clean up the insertion of the unfortunate gap */ 246 + vms_complete_munmap_vmas(vms, mas_detach); 267 247 } 268 248 269 249 int
+11 -7
net/bluetooth/hci_sync.c
··· 206 206 return ERR_PTR(err); 207 207 } 208 208 209 + /* If command return a status event skb will be set to NULL as there are 210 + * no parameters. 211 + */ 212 + if (!skb) 213 + return ERR_PTR(-ENODATA); 214 + 209 215 return skb; 210 216 } 211 217 EXPORT_SYMBOL(__hci_cmd_sync_sk); ··· 261 255 u8 status; 262 256 263 257 skb = __hci_cmd_sync_sk(hdev, opcode, plen, param, event, timeout, sk); 258 + 259 + /* If command return a status event, skb will be set to -ENODATA */ 260 + if (skb == ERR_PTR(-ENODATA)) 261 + return 0; 262 + 264 263 if (IS_ERR(skb)) { 265 264 if (!event) 266 265 bt_dev_err(hdev, "Opcode 0x%4.4x failed: %ld", opcode, 267 266 PTR_ERR(skb)); 268 267 return PTR_ERR(skb); 269 268 } 270 - 271 - /* If command return a status event skb will be set to NULL as there are 272 - * no parameters, in case of failure IS_ERR(skb) would have be set to 273 - * the actual error would be found with PTR_ERR(skb). 274 - */ 275 - if (!skb) 276 - return 0; 277 269 278 270 status = skb->data[0]; 279 271
+1
net/bpf/test_run.c
··· 246 246 head->ctx.data_meta = head->orig_ctx.data_meta; 247 247 head->ctx.data_end = head->orig_ctx.data_end; 248 248 xdp_update_frame_from_buff(&head->ctx, head->frame); 249 + head->frame->mem = head->orig_ctx.rxq->mem; 249 250 } 250 251 251 252 static int xdp_recv_frames(struct xdp_frame **frames, int nframes,
+4
net/core/dev.c
··· 3641 3641 return 0; 3642 3642 3643 3643 if (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) { 3644 + if (vlan_get_protocol(skb) == htons(ETH_P_IPV6) && 3645 + skb_network_header_len(skb) != sizeof(struct ipv6hdr)) 3646 + goto sw_checksum; 3644 3647 switch (skb->csum_offset) { 3645 3648 case offsetof(struct tcphdr, check): 3646 3649 case offsetof(struct udphdr, check): ··· 3651 3648 } 3652 3649 } 3653 3650 3651 + sw_checksum: 3654 3652 return skb_checksum_help(skb); 3655 3653 } 3656 3654 EXPORT_SYMBOL(skb_csum_hwoffload_help);
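The net/core/dev.c hunk above narrows the hardware-offload fast path: an IPv6 packet whose network header is longer than the basic 40-byte IPv6 header (i.e. extension headers sit before the L4 checksum field) now falls back to software checksumming. A minimal standalone sketch of that decision, in plain userspace C with invented names (the kernel uses skb helpers such as skb_network_header_len() instead):

```c
#include <assert.h>

/* Illustrative constant; in the kernel this is sizeof(struct ipv6hdr). */
#define IPV6_HDR_LEN 40

/* Return nonzero when hardware checksum offload must be skipped because
 * extension headers make the fixed csum_offset assumption invalid. */
static int must_use_sw_checksum(int is_ipv6, int network_header_len)
{
    return is_ipv6 && network_header_len != IPV6_HDR_LEN;
}
```

A packet with only the basic IPv6 header keeps the offload path; anything longer (e.g. a 48-byte header chain with one extension header) takes the software fallback.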
+15 -27
net/core/filter.c
··· 6292 6292 { 6293 6293 int ret = BPF_MTU_CHK_RET_FRAG_NEEDED; 6294 6294 struct net_device *dev = skb->dev; 6295 - int skb_len, dev_len; 6296 - int mtu = 0; 6295 + int mtu, dev_len, skb_len; 6297 6296 6298 - if (unlikely(flags & ~(BPF_MTU_CHK_SEGS))) { 6299 - ret = -EINVAL; 6300 - goto out; 6301 - } 6302 - 6303 - if (unlikely(flags & BPF_MTU_CHK_SEGS && (len_diff || *mtu_len))) { 6304 - ret = -EINVAL; 6305 - goto out; 6306 - } 6297 + if (unlikely(flags & ~(BPF_MTU_CHK_SEGS))) 6298 + return -EINVAL; 6299 + if (unlikely(flags & BPF_MTU_CHK_SEGS && (len_diff || *mtu_len))) 6300 + return -EINVAL; 6307 6301 6308 6302 dev = __dev_via_ifindex(dev, ifindex); 6309 - if (unlikely(!dev)) { 6310 - ret = -ENODEV; 6311 - goto out; 6312 - } 6303 + if (unlikely(!dev)) 6304 + return -ENODEV; 6313 6305 6314 6306 mtu = READ_ONCE(dev->mtu); 6315 6307 dev_len = mtu + dev->hard_header_len; ··· 6336 6344 struct net_device *dev = xdp->rxq->dev; 6337 6345 int xdp_len = xdp->data_end - xdp->data; 6338 6346 int ret = BPF_MTU_CHK_RET_SUCCESS; 6339 - int mtu = 0, dev_len; 6347 + int mtu, dev_len; 6340 6348 6341 6349 /* XDP variant doesn't support multi-buffer segment check (yet) */ 6342 - if (unlikely(flags)) { 6343 - ret = -EINVAL; 6344 - goto out; 6345 - } 6350 + if (unlikely(flags)) 6351 + return -EINVAL; 6346 6352 6347 6353 dev = __dev_via_ifindex(dev, ifindex); 6348 - if (unlikely(!dev)) { 6349 - ret = -ENODEV; 6350 - goto out; 6351 - } 6354 + if (unlikely(!dev)) 6355 + return -ENODEV; 6352 6356 6353 6357 mtu = READ_ONCE(dev->mtu); 6354 6358 dev_len = mtu + dev->hard_header_len; ··· 6356 6368 xdp_len += len_diff; /* minus result pass check */ 6357 6369 if (xdp_len > dev_len) 6358 6370 ret = BPF_MTU_CHK_RET_FRAG_NEEDED; 6359 - out: 6371 + 6360 6372 *mtu_len = mtu; 6361 6373 return ret; 6362 6374 } ··· 6367 6379 .ret_type = RET_INTEGER, 6368 6380 .arg1_type = ARG_PTR_TO_CTX, 6369 6381 .arg2_type = ARG_ANYTHING, 6370 - .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED, 6382 + .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_WRITE | MEM_ALIGNED, 6371 6383 .arg3_size = sizeof(u32), 6372 6384 .arg4_type = ARG_ANYTHING, 6373 6385 .arg5_type = ARG_ANYTHING, ··· 6379 6391 .ret_type = RET_INTEGER, 6380 6392 .arg1_type = ARG_PTR_TO_CTX, 6381 6393 .arg2_type = ARG_ANYTHING, 6382 - .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED, 6394 + .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_WRITE | MEM_ALIGNED, 6383 6395 .arg3_size = sizeof(u32), 6384 6396 .arg4_type = ARG_ANYTHING, 6385 6397 .arg5_type = ARG_ANYTHING,
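The net/core/filter.c cleanup above converts goto-based error handling to early returns but keeps the same MTU budget: device MTU plus link-layer header length, compared against the packet length after applying len_diff. A standalone sketch of that comparison with simplified types and illustrative names (the real bpf_skb_check_mtu/bpf_xdp_check_mtu helpers also handle GSO segments and ifindex redirection):

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's BPF_MTU_CHK_RET_* values. */
enum { MTU_CHK_RET_SUCCESS = 0, MTU_CHK_RET_FRAG_NEEDED = 1 };

/* The check as refactored in the diff: the budget is mtu plus the
 * hard header length, and len_diff is applied before comparing, so a
 * negative diff can make an otherwise-oversized packet pass. */
static int check_mtu(int pkt_len, int len_diff, int mtu, int hard_header_len)
{
    int dev_len = mtu + hard_header_len;

    pkt_len += len_diff;
    return pkt_len > dev_len ? MTU_CHK_RET_FRAG_NEEDED : MTU_CHK_RET_SUCCESS;
}
```

For an Ethernet device (14-byte header) with a 1500-byte MTU, the budget is 1514 bytes; only packets beyond that, after len_diff, report FRAG_NEEDED.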
+2 -2
net/core/rtnetlink.c
··· 2140 2140 [IFLA_NUM_TX_QUEUES] = { .type = NLA_U32 }, 2141 2141 [IFLA_NUM_RX_QUEUES] = { .type = NLA_U32 }, 2142 2142 [IFLA_GSO_MAX_SEGS] = { .type = NLA_U32 }, 2143 - [IFLA_GSO_MAX_SIZE] = { .type = NLA_U32 }, 2143 + [IFLA_GSO_MAX_SIZE] = NLA_POLICY_MIN(NLA_U32, MAX_TCP_HEADER + 1), 2144 2144 [IFLA_PHYS_PORT_ID] = { .type = NLA_BINARY, .len = MAX_PHYS_ITEM_ID_LEN }, 2145 2145 [IFLA_CARRIER_CHANGES] = { .type = NLA_U32 }, /* ignored */ 2146 2146 [IFLA_PHYS_SWITCH_ID] = { .type = NLA_BINARY, .len = MAX_PHYS_ITEM_ID_LEN }, ··· 2165 2165 [IFLA_TSO_MAX_SIZE] = { .type = NLA_REJECT }, 2166 2166 [IFLA_TSO_MAX_SEGS] = { .type = NLA_REJECT }, 2167 2167 [IFLA_ALLMULTI] = { .type = NLA_REJECT }, 2168 - [IFLA_GSO_IPV4_MAX_SIZE] = { .type = NLA_U32 }, 2168 + [IFLA_GSO_IPV4_MAX_SIZE] = NLA_POLICY_MIN(NLA_U32, MAX_TCP_HEADER + 1), 2169 2169 [IFLA_GRO_IPV4_MAX_SIZE] = { .type = NLA_U32 }, 2170 2170 }; 2171 2171
+4
net/core/sock_map.c
··· 1760 1760 ret = -EINVAL; 1761 1761 goto out; 1762 1762 } 1763 + if (!sockmap_link->map) { 1764 + ret = -ENOLINK; 1765 + goto out; 1766 + } 1763 1767 1764 1768 ret = sock_map_prog_link_lookup(sockmap_link->map, &pprog, &plink, 1765 1769 sockmap_link->attach_type);
+1 -1
net/ipv4/ip_tunnel.c
··· 218 218 219 219 ip_tunnel_flags_copy(flags, parms->i_flags); 220 220 221 - hlist_for_each_entry_rcu(t, head, hash_node) { 221 + hlist_for_each_entry_rcu(t, head, hash_node, lockdep_rtnl_is_held()) { 222 222 if (local == t->parms.iph.saddr && 223 223 remote == t->parms.iph.daddr && 224 224 link == READ_ONCE(t->parms.link) &&
+4 -3
net/ipv4/tcp_bpf.c
··· 221 221 int flags, 222 222 int *addr_len) 223 223 { 224 - struct tcp_sock *tcp = tcp_sk(sk); 225 224 int peek = flags & MSG_PEEK; 226 - u32 seq = tcp->copied_seq; 227 225 struct sk_psock *psock; 226 + struct tcp_sock *tcp; 228 227 int copied = 0; 228 + u32 seq; 229 229 230 230 if (unlikely(flags & MSG_ERRQUEUE)) 231 231 return inet_recv_error(sk, msg, len, addr_len); ··· 238 238 return tcp_recvmsg(sk, msg, len, flags, addr_len); 239 239 240 240 lock_sock(sk); 241 - 241 + tcp = tcp_sk(sk); 242 + seq = tcp->copied_seq; 242 243 /* We may have received data on the sk_receive_queue pre-accept and 243 244 * then we can not use read_skb in this context because we haven't 244 245 * assigned a sk_socket yet so have no link to the ops. The work-around
+7 -8
net/ipv6/netfilter/nf_reject_ipv6.c
··· 268 268 void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb, 269 269 int hook) 270 270 { 271 - struct sk_buff *nskb; 272 - struct tcphdr _otcph; 273 - const struct tcphdr *otcph; 274 - unsigned int otcplen, hh_len; 275 271 const struct ipv6hdr *oip6h = ipv6_hdr(oldskb); 276 272 struct dst_entry *dst = NULL; 273 + const struct tcphdr *otcph; 274 + struct sk_buff *nskb; 275 + struct tcphdr _otcph; 276 + unsigned int otcplen; 277 277 struct flowi6 fl6; 278 278 279 279 if ((!(ipv6_addr_type(&oip6h->saddr) & IPV6_ADDR_UNICAST)) || ··· 312 312 if (IS_ERR(dst)) 313 313 return; 314 314 315 - hh_len = (dst->dev->hard_header_len + 15)&~15; 316 - nskb = alloc_skb(hh_len + 15 + dst->header_len + sizeof(struct ipv6hdr) 317 - + sizeof(struct tcphdr) + dst->trailer_len, 315 + nskb = alloc_skb(LL_MAX_HEADER + sizeof(struct ipv6hdr) + 316 + sizeof(struct tcphdr) + dst->trailer_len, 318 317 GFP_ATOMIC); 319 318 320 319 if (!nskb) { ··· 326 327 327 328 nskb->mark = fl6.flowi6_mark; 328 329 329 - skb_reserve(nskb, hh_len + dst->header_len); 330 + skb_reserve(nskb, LL_MAX_HEADER); 330 331 nf_reject_ip6hdr_put(nskb, oldskb, IPPROTO_TCP, ip6_dst_hoplimit(dst)); 331 332 nf_reject_ip6_tcphdr_put(nskb, oldskb, otcph, otcplen); 332 333
+1 -1
net/mac80211/Kconfig
··· 96 96 97 97 config MAC80211_MESSAGE_TRACING 98 98 bool "Trace all mac80211 debug messages" 99 - depends on MAC80211 99 + depends on MAC80211 && TRACING 100 100 help 101 101 Select this option to have mac80211 register the 102 102 mac80211_msg trace subsystem with tracepoints to
+16 -9
net/mac80211/cfg.c
··· 3071 3071 bool update_txp_type = false; 3072 3072 bool has_monitor = false; 3073 3073 int user_power_level; 3074 + int old_power = local->user_power_level; 3074 3075 3075 3076 lockdep_assert_wiphy(local->hw.wiphy); 3076 3077 ··· 3181 3180 } 3182 3181 } 3183 3182 3183 + if (local->emulate_chanctx && 3184 + (old_power != local->user_power_level)) 3185 + ieee80211_hw_conf_chan(local); 3186 + 3184 3187 return 0; 3185 3188 } 3186 3189 ··· 3195 3190 struct ieee80211_local *local = wiphy_priv(wiphy); 3196 3191 struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev); 3197 3192 3198 - if (local->ops->get_txpower) 3193 + if (local->ops->get_txpower && 3194 + (sdata->flags & IEEE80211_SDATA_IN_DRIVER)) 3199 3195 return drv_get_txpower(local, sdata, dbm); 3200 3196 3201 3197 if (local->emulate_chanctx) ··· 4885 4879 ieee80211_color_change_finalize(link); 4886 4880 } 4887 4881 4888 - void ieee80211_color_collision_detection_work(struct work_struct *work) 4882 + void ieee80211_color_collision_detection_work(struct wiphy *wiphy, 4883 + struct wiphy_work *work) 4889 4884 { 4890 - struct delayed_work *delayed_work = to_delayed_work(work); 4891 4885 struct ieee80211_link_data *link = 4892 - container_of(delayed_work, struct ieee80211_link_data, 4893 - color_collision_detect_work); 4886 + container_of(work, struct ieee80211_link_data, 4887 + color_collision_detect_work.work); 4894 4888 struct ieee80211_sub_if_data *sdata = link->sdata; 4895 4889 4896 4890 cfg80211_obss_color_collision_notify(sdata->dev, link->color_bitmap, ··· 4943 4937 return; 4944 4938 } 4945 4939 4946 - if (delayed_work_pending(&link->color_collision_detect_work)) { 4940 + if (wiphy_delayed_work_pending(sdata->local->hw.wiphy, 4941 + &link->color_collision_detect_work)) { 4947 4942 rcu_read_unlock(); 4948 4943 return; 4949 4944 } ··· 4953 4946 /* queue the color collision detection event every 500 ms in order to 4954 4947 * avoid sending too much netlink messages to userspace. 
4955 4948 */ 4956 - ieee80211_queue_delayed_work(&sdata->local->hw, 4957 - &link->color_collision_detect_work, 4958 - msecs_to_jiffies(500)); 4949 + wiphy_delayed_work_queue(sdata->local->hw.wiphy, 4950 + &link->color_collision_detect_work, 4951 + msecs_to_jiffies(500)); 4959 4952 4960 4953 rcu_read_unlock(); 4961 4954 }
+6 -4
net/mac80211/ieee80211_i.h
··· 892 892 /* temporary data for search algorithm etc. */ 893 893 struct ieee80211_chan_req req; 894 894 895 - struct ieee80211_chanctx_conf conf; 896 - 897 895 bool radar_detected; 896 + 897 + /* MUST be last - ends in a flexible-array member. */ 898 + struct ieee80211_chanctx_conf conf; 898 899 }; 899 900 900 901 struct mac80211_qos_map { ··· 1052 1051 } csa; 1053 1052 1054 1053 struct wiphy_work color_change_finalize_work; 1055 - struct delayed_work color_collision_detect_work; 1054 + struct wiphy_delayed_work color_collision_detect_work; 1056 1055 u64 color_bitmap; 1057 1056 1058 1057 /* context reservation -- protected with wiphy mutex */ ··· 2004 2003 /* color change handling */ 2005 2004 void ieee80211_color_change_finalize_work(struct wiphy *wiphy, 2006 2005 struct wiphy_work *work); 2007 - void ieee80211_color_collision_detection_work(struct work_struct *work); 2006 + void ieee80211_color_collision_detection_work(struct wiphy *wiphy, 2007 + struct wiphy_work *work); 2008 2008 2009 2009 /* interface handling */ 2010 2010 #define MAC80211_SUPPORTED_FEATURES_TX (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | \
+25 -17
net/mac80211/key.c
··· 987 987 } 988 988 } 989 989 990 + static void 991 + ieee80211_key_iter(struct ieee80211_hw *hw, 992 + struct ieee80211_vif *vif, 993 + struct ieee80211_key *key, 994 + void (*iter)(struct ieee80211_hw *hw, 995 + struct ieee80211_vif *vif, 996 + struct ieee80211_sta *sta, 997 + struct ieee80211_key_conf *key, 998 + void *data), 999 + void *iter_data) 1000 + { 1001 + /* skip keys of station in removal process */ 1002 + if (key->sta && key->sta->removed) 1003 + return; 1004 + if (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)) 1005 + return; 1006 + iter(hw, vif, key->sta ? &key->sta->sta : NULL, 1007 + &key->conf, iter_data); 1008 + } 1009 + 990 1010 void ieee80211_iter_keys(struct ieee80211_hw *hw, 991 1011 struct ieee80211_vif *vif, 992 1012 void (*iter)(struct ieee80211_hw *hw, ··· 1025 1005 if (vif) { 1026 1006 sdata = vif_to_sdata(vif); 1027 1007 list_for_each_entry_safe(key, tmp, &sdata->key_list, list) 1028 - iter(hw, &sdata->vif, 1029 - key->sta ? &key->sta->sta : NULL, 1030 - &key->conf, iter_data); 1008 + ieee80211_key_iter(hw, vif, key, iter, iter_data); 1031 1009 } else { 1032 1010 list_for_each_entry(sdata, &local->interfaces, list) 1033 1011 list_for_each_entry_safe(key, tmp, 1034 1012 &sdata->key_list, list) 1035 - iter(hw, &sdata->vif, 1036 - key->sta ? &key->sta->sta : NULL, 1037 - &key->conf, iter_data); 1013 + ieee80211_key_iter(hw, &sdata->vif, key, 1014 + iter, iter_data); 1038 1015 } 1039 1016 } 1040 1017 EXPORT_SYMBOL(ieee80211_iter_keys); ··· 1048 1031 { 1049 1032 struct ieee80211_key *key; 1050 1033 1051 - list_for_each_entry_rcu(key, &sdata->key_list, list) { 1052 - /* skip keys of station in removal process */ 1053 - if (key->sta && key->sta->removed) 1054 - continue; 1055 - if (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)) 1056 - continue; 1057 - 1058 - iter(hw, &sdata->vif, 1059 - key->sta ? &key->sta->sta : NULL, 1060 - &key->conf, iter_data); 1061 - } 1034 + list_for_each_entry_rcu(key, &sdata->key_list, list) 1035 + ieee80211_key_iter(hw, &sdata->vif, key, iter, iter_data); 1062 1036 } 1063 1037 1064 1038 void ieee80211_iter_keys_rcu(struct ieee80211_hw *hw,
+4 -3
net/mac80211/link.c
··· 44 44 ieee80211_csa_finalize_work); 45 45 wiphy_work_init(&link->color_change_finalize_work, 46 46 ieee80211_color_change_finalize_work); 47 - INIT_DELAYED_WORK(&link->color_collision_detect_work, 48 - ieee80211_color_collision_detection_work); 47 + wiphy_delayed_work_init(&link->color_collision_detect_work, 48 + ieee80211_color_collision_detection_work); 49 49 INIT_LIST_HEAD(&link->assigned_chanctx_list); 50 50 INIT_LIST_HEAD(&link->reserved_chanctx_list); 51 51 wiphy_delayed_work_init(&link->dfs_cac_timer_work, ··· 75 75 if (link->sdata->vif.type == NL80211_IFTYPE_STATION) 76 76 ieee80211_mgd_stop_link(link); 77 77 78 - cancel_delayed_work_sync(&link->color_collision_detect_work); 78 + wiphy_delayed_work_cancel(link->sdata->local->hw.wiphy, 79 + &link->color_collision_detect_work); 79 80 wiphy_work_cancel(link->sdata->local->hw.wiphy, 80 81 &link->color_change_finalize_work); 81 82 wiphy_work_cancel(link->sdata->local->hw.wiphy,
+2
net/mac80211/main.c
··· 167 167 } 168 168 169 169 power = ieee80211_chandef_max_power(&chandef); 170 + if (local->user_power_level != IEEE80211_UNSET_POWER_LEVEL) 171 + power = min(local->user_power_level, power); 170 172 171 173 rcu_read_lock(); 172 174 list_for_each_entry_rcu(sdata, &local->interfaces, list) {
+2
net/mptcp/protocol.c
··· 2864 2864 if (unlikely(!net->mib.mptcp_statistics) && !mptcp_mib_alloc(net)) 2865 2865 return -ENOMEM; 2866 2866 2867 + rcu_read_lock(); 2867 2868 ret = mptcp_init_sched(mptcp_sk(sk), 2868 2869 mptcp_sched_find(mptcp_get_scheduler(net))); 2870 + rcu_read_unlock(); 2869 2871 if (ret) 2870 2872 return ret; 2871 2873
+3
net/netfilter/nft_payload.c
··· 904 904 ((priv->base != NFT_PAYLOAD_TRANSPORT_HEADER && 905 905 priv->base != NFT_PAYLOAD_INNER_HEADER) || 906 906 skb->ip_summed != CHECKSUM_PARTIAL)) { 907 + if (offset + priv->len > skb->len) 908 + goto err; 909 + 907 910 fsum = skb_checksum(skb, offset, priv->len, 0); 908 911 tsum = csum_partial(src, priv->len, 0); 909 912
+1 -1
net/netfilter/x_tables.c
··· 1269 1269 1270 1270 /* and once again: */ 1271 1271 list_for_each_entry(t, &xt_net->tables[af], list) 1272 - if (strcmp(t->name, name) == 0) 1272 + if (strcmp(t->name, name) == 0 && owner == t->me) 1273 1273 return t; 1274 1274 1275 1275 module_put(owner);
+1
net/sched/cls_api.c
··· 1518 1518 return 0; 1519 1519 1520 1520 err_dev_insert: 1521 + tcf_block_offload_unbind(block, q, ei); 1521 1522 err_block_offload_bind: 1522 1523 tcf_chain0_head_change_cb_del(block, ei); 1523 1524 err_chain0_head_change_cb_add:
+1 -1
net/sched/sch_api.c
··· 791 791 drops = max_t(int, n, 0); 792 792 rcu_read_lock(); 793 793 while ((parentid = sch->parent)) { 794 - if (TC_H_MAJ(parentid) == TC_H_MAJ(TC_H_INGRESS)) 794 + if (parentid == TC_H_ROOT) 795 795 break; 796 796 797 797 if (sch->flags & TCQ_F_NOPARENT)
+8
net/wireless/core.c
··· 1280 1280 /* deleted from the list, so can't be found from nl80211 any more */ 1281 1281 cqm_config = rcu_access_pointer(wdev->cqm_config); 1282 1282 kfree_rcu(cqm_config, rcu_head); 1283 + RCU_INIT_POINTER(wdev->cqm_config, NULL); 1283 1284 1284 1285 /* 1285 1286 * Ensure that all events have been processed and ··· 1750 1749 wiphy_work_flush(wiphy, &dwork->work); 1751 1750 } 1752 1751 EXPORT_SYMBOL_GPL(wiphy_delayed_work_flush); 1752 + 1753 + bool wiphy_delayed_work_pending(struct wiphy *wiphy, 1754 + struct wiphy_delayed_work *dwork) 1755 + { 1756 + return timer_pending(&dwork->timer); 1757 + } 1758 + EXPORT_SYMBOL_GPL(wiphy_delayed_work_pending); 1753 1759 1754 1760 static int __init cfg80211_init(void) 1755 1761 {
+4
net/wireless/scan.c
··· 3051 3051 freq = ieee80211_channel_to_freq_khz(ap_info->channel, band); 3052 3052 data.channel = ieee80211_get_channel_khz(wiphy, freq); 3053 3053 3054 + /* Skip if RNR element specifies an unsupported channel */ 3055 + if (!data.channel) 3056 + continue; 3057 + 3054 3058 /* Skip if BSS entry generated from MBSSID or DIRECT source 3055 3059 * frame data available already. 3056 3060 */
+3
sound/firewire/amdtp-stream.c
··· 172 172 step = max(step, amdtp_syt_intervals[i]); 173 173 } 174 174 175 + if (step == 0) 176 + return -EINVAL; 177 + 175 178 t.min = roundup(s->min, step); 176 179 t.max = rounddown(s->max, step); 177 180 t.integer = 1;
+4
sound/hda/intel-dsp-config.c
··· 723 723 /* BayTrail */ 724 724 { 725 725 .flags = FLAG_SST_OR_SOF_BYT, 726 + .acpi_hid = "LPE0F28", 727 + }, 728 + { 729 + .flags = FLAG_SST_OR_SOF_BYT, 726 730 .acpi_hid = "80860F28", 727 731 }, 728 732 /* CherryTrail */
+1 -1
sound/pci/hda/Kconfig
··· 198 198 depends on SND_SOC 199 199 select SND_SOC_TAS2781_COMLIB 200 200 select SND_SOC_TAS2781_FMWLIB 201 - select CRC32_SARWATE 201 + select CRC32 202 202 help 203 203 Say Y or M here to include TAS2781 I2C HD-audio side codec support 204 204 in snd-hda-intel driver, such as ALC287.
+59 -24
sound/pci/hda/patch_realtek.c
··· 3868 3868 3869 3869 hp_pin_sense = snd_hda_jack_detect(codec, hp_pin); 3870 3870 3871 - if (hp_pin_sense) 3871 + if (hp_pin_sense) { 3872 3872 msleep(2); 3873 3873 3874 - snd_hda_codec_write(codec, hp_pin, 0, 3875 - AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); 3874 + snd_hda_codec_write(codec, hp_pin, 0, 3875 + AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT); 3876 3876 3877 - if (hp_pin_sense) 3878 - msleep(85); 3877 + msleep(75); 3879 3878 3880 - snd_hda_codec_write(codec, hp_pin, 0, 3881 - AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT); 3882 - 3883 - if (hp_pin_sense) 3884 - msleep(100); 3879 + snd_hda_codec_write(codec, hp_pin, 0, 3880 + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE); 3881 + msleep(75); 3882 + } 3885 3883 } 3886 3884 3887 3885 static void alc_default_shutup(struct hda_codec *codec) ··· 3895 3897 3896 3898 hp_pin_sense = snd_hda_jack_detect(codec, hp_pin); 3897 3899 3898 - if (hp_pin_sense) 3900 + if (hp_pin_sense) { 3899 3901 msleep(2); 3900 3902 3901 - snd_hda_codec_write(codec, hp_pin, 0, 3902 - AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); 3903 - 3904 - if (hp_pin_sense) 3905 - msleep(85); 3906 - 3907 - if (!spec->no_shutup_pins) 3908 3903 snd_hda_codec_write(codec, hp_pin, 0, 3909 - AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0); 3904 + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); 3910 3905 3911 - if (hp_pin_sense) 3912 - msleep(100); 3906 + msleep(75); 3913 3907 3908 + if (!spec->no_shutup_pins) 3909 + snd_hda_codec_write(codec, hp_pin, 0, 3910 + AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0); 3911 + 3912 + msleep(75); 3913 + } 3914 3914 alc_auto_setup_eapd(codec, false); 3915 3915 alc_shutup_pins(codec); 3916 3916 } ··· 7521 7525 ALC286_FIXUP_SONY_MIC_NO_PRESENCE, 7522 7526 ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT, 7523 7527 ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, 7528 + ALC269_FIXUP_DELL1_LIMIT_INT_MIC_BOOST, 7524 7529 ALC269_FIXUP_DELL2_MIC_NO_PRESENCE, 7525 7530 ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, 7526 7531 ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, ··· 7552 7555 ALC290_FIXUP_SUBWOOFER_HSJACK, 
7553 7556 ALC269_FIXUP_THINKPAD_ACPI, 7554 7557 ALC269_FIXUP_DMIC_THINKPAD_ACPI, 7558 + ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13, 7555 7559 ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO, 7556 7560 ALC255_FIXUP_ACER_MIC_NO_PRESENCE, 7557 7561 ALC255_FIXUP_ASUS_MIC_NO_PRESENCE, 7558 7562 ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 7563 + ALC255_FIXUP_DELL1_LIMIT_INT_MIC_BOOST, 7559 7564 ALC255_FIXUP_DELL2_MIC_NO_PRESENCE, 7560 7565 ALC255_FIXUP_HEADSET_MODE, 7561 7566 ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC, ··· 7648 7649 ALC286_FIXUP_ACER_AIO_HEADSET_MIC, 7649 7650 ALC256_FIXUP_ASUS_HEADSET_MIC, 7650 7651 ALC256_FIXUP_ASUS_MIC_NO_PRESENCE, 7652 + ALC255_FIXUP_PREDATOR_SUBWOOFER, 7651 7653 ALC299_FIXUP_PREDATOR_SPK, 7652 7654 ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, 7653 7655 ALC289_FIXUP_DELL_SPK1, ··· 7999 7999 .type = HDA_FIXUP_FUNC, 8000 8000 .v.func = alc269_fixup_pincfg_U7x7_headset_mic, 8001 8001 }, 8002 + [ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13] = { 8003 + .type = HDA_FIXUP_PINS, 8004 + .v.pins = (const struct hda_pintbl[]) { 8005 + { 0x14, 0x90170151 }, /* use as internal speaker (LFE) */ 8006 + { 0x1b, 0x90170152 }, /* use as internal speaker (back) */ 8007 + { } 8008 + }, 8009 + .chained = true, 8010 + .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST 8011 + }, 8002 8012 [ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO] = { 8003 8013 .type = HDA_FIXUP_PINS, 8004 8014 .v.pins = (const struct hda_pintbl[]) { ··· 8126 8116 }, 8127 8117 .chained = true, 8128 8118 .chain_id = ALC269_FIXUP_HEADSET_MODE 8119 + }, 8120 + [ALC269_FIXUP_DELL1_LIMIT_INT_MIC_BOOST] = { 8121 + .type = HDA_FIXUP_FUNC, 8122 + .v.func = alc269_fixup_limit_int_mic_boost, 8123 + .chained = true, 8124 + .chain_id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE 8129 8125 }, 8130 8126 [ALC269_FIXUP_DELL2_MIC_NO_PRESENCE] = { 8131 8127 .type = HDA_FIXUP_PINS, ··· 8412 8396 }, 8413 8397 .chained = true, 8414 8398 .chain_id = ALC255_FIXUP_HEADSET_MODE 8399 + }, 8400 + [ALC255_FIXUP_DELL1_LIMIT_INT_MIC_BOOST] = { 8401 + .type = HDA_FIXUP_FUNC, 8402 + .v.func = alc269_fixup_limit_int_mic_boost, 8403 + .chained = true, 8404 + .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE 8415 8405 }, 8416 8406 [ALC255_FIXUP_DELL2_MIC_NO_PRESENCE] = { 8417 8407 .type = HDA_FIXUP_PINS, ··· 9084 9062 }, 9085 9063 .chained = true, 9086 9064 .chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE 9065 + }, 9066 + [ALC255_FIXUP_PREDATOR_SUBWOOFER] = { 9067 + .type = HDA_FIXUP_PINS, 9068 + .v.pins = (const struct hda_pintbl[]) { 9069 + { 0x17, 0x90170151 }, /* use as internal speaker (LFE) */ 9070 + { 0x1b, 0x90170152 } /* use as internal speaker (back) */ 9071 + } 9087 9072 }, 9088 9073 [ALC299_FIXUP_PREDATOR_SPK] = { 9089 9074 .type = HDA_FIXUP_PINS, ··· 10179 10150 SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE), 10180 10151 SND_PCI_QUIRK(0x1025, 0x1166, "Acer Veriton N4640G", ALC269_FIXUP_LIFEBOOK), 10181 10152 SND_PCI_QUIRK(0x1025, 0x1167, "Acer Veriton N6640G", ALC269_FIXUP_LIFEBOOK), 10153 + SND_PCI_QUIRK(0x1025, 0x1177, "Acer Predator G9-593", ALC255_FIXUP_PREDATOR_SUBWOOFER), 10154 + SND_PCI_QUIRK(0x1025, 0x1178, "Acer Predator G9-593", ALC255_FIXUP_PREDATOR_SUBWOOFER), 10182 10155 SND_PCI_QUIRK(0x1025, 0x1246, "Acer Predator Helios 500", ALC299_FIXUP_PREDATOR_SPK), 10183 10156 SND_PCI_QUIRK(0x1025, 0x1247, "Acer vCopperbox", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS), 10184 10157 SND_PCI_QUIRK(0x1025, 0x1248, "Acer Veriton N4660G", ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE), ··· 10750 10719 SND_PCI_QUIRK(0x1558, 0x1404, "Clevo N150CU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10751 10720 SND_PCI_QUIRK(0x1558, 0x14a1, "Clevo L141MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10752 10721 SND_PCI_QUIRK(0x1558, 0x2624, "Clevo L240TU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10722 + SND_PCI_QUIRK(0x1558, 0x28c1, "Clevo V370VND", ALC2XX_FIXUP_HEADSET_MIC), 10753 10723 SND_PCI_QUIRK(0x1558, 0x4018, "Clevo NV40M[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10754 10724 SND_PCI_QUIRK(0x1558, 0x4019, "Clevo NV40MZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10755 10725 SND_PCI_QUIRK(0x1558, 0x4020, "Clevo NV40MB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 11008 10976 SND_PCI_QUIRK(0x1d05, 0x115c, "TongFang GMxTGxx", ALC269_FIXUP_NO_SHUTUP), 11009 10977 SND_PCI_QUIRK(0x1d05, 0x121b, "TongFang GMxAGxx", ALC269_FIXUP_NO_SHUTUP), 11010 10978 SND_PCI_QUIRK(0x1d05, 0x1387, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC), 10979 + SND_PCI_QUIRK(0x1d05, 0x1409, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC), 11011 10980 SND_PCI_QUIRK(0x1d17, 0x3288, "Haier Boyue G42", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS), 11012 10981 SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC), 11013 10982 SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE), ··· 11016 10983 SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC), 11017 10984 SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC), 11018 10985 SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 10986 + SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13), 11019 10987 SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO), 11020 10988 SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME), 11021 10989 SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC), ··· 11104 11070 {.id = ALC269_FIXUP_DELL2_MIC_NO_PRESENCE, .name = "dell-headset-dock"}, 11105 11071 {.id = ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, .name = "dell-headset3"}, 11106 11072 {.id = ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, .name = "dell-headset4"}, 11073 + {.id = ALC269_FIXUP_DELL4_MIC_NO_PRESENCE_QUIET, .name = "dell-headset4-quiet"}, 11107 11074 {.id = ALC283_FIXUP_CHROME_BOOK, .name = "alc283-dac-wcaps"}, 11108 11075 {.id = ALC283_FIXUP_SENSE_COMBO_JACK, .name = "alc283-sense-combo"}, 11109 11076 {.id = ALC292_FIXUP_TPT440_DOCK, .name = "tpt440-dock"}, ···
11659 11624 SND_HDA_PIN_QUIRK(0x10ec0289, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, 11660 11625 {0x19, 0x40000000}, 11661 11626 {0x1b, 0x40000000}), 11662 - SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, 11627 + SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE_QUIET, 11663 11628 {0x19, 0x40000000}, 11664 11629 {0x1b, 0x40000000}), 11665 11630 SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 11666 11631 {0x19, 0x40000000}, 11667 11632 {0x1a, 0x40000000}), 11668 - SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 11633 + SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_LIMIT_INT_MIC_BOOST, 11669 11634 {0x19, 0x40000000}, 11670 11635 {0x1a, 0x40000000}), 11671 - SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB, 11636 + SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC269_FIXUP_DELL1_LIMIT_INT_MIC_BOOST, 11672 11637 {0x19, 0x40000000}, 11673 11638 {0x1a, 0x40000000}), 11674 11639 SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC2XX_FIXUP_HEADSET_MIC,
+14
sound/soc/amd/yc/acp6x-mach.c
··· 329 329 .driver_data = &acp6x_card, 330 330 .matches = { 331 331 DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 332 + DMI_MATCH(DMI_PRODUCT_NAME, "E1404FA"), 333 + } 334 + }, 335 + { 336 + .driver_data = &acp6x_card, 337 + .matches = { 338 + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 332 339 DMI_MATCH(DMI_PRODUCT_NAME, "E1504FA"), 333 340 } 334 341 }, ··· 344 337 .matches = { 345 338 DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 346 339 DMI_MATCH(DMI_PRODUCT_NAME, "M7600RE"), 340 + } 341 + }, 342 + { 343 + .driver_data = &acp6x_card, 344 + .matches = { 345 + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 346 + DMI_MATCH(DMI_PRODUCT_NAME, "M3502RA"), 347 347 } 348 348 }, 349 349 {
+1 -1
sound/soc/codecs/aw88399.c
··· 656 656 if (ret) 657 657 return ret; 658 658 if (!(reg_val & (~AW88399_WDT_CNT_MASK))) 659 - ret = -EPERM; 659 + return -EPERM; 660 660 661 661 return 0; 662 662 }
+5 -2
sound/soc/codecs/cs42l51.c
··· 747 747 748 748 cs42l51->reset_gpio = devm_gpiod_get_optional(dev, "reset", 749 749 GPIOD_OUT_LOW); 750 - if (IS_ERR(cs42l51->reset_gpio)) 751 - return PTR_ERR(cs42l51->reset_gpio); 750 + if (IS_ERR(cs42l51->reset_gpio)) { 751 + ret = PTR_ERR(cs42l51->reset_gpio); 752 + goto error; 753 + } 752 754 753 755 if (cs42l51->reset_gpio) { 754 756 dev_dbg(dev, "Release reset gpio\n"); ··· 782 780 return 0; 783 781 784 782 error: 783 + gpiod_set_value_cansleep(cs42l51->reset_gpio, 1); 785 784 regulator_bulk_disable(ARRAY_SIZE(cs42l51->supplies), 786 785 cs42l51->supplies); 787 786 return ret;
+7 -8
sound/soc/codecs/lpass-rx-macro.c
··· 202 202 #define CDC_RX_RXn_RX_PATH_SEC3(rx, n) (0x042c + rx->rxn_reg_stride * n) 203 203 #define CDC_RX_RX0_RX_PATH_SEC4 (0x0430) 204 204 #define CDC_RX_RX0_RX_PATH_SEC7 (0x0434) 205 - #define CDC_RX_RXn_RX_PATH_SEC7(rx, n) (0x0434 + rx->rxn_reg_stride * n) 205 + #define CDC_RX_RXn_RX_PATH_SEC7(rx, n) \ 206 + (0x0434 + (rx->rxn_reg_stride * n) + ((n > 1) ? rx->rxn_reg_stride2 : 0)) 206 207 #define CDC_RX_DSM_OUT_DELAY_SEL_MASK GENMASK(2, 0) 207 208 #define CDC_RX_DSM_OUT_DELAY_TWO_SAMPLE 0x2 208 209 #define CDC_RX_RX0_RX_PATH_MIX_SEC0 (0x0438) 209 210 #define CDC_RX_RX0_RX_PATH_MIX_SEC1 (0x043C) 210 - #define CDC_RX_RXn_RX_PATH_DSM_CTL(rx, n) (0x0440 + rx->rxn_reg_stride * n) 211 + #define CDC_RX_RXn_RX_PATH_DSM_CTL(rx, n) \ 212 + (0x0440 + (rx->rxn_reg_stride * n) + ((n > 1) ? rx->rxn_reg_stride2 : 0)) 211 213 #define CDC_RX_RXn_DSM_CLK_EN_MASK BIT(0) 212 214 #define CDC_RX_RX0_RX_PATH_DSM_CTL (0x0440) 213 215 #define CDC_RX_RX0_RX_PATH_DSM_DATA1 (0x0444) ··· 647 645 int rx_mclk_cnt; 648 646 enum lpass_codec_version codec_version; 649 647 int rxn_reg_stride; 648 + int rxn_reg_stride2; 650 649 bool is_ear_mode_on; 651 650 bool hph_pwr_mode; 652 651 bool hph_hd2_mode; ··· 1932 1929 CDC_RX_PATH_PGA_MUTE_MASK, 0x0); 1933 1930 } 1934 1931 1935 - if (j == INTERP_AUX) 1936 - dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(rx, 2); 1937 - 1938 1932 int_mux_cfg0 = CDC_RX_INP_MUX_RX_INT0_CFG0 + j * 8; 1939 1933 int_mux_cfg1 = int_mux_cfg0 + 4; 1940 1934 int_mux_cfg0_val = snd_soc_component_read(component, int_mux_cfg0); ··· 2702 2702 2703 2703 main_reg = CDC_RX_RXn_RX_PATH_CTL(rx, interp_idx); 2704 2704 dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(rx, interp_idx); 2705 - if (interp_idx == INTERP_AUX) 2706 - dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(rx, 2); 2707 - 2708 2705 rx_cfg2_reg = CDC_RX_RXn_RX_PATH_CFG2(rx, interp_idx); 2709 2706 2710 2707 if (SND_SOC_DAPM_EVENT_ON(event)) { ··· 3818 3821 case LPASS_CODEC_VERSION_2_0: 3819 3822 case LPASS_CODEC_VERSION_2_1: 3820 3823 rx->rxn_reg_stride = 0x80; 3824 + rx->rxn_reg_stride2 = 0xc; 3821 3825 def_count = ARRAY_SIZE(rx_defaults) + ARRAY_SIZE(rx_pre_2_5_defaults); 3822 3826 reg_defaults = kmalloc_array(def_count, sizeof(struct reg_default), GFP_KERNEL); 3823 3827 if (!reg_defaults) ··· 3832 3834 case LPASS_CODEC_VERSION_2_7: 3833 3835 case LPASS_CODEC_VERSION_2_8: 3834 3836 rx->rxn_reg_stride = 0xc0; 3837 + rx->rxn_reg_stride2 = 0x0; 3835 3838 def_count = ARRAY_SIZE(rx_defaults) + ARRAY_SIZE(rx_2_5_defaults); 3836 3839 reg_defaults = kmalloc_array(def_count, sizeof(struct reg_default), GFP_KERNEL); 3837 3840 if (!reg_defaults)
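The lpass-rx-macro change above replaces the INTERP_AUX special case with a second stride that only channels above 1 add: 0xc on codec versions 2.0/2.1, zero on 2.5 and later. A hypothetical standalone model of the new offset arithmetic, mirroring the CDC_RX_RXn_RX_PATH_DSM_CTL macro (the struct and function names here are invented for the sketch):

```c
#include <assert.h>

/* Stand-in for the driver state: a base per-channel register stride
 * plus an extra gap inserted after channel 1 on older codec versions. */
struct rx_strides {
    int stride;   /* rxn_reg_stride  */
    int stride2;  /* rxn_reg_stride2 */
};

/* Mirrors CDC_RX_RXn_RX_PATH_DSM_CTL(rx, n) from the diff: base 0x0440,
 * n strides, and the extra stride2 gap once n goes past channel 1. */
static int dsm_ctl_offset(const struct rx_strides *rx, int n)
{
    return 0x0440 + rx->stride * n + (n > 1 ? rx->stride2 : 0);
}
```

With stride 0x80 and stride2 0xc (the v2.0/2.1 case), channel 2 lands at 0x0440 + 0x100 + 0xc rather than 0x0540, which is the layout quirk the old INTERP_AUX special case was papering over.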
+1
sound/soc/codecs/max98388.c
··· 763 763 addr = MAX98388_R2044_PCM_TX_CTRL1 + (cnt / 8); 764 764 bits = cnt % 8; 765 765 regmap_update_bits(max98388->regmap, addr, bits, bits); 766 + slot_found++; 766 767 if (slot_found >= MAX_NUM_CH) 767 768 break; 768 769 }
+2 -2
sound/soc/codecs/pcm3060-i2c.c
··· 2 2 // 3 3 // PCM3060 I2C driver 4 4 // 5 - // Copyright (C) 2018 Kirill Marinushkin <kmarinushkin@birdec.com> 5 + // Copyright (C) 2018 Kirill Marinushkin <k.marinushkin@gmail.com> 6 6 7 7 #include <linux/i2c.h> 8 8 #include <linux/module.h> ··· 55 55 module_i2c_driver(pcm3060_i2c_driver); 56 56 57 57 MODULE_DESCRIPTION("PCM3060 I2C driver"); 58 - MODULE_AUTHOR("Kirill Marinushkin <kmarinushkin@birdec.com>"); 58 + MODULE_AUTHOR("Kirill Marinushkin <k.marinushkin@gmail.com>"); 59 59 MODULE_LICENSE("GPL v2");
+2 -2
sound/soc/codecs/pcm3060-spi.c
··· 2 2 // 3 3 // PCM3060 SPI driver 4 4 // 5 - // Copyright (C) 2018 Kirill Marinushkin <kmarinushkin@birdec.com> 5 + // Copyright (C) 2018 Kirill Marinushkin <k.marinushkin@gmail.com> 6 6 7 7 #include <linux/module.h> 8 8 #include <linux/spi/spi.h> ··· 55 55 module_spi_driver(pcm3060_spi_driver); 56 56 57 57 MODULE_DESCRIPTION("PCM3060 SPI driver"); 58 - MODULE_AUTHOR("Kirill Marinushkin <kmarinushkin@birdec.com>"); 58 + MODULE_AUTHOR("Kirill Marinushkin <k.marinushkin@gmail.com>"); 59 59 MODULE_LICENSE("GPL v2");
+2 -2
sound/soc/codecs/pcm3060.c
··· 2 2 // 3 3 // PCM3060 codec driver 4 4 // 5 - // Copyright (C) 2018 Kirill Marinushkin <kmarinushkin@birdec.com> 5 + // Copyright (C) 2018 Kirill Marinushkin <k.marinushkin@gmail.com> 6 6 7 7 #include <linux/module.h> 8 8 #include <sound/pcm_params.h> ··· 343 343 EXPORT_SYMBOL(pcm3060_probe); 344 344 345 345 MODULE_DESCRIPTION("PCM3060 codec driver"); 346 - MODULE_AUTHOR("Kirill Marinushkin <kmarinushkin@birdec.com>"); 346 + MODULE_AUTHOR("Kirill Marinushkin <k.marinushkin@gmail.com>"); 347 347 MODULE_LICENSE("GPL v2");
+1 -1
sound/soc/codecs/pcm3060.h
··· 2 2 /* 3 3 * PCM3060 codec driver 4 4 * 5 - * Copyright (C) 2018 Kirill Marinushkin <kmarinushkin@birdec.com> 5 + * Copyright (C) 2018 Kirill Marinushkin <k.marinushkin@gmail.com> 6 6 */ 7 7 8 8 #ifndef _SND_SOC_PCM3060_H
+15 -12
sound/soc/codecs/rt5640.c
··· 2419 2419 return IRQ_HANDLED; 2420 2420 } 2421 2421 2422 - static void rt5640_cancel_work(void *data) 2422 + static void rt5640_disable_irq_and_cancel_work(void *data) 2423 2423 { 2424 2424 struct rt5640_priv *rt5640 = data; 2425 + 2426 + if (rt5640->jd_gpio_irq_requested) { 2427 + free_irq(rt5640->jd_gpio_irq, rt5640); 2428 + rt5640->jd_gpio_irq_requested = false; 2429 + } 2430 + 2431 + if (rt5640->irq_requested) { 2432 + free_irq(rt5640->irq, rt5640); 2433 + rt5640->irq_requested = false; 2434 + } 2425 2435 2426 2436 cancel_delayed_work_sync(&rt5640->jack_work); 2427 2437 cancel_delayed_work_sync(&rt5640->bp_work); ··· 2473 2463 if (!rt5640->jack) 2474 2464 return; 2475 2465 2476 - if (rt5640->jd_gpio_irq_requested) 2477 - free_irq(rt5640->jd_gpio_irq, rt5640); 2478 - 2479 - if (rt5640->irq_requested) 2480 - free_irq(rt5640->irq, rt5640); 2481 - 2482 - rt5640_cancel_work(rt5640); 2466 + rt5640_disable_irq_and_cancel_work(rt5640); 2483 2467 2484 2468 if (rt5640->jack->status & SND_JACK_MICROPHONE) { 2485 2469 rt5640_disable_micbias1_ovcd_irq(component); ··· 2481 2477 snd_soc_jack_report(rt5640->jack, 0, SND_JACK_BTN_0); 2482 2478 } 2483 2479 2484 - rt5640->jd_gpio_irq_requested = false; 2485 - rt5640->irq_requested = false; 2486 2480 rt5640->jd_gpio = NULL; 2487 2481 rt5640->jack = NULL; 2488 2482 } ··· 2800 2798 if (rt5640->jack) { 2801 2799 /* disable jack interrupts during system suspend */ 2802 2800 disable_irq(rt5640->irq); 2803 - rt5640_cancel_work(rt5640); 2801 + cancel_delayed_work_sync(&rt5640->jack_work); 2802 + cancel_delayed_work_sync(&rt5640->bp_work); 2804 2803 } 2805 2804 2806 2805 snd_soc_component_force_bias_level(component, SND_SOC_BIAS_OFF); ··· 3035 3032 INIT_DELAYED_WORK(&rt5640->jack_work, rt5640_jack_work); 3036 3033 3037 3034 /* Make sure work is stopped on probe-error / remove */ 3038 - ret = devm_add_action_or_reset(&i2c->dev, rt5640_cancel_work, rt5640); 3035 + ret = devm_add_action_or_reset(&i2c->dev, 
rt5640_disable_irq_and_cancel_work, rt5640); 3039 3036 if (ret) 3040 3037 return ret; 3041 3038
+1 -1
sound/soc/codecs/rt722-sdca-sdw.c
··· 253 253 } 254 254 255 255 /* set the timeout values */ 256 - prop->clk_stop_timeout = 200; 256 + prop->clk_stop_timeout = 900; 257 257 258 258 /* wake-up event */ 259 259 prop->wake_capable = 1;
+10 -2
sound/soc/codecs/wcd937x.c
··· 715 715 struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm); 716 716 struct wcd937x_priv *wcd937x = snd_soc_component_get_drvdata(component); 717 717 int hph_mode = wcd937x->hph_mode; 718 + u8 val; 718 719 719 720 switch (event) { 720 721 case SND_SOC_DAPM_PRE_PMU: 722 + val = WCD937X_DIGITAL_PDM_WD_CTL2_EN | 723 + WCD937X_DIGITAL_PDM_WD_CTL2_TIMEOUT_SEL | 724 + WCD937X_DIGITAL_PDM_WD_CTL2_HOLD_OFF; 721 725 snd_soc_component_update_bits(component, 722 726 WCD937X_DIGITAL_PDM_WD_CTL2, 723 - BIT(0), BIT(0)); 727 + WCD937X_DIGITAL_PDM_WD_CTL2_MASK, 728 + val); 724 729 break; 725 730 case SND_SOC_DAPM_POST_PMU: 726 731 usleep_range(1000, 1010); ··· 746 741 hph_mode); 747 742 snd_soc_component_update_bits(component, 748 743 WCD937X_DIGITAL_PDM_WD_CTL2, 749 - BIT(0), 0x00); 744 + WCD937X_DIGITAL_PDM_WD_CTL2_MASK, 745 + 0x00); 750 746 break; 751 747 } 752 748 ··· 2054 2048 SOC_SINGLE_EXT("HPHL Switch", WCD937X_HPH_L, 0, 1, 0, 2055 2049 wcd937x_get_swr_port, wcd937x_set_swr_port), 2056 2050 SOC_SINGLE_EXT("HPHR Switch", WCD937X_HPH_R, 0, 1, 0, 2051 + wcd937x_get_swr_port, wcd937x_set_swr_port), 2052 + SOC_SINGLE_EXT("LO Switch", WCD937X_LO, 0, 1, 0, 2057 2053 wcd937x_get_swr_port, wcd937x_set_swr_port), 2058 2054 2059 2055 SOC_SINGLE_EXT("ADC1 Switch", WCD937X_ADC1, 1, 1, 0,
+4
sound/soc/codecs/wcd937x.h
··· 391 391 #define WCD937X_DIGITAL_PDM_WD_CTL0 0x3465 392 392 #define WCD937X_DIGITAL_PDM_WD_CTL1 0x3466 393 393 #define WCD937X_DIGITAL_PDM_WD_CTL2 0x3467 394 + #define WCD937X_DIGITAL_PDM_WD_CTL2_HOLD_OFF BIT(2) 395 + #define WCD937X_DIGITAL_PDM_WD_CTL2_TIMEOUT_SEL BIT(1) 396 + #define WCD937X_DIGITAL_PDM_WD_CTL2_EN BIT(0) 397 + #define WCD937X_DIGITAL_PDM_WD_CTL2_MASK GENMASK(2, 0) 394 398 #define WCD937X_DIGITAL_INTR_MODE 0x346A 395 399 #define WCD937X_DIGITAL_INTR_MASK_0 0x346B 396 400 #define WCD937X_DIGITAL_INTR_MASK_1 0x346C
+2 -2
sound/soc/fsl/fsl_esai.c
··· 119 119 dev_dbg(&pdev->dev, "isr: Transmission Initialized\n"); 120 120 121 121 if (esr & ESAI_ESR_RFF_MASK) 122 - dev_warn(&pdev->dev, "isr: Receiving overrun\n"); 122 + dev_dbg(&pdev->dev, "isr: Receiving overrun\n"); 123 123 124 124 if (esr & ESAI_ESR_TFE_MASK) 125 - dev_warn(&pdev->dev, "isr: Transmission underrun\n"); 125 + dev_dbg(&pdev->dev, "isr: Transmission underrun\n"); 126 126 127 127 if (esr & ESAI_ESR_TLS_MASK) 128 128 dev_dbg(&pdev->dev, "isr: Just transmitted the last slot\n");
+80 -1
sound/soc/fsl/fsl_micfil.c
··· 28 28 29 29 #define MICFIL_OSR_DEFAULT 16 30 30 31 + #define MICFIL_NUM_RATES 7 32 + #define MICFIL_CLK_SRC_NUM 3 33 + /* clock source ids */ 34 + #define MICFIL_AUDIO_PLL1 0 35 + #define MICFIL_AUDIO_PLL2 1 36 + #define MICFIL_CLK_EXT3 2 37 + 31 38 enum quality { 32 39 QUALITY_HIGH, 33 40 QUALITY_MEDIUM, ··· 52 45 struct clk *mclk; 53 46 struct clk *pll8k_clk; 54 47 struct clk *pll11k_clk; 48 + struct clk *clk_src[MICFIL_CLK_SRC_NUM]; 55 49 struct snd_dmaengine_dai_dma_data dma_params_rx; 56 50 struct sdma_peripheral_config sdmacfg; 57 51 struct snd_soc_card *card; 52 + struct snd_pcm_hw_constraint_list constraint_rates; 53 + unsigned int constraint_rates_list[MICFIL_NUM_RATES]; 58 54 unsigned int dataline; 59 55 char name[32]; 60 56 int irq[MICFIL_IRQ_LINES]; ··· 77 67 bool imx; 78 68 bool use_edma; 79 69 bool use_verid; 70 + bool volume_sx; 80 71 u64 formats; 81 72 }; 82 73 ··· 87 76 .fifo_depth = 8, 88 77 .dataline = 0xf, 89 78 .formats = SNDRV_PCM_FMTBIT_S16_LE, 79 + .volume_sx = true, 90 80 }; 91 81 92 82 static struct fsl_micfil_soc_data fsl_micfil_imx8mp = { ··· 96 84 .fifo_depth = 32, 97 85 .dataline = 0xf, 98 86 .formats = SNDRV_PCM_FMTBIT_S32_LE, 87 + .volume_sx = false, 99 88 }; 100 89 101 90 static struct fsl_micfil_soc_data fsl_micfil_imx93 = { ··· 107 94 .formats = SNDRV_PCM_FMTBIT_S32_LE, 108 95 .use_edma = true, 109 96 .use_verid = true, 97 + .volume_sx = false, 110 98 }; 111 99 112 100 static const struct of_device_id fsl_micfil_dt_ids[] = { ··· 331 317 return 0; 332 318 } 333 319 334 - static const struct snd_kcontrol_new fsl_micfil_snd_controls[] = { 320 + static const struct snd_kcontrol_new fsl_micfil_volume_controls[] = { 321 + SOC_SINGLE_TLV("CH0 Volume", REG_MICFIL_OUT_CTRL, 322 + MICFIL_OUTGAIN_CHX_SHIFT(0), 0xF, 0, gain_tlv), 323 + SOC_SINGLE_TLV("CH1 Volume", REG_MICFIL_OUT_CTRL, 324 + MICFIL_OUTGAIN_CHX_SHIFT(1), 0xF, 0, gain_tlv), 325 + SOC_SINGLE_TLV("CH2 Volume", REG_MICFIL_OUT_CTRL, 326 + MICFIL_OUTGAIN_CHX_SHIFT(2), 0xF, 0, 
gain_tlv), 327 + SOC_SINGLE_TLV("CH3 Volume", REG_MICFIL_OUT_CTRL, 328 + MICFIL_OUTGAIN_CHX_SHIFT(3), 0xF, 0, gain_tlv), 329 + SOC_SINGLE_TLV("CH4 Volume", REG_MICFIL_OUT_CTRL, 330 + MICFIL_OUTGAIN_CHX_SHIFT(4), 0xF, 0, gain_tlv), 331 + SOC_SINGLE_TLV("CH5 Volume", REG_MICFIL_OUT_CTRL, 332 + MICFIL_OUTGAIN_CHX_SHIFT(5), 0xF, 0, gain_tlv), 333 + SOC_SINGLE_TLV("CH6 Volume", REG_MICFIL_OUT_CTRL, 334 + MICFIL_OUTGAIN_CHX_SHIFT(6), 0xF, 0, gain_tlv), 335 + SOC_SINGLE_TLV("CH7 Volume", REG_MICFIL_OUT_CTRL, 336 + MICFIL_OUTGAIN_CHX_SHIFT(7), 0xF, 0, gain_tlv), 337 + }; 338 + 339 + static const struct snd_kcontrol_new fsl_micfil_volume_sx_controls[] = { 335 340 SOC_SINGLE_SX_TLV("CH0 Volume", REG_MICFIL_OUT_CTRL, 336 341 MICFIL_OUTGAIN_CHX_SHIFT(0), 0x8, 0xF, gain_tlv), 337 342 SOC_SINGLE_SX_TLV("CH1 Volume", REG_MICFIL_OUT_CTRL, ··· 367 334 MICFIL_OUTGAIN_CHX_SHIFT(6), 0x8, 0xF, gain_tlv), 368 335 SOC_SINGLE_SX_TLV("CH7 Volume", REG_MICFIL_OUT_CTRL, 369 336 MICFIL_OUTGAIN_CHX_SHIFT(7), 0x8, 0xF, gain_tlv), 337 + }; 338 + 339 + static const struct snd_kcontrol_new fsl_micfil_snd_controls[] = { 370 340 SOC_ENUM_EXT("MICFIL Quality Select", 371 341 fsl_micfil_quality_enum, 372 342 micfil_quality_get, micfil_quality_set), ··· 485 449 struct snd_soc_dai *dai) 486 450 { 487 451 struct fsl_micfil *micfil = snd_soc_dai_get_drvdata(dai); 452 + unsigned int rates[MICFIL_NUM_RATES] = {8000, 11025, 16000, 22050, 32000, 44100, 48000}; 453 + int i, j, k = 0; 454 + u64 clk_rate; 488 455 489 456 if (!micfil) { 490 457 dev_err(dai->dev, "micfil dai priv_data not set\n"); 491 458 return -EINVAL; 492 459 } 460 + 461 + micfil->constraint_rates.list = micfil->constraint_rates_list; 462 + micfil->constraint_rates.count = 0; 463 + 464 + for (j = 0; j < MICFIL_NUM_RATES; j++) { 465 + for (i = 0; i < MICFIL_CLK_SRC_NUM; i++) { 466 + clk_rate = clk_get_rate(micfil->clk_src[i]); 467 + if (clk_rate != 0 && do_div(clk_rate, rates[j]) == 0) { 468 + micfil->constraint_rates_list[k++] = rates[j]; 469 + 
micfil->constraint_rates.count++; 470 + break; 471 + } 472 + } 473 + } 474 + 475 + if (micfil->constraint_rates.count > 0) 476 + snd_pcm_hw_constraint_list(substream->runtime, 0, 477 + SNDRV_PCM_HW_PARAM_RATE, 478 + &micfil->constraint_rates); 493 479 494 480 return 0; 495 481 } ··· 859 801 return 0; 860 802 } 861 803 804 + static int fsl_micfil_component_probe(struct snd_soc_component *component) 805 + { 806 + struct fsl_micfil *micfil = snd_soc_component_get_drvdata(component); 807 + 808 + if (micfil->soc->volume_sx) 809 + snd_soc_add_component_controls(component, fsl_micfil_volume_sx_controls, 810 + ARRAY_SIZE(fsl_micfil_volume_sx_controls)); 811 + else 812 + snd_soc_add_component_controls(component, fsl_micfil_volume_controls, 813 + ARRAY_SIZE(fsl_micfil_volume_controls)); 814 + 815 + return 0; 816 + } 817 + 862 818 static const struct snd_soc_dai_ops fsl_micfil_dai_ops = { 863 819 .probe = fsl_micfil_dai_probe, 864 820 .startup = fsl_micfil_startup, ··· 893 821 894 822 static const struct snd_soc_component_driver fsl_micfil_component = { 895 823 .name = "fsl-micfil-dai", 824 + .probe = fsl_micfil_component_probe, 896 825 .controls = fsl_micfil_snd_controls, 897 826 .num_controls = ARRAY_SIZE(fsl_micfil_snd_controls), 898 827 .legacy_dai_naming = 1, ··· 1206 1133 1207 1134 fsl_asoc_get_pll_clocks(&pdev->dev, &micfil->pll8k_clk, 1208 1135 &micfil->pll11k_clk); 1136 + 1137 + micfil->clk_src[MICFIL_AUDIO_PLL1] = micfil->pll8k_clk; 1138 + micfil->clk_src[MICFIL_AUDIO_PLL2] = micfil->pll11k_clk; 1139 + micfil->clk_src[MICFIL_CLK_EXT3] = devm_clk_get(&pdev->dev, "clkext3"); 1140 + if (IS_ERR(micfil->clk_src[MICFIL_CLK_EXT3])) 1141 + micfil->clk_src[MICFIL_CLK_EXT3] = NULL; 1209 1142 1210 1143 /* init regmap */ 1211 1144 regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+55 -9
sound/soc/intel/atom/sst/sst_acpi.c
··· 125 125 .acpi_ipc_irq_index = 0 126 126 }; 127 127 128 + /* For "LPE0F28" ACPI device found on some Android factory OS models */ 129 + static const struct sst_res_info lpe8086_res_info = { 130 + .shim_offset = 0x140000, 131 + .shim_size = 0x000100, 132 + .shim_phy_addr = SST_BYT_SHIM_PHY_ADDR, 133 + .ssp0_offset = 0xa0000, 134 + .ssp0_size = 0x1000, 135 + .dma0_offset = 0x98000, 136 + .dma0_size = 0x4000, 137 + .dma1_offset = 0x9c000, 138 + .dma1_size = 0x4000, 139 + .iram_offset = 0x0c0000, 140 + .iram_size = 0x14000, 141 + .dram_offset = 0x100000, 142 + .dram_size = 0x28000, 143 + .mbox_offset = 0x144000, 144 + .mbox_size = 0x1000, 145 + .acpi_lpe_res_index = 1, 146 + .acpi_ddr_index = 0, 147 + .acpi_ipc_irq_index = 0 148 + }; 149 + 128 150 static struct sst_platform_info byt_rvp_platform_data = { 129 151 .probe_data = &byt_fwparse_info, 130 152 .ipc_info = &byt_ipc_info, ··· 290 268 mach->pdata = &chv_platform_data; 291 269 pdata = mach->pdata; 292 270 293 - ret = kstrtouint(id->id, 16, &dev_id); 294 - if (ret < 0) { 295 - dev_err(dev, "Unique device id conversion error: %d\n", ret); 296 - return ret; 271 + if (!strcmp(id->id, "LPE0F28")) { 272 + struct resource *rsrc; 273 + 274 + /* Use regular BYT SST PCI VID:PID */ 275 + dev_id = 0x80860F28; 276 + byt_rvp_platform_data.res_info = &lpe8086_res_info; 277 + 278 + /* 279 + * The "LPE0F28" ACPI device has separate IO-mem resources for: 280 + * DDR, SHIM, MBOX, IRAM, DRAM, CFG 281 + * None of which covers the entire LPE base address range. 282 + * lpe8086_res_info.acpi_lpe_res_index points to the SHIM. 283 + * Patch this to cover the entire base address range as expected 284 + * by sst_platform_get_resources(). 
285 + */ 286 + rsrc = platform_get_resource(pdev, IORESOURCE_MEM, 287 + pdata->res_info->acpi_lpe_res_index); 288 + if (!rsrc) { 289 + dev_err(dev, "Invalid SHIM base\n"); 290 + return -EIO; 291 + } 292 + rsrc->start -= pdata->res_info->shim_offset; 293 + rsrc->end = rsrc->start + 0x200000 - 1; 294 + } else { 295 + ret = kstrtouint(id->id, 16, &dev_id); 296 + if (ret < 0) { 297 + dev_err(dev, "Unique device id conversion error: %d\n", ret); 298 + return ret; 299 + } 300 + 301 + if (soc_intel_is_byt_cr(pdev)) 302 + byt_rvp_platform_data.res_info = &bytcr_res_info; 297 303 } 298 304 299 305 dev_dbg(dev, "ACPI device id: %x\n", dev_id); ··· 329 279 ret = sst_alloc_drv_context(&ctx, dev, dev_id); 330 280 if (ret < 0) 331 281 return ret; 332 - 333 - if (soc_intel_is_byt_cr(pdev)) { 334 - /* override resource info */ 335 - byt_rvp_platform_data.res_info = &bytcr_res_info; 336 - } 337 282 338 283 /* update machine parameters */ 339 284 mach->mach_params.acpi_ipc_irq_index = ··· 389 344 } 390 345 391 346 static const struct acpi_device_id sst_acpi_ids[] = { 347 + { "LPE0F28", (unsigned long)&snd_soc_acpi_intel_baytrail_machines}, 392 348 { "80860F28", (unsigned long)&snd_soc_acpi_intel_baytrail_machines}, 393 349 { "808622A8", (unsigned long)&snd_soc_acpi_intel_cherrytrail_machines}, 394 350 { },
+2 -1
sound/soc/intel/avs/core.c
··· 28 28 #include "avs.h" 29 29 #include "cldma.h" 30 30 #include "messages.h" 31 + #include "pcm.h" 31 32 32 33 static u32 pgctl_mask = AZX_PGCTL_LSRMD_MASK; 33 34 module_param(pgctl_mask, uint, 0444); ··· 248 247 static void hdac_update_stream(struct hdac_bus *bus, struct hdac_stream *stream) 249 248 { 250 249 if (stream->substream) { 251 - snd_pcm_period_elapsed(stream->substream); 250 + avs_period_elapsed(stream->substream); 252 251 } else if (stream->cstream) { 253 252 u64 buffer_size = stream->cstream->runtime->buffer_size; 254 253
+19
sound/soc/intel/avs/pcm.c
··· 16 16 #include <sound/soc-component.h> 17 17 #include "avs.h" 18 18 #include "path.h" 19 + #include "pcm.h" 19 20 #include "topology.h" 20 21 #include "../../codecs/hda.h" 21 22 ··· 31 30 struct hdac_ext_stream *host_stream; 32 31 }; 33 32 33 + struct work_struct period_elapsed_work; 34 34 struct snd_pcm_substream *substream; 35 35 }; 36 36 ··· 58 56 return dw->priv; 59 57 } 60 58 59 + static void avs_period_elapsed_work(struct work_struct *work) 60 + { 61 + struct avs_dma_data *data = container_of(work, struct avs_dma_data, period_elapsed_work); 62 + 63 + snd_pcm_period_elapsed(data->substream); 64 + } 65 + 66 + void avs_period_elapsed(struct snd_pcm_substream *substream) 67 + { 68 + struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream); 69 + struct snd_soc_dai *dai = snd_soc_rtd_to_cpu(rtd, 0); 70 + struct avs_dma_data *data = snd_soc_dai_get_dma_data(dai, substream); 71 + 72 + schedule_work(&data->period_elapsed_work); 73 + } 74 + 61 75 static int avs_dai_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai) 62 76 { 63 77 struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream); ··· 95 77 data->substream = substream; 96 78 data->template = template; 97 79 data->adev = adev; 80 + INIT_WORK(&data->period_elapsed_work, avs_period_elapsed_work); 98 81 snd_soc_dai_set_dma_data(dai, substream, data); 99 82 100 83 if (rtd->dai_link->ignore_suspend)
+16
sound/soc/intel/avs/pcm.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright(c) 2024 Intel Corporation 4 + * 5 + * Authors: Cezary Rojewski <cezary.rojewski@intel.com> 6 + * Amadeusz Slawinski <amadeuszx.slawinski@linux.intel.com> 7 + */ 8 + 9 + #ifndef __SOUND_SOC_INTEL_AVS_PCM_H 10 + #define __SOUND_SOC_INTEL_AVS_PCM_H 11 + 12 + #include <sound/pcm.h> 13 + 14 + void avs_period_elapsed(struct snd_pcm_substream *substream); 15 + 16 + #endif
+45 -3
sound/soc/intel/boards/bytcr_rt5640.c
··· 17 17 #include <linux/acpi.h> 18 18 #include <linux/clk.h> 19 19 #include <linux/device.h> 20 + #include <linux/device/bus.h> 20 21 #include <linux/dmi.h> 21 22 #include <linux/gpio/consumer.h> 22 23 #include <linux/gpio/machine.h> ··· 32 31 #include "../../codecs/rt5640.h" 33 32 #include "../atom/sst-atom-controls.h" 34 33 #include "../common/soc-intel-quirks.h" 34 + 35 + #define BYT_RT5640_FALLBACK_CODEC_DEV_NAME "i2c-rt5640" 35 36 36 37 enum { 37 38 BYT_RT5640_DMIC1_MAP, ··· 1132 1129 BYT_RT5640_SSP0_AIF2 | 1133 1130 BYT_RT5640_MCLK_EN), 1134 1131 }, 1132 + { /* Vexia Edu Atla 10 tablet */ 1133 + .matches = { 1134 + DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"), 1135 + DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"), 1136 + /* Above strings are too generic, also match on BIOS date */ 1137 + DMI_MATCH(DMI_BIOS_DATE, "08/25/2014"), 1138 + }, 1139 + .driver_data = (void *)(BYT_RT5640_IN1_MAP | 1140 + BYT_RT5640_JD_SRC_JD2_IN4N | 1141 + BYT_RT5640_OVCD_TH_2000UA | 1142 + BYT_RT5640_OVCD_SF_0P75 | 1143 + BYT_RT5640_DIFF_MIC | 1144 + BYT_RT5640_SSP0_AIF2 | 1145 + BYT_RT5640_MCLK_EN), 1146 + }, 1135 1147 { /* Voyo Winpad A15 */ 1136 1148 .matches = { 1137 1149 DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"), ··· 1716 1698 1717 1699 codec_dev = acpi_get_first_physical_node(adev); 1718 1700 acpi_dev_put(adev); 1719 - if (!codec_dev) 1720 - return -EPROBE_DEFER; 1721 - priv->codec_dev = get_device(codec_dev); 1701 + 1702 + if (codec_dev) { 1703 + priv->codec_dev = get_device(codec_dev); 1704 + } else { 1705 + /* 1706 + * Special case for Android tablets where the codec i2c_client 1707 + * has been manually instantiated by x86_android_tablets.ko due 1708 + * to a broken DSDT. 
1709 + */ 1710 + codec_dev = bus_find_device_by_name(&i2c_bus_type, NULL, 1711 + BYT_RT5640_FALLBACK_CODEC_DEV_NAME); 1712 + if (!codec_dev) 1713 + return -EPROBE_DEFER; 1714 + 1715 + if (!i2c_verify_client(codec_dev)) { 1716 + dev_err(dev, "Error '%s' is not an i2c_client\n", 1717 + BYT_RT5640_FALLBACK_CODEC_DEV_NAME); 1718 + put_device(codec_dev); 1719 + } 1720 + 1721 + /* fixup codec name */ 1722 + strscpy(byt_rt5640_codec_name, BYT_RT5640_FALLBACK_CODEC_DEV_NAME, 1723 + sizeof(byt_rt5640_codec_name)); 1724 + 1725 + /* bus_find_device() returns a reference no need to get() */ 1726 + priv->codec_dev = codec_dev; 1727 + } 1722 1728 1723 1729 /* 1724 1730 * swap SSP0 if bytcr is detected
+38
sound/soc/intel/common/soc-acpi-intel-lnl-match.c
··· 225 225 } 226 226 }; 227 227 228 + static const struct snd_soc_acpi_adr_device rt1318_1_adr[] = { 229 + { 230 + .adr = 0x000133025D131801ull, 231 + .num_endpoints = 1, 232 + .endpoints = &single_endpoint, 233 + .name_prefix = "rt1318-1" 234 + } 235 + }; 236 + 228 237 static const struct snd_soc_acpi_adr_device rt1318_1_group1_adr[] = { 229 238 { 230 239 .adr = 0x000130025D131801ull, ··· 249 240 .num_endpoints = 1, 250 241 .endpoints = &spk_r_endpoint, 251 242 .name_prefix = "rt1318-2" 243 + } 244 + }; 245 + 246 + static const struct snd_soc_acpi_adr_device rt713_0_adr[] = { 247 + { 248 + .adr = 0x000031025D071301ull, 249 + .num_endpoints = 1, 250 + .endpoints = &single_endpoint, 251 + .name_prefix = "rt713" 252 252 } 253 253 }; 254 254 ··· 396 378 {} 397 379 }; 398 380 381 + static const struct snd_soc_acpi_link_adr lnl_sdw_rt713_l0_rt1318_l1[] = { 382 + { 383 + .mask = BIT(0), 384 + .num_adr = ARRAY_SIZE(rt713_0_adr), 385 + .adr_d = rt713_0_adr, 386 + }, 387 + { 388 + .mask = BIT(1), 389 + .num_adr = ARRAY_SIZE(rt1318_1_adr), 390 + .adr_d = rt1318_1_adr, 391 + }, 392 + {} 393 + }; 394 + 399 395 /* this table is used when there is no I2S codec present */ 400 396 struct snd_soc_acpi_mach snd_soc_acpi_intel_lnl_sdw_machines[] = { 401 397 /* mockup tests need to be first */ ··· 478 446 .links = lnl_sdw_rt1318_l12_rt714_l0, 479 447 .drv_name = "sof_sdw", 480 448 .sof_tplg_filename = "sof-lnl-rt1318-l12-rt714-l0.tplg" 449 + }, 450 + { 451 + .link_mask = BIT(0) | BIT(1), 452 + .links = lnl_sdw_rt713_l0_rt1318_l1, 453 + .drv_name = "sof_sdw", 454 + .sof_tplg_filename = "sof-lnl-rt713-l0-rt1318-l1.tplg" 481 455 }, 482 456 {}, 483 457 };
+1
sound/soc/loongson/loongson_card.c
··· 144 144 dev_err(dev, "getting cpu dlc error (%d)\n", ret); 145 145 goto err; 146 146 } 147 + loongson_dai_links[i].platforms->of_node = loongson_dai_links[i].cpus->of_node; 147 148 148 149 ret = snd_soc_of_get_dlc(codec, NULL, loongson_dai_links[i].codecs, 0); 149 150 if (ret < 0) {
+2
sound/soc/qcom/Kconfig
··· 157 157 depends on COMMON_CLK 158 158 select SND_SOC_QDSP6 159 159 select SND_SOC_QCOM_COMMON 160 + select SND_SOC_QCOM_SDW 160 161 select SND_SOC_RT5663 161 162 select SND_SOC_MAX98927 162 163 imply SND_SOC_CROS_EC_CODEC ··· 209 208 tristate "SoC Machine driver for SC7280 boards" 210 209 depends on I2C && SOUNDWIRE 211 210 select SND_SOC_QCOM_COMMON 211 + select SND_SOC_QCOM_SDW 212 212 select SND_SOC_LPASS_SC7280 213 213 select SND_SOC_MAX98357A 214 214 select SND_SOC_WCD938X_SDW
+2
sound/soc/qcom/lpass-cpu.c
··· 1242 1242 /* Allocation for i2sctl regmap fields */ 1243 1243 drvdata->i2sctl = devm_kzalloc(&pdev->dev, sizeof(struct lpaif_i2sctl), 1244 1244 GFP_KERNEL); 1245 + if (!drvdata->i2sctl) 1246 + return -ENOMEM; 1245 1247 1246 1248 /* Initialize bitfields for dai I2SCTL register */ 1247 1249 ret = lpass_cpu_init_i2sctl_bitfields(dev, drvdata->i2sctl,
+9 -1
sound/soc/qcom/sc7280.c
··· 23 23 #include "common.h" 24 24 #include "lpass.h" 25 25 #include "qdsp6/q6afe.h" 26 + #include "sdw.h" 26 27 27 28 #define DEFAULT_MCLK_RATE 19200000 28 29 #define RT5682_PLL_FREQ (48000 * 512) ··· 317 316 struct snd_soc_card *card = rtd->card; 318 317 struct sc7280_snd_data *data = snd_soc_card_get_drvdata(card); 319 318 struct snd_soc_dai *cpu_dai = snd_soc_rtd_to_cpu(rtd, 0); 319 + struct sdw_stream_runtime *sruntime = data->sruntime[cpu_dai->id]; 320 320 321 321 switch (cpu_dai->id) { 322 322 case MI2S_PRIMARY: ··· 335 333 default: 336 334 break; 337 335 } 336 + 337 + data->sruntime[cpu_dai->id] = NULL; 338 + sdw_release_stream(sruntime); 338 339 } 339 340 340 341 static int sc7280_snd_startup(struct snd_pcm_substream *substream) ··· 352 347 switch (cpu_dai->id) { 353 348 case MI2S_PRIMARY: 354 349 ret = sc7280_rt5682_init(rtd); 350 + if (ret) 351 + return ret; 355 352 break; 356 353 case SECONDARY_MI2S_RX: 357 354 codec_dai_fmt |= SND_SOC_DAIFMT_NB_NF | SND_SOC_DAIFMT_I2S; ··· 367 360 default: 368 361 break; 369 362 } 370 - return ret; 363 + 364 + return qcom_snd_sdw_startup(substream); 371 365 } 372 366 373 367 static const struct snd_soc_ops sc7280_ops = {
+6 -1
sound/soc/qcom/sdm845.c
··· 15 15 #include <uapi/linux/input-event-codes.h> 16 16 #include "common.h" 17 17 #include "qdsp6/q6afe.h" 18 + #include "sdw.h" 18 19 #include "../codecs/rt5663.h" 19 20 20 21 #define DRIVER_NAME "sdm845" ··· 417 416 pr_err("%s: invalid dai id 0x%x\n", __func__, cpu_dai->id); 418 417 break; 419 418 } 420 - return 0; 419 + return qcom_snd_sdw_startup(substream); 421 420 } 422 421 423 422 static void sdm845_snd_shutdown(struct snd_pcm_substream *substream) ··· 426 425 struct snd_soc_card *card = rtd->card; 427 426 struct sdm845_snd_data *data = snd_soc_card_get_drvdata(card); 428 427 struct snd_soc_dai *cpu_dai = snd_soc_rtd_to_cpu(rtd, 0); 428 + struct sdw_stream_runtime *sruntime = data->sruntime[cpu_dai->id]; 429 429 430 430 switch (cpu_dai->id) { 431 431 case PRIMARY_MI2S_RX: ··· 465 463 pr_err("%s: invalid dai id 0x%x\n", __func__, cpu_dai->id); 466 464 break; 467 465 } 466 + 467 + data->sruntime[cpu_dai->id] = NULL; 468 + sdw_release_stream(sruntime); 468 469 } 469 470 470 471 static int sdm845_snd_prepare(struct snd_pcm_substream *substream)
+5 -2
sound/soc/sh/rcar/core.c
··· 1281 1281 if (!of_node_name_eq(ports, "ports") && 1282 1282 !of_node_name_eq(ports, "port")) 1283 1283 continue; 1284 - priv->component_dais[i] = of_graph_get_endpoint_count(ports); 1284 + priv->component_dais[i] = 1285 + of_graph_get_endpoint_count(of_node_name_eq(ports, "ports") ? 1286 + ports : np); 1285 1287 nr += priv->component_dais[i]; 1286 1288 i++; 1287 1289 if (i >= RSND_MAX_COMPONENT) { ··· 1495 1493 if (!of_node_name_eq(ports, "ports") && 1496 1494 !of_node_name_eq(ports, "port")) 1497 1495 continue; 1498 - for_each_endpoint_of_node(ports, dai_np) { 1496 + for_each_endpoint_of_node(of_node_name_eq(ports, "ports") ? 1497 + ports : np, dai_np) { 1499 1498 __rsnd_dai_probe(priv, dai_np, dai_np, 0, dai_i); 1500 1499 if (!rsnd_is_gen1(priv) && !rsnd_is_gen2(priv)) { 1501 1500 rdai = rsnd_rdai_get(priv, dai_i);
+4 -2
sound/soc/soc-dapm.c
··· 1147 1147 if (*list == NULL) 1148 1148 return -ENOMEM; 1149 1149 1150 + (*list)->num_widgets = size; 1151 + 1150 1152 list_for_each_entry(w, widgets, work_list) 1151 1153 (*list)->widgets[i++] = w; 1152 1154 ··· 2787 2785 2788 2786 int snd_soc_dapm_widget_name_cmp(struct snd_soc_dapm_widget *widget, const char *s) 2789 2787 { 2790 - struct snd_soc_component *component = snd_soc_dapm_to_component(widget->dapm); 2788 + struct snd_soc_component *component = widget->dapm->component; 2791 2789 const char *wname = widget->name; 2792 2790 2793 - if (component->name_prefix) 2791 + if (component && component->name_prefix) 2794 2792 wname += strlen(component->name_prefix) + 1; /* plus space */ 2795 2793 2796 2794 return strcmp(wname, s);
+4 -1
sound/soc/sof/amd/acp-loader.c
··· 206 206 configure_pte_for_fw_loading(FW_SRAM_DATA_BIN, ACP_SRAM_PAGE_COUNT, adata); 207 207 src_addr = ACP_SYSTEM_MEMORY_WINDOW + ACP_DEFAULT_SRAM_LENGTH + 208 208 (page_count * ACP_PAGE_SIZE); 209 - dest_addr = ACP_SRAM_BASE_ADDRESS; 209 + if (adata->pci_rev > ACP63_PCI_ID) 210 + dest_addr = ACP7X_SRAM_BASE_ADDRESS; 211 + else 212 + dest_addr = ACP_SRAM_BASE_ADDRESS; 210 213 211 214 ret = configure_and_run_dma(adata, src_addr, dest_addr, 212 215 adata->fw_sram_data_bin_size);
+3 -1
sound/soc/sof/amd/acp.c
··· 329 329 fw_qualifier, fw_qualifier & DSP_FW_RUN_ENABLE, 330 330 ACP_REG_POLL_INTERVAL, ACP_DMA_COMPLETE_TIMEOUT_US); 331 331 if (ret < 0) { 332 - dev_err(sdev->dev, "PSP validation failed\n"); 332 + val = snd_sof_dsp_read(sdev, ACP_DSP_BAR, ACP_SHA_PSP_ACK); 333 + dev_err(sdev->dev, "PSP validation failed: fw_qualifier = %#x, ACP_SHA_PSP_ACK = %#x\n", 334 + fw_qualifier, val); 333 335 return ret; 334 336 } 335 337
+10 -13
sound/soc/sof/intel/hda-dai-ops.c
··· 346 346 case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: 347 347 snd_hdac_ext_stream_start(hext_stream); 348 348 break; 349 - case SNDRV_PCM_TRIGGER_SUSPEND: 350 - case SNDRV_PCM_TRIGGER_STOP: 351 349 case SNDRV_PCM_TRIGGER_PAUSE_PUSH: 352 - snd_hdac_ext_stream_clear(hext_stream); 353 - 354 350 /* 355 - * Save the LLP registers in case the stream is 356 - * restarting due PAUSE_RELEASE, or START without a pcm 357 - * close/open since in this case the LLP register is not reset 358 - * to 0 and the delay calculation will return with invalid 359 - * results. 351 + * Save the LLP registers since in case of PAUSE the LLP 352 + * register are not reset to 0, the delay calculation will use 353 + * the saved offsets for compensating the delay calculation. 360 354 */ 361 355 hext_stream->pplcllpl = readl(hext_stream->pplc_addr + AZX_REG_PPLCLLPL); 362 356 hext_stream->pplcllpu = readl(hext_stream->pplc_addr + AZX_REG_PPLCLLPU); 357 + snd_hdac_ext_stream_clear(hext_stream); 358 + break; 359 + case SNDRV_PCM_TRIGGER_SUSPEND: 360 + case SNDRV_PCM_TRIGGER_STOP: 361 + hext_stream->pplcllpl = 0; 362 + hext_stream->pplcllpu = 0; 363 + snd_hdac_ext_stream_clear(hext_stream); 363 364 break; 364 365 default: 365 366 dev_err(sdev->dev, "unknown trigger command %d\n", cmd); ··· 513 512 static int hda_ipc3_post_trigger(struct snd_sof_dev *sdev, struct snd_soc_dai *cpu_dai, 514 513 struct snd_pcm_substream *substream, int cmd) 515 514 { 516 - struct hdac_ext_stream *hext_stream = hda_get_hext_stream(sdev, cpu_dai, substream); 517 515 struct snd_soc_dapm_widget *w = snd_soc_dai_get_widget(cpu_dai, substream->stream); 518 516 519 517 switch (cmd) { ··· 526 526 ret = hda_dai_config(w, SOF_DAI_CONFIG_FLAGS_HW_FREE, &data); 527 527 if (ret < 0) 528 528 return ret; 529 - 530 - if (cmd == SNDRV_PCM_TRIGGER_STOP) 531 - return hda_link_dma_cleanup(substream, hext_stream, cpu_dai); 532 529 533 530 break; 534 531 }
+33 -4
sound/soc/sof/intel/hda-dai.c
··· 302 302 } 303 303 304 304 switch (cmd) { 305 + case SNDRV_PCM_TRIGGER_STOP: 305 306 case SNDRV_PCM_TRIGGER_SUSPEND: 306 307 ret = hda_link_dma_cleanup(substream, hext_stream, dai); 307 308 if (ret < 0) { ··· 371 370 return -EINVAL; 372 371 } 373 372 373 + sdev = widget_to_sdev(w); 374 + hext_stream = ops->get_hext_stream(sdev, cpu_dai, substream); 375 + 376 + /* nothing more to do if the link is already prepared */ 377 + if (hext_stream && hext_stream->link_prepared) 378 + return 0; 379 + 374 380 /* use HDaudio stream handling */ 375 381 ret = hda_dai_hw_params_data(substream, params, cpu_dai, data, flags); 376 382 if (ret < 0) { ··· 385 377 return ret; 386 378 } 387 379 388 - sdev = widget_to_sdev(w); 389 380 if (sdev->dspless_mode_selected) 390 381 return 0; 391 382 ··· 489 482 int ret; 490 483 int i; 491 484 485 + ops = hda_dai_get_ops(substream, cpu_dai); 486 + if (!ops) { 487 + dev_err(cpu_dai->dev, "DAI widget ops not set\n"); 488 + return -EINVAL; 489 + } 490 + 491 + sdev = widget_to_sdev(w); 492 + hext_stream = ops->get_hext_stream(sdev, cpu_dai, substream); 493 + 494 + /* nothing more to do if the link is already prepared */ 495 + if (hext_stream && hext_stream->link_prepared) 496 + return 0; 497 + 498 + /* 499 + * reset the PCMSyCM registers to handle a prepare callback when the PCM is restarted 500 + * due to xruns or after a call to snd_pcm_drain/drop() 501 + */ 502 + ret = hdac_bus_eml_sdw_map_stream_ch(sof_to_bus(sdev), link_id, cpu_dai->id, 503 + 0, 0, substream->stream); 504 + if (ret < 0) { 505 + dev_err(cpu_dai->dev, "%s: hdac_bus_eml_sdw_map_stream_ch failed %d\n", 506 + __func__, ret); 507 + return ret; 508 + } 509 + 492 510 data.dai_index = (link_id << 8) | cpu_dai->id; 493 511 data.dai_node_id = intel_alh_id; 494 512 ret = non_hda_dai_hw_params_data(substream, params, cpu_dai, &data, flags); ··· 522 490 return ret; 523 491 } 524 492 525 - ops = hda_dai_get_ops(substream, cpu_dai); 526 - sdev = widget_to_sdev(w); 527 493 hext_stream = 
ops->get_hext_stream(sdev, cpu_dai, substream); 528 - 529 494 if (!hext_stream) 530 495 return -ENODEV; 531 496
-17
sound/soc/sof/intel/hda-loader.c
··· 294 294 { 295 295 struct sof_intel_hda_dev *hda = sdev->pdata->hw_pdata; 296 296 const struct sof_intel_dsp_desc *chip = hda->desc; 297 - struct sof_intel_hda_stream *hda_stream; 298 - unsigned long time_left; 299 297 unsigned int reg; 300 298 int ret, status; 301 - 302 - hda_stream = container_of(hext_stream, struct sof_intel_hda_stream, 303 - hext_stream); 304 299 305 300 dev_dbg(sdev->dev, "Code loader DMA starting\n"); 306 301 ··· 303 308 if (ret < 0) { 304 309 dev_err(sdev->dev, "error: DMA trigger start failed\n"); 305 310 return ret; 306 - } 307 - 308 - if (sdev->pdata->ipc_type == SOF_IPC_TYPE_4) { 309 - /* Wait for completion of transfer */ 310 - time_left = wait_for_completion_timeout(&hda_stream->ioc, 311 - msecs_to_jiffies(HDA_CL_DMA_IOC_TIMEOUT_MS)); 312 - 313 - if (!time_left) { 314 - dev_err(sdev->dev, "Code loader DMA did not complete\n"); 315 - return -ETIMEDOUT; 316 - } 317 - dev_dbg(sdev->dev, "Code loader DMA done\n"); 318 311 } 319 312 320 313 dev_dbg(sdev->dev, "waiting for FW_ENTERED status\n");
+13 -2
sound/soc/sof/ipc4-topology.c
··· 3129 3129 * group_id during copier's ipc_prepare op. 3130 3130 */ 3131 3131 if (flags & SOF_DAI_CONFIG_FLAGS_HW_PARAMS) { 3132 + struct sof_ipc4_alh_configuration_blob *blob; 3133 + 3134 + blob = (struct sof_ipc4_alh_configuration_blob *)ipc4_copier->copier_config; 3132 3135 ipc4_copier->dai_index = data->dai_node_id; 3133 - copier_data->gtw_cfg.node_id &= ~SOF_IPC4_NODE_INDEX_MASK; 3134 - copier_data->gtw_cfg.node_id |= SOF_IPC4_NODE_INDEX(data->dai_node_id); 3136 + 3137 + /* 3138 + * no need to set the node_id for aggregated DAI's. These will be assigned 3139 + * a group_id during widget ipc_prepare 3140 + */ 3141 + if (blob->alh_cfg.device_count == 1) { 3142 + copier_data->gtw_cfg.node_id &= ~SOF_IPC4_NODE_INDEX_MASK; 3143 + copier_data->gtw_cfg.node_id |= 3144 + SOF_IPC4_NODE_INDEX(data->dai_node_id); 3145 + } 3135 3146 } 3136 3147 3137 3148 break;
+3
sound/usb/mixer_quirks.c
··· 4042 4042 break; 4043 4043 err = dell_dock_mixer_init(mixer); 4044 4044 break; 4045 + case USB_ID(0x0bda, 0x402e): /* Dell WD19 dock */ 4046 + err = dell_dock_mixer_create(mixer); 4047 + break; 4045 4048 4046 4049 case USB_ID(0x2a39, 0x3fd2): /* RME ADI-2 Pro */ 4047 4050 case USB_ID(0x2a39, 0x3fd3): /* RME ADI-2 DAC */
+2
tools/arch/arm64/include/asm/cputype.h
··· 94 94 #define ARM_CPU_PART_NEOVERSE_V3 0xD84 95 95 #define ARM_CPU_PART_CORTEX_X925 0xD85 96 96 #define ARM_CPU_PART_CORTEX_A725 0xD87 97 + #define ARM_CPU_PART_NEOVERSE_N3 0xD8E 97 98 98 99 #define APM_CPU_PART_XGENE 0x000 99 100 #define APM_CPU_VAR_POTENZA 0x00 ··· 177 176 #define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3) 178 177 #define MIDR_CORTEX_X925 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X925) 179 178 #define MIDR_CORTEX_A725 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A725) 179 + #define MIDR_NEOVERSE_N3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N3) 180 180 #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) 181 181 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) 182 182 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+20 -14
tools/arch/x86/include/asm/msr-index.h
··· 36 36 #define EFER_FFXSR (1<<_EFER_FFXSR) 37 37 #define EFER_AUTOIBRS (1<<_EFER_AUTOIBRS) 38 38 39 + /* 40 + * Architectural memory types that are common to MTRRs, PAT, VMX MSRs, etc. 41 + * Most MSRs support/allow only a subset of memory types, but the values 42 + * themselves are common across all relevant MSRs. 43 + */ 44 + #define X86_MEMTYPE_UC 0ull /* Uncacheable, a.k.a. Strong Uncacheable */ 45 + #define X86_MEMTYPE_WC 1ull /* Write Combining */ 46 + /* RESERVED 2 */ 47 + /* RESERVED 3 */ 48 + #define X86_MEMTYPE_WT 4ull /* Write Through */ 49 + #define X86_MEMTYPE_WP 5ull /* Write Protected */ 50 + #define X86_MEMTYPE_WB 6ull /* Write Back */ 51 + #define X86_MEMTYPE_UC_MINUS 7ull /* Weak Uncacheabled (PAT only) */ 52 + 39 53 /* FRED MSRs */ 40 54 #define MSR_IA32_FRED_RSP0 0x1cc /* Level 0 stack pointer */ 41 55 #define MSR_IA32_FRED_RSP1 0x1cd /* Level 1 stack pointer */ ··· 378 364 #define MSR_MTRRdefType 0x000002ff 379 365 380 366 #define MSR_IA32_CR_PAT 0x00000277 367 + 368 + #define PAT_VALUE(p0, p1, p2, p3, p4, p5, p6, p7) \ 369 + ((X86_MEMTYPE_ ## p0) | (X86_MEMTYPE_ ## p1 << 8) | \ 370 + (X86_MEMTYPE_ ## p2 << 16) | (X86_MEMTYPE_ ## p3 << 24) | \ 371 + (X86_MEMTYPE_ ## p4 << 32) | (X86_MEMTYPE_ ## p5 << 40) | \ 372 + (X86_MEMTYPE_ ## p6 << 48) | (X86_MEMTYPE_ ## p7 << 56)) 381 373 382 374 #define MSR_IA32_DEBUGCTLMSR 0x000001d9 383 375 #define MSR_IA32_LASTBRANCHFROMIP 0x000001db ··· 1179 1159 #define MSR_IA32_VMX_VMFUNC 0x00000491 1180 1160 #define MSR_IA32_VMX_PROCBASED_CTLS3 0x00000492 1181 1161 1182 - /* VMX_BASIC bits and bitmasks */ 1183 - #define VMX_BASIC_VMCS_SIZE_SHIFT 32 1184 - #define VMX_BASIC_TRUE_CTLS (1ULL << 55) 1185 - #define VMX_BASIC_64 0x0001000000000000LLU 1186 - #define VMX_BASIC_MEM_TYPE_SHIFT 50 1187 - #define VMX_BASIC_MEM_TYPE_MASK 0x003c000000000000LLU 1188 - #define VMX_BASIC_MEM_TYPE_WB 6LLU 1189 - #define VMX_BASIC_INOUT 0x0040000000000000LLU 1190 - 1191 1162 /* Resctrl MSRs: */ 1192 1163 /* - Intel: */ 1193 1164 
#define MSR_IA32_L3_QOS_CFG 0xc81 ··· 1195 1184 #define MSR_IA32_MBA_BW_BASE 0xc0000200 1196 1185 #define MSR_IA32_SMBA_BW_BASE 0xc0000280 1197 1186 #define MSR_IA32_EVT_CFG_BASE 0xc0000400 1198 - 1199 - /* MSR_IA32_VMX_MISC bits */ 1200 - #define MSR_IA32_VMX_MISC_INTEL_PT (1ULL << 14) 1201 - #define MSR_IA32_VMX_MISC_VMWRITE_SHADOW_RO_FIELDS (1ULL << 29) 1202 - #define MSR_IA32_VMX_MISC_PREEMPTION_TIMER_SCALE 0x1F 1203 1187 1204 1188 /* AMD-V MSRs */ 1205 1189 #define MSR_VM_CR 0xc0010114
+1
tools/arch/x86/include/uapi/asm/kvm.h
··· 439 439 #define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT (1 << 4) 440 440 #define KVM_X86_QUIRK_FIX_HYPERCALL_INSN (1 << 5) 441 441 #define KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS (1 << 6) 442 + #define KVM_X86_QUIRK_SLOT_ZAP_ALL (1 << 7) 442 443 443 444 #define KVM_STATE_NESTED_FORMAT_VMX 0 444 445 #define KVM_STATE_NESTED_FORMAT_SVM 1
+3
tools/arch/x86/include/uapi/asm/unistd_32.h
··· 11 11 #ifndef __NR_getpgid 12 12 #define __NR_getpgid 132 13 13 #endif 14 + #ifndef __NR_capget 15 + #define __NR_capget 184 16 + #endif 14 17 #ifndef __NR_gettid 15 18 #define __NR_gettid 224 16 19 #endif
+3
tools/arch/x86/include/uapi/asm/unistd_64.h
··· 11 11 #ifndef __NR_getpgid 12 12 #define __NR_getpgid 121 13 13 #endif 14 + #ifndef __NR_capget 15 + #define __NR_capget 125 16 + #endif 14 17 #ifndef __NR_gettid 15 18 #define __NR_gettid 186 16 19 #endif
+15
tools/include/linux/bits.h
··· 36 36 #define GENMASK_ULL(h, l) \ 37 37 (GENMASK_INPUT_CHECK(h, l) + __GENMASK_ULL(h, l)) 38 38 39 + #if !defined(__ASSEMBLY__) 40 + /* 41 + * Missing asm support 42 + * 43 + * __GENMASK_U128() depends on _BIT128() which would not work 44 + * in the asm code, as it shifts an 'unsigned __int128' data 45 + * type instead of direct representation of 128 bit constants 46 + * such as long and unsigned long. The fundamental problem is 47 + * that a 128 bit constant will get silently truncated by the 48 + * gcc compiler. 49 + */ 50 + #define GENMASK_U128(h, l) \ 51 + (GENMASK_INPUT_CHECK(h, l) + __GENMASK_U128(h, l)) 52 + #endif 53 + 39 54 #endif /* __LINUX_BITS_H */
+1 -10
tools/include/linux/unaligned.h
··· 9 9 #pragma GCC diagnostic push 10 10 #pragma GCC diagnostic ignored "-Wpacked" 11 11 #pragma GCC diagnostic ignored "-Wattributes" 12 - 13 - #define __get_unaligned_t(type, ptr) ({ \ 14 - const struct { type x; } __packed *__pptr = (typeof(__pptr))(ptr); \ 15 - __pptr->x; \ 16 - }) 17 - 18 - #define __put_unaligned_t(type, val, ptr) do { \ 19 - struct { type x; } __packed *__pptr = (typeof(__pptr))(ptr); \ 20 - __pptr->x = (val); \ 21 - } while (0) 12 + #include <vdso/unaligned.h> 22 13 23 14 #define get_unaligned(ptr) __get_unaligned_t(typeof(*(ptr)), (ptr)) 24 15 #define put_unaligned(val, ptr) __put_unaligned_t(typeof(*(ptr)), (val), (ptr))
+3
tools/include/uapi/linux/bits.h
··· 12 12 (((~_ULL(0)) - (_ULL(1) << (l)) + 1) & \ 13 13 (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h)))) 14 14 15 + #define __GENMASK_U128(h, l) \ 16 + ((_BIT128((h)) << 1) - (_BIT128(l))) 17 + 15 18 #endif /* _UAPI_LINUX_BITS_H */
+3
tools/include/uapi/linux/bpf.h
··· 1121 1121 1122 1122 #define MAX_BPF_ATTACH_TYPE __MAX_BPF_ATTACH_TYPE 1123 1123 1124 + /* Add BPF_LINK_TYPE(type, name) in bpf_types.h to keep bpf_link_type_strs[] 1125 + * in sync with the definitions below. 1126 + */ 1124 1127 enum bpf_link_type { 1125 1128 BPF_LINK_TYPE_UNSPEC = 0, 1126 1129 BPF_LINK_TYPE_RAW_TRACEPOINT = 1,
+17
tools/include/uapi/linux/const.h
··· 28 28 #define _BITUL(x) (_UL(1) << (x)) 29 29 #define _BITULL(x) (_ULL(1) << (x)) 30 30 31 + #if !defined(__ASSEMBLY__) 32 + /* 33 + * Missing asm support 34 + * 35 + * __BIT128() would not work in the asm code, as it shifts an 36 + * 'unsigned __int128' data type as direct representation of 37 + * 128 bit constants is not supported in the gcc compiler, as 38 + * they get silently truncated. 39 + * 40 + * TODO: Please revisit this implementation when gcc compiler 41 + * starts representing 128 bit constants directly like long 42 + * and unsigned long etc. Subsequently drop the comment for 43 + * GENMASK_U128() which would then start supporting asm code. 44 + */ 45 + #define _BIT128(x) ((unsigned __int128)(1) << (x)) 46 + #endif 47 + 31 48 #define __ALIGN_KERNEL(x, a) __ALIGN_KERNEL_MASK(x, (__typeof__(x))(a) - 1) 32 49 #define __ALIGN_KERNEL_MASK(x, mask) (((x) + (mask)) & ~(mask)) 33 50
+15
tools/include/vdso/unaligned.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __VDSO_UNALIGNED_H 3 + #define __VDSO_UNALIGNED_H 4 + 5 + #define __get_unaligned_t(type, ptr) ({ \ 6 + const struct { type x; } __packed *__pptr = (typeof(__pptr))(ptr); \ 7 + __pptr->x; \ 8 + }) 9 + 10 + #define __put_unaligned_t(type, val, ptr) do { \ 11 + struct { type x; } __packed *__pptr = (typeof(__pptr))(ptr); \ 12 + __pptr->x = (val); \ 13 + } while (0) 14 + 15 + #endif /* __VDSO_UNALIGNED_H */
+2 -2
tools/perf/Makefile.config
··· 704 704 BUILD_BPF_SKEL := 0 705 705 else 706 706 CLANG_VERSION := $(shell $(CLANG) --version | head -1 | sed 's/.*clang version \([[:digit:]]\+.[[:digit:]]\+.[[:digit:]]\+\).*/\1/g') 707 - ifeq ($(call version-lt3,$(CLANG_VERSION),16.0.6),1) 708 - $(warning Warning: Disabled BPF skeletons as at least $(CLANG) version 16.0.6 is reported to be a working setup with the current of BPF based perf features) 707 + ifeq ($(call version-lt3,$(CLANG_VERSION),12.0.1),1) 708 + $(warning Warning: Disabled BPF skeletons as reliable BTF generation needs at least $(CLANG) version 12.0.1) 709 709 BUILD_BPF_SKEL := 0 710 710 endif 711 711 endif
+1 -1
tools/perf/builtin-trace.c
··· 1399 1399 .arg = { [2] = { .scnprintf = SCA_WAITID_OPTIONS, /* options */ }, }, }, 1400 1400 { .name = "waitid", .errpid = true, 1401 1401 .arg = { [3] = { .scnprintf = SCA_WAITID_OPTIONS, /* options */ }, }, }, 1402 - { .name = "write", .errpid = true, 1402 + { .name = "write", 1403 1403 .arg = { [1] = { .scnprintf = SCA_BUF /* buf */, .from_user = true, }, }, }, 1404 1404 }; 1405 1405
+1
tools/perf/check-headers.sh
··· 22 22 "include/vdso/bits.h" 23 23 "include/linux/const.h" 24 24 "include/vdso/const.h" 25 + "include/vdso/unaligned.h" 25 26 "include/linux/hash.h" 26 27 "include/linux/list-sort.h" 27 28 "include/uapi/linux/hw_breakpoint.h"
+52 -13
tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
··· 19 19 TEST_RESULT=0 20 20 21 21 # skip if not supported 22 - BLACKFUNC=`head -n 1 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2` 23 - if [ -z "$BLACKFUNC" ]; then 22 + BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2` 23 + if [ -z "$BLACKFUNC_LIST" ]; then 24 24 print_overall_skipped 25 25 exit 0 26 26 fi 27 + 28 + # try to find vmlinux with DWARF debug info 29 + VMLINUX_FILE=$(perf probe -v random_probe |& grep "Using.*for symbols" | sed -r 's/^Using (.*) for symbols$/\1/') 27 30 28 31 # remove all previously added probes 29 32 clear_all_probes 30 33 31 34 32 35 ### adding blacklisted function 33 - 34 - # functions from blacklist should be skipped by perf probe 35 - ! $CMD_PERF probe $BLACKFUNC > $LOGS_DIR/adding_blacklisted.log 2> $LOGS_DIR/adding_blacklisted.err 36 - PERF_EXIT_CODE=$? 37 - 38 36 REGEX_SCOPE_FAIL="Failed to find scope of probe point" 39 37 REGEX_SKIP_MESSAGE=" is blacklisted function, skip it\." 40 - REGEX_NOT_FOUND_MESSAGE="Probe point \'$BLACKFUNC\' not found." 38 + REGEX_NOT_FOUND_MESSAGE="Probe point \'$RE_EVENT\' not found." 41 39 REGEX_ERROR_MESSAGE="Error: Failed to add events." 42 40 REGEX_INVALID_ARGUMENT="Failed to write event: Invalid argument" 43 41 REGEX_SYMBOL_FAIL="Failed to find symbol at $RE_ADDRESS" 44 - REGEX_OUT_SECTION="$BLACKFUNC is out of \.\w+, skip it" 45 - ../common/check_all_lines_matched.pl "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err 46 - CHECK_EXIT_CODE=$? 42 + REGEX_OUT_SECTION="$RE_EVENT is out of \.\w+, skip it" 43 + REGEX_MISSING_DECL_LINE="A function DIE doesn't have decl_line. Maybe broken DWARF?" 47 44 48 - print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "adding blacklisted function $BLACKFUNC" 49 - (( TEST_RESULT += $? 
)) 45 + BLACKFUNC="" 46 + SKIP_DWARF=0 50 47 48 + for BLACKFUNC in $BLACKFUNC_LIST; do 49 + echo "Probing $BLACKFUNC" 50 + 51 + # functions from blacklist should be skipped by perf probe 52 + ! $CMD_PERF probe $BLACKFUNC > $LOGS_DIR/adding_blacklisted.log 2> $LOGS_DIR/adding_blacklisted.err 53 + PERF_EXIT_CODE=$? 54 + 55 + # check for bad DWARF polluting the result 56 + ../common/check_all_patterns_found.pl "$REGEX_MISSING_DECL_LINE" >/dev/null < $LOGS_DIR/adding_blacklisted.err 57 + 58 + if [ $? -eq 0 ]; then 59 + SKIP_DWARF=1 60 + echo "Result polluted by broken DWARF, trying another probe" 61 + 62 + # confirm that the broken DWARF comes from assembler 63 + if [ -n "$VMLINUX_FILE" ]; then 64 + readelf -wi "$VMLINUX_FILE" | 65 + awk -v probe="$BLACKFUNC" '/DW_AT_language/ { comp_lang = $0 } 66 + $0 ~ probe { if (comp_lang) { print comp_lang }; exit }' | 67 + grep -q "MIPS assembler" 68 + 69 + CHECK_EXIT_CODE=$? 70 + if [ $CHECK_EXIT_CODE -ne 0 ]; then 71 + SKIP_DWARF=0 # broken DWARF while available 72 + break 73 + fi 74 + fi 75 + else 76 + ../common/check_all_lines_matched.pl "$REGEX_SKIP_MESSAGE" "$REGEX_NOT_FOUND_MESSAGE" "$REGEX_ERROR_MESSAGE" "$REGEX_SCOPE_FAIL" "$REGEX_INVALID_ARGUMENT" "$REGEX_SYMBOL_FAIL" "$REGEX_OUT_SECTION" < $LOGS_DIR/adding_blacklisted.err 77 + CHECK_EXIT_CODE=$? 78 + 79 + SKIP_DWARF=0 80 + break 81 + fi 82 + done 83 + 84 + if [ $SKIP_DWARF -eq 1 ]; then 85 + print_testcase_skipped "adding blacklisted function $BLACKFUNC" 86 + else 87 + print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "adding blacklisted function $BLACKFUNC" 88 + (( TEST_RESULT += $? )) 89 + fi 51 90 52 91 ### listing not-added probe 53 92
+20 -2
tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c
··· 288 288 augmented_args->arg.size = PERF_ALIGN(oldpath_len + 1, sizeof(u64)); 289 289 len += augmented_args->arg.size; 290 290 291 + /* Every read from userspace is limited to value size */ 292 + if (augmented_args->arg.size > sizeof(augmented_args->arg.value)) 293 + return 1; /* Failure: don't filter */ 294 + 291 295 struct augmented_arg *arg2 = (void *)&augmented_args->arg.value + augmented_args->arg.size; 292 296 293 297 newpath_len = augmented_arg__read_str(arg2, newpath_arg, sizeof(augmented_args->arg.value)); ··· 318 314 oldpath_len = augmented_arg__read_str(&augmented_args->arg, oldpath_arg, sizeof(augmented_args->arg.value)); 319 315 augmented_args->arg.size = PERF_ALIGN(oldpath_len + 1, sizeof(u64)); 320 316 len += augmented_args->arg.size; 317 + 318 + /* Every read from userspace is limited to value size */ 319 + if (augmented_args->arg.size > sizeof(augmented_args->arg.value)) 320 + return 1; /* Failure: don't filter */ 321 321 322 322 struct augmented_arg *arg2 = (void *)&augmented_args->arg.value + augmented_args->arg.size; 323 323 ··· 431 423 static int augment_sys_enter(void *ctx, struct syscall_enter_args *args) 432 424 { 433 425 bool augmented, do_output = false; 434 - int zero = 0, size, aug_size, index, output = 0, 426 + int zero = 0, size, aug_size, index, 435 427 value_size = sizeof(struct augmented_arg) - offsetof(struct augmented_arg, value); 428 + u64 output = 0; /* has to be u64, otherwise it won't pass the verifier */ 436 429 unsigned int nr, *beauty_map; 437 430 struct beauty_payload_enter *payload; 438 431 void *arg, *payload_offset; ··· 486 477 augmented = true; 487 478 } else if (size < 0 && size >= -6) { /* buffer */ 488 479 index = -(size + 1); 480 + barrier_var(index); // Prevent clang (noticed with v18) from removing the &= 7 trick. 481 + index &= 7; // Satisfy the bounds checking with the verifier in some kernels. 
489 482 aug_size = args->args[index]; 490 483 491 484 if (aug_size > TRACE_AUG_MAX_BUF) ··· 499 488 } 500 489 } 501 490 491 + /* Augmented data size is limited to sizeof(augmented_arg->unnamed union with value field) */ 492 + if (aug_size > value_size) 493 + aug_size = value_size; 494 + 502 495 /* write data to payload */ 503 496 if (augmented) { 504 497 int written = offsetof(struct augmented_arg, value) + aug_size; 498 + 499 + if (written < 0 || written > sizeof(struct augmented_arg)) 500 + return 1; 505 501 506 502 ((struct augmented_arg *)payload_offset)->size = aug_size; 507 503 output += written; ··· 517 499 } 518 500 } 519 501 520 - if (!do_output) 502 + if (!do_output || (sizeof(struct syscall_enter_args) + output) > sizeof(struct beauty_payload_enter)) 521 503 return 1; 522 504 523 505 return augmented__beauty_output(ctx, payload, sizeof(struct syscall_enter_args) + output);
+3 -7
tools/perf/util/cap.c
··· 7 7 #include "debug.h" 8 8 #include <errno.h> 9 9 #include <string.h> 10 - #include <unistd.h> 11 10 #include <linux/capability.h> 12 11 #include <sys/syscall.h> 13 - 14 - #ifndef SYS_capget 15 - #define SYS_capget 90 16 - #endif 12 + #include <unistd.h> 17 13 18 14 #define MAX_LINUX_CAPABILITY_U32S _LINUX_CAPABILITY_U32S_3 19 15 ··· 17 21 { 18 22 struct __user_cap_header_struct header = { 19 23 .version = _LINUX_CAPABILITY_VERSION_3, 20 - .pid = getpid(), 24 + .pid = 0, 21 25 }; 22 - struct __user_cap_data_struct data[MAX_LINUX_CAPABILITY_U32S]; 26 + struct __user_cap_data_struct data[MAX_LINUX_CAPABILITY_U32S] = {}; 23 27 __u32 cap_val; 24 28 25 29 *used_root = false;
+3
tools/perf/util/python.c
··· 19 19 #include "util/bpf-filter.h" 20 20 #include "util/env.h" 21 21 #include "util/kvm-stat.h" 22 + #include "util/stat.h" 22 23 #include "util/kwork.h" 23 24 #include "util/sample.h" 24 25 #include "util/lock-contention.h" ··· 1356 1355 1357 1356 unsigned int scripting_max_stack = PERF_MAX_STACK_DEPTH; 1358 1357 1358 + #ifdef HAVE_KVM_STAT_SUPPORT 1359 1359 bool kvm_entry_event(struct evsel *evsel __maybe_unused) 1360 1360 { 1361 1361 return false; ··· 1386 1384 char *decode __maybe_unused) 1387 1385 { 1388 1386 } 1387 + #endif // HAVE_KVM_STAT_SUPPORT 1389 1388 1390 1389 int find_scripts(char **scripts_array __maybe_unused, char **scripts_path_array __maybe_unused, 1391 1390 int num __maybe_unused, int pathlen __maybe_unused)
+10
tools/perf/util/syscalltbl.c
··· 46 46 #include <asm/syscalls.c> 47 47 const int syscalltbl_native_max_id = SYSCALLTBL_LOONGARCH_MAX_ID; 48 48 static const char *const *syscalltbl_native = syscalltbl_loongarch; 49 + #else 50 + const int syscalltbl_native_max_id = 0; 51 + static const char *const syscalltbl_native[] = { 52 + [0] = "unknown", 53 + }; 49 54 #endif 50 55 51 56 struct syscall { ··· 185 180 int syscalltbl__id(struct syscalltbl *tbl, const char *name) 186 181 { 187 182 return audit_name_to_syscall(name, tbl->audit_machine); 183 + } 184 + 185 + int syscalltbl__id_at_idx(struct syscalltbl *tbl __maybe_unused, int idx) 186 + { 187 + return idx; 188 188 } 189 189 190 190 int syscalltbl__strglobmatch_next(struct syscalltbl *tbl __maybe_unused,
+1 -1
tools/sched_ext/include/scx/common.bpf.h
··· 320 320 /* 321 321 * Access a cpumask in read-only mode (typically to check bits). 322 322 */ 323 - const struct cpumask *cast_mask(struct bpf_cpumask *mask) 323 + static __always_inline const struct cpumask *cast_mask(struct bpf_cpumask *mask) 324 324 { 325 325 return (const struct cpumask *)mask; 326 326 }
+109
tools/testing/selftests/bpf/map_tests/lpm_trie_map_get_next_key.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #define _GNU_SOURCE 4 + #include <linux/bpf.h> 5 + #include <stdio.h> 6 + #include <stdbool.h> 7 + #include <unistd.h> 8 + #include <errno.h> 9 + #include <stdlib.h> 10 + #include <string.h> 11 + #include <pthread.h> 12 + 13 + #include <bpf/bpf.h> 14 + #include <bpf/libbpf.h> 15 + 16 + #include <test_maps.h> 17 + 18 + struct test_lpm_key { 19 + __u32 prefix; 20 + __u32 data; 21 + }; 22 + 23 + struct get_next_key_ctx { 24 + struct test_lpm_key key; 25 + bool start; 26 + bool stop; 27 + int map_fd; 28 + int loop; 29 + }; 30 + 31 + static void *get_next_key_fn(void *arg) 32 + { 33 + struct get_next_key_ctx *ctx = arg; 34 + struct test_lpm_key next_key; 35 + int i = 0; 36 + 37 + while (!ctx->start) 38 + usleep(1); 39 + 40 + while (!ctx->stop && i++ < ctx->loop) 41 + bpf_map_get_next_key(ctx->map_fd, &ctx->key, &next_key); 42 + 43 + return NULL; 44 + } 45 + 46 + static void abort_get_next_key(struct get_next_key_ctx *ctx, pthread_t *tids, 47 + unsigned int nr) 48 + { 49 + unsigned int i; 50 + 51 + ctx->stop = true; 52 + ctx->start = true; 53 + for (i = 0; i < nr; i++) 54 + pthread_join(tids[i], NULL); 55 + } 56 + 57 + /* This test aims to prevent regression of future. As long as the kernel does 58 + * not panic, it is considered as success. 
59 + */ 60 + void test_lpm_trie_map_get_next_key(void) 61 + { 62 + #define MAX_NR_THREADS 8 63 + LIBBPF_OPTS(bpf_map_create_opts, create_opts, 64 + .map_flags = BPF_F_NO_PREALLOC); 65 + struct test_lpm_key key = {}; 66 + __u32 val = 0; 67 + int map_fd; 68 + const __u32 max_prefixlen = 8 * (sizeof(key) - sizeof(key.prefix)); 69 + const __u32 max_entries = max_prefixlen + 1; 70 + unsigned int i, nr = MAX_NR_THREADS, loop = 65536; 71 + pthread_t tids[MAX_NR_THREADS]; 72 + struct get_next_key_ctx ctx; 73 + int err; 74 + 75 + map_fd = bpf_map_create(BPF_MAP_TYPE_LPM_TRIE, "lpm_trie_map", 76 + sizeof(struct test_lpm_key), sizeof(__u32), 77 + max_entries, &create_opts); 78 + CHECK(map_fd == -1, "bpf_map_create()", "error:%s\n", 79 + strerror(errno)); 80 + 81 + for (i = 0; i <= max_prefixlen; i++) { 82 + key.prefix = i; 83 + err = bpf_map_update_elem(map_fd, &key, &val, BPF_ANY); 84 + CHECK(err, "bpf_map_update_elem()", "error:%s\n", 85 + strerror(errno)); 86 + } 87 + 88 + ctx.start = false; 89 + ctx.stop = false; 90 + ctx.map_fd = map_fd; 91 + ctx.loop = loop; 92 + memcpy(&ctx.key, &key, sizeof(key)); 93 + 94 + for (i = 0; i < nr; i++) { 95 + err = pthread_create(&tids[i], NULL, get_next_key_fn, &ctx); 96 + if (err) { 97 + abort_get_next_key(&ctx, tids, i); 98 + CHECK(err, "pthread_create", "error %d\n", err); 99 + } 100 + } 101 + 102 + ctx.start = true; 103 + for (i = 0; i < nr; i++) 104 + pthread_join(tids[i], NULL); 105 + 106 + printf("%s:PASS\n", __func__); 107 + 108 + close(map_fd); 109 + }
+19
tools/testing/selftests/bpf/prog_tests/verifier.c
··· 54 54 #include "verifier_masking.skel.h" 55 55 #include "verifier_meta_access.skel.h" 56 56 #include "verifier_movsx.skel.h" 57 + #include "verifier_mtu.skel.h" 57 58 #include "verifier_netfilter_ctx.skel.h" 58 59 #include "verifier_netfilter_retcode.skel.h" 59 60 #include "verifier_bpf_fastcall.skel.h" ··· 223 222 void test_verifier_xdp_direct_packet_access(void) { RUN(verifier_xdp_direct_packet_access); } 224 223 void test_verifier_bits_iter(void) { RUN(verifier_bits_iter); } 225 224 void test_verifier_lsm(void) { RUN(verifier_lsm); } 225 + 226 + void test_verifier_mtu(void) 227 + { 228 + __u64 caps = 0; 229 + int ret; 230 + 231 + /* In case CAP_BPF and CAP_PERFMON is not set */ 232 + ret = cap_enable_effective(1ULL << CAP_BPF | 1ULL << CAP_NET_ADMIN, &caps); 233 + if (!ASSERT_OK(ret, "set_cap_bpf_cap_net_admin")) 234 + return; 235 + ret = cap_disable_effective(1ULL << CAP_SYS_ADMIN | 1ULL << CAP_PERFMON, NULL); 236 + if (!ASSERT_OK(ret, "disable_cap_sys_admin")) 237 + goto restore_cap; 238 + RUN(verifier_mtu); 239 + restore_cap: 240 + if (caps) 241 + cap_enable_effective(caps, NULL); 242 + } 226 243 227 244 static int init_test_val_map(struct bpf_object *obj, char *map_name) 228 245 {
+58 -3
tools/testing/selftests/bpf/progs/verifier_bits_iter.c
··· 15 15 int *bpf_iter_bits_next(struct bpf_iter_bits *it) __ksym __weak; 16 16 void bpf_iter_bits_destroy(struct bpf_iter_bits *it) __ksym __weak; 17 17 18 + u64 bits_array[511] = {}; 19 + 18 20 SEC("iter.s/cgroup") 19 21 __description("bits iter without destroy") 20 22 __failure __msg("Unreleased reference") ··· 112 110 } 113 111 114 112 SEC("syscall") 115 - __description("bits nomem") 113 + __description("bits too big") 116 114 __success __retval(0) 117 - int bits_nomem(void) 115 + int bits_too_big(void) 118 116 { 119 117 u64 data[4]; 120 118 int nr = 0; 121 119 int *bit; 122 120 123 121 __builtin_memset(&data, 0xff, sizeof(data)); 124 - bpf_for_each(bits, bit, &data[0], 513) /* Be greater than 512 */ 122 + bpf_for_each(bits, bit, &data[0], 512) /* Be greater than 511 */ 125 123 nr++; 126 124 return nr; 127 125 } ··· 151 149 152 150 bpf_for_each(bits, bit, &data[0], 0) 153 151 nr++; 152 + return nr; 153 + } 154 + 155 + SEC("syscall") 156 + __description("huge words") 157 + __success __retval(0) 158 + int huge_words(void) 159 + { 160 + u64 data[8] = {0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1}; 161 + int nr = 0; 162 + int *bit; 163 + 164 + bpf_for_each(bits, bit, &data[0], 67108865) 165 + nr++; 166 + return nr; 167 + } 168 + 169 + SEC("syscall") 170 + __description("max words") 171 + __success __retval(4) 172 + int max_words(void) 173 + { 174 + volatile int nr = 0; 175 + int *bit; 176 + 177 + bits_array[0] = (1ULL << 63) | 1U; 178 + bits_array[510] = (1ULL << 33) | (1ULL << 32); 179 + 180 + bpf_for_each(bits, bit, bits_array, 511) { 181 + if (nr == 0 && *bit != 0) 182 + break; 183 + if (nr == 2 && *bit != 32672) 184 + break; 185 + nr++; 186 + } 187 + return nr; 188 + } 189 + 190 + SEC("syscall") 191 + __description("bad words") 192 + __success __retval(0) 193 + int bad_words(void) 194 + { 195 + void *bad_addr = (void *)(3UL << 30); 196 + int nr = 0; 197 + int *bit; 198 + 199 + bpf_for_each(bits, bit, bad_addr, 1) 200 + nr++; 201 + 202 + bpf_for_each(bits, bit, 
bad_addr, 4) 203 + nr++; 204 + 154 205 return nr; 155 206 }
-55
tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
··· 790 790 :: __imm(bpf_get_smp_processor_id) : __clobber_all); 791 791 } 792 792 793 - SEC("raw_tp") 794 - __arch_x86_64 795 - __log_level(4) 796 - __msg("stack depth 512") 797 - __xlated("0: r1 = 42") 798 - __xlated("1: *(u64 *)(r10 -512) = r1") 799 - __xlated("2: w0 = ") 800 - __xlated("3: r0 = &(void __percpu *)(r0)") 801 - __xlated("4: r0 = *(u32 *)(r0 +0)") 802 - __xlated("5: exit") 803 - __success 804 - __naked int bpf_fastcall_max_stack_ok(void) 805 - { 806 - asm volatile( 807 - "r1 = 42;" 808 - "*(u64 *)(r10 - %[max_bpf_stack]) = r1;" 809 - "*(u64 *)(r10 - %[max_bpf_stack_8]) = r1;" 810 - "call %[bpf_get_smp_processor_id];" 811 - "r1 = *(u64 *)(r10 - %[max_bpf_stack_8]);" 812 - "exit;" 813 - : 814 - : __imm_const(max_bpf_stack, MAX_BPF_STACK), 815 - __imm_const(max_bpf_stack_8, MAX_BPF_STACK + 8), 816 - __imm(bpf_get_smp_processor_id) 817 - : __clobber_all 818 - ); 819 - } 820 - 821 - SEC("raw_tp") 822 - __arch_x86_64 823 - __log_level(4) 824 - __msg("stack depth 520") 825 - __failure 826 - __naked int bpf_fastcall_max_stack_fail(void) 827 - { 828 - asm volatile( 829 - "r1 = 42;" 830 - "*(u64 *)(r10 - %[max_bpf_stack]) = r1;" 831 - "*(u64 *)(r10 - %[max_bpf_stack_8]) = r1;" 832 - "call %[bpf_get_smp_processor_id];" 833 - "r1 = *(u64 *)(r10 - %[max_bpf_stack_8]);" 834 - /* call to prandom blocks bpf_fastcall rewrite */ 835 - "*(u64 *)(r10 - %[max_bpf_stack_8]) = r1;" 836 - "call %[bpf_get_prandom_u32];" 837 - "r1 = *(u64 *)(r10 - %[max_bpf_stack_8]);" 838 - "exit;" 839 - : 840 - : __imm_const(max_bpf_stack, MAX_BPF_STACK), 841 - __imm_const(max_bpf_stack_8, MAX_BPF_STACK + 8), 842 - __imm(bpf_get_smp_processor_id), 843 - __imm(bpf_get_prandom_u32) 844 - : __clobber_all 845 - ); 846 - } 847 - 848 793 SEC("cgroup/getsockname_unix") 849 794 __xlated("0: r2 = 1") 850 795 /* bpf_cast_to_kern_ctx is replaced by a single assignment */
+30 -1
tools/testing/selftests/bpf/progs/verifier_const.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* Copyright (c) 2024 Isovalent */ 3 3 4 - #include <linux/bpf.h> 4 + #include "vmlinux.h" 5 5 #include <bpf/bpf_helpers.h> 6 + #include <bpf/bpf_tracing.h> 6 7 #include "bpf_misc.h" 7 8 8 9 const volatile long foo = 42; ··· 65 64 { 66 65 bpf_check_mtu(skb, skb->ifindex, (__u32 *)&bart, 0, 0); 67 66 return TCX_PASS; 67 + } 68 + 69 + static inline void write_fixed(volatile void *p, __u32 val) 70 + { 71 + *(volatile __u32 *)p = val; 72 + } 73 + 74 + static inline void write_dyn(void *p, void *val, int len) 75 + { 76 + bpf_copy_from_user(p, len, val); 77 + } 78 + 79 + SEC("tc/ingress") 80 + __description("rodata/mark: write with unknown reg rejected") 81 + __failure __msg("write into map forbidden") 82 + int tcx7(struct __sk_buff *skb) 83 + { 84 + write_fixed((void *)&foo, skb->mark); 85 + return TCX_PASS; 86 + } 87 + 88 + SEC("lsm.s/bprm_committed_creds") 89 + __description("rodata/mark: write with unknown reg rejected") 90 + __failure __msg("write into map forbidden") 91 + int BPF_PROG(bprm, struct linux_binprm *bprm) 92 + { 93 + write_dyn((void *)&foo, &bart, bpf_get_prandom_u32() & 3); 94 + return 0; 68 95 } 69 96 70 97 char LICENSE[] SEC("license") = "GPL";
+18
tools/testing/selftests/bpf/progs/verifier_mtu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include "vmlinux.h" 4 + #include <bpf/bpf_helpers.h> 5 + #include "bpf_misc.h" 6 + 7 + SEC("tc/ingress") 8 + __description("uninit/mtu: write rejected") 9 + __failure __msg("invalid indirect read from stack") 10 + int tc_uninit_mtu(struct __sk_buff *ctx) 11 + { 12 + __u32 mtu; 13 + 14 + bpf_check_mtu(ctx, 0, &mtu, 0, 0); 15 + return TCX_PASS; 16 + } 17 + 18 + char LICENSE[] SEC("license") = "GPL";
+23
tools/testing/selftests/bpf/progs/verifier_search_pruning.c
··· 2 2 /* Converted from tools/testing/selftests/bpf/verifier/search_pruning.c */ 3 3 4 4 #include <linux/bpf.h> 5 + #include <../../../include/linux/filter.h> 5 6 #include <bpf/bpf_helpers.h> 6 7 #include "bpf_misc.h" 7 8 ··· 334 333 exit; \ 335 334 " : 336 335 : __imm(bpf_ktime_get_ns) 336 + : __clobber_all); 337 + } 338 + 339 + /* Without checkpoint forcibly inserted at the back-edge a loop this 340 + * test would take a very long time to verify. 341 + */ 342 + SEC("kprobe") 343 + __failure __log_level(4) 344 + __msg("BPF program is too large.") 345 + __naked void short_loop1(void) 346 + { 347 + asm volatile ( 348 + " r7 = *(u16 *)(r1 +0);" 349 + "1: r7 += 0x1ab064b9;" 350 + " .8byte %[jset];" /* same as 'if r7 & 0x702000 goto 1b;' */ 351 + " r7 &= 0x1ee60e;" 352 + " r7 += r1;" 353 + " if r7 s> 0x37d2 goto +0;" 354 + " r0 = 0;" 355 + " exit;" 356 + : 357 + : __imm_insn(jset, BPF_JMP_IMM(BPF_JSET, BPF_REG_7, 0x702000, -2)) 337 358 : __clobber_all); 338 359 } 339 360
+1
tools/testing/selftests/bpf/veristat.cfg
··· 15 15 test_verif_scale* 16 16 test_xdp_noinline* 17 17 xdp_synproxy* 18 + verifier_search_pruning*
+2 -3
tools/testing/selftests/mm/uffd-common.c
··· 18 18 unsigned long long *count_verify; 19 19 uffd_test_ops_t *uffd_test_ops; 20 20 uffd_test_case_ops_t *uffd_test_case_ops; 21 - pthread_barrier_t ready_for_fork; 21 + atomic_bool ready_for_fork; 22 22 23 23 static int uffd_mem_fd_create(off_t mem_size, bool hugetlb) 24 24 { ··· 519 519 pollfd[1].fd = pipefd[cpu*2]; 520 520 pollfd[1].events = POLLIN; 521 521 522 - /* Ready for parent thread to fork */ 523 - pthread_barrier_wait(&ready_for_fork); 522 + ready_for_fork = true; 524 523 525 524 for (;;) { 526 525 ret = poll(pollfd, 2, -1);
+2 -1
tools/testing/selftests/mm/uffd-common.h
··· 33 33 #include <inttypes.h> 34 34 #include <stdint.h> 35 35 #include <sys/random.h> 36 + #include <stdatomic.h> 36 37 37 38 #include "../kselftest.h" 38 39 #include "vm_util.h" ··· 105 104 extern bool test_uffdio_wp; 106 105 extern unsigned long long *count_verify; 107 106 extern volatile bool test_uffdio_copy_eexist; 108 - extern pthread_barrier_t ready_for_fork; 107 + extern atomic_bool ready_for_fork; 109 108 110 109 extern uffd_test_ops_t anon_uffd_test_ops; 111 110 extern uffd_test_ops_t shmem_uffd_test_ops;
+10 -14
tools/testing/selftests/mm/uffd-unit-tests.c
··· 241 241 fork_event_args *args = data; 242 242 struct uffd_msg msg = { 0 }; 243 243 244 - /* Ready for parent thread to fork */ 245 - pthread_barrier_wait(&ready_for_fork); 244 + ready_for_fork = true; 246 245 247 246 /* Read until a full msg received */ 248 247 while (uffd_read_msg(args->parent_uffd, &msg)); ··· 310 311 311 312 /* Prepare a thread to resolve EVENT_FORK */ 312 313 if (with_event) { 313 - pthread_barrier_init(&ready_for_fork, NULL, 2); 314 + ready_for_fork = false; 314 315 if (pthread_create(&thread, NULL, fork_event_consumer, &args)) 315 316 err("pthread_create()"); 316 - /* Wait for child thread to start before forking */ 317 - pthread_barrier_wait(&ready_for_fork); 318 - pthread_barrier_destroy(&ready_for_fork); 317 + while (!ready_for_fork) 318 + ; /* Wait for the poll_thread to start executing before forking */ 319 319 } 320 320 321 321 child = fork(); ··· 779 781 char c; 780 782 struct uffd_args args = { 0 }; 781 783 782 - pthread_barrier_init(&ready_for_fork, NULL, 2); 784 + ready_for_fork = false; 783 785 784 786 fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK); 785 787 ··· 796 798 if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args)) 797 799 err("uffd_poll_thread create"); 798 800 799 - /* Wait for child thread to start before forking */ 800 - pthread_barrier_wait(&ready_for_fork); 801 - pthread_barrier_destroy(&ready_for_fork); 801 + while (!ready_for_fork) 802 + ; /* Wait for the poll_thread to start executing before forking */ 802 803 803 804 pid = fork(); 804 805 if (pid < 0) ··· 838 841 char c; 839 842 struct uffd_args args = { 0 }; 840 843 841 - pthread_barrier_init(&ready_for_fork, NULL, 2); 844 + ready_for_fork = false; 842 845 843 846 fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK); 844 847 if (uffd_register(uffd, area_dst, nr_pages * page_size, ··· 849 852 if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args)) 850 853 err("uffd_poll_thread create"); 851 854 852 - /* Wait for child thread to start before forking */ 853 - pthread_barrier_wait(&ready_for_fork); 854 - pthread_barrier_destroy(&ready_for_fork); 855 + while (!ready_for_fork) 856 + ; /* Wait for the poll_thread to start executing before forking */ 855 857 856 858 pid = fork(); 857 859 if (pid < 0)
+14
tools/testing/selftests/net/forwarding/ip6gre_flat.sh
··· 8 8 ALL_TESTS=" 9 9 gre_flat 10 10 gre_mtu_change 11 + gre_flat_remote_change 11 12 " 12 13 13 14 NUM_NETIFS=6 ··· 43 42 gre_mtu_change() 44 43 { 45 44 test_mtu_change 45 + } 46 + 47 + gre_flat_remote_change() 48 + { 49 + flat_remote_change 50 + 51 + test_traffic_ip4ip6 "GRE flat IPv4-in-IPv6 (new remote)" 52 + test_traffic_ip6ip6 "GRE flat IPv6-in-IPv6 (new remote)" 53 + 54 + flat_remote_restore 55 + 56 + test_traffic_ip4ip6 "GRE flat IPv4-in-IPv6 (old remote)" 57 + test_traffic_ip6ip6 "GRE flat IPv6-in-IPv6 (old remote)" 46 58 } 47 59 48 60 cleanup()
+14
tools/testing/selftests/net/forwarding/ip6gre_flat_key.sh
··· 8 8 ALL_TESTS=" 9 9 gre_flat 10 10 gre_mtu_change 11 + gre_flat_remote_change 11 12 " 12 13 13 14 NUM_NETIFS=6 ··· 43 42 gre_mtu_change() 44 43 { 45 44 test_mtu_change 45 + } 46 + 47 + gre_flat_remote_change() 48 + { 49 + flat_remote_change 50 + 51 + test_traffic_ip4ip6 "GRE flat IPv4-in-IPv6 with key (new remote)" 52 + test_traffic_ip6ip6 "GRE flat IPv6-in-IPv6 with key (new remote)" 53 + 54 + flat_remote_restore 55 + 56 + test_traffic_ip4ip6 "GRE flat IPv4-in-IPv6 with key (old remote)" 57 + test_traffic_ip6ip6 "GRE flat IPv6-in-IPv6 with key (old remote)" 46 58 } 47 59 48 60 cleanup()
+14
tools/testing/selftests/net/forwarding/ip6gre_flat_keys.sh
··· 8 8 ALL_TESTS=" 9 9 gre_flat 10 10 gre_mtu_change 11 + gre_flat_remote_change 11 12 " 12 13 13 14 NUM_NETIFS=6 ··· 43 42 gre_mtu_change() 44 43 { 45 44 test_mtu_change gre 45 + } 46 + 47 + gre_flat_remote_change() 48 + { 49 + flat_remote_change 50 + 51 + test_traffic_ip4ip6 "GRE flat IPv4-in-IPv6 with ikey/okey (new remote)" 52 + test_traffic_ip6ip6 "GRE flat IPv6-in-IPv6 with ikey/okey (new remote)" 53 + 54 + flat_remote_restore 55 + 56 + test_traffic_ip4ip6 "GRE flat IPv4-in-IPv6 with ikey/okey (old remote)" 57 + test_traffic_ip6ip6 "GRE flat IPv6-in-IPv6 with ikey/okey (old remote)" 46 58 } 47 59 48 60 cleanup()
+14
tools/testing/selftests/net/forwarding/ip6gre_hier.sh
··· 8 8 ALL_TESTS=" 9 9 gre_hier 10 10 gre_mtu_change 11 + gre_hier_remote_change 11 12 " 12 13 13 14 NUM_NETIFS=6 ··· 43 42 gre_mtu_change() 44 43 { 45 44 test_mtu_change gre 45 + } 46 + 47 + gre_hier_remote_change() 48 + { 49 + hier_remote_change 50 + 51 + test_traffic_ip4ip6 "GRE hierarchical IPv4-in-IPv6 (new remote)" 52 + test_traffic_ip6ip6 "GRE hierarchical IPv6-in-IPv6 (new remote)" 53 + 54 + hier_remote_restore 55 + 56 + test_traffic_ip4ip6 "GRE hierarchical IPv4-in-IPv6 (old remote)" 57 + test_traffic_ip6ip6 "GRE hierarchical IPv6-in-IPv6 (old remote)" 46 58 } 47 59 48 60 cleanup()
+14
tools/testing/selftests/net/forwarding/ip6gre_hier_key.sh
··· 8 8 ALL_TESTS=" 9 9 gre_hier 10 10 gre_mtu_change 11 + gre_hier_remote_change 11 12 " 12 13 13 14 NUM_NETIFS=6 ··· 43 42 gre_mtu_change() 44 43 { 45 44 test_mtu_change gre 45 + } 46 + 47 + gre_hier_remote_change() 48 + { 49 + hier_remote_change 50 + 51 + test_traffic_ip4ip6 "GRE hierarchical IPv4-in-IPv6 with key (new remote)" 52 + test_traffic_ip6ip6 "GRE hierarchical IPv6-in-IPv6 with key (new remote)" 53 + 54 + hier_remote_restore 55 + 56 + test_traffic_ip4ip6 "GRE hierarchical IPv4-in-IPv6 with key (old remote)" 57 + test_traffic_ip6ip6 "GRE hierarchical IPv6-in-IPv6 with key (old remote)" 46 58 } 47 59 48 60 cleanup()
+14
tools/testing/selftests/net/forwarding/ip6gre_hier_keys.sh
··· 8 8 ALL_TESTS=" 9 9 gre_hier 10 10 gre_mtu_change 11 + gre_hier_remote_change 11 12 " 12 13 13 14 NUM_NETIFS=6 ··· 43 42 gre_mtu_change() 44 43 { 45 44 test_mtu_change gre 45 + } 46 + 47 + gre_hier_remote_change() 48 + { 49 + hier_remote_change 50 + 51 + test_traffic_ip4ip6 "GRE hierarchical IPv4-in-IPv6 with ikey/okey (new remote)" 52 + test_traffic_ip6ip6 "GRE hierarchical IPv6-in-IPv6 with ikey/okey (new remote)" 53 + 54 + hier_remote_restore 55 + 56 + test_traffic_ip4ip6 "GRE hierarchical IPv4-in-IPv6 with ikey/okey (old remote)" 57 + test_traffic_ip6ip6 "GRE hierarchical IPv6-in-IPv6 with ikey/okey (old remote)" 46 58 } 47 59 48 60 cleanup()
+80
tools/testing/selftests/net/forwarding/ip6gre_lib.sh
··· 436 436 check_err $? 437 437 log_test "ping GRE IPv6, packet size 1800 after MTU change" 438 438 } 439 + 440 + topo_flat_remote_change() 441 + { 442 + local old1=$1; shift 443 + local new1=$1; shift 444 + local old2=$1; shift 445 + local new2=$1; shift 446 + 447 + ip link set dev g1a type ip6gre local $new1 remote $new2 448 + __addr_add_del g1a add "$new1/128" 449 + __addr_add_del g1a del "$old1/128" 450 + ip -6 route add $new2/128 via 2001:db8:10::2 451 + ip -6 route del $old2/128 452 + 453 + ip link set dev g2a type ip6gre local $new2 remote $new1 454 + __addr_add_del g2a add "$new2/128" 455 + __addr_add_del g2a del "$old2/128" 456 + ip -6 route add vrf v$ol2 $new1/128 via 2001:db8:10::1 457 + ip -6 route del vrf v$ol2 $old1/128 458 + } 459 + 460 + flat_remote_change() 461 + { 462 + local old1=2001:db8:3::1 463 + local new1=2001:db8:3::10 464 + local old2=2001:db8:3::2 465 + local new2=2001:db8:3::20 466 + 467 + topo_flat_remote_change $old1 $new1 $old2 $new2 468 + } 469 + 470 + flat_remote_restore() 471 + { 472 + local old1=2001:db8:3::10 473 + local new1=2001:db8:3::1 474 + local old2=2001:db8:3::20 475 + local new2=2001:db8:3::2 476 + 477 + topo_flat_remote_change $old1 $new1 $old2 $new2 478 + } 479 + 480 + topo_hier_remote_change() 481 + { 482 + local old1=$1; shift 483 + local new1=$1; shift 484 + local old2=$1; shift 485 + local new2=$1; shift 486 + 487 + __addr_add_del dummy1 del "$old1/64" 488 + __addr_add_del dummy1 add "$new1/64" 489 + ip link set dev g1a type ip6gre local $new1 remote $new2 490 + ip -6 route add vrf v$ul1 $new2/128 via 2001:db8:10::2 491 + ip -6 route del vrf v$ul1 $old2/128 492 + 493 + __addr_add_del dummy2 del "$old2/64" 494 + __addr_add_del dummy2 add "$new2/64" 495 + ip link set dev g2a type ip6gre local $new2 remote $new1 496 + ip -6 route add vrf v$ul2 $new1/128 via 2001:db8:10::1 497 + ip -6 route del vrf v$ul2 $old1/128 498 + } 499 + 500 + hier_remote_change() 501 + { 502 + local old1=2001:db8:3::1 503 + local new1=2001:db8:3::10 504 + local old2=2001:db8:3::2 505 + local new2=2001:db8:3::20 506 + 507 + topo_hier_remote_change $old1 $new1 $old2 $new2 508 + } 509 + 510 + hier_remote_restore() 511 + { 512 + local old1=2001:db8:3::10 513 + local new1=2001:db8:3::1 514 + local old2=2001:db8:3::20 515 + local new2=2001:db8:3::2 516 + 517 + topo_hier_remote_change $old1 $new1 $old2 $new2 518 + }
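The new `topo_*_remote_change()` helpers in ip6gre_lib.sh take the old and new tunnel endpoints as four positional parameters and peel them off with the `local var=$1; shift` idiom, which lets `*_remote_change` and `*_remote_restore` share one implementation by swapping argument order. A toy sketch of just that argument-handling pattern (the `demo_remote_change` name and the echoed output are illustrative; the real helpers reconfigure `ip6gre` devices and routes):

```shell
#!/bin/bash
# Each "local var=$1; shift" consumes one positional parameter,
# mirroring how topo_flat_remote_change() reads old1/new1/old2/new2.
demo_remote_change()
{
	local old1=$1; shift
	local new1=$1; shift
	local old2=$1; shift
	local new2=$1; shift

	echo "tunnel A endpoint: $old1 -> $new1"
	echo "tunnel B endpoint: $old2 -> $new2"
}

# "Change" uses old->new; "restore" just passes the same arguments
# swapped, exactly as the library's *_remote_restore wrappers do.
demo_remote_change 2001:db8:3::1 2001:db8:3::10 2001:db8:3::2 2001:db8:3::20
```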
+9
tools/testing/selftests/net/mptcp/mptcp_connect.sh
··· 259 259 mptcp_lib_ns_init disabled_ns 260 260 261 261 print_larger_title "New MPTCP socket can be blocked via sysctl" 262 + 263 + # mainly to cover more code 264 + if ! ip netns exec ${disabled_ns} sysctl net.mptcp >/dev/null; then 265 + mptcp_lib_pr_fail "not able to list net.mptcp sysctl knobs" 266 + mptcp_lib_result_fail "not able to list net.mptcp sysctl knobs" 267 + ret=${KSFT_FAIL} 268 + return 1 269 + fi 270 + 262 271 # net.mptcp.enabled should be enabled by default 263 272 if [ "$(ip netns exec ${disabled_ns} sysctl net.mptcp.enabled | awk '{ print $3 }')" -ne 1 ]; then 264 273 mptcp_lib_pr_fail "net.mptcp.enabled sysctl is not 1 by default"
+3 -3
tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
··· 98 98 char buf[MNL_SOCKET_BUFFER_SIZE]; 99 99 struct nlmsghdr *rplnlh; 100 100 unsigned int portid; 101 - int err, ret; 101 + int ret; 102 102 103 103 portid = mnl_socket_get_portid(sock); 104 104 ··· 217 217 struct nfgenmsg *nfh; 218 218 struct nlattr *nest; 219 219 unsigned int portid; 220 - int err, ret; 220 + int ret; 221 221 222 222 portid = mnl_socket_get_portid(sock); 223 223 ··· 264 264 struct nfgenmsg *nfh; 265 265 struct nlattr *nest; 266 266 unsigned int portid; 267 - int err, ret; 267 + int ret; 268 268 269 269 portid = mnl_socket_get_portid(sock); 270 270
+21 -18
tools/testing/selftests/net/netfilter/nft_flowtable.sh
··· 71 71 lmtu=1500 72 72 rmtu=2000 73 73 74 + filesize=$((2 * 1024 * 1024)) 75 + 74 76 usage(){ 75 77 echo "nft_flowtable.sh [OPTIONS]" 76 78 echo ··· 83 81 exit 1 84 82 } 85 83 86 - while getopts "o:l:r:" o 84 + while getopts "o:l:r:s:" o 87 85 do 88 86 case $o in 89 87 o) omtu=$OPTARG;; 90 88 l) lmtu=$OPTARG;; 91 89 r) rmtu=$OPTARG;; 90 + s) filesize=$OPTARG;; 92 91 *) usage;; 93 92 esac 94 93 done ··· 220 217 221 218 make_file() 222 219 { 223 - name=$1 220 + name="$1" 221 + sz="$2" 224 222 225 - SIZE=$((RANDOM % (1024 * 128))) 226 - SIZE=$((SIZE + (1024 * 8))) 227 - TSIZE=$((SIZE * 1024)) 228 - 229 - dd if=/dev/urandom of="$name" bs=1024 count=$SIZE 2> /dev/null 230 - 231 - SIZE=$((RANDOM % 1024)) 232 - SIZE=$((SIZE + 128)) 233 - TSIZE=$((TSIZE + SIZE)) 234 - dd if=/dev/urandom conf=notrunc of="$name" bs=1 count=$SIZE 2> /dev/null 223 + head -c "$sz" < /dev/urandom > "$name" 235 224 } 236 225 237 226 check_counters() ··· 241 246 local fs 242 247 fs=$(du -sb "$nsin") 243 248 local max_orig=${fs%%/*} 244 - local max_repl=$((max_orig/4)) 249 + local max_repl=$((max_orig)) 245 250 246 251 # flowtable fastpath should bypass normal routing one, i.e. the counters in forward hook 247 252 # should always be lower than the size of the transmitted file (max_orig). 248 253 if [ "$orig_cnt" -gt "$max_orig" ];then 249 - echo "FAIL: $what: original counter $orig_cnt exceeds expected value $max_orig" 1>&2 254 + echo "FAIL: $what: original counter $orig_cnt exceeds expected value $max_orig, reply counter $repl_cnt" 1>&2 250 255 ret=1 251 256 ok=0 252 257 fi 253 258 254 259 if [ "$repl_cnt" -gt $max_repl ];then 255 - echo "FAIL: $what: reply counter $repl_cnt exceeds expected value $max_repl" 1>&2 260 + echo "FAIL: $what: reply counter $repl_cnt exceeds expected value $max_repl, original counter $orig_cnt" 1>&2 256 261 ret=1 257 262 ok=0 258 263 fi ··· 450 455 return $lret 451 456 } 452 457 453 - make_file "$nsin" 458 + make_file "$nsin" "$filesize" 454 459 455 460 # First test: 456 461 # No PMTU discovery, nsr1 is expected to fragment packets from ns1 to ns2 as needed. ··· 659 664 l=$(((RANDOM%mtu) + low)) 660 665 r=$(((RANDOM%mtu) + low)) 661 666 662 - echo "re-run with random mtus: -o $o -l $l -r $r" 663 - $0 -o "$o" -l "$l" -r "$r" 667 + MINSIZE=$((2 * 1000 * 1000)) 668 + MAXSIZE=$((64 * 1000 * 1000)) 669 + 670 + filesize=$(((RANDOM * RANDOM) % MAXSIZE)) 671 + if [ "$filesize" -lt "$MINSIZE" ]; then 672 + filesize=$((filesize+MINSIZE)) 673 + fi 674 + 675 + echo "re-run with random mtus and file size: -o $o -l $l -r $r -s $filesize" 676 + $0 -o "$o" -l "$l" -r "$r" -s "$filesize" 664 677 fi 665 678 666 679 exit $ret
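The nft_flowtable.sh change above collapses the old two-step `dd` recipe into a single `head -c`, which emits exactly the requested number of random bytes and makes the test file size an explicit `-s` parameter. A small sketch of the simplified `make_file()` in isolation (the surrounding temp-file handling here is for demonstration only):

```shell
#!/bin/bash
# head -c "$sz" reads exactly $sz bytes from /dev/urandom, so the
# produced file size is deterministic, unlike the old RANDOM-based dd.
make_file()
{
	name="$1"
	sz="$2"

	head -c "$sz" < /dev/urandom > "$name"
}

tmp=$(mktemp)
make_file "$tmp" $((2 * 1024 * 1024))
wc -c < "$tmp"	# exactly 2097152
rm -f "$tmp"
```

Deterministic sizing is what lets the script's counter checks compare against the transfer size directly, and the `-s` knob lets the random re-run pick a fresh size per invocation.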
+1 -1
tools/testing/selftests/sched_ext/Makefile
··· 184 184 185 185 testcase-targets := $(addsuffix .o,$(addprefix $(SCXOBJ_DIR)/,$(auto-test-targets))) 186 186 187 - $(SCXOBJ_DIR)/runner.o: runner.c | $(SCXOBJ_DIR) 187 + $(SCXOBJ_DIR)/runner.o: runner.c | $(SCXOBJ_DIR) $(BPFOBJ) 188 188 $(CC) $(CFLAGS) -c $< -o $@ 189 189 190 190 # Create all of the test targets object files, whose testcase objects will be
+3 -3
tools/testing/selftests/sched_ext/create_dsq.bpf.c
··· 51 51 52 52 SEC(".struct_ops.link") 53 53 struct sched_ext_ops create_dsq_ops = { 54 - .init_task = create_dsq_init_task, 55 - .exit_task = create_dsq_exit_task, 56 - .init = create_dsq_init, 54 + .init_task = (void *) create_dsq_init_task, 55 + .exit_task = (void *) create_dsq_exit_task, 56 + .init = (void *) create_dsq_init, 57 57 .name = "create_dsq", 58 58 };
+2 -2
tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
··· 35 35 36 36 SEC(".struct_ops.link") 37 37 struct sched_ext_ops ddsp_bogus_dsq_fail_ops = { 38 - .select_cpu = ddsp_bogus_dsq_fail_select_cpu, 39 - .exit = ddsp_bogus_dsq_fail_exit, 38 + .select_cpu = (void *) ddsp_bogus_dsq_fail_select_cpu, 39 + .exit = (void *) ddsp_bogus_dsq_fail_exit, 40 40 .name = "ddsp_bogus_dsq_fail", 41 41 .timeout_ms = 1000U, 42 42 };
+2 -2
tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
··· 32 32 33 33 SEC(".struct_ops.link") 34 34 struct sched_ext_ops ddsp_vtimelocal_fail_ops = { 35 - .select_cpu = ddsp_vtimelocal_fail_select_cpu, 36 - .exit = ddsp_vtimelocal_fail_exit, 35 + .select_cpu = (void *) ddsp_vtimelocal_fail_select_cpu, 36 + .exit = (void *) ddsp_vtimelocal_fail_exit, 37 37 .name = "ddsp_vtimelocal_fail", 38 38 .timeout_ms = 1000U, 39 39 };
+4 -4
tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
··· 56 56 57 57 SEC(".struct_ops.link") 58 58 struct sched_ext_ops dsp_local_on_ops = { 59 - .select_cpu = dsp_local_on_select_cpu, 60 - .enqueue = dsp_local_on_enqueue, 61 - .dispatch = dsp_local_on_dispatch, 62 - .exit = dsp_local_on_exit, 59 + .select_cpu = (void *) dsp_local_on_select_cpu, 60 + .enqueue = (void *) dsp_local_on_enqueue, 61 + .dispatch = (void *) dsp_local_on_dispatch, 62 + .exit = (void *) dsp_local_on_exit, 63 63 .name = "dsp_local_on", 64 64 .timeout_ms = 1000U, 65 65 };
+8
tools/testing/selftests/sched_ext/enq_last_no_enq_fails.bpf.c
··· 12 12 13 13 char _license[] SEC("license") = "GPL"; 14 14 15 + u32 exit_kind; 16 + 17 + void BPF_STRUCT_OPS_SLEEPABLE(enq_last_no_enq_fails_exit, struct scx_exit_info *info) 18 + { 19 + exit_kind = info->kind; 20 + } 21 + 15 22 SEC(".struct_ops.link") 16 23 struct sched_ext_ops enq_last_no_enq_fails_ops = { 17 24 .name = "enq_last_no_enq_fails", 18 25 /* Need to define ops.enqueue() with SCX_OPS_ENQ_LAST */ 19 26 .flags = SCX_OPS_ENQ_LAST, 27 + .exit = (void *) enq_last_no_enq_fails_exit, 20 28 .timeout_ms = 1000U, 21 29 };
+7 -3
tools/testing/selftests/sched_ext/enq_last_no_enq_fails.c
··· 31 31 struct bpf_link *link; 32 32 33 33 link = bpf_map__attach_struct_ops(skel->maps.enq_last_no_enq_fails_ops); 34 - if (link) { 35 - SCX_ERR("Incorrectly succeeded in to attaching scheduler"); 34 + if (!link) { 35 + SCX_ERR("Incorrectly failed at attaching scheduler"); 36 + return SCX_TEST_FAIL; 37 + } 38 + if (!skel->bss->exit_kind) { 39 + SCX_ERR("Incorrectly stayed loaded"); 36 40 return SCX_TEST_FAIL; 37 41 } 38 42 ··· 54 50 55 51 struct scx_test enq_last_no_enq_fails = { 56 52 .name = "enq_last_no_enq_fails", 57 - .description = "Verify we fail to load a scheduler if we specify " 53 + .description = "Verify we eject a scheduler if we specify " 58 54 "the SCX_OPS_ENQ_LAST flag without defining " 59 55 "ops.enqueue()", 60 56 .setup = setup,
+2 -2
tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
··· 36 36 37 37 SEC(".struct_ops.link") 38 38 struct sched_ext_ops enq_select_cpu_fails_ops = { 39 - .select_cpu = enq_select_cpu_fails_select_cpu, 40 - .enqueue = enq_select_cpu_fails_enqueue, 39 + .select_cpu = (void *) enq_select_cpu_fails_select_cpu, 40 + .enqueue = (void *) enq_select_cpu_fails_enqueue, 41 41 .name = "enq_select_cpu_fails", 42 42 .timeout_ms = 1000U, 43 43 };
+12 -10
tools/testing/selftests/sched_ext/exit.bpf.c
··· 15 15 16 16 #define EXIT_CLEANLY() scx_bpf_exit(exit_point, "%d", exit_point) 17 17 18 + #define DSQ_ID 0 19 + 18 20 s32 BPF_STRUCT_OPS(exit_select_cpu, struct task_struct *p, 19 21 s32 prev_cpu, u64 wake_flags) 20 22 { ··· 33 31 if (exit_point == EXIT_ENQUEUE) 34 32 EXIT_CLEANLY(); 35 33 36 - scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); 34 + scx_bpf_dispatch(p, DSQ_ID, SCX_SLICE_DFL, enq_flags); 37 35 } 38 36 39 37 void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p) ··· 41 39 if (exit_point == EXIT_DISPATCH) 42 40 EXIT_CLEANLY(); 43 41 44 - scx_bpf_consume(SCX_DSQ_GLOBAL); 42 + scx_bpf_consume(DSQ_ID); 45 43 } 46 44 47 45 void BPF_STRUCT_OPS(exit_enable, struct task_struct *p) ··· 69 67 if (exit_point == EXIT_INIT) 70 68 EXIT_CLEANLY(); 71 69 72 - return 0; 70 + return scx_bpf_create_dsq(DSQ_ID, -1); 73 71 } 74 72 75 73 SEC(".struct_ops.link") 76 74 struct sched_ext_ops exit_ops = { 77 - .select_cpu = exit_select_cpu, 78 - .enqueue = exit_enqueue, 79 - .dispatch = exit_dispatch, 80 - .init_task = exit_init_task, 81 - .enable = exit_enable, 82 - .exit = exit_exit, 83 - .init = exit_init, 75 + .select_cpu = (void *) exit_select_cpu, 76 + .enqueue = (void *) exit_enqueue, 77 + .dispatch = (void *) exit_dispatch, 78 + .init_task = (void *) exit_init_task, 79 + .enable = (void *) exit_enable, 80 + .exit = (void *) exit_exit, 81 + .init = (void *) exit_init, 84 82 .name = "exit", 85 83 .timeout_ms = 1000U, 86 84 };
+4 -4
tools/testing/selftests/sched_ext/hotplug.bpf.c
··· 46 46 47 47 SEC(".struct_ops.link") 48 48 struct sched_ext_ops hotplug_cb_ops = { 49 - .cpu_online = hotplug_cpu_online, 50 - .cpu_offline = hotplug_cpu_offline, 51 - .exit = hotplug_exit, 49 + .cpu_online = (void *) hotplug_cpu_online, 50 + .cpu_offline = (void *) hotplug_cpu_offline, 51 + .exit = (void *) hotplug_exit, 52 52 .name = "hotplug_cbs", 53 53 .timeout_ms = 1000U, 54 54 }; 55 55 56 56 SEC(".struct_ops.link") 57 57 struct sched_ext_ops hotplug_nocb_ops = { 58 - .exit = hotplug_exit, 58 + .exit = (void *) hotplug_exit, 59 59 .name = "hotplug_nocbs", 60 60 .timeout_ms = 1000U, 61 61 };
+4 -4
tools/testing/selftests/sched_ext/init_enable_count.bpf.c
··· 45 45 46 46 SEC(".struct_ops.link") 47 47 struct sched_ext_ops init_enable_count_ops = { 48 - .init_task = cnt_init_task, 49 - .exit_task = cnt_exit_task, 50 - .enable = cnt_enable, 51 - .disable = cnt_disable, 48 + .init_task = (void *) cnt_init_task, 49 + .exit_task = (void *) cnt_exit_task, 50 + .enable = (void *) cnt_enable, 51 + .disable = (void *) cnt_disable, 52 52 .name = "init_enable_count", 53 53 };
+29 -29
tools/testing/selftests/sched_ext/maximal.bpf.c
··· 131 131 132 132 SEC(".struct_ops.link") 133 133 struct sched_ext_ops maximal_ops = { 134 - .select_cpu = maximal_select_cpu, 135 - .enqueue = maximal_enqueue, 136 - .dequeue = maximal_dequeue, 137 - .dispatch = maximal_dispatch, 138 - .runnable = maximal_runnable, 139 - .running = maximal_running, 140 - .stopping = maximal_stopping, 141 - .quiescent = maximal_quiescent, 142 - .yield = maximal_yield, 143 - .core_sched_before = maximal_core_sched_before, 144 - .set_weight = maximal_set_weight, 145 - .set_cpumask = maximal_set_cpumask, 146 - .update_idle = maximal_update_idle, 147 - .cpu_acquire = maximal_cpu_acquire, 148 - .cpu_release = maximal_cpu_release, 149 - .cpu_online = maximal_cpu_online, 150 - .cpu_offline = maximal_cpu_offline, 151 - .init_task = maximal_init_task, 152 - .enable = maximal_enable, 153 - .exit_task = maximal_exit_task, 154 - .disable = maximal_disable, 155 - .cgroup_init = maximal_cgroup_init, 156 - .cgroup_exit = maximal_cgroup_exit, 157 - .cgroup_prep_move = maximal_cgroup_prep_move, 158 - .cgroup_move = maximal_cgroup_move, 159 - .cgroup_cancel_move = maximal_cgroup_cancel_move, 160 - .cgroup_set_weight = maximal_cgroup_set_weight, 161 - .init = maximal_init, 162 - .exit = maximal_exit, 134 + .select_cpu = (void *) maximal_select_cpu, 135 + .enqueue = (void *) maximal_enqueue, 136 + .dequeue = (void *) maximal_dequeue, 137 + .dispatch = (void *) maximal_dispatch, 138 + .runnable = (void *) maximal_runnable, 139 + .running = (void *) maximal_running, 140 + .stopping = (void *) maximal_stopping, 141 + .quiescent = (void *) maximal_quiescent, 142 + .yield = (void *) maximal_yield, 143 + .core_sched_before = (void *) maximal_core_sched_before, 144 + .set_weight = (void *) maximal_set_weight, 145 + .set_cpumask = (void *) maximal_set_cpumask, 146 + .update_idle = (void *) maximal_update_idle, 147 + .cpu_acquire = (void *) maximal_cpu_acquire, 148 + .cpu_release = (void *) maximal_cpu_release, 149 + .cpu_online = (void *) maximal_cpu_online, 150 + .cpu_offline = (void *) maximal_cpu_offline, 151 + .init_task = (void *) maximal_init_task, 152 + .enable = (void *) maximal_enable, 153 + .exit_task = (void *) maximal_exit_task, 154 + .disable = (void *) maximal_disable, 155 + .cgroup_init = (void *) maximal_cgroup_init, 156 + .cgroup_exit = (void *) maximal_cgroup_exit, 157 + .cgroup_prep_move = (void *) maximal_cgroup_prep_move, 158 + .cgroup_move = (void *) maximal_cgroup_move, 159 + .cgroup_cancel_move = (void *) maximal_cgroup_cancel_move, 160 + .cgroup_set_weight = (void *) maximal_cgroup_set_weight, 161 + .init = (void *) maximal_init, 162 + .exit = (void *) maximal_exit, 163 163 .name = "maximal", 164 164 };
+3 -3
tools/testing/selftests/sched_ext/maybe_null.bpf.c
··· 29 29 30 30 SEC(".struct_ops.link") 31 31 struct sched_ext_ops maybe_null_success = { 32 - .dispatch = maybe_null_success_dispatch, 33 - .yield = maybe_null_success_yield, 34 - .enable = maybe_null_running, 32 + .dispatch = (void *) maybe_null_success_dispatch, 33 + .yield = (void *) maybe_null_success_yield, 34 + .enable = (void *) maybe_null_running, 35 35 .name = "minimal", 36 36 };
+2 -2
tools/testing/selftests/sched_ext/maybe_null_fail_dsp.bpf.c
··· 19 19 20 20 SEC(".struct_ops.link") 21 21 struct sched_ext_ops maybe_null_fail = { 22 - .dispatch = maybe_null_fail_dispatch, 23 - .enable = maybe_null_running, 22 + .dispatch = (void *) maybe_null_fail_dispatch, 23 + .enable = (void *) maybe_null_running, 24 24 .name = "maybe_null_fail_dispatch", 25 25 };
+2 -2
tools/testing/selftests/sched_ext/maybe_null_fail_yld.bpf.c
··· 22 22 23 23 SEC(".struct_ops.link") 24 24 struct sched_ext_ops maybe_null_fail = { 25 - .yield = maybe_null_fail_yield, 26 - .enable = maybe_null_running, 25 + .yield = (void *) maybe_null_fail_yield, 26 + .enable = (void *) maybe_null_running, 27 27 .name = "maybe_null_fail_yield", 28 28 };
+1 -1
tools/testing/selftests/sched_ext/prog_run.bpf.c
··· 28 28 29 29 SEC(".struct_ops.link") 30 30 struct sched_ext_ops prog_run_ops = { 31 - .exit = prog_run_exit, 31 + .exit = (void *) prog_run_exit, 32 32 .name = "prog_run", 33 33 };
+1 -1
tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
··· 35 35 36 36 SEC(".struct_ops.link") 37 37 struct sched_ext_ops select_cpu_dfl_ops = { 38 - .enqueue = select_cpu_dfl_enqueue, 38 + .enqueue = (void *) select_cpu_dfl_enqueue, 39 39 .name = "select_cpu_dfl", 40 40 };
+3 -3
tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
··· 82 82 83 83 SEC(".struct_ops.link") 84 84 struct sched_ext_ops select_cpu_dfl_nodispatch_ops = { 85 - .select_cpu = select_cpu_dfl_nodispatch_select_cpu, 86 - .enqueue = select_cpu_dfl_nodispatch_enqueue, 87 - .init_task = select_cpu_dfl_nodispatch_init_task, 85 + .select_cpu = (void *) select_cpu_dfl_nodispatch_select_cpu, 86 + .enqueue = (void *) select_cpu_dfl_nodispatch_enqueue, 87 + .init_task = (void *) select_cpu_dfl_nodispatch_init_task, 88 88 .name = "select_cpu_dfl_nodispatch", 89 89 };
+1 -1
tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
··· 35 35 36 36 SEC(".struct_ops.link") 37 37 struct sched_ext_ops select_cpu_dispatch_ops = { 38 - .select_cpu = select_cpu_dispatch_select_cpu, 38 + .select_cpu = (void *) select_cpu_dispatch_select_cpu, 39 39 .name = "select_cpu_dispatch", 40 40 .timeout_ms = 1000U, 41 41 };
+2 -2
tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
··· 30 30 31 31 SEC(".struct_ops.link") 32 32 struct sched_ext_ops select_cpu_dispatch_bad_dsq_ops = { 33 - .select_cpu = select_cpu_dispatch_bad_dsq_select_cpu, 34 - .exit = select_cpu_dispatch_bad_dsq_exit, 33 + .select_cpu = (void *) select_cpu_dispatch_bad_dsq_select_cpu, 34 + .exit = (void *) select_cpu_dispatch_bad_dsq_exit, 35 35 .name = "select_cpu_dispatch_bad_dsq", 36 36 .timeout_ms = 1000U, 37 37 };
+2 -2
tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
··· 31 31 32 32 SEC(".struct_ops.link") 33 33 struct sched_ext_ops select_cpu_dispatch_dbl_dsp_ops = { 34 - .select_cpu = select_cpu_dispatch_dbl_dsp_select_cpu, 35 - .exit = select_cpu_dispatch_dbl_dsp_exit, 34 + .select_cpu = (void *) select_cpu_dispatch_dbl_dsp_select_cpu, 35 + .exit = (void *) select_cpu_dispatch_dbl_dsp_exit, 36 36 .name = "select_cpu_dispatch_dbl_dsp", 37 37 .timeout_ms = 1000U, 38 38 };
+6 -6
tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
··· 81 81 82 82 SEC(".struct_ops.link") 83 83 struct sched_ext_ops select_cpu_vtime_ops = { 84 - .select_cpu = select_cpu_vtime_select_cpu, 85 - .dispatch = select_cpu_vtime_dispatch, 86 - .running = select_cpu_vtime_running, 87 - .stopping = select_cpu_vtime_stopping, 88 - .enable = select_cpu_vtime_enable, 89 - .init = select_cpu_vtime_init, 84 + .select_cpu = (void *) select_cpu_vtime_select_cpu, 85 + .dispatch = (void *) select_cpu_vtime_dispatch, 86 + .running = (void *) select_cpu_vtime_running, 87 + .stopping = (void *) select_cpu_vtime_stopping, 88 + .enable = (void *) select_cpu_vtime_enable, 89 + .init = (void *) select_cpu_vtime_init, 90 90 .name = "select_cpu_vtime", 91 91 .timeout_ms = 1000U, 92 92 };
+40
tools/testing/vma/vma.c
··· 1522 1522 return true; 1523 1523 } 1524 1524 1525 + static bool test_expand_only_mode(void) 1526 + { 1527 + unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; 1528 + struct mm_struct mm = {}; 1529 + VMA_ITERATOR(vmi, &mm, 0); 1530 + struct vm_area_struct *vma_prev, *vma; 1531 + VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, flags, 5); 1532 + 1533 + /* 1534 + * Place a VMA prior to the one we're expanding so we assert that we do 1535 + * not erroneously try to traverse to the previous VMA even though we 1536 + * have, through the use of VMG_FLAG_JUST_EXPAND, indicated we do not 1537 + * need to do so. 1538 + */ 1539 + alloc_and_link_vma(&mm, 0, 0x2000, 0, flags); 1540 + 1541 + /* 1542 + * We will be positioned at the prev VMA, but looking to expand to 1543 + * 0x9000. 1544 + */ 1545 + vma_iter_set(&vmi, 0x3000); 1546 + vma_prev = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags); 1547 + vmg.prev = vma_prev; 1548 + vmg.merge_flags = VMG_FLAG_JUST_EXPAND; 1549 + 1550 + vma = vma_merge_new_range(&vmg); 1551 + ASSERT_NE(vma, NULL); 1552 + ASSERT_EQ(vma, vma_prev); 1553 + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); 1554 + ASSERT_EQ(vma->vm_start, 0x3000); 1555 + ASSERT_EQ(vma->vm_end, 0x9000); 1556 + ASSERT_EQ(vma->vm_pgoff, 3); 1557 + ASSERT_TRUE(vma_write_started(vma)); 1558 + ASSERT_EQ(vma_iter_addr(&vmi), 0x3000); 1559 + 1560 + cleanup_mm(&mm, &vmi); 1561 + return true; 1562 + } 1563 + 1525 1564 int main(void) 1526 1565 { 1527 1566 int num_tests = 0, num_fail = 0; ··· 1592 1553 TEST(vmi_prealloc_fail); 1593 1554 TEST(merge_extend); 1594 1555 TEST(copy_vma); 1556 + TEST(expand_only_mode); 1595 1557 1596 1558 #undef TEST 1597 1559