Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

net/sched/cls_api.c has overlapping changes to a call to
nlmsg_parse(): one side (from 'net') passed rtm_tca_policy instead
of NULL as the 5th argument, and the other (from 'net-next') passed
cb->extack instead of NULL as the 6th argument.
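
A hedged sketch of how the two sides combine in the resolved call —
the surrounding locals (cb, tca, tcm) follow the usual netlink dump
path in cls_api.c and are illustrative, not quoted from the resolved
tree:

    /* Sketch only: the resolution keeps both branches' arguments. */
    err = nlmsg_parse(cb->nlh, sizeof(*tcm), tca, TCA_MAX,
                      rtm_tca_policy, /* from 'net': 5th arg, was NULL */
                      cb->extack);    /* from 'net-next': 6th arg, was NULL */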

net/ipv4/ipmr_base.c is a case of a bug fix in 'net' being applied to
code which moved (to mr_table_dump()) in 'net-next'. Thanks to David
Ahern for the heads up.

Signed-off-by: David S. Miller <davem@davemloft.net>

+569 -876
+1 -1
Documentation/core-api/idr.rst
···
1 - .. SPDX-License-Identifier: CC-BY-SA-4.0
1 + .. SPDX-License-Identifier: GPL-2.0+
2 2
3 3 =============
4 4 ID Allocation
-397
LICENSES/other/CC-BY-SA-4.0
··· 1 - Valid-License-Identifier: CC-BY-SA-4.0 2 - SPDX-URL: https://spdx.org/licenses/CC-BY-SA-4.0 3 - Usage-Guide: 4 - To use the Creative Commons Attribution Share Alike 4.0 International 5 - license put the following SPDX tag/value pair into a comment according to 6 - the placement guidelines in the licensing rules documentation: 7 - SPDX-License-Identifier: CC-BY-SA-4.0 8 - License-Text: 9 - 10 - Creative Commons Attribution-ShareAlike 4.0 International 11 - 12 - Creative Commons Corporation ("Creative Commons") is not a law firm and 13 - does not provide legal services or legal advice. Distribution of Creative 14 - Commons public licenses does not create a lawyer-client or other 15 - relationship. Creative Commons makes its licenses and related information 16 - available on an "as-is" basis. Creative Commons gives no warranties 17 - regarding its licenses, any material licensed under their terms and 18 - conditions, or any related information. Creative Commons disclaims all 19 - liability for damages resulting from their use to the fullest extent 20 - possible. 21 - 22 - Using Creative Commons Public Licenses 23 - 24 - Creative Commons public licenses provide a standard set of terms and 25 - conditions that creators and other rights holders may use to share original 26 - works of authorship and other material subject to copyright and certain 27 - other rights specified in the public license below. The following 28 - considerations are for informational purposes only, are not exhaustive, and 29 - do not form part of our licenses. 30 - 31 - Considerations for licensors: Our public licenses are intended for use by 32 - those authorized to give the public permission to use material in ways 33 - otherwise restricted by copyright and certain other rights. Our licenses 34 - are irrevocable. Licensors should read and understand the terms and 35 - conditions of the license they choose before applying it. 
Licensors should 36 - also secure all rights necessary before applying our licenses so that the 37 - public can reuse the material as expected. Licensors should clearly mark 38 - any material not subject to the license. This includes other CC-licensed 39 - material, or material used under an exception or limitation to 40 - copyright. More considerations for licensors : 41 - wiki.creativecommons.org/Considerations_for_licensors 42 - 43 - Considerations for the public: By using one of our public licenses, a 44 - licensor grants the public permission to use the licensed material under 45 - specified terms and conditions. If the licensor's permission is not 46 - necessary for any reason - for example, because of any applicable exception 47 - or limitation to copyright - then that use is not regulated by the 48 - license. Our licenses grant only permissions under copyright and certain 49 - other rights that a licensor has authority to grant. Use of the licensed 50 - material may still be restricted for other reasons, including because 51 - others have copyright or other rights in the material. A licensor may make 52 - special requests, such as asking that all changes be marked or described. 53 - 54 - Although not required by our licenses, you are encouraged to respect those 55 - requests where reasonable. More considerations for the public : 56 - wiki.creativecommons.org/Considerations_for_licensees 57 - 58 - Creative Commons Attribution-ShareAlike 4.0 International Public License 59 - 60 - By exercising the Licensed Rights (defined below), You accept and agree to 61 - be bound by the terms and conditions of this Creative Commons 62 - Attribution-ShareAlike 4.0 International Public License ("Public 63 - License"). 
To the extent this Public License may be interpreted as a 64 - contract, You are granted the Licensed Rights in consideration of Your 65 - acceptance of these terms and conditions, and the Licensor grants You such 66 - rights in consideration of benefits the Licensor receives from making the 67 - Licensed Material available under these terms and conditions. 68 - 69 - Section 1 - Definitions. 70 - 71 - a. Adapted Material means material subject to Copyright and Similar 72 - Rights that is derived from or based upon the Licensed Material and 73 - in which the Licensed Material is translated, altered, arranged, 74 - transformed, or otherwise modified in a manner requiring permission 75 - under the Copyright and Similar Rights held by the Licensor. For 76 - purposes of this Public License, where the Licensed Material is a 77 - musical work, performance, or sound recording, Adapted Material is 78 - always produced where the Licensed Material is synched in timed 79 - relation with a moving image. 80 - 81 - b. Adapter's License means the license You apply to Your Copyright and 82 - Similar Rights in Your contributions to Adapted Material in 83 - accordance with the terms and conditions of this Public License. 84 - 85 - c. BY-SA Compatible License means a license listed at 86 - creativecommons.org/compatiblelicenses, approved by Creative Commons 87 - as essentially the equivalent of this Public License. 88 - 89 - d. Copyright and Similar Rights means copyright and/or similar rights 90 - closely related to copyright including, without limitation, 91 - performance, broadcast, sound recording, and Sui Generis Database 92 - Rights, without regard to how the rights are labeled or 93 - categorized. For purposes of this Public License, the rights 94 - specified in Section 2(b)(1)-(2) are not Copyright and Similar 95 - Rights. 96 - 97 - e. 
Effective Technological Measures means those measures that, in the 98 - absence of proper authority, may not be circumvented under laws 99 - fulfilling obligations under Article 11 of the WIPO Copyright Treaty 100 - adopted on December 20, 1996, and/or similar international 101 - agreements. 102 - 103 - f. Exceptions and Limitations means fair use, fair dealing, and/or any 104 - other exception or limitation to Copyright and Similar Rights that 105 - applies to Your use of the Licensed Material. 106 - 107 - g. License Elements means the license attributes listed in the name of 108 - a Creative Commons Public License. The License Elements of this 109 - Public License are Attribution and ShareAlike. 110 - 111 - h. Licensed Material means the artistic or literary work, database, or 112 - other material to which the Licensor applied this Public License. 113 - 114 - i. Licensed Rights means the rights granted to You subject to the terms 115 - and conditions of this Public License, which are limited to all 116 - Copyright and Similar Rights that apply to Your use of the Licensed 117 - Material and that the Licensor has authority to license. 118 - 119 - j. Licensor means the individual(s) or entity(ies) granting rights 120 - under this Public License. 121 - 122 - k. Share means to provide material to the public by any means or 123 - process that requires permission under the Licensed Rights, such as 124 - reproduction, public display, public performance, distribution, 125 - dissemination, communication, or importation, and to make material 126 - available to the public including in ways that members of the public 127 - may access the material from a place and at a time individually 128 - chosen by them. 129 - 130 - l. 
Sui Generis Database Rights means rights other than copyright 131 - resulting from Directive 96/9/EC of the European Parliament and of 132 - the Council of 11 March 1996 on the legal protection of databases, 133 - as amended and/or succeeded, as well as other essentially equivalent 134 - rights anywhere in the world. m. You means the individual or entity 135 - exercising the Licensed Rights under this Public License. Your has a 136 - corresponding meaning. 137 - 138 - Section 2 - Scope. 139 - 140 - a. License grant. 141 - 142 - 1. Subject to the terms and conditions of this Public License, the 143 - Licensor hereby grants You a worldwide, royalty-free, 144 - non-sublicensable, non-exclusive, irrevocable license to 145 - exercise the Licensed Rights in the Licensed Material to: 146 - 147 - A. reproduce and Share the Licensed Material, in whole or in part; and 148 - 149 - B. produce, reproduce, and Share Adapted Material. 150 - 151 - 2. Exceptions and Limitations. For the avoidance of doubt, where 152 - Exceptions and Limitations apply to Your use, this Public 153 - License does not apply, and You do not need to comply with its 154 - terms and conditions. 155 - 156 - 3. Term. The term of this Public License is specified in Section 6(a). 157 - 158 - 4. Media and formats; technical modifications allowed. The Licensor 159 - authorizes You to exercise the Licensed Rights in all media and 160 - formats whether now known or hereafter created, and to make 161 - technical modifications necessary to do so. The Licensor waives 162 - and/or agrees not to assert any right or authority to forbid You 163 - from making technical modifications necessary to exercise the 164 - Licensed Rights, including technical modifications necessary to 165 - circumvent Effective Technological Measures. For purposes of 166 - this Public License, simply making modifications authorized by 167 - this Section 2(a)(4) never produces Adapted Material. 168 - 169 - 5. Downstream recipients. 170 - 171 - A. 
Offer from the Licensor - Licensed Material. Every recipient 172 - of the Licensed Material automatically receives an offer 173 - from the Licensor to exercise the Licensed Rights under the 174 - terms and conditions of this Public License. 175 - 176 - B. Additional offer from the Licensor - Adapted Material. Every 177 - recipient of Adapted Material from You automatically 178 - receives an offer from the Licensor to exercise the Licensed 179 - Rights in the Adapted Material under the conditions of the 180 - Adapter's License You apply. 181 - 182 - C. No downstream restrictions. You may not offer or impose any 183 - additional or different terms or conditions on, or apply any 184 - Effective Technological Measures to, the Licensed Material 185 - if doing so restricts exercise of the Licensed Rights by any 186 - recipient of the Licensed Material. 187 - 188 - 6. No endorsement. Nothing in this Public License constitutes or 189 - may be construed as permission to assert or imply that You are, 190 - or that Your use of the Licensed Material is, connected with, or 191 - sponsored, endorsed, or granted official status by, the Licensor 192 - or others designated to receive attribution as provided in 193 - Section 3(a)(1)(A)(i). 194 - 195 - b. Other rights. 196 - 197 - 1. Moral rights, such as the right of integrity, are not licensed 198 - under this Public License, nor are publicity, privacy, and/or 199 - other similar personality rights; however, to the extent 200 - possible, the Licensor waives and/or agrees not to assert any 201 - such rights held by the Licensor to the limited extent necessary 202 - to allow You to exercise the Licensed Rights, but not otherwise. 203 - 204 - 2. Patent and trademark rights are not licensed under this Public 205 - License. 206 - 207 - 3. 
To the extent possible, the Licensor waives any right to collect 208 - royalties from You for the exercise of the Licensed Rights, 209 - whether directly or through a collecting society under any 210 - voluntary or waivable statutory or compulsory licensing 211 - scheme. In all other cases the Licensor expressly reserves any 212 - right to collect such royalties. 213 - 214 - Section 3 - License Conditions. 215 - 216 - Your exercise of the Licensed Rights is expressly made subject to the 217 - following conditions. 218 - 219 - a. Attribution. 220 - 221 - 1. If You Share the Licensed Material (including in modified form), 222 - You must: 223 - 224 - A. retain the following if it is supplied by the Licensor with 225 - the Licensed Material: 226 - 227 - i. identification of the creator(s) of the Licensed 228 - Material and any others designated to receive 229 - attribution, in any reasonable manner requested by the 230 - Licensor (including by pseudonym if designated); 231 - 232 - ii. a copyright notice; 233 - 234 - iii. a notice that refers to this Public License; 235 - 236 - iv. a notice that refers to the disclaimer of warranties; 237 - 238 - v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable; 239 - 240 - B. indicate if You modified the Licensed Material and retain an 241 - indication of any previous modifications; and 242 - 243 - C. indicate the Licensed Material is licensed under this Public 244 - License, and include the text of, or the URI or hyperlink to, 245 - this Public License. 246 - 247 - 2. You may satisfy the conditions in Section 3(a)(1) in any 248 - reasonable manner based on the medium, means, and context in 249 - which You Share the Licensed Material. For example, it may be 250 - reasonable to satisfy the conditions by providing a URI or 251 - hyperlink to a resource that includes the required information. 252 - 253 - 3. 
If requested by the Licensor, You must remove any of the 254 - information required by Section 3(a)(1)(A) to the extent 255 - reasonably practicable. b. ShareAlike.In addition to the 256 - conditions in Section 3(a), if You Share Adapted Material You 257 - produce, the following conditions also apply. 258 - 259 - 1. The Adapter's License You apply must be a Creative Commons 260 - license with the same License Elements, this version or 261 - later, or a BY-SA Compatible License. 262 - 263 - 2. You must include the text of, or the URI or hyperlink to, the 264 - Adapter's License You apply. You may satisfy this condition 265 - in any reasonable manner based on the medium, means, and 266 - context in which You Share Adapted Material. 267 - 268 - 3. You may not offer or impose any additional or different terms 269 - or conditions on, or apply any Effective Technological 270 - Measures to, Adapted Material that restrict exercise of the 271 - rights granted under the Adapter's License You apply. 272 - 273 - Section 4 - Sui Generis Database Rights. 274 - 275 - Where the Licensed Rights include Sui Generis Database Rights that apply to 276 - Your use of the Licensed Material: 277 - 278 - a. for the avoidance of doubt, Section 2(a)(1) grants You the right to 279 - extract, reuse, reproduce, and Share all or a substantial portion of 280 - the contents of the database; 281 - 282 - b. if You include all or a substantial portion of the database contents 283 - in a database in which You have Sui Generis Database Rights, then 284 - the database in which You have Sui Generis Database Rights (but not 285 - its individual contents) is Adapted Material, including for purposes 286 - of Section 3(b); and 287 - 288 - c. You must comply with the conditions in Section 3(a) if You Share all 289 - or a substantial portion of the contents of the database. 
290 - 291 - For the avoidance of doubt, this Section 4 supplements and does not 292 - replace Your obligations under this Public License where the Licensed 293 - Rights include other Copyright and Similar Rights. 294 - 295 - Section 5 - Disclaimer of Warranties and Limitation of Liability. 296 - 297 - a. Unless otherwise separately undertaken by the Licensor, to the 298 - extent possible, the Licensor offers the Licensed Material as-is and 299 - as-available, and makes no representations or warranties of any kind 300 - concerning the Licensed Material, whether express, implied, 301 - statutory, or other. This includes, without limitation, warranties 302 - of title, merchantability, fitness for a particular purpose, 303 - non-infringement, absence of latent or other defects, accuracy, or 304 - the presence or absence of errors, whether or not known or 305 - discoverable. Where disclaimers of warranties are not allowed in 306 - full or in part, this disclaimer may not apply to You. 307 - 308 - b. To the extent possible, in no event will the Licensor be liable to 309 - You on any legal theory (including, without limitation, negligence) 310 - or otherwise for any direct, special, indirect, incidental, 311 - consequential, punitive, exemplary, or other losses, costs, 312 - expenses, or damages arising out of this Public License or use of 313 - the Licensed Material, even if the Licensor has been advised of the 314 - possibility of such losses, costs, expenses, or damages. Where a 315 - limitation of liability is not allowed in full or in part, this 316 - limitation may not apply to You. 317 - 318 - c. The disclaimer of warranties and limitation of liability provided 319 - above shall be interpreted in a manner that, to the extent possible, 320 - most closely approximates an absolute disclaimer and waiver of all 321 - liability. 322 - 323 - Section 6 - Term and Termination. 324 - 325 - a. 
This Public License applies for the term of the Copyright and 326 - Similar Rights licensed here. However, if You fail to comply with 327 - this Public License, then Your rights under this Public License 328 - terminate automatically. 329 - 330 - b. Where Your right to use the Licensed Material has terminated under 331 - Section 6(a), it reinstates: 332 - 333 - 1. automatically as of the date the violation is cured, provided it 334 - is cured within 30 days of Your discovery of the violation; or 335 - 336 - 2. upon express reinstatement by the Licensor. 337 - 338 - c. For the avoidance of doubt, this Section 6(b) does not affect any 339 - right the Licensor may have to seek remedies for Your violations of 340 - this Public License. 341 - 342 - d. For the avoidance of doubt, the Licensor may also offer the Licensed 343 - Material under separate terms or conditions or stop distributing the 344 - Licensed Material at any time; however, doing so will not terminate 345 - this Public License. 346 - 347 - e. Sections 1, 5, 6, 7, and 8 survive termination of this Public License. 348 - 349 - Section 7 - Other Terms and Conditions. 350 - 351 - a. The Licensor shall not be bound by any additional or different terms 352 - or conditions communicated by You unless expressly agreed. 353 - 354 - b. Any arrangements, understandings, or agreements regarding the 355 - Licensed Material not stated herein are separate from and 356 - independent of the terms and conditions of this Public License. 357 - 358 - Section 8 - Interpretation. 359 - 360 - a. For the avoidance of doubt, this Public License does not, and shall 361 - not be interpreted to, reduce, limit, restrict, or impose conditions 362 - on any use of the Licensed Material that could lawfully be made 363 - without permission under this Public License. 364 - 365 - b. 
To the extent possible, if any provision of this Public License is 366 - deemed unenforceable, it shall be automatically reformed to the 367 - minimum extent necessary to make it enforceable. If the provision 368 - cannot be reformed, it shall be severed from this Public License 369 - without affecting the enforceability of the remaining terms and 370 - conditions. 371 - 372 - c. No term or condition of this Public License will be waived and no 373 - failure to comply consented to unless expressly agreed to by the 374 - Licensor. 375 - 376 - d. Nothing in this Public License constitutes or may be interpreted as 377 - a limitation upon, or waiver of, any privileges and immunities that 378 - apply to the Licensor or You, including from the legal processes of 379 - any jurisdiction or authority. 380 - 381 - Creative Commons is not a party to its public licenses. Notwithstanding, 382 - Creative Commons may elect to apply one of its public licenses to material 383 - it publishes and in those instances will be considered the "Licensor." The 384 - text of the Creative Commons public licenses is dedicated to the public 385 - domain under the CC0 Public Domain Dedication. Except for the limited 386 - purpose of indicating that material is shared under a Creative Commons 387 - public license or as otherwise permitted by the Creative Commons policies 388 - published at creativecommons.org/policies, Creative Commons does not 389 - authorize the use of the trademark "Creative Commons" or any other 390 - trademark or logo of Creative Commons without its prior written consent 391 - including, without limitation, in connection with any unauthorized 392 - modifications to any of its public licenses or any other arrangements, 393 - understandings, or agreements concerning use of licensed material. For the 394 - avoidance of doubt, this paragraph does not form part of the public 395 - licenses. 396 - 397 - Creative Commons may be contacted at creativecommons.org.
+1 -2
MAINTAINERS
···
10161 10161 T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec.git
10162 10162 T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next.git
10163 10163 S: Maintained
10164 - F: net/core/flow.c
10165 10164 F: net/xfrm/
10166 10165 F: net/key/
10167 10166 F: net/ipv4/xfrm*
···
13100 13101 M: Paul Moore <paul@paul-moore.com>
13101 13102 M: Stephen Smalley <sds@tycho.nsa.gov>
13102 13103 M: Eric Paris <eparis@parisplace.org>
13103 - L: selinux@tycho.nsa.gov (moderated for non-subscribers)
13104 + L: selinux@vger.kernel.org
13104 13105 W: https://selinuxproject.org
13105 13106 W: https://github.com/SELinuxProject
13106 13107 T: git git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux.git
+1 -1
Makefile
···
2 2 VERSION = 4
3 3 PATCHLEVEL = 19
4 4 SUBLEVEL = 0
5 - EXTRAVERSION = -rc7
5 + EXTRAVERSION = -rc8
6 6 NAME = Merciless Moray
7 7
8 8 # *DOCUMENTATION*
+4 -4
arch/arm/kvm/coproc.c
···
478 478
479 479 /* ICC_SGI1R */
480 480 { CRm64(12), Op1( 0), is64, access_gic_sgi},
481 - /* ICC_ASGI1R */
482 - { CRm64(12), Op1( 1), is64, access_gic_sgi},
483 - /* ICC_SGI0R */
484 - { CRm64(12), Op1( 2), is64, access_gic_sgi},
485 481
486 482 /* VBAR: swapped by interrupt.S. */
487 483 { CRn(12), CRm( 0), Op1( 0), Op2( 0), is32,
488 484 NULL, reset_val, c12_VBAR, 0x00000000 },
489 485
486 + /* ICC_ASGI1R */
487 + { CRm64(12), Op1( 1), is64, access_gic_sgi},
488 + /* ICC_SGI0R */
489 + { CRm64(12), Op1( 2), is64, access_gic_sgi},
490 490 /* ICC_SRE */
491 491 { CRn(12), CRm(12), Op1( 0), Op2(5), is32, access_gic_sre },
492 492
+7
arch/arm64/kernel/perf_event.c
···
966 966 return 0;
967 967 }
968 968
969 + static int armv8pmu_filter_match(struct perf_event *event)
970 + {
971 + unsigned long evtype = event->hw.config_base & ARMV8_PMU_EVTYPE_EVENT;
972 + return evtype != ARMV8_PMUV3_PERFCTR_CHAIN;
973 + }
974 +
969 975 static void armv8pmu_reset(void *info)
970 976 {
971 977 struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
···
1120 1114 cpu_pmu->stop = armv8pmu_stop,
1121 1115 cpu_pmu->reset = armv8pmu_reset,
1122 1116 cpu_pmu->set_event_filter = armv8pmu_set_event_filter;
1117 + cpu_pmu->filter_match = armv8pmu_filter_match;
1123 1118
1124 1119 return 0;
1125 1120 }
+25 -27
arch/arm64/kernel/setup.c
···
64 64 #include <asm/xen/hypervisor.h>
65 65 #include <asm/mmu_context.h>
66 66
67 + static int num_standard_resources;
68 + static struct resource *standard_resources;
69 +
67 70 phys_addr_t __fdt_pointer __initdata;
68 71
69 72 /*
···
209 206 {
210 207 struct memblock_region *region;
211 208 struct resource *res;
209 + unsigned long i = 0;
212 210
213 211 kernel_code.start = __pa_symbol(_text);
214 212 kernel_code.end = __pa_symbol(__init_begin - 1);
215 213 kernel_data.start = __pa_symbol(_sdata);
216 214 kernel_data.end = __pa_symbol(_end - 1);
217 215
216 + num_standard_resources = memblock.memory.cnt;
217 + standard_resources = alloc_bootmem_low(num_standard_resources *
218 + sizeof(*standard_resources));
219 +
218 220 for_each_memblock(memory, region) {
219 - res = alloc_bootmem_low(sizeof(*res));
221 + res = &standard_resources[i++];
220 222 if (memblock_is_nomap(region)) {
221 223 res->name = "reserved";
222 224 res->flags = IORESOURCE_MEM;
···
251 243
252 244 static int __init reserve_memblock_reserved_regions(void)
253 245 {
254 - phys_addr_t start, end, roundup_end = 0;
255 - struct resource *mem, *res;
256 - u64 i;
246 + u64 i, j;
257 247
258 - for_each_reserved_mem_region(i, &start, &end) {
259 - if (end <= roundup_end)
260 - continue; /* done already */
248 + for (i = 0; i < num_standard_resources; ++i) {
249 + struct resource *mem = &standard_resources[i];
250 + phys_addr_t r_start, r_end, mem_size = resource_size(mem);
261 -
262 - start = __pfn_to_phys(PFN_DOWN(start));
263 - end = __pfn_to_phys(PFN_UP(end)) - 1;
264 - roundup_end = end;
265 -
266 - res = kzalloc(sizeof(*res), GFP_ATOMIC);
267 - if (WARN_ON(!res))
268 - return -ENOMEM;
269 - res->start = start;
270 - res->end = end;
271 - res->name = "reserved";
272 - res->flags = IORESOURCE_MEM;
273 -
274 - mem = request_resource_conflict(&iomem_resource, res);
275 - /*
276 - * We expected memblock_reserve() regions to conflict with
277 - * memory created by request_standard_resources().
278 - */
279 - if (WARN_ON_ONCE(!mem))
252 + if (!memblock_is_region_reserved(mem->start, mem_size))
280 253 continue;
281 - kfree(res);
282 254
283 - reserve_region_with_split(mem, start, end, "reserved");
255 + for_each_reserved_mem_region(j, &r_start, &r_end) {
256 + resource_size_t start, end;
257 +
258 + start = max(PFN_PHYS(PFN_DOWN(r_start)), mem->start);
259 + end = min(PFN_PHYS(PFN_UP(r_end)) - 1, mem->end);
260 +
261 + if (start > mem->end || end < mem->start)
262 + continue;
263 +
264 + reserve_region_with_split(mem, start, end, "reserved");
265 + }
284 266 }
285 267
286 268 return 0;
+1 -1
arch/parisc/kernel/unwind.c
···
426 426 r.gr[30] = get_parisc_stackpointer();
427 427 regs = &r;
428 428 }
429 - unwind_frame_init(info, task, &r);
429 + unwind_frame_init(info, task, regs);
430 430 } else {
431 431 unwind_frame_init_from_blocked_task(info, task);
432 432 }
+2 -2
arch/powerpc/include/asm/book3s/64/pgtable.h
···
114 114 */
115 115 #define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
116 116 _PAGE_ACCESSED | H_PAGE_THP_HUGE | _PAGE_PTE | \
117 - _PAGE_SOFT_DIRTY)
117 + _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
118 118 /*
119 119 * user access blocked by key
120 120 */
···
132 132 */
133 133 #define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
134 134 _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE | \
135 - _PAGE_SOFT_DIRTY)
135 + _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
136 136
137 137 #define H_PTE_PKEY (H_PTE_PKEY_BIT0 | H_PTE_PKEY_BIT1 | H_PTE_PKEY_BIT2 | \
138 138 H_PTE_PKEY_BIT3 | H_PTE_PKEY_BIT4)
+1 -1
arch/sparc/include/asm/cpudata_64.h
···
28 28 unsigned short sock_id; /* physical package */
29 29 unsigned short core_id;
30 30 unsigned short max_cache_id; /* groupings of highest shared cache */
31 - unsigned short proc_id; /* strand (aka HW thread) id */
31 + signed short proc_id; /* strand (aka HW thread) id */
32 32 } cpuinfo_sparc;
33 33
34 34 DECLARE_PER_CPU(cpuinfo_sparc, __cpu_data);
+2 -1
arch/sparc/include/uapi/asm/unistd.h
···
427 427 #define __NR_preadv2 358
428 428 #define __NR_pwritev2 359
429 429 #define __NR_statx 360
430 + #define __NR_io_pgetevents 361
430 431
431 - #define NR_syscalls 361
432 + #define NR_syscalls 362
432 433
433 434 /* Bitmask values returned from kern_features system call. */
434 435 #define KERN_FEATURE_MIXED_MODE_STACK 0x00000001
+2 -2
arch/sparc/kernel/auxio_64.c
···
115 115 auxio_devtype = AUXIO_TYPE_SBUS;
116 116 size = 1;
117 117 } else {
118 - printk("auxio: Unknown parent bus type [%pOFn]\n",
119 - dp->parent);
118 + printk("auxio: Unknown parent bus type [%s]\n",
119 + dp->parent->name);
120 120 return -ENODEV;
121 121 }
122 122 auxio_register = of_ioremap(&dev->resource[0], 0, size, "auxio");
+22 -4
arch/sparc/kernel/perf_event.c
···
24 24 #include <asm/cpudata.h>
25 25 #include <linux/uaccess.h>
26 26 #include <linux/atomic.h>
27 + #include <linux/sched/clock.h>
27 28 #include <asm/nmi.h>
28 29 #include <asm/pcr.h>
29 30 #include <asm/cacheflush.h>
···
928 927 sparc_perf_event_update(cp, &cp->hw,
929 928 cpuc->current_idx[i]);
930 929 cpuc->current_idx[i] = PIC_NO_INDEX;
930 + if (cp->hw.state & PERF_HES_STOPPED)
931 + cp->hw.state |= PERF_HES_ARCH;
931 932 }
932 933 }
933 934 }
···
962 959
963 960 enc = perf_event_get_enc(cpuc->events[i]);
964 961 cpuc->pcr[0] &= ~mask_for_index(idx);
965 - if (hwc->state & PERF_HES_STOPPED)
962 + if (hwc->state & PERF_HES_ARCH) {
966 963 cpuc->pcr[0] |= nop_for_index(idx);
967 - else
964 + } else {
968 965 cpuc->pcr[0] |= event_encoding(enc, idx);
966 + hwc->state = 0;
967 + }
969 968 }
970 969 out:
971 970 cpuc->pcr[0] |= cpuc->event[0]->hw.config_base;
···
992 987 continue;
993 988
994 989 cpuc->current_idx[i] = idx;
990 +
991 + if (cp->hw.state & PERF_HES_ARCH)
992 + continue;
995 993
996 994 sparc_pmu_start(cp, PERF_EF_RELOAD);
997 995 }
···
1087 1079 event->hw.state = 0;
1088 1080
1089 1081 sparc_pmu_enable_event(cpuc, &event->hw, idx);
1082 +
1083 + perf_event_update_userpage(event);
1090 1084 }
1091 1085
1092 1086 static void sparc_pmu_stop(struct perf_event *event, int flags)
···
1381 1371 cpuc->events[n0] = event->hw.event_base;
1382 1372 cpuc->current_idx[n0] = PIC_NO_INDEX;
1383 1373
1384 - event->hw.state = PERF_HES_UPTODATE;
1374 + event->hw.state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
1385 1375 if (!(ef_flags & PERF_EF_START))
1386 - event->hw.state |= PERF_HES_STOPPED;
1376 + event->hw.state |= PERF_HES_ARCH;
1387 1377
1388 1378 /*
1389 1379 * If group events scheduling transaction was started,
···
1613 1603 struct perf_sample_data data;
1614 1604 struct cpu_hw_events *cpuc;
1615 1605 struct pt_regs *regs;
1606 + u64 finish_clock;
1607 + u64 start_clock;
1616 1608 int i;
1617 1609
1618 1610 if (!atomic_read(&active_events))
···
1627 1615 default:
1628 1616 return NOTIFY_DONE;
1629 1617 }
1618 +
1619 + start_clock = sched_clock();
1630 1620
1631 1621 regs = args->regs;
1632 1622
···
1667 1653 if (perf_event_overflow(event, &data, regs))
1668 1654 sparc_pmu_stop(event, 0);
1669 1655 }
1656 +
1657 + finish_clock = sched_clock();
1658 +
1659 + perf_sample_event_took(finish_clock - start_clock);
1670 1660
1671 1661 return NOTIFY_STOP;
1672 1662 }
+2 -2
arch/sparc/kernel/power.c
···
41 41
42 42 power_reg = of_ioremap(res, 0, 0x4, "power");
43 43
44 - printk(KERN_INFO "%pOFn: Control reg at %llx\n",
45 - op->dev.of_node, res->start);
44 + printk(KERN_INFO "%s: Control reg at %llx\n",
45 + op->dev.of_node->name, res->start);
46 46
47 47 if (has_button_interrupt(irq, op->dev.of_node)) {
48 48 if (request_irq(irq,
+13 -13
arch/sparc/kernel/prom_32.c
···
68 68 return;
69 69
70 70 regs = rprop->value;
71 - sprintf(tmp_buf, "%pOFn@%x,%x",
72 - dp,
71 + sprintf(tmp_buf, "%s@%x,%x",
72 + dp->name,
73 73 regs->which_io, regs->phys_addr);
74 74 }
75 75
···
84 84 return;
85 85
86 86 regs = prop->value;
87 - sprintf(tmp_buf, "%pOFn@%x,%x",
88 - dp,
87 + sprintf(tmp_buf, "%s@%x,%x",
88 + dp->name,
89 89 regs->which_io,
90 90 regs->phys_addr);
91 91 }
···
104 104 regs = prop->value;
105 105 devfn = (regs->phys_hi >> 8) & 0xff;
106 106 if (devfn & 0x07) {
107 - sprintf(tmp_buf, "%pOFn@%x,%x",
108 - dp,
107 + sprintf(tmp_buf, "%s@%x,%x",
108 + dp->name,
109 109 devfn >> 3,
110 110 devfn & 0x07);
111 111 } else {
112 - sprintf(tmp_buf, "%pOFn@%x",
113 - dp,
112 + sprintf(tmp_buf, "%s@%x",
113 + dp->name,
114 114 devfn >> 3);
115 115 }
116 116 }
···
127 127
128 128 regs = prop->value;
129 129
130 - sprintf(tmp_buf, "%pOFn@%x,%x",
131 - dp,
130 + sprintf(tmp_buf, "%s@%x,%x",
131 + dp->name,
132 132 regs->which_io, regs->phys_addr);
133 133 }
···
167 167 return;
168 168 device = prop->value;
169 169
170 - sprintf(tmp_buf, "%pOFn:%d:%d@%x,%x",
171 - dp, *vendor, *device,
170 + sprintf(tmp_buf, "%s:%d:%d@%x,%x",
171 + dp->name, *vendor, *device,
172 172 *intr, reg0);
173 173 }
174 174
···
201 201 tmp_buf[0] = '\0';
202 202 __build_path_component(dp, tmp_buf);
203 203 if (tmp_buf[0] == '\0')
204 - snprintf(tmp_buf, sizeof(tmp_buf), "%pOFn", dp);
204 + strcpy(tmp_buf, dp->name);
205 205
206 206 n = prom_early_alloc(strlen(tmp_buf) + 1);
207 207 strcpy(n, tmp_buf);
+34 -34
arch/sparc/kernel/prom_64.c
··· 82 82 83 83 regs = rprop->value; 84 84 if (!of_node_is_root(dp->parent)) { 85 - sprintf(tmp_buf, "%pOFn@%x,%x", 86 - dp, 85 + sprintf(tmp_buf, "%s@%x,%x", 86 + dp->name, 87 87 (unsigned int) (regs->phys_addr >> 32UL), 88 88 (unsigned int) (regs->phys_addr & 0xffffffffUL)); 89 89 return; ··· 97 97 const char *prefix = (type == 0) ? "m" : "i"; 98 98 99 99 if (low_bits) 100 - sprintf(tmp_buf, "%pOFn@%s%x,%x", 101 - dp, prefix, 100 + sprintf(tmp_buf, "%s@%s%x,%x", 101 + dp->name, prefix, 102 102 high_bits, low_bits); 103 103 else 104 - sprintf(tmp_buf, "%pOFn@%s%x", 105 - dp, 104 + sprintf(tmp_buf, "%s@%s%x", 105 + dp->name, 106 106 prefix, 107 107 high_bits); 108 108 } else if (type == 12) { 109 - sprintf(tmp_buf, "%pOFn@%x", 110 - dp, high_bits); 109 + sprintf(tmp_buf, "%s@%x", 110 + dp->name, high_bits); 111 111 } 112 112 } 113 113 ··· 122 122 123 123 regs = prop->value; 124 124 if (!of_node_is_root(dp->parent)) { 125 - sprintf(tmp_buf, "%pOFn@%x,%x", 126 - dp, 125 + sprintf(tmp_buf, "%s@%x,%x", 126 + dp->name, 127 127 (unsigned int) (regs->phys_addr >> 32UL), 128 128 (unsigned int) (regs->phys_addr & 0xffffffffUL)); 129 129 return; ··· 138 138 if (tlb_type >= cheetah) 139 139 mask = 0x7fffff; 140 140 141 - sprintf(tmp_buf, "%pOFn@%x,%x", 142 - dp, 141 + sprintf(tmp_buf, "%s@%x,%x", 142 + dp->name, 143 143 *(u32 *)prop->value, 144 144 (unsigned int) (regs->phys_addr & mask)); 145 145 } ··· 156 156 return; 157 157 158 158 regs = prop->value; 159 - sprintf(tmp_buf, "%pOFn@%x,%x", 160 - dp, 159 + sprintf(tmp_buf, "%s@%x,%x", 160 + dp->name, 161 161 regs->which_io, 162 162 regs->phys_addr); 163 163 } ··· 176 176 regs = prop->value; 177 177 devfn = (regs->phys_hi >> 8) & 0xff; 178 178 if (devfn & 0x07) { 179 - sprintf(tmp_buf, "%pOFn@%x,%x", 180 - dp, 179 + sprintf(tmp_buf, "%s@%x,%x", 180 + dp->name, 181 181 devfn >> 3, 182 182 devfn & 0x07); 183 183 } else { 184 - sprintf(tmp_buf, "%pOFn@%x", 185 - dp, 184 + sprintf(tmp_buf, "%s@%x", 185 + dp->name, 186 186 devfn 
>> 3); 187 187 } 188 188 } ··· 203 203 if (!prop) 204 204 return; 205 205 206 - sprintf(tmp_buf, "%pOFn@%x,%x", 207 - dp, 206 + sprintf(tmp_buf, "%s@%x,%x", 207 + dp->name, 208 208 *(u32 *) prop->value, 209 209 (unsigned int) (regs->phys_addr & 0xffffffffUL)); 210 210 } ··· 221 221 222 222 regs = prop->value; 223 223 224 - sprintf(tmp_buf, "%pOFn@%x", dp, *regs); 224 + sprintf(tmp_buf, "%s@%x", dp->name, *regs); 225 225 } 226 226 227 227 /* "name@addrhi,addrlo" */ ··· 236 236 237 237 regs = prop->value; 238 238 239 - sprintf(tmp_buf, "%pOFn@%x,%x", 240 - dp, 239 + sprintf(tmp_buf, "%s@%x,%x", 240 + dp->name, 241 241 (unsigned int) (regs->phys_addr >> 32UL), 242 242 (unsigned int) (regs->phys_addr & 0xffffffffUL)); 243 243 } ··· 257 257 /* This actually isn't right... should look at the #address-cells 258 258 * property of the i2c bus node etc. etc. 259 259 */ 260 - sprintf(tmp_buf, "%pOFn@%x,%x", 261 - dp, regs[0], regs[1]); 260 + sprintf(tmp_buf, "%s@%x,%x", 261 + dp->name, regs[0], regs[1]); 262 262 } 263 263 264 264 /* "name@reg0[,reg1]" */ ··· 274 274 regs = prop->value; 275 275 276 276 if (prop->length == sizeof(u32) || regs[1] == 1) { 277 - sprintf(tmp_buf, "%pOFn@%x", 278 - dp, regs[0]); 277 + sprintf(tmp_buf, "%s@%x", 278 + dp->name, regs[0]); 279 279 } else { 280 - sprintf(tmp_buf, "%pOFn@%x,%x", 281 - dp, regs[0], regs[1]); 280 + sprintf(tmp_buf, "%s@%x,%x", 281 + dp->name, regs[0], regs[1]); 282 282 } 283 283 } 284 284 ··· 295 295 regs = prop->value; 296 296 297 297 if (regs[2] || regs[3]) { 298 - sprintf(tmp_buf, "%pOFn@%08x%08x,%04x%08x", 299 - dp, regs[0], regs[1], regs[2], regs[3]); 298 + sprintf(tmp_buf, "%s@%08x%08x,%04x%08x", 299 + dp->name, regs[0], regs[1], regs[2], regs[3]); 300 300 } else { 301 - sprintf(tmp_buf, "%pOFn@%08x%08x", 302 - dp, regs[0], regs[1]); 301 + sprintf(tmp_buf, "%s@%08x%08x", 302 + dp->name, regs[0], regs[1]); 303 303 } 304 304 } 305 305 ··· 361 361 tmp_buf[0] = '\0'; 362 362 __build_path_component(dp, tmp_buf); 363 363 if 
(tmp_buf[0] == '\0') 364 - snprintf(tmp_buf, sizeof(tmp_buf), "%pOFn", dp); 364 + strcpy(tmp_buf, dp->name); 365 365 366 366 n = prom_early_alloc(strlen(tmp_buf) + 1); 367 367 strcpy(n, tmp_buf);
+2 -1
arch/sparc/kernel/rtrap_64.S
··· 84 84 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1 85 85 sethi %hi(0xf << 20), %l4 86 86 and %l1, %l4, %l4 87 + andn %l1, %l4, %l1 87 88 ba,pt %xcc, __handle_preemption_continue 88 - andn %l1, %l4, %l1 89 + srl %l4, 20, %l4 89 90 90 91 /* When returning from a NMI (%pil==15) interrupt we want to 91 92 * avoid running softirqs, doing IRQ tracing, preempting, etc.
+1 -1
arch/sparc/kernel/systbls_32.S
··· 90 90 /*345*/ .long sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf 91 91 /*350*/ .long sys_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen 92 92 /*355*/ .long sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2 93 - /*360*/ .long sys_statx 93 + /*360*/ .long sys_statx, sys_io_pgetevents
+2 -2
arch/sparc/kernel/systbls_64.S
··· 91 91 .word sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf 92 92 /*350*/ .word sys32_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen 93 93 .word compat_sys_setsockopt, sys_mlock2, sys_copy_file_range, compat_sys_preadv2, compat_sys_pwritev2 94 - /*360*/ .word sys_statx 94 + /*360*/ .word sys_statx, compat_sys_io_pgetevents 95 95 96 96 #endif /* CONFIG_COMPAT */ 97 97 ··· 173 173 .word sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf 174 174 /*350*/ .word sys64_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen 175 175 .word sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2 176 - /*360*/ .word sys_statx 176 + /*360*/ .word sys_statx, sys_io_pgetevents
+11 -1
arch/sparc/vdso/vclock_gettime.c
··· 33 33 #define TICK_PRIV_BIT (1ULL << 63) 34 34 #endif 35 35 36 + #ifdef CONFIG_SPARC64 36 37 #define SYSCALL_STRING \ 37 38 "ta 0x6d;" \ 38 - "sub %%g0, %%o0, %%o0;" \ 39 + "bcs,a 1f;" \ 40 + " sub %%g0, %%o0, %%o0;" \ 41 + "1:" 42 + #else 43 + #define SYSCALL_STRING \ 44 + "ta 0x10;" \ 45 + "bcs,a 1f;" \ 46 + " sub %%g0, %%o0, %%o0;" \ 47 + "1:" 48 + #endif 39 49 40 50 #define SYSCALL_CLOBBERS \ 41 51 "f0", "f1", "f2", "f3", "f4", "f5", "f6", "f7", \
+3 -1
arch/sparc/vdso/vma.c
··· 262 262 unsigned long val; 263 263 264 264 err = kstrtoul(s, 10, &val); 265 + if (err) 266 + return err; 265 267 vdso_enabled = val; 266 - return err; 268 + return 0; 267 269 } 268 270 __setup("vdso=", vdso_setup);
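The vma.c fix checks `kstrtoul()`'s return value before consuming the parsed value, so a malformed `vdso=` argument no longer stores garbage into `vdso_enabled`. A minimal userspace sketch of the same parse-then-check pattern (the `parse_vdso_flag` helper is hypothetical, standing in for `vdso_setup()`):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Parse a decimal flag string. Only store the result on success,
 * mirroring the vdso_setup() fix: check the error before using val. */
static int parse_vdso_flag(const char *s, unsigned long *out)
{
    char *end;
    errno = 0;
    unsigned long val = strtoul(s, &end, 10);
    if (errno || end == s || *end != '\0')
        return -EINVAL;          /* reject: *out is left untouched */
    *out = val;
    return 0;
}
```
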
+1 -1
arch/x86/include/asm/pgtable_types.h
··· 124 124 */ 125 125 #define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \ 126 126 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \ 127 - _PAGE_SOFT_DIRTY) 127 + _PAGE_SOFT_DIRTY | _PAGE_DEVMAP) 128 128 #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE) 129 129 130 130 /*
+5 -1
arch/x86/kvm/svm.c
··· 436 436 437 437 static inline bool svm_sev_enabled(void) 438 438 { 439 - return max_sev_asid; 439 + return IS_ENABLED(CONFIG_KVM_AMD_SEV) ? max_sev_asid : 0; 440 440 } 441 441 442 442 static inline bool sev_guest(struct kvm *kvm) 443 443 { 444 + #ifdef CONFIG_KVM_AMD_SEV 444 445 struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; 445 446 446 447 return sev->active; 448 + #else 449 + return false; 450 + #endif 447 451 } 448 452 449 453 static inline int sev_get_asid(struct kvm *kvm)
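The SVM change compiles the SEV paths down to a constant `false` when `CONFIG_KVM_AMD_SEV` is off, so `sev_info` is never dereferenced in that configuration. A hedged userspace sketch of the same shape (the `FEATURE_SEV` macro and `guest` struct are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* #define FEATURE_SEV 1   -- uncomment to compile the feature in */

struct guest { bool sev_active; };

/* When the feature is compiled out, callers get a constant false and
 * the struct field is never touched -- same shape as sev_guest(). */
static bool guest_sev_active(const struct guest *g)
{
#ifdef FEATURE_SEV
    return g->sev_active;
#else
    (void)g;
    return false;
#endif
}
```
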
+5 -1
arch/x86/kvm/vmx.c
··· 1572 1572 goto out; 1573 1573 } 1574 1574 1575 + /* 1576 + * FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE hypercall needs the address of the 1577 + * base of EPT PML4 table, strip off EPT configuration information. 1578 + */ 1575 1579 ret = hyperv_flush_guest_mapping( 1576 - to_vmx(kvm_get_vcpu(kvm, 0))->ept_pointer); 1580 + to_vmx(kvm_get_vcpu(kvm, 0))->ept_pointer & PAGE_MASK); 1577 1581 1578 1582 out: 1579 1583 spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);
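The hypercall wants the page-aligned base of the EPT PML4 table, but `ept_pointer` also carries configuration bits in its low 12 bits; `& PAGE_MASK` strips them off. A small sketch of that masking, assuming 4 KiB pages:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  (~(((uint64_t)1 << PAGE_SHIFT) - 1))

/* Strip low config/flag bits to recover the page-aligned table base. */
static uint64_t ept_table_base(uint64_t ept_pointer)
{
    return ept_pointer & PAGE_MASK;
}
```
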
+1 -1
block/blk-wbt.c
··· 310 310 rq_depth_scale_up(&rwb->rq_depth); 311 311 calc_wb_limits(rwb); 312 312 rwb->unknown_cnt = 0; 313 + rwb_wake_all(rwb); 313 314 rwb_trace_step(rwb, "scale up"); 314 315 } 315 316 ··· 319 318 rq_depth_scale_down(&rwb->rq_depth, hard_throttle); 320 319 calc_wb_limits(rwb); 321 320 rwb->unknown_cnt = 0; 322 - rwb_wake_all(rwb); 323 321 rwb_trace_step(rwb, "scale down"); 324 322 } 325 323
+10 -4
drivers/block/sunvdc.c
··· 36 36 #define VDC_TX_RING_SIZE 512 37 37 #define VDC_DEFAULT_BLK_SIZE 512 38 38 39 + #define MAX_XFER_BLKS (128 * 1024) 40 + #define MAX_XFER_SIZE (MAX_XFER_BLKS / VDC_DEFAULT_BLK_SIZE) 41 + #define MAX_RING_COOKIES ((MAX_XFER_BLKS / PAGE_SIZE) + 2) 42 + 39 43 #define WAITING_FOR_LINK_UP 0x01 40 44 #define WAITING_FOR_TX_SPACE 0x02 41 45 #define WAITING_FOR_GEN_CMD 0x04 ··· 454 450 { 455 451 struct vdc_port *port = req->rq_disk->private_data; 456 452 struct vio_dring_state *dr = &port->vio.drings[VIO_DRIVER_TX_RING]; 457 - struct scatterlist sg[port->ring_cookies]; 453 + struct scatterlist sg[MAX_RING_COOKIES]; 458 454 struct vdc_req_entry *rqe; 459 455 struct vio_disk_desc *desc; 460 456 unsigned int map_perm; 461 457 int nsg, err, i; 462 458 u64 len; 463 459 u8 op; 460 + 461 + if (WARN_ON(port->ring_cookies > MAX_RING_COOKIES)) 462 + return -EINVAL; 464 463 465 464 map_perm = LDC_MAP_SHADOW | LDC_MAP_DIRECT | LDC_MAP_IO; 466 465 ··· 991 984 goto err_out_free_port; 992 985 993 986 port->vdisk_block_size = VDC_DEFAULT_BLK_SIZE; 994 - port->max_xfer_size = ((128 * 1024) / port->vdisk_block_size); 995 - port->ring_cookies = ((port->max_xfer_size * 996 - port->vdisk_block_size) / PAGE_SIZE) + 2; 987 + port->max_xfer_size = MAX_XFER_SIZE; 988 + port->ring_cookies = MAX_RING_COOKIES; 997 989 998 990 err = vio_ldc_alloc(&port->vio, &vdc_ldc_cfg, port); 999 991 if (err)
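`struct scatterlist sg[port->ring_cookies]` was a variable-length array on the kernel stack; the sunvdc fix bounds it with a compile-time `MAX_RING_COOKIES` plus a runtime `WARN_ON` check. A sketch of the sizing arithmetic (the 8 KiB `PAGE_SIZE` here is illustrative, picked for sparc64):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE        8192          /* illustrative sparc64 page size */
#define MAX_XFER_BLKS    (128 * 1024)
#define MAX_RING_COOKIES ((MAX_XFER_BLKS / PAGE_SIZE) + 2)

/* Runtime guard replacing the VLA: refuse ring sizes beyond the
 * fixed-capacity array, as the WARN_ON() in __send_request() does. */
static int check_ring_cookies(size_t ring_cookies)
{
    return ring_cookies <= MAX_RING_COOKIES ? 0 : -1;
}
```
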
+9 -1
drivers/clk/sunxi-ng/ccu-sun4i-a10.c
··· 1434 1434 return; 1435 1435 } 1436 1436 1437 - /* Force the PLL-Audio-1x divider to 1 */ 1438 1437 val = readl(reg + SUN4I_PLL_AUDIO_REG); 1438 + 1439 + /* 1440 + * Force VCO and PLL bias current to lowest setting. Higher 1441 + * settings interfere with sigma-delta modulation and result 1442 + * in audible noise and distortions when using SPDIF or I2S. 1443 + */ 1444 + val &= ~GENMASK(25, 16); 1445 + 1446 + /* Force the PLL-Audio-1x divider to 1 */ 1439 1447 val &= ~GENMASK(29, 26); 1440 1448 writel(val | (1 << 26), reg + SUN4I_PLL_AUDIO_REG); 1441 1449
+7 -3
drivers/gpu/drm/drm_crtc.c
··· 567 567 struct drm_mode_crtc *crtc_req = data; 568 568 struct drm_crtc *crtc; 569 569 struct drm_plane *plane; 570 - struct drm_connector **connector_set = NULL, *connector; 571 - struct drm_framebuffer *fb = NULL; 572 - struct drm_display_mode *mode = NULL; 570 + struct drm_connector **connector_set, *connector; 571 + struct drm_framebuffer *fb; 572 + struct drm_display_mode *mode; 573 573 struct drm_mode_set set; 574 574 uint32_t __user *set_connectors_ptr; 575 575 struct drm_modeset_acquire_ctx ctx; ··· 598 598 mutex_lock(&crtc->dev->mode_config.mutex); 599 599 drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE); 600 600 retry: 601 + connector_set = NULL; 602 + fb = NULL; 603 + mode = NULL; 604 + 601 605 ret = drm_modeset_lock_all_ctx(crtc->dev, &ctx); 602 606 if (ret) 603 607 goto out;
+4 -1
drivers/gpu/drm/drm_edid.c
··· 113 113 /* AEO model 0 reports 8 bpc, but is a 6 bpc panel */ 114 114 { "AEO", 0, EDID_QUIRK_FORCE_6BPC }, 115 115 116 + /* BOE model on HP Pavilion 15-n233sl reports 8 bpc, but is a 6 bpc panel */ 117 + { "BOE", 0x78b, EDID_QUIRK_FORCE_6BPC }, 118 + 116 119 /* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */ 117 120 { "CPT", 0x17df, EDID_QUIRK_FORCE_6BPC }, 118 121 ··· 4282 4279 struct drm_hdmi_info *hdmi = &connector->display_info.hdmi; 4283 4280 4284 4281 dc_mask = db[7] & DRM_EDID_YCBCR420_DC_MASK; 4285 - hdmi->y420_dc_modes |= dc_mask; 4282 + hdmi->y420_dc_modes = dc_mask; 4286 4283 } 4287 4284 4288 4285 static void drm_parse_hdmi_forum_vsdb(struct drm_connector *connector,
+26 -65
drivers/gpu/drm/drm_fb_helper.c
··· 1580 1580 } 1581 1581 EXPORT_SYMBOL(drm_fb_helper_ioctl); 1582 1582 1583 + static bool drm_fb_pixel_format_equal(const struct fb_var_screeninfo *var_1, 1584 + const struct fb_var_screeninfo *var_2) 1585 + { 1586 + return var_1->bits_per_pixel == var_2->bits_per_pixel && 1587 + var_1->grayscale == var_2->grayscale && 1588 + var_1->red.offset == var_2->red.offset && 1589 + var_1->red.length == var_2->red.length && 1590 + var_1->red.msb_right == var_2->red.msb_right && 1591 + var_1->green.offset == var_2->green.offset && 1592 + var_1->green.length == var_2->green.length && 1593 + var_1->green.msb_right == var_2->green.msb_right && 1594 + var_1->blue.offset == var_2->blue.offset && 1595 + var_1->blue.length == var_2->blue.length && 1596 + var_1->blue.msb_right == var_2->blue.msb_right && 1597 + var_1->transp.offset == var_2->transp.offset && 1598 + var_1->transp.length == var_2->transp.length && 1599 + var_1->transp.msb_right == var_2->transp.msb_right; 1600 + } 1601 + 1583 1602 /** 1584 1603 * drm_fb_helper_check_var - implementation for &fb_ops.fb_check_var 1585 1604 * @var: screeninfo to check ··· 1609 1590 { 1610 1591 struct drm_fb_helper *fb_helper = info->par; 1611 1592 struct drm_framebuffer *fb = fb_helper->fb; 1612 - int depth; 1613 1593 1614 1594 if (var->pixclock != 0 || in_dbg_master()) 1615 1595 return -EINVAL; ··· 1628 1610 return -EINVAL; 1629 1611 } 1630 1612 1631 - switch (var->bits_per_pixel) { 1632 - case 16: 1633 - depth = (var->green.length == 6) ? 16 : 15; 1634 - break; 1635 - case 32: 1636 - depth = (var->transp.length > 0) ? 
32 : 24; 1637 - break; 1638 - default: 1639 - depth = var->bits_per_pixel; 1640 - break; 1641 - } 1642 - 1643 - switch (depth) { 1644 - case 8: 1645 - var->red.offset = 0; 1646 - var->green.offset = 0; 1647 - var->blue.offset = 0; 1648 - var->red.length = 8; 1649 - var->green.length = 8; 1650 - var->blue.length = 8; 1651 - var->transp.length = 0; 1652 - var->transp.offset = 0; 1653 - break; 1654 - case 15: 1655 - var->red.offset = 10; 1656 - var->green.offset = 5; 1657 - var->blue.offset = 0; 1658 - var->red.length = 5; 1659 - var->green.length = 5; 1660 - var->blue.length = 5; 1661 - var->transp.length = 1; 1662 - var->transp.offset = 15; 1663 - break; 1664 - case 16: 1665 - var->red.offset = 11; 1666 - var->green.offset = 5; 1667 - var->blue.offset = 0; 1668 - var->red.length = 5; 1669 - var->green.length = 6; 1670 - var->blue.length = 5; 1671 - var->transp.length = 0; 1672 - var->transp.offset = 0; 1673 - break; 1674 - case 24: 1675 - var->red.offset = 16; 1676 - var->green.offset = 8; 1677 - var->blue.offset = 0; 1678 - var->red.length = 8; 1679 - var->green.length = 8; 1680 - var->blue.length = 8; 1681 - var->transp.length = 0; 1682 - var->transp.offset = 0; 1683 - break; 1684 - case 32: 1685 - var->red.offset = 16; 1686 - var->green.offset = 8; 1687 - var->blue.offset = 0; 1688 - var->red.length = 8; 1689 - var->green.length = 8; 1690 - var->blue.length = 8; 1691 - var->transp.length = 8; 1692 - var->transp.offset = 24; 1693 - break; 1694 - default: 1613 + /* 1614 + * drm fbdev emulation doesn't support changing the pixel format at all, 1615 + * so reject all pixel format changing requests. 1616 + */ 1617 + if (!drm_fb_pixel_format_equal(var, &info->var)) { 1618 + DRM_DEBUG("fbdev emulation doesn't support changing the pixel format\n"); 1695 1619 return -EINVAL; 1696 1620 } 1621 + 1697 1622 return 0; 1698 1623 } 1699 1624 EXPORT_SYMBOL(drm_fb_helper_check_var);
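The new `drm_fb_pixel_format_equal()` compares the two `fb_var_screeninfo` layouts field by field rather than with `memcmp()`, since only the color-layout fields matter and the structs contain unrelated (and possibly padded) members. A cut-down sketch of the same field-wise comparison (the three-channel struct below is a simplification of `fb_bitfield`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct bitfield { uint32_t offset, length; };
struct pixfmt {
    uint32_t bits_per_pixel;
    struct bitfield red, green, blue;
};

/* Compare only the fields that define the pixel format; other struct
 * members (and padding) are deliberately ignored, unlike memcmp(). */
static bool pixfmt_equal(const struct pixfmt *a, const struct pixfmt *b)
{
    return a->bits_per_pixel == b->bits_per_pixel &&
           a->red.offset   == b->red.offset   && a->red.length   == b->red.length &&
           a->green.offset == b->green.offset && a->green.length == b->green.length &&
           a->blue.offset  == b->blue.offset  && a->blue.length  == b->blue.length;
}
```
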
+1 -1
drivers/i2c/i2c-core-base.c
··· 2270 2270 * 2271 2271 * Return: NULL if a DMA safe buffer was not obtained. Use msg->buf with PIO. 2272 2272 * Or a valid pointer to be used with DMA. After use, release it by 2273 - * calling i2c_release_dma_safe_msg_buf(). 2273 + * calling i2c_put_dma_safe_msg_buf(). 2274 2274 * 2275 2275 * This function must only be called from process context! 2276 2276 */
+3
drivers/infiniband/core/ucm.c
··· 46 46 #include <linux/mutex.h> 47 47 #include <linux/slab.h> 48 48 49 + #include <linux/nospec.h> 50 + 49 51 #include <linux/uaccess.h> 50 52 51 53 #include <rdma/ib.h> ··· 1122 1120 1123 1121 if (hdr.cmd >= ARRAY_SIZE(ucm_cmd_table)) 1124 1122 return -EINVAL; 1123 + hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucm_cmd_table)); 1125 1124 1126 1125 if (hdr.in + sizeof(hdr) > len) 1127 1126 return -EINVAL;
+3
drivers/infiniband/core/ucma.c
··· 44 44 #include <linux/module.h> 45 45 #include <linux/nsproxy.h> 46 46 47 + #include <linux/nospec.h> 48 + 47 49 #include <rdma/rdma_user_cm.h> 48 50 #include <rdma/ib_marshall.h> 49 51 #include <rdma/rdma_cm.h> ··· 1678 1676 1679 1677 if (hdr.cmd >= ARRAY_SIZE(ucma_cmd_table)) 1680 1678 return -EINVAL; 1679 + hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucma_cmd_table)); 1681 1680 1682 1681 if (hdr.in + sizeof(hdr) > len) 1683 1682 return -EINVAL;
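Both the ucm and ucma hunks clamp `hdr.cmd` with `array_index_nospec()` after the bounds check, so a mispredicted branch cannot speculatively index past the command table (Spectre-v1 hardening). A simplified sketch of the idea; the real kernel helper computes the mask branchlessly in `array_index_mask_nospec()`, which the ternary below merely stands in for:

```c
#include <assert.h>
#include <stddef.h>

/* Clamp an index so it can never reach beyond size. Sketch only: the
 * kernel derives mask without a branch, so even speculative execution
 * cannot bypass the clamp. */
static size_t index_nospec(size_t index, size_t size)
{
    size_t mask = (index < size) ? ~(size_t)0 : 0;
    return index & mask;
}
```
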
+5 -2
drivers/net/ethernet/broadcom/genet/bcmmii.c
··· 320 320 phydev->advertising = phydev->supported; 321 321 322 322 /* The internal PHY has its link interrupts routed to the 323 - * Ethernet MAC ISRs 323 + * Ethernet MAC ISRs. On GENETv5 there is a hardware issue 324 + * that prevents the signaling of link UP interrupts when 325 + * the link operates at 10Mbps, so fallback to polling for 326 + * those versions of GENET. 324 327 */ 325 - if (priv->internal_phy) 328 + if (priv->internal_phy && !GENET_IS_V5(priv)) 326 329 dev->phydev->irq = PHY_IGNORE_INTERRUPT; 327 330 328 331 return 0;
+4
drivers/net/ethernet/freescale/fec.h
··· 452 452 * initialisation. 453 453 */ 454 454 #define FEC_QUIRK_MIB_CLEAR (1 << 15) 455 + /* Only i.MX25/i.MX27/i.MX28 controller supports FRBR,FRSR registers, 456 + * those FIFO receive registers are resolved in other platforms. 457 + */ 458 + #define FEC_QUIRK_HAS_FRREG (1 << 16) 455 459 456 460 struct bufdesc_prop { 457 461 int qid;
+12 -4
drivers/net/ethernet/freescale/fec_main.c
··· 91 91 .driver_data = 0, 92 92 }, { 93 93 .name = "imx25-fec", 94 - .driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR, 94 + .driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR | 95 + FEC_QUIRK_HAS_FRREG, 95 96 }, { 96 97 .name = "imx27-fec", 97 - .driver_data = FEC_QUIRK_MIB_CLEAR, 98 + .driver_data = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG, 98 99 }, { 99 100 .name = "imx28-fec", 100 101 .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME | 101 - FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC, 102 + FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC | 103 + FEC_QUIRK_HAS_FRREG, 102 104 }, { 103 105 .name = "imx6q-fec", 104 106 .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT | ··· 2164 2162 memset(buf, 0, regs->len); 2165 2163 2166 2164 for (i = 0; i < ARRAY_SIZE(fec_enet_register_offset); i++) { 2167 - off = fec_enet_register_offset[i] / 4; 2165 + off = fec_enet_register_offset[i]; 2166 + 2167 + if ((off == FEC_R_BOUND || off == FEC_R_FSTART) && 2168 + !(fep->quirks & FEC_QUIRK_HAS_FRREG)) 2169 + continue; 2170 + 2171 + off >>= 2; 2168 2172 buf[off] = readl(&theregs[off]); 2169 2173 } 2170 2174 }
+5 -7
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 433 433 434 434 static inline void mlx5e_fill_icosq_frag_edge(struct mlx5e_icosq *sq, 435 435 struct mlx5_wq_cyc *wq, 436 - u16 pi, u16 frag_pi) 436 + u16 pi, u16 nnops) 437 437 { 438 438 struct mlx5e_sq_wqe_info *edge_wi, *wi = &sq->db.ico_wqe[pi]; 439 - u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi; 440 439 441 440 edge_wi = wi + nnops; 442 441 ··· 454 455 struct mlx5_wq_cyc *wq = &sq->wq; 455 456 struct mlx5e_umr_wqe *umr_wqe; 456 457 u16 xlt_offset = ix << (MLX5E_LOG_ALIGNED_MPWQE_PPW - 1); 457 - u16 pi, frag_pi; 458 + u16 pi, contig_wqebbs_room; 458 459 int err; 459 460 int i; 460 461 461 462 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); 462 - frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc); 463 - 464 - if (unlikely(frag_pi + MLX5E_UMR_WQEBBS > mlx5_wq_cyc_get_frag_size(wq))) { 465 - mlx5e_fill_icosq_frag_edge(sq, wq, pi, frag_pi); 463 + contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi); 464 + if (unlikely(contig_wqebbs_room < MLX5E_UMR_WQEBBS)) { 465 + mlx5e_fill_icosq_frag_edge(sq, wq, pi, contig_wqebbs_room); 466 466 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); 467 467 } 468 468
+11 -11
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
··· 290 290 291 291 static inline void mlx5e_fill_sq_frag_edge(struct mlx5e_txqsq *sq, 292 292 struct mlx5_wq_cyc *wq, 293 - u16 pi, u16 frag_pi) 293 + u16 pi, u16 nnops) 294 294 { 295 295 struct mlx5e_tx_wqe_info *edge_wi, *wi = &sq->db.wqe_info[pi]; 296 - u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi; 297 296 298 297 edge_wi = wi + nnops; 299 298 ··· 347 348 struct mlx5e_tx_wqe_info *wi; 348 349 349 350 struct mlx5e_sq_stats *stats = sq->stats; 351 + u16 headlen, ihs, contig_wqebbs_room; 350 352 u16 ds_cnt, ds_cnt_inl = 0; 351 - u16 headlen, ihs, frag_pi; 352 353 u8 num_wqebbs, opcode; 353 354 u32 num_bytes; 354 355 int num_dma; ··· 385 386 } 386 387 387 388 num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS); 388 - frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc); 389 - if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) { 390 - mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi); 389 + contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi); 390 + if (unlikely(contig_wqebbs_room < num_wqebbs)) { 391 + mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room); 391 392 mlx5e_sq_fetch_wqe(sq, &wqe, &pi); 392 393 } 393 394 ··· 635 636 struct mlx5e_tx_wqe_info *wi; 636 637 637 638 struct mlx5e_sq_stats *stats = sq->stats; 638 - u16 headlen, ihs, pi, frag_pi; 639 + u16 headlen, ihs, pi, contig_wqebbs_room; 639 640 u16 ds_cnt, ds_cnt_inl = 0; 640 641 u8 num_wqebbs, opcode; 641 642 u32 num_bytes; ··· 671 672 } 672 673 673 674 num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS); 674 - frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc); 675 - if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) { 675 + pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); 676 + contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi); 677 + if (unlikely(contig_wqebbs_room < num_wqebbs)) { 678 + mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room); 676 679 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); 677 - mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi); 678 680 } 679 681 680 - 
mlx5i_sq_fetch_wqe(sq, &wqe, &pi); 682 + mlx5i_sq_fetch_wqe(sq, &wqe, pi); 681 683 682 684 /* fill wqe */ 683 685 wi = &sq->db.wqe_info[pi];
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 273 273 case MLX5_PFAULT_SUBTYPE_WQE: 274 274 /* WQE based event */ 275 275 pfault->type = 276 - be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24; 276 + (be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24) & 0x7; 277 277 pfault->token = 278 278 be32_to_cpu(pf_eqe->wqe.token); 279 279 pfault->wqe.wq_num =
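Per the mask in the fix, the page-fault type occupies bits 26:24 of `pftype_wq`, so after shifting, the value must be masked with `0x7` or unrelated high bits leak into `pfault->type`. The extraction pattern in isolation:

```c
#include <assert.h>
#include <stdint.h>

/* Extract the 3-bit field at bit 24: shift down, then mask off the rest. */
static uint8_t pfault_type(uint32_t pftype_wq)
{
    return (pftype_wq >> 24) & 0x7;
}
```
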
+4 -5
drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
··· 245 245 return ERR_PTR(res); 246 246 } 247 247 248 - /* Context will be freed by wait func after completion */ 248 + /* Context should be freed by the caller after completion. */ 249 249 return context; 250 250 } 251 251 ··· 418 418 cmd.cmd = htonl(MLX5_FPGA_IPSEC_CMD_OP_SET_CAP); 419 419 cmd.flags = htonl(flags); 420 420 context = mlx5_fpga_ipsec_cmd_exec(mdev, &cmd, sizeof(cmd)); 421 - if (IS_ERR(context)) { 422 - err = PTR_ERR(context); 423 - goto out; 424 - } 421 + if (IS_ERR(context)) 422 + return PTR_ERR(context); 425 423 426 424 err = mlx5_fpga_ipsec_cmd_wait(context); 427 425 if (err) ··· 433 435 } 434 436 435 437 out: 438 + kfree(context); 436 439 return err; 437 440 } 438 441
+2 -3
drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
··· 110 110 111 111 static inline void mlx5i_sq_fetch_wqe(struct mlx5e_txqsq *sq, 112 112 struct mlx5i_tx_wqe **wqe, 113 - u16 *pi) 113 + u16 pi) 114 114 { 115 115 struct mlx5_wq_cyc *wq = &sq->wq; 116 116 117 - *pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); 118 - *wqe = mlx5_wq_cyc_get_wqe(wq, *pi); 117 + *wqe = mlx5_wq_cyc_get_wqe(wq, pi); 119 118 memset(*wqe, 0, sizeof(**wqe)); 120 119 } 121 120
-5
drivers/net/ethernet/mellanox/mlx5/core/wq.c
··· 39 39 return (u32)wq->fbc.sz_m1 + 1; 40 40 } 41 41 42 - u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq) 43 - { 44 - return wq->fbc.frag_sz_m1 + 1; 45 - } 46 - 47 42 u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq) 48 43 { 49 44 return wq->fbc.sz_m1 + 1;
+5 -6
drivers/net/ethernet/mellanox/mlx5/core/wq.h
··· 80 80 void *wqc, struct mlx5_wq_cyc *wq, 81 81 struct mlx5_wq_ctrl *wq_ctrl); 82 82 u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq); 83 - u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq); 84 83 85 84 int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param, 86 85 void *qpc, struct mlx5_wq_qp *wq, ··· 139 140 return ctr & wq->fbc.sz_m1; 140 141 } 141 142 142 - static inline u16 mlx5_wq_cyc_ctr2fragix(struct mlx5_wq_cyc *wq, u16 ctr) 143 - { 144 - return ctr & wq->fbc.frag_sz_m1; 145 - } 146 - 147 143 static inline u16 mlx5_wq_cyc_get_head(struct mlx5_wq_cyc *wq) 148 144 { 149 145 return mlx5_wq_cyc_ctr2ix(wq, wq->wqe_ctr); ··· 152 158 static inline void *mlx5_wq_cyc_get_wqe(struct mlx5_wq_cyc *wq, u16 ix) 153 159 { 154 160 return mlx5_frag_buf_get_wqe(&wq->fbc, ix); 161 + } 162 + 163 + static inline u16 mlx5_wq_cyc_get_contig_wqebbs(struct mlx5_wq_cyc *wq, u16 ix) 164 + { 165 + return mlx5_frag_buf_get_idx_last_contig_stride(&wq->fbc, ix) - ix + 1; 155 166 } 156 167 157 168 static inline int mlx5_wq_cyc_cc_bigger(u16 cc1, u16 cc2)
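The new `mlx5_wq_cyc_get_contig_wqebbs()` reports how many ring entries remain before the current fragment ends, replacing the old `frag_pi` arithmetic. Assuming a power-of-two fragment size, the computation reduces to a mask, sketched here (`frag_size` standing in for the per-fragment stride count):

```c
#include <assert.h>
#include <stdint.h>

/* Entries left in the current fragment, inclusive of ix: distance to
 * the fragment's last index. Assumes frag_size is a power of two. */
static uint16_t contig_entries(uint16_t ix, uint16_t frag_size)
{
    uint16_t last_in_frag = ix | (frag_size - 1); /* last index of this fragment */
    return last_in_frag - ix + 1;
}
```

The callers in en_rx.c and en_tx.c then compare this room against the WQEBBs a work request needs and pad out the fragment with NOPs before wrapping.
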
+2
drivers/net/ethernet/mellanox/mlxsw/core.c
··· 1055 1055 err_driver_init: 1056 1056 mlxsw_thermal_fini(mlxsw_core->thermal); 1057 1057 err_thermal_init: 1058 + mlxsw_hwmon_fini(mlxsw_core->hwmon); 1058 1059 err_hwmon_init: 1059 1060 if (!reload) 1060 1061 devlink_unregister(devlink); ··· 1089 1088 if (mlxsw_core->driver->fini) 1090 1089 mlxsw_core->driver->fini(mlxsw_core); 1091 1090 mlxsw_thermal_fini(mlxsw_core->thermal); 1091 + mlxsw_hwmon_fini(mlxsw_core->hwmon); 1092 1092 if (!reload) 1093 1093 devlink_unregister(devlink); 1094 1094 mlxsw_emad_fini(mlxsw_core);
+4
drivers/net/ethernet/mellanox/mlxsw/core.h
··· 359 359 return 0; 360 360 } 361 361 362 + static inline void mlxsw_hwmon_fini(struct mlxsw_hwmon *mlxsw_hwmon) 363 + { 364 + } 365 + 362 366 #endif 363 367 364 368 struct mlxsw_thermal;
+11 -6
drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
··· 303 303 struct device *hwmon_dev; 304 304 int err; 305 305 306 - mlxsw_hwmon = devm_kzalloc(mlxsw_bus_info->dev, sizeof(*mlxsw_hwmon), 307 - GFP_KERNEL); 306 + mlxsw_hwmon = kzalloc(sizeof(*mlxsw_hwmon), GFP_KERNEL); 308 307 if (!mlxsw_hwmon) 309 308 return -ENOMEM; 310 309 mlxsw_hwmon->core = mlxsw_core; ··· 320 321 mlxsw_hwmon->groups[0] = &mlxsw_hwmon->group; 321 322 mlxsw_hwmon->group.attrs = mlxsw_hwmon->attrs; 322 323 323 - hwmon_dev = devm_hwmon_device_register_with_groups(mlxsw_bus_info->dev, 324 - "mlxsw", 325 - mlxsw_hwmon, 326 - mlxsw_hwmon->groups); 324 + hwmon_dev = hwmon_device_register_with_groups(mlxsw_bus_info->dev, 325 + "mlxsw", mlxsw_hwmon, 326 + mlxsw_hwmon->groups); 327 327 if (IS_ERR(hwmon_dev)) { 328 328 err = PTR_ERR(hwmon_dev); 329 329 goto err_hwmon_register; ··· 335 337 err_hwmon_register: 336 338 err_fans_init: 337 339 err_temp_init: 340 + kfree(mlxsw_hwmon); 338 341 return err; 342 + } 343 + 344 + void mlxsw_hwmon_fini(struct mlxsw_hwmon *mlxsw_hwmon) 345 + { 346 + hwmon_device_unregister(mlxsw_hwmon->hwmon_dev); 347 + kfree(mlxsw_hwmon); 339 348 }
+3 -3
drivers/net/ethernet/mscc/ocelot.c
··· 133 133 { 134 134 unsigned int val, timeout = 10; 135 135 136 - /* Wait for the issued mac table command to be completed, or timeout. 137 - * When the command read from ANA_TABLES_MACACCESS is 138 - * MACACCESS_CMD_IDLE, the issued command completed successfully. 136 + /* Wait for the issued vlan table command to be completed, or timeout. 137 + * When the command read from ANA_TABLES_VLANACCESS is 138 + * VLANACCESS_CMD_IDLE, the issued command completed successfully. 139 139 */ 140 140 do { 141 141 val = ocelot_read(ocelot, ANA_TABLES_VLANACCESS);
+33 -18
drivers/net/ethernet/netronome/nfp/flower/action.c
··· 399 399 400 400 switch (off) { 401 401 case offsetof(struct iphdr, daddr): 402 - set_ip_addr->ipv4_dst_mask = mask; 403 - set_ip_addr->ipv4_dst = exact; 402 + set_ip_addr->ipv4_dst_mask |= mask; 403 + set_ip_addr->ipv4_dst &= ~mask; 404 + set_ip_addr->ipv4_dst |= exact & mask; 404 405 break; 405 406 case offsetof(struct iphdr, saddr): 406 - set_ip_addr->ipv4_src_mask = mask; 407 - set_ip_addr->ipv4_src = exact; 407 + set_ip_addr->ipv4_src_mask |= mask; 408 + set_ip_addr->ipv4_src &= ~mask; 409 + set_ip_addr->ipv4_src |= exact & mask; 408 410 break; 409 411 default: 410 412 return -EOPNOTSUPP; ··· 420 418 } 421 419 422 420 static void 423 - nfp_fl_set_ip6_helper(int opcode_tag, int idx, __be32 exact, __be32 mask, 421 + nfp_fl_set_ip6_helper(int opcode_tag, u8 word, __be32 exact, __be32 mask, 424 422 struct nfp_fl_set_ipv6_addr *ip6) 425 423 { 426 - ip6->ipv6[idx % 4].mask = mask; 427 - ip6->ipv6[idx % 4].exact = exact; 424 + ip6->ipv6[word].mask |= mask; 425 + ip6->ipv6[word].exact &= ~mask; 426 + ip6->ipv6[word].exact |= exact & mask; 428 427 429 428 ip6->reserved = cpu_to_be16(0); 430 429 ip6->head.jump_id = opcode_tag; ··· 438 435 struct nfp_fl_set_ipv6_addr *ip_src) 439 436 { 440 437 __be32 exact, mask; 438 + u8 word; 441 439 442 440 /* We are expecting tcf_pedit to return a big endian value */ 443 441 mask = (__force __be32)~tcf_pedit_mask(action, idx); ··· 447 443 if (exact & ~mask) 448 444 return -EOPNOTSUPP; 449 445 450 - if (off < offsetof(struct ipv6hdr, saddr)) 446 + if (off < offsetof(struct ipv6hdr, saddr)) { 451 447 return -EOPNOTSUPP; 452 - else if (off < offsetof(struct ipv6hdr, daddr)) 453 - nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, idx, 448 + } else if (off < offsetof(struct ipv6hdr, daddr)) { 449 + word = (off - offsetof(struct ipv6hdr, saddr)) / sizeof(exact); 450 + nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, word, 454 451 exact, mask, ip_src); 455 - else if (off < offsetof(struct ipv6hdr, daddr) + 456 - 
sizeof(struct in6_addr)) 457 - nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, idx, 452 + } else if (off < offsetof(struct ipv6hdr, daddr) + 453 + sizeof(struct in6_addr)) { 454 + word = (off - offsetof(struct ipv6hdr, daddr)) / sizeof(exact); 455 + nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, word, 458 456 exact, mask, ip_dst); 459 - else 457 + } else { 460 458 return -EOPNOTSUPP; 459 + } 461 460 462 461 return 0; 463 462 } ··· 518 511 struct nfp_fl_set_eth set_eth; 519 512 enum pedit_header_type htype; 520 513 int idx, nkeys, err; 521 - size_t act_size; 514 + size_t act_size = 0; 522 515 u32 offset, cmd; 523 516 u8 ip_proto = 0; 524 517 ··· 576 569 act_size = sizeof(set_eth); 577 570 memcpy(nfp_action, &set_eth, act_size); 578 571 *a_len += act_size; 579 - } else if (set_ip_addr.head.len_lw) { 572 + } 573 + if (set_ip_addr.head.len_lw) { 574 + nfp_action += act_size; 580 575 act_size = sizeof(set_ip_addr); 581 576 memcpy(nfp_action, &set_ip_addr, act_size); 582 577 *a_len += act_size; ··· 586 577 /* Hardware will automatically fix IPv4 and TCP/UDP checksum. */ 587 578 *csum_updated |= TCA_CSUM_UPDATE_FLAG_IPV4HDR | 588 579 nfp_fl_csum_l4_to_flag(ip_proto); 589 - } else if (set_ip6_dst.head.len_lw && set_ip6_src.head.len_lw) { 580 + } 581 + if (set_ip6_dst.head.len_lw && set_ip6_src.head.len_lw) { 590 582 /* TC compiles set src and dst IPv6 address as a single action, 591 583 * the hardware requires this to be 2 separate actions. 592 584 */ 585 + nfp_action += act_size; 593 586 act_size = sizeof(set_ip6_src); 594 587 memcpy(nfp_action, &set_ip6_src, act_size); 595 588 *a_len += act_size; ··· 604 593 /* Hardware will automatically fix TCP/UDP checksum. 
*/ 605 594 *csum_updated |= nfp_fl_csum_l4_to_flag(ip_proto); 606 595 } else if (set_ip6_dst.head.len_lw) { 596 + nfp_action += act_size; 607 597 act_size = sizeof(set_ip6_dst); 608 598 memcpy(nfp_action, &set_ip6_dst, act_size); 609 599 *a_len += act_size; ··· 612 600 /* Hardware will automatically fix TCP/UDP checksum. */ 613 601 *csum_updated |= nfp_fl_csum_l4_to_flag(ip_proto); 614 602 } else if (set_ip6_src.head.len_lw) { 603 + nfp_action += act_size; 615 604 act_size = sizeof(set_ip6_src); 616 605 memcpy(nfp_action, &set_ip6_src, act_size); 617 606 *a_len += act_size; 618 607 619 608 /* Hardware will automatically fix TCP/UDP checksum. */ 620 609 *csum_updated |= nfp_fl_csum_l4_to_flag(ip_proto); 621 - } else if (set_tport.head.len_lw) { 610 + } 611 + if (set_tport.head.len_lw) { 612 + nfp_action += act_size; 622 613 act_size = sizeof(set_tport); 623 614 memcpy(nfp_action, &set_tport, act_size); 624 615 *a_len += act_size;
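The nfp action fix accumulates successive pedit writes instead of overwriting: each write widens the stored mask and merges its masked bytes into the previously built value. The merge step on its own:

```c
#include <assert.h>
#include <stdint.h>

/* Merge a new masked write into an accumulated value: keep the old
 * bits outside mask, take the new bits inside it -- the pattern used
 * for ipv4_dst/ipv4_src and each IPv6 word in the fix above. */
static uint32_t masked_merge(uint32_t prev, uint32_t exact, uint32_t mask)
{
    return (prev & ~mask) | (exact & mask);
}
```

Two partial writes then compose into one full rewrite, which is exactly what the old `=` assignments got wrong.
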
+1 -1
drivers/net/ethernet/qlogic/qed/qed_int.c
··· 228 228 attn_master_to_str(GET_FIELD(tmp, QED_GRC_ATTENTION_MASTER)), 229 229 GET_FIELD(tmp2, QED_GRC_ATTENTION_PF), 230 230 (GET_FIELD(tmp2, QED_GRC_ATTENTION_PRIV) == 231 - QED_GRC_ATTENTION_PRIV_VF) ? "VF" : "(Ireelevant)", 231 + QED_GRC_ATTENTION_PRIV_VF) ? "VF" : "(Irrelevant)", 232 232 GET_FIELD(tmp2, QED_GRC_ATTENTION_VF)); 233 233 234 234 out:
-2
drivers/net/ethernet/qlogic/qla3xxx.c
··· 380 380 381 381 qdev->eeprom_cmd_data = AUBURN_EEPROM_CS_1; 382 382 ql_write_nvram_reg(qdev, spir, ISP_NVRAM_MASK | qdev->eeprom_cmd_data); 383 - ql_write_nvram_reg(qdev, spir, 384 - ((ISP_NVRAM_MASK << 16) | qdev->eeprom_cmd_data)); 385 383 } 386 384 387 385 /*
+5 -15
drivers/net/ethernet/realtek/r8169.c
··· 6528 6528 struct rtl8169_private *tp = container_of(napi, struct rtl8169_private, napi); 6529 6529 struct net_device *dev = tp->dev; 6530 6530 u16 enable_mask = RTL_EVENT_NAPI | tp->event_slow; 6531 - int work_done= 0; 6531 + int work_done; 6532 6532 u16 status; 6533 6533 6534 6534 status = rtl_get_events(tp); 6535 6535 rtl_ack_events(tp, status & ~tp->event_slow); 6536 6536 6537 - if (status & RTL_EVENT_NAPI_RX) 6538 - work_done = rtl_rx(dev, tp, (u32) budget); 6537 + work_done = rtl_rx(dev, tp, (u32) budget); 6539 6538 6540 - if (status & RTL_EVENT_NAPI_TX) 6541 - rtl_tx(dev, tp); 6539 + rtl_tx(dev, tp); 6542 6540 6543 6541 if (status & tp->event_slow) { 6544 6542 enable_mask &= ~tp->event_slow; ··· 7069 7071 { 7070 7072 unsigned int flags; 7071 7073 7072 - switch (tp->mac_version) { 7073 - case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06: 7074 + if (tp->mac_version <= RTL_GIGA_MAC_VER_06) { 7074 7075 RTL_W8(tp, Cfg9346, Cfg9346_Unlock); 7075 7076 RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable); 7076 7077 RTL_W8(tp, Cfg9346, Cfg9346_Lock); 7077 7078 flags = PCI_IRQ_LEGACY; 7078 - break; 7079 - case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_40: 7080 - /* This version was reported to have issues with resume 7081 - * from suspend when using MSI-X 7082 - */ 7083 - flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI; 7084 - break; 7085 - default: 7079 + } else { 7086 7080 flags = PCI_IRQ_ALL_TYPES; 7087 7081 } 7088 7082
+3 -11
drivers/net/geneve.c
··· 831 831 if (IS_ERR(rt)) 832 832 return PTR_ERR(rt); 833 833 834 - if (skb_dst(skb)) { 835 - int mtu = dst_mtu(&rt->dst) - GENEVE_IPV4_HLEN - 836 - info->options_len; 837 - 838 - skb_dst_update_pmtu(skb, mtu); 839 - } 834 + skb_tunnel_check_pmtu(skb, &rt->dst, 835 + GENEVE_IPV4_HLEN + info->options_len); 840 836 841 837 sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true); 842 838 if (geneve->collect_md) { ··· 877 881 if (IS_ERR(dst)) 878 882 return PTR_ERR(dst); 879 883 880 - if (skb_dst(skb)) { 881 - int mtu = dst_mtu(dst) - GENEVE_IPV6_HLEN - info->options_len; 882 - 883 - skb_dst_update_pmtu(skb, mtu); 884 - } 884 + skb_tunnel_check_pmtu(skb, dst, GENEVE_IPV6_HLEN + info->options_len); 885 885 886 886 sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true); 887 887 if (geneve->collect_md) {
+4 -1
drivers/net/virtio_net.c
··· 2267 2267 /* Make sure no work handler is accessing the device */ 2268 2268 flush_work(&vi->config_work); 2269 2269 2270 + netif_tx_lock_bh(vi->dev); 2270 2271 netif_device_detach(vi->dev); 2271 - netif_tx_disable(vi->dev); 2272 + netif_tx_unlock_bh(vi->dev); 2272 2273 cancel_delayed_work_sync(&vi->refill); 2273 2274 2274 2275 if (netif_running(vi->dev)) { ··· 2305 2304 } 2306 2305 } 2307 2306 2307 + netif_tx_lock_bh(vi->dev); 2308 2308 netif_device_attach(vi->dev); 2309 + netif_tx_unlock_bh(vi->dev); 2309 2310 return err; 2310 2311 } 2311 2312
+2 -10
drivers/net/vxlan.c
··· 2262 2262 } 2263 2263 2264 2264 ndst = &rt->dst; 2265 - if (skb_dst(skb)) { 2266 - int mtu = dst_mtu(ndst) - VXLAN_HEADROOM; 2267 - 2268 - skb_dst_update_pmtu(skb, mtu); 2269 - } 2265 + skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM); 2270 2266 2271 2267 tos = ip_tunnel_ecn_encap(tos, old_iph, skb); 2272 2268 ttl = ttl ? : ip4_dst_hoplimit(&rt->dst); ··· 2299 2303 goto out_unlock; 2300 2304 } 2301 2305 2302 - if (skb_dst(skb)) { 2303 - int mtu = dst_mtu(ndst) - VXLAN6_HEADROOM; 2304 - 2305 - skb_dst_update_pmtu(skb, mtu); 2306 - } 2306 + skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM); 2307 2307 2308 2308 tos = ip_tunnel_ecn_encap(tos, old_iph, skb); 2309 2309 ttl = ttl ? : ip6_dst_hoplimit(ndst);
+7 -1
drivers/perf/arm_pmu.c
··· 485 485 { 486 486 struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 487 487 unsigned int cpu = smp_processor_id(); 488 - return cpumask_test_cpu(cpu, &armpmu->supported_cpus); 488 + int ret; 489 + 490 + ret = cpumask_test_cpu(cpu, &armpmu->supported_cpus); 491 + if (ret && armpmu->filter_match) 492 + return armpmu->filter_match(event); 493 + 494 + return ret; 489 495 } 490 496 491 497 static ssize_t armpmu_cpumask_show(struct device *dev,
+4
drivers/ptp/ptp_chardev.c
··· 24 24 #include <linux/slab.h> 25 25 #include <linux/timekeeping.h> 26 26 27 + #include <linux/nospec.h> 28 + 27 29 #include "ptp_private.h" 28 30 29 31 static int ptp_disable_pinfunc(struct ptp_clock_info *ops, ··· 250 248 err = -EINVAL; 251 249 break; 252 250 } 251 + pin_index = array_index_nospec(pin_index, ops->n_pins); 253 252 if (mutex_lock_interruptible(&ptp->pincfg_mux)) 254 253 return -ERESTARTSYS; 255 254 pd = ops->pin_config[pin_index]; ··· 269 266 err = -EINVAL; 270 267 break; 271 268 } 269 + pin_index = array_index_nospec(pin_index, ops->n_pins); 272 270 if (mutex_lock_interruptible(&ptp->pincfg_mux)) 273 271 return -ERESTARTSYS; 274 272 err = ptp_set_pinfunc(ptp, pin_index, pd.func, pd.chan);
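The `array_index_nospec()` calls added to ptp_chardev.c above are the standard Spectre-v1 mitigation: after the bounds check, the index is clamped with branchless arithmetic so that a mispredicted branch cannot speculatively index past `ops->n_pins`. A userspace sketch of the kernel's generic mask (illustration only, not the kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* All-ones mask when index < size, all-zeroes otherwise; mirrors the
 * kernel's generic array_index_mask_nospec(). The arithmetic right
 * shift of a negative value is implementation-defined in ISO C but is
 * arithmetic on all mainstream compilers, as the kernel assumes. */
uint64_t index_mask_nospec(uint64_t index, uint64_t size)
{
	return ~(int64_t)(index | (size - 1 - index)) >> 63;
}

/* Clamp index to [0, size) without a branch the CPU could mispredict. */
uint64_t index_nospec(uint64_t index, uint64_t size)
{
	return index & index_mask_nospec(index, size);
}
```

An in-bounds index passes through unchanged; any out-of-bounds index (including `index == size`) collapses to 0, so even a speculatively executed load stays inside the array.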
-2
fs/afs/rxrpc.c
··· 690 690 } 691 691 692 692 if (call->state == AFS_CALL_COMPLETE) { 693 - call->reply[0] = NULL; 694 - 695 693 /* We have two refs to release - one from the alloc and one 696 694 * queued with the work item - and we can't just deallocate the 697 695 * call because the work item may be queued again.
-2
fs/afs/server.c
··· 199 199 200 200 write_sequnlock(&net->fs_addr_lock); 201 201 ret = 0; 202 - goto out; 203 202 204 203 exists: 205 204 afs_get_server(server); 206 - out: 207 205 write_sequnlock(&net->fs_lock); 208 206 return server; 209 207 }
+1 -1
fs/cachefiles/namei.c
··· 343 343 trap = lock_rename(cache->graveyard, dir); 344 344 345 345 /* do some checks before getting the grave dentry */ 346 - if (rep->d_parent != dir) { 346 + if (rep->d_parent != dir || IS_DEADDIR(d_inode(rep))) { 347 347 /* the entry was probably culled when we dropped the parent dir 348 348 * lock */ 349 349 unlock_rename(cache->graveyard, dir);
+11 -2
fs/dax.c
··· 666 666 while (index < end && pagevec_lookup_entries(&pvec, mapping, index, 667 667 min(end - index, (pgoff_t)PAGEVEC_SIZE), 668 668 indices)) { 669 + pgoff_t nr_pages = 1; 670 + 669 671 for (i = 0; i < pagevec_count(&pvec); i++) { 670 672 struct page *pvec_ent = pvec.pages[i]; 671 673 void *entry; ··· 682 680 683 681 xa_lock_irq(&mapping->i_pages); 684 682 entry = get_unlocked_mapping_entry(mapping, index, NULL); 685 - if (entry) 683 + if (entry) { 686 684 page = dax_busy_page(entry); 685 + /* 686 + * Account for multi-order entries at 687 + * the end of the pagevec. 688 + */ 689 + if (i + 1 >= pagevec_count(&pvec)) 690 + nr_pages = 1UL << dax_radix_order(entry); 691 + } 687 692 put_unlocked_mapping_entry(mapping, index, entry); 688 693 xa_unlock_irq(&mapping->i_pages); 689 694 if (page) ··· 705 696 */ 706 697 pagevec_remove_exceptionals(&pvec); 707 698 pagevec_release(&pvec); 708 - index++; 699 + index += nr_pages; 709 700 710 701 if (page) 711 702 break;
+1
fs/fat/fatent.c
··· 682 682 if (ops->ent_get(&fatent) == FAT_ENT_FREE) 683 683 free++; 684 684 } while (fat_ent_next(sbi, &fatent)); 685 + cond_resched(); 685 686 } 686 687 sbi->free_clusters = free; 687 688 sbi->free_clus_valid = 1;
+10 -21
fs/fscache/cookie.c
··· 70 70 } 71 71 72 72 /* 73 - * initialise an cookie jar slab element prior to any use 74 - */ 75 - void fscache_cookie_init_once(void *_cookie) 76 - { 77 - struct fscache_cookie *cookie = _cookie; 78 - 79 - memset(cookie, 0, sizeof(*cookie)); 80 - spin_lock_init(&cookie->lock); 81 - spin_lock_init(&cookie->stores_lock); 82 - INIT_HLIST_HEAD(&cookie->backing_objects); 83 - } 84 - 85 - /* 86 - * Set the index key in a cookie. The cookie struct has space for a 12-byte 73 + * Set the index key in a cookie. The cookie struct has space for a 16-byte 87 74 * key plus length and hash, but if that's not big enough, it's instead a 88 75 * pointer to a buffer containing 3 bytes of hash, 1 byte of length and then 89 76 * the key data. ··· 80 93 { 81 94 unsigned long long h; 82 95 u32 *buf; 96 + int bufs; 83 97 int i; 84 98 85 - cookie->key_len = index_key_len; 99 + bufs = DIV_ROUND_UP(index_key_len, sizeof(*buf)); 86 100 87 101 if (index_key_len > sizeof(cookie->inline_key)) { 88 - buf = kzalloc(index_key_len, GFP_KERNEL); 102 + buf = kcalloc(bufs, sizeof(*buf), GFP_KERNEL); 89 103 if (!buf) 90 104 return -ENOMEM; 91 105 cookie->key = buf; 92 106 } else { 93 107 buf = (u32 *)cookie->inline_key; 94 - buf[0] = 0; 95 - buf[1] = 0; 96 - buf[2] = 0; 97 108 } 98 109 99 110 memcpy(buf, index_key, index_key_len); ··· 101 116 */ 102 117 h = (unsigned long)cookie->parent; 103 118 h += index_key_len + cookie->type; 104 - for (i = 0; i < (index_key_len + sizeof(u32) - 1) / sizeof(u32); i++) 119 + 120 + for (i = 0; i < bufs; i++) 105 121 h += buf[i]; 106 122 107 123 cookie->key_hash = h ^ (h >> 32); ··· 147 161 struct fscache_cookie *cookie; 148 162 149 163 /* allocate and initialise a cookie */ 150 - cookie = kmem_cache_alloc(fscache_cookie_jar, GFP_KERNEL); 164 + cookie = kmem_cache_zalloc(fscache_cookie_jar, GFP_KERNEL); 151 165 if (!cookie) 152 166 return NULL; 153 167 ··· 178 192 cookie->netfs_data = netfs_data; 179 193 cookie->flags = (1 << FSCACHE_COOKIE_NO_DATA_YET); 180 194 
cookie->type = def->type; 195 + spin_lock_init(&cookie->lock); 196 + spin_lock_init(&cookie->stores_lock); 197 + INIT_HLIST_HEAD(&cookie->backing_objects); 181 198 182 199 /* radix tree insertion won't use the preallocation pool unless it's 183 200 * told it may not wait */
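The fscache_set_key() hunk above rounds the key allocation up to whole `u32` words (`DIV_ROUND_UP`) and zero-fills it (`kcalloc`/`kmem_cache_zalloc`) because the hash loop reads the key word-by-word: the bytes past an odd-length key must be defined, or equal keys could hash differently. A simplified standalone sketch of that idea (not the kernel function; keys up to 64 bytes assumed here):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Hash a key by summing u32 words, as the fscache hunk does. The
 * buffer is zero-initialized (like kcalloc) so the tail bytes of an
 * odd-length key are well-defined when read as a whole word. */
uint32_t hash_key_words(const void *key, size_t len)
{
	uint32_t buf[16] = { 0 };	/* zeroed, like kcalloc() */
	size_t bufs = DIV_ROUND_UP(len, sizeof(uint32_t));
	uint64_t h = len;
	size_t i;

	memcpy(buf, key, len);
	for (i = 0; i < bufs; i++)
		h += buf[i];
	return (uint32_t)(h ^ (h >> 32));
}
```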
-1
fs/fscache/internal.h
··· 51 51 extern struct kmem_cache *fscache_cookie_jar; 52 52 53 53 extern void fscache_free_cookie(struct fscache_cookie *); 54 - extern void fscache_cookie_init_once(void *); 55 54 extern struct fscache_cookie *fscache_alloc_cookie(struct fscache_cookie *, 56 55 const struct fscache_cookie_def *, 57 56 const void *, size_t,
+1 -3
fs/fscache/main.c
··· 143 143 144 144 fscache_cookie_jar = kmem_cache_create("fscache_cookie_jar", 145 145 sizeof(struct fscache_cookie), 146 - 0, 147 - 0, 148 - fscache_cookie_init_once); 146 + 0, 0, NULL); 149 147 if (!fscache_cookie_jar) { 150 148 pr_notice("Failed to allocate a cookie jar\n"); 151 149 ret = -ENOMEM;
+1 -5
fs/gfs2/bmap.c
··· 975 975 { 976 976 struct gfs2_inode *ip = GFS2_I(inode); 977 977 978 - if (!page_has_buffers(page)) { 979 - create_empty_buffers(page, inode->i_sb->s_blocksize, 980 - (1 << BH_Dirty)|(1 << BH_Uptodate)); 981 - } 982 978 gfs2_page_add_databufs(ip, page, offset_in_page(pos), copied); 983 979 } 984 980 ··· 1057 1061 } 1058 1062 } 1059 1063 release_metapath(&mp); 1060 - if (gfs2_is_jdata(ip)) 1064 + if (!gfs2_is_stuffed(ip) && gfs2_is_jdata(ip)) 1061 1065 iomap->page_done = gfs2_iomap_journaled_page_done; 1062 1066 return 0; 1063 1067
+2
fs/ocfs2/dlmglue.c
··· 96 96 }; 97 97 98 98 /* Lockdep class keys */ 99 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 99 100 static struct lock_class_key lockdep_keys[OCFS2_NUM_LOCK_TYPES]; 101 + #endif 100 102 101 103 static int ocfs2_check_meta_downconvert(struct ocfs2_lock_res *lockres, 102 104 int new_level);
+2 -2
fs/ubifs/super.c
··· 2337 2337 2338 2338 static void __exit ubifs_exit(void) 2339 2339 { 2340 - WARN_ON(list_empty(&ubifs_infos)); 2341 - WARN_ON(atomic_long_read(&ubifs_clean_zn_cnt) == 0); 2340 + WARN_ON(!list_empty(&ubifs_infos)); 2341 + WARN_ON(atomic_long_read(&ubifs_clean_zn_cnt) != 0); 2342 2342 2343 2343 dbg_debugfs_exit(); 2344 2344 ubifs_compressors_exit();
+3 -3
include/drm/drm_edid.h
··· 214 214 #define DRM_EDID_HDMI_DC_Y444 (1 << 3) 215 215 216 216 /* YCBCR 420 deep color modes */ 217 - #define DRM_EDID_YCBCR420_DC_48 (1 << 6) 218 - #define DRM_EDID_YCBCR420_DC_36 (1 << 5) 219 - #define DRM_EDID_YCBCR420_DC_30 (1 << 4) 217 + #define DRM_EDID_YCBCR420_DC_48 (1 << 2) 218 + #define DRM_EDID_YCBCR420_DC_36 (1 << 1) 219 + #define DRM_EDID_YCBCR420_DC_30 (1 << 0) 220 220 #define DRM_EDID_YCBCR420_DC_MASK (DRM_EDID_YCBCR420_DC_48 | \ 221 221 DRM_EDID_YCBCR420_DC_36 | \ 222 222 DRM_EDID_YCBCR420_DC_30)
+1 -1
include/linux/huge_mm.h
··· 43 43 unsigned char *vec); 44 44 extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr, 45 45 unsigned long new_addr, unsigned long old_end, 46 - pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush); 46 + pmd_t *old_pmd, pmd_t *new_pmd); 47 47 extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, 48 48 unsigned long addr, pgprot_t newprot, 49 49 int prot_numa);
+8
include/linux/mlx5/driver.h
··· 1027 1027 return fbc->frags[frag].buf + ((fbc->frag_sz_m1 & ix) << fbc->log_stride); 1028 1028 } 1029 1029 1030 + static inline u32 1031 + mlx5_frag_buf_get_idx_last_contig_stride(struct mlx5_frag_buf_ctrl *fbc, u32 ix) 1032 + { 1033 + u32 last_frag_stride_idx = (ix + fbc->strides_offset) | fbc->frag_sz_m1; 1034 + 1035 + return min_t(u32, last_frag_stride_idx - fbc->strides_offset, fbc->sz_m1); 1036 + } 1037 + 1030 1038 int mlx5_cmd_init(struct mlx5_core_dev *dev); 1031 1039 void mlx5_cmd_cleanup(struct mlx5_core_dev *dev); 1032 1040 void mlx5_cmd_use_events(struct mlx5_core_dev *dev);
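The new `mlx5_frag_buf_get_idx_last_contig_stride()` helper relies on fragment sizes being a power of two: OR-ing `frag_sz_m1` (fragment size minus one) into an index rounds it up to the last stride of its fragment, and the `min_t()` clamps to the end of the whole buffer. A standalone restatement of that arithmetic (values in the tests are illustrative, not taken from hardware):

```c
#include <assert.h>
#include <stdint.h>

/* Last stride index contiguous with ix inside the same fragment,
 * clamped to sz_m1 (the buffer's last stride). Mirrors the index math
 * of the new mlx5 helper, outside the kernel context. */
uint32_t last_contig_stride(uint32_t ix, uint32_t strides_offset,
			    uint32_t frag_sz_m1, uint32_t sz_m1)
{
	uint32_t last_frag_stride_idx = (ix + strides_offset) | frag_sz_m1;
	uint32_t clamped = last_frag_stride_idx - strides_offset;

	return clamped < sz_m1 ? clamped : sz_m1;
}
```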
+2 -1
include/linux/module.h
··· 20 20 #include <linux/export.h> 21 21 #include <linux/rbtree_latch.h> 22 22 #include <linux/error-injection.h> 23 + #include <linux/tracepoint-defs.h> 23 24 24 25 #include <linux/percpu.h> 25 26 #include <asm/module.h> ··· 431 430 432 431 #ifdef CONFIG_TRACEPOINTS 433 432 unsigned int num_tracepoints; 434 - struct tracepoint * const *tracepoints_ptrs; 433 + tracepoint_ptr_t *tracepoints_ptrs; 435 434 #endif 436 435 #ifdef HAVE_JUMP_LABEL 437 436 struct jump_entry *jump_entries;
+1
include/linux/perf/arm_pmu.h
··· 99 99 void (*stop)(struct arm_pmu *); 100 100 void (*reset)(void *); 101 101 int (*map_event)(struct perf_event *event); 102 + int (*filter_match)(struct perf_event *event); 102 103 int num_events; 103 104 bool secure_access; /* 32-bit ARM only */ 104 105 #define ARMV8_PMUV3_MAX_COMMON_EVENTS 0x40
+6
include/linux/tracepoint-defs.h
··· 35 35 struct tracepoint_func __rcu *funcs; 36 36 }; 37 37 38 + #ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS 39 + typedef const int tracepoint_ptr_t; 40 + #else 41 + typedef struct tracepoint * const tracepoint_ptr_t; 42 + #endif 43 + 38 44 struct bpf_raw_event_map { 39 45 struct tracepoint *tp; 40 46 void *bpf_func;
+23 -13
include/linux/tracepoint.h
··· 99 99 #define TRACE_DEFINE_ENUM(x) 100 100 #define TRACE_DEFINE_SIZEOF(x) 101 101 102 + #ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS 103 + static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) 104 + { 105 + return offset_to_ptr(p); 106 + } 107 + 108 + #define __TRACEPOINT_ENTRY(name) \ 109 + asm(" .section \"__tracepoints_ptrs\", \"a\" \n" \ 110 + " .balign 4 \n" \ 111 + " .long __tracepoint_" #name " - . \n" \ 112 + " .previous \n") 113 + #else 114 + static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) 115 + { 116 + return *p; 117 + } 118 + 119 + #define __TRACEPOINT_ENTRY(name) \ 120 + static tracepoint_ptr_t __tracepoint_ptr_##name __used \ 121 + __attribute__((section("__tracepoints_ptrs"))) = \ 122 + &__tracepoint_##name 123 + #endif 124 + 102 125 #endif /* _LINUX_TRACEPOINT_H */ 103 126 104 127 /* ··· 275 252 { \ 276 253 return static_key_false(&__tracepoint_##name.key); \ 277 254 } 278 - 279 - #ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS 280 - #define __TRACEPOINT_ENTRY(name) \ 281 - asm(" .section \"__tracepoints_ptrs\", \"a\" \n" \ 282 - " .balign 4 \n" \ 283 - " .long __tracepoint_" #name " - . \n" \ 284 - " .previous \n") 285 - #else 286 - #define __TRACEPOINT_ENTRY(name) \ 287 - static struct tracepoint * const __tracepoint_ptr_##name __used \ 288 - __attribute__((section("__tracepoints_ptrs"))) = \ 289 - &__tracepoint_##name 290 - #endif 291 255 292 256 /* 293 257 * We have no guarantee that gcc and the linker won't up-align the tracepoint
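The `tracepoint_ptr_t` change above makes the `__tracepoints_ptrs` section hold 32-bit self-relative offsets under CONFIG_HAVE_ARCH_PREL32_RELOCATIONS: each slot stores the distance from its own address to the tracepoint, halving the table on 64-bit and needing no load-time relocation. A userspace sketch of the store/deref pair (names here are illustrative; `prel32_deref` mirrors the kernel's `offset_to_ptr()`):

```c
#include <assert.h>
#include <stdint.h>

int32_t prel32_slot;
int target_object;

/* Store a 32-bit offset from the slot's own address to the target.
 * Assumes slot and target live within 2 GiB of each other, as the
 * kernel's PREL32 scheme also requires. */
void prel32_set(int32_t *slot, const void *target)
{
	*slot = (int32_t)((intptr_t)target - (intptr_t)slot);
}

/* Recover the absolute pointer: slot address plus stored offset. */
void *prel32_deref(int32_t *slot)
{
	return (char *)slot + *slot;
}
```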
+10
include/net/dst.h
··· 527 527 dst->ops->update_pmtu(dst, NULL, skb, mtu); 528 528 } 529 529 530 + static inline void skb_tunnel_check_pmtu(struct sk_buff *skb, 531 + struct dst_entry *encap_dst, 532 + int headroom) 533 + { 534 + u32 encap_mtu = dst_mtu(encap_dst); 535 + 536 + if (skb->len > encap_mtu - headroom) 537 + skb_dst_update_pmtu(skb, encap_mtu - headroom); 538 + } 539 + 530 540 #endif /* _NET_DST_H */
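The new `skb_tunnel_check_pmtu()` helper replaces the unconditional `skb_dst_update_pmtu()` calls in geneve and vxlan: a PMTU update is only triggered when the packet would not fit in the encapsulating path MTU minus the tunnel header overhead. The check restated outside the kernel (MTU and headroom values in the tests are illustrative, roughly VXLAN-sized):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True when a packet of skb_len bytes cannot fit once the tunnel
 * headroom is carved out of the encapsulation path MTU. */
bool tunnel_needs_pmtu_update(uint32_t skb_len, uint32_t encap_mtu,
			      uint32_t headroom)
{
	return skb_len > encap_mtu - headroom;
}
```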
+4
include/net/ip6_fib.h
··· 159 159 struct rt6_info * __percpu *rt6i_pcpu; 160 160 struct rt6_exception_bucket __rcu *rt6i_exception_bucket; 161 161 162 + #ifdef CONFIG_IPV6_ROUTER_PREF 163 + unsigned long last_probe; 164 + #endif 165 + 162 166 u32 fib6_metric; 163 167 u8 fib6_protocol; 164 168 u8 fib6_type;
+1 -1
include/net/sctp/sm.h
··· 347 347 __u16 size; 348 348 349 349 size = ntohs(chunk->chunk_hdr->length); 350 - size -= sctp_datahdr_len(&chunk->asoc->stream); 350 + size -= sctp_datachk_len(&chunk->asoc->stream); 351 351 352 352 return size; 353 353 }
+2
include/net/sctp/structs.h
··· 876 876 unsigned long sackdelay; 877 877 __u32 sackfreq; 878 878 879 + atomic_t mtu_info; 880 + 879 881 /* When was the last time that we heard from this transport? We use 880 882 * this to pick new active and retran paths. 881 883 */
+1
include/uapi/linux/sctp.h
··· 301 301 SCTP_SACK_IMMEDIATELY = (1 << 3), /* SACK should be sent without delay. */ 302 302 /* 2 bits here have been used by SCTP_PR_SCTP_MASK */ 303 303 SCTP_SENDALL = (1 << 6), 304 + SCTP_PR_SCTP_ALL = (1 << 7), 304 305 SCTP_NOTIFICATION = MSG_NOTIFICATION, /* Next message is not user msg but notification. */ 305 306 SCTP_EOF = MSG_FIN, /* Initiate graceful shutdown process. */ 306 307 };
+2 -8
kernel/bpf/xskmap.c
··· 192 192 sock_hold(sock->sk); 193 193 194 194 old_xs = xchg(&m->xsk_map[i], xs); 195 - if (old_xs) { 196 - /* Make sure we've flushed everything. */ 197 - synchronize_net(); 195 + if (old_xs) 198 196 sock_put((struct sock *)old_xs); 199 - } 200 197 201 198 sockfd_put(sock); 202 199 return 0; ··· 209 212 return -EINVAL; 210 213 211 214 old_xs = xchg(&m->xsk_map[k], NULL); 212 - if (old_xs) { 213 - /* Make sure we've flushed everything. */ 214 - synchronize_net(); 215 + if (old_xs) 215 216 sock_put((struct sock *)old_xs); 216 - } 217 217 218 218 return 0; 219 219 }
+5 -5
kernel/trace/preemptirq_delay_test.c
··· 5 5 * Copyright (C) 2018 Joel Fernandes (Google) <joel@joelfernandes.org> 6 6 */ 7 7 8 + #include <linux/trace_clock.h> 8 9 #include <linux/delay.h> 9 10 #include <linux/interrupt.h> 10 11 #include <linux/irq.h> 11 12 #include <linux/kernel.h> 12 13 #include <linux/kthread.h> 13 - #include <linux/ktime.h> 14 14 #include <linux/module.h> 15 15 #include <linux/printk.h> 16 16 #include <linux/string.h> ··· 25 25 26 26 static void busy_wait(ulong time) 27 27 { 28 - ktime_t start, end; 29 - start = ktime_get(); 28 + u64 start, end; 29 + start = trace_clock_local(); 30 30 do { 31 - end = ktime_get(); 31 + end = trace_clock_local(); 32 32 if (kthread_should_stop()) 33 33 break; 34 - } while (ktime_to_ns(ktime_sub(end, start)) < (time * 1000)); 34 + } while ((end - start) < (time * 1000)); 35 35 } 36 36 37 37 static int preemptirq_delay_run(void *data)
+8 -16
kernel/tracepoint.c
··· 28 28 #include <linux/sched/task.h> 29 29 #include <linux/static_key.h> 30 30 31 - extern struct tracepoint * const __start___tracepoints_ptrs[]; 32 - extern struct tracepoint * const __stop___tracepoints_ptrs[]; 31 + extern tracepoint_ptr_t __start___tracepoints_ptrs[]; 32 + extern tracepoint_ptr_t __stop___tracepoints_ptrs[]; 33 33 34 34 DEFINE_SRCU(tracepoint_srcu); 35 35 EXPORT_SYMBOL_GPL(tracepoint_srcu); ··· 371 371 } 372 372 EXPORT_SYMBOL_GPL(tracepoint_probe_unregister); 373 373 374 - static void for_each_tracepoint_range(struct tracepoint * const *begin, 375 - struct tracepoint * const *end, 374 + static void for_each_tracepoint_range( 375 + tracepoint_ptr_t *begin, tracepoint_ptr_t *end, 376 376 void (*fct)(struct tracepoint *tp, void *priv), 377 377 void *priv) 378 378 { 379 + tracepoint_ptr_t *iter; 380 + 379 381 if (!begin) 380 382 return; 381 - 382 - if (IS_ENABLED(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)) { 383 - const int *iter; 384 - 385 - for (iter = (const int *)begin; iter < (const int *)end; iter++) 386 - fct(offset_to_ptr(iter), priv); 387 - } else { 388 - struct tracepoint * const *iter; 389 - 390 - for (iter = begin; iter < end; iter++) 391 - fct(*iter, priv); 392 - } 383 + for (iter = begin; iter < end; iter++) 384 + fct(tracepoint_ptr_deref(iter), priv); 393 385 } 394 386 395 387 #ifdef CONFIG_MODULES
+2 -2
lib/test_ida.c
··· 150 150 IDA_BUG_ON(ida, !ida_is_empty(ida)); 151 151 } 152 152 153 + static DEFINE_IDA(ida); 154 + 153 155 static int ida_checks(void) 154 156 { 155 - DEFINE_IDA(ida); 156 - 157 157 IDA_BUG_ON(&ida, !ida_is_empty(&ida)); 158 158 ida_check_alloc(&ida); 159 159 ida_check_destroy(&ida);
+4 -12
mm/huge_memory.c
··· 1780 1780 1781 1781 bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr, 1782 1782 unsigned long new_addr, unsigned long old_end, 1783 - pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush) 1783 + pmd_t *old_pmd, pmd_t *new_pmd) 1784 1784 { 1785 1785 spinlock_t *old_ptl, *new_ptl; 1786 1786 pmd_t pmd; ··· 1811 1811 if (new_ptl != old_ptl) 1812 1812 spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING); 1813 1813 pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd); 1814 - if (pmd_present(pmd) && pmd_dirty(pmd)) 1814 + if (pmd_present(pmd)) 1815 1815 force_flush = true; 1816 1816 VM_BUG_ON(!pmd_none(*new_pmd)); 1817 1817 ··· 1822 1822 } 1823 1823 pmd = move_soft_dirty_pmd(pmd); 1824 1824 set_pmd_at(mm, new_addr, new_pmd, pmd); 1825 - if (new_ptl != old_ptl) 1826 - spin_unlock(new_ptl); 1827 1825 if (force_flush) 1828 1826 flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE); 1829 - else 1830 - *need_flush = true; 1827 + if (new_ptl != old_ptl) 1828 + spin_unlock(new_ptl); 1831 1829 spin_unlock(old_ptl); 1832 1830 return true; 1833 1831 } ··· 2883 2885 if (!(pvmw->pmd && !pvmw->pte)) 2884 2886 return; 2885 2887 2886 - mmu_notifier_invalidate_range_start(mm, address, 2887 - address + HPAGE_PMD_SIZE); 2888 - 2889 2888 flush_cache_range(vma, address, address + HPAGE_PMD_SIZE); 2890 2889 pmdval = *pvmw->pmd; 2891 2890 pmdp_invalidate(vma, address, pvmw->pmd); ··· 2895 2900 set_pmd_at(mm, address, pvmw->pmd, pmdswp); 2896 2901 page_remove_rmap(page, true); 2897 2902 put_page(page); 2898 - 2899 - mmu_notifier_invalidate_range_end(mm, address, 2900 - address + HPAGE_PMD_SIZE); 2901 2903 } 2902 2904 2903 2905 void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
+1 -1
mm/mmap.c
··· 1410 1410 if (flags & MAP_FIXED_NOREPLACE) { 1411 1411 struct vm_area_struct *vma = find_vma(mm, addr); 1412 1412 1413 - if (vma && vma->vm_start <= addr) 1413 + if (vma && vma->vm_start < addr + len) 1414 1414 return -EEXIST; 1415 1415 } 1416 1416
+13 -17
mm/mremap.c
··· 115 115 static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd, 116 116 unsigned long old_addr, unsigned long old_end, 117 117 struct vm_area_struct *new_vma, pmd_t *new_pmd, 118 - unsigned long new_addr, bool need_rmap_locks, bool *need_flush) 118 + unsigned long new_addr, bool need_rmap_locks) 119 119 { 120 120 struct mm_struct *mm = vma->vm_mm; 121 121 pte_t *old_pte, *new_pte, pte; ··· 163 163 164 164 pte = ptep_get_and_clear(mm, old_addr, old_pte); 165 165 /* 166 - * If we are remapping a dirty PTE, make sure 166 + * If we are remapping a valid PTE, make sure 167 167 * to flush TLB before we drop the PTL for the 168 - * old PTE or we may race with page_mkclean(). 168 + * PTE. 169 169 * 170 - * This check has to be done after we removed the 171 - * old PTE from page tables or another thread may 172 - * dirty it after the check and before the removal. 170 + * NOTE! Both old and new PTL matter: the old one 171 + * for racing with page_mkclean(), the new one to 172 + * make sure the physical page stays valid until 173 + * the TLB entry for the old mapping has been 174 + * flushed. 
173 175 */ 174 - if (pte_present(pte) && pte_dirty(pte)) 176 + if (pte_present(pte)) 175 177 force_flush = true; 176 178 pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr); 177 179 pte = move_soft_dirty_pte(pte); ··· 181 179 } 182 180 183 181 arch_leave_lazy_mmu_mode(); 182 + if (force_flush) 183 + flush_tlb_range(vma, old_end - len, old_end); 184 184 if (new_ptl != old_ptl) 185 185 spin_unlock(new_ptl); 186 186 pte_unmap(new_pte - 1); 187 - if (force_flush) 188 - flush_tlb_range(vma, old_end - len, old_end); 189 - else 190 - *need_flush = true; 191 187 pte_unmap_unlock(old_pte - 1, old_ptl); 192 188 if (need_rmap_locks) 193 189 drop_rmap_locks(vma); ··· 198 198 { 199 199 unsigned long extent, next, old_end; 200 200 pmd_t *old_pmd, *new_pmd; 201 - bool need_flush = false; 202 201 unsigned long mmun_start; /* For mmu_notifiers */ 203 202 unsigned long mmun_end; /* For mmu_notifiers */ 204 203 ··· 228 229 if (need_rmap_locks) 229 230 take_rmap_locks(vma); 230 231 moved = move_huge_pmd(vma, old_addr, new_addr, 231 - old_end, old_pmd, new_pmd, 232 - &need_flush); 232 + old_end, old_pmd, new_pmd); 233 233 if (need_rmap_locks) 234 234 drop_rmap_locks(vma); 235 235 if (moved) ··· 244 246 if (extent > next - new_addr) 245 247 extent = next - new_addr; 246 248 move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma, 247 - new_pmd, new_addr, need_rmap_locks, &need_flush); 249 + new_pmd, new_addr, need_rmap_locks); 248 250 } 249 - if (need_flush) 250 - flush_tlb_range(vma, old_end-len, old_addr); 251 251 252 252 mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end); 253 253
+4 -2
net/bpfilter/bpfilter_kern.c
··· 23 23 24 24 if (!info->pid) 25 25 return; 26 - tsk = pid_task(find_vpid(info->pid), PIDTYPE_PID); 27 - if (tsk) 26 + tsk = get_pid_task(find_vpid(info->pid), PIDTYPE_PID); 27 + if (tsk) { 28 28 force_sig(SIGKILL, tsk); 29 + put_task_struct(tsk); 30 + } 29 31 fput(info->pipe_to_umh); 30 32 fput(info->pipe_from_umh); 31 33 info->pid = 0;
+9 -2
net/core/ethtool.c
··· 928 928 return -EINVAL; 929 929 } 930 930 931 + if (info.cmd != cmd) 932 + return -EINVAL; 933 + 931 934 if (info.cmd == ETHTOOL_GRXCLSRLALL) { 932 935 if (info.rule_cnt > 0) { 933 936 if (info.rule_cnt <= KMALLOC_MAX_SIZE / sizeof(u32)) ··· 2395 2392 return ret; 2396 2393 } 2397 2394 2398 - static int ethtool_set_per_queue(struct net_device *dev, void __user *useraddr) 2395 + static int ethtool_set_per_queue(struct net_device *dev, 2396 + void __user *useraddr, u32 sub_cmd) 2399 2397 { 2400 2398 struct ethtool_per_queue_op per_queue_opt; 2401 2399 2402 2400 if (copy_from_user(&per_queue_opt, useraddr, sizeof(per_queue_opt))) 2403 2401 return -EFAULT; 2402 + 2403 + if (per_queue_opt.sub_command != sub_cmd) 2404 + return -EINVAL; 2404 2405 2405 2406 switch (per_queue_opt.sub_command) { 2406 2407 case ETHTOOL_GCOALESCE: ··· 2776 2769 rc = ethtool_get_phy_stats(dev, useraddr); 2777 2770 break; 2778 2771 case ETHTOOL_PERQUEUE: 2779 - rc = ethtool_set_per_queue(dev, useraddr); 2772 + rc = ethtool_set_per_queue(dev, useraddr, sub_cmd); 2780 2773 break; 2781 2774 case ETHTOOL_GLINKSETTINGS: 2782 2775 rc = ethtool_get_link_ksettings(dev, useraddr);
-2
net/core/netpoll.c
··· 312 312 /* It is up to the caller to keep npinfo alive. */ 313 313 struct netpoll_info *npinfo; 314 314 315 - rcu_read_lock_bh(); 316 315 lockdep_assert_irqs_disabled(); 317 316 318 317 npinfo = rcu_dereference_bh(np->dev->npinfo); ··· 356 357 skb_queue_tail(&npinfo->txq, skb); 357 358 schedule_delayed_work(&npinfo->tx_work,0); 358 359 } 359 - rcu_read_unlock_bh(); 360 360 } 361 361 EXPORT_SYMBOL(netpoll_send_skb_on_dev); 362 362
-2
net/ipv4/ipmr_base.c
··· 315 315 next_entry: 316 316 e++; 317 317 } 318 - e = 0; 319 - s_e = 0; 320 318 321 319 spin_lock_bh(lock); 322 320 list_for_each_entry(mfc, &mrt->mfc_unres_queue, list) {
+5 -5
net/ipv6/ip6_tunnel.c
··· 1184 1184 } 1185 1185 skb_dst_set(skb, dst); 1186 1186 1187 - if (encap_limit >= 0) { 1188 - init_tel_txopt(&opt, encap_limit); 1189 - ipv6_push_frag_opts(skb, &opt.ops, &proto); 1190 - } 1191 - 1192 1187 if (hop_limit == 0) { 1193 1188 if (skb->protocol == htons(ETH_P_IP)) 1194 1189 hop_limit = ip_hdr(skb)->ttl; ··· 1204 1209 err = ip6_tnl_encap(skb, t, &proto, fl6); 1205 1210 if (err) 1206 1211 return err; 1212 + 1213 + if (encap_limit >= 0) { 1214 + init_tel_txopt(&opt, encap_limit); 1215 + ipv6_push_frag_opts(skb, &opt.ops, &proto); 1216 + } 1207 1217 1208 1218 skb_push(skb, sizeof(struct ipv6hdr)); 1209 1219 skb_reset_network_header(skb);
+8 -8
net/ipv6/mcast.c
··· 2436 2436 { 2437 2437 int err; 2438 2438 2439 - /* callers have the socket lock and rtnl lock 2440 - * so no other readers or writers of iml or its sflist 2441 - */ 2439 + write_lock_bh(&iml->sflock); 2442 2440 if (!iml->sflist) { 2443 2441 /* any-source empty exclude case */ 2444 - return ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 0, NULL, 0); 2442 + err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 0, NULL, 0); 2443 + } else { 2444 + err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 2445 + iml->sflist->sl_count, iml->sflist->sl_addr, 0); 2446 + sock_kfree_s(sk, iml->sflist, IP6_SFLSIZE(iml->sflist->sl_max)); 2447 + iml->sflist = NULL; 2445 2448 } 2446 - err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 2447 - iml->sflist->sl_count, iml->sflist->sl_addr, 0); 2448 - sock_kfree_s(sk, iml->sflist, IP6_SFLSIZE(iml->sflist->sl_max)); 2449 - iml->sflist = NULL; 2449 + write_unlock_bh(&iml->sflock); 2450 2450 return err; 2451 2451 } 2452 2452
+6 -6
net/ipv6/route.c
··· 517 517 518 518 static void rt6_probe(struct fib6_info *rt) 519 519 { 520 - struct __rt6_probe_work *work; 520 + struct __rt6_probe_work *work = NULL; 521 521 const struct in6_addr *nh_gw; 522 522 struct neighbour *neigh; 523 523 struct net_device *dev; 524 + struct inet6_dev *idev; 524 525 525 526 /* 526 527 * Okay, this does not seem to be appropriate ··· 537 536 nh_gw = &rt->fib6_nh.nh_gw; 538 537 dev = rt->fib6_nh.nh_dev; 539 538 rcu_read_lock_bh(); 539 + idev = __in6_dev_get(dev); 540 540 neigh = __ipv6_neigh_lookup_noref(dev, nh_gw); 541 541 if (neigh) { 542 - struct inet6_dev *idev; 543 - 544 542 if (neigh->nud_state & NUD_VALID) 545 543 goto out; 546 544 547 - idev = __in6_dev_get(dev); 548 - work = NULL; 549 545 write_lock(&neigh->lock); 550 546 if (!(neigh->nud_state & NUD_VALID) && 551 547 time_after(jiffies, ··· 552 554 __neigh_set_probe_once(neigh); 553 555 } 554 556 write_unlock(&neigh->lock); 555 - } else { 557 + } else if (time_after(jiffies, rt->last_probe + 558 + idev->cnf.rtr_probe_interval)) { 556 559 work = kmalloc(sizeof(*work), GFP_ATOMIC); 557 560 } 558 561 559 562 if (work) { 563 + rt->last_probe = jiffies; 560 564 INIT_WORK(&work->work, rt6_probe_deferred); 561 565 work->target = *nh_gw; 562 566 dev_hold(dev);
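The rt6_probe() hunk above rate-limits neighbour probes with `time_after(jiffies, rt->last_probe + idev->cnf.rtr_probe_interval)`. `time_after()` is wrap-safe because it compares via signed subtraction rather than a plain `>`. A 32-bit illustration of that comparison (the kernel's macro operates on `unsigned long`; the signed conversion is implementation-defined in ISO C but behaves as expected on mainstream compilers):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True when counter value a is "after" b, even across wraparound of
 * the 32-bit counter, in the style of the kernel's time_after(). */
bool counter_after(uint32_t a, uint32_t b)
{
	return (int32_t)(b - a) < 0;
}
```

A plain `a > b` would misfire near wraparound (e.g. `jiffies` just past zero versus a deadline just below `UINT32_MAX`), whereas the signed difference stays correct as long as the two values are within half the counter range.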
+2 -4
net/ipv6/udp.c
··· 766 766 767 767 ret = udpv6_queue_rcv_skb(sk, skb); 768 768 769 - /* a return value > 0 means to resubmit the input, but 770 - * it wants the return to be -protocol, or 0 771 - */ 769 + /* a return value > 0 means to resubmit the input */ 772 770 if (ret > 0) 773 - return -ret; 771 + return ret; 774 772 return 0; 775 773 } 776 774
+2 -2
net/ipv6/xfrm6_policy.c
··· 146 146 fl6->daddr = reverse ? hdr->saddr : hdr->daddr; 147 147 fl6->saddr = reverse ? hdr->daddr : hdr->saddr; 148 148 149 - while (nh + offset + 1 < skb->data || 150 - pskb_may_pull(skb, nh + offset + 1 - skb->data)) { 149 + while (nh + offset + sizeof(*exthdr) < skb->data || 150 + pskb_may_pull(skb, nh + offset + sizeof(*exthdr) - skb->data)) { 151 151 nh = skb_network_header(skb); 152 152 exthdr = (struct ipv6_opt_hdr *)(nh + offset); 153 153
+1
net/llc/llc_conn.c
··· 734 734 llc_sk(sk)->sap = sap; 735 735 736 736 spin_lock_bh(&sap->sk_lock); 737 + sock_set_flag(sk, SOCK_RCU_FREE); 737 738 sap->sk_count++; 738 739 sk_nulls_add_node_rcu(sk, laddr_hb); 739 740 hlist_add_head(&llc->dev_hash_node, dev_hb);
+1 -1
net/rxrpc/call_accept.c
··· 337 337 { 338 338 struct rxrpc_skb_priv *sp = rxrpc_skb(skb); 339 339 struct rxrpc_connection *conn; 340 - struct rxrpc_peer *peer; 340 + struct rxrpc_peer *peer = NULL; 341 341 struct rxrpc_call *call; 342 342 343 343 _enter("");
+1 -1
net/rxrpc/local_object.c
··· 139 139 udp_sk(usk)->gro_complete = NULL; 140 140 141 141 udp_encap_enable(); 142 - #if IS_ENABLED(CONFIG_IPV6) 142 + #if IS_ENABLED(CONFIG_AF_RXRPC_IPV6) 143 143 if (local->srx.transport.family == AF_INET6) 144 144 udpv6_encap_enable(); 145 145 #endif
+2 -1
net/rxrpc/output.c
··· 572 572 whdr.flags ^= RXRPC_CLIENT_INITIATED; 573 573 whdr.flags &= RXRPC_CLIENT_INITIATED; 574 574 575 - ret = kernel_sendmsg(local->socket, &msg, iov, 2, size); 575 + ret = kernel_sendmsg(local->socket, &msg, 576 + iov, ioc, size); 576 577 if (ret < 0) 577 578 trace_rxrpc_tx_fail(local->debug_id, 0, ret, 578 579 rxrpc_tx_point_reject);
+1
net/rxrpc/peer_event.c
··· 197 197 rxrpc_store_error(peer, serr); 198 198 rcu_read_unlock(); 199 199 rxrpc_free_skb(skb, rxrpc_skb_rx_freed); 200 + rxrpc_put_peer(peer); 200 201 201 202 _leave(""); 202 203 }
+7 -5
net/sched/cls_api.c
··· 31 31 #include <net/pkt_sched.h> 32 32 #include <net/pkt_cls.h> 33 33 34 + extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1]; 35 + 34 36 /* The list of all installed classifier types */ 35 37 static LIST_HEAD(tcf_proto_base); 36 38 ··· 1306 1304 replay: 1307 1305 tp_created = 0; 1308 1306 1309 - err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack); 1307 + err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack); 1310 1308 if (err < 0) 1311 1309 return err; 1312 1310 ··· 1456 1454 if (!netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN)) 1457 1455 return -EPERM; 1458 1456 1459 - err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack); 1457 + err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack); 1460 1458 if (err < 0) 1461 1459 return err; 1462 1460 ··· 1572 1570 void *fh = NULL; 1573 1571 int err; 1574 1572 1575 - err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack); 1573 + err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack); 1576 1574 if (err < 0) 1577 1575 return err; 1578 1576 ··· 1939 1937 return -EPERM; 1940 1938 1941 1939 replay: 1942 - err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack); 1940 + err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack); 1943 1941 if (err < 0) 1944 1942 return err; 1945 1943 ··· 2057 2055 if (nlmsg_len(cb->nlh) < sizeof(*tcm)) 2058 2056 return skb->len; 2059 2057 2060 - err = nlmsg_parse(cb->nlh, sizeof(*tcm), tca, TCA_MAX, NULL, 2058 + err = nlmsg_parse(cb->nlh, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy, 2061 2059 cb->extack); 2062 2060 if (err) 2063 2061 return err;
+6 -5
net/sched/sch_api.c
··· 1318 1318 return 0; 1319 1319 } 1320 1320 1321 - /* 1322 - * Delete/get qdisc. 1323 - */ 1324 - 1325 1321 const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = { 1326 1322 [TCA_KIND] = { .type = NLA_STRING }, 1327 1323 [TCA_OPTIONS] = { .type = NLA_NESTED }, ··· 1329 1333 [TCA_INGRESS_BLOCK] = { .type = NLA_U32 }, 1330 1334 [TCA_EGRESS_BLOCK] = { .type = NLA_U32 }, 1331 1335 }; 1336 + 1337 + /* 1338 + * Delete/get qdisc. 1339 + */ 1332 1340 1333 1341 static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n, 1334 1342 struct netlink_ext_ack *extack) ··· 2070 2070 2071 2071 if (tcm->tcm_parent) { 2072 2072 q = qdisc_match_from_root(root, TC_H_MAJ(tcm->tcm_parent)); 2073 - if (q && tc_dump_tclass_qdisc(q, skb, tcm, cb, t_p, s_t) < 0) 2073 + if (q && q != root && 2074 + tc_dump_tclass_qdisc(q, skb, tcm, cb, t_p, s_t) < 0) 2074 2075 return -1; 2075 2076 return 0; 2076 2077 }
+2 -1
net/sctp/associola.c
··· 1450 1450 /* Get the lowest pmtu of all the transports. */ 1451 1451 list_for_each_entry(t, &asoc->peer.transport_addr_list, transports) { 1452 1452 if (t->pmtu_pending && t->dst) { 1453 - sctp_transport_update_pmtu(t, sctp_dst_mtu(t->dst)); 1453 + sctp_transport_update_pmtu(t, 1454 + atomic_read(&t->mtu_info)); 1454 1455 t->pmtu_pending = 0; 1455 1456 } 1456 1457 if (!pmtu || (t->pathmtu < pmtu))
+1
net/sctp/input.c
··· 395 395 return; 396 396 397 397 if (sock_owned_by_user(sk)) { 398 + atomic_set(&t->mtu_info, pmtu); 398 399 asoc->pmtu_pending = 1; 399 400 t->pmtu_pending = 1; 400 401 return;
+6
net/sctp/output.c
··· 120 120 sctp_assoc_sync_pmtu(asoc); 121 121 } 122 122 123 + if (asoc->pmtu_pending) { 124 + if (asoc->param_flags & SPP_PMTUD_ENABLE) 125 + sctp_assoc_sync_pmtu(asoc); 126 + asoc->pmtu_pending = 0; 127 + } 128 + 123 129 /* If there a is a prepend chunk stick it on the list before 124 130 * any other chunks get appended. 125 131 */
+9 -8
net/sctp/socket.c
··· 253 253 254 254 spin_lock_bh(&sctp_assocs_id_lock); 255 255 asoc = (struct sctp_association *)idr_find(&sctp_assocs_id, (int)id); 256 + if (asoc && (asoc->base.sk != sk || asoc->base.dead)) 257 + asoc = NULL; 256 258 spin_unlock_bh(&sctp_assocs_id_lock); 257 - 258 - if (!asoc || (asoc->base.sk != sk) || asoc->base.dead) 259 - return NULL; 260 259 261 260 return asoc; 262 261 } ··· 1927 1928 if (sp->strm_interleave) { 1928 1929 timeo = sock_sndtimeo(sk, 0); 1929 1930 err = sctp_wait_for_connect(asoc, &timeo); 1930 - if (err) 1931 + if (err) { 1932 + err = -ESRCH; 1931 1933 goto err; 1934 + } 1932 1935 } else { 1933 1936 wait_connect = true; 1934 1937 } ··· 7083 7082 } 7084 7083 7085 7084 policy = params.sprstat_policy; 7086 - if (policy & ~SCTP_PR_SCTP_MASK) 7085 + if (!policy || (policy & ~(SCTP_PR_SCTP_MASK | SCTP_PR_SCTP_ALL))) 7087 7086 goto out; 7088 7087 7089 7088 asoc = sctp_id2assoc(sk, params.sprstat_assoc_id); 7090 7089 if (!asoc) 7091 7090 goto out; 7092 7091 7093 - if (policy == SCTP_PR_SCTP_NONE) { 7092 + if (policy & SCTP_PR_SCTP_ALL) { 7094 7093 params.sprstat_abandoned_unsent = 0; 7095 7094 params.sprstat_abandoned_sent = 0; 7096 7095 for (policy = 0; policy <= SCTP_PR_INDEX(MAX); policy++) { ··· 7142 7141 } 7143 7142 7144 7143 policy = params.sprstat_policy; 7145 - if (policy & ~SCTP_PR_SCTP_MASK) 7144 + if (!policy || (policy & ~(SCTP_PR_SCTP_MASK | SCTP_PR_SCTP_ALL))) 7146 7145 goto out; 7147 7146 7148 7147 asoc = sctp_id2assoc(sk, params.sprstat_assoc_id); ··· 7158 7157 goto out; 7159 7158 } 7160 7159 7161 - if (policy == SCTP_PR_SCTP_NONE) { 7160 + if (policy == SCTP_PR_SCTP_ALL) { 7162 7161 params.sprstat_abandoned_unsent = 0; 7163 7162 params.sprstat_abandoned_sent = 0; 7164 7163 for (policy = 0; policy <= SCTP_PR_INDEX(MAX); policy++) {
+8 -3
net/socket.c
··· 2875 2875 copy_in_user(&rxnfc->fs.ring_cookie, 2876 2876 &compat_rxnfc->fs.ring_cookie, 2877 2877 (void __user *)(&rxnfc->fs.location + 1) - 2878 - (void __user *)&rxnfc->fs.ring_cookie) || 2879 - copy_in_user(&rxnfc->rule_cnt, &compat_rxnfc->rule_cnt, 2880 - sizeof(rxnfc->rule_cnt))) 2878 + (void __user *)&rxnfc->fs.ring_cookie)) 2879 + return -EFAULT; 2880 + if (ethcmd == ETHTOOL_GRXCLSRLALL) { 2881 + if (put_user(rule_cnt, &rxnfc->rule_cnt)) 2882 + return -EFAULT; 2883 + } else if (copy_in_user(&rxnfc->rule_cnt, 2884 + &compat_rxnfc->rule_cnt, 2885 + sizeof(rxnfc->rule_cnt))) 2881 2886 return -EFAULT; 2882 2887 } 2883 2888
+1
net/tipc/group.c
··· 666 666 struct sk_buff *skb; 667 667 struct tipc_msg *hdr; 668 668 669 + memset(&evt, 0, sizeof(evt)); 669 670 evt.event = event; 670 671 evt.found_lower = m->instance; 671 672 evt.found_upper = m->instance;
+1
net/tipc/link.c
··· 1041 1041 if (r->last_retransm != buf_seqno(skb)) { 1042 1042 r->last_retransm = buf_seqno(skb); 1043 1043 r->stale_limit = jiffies + msecs_to_jiffies(r->tolerance); 1044 + r->stale_cnt = 0; 1044 1045 } else if (++r->stale_cnt > 99 && time_after(jiffies, r->stale_limit)) { 1045 1046 link_retransmit_failure(l, skb); 1046 1047 if (link_is_bc_sndlink(l))
+2 -2
net/tipc/name_distr.c
··· 115 115 struct sk_buff *buf; 116 116 struct distr_item *item; 117 117 118 - list_del(&publ->binding_node); 118 + list_del_rcu(&publ->binding_node); 119 119 120 120 if (publ->scope == TIPC_NODE_SCOPE) 121 121 return NULL; ··· 147 147 ITEM_SIZE) * ITEM_SIZE; 148 148 u32 msg_rem = msg_dsz; 149 149 150 - list_for_each_entry(publ, pls, binding_node) { 150 + list_for_each_entry_rcu(publ, pls, binding_node) { 151 151 /* Prepare next buffer: */ 152 152 if (!skb) { 153 153 skb = named_prepare_buf(net, PUBLICATION, msg_rem,
+2
net/xdp/xsk.c
··· 754 754 sk->sk_destruct = xsk_destruct; 755 755 sk_refcnt_debug_inc(sk); 756 756 757 + sock_set_flag(sk, SOCK_RCU_FREE); 758 + 757 759 xs = xdp_sk(sk); 758 760 mutex_init(&xs->mutex); 759 761 spin_lock_init(&xs->tx_completion_lock);
+3
net/xfrm/xfrm_interface.c
··· 116 116 117 117 static void xfrmi_dev_free(struct net_device *dev) 118 118 { 119 + struct xfrm_if *xi = netdev_priv(dev); 120 + 121 + gro_cells_destroy(&xi->gro_cells); 119 122 free_percpu(dev->tstats); 120 123 } 121 124
+4 -4
net/xfrm/xfrm_policy.c
··· 632 632 break; 633 633 } 634 634 if (newpos) 635 - hlist_add_behind(&policy->bydst, newpos); 635 + hlist_add_behind_rcu(&policy->bydst, newpos); 636 636 else 637 - hlist_add_head(&policy->bydst, chain); 637 + hlist_add_head_rcu(&policy->bydst, chain); 638 638 } 639 639 640 640 spin_unlock_bh(&net->xfrm.xfrm_policy_lock); ··· 774 774 break; 775 775 } 776 776 if (newpos) 777 - hlist_add_behind(&policy->bydst, newpos); 777 + hlist_add_behind_rcu(&policy->bydst, newpos); 778 778 else 779 - hlist_add_head(&policy->bydst, chain); 779 + hlist_add_head_rcu(&policy->bydst, chain); 780 780 __xfrm_policy_link(policy, dir); 781 781 782 782 /* After previous checking, family can either be AF_INET or AF_INET6 */
+9 -4
tools/testing/selftests/net/reuseport_bpf.c
··· 437 437 } 438 438 } 439 439 440 - static struct rlimit rlim_old, rlim_new; 440 + static struct rlimit rlim_old; 441 441 442 442 static __attribute__((constructor)) void main_ctor(void) 443 443 { 444 444 getrlimit(RLIMIT_MEMLOCK, &rlim_old); 445 - rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20); 446 - rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20); 447 - setrlimit(RLIMIT_MEMLOCK, &rlim_new); 445 + 446 + if (rlim_old.rlim_cur != RLIM_INFINITY) { 447 + struct rlimit rlim_new; 448 + 449 + rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20); 450 + rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20); 451 + setrlimit(RLIMIT_MEMLOCK, &rlim_new); 452 + } 448 453 } 449 454 450 455 static __attribute__((destructor)) void main_dtor(void)