Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.6-rc7 into char-misc-next

We need the char/misc driver fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3494 -1510
+21 -4
.clang-format
··· 86 86 - 'bio_for_each_segment_all' 87 87 - 'bio_list_for_each' 88 88 - 'bip_for_each_vec' 89 + - 'bitmap_for_each_clear_region' 90 + - 'bitmap_for_each_set_region' 89 91 - 'blkg_for_each_descendant_post' 90 92 - 'blkg_for_each_descendant_pre' 91 93 - 'blk_queue_for_each_rl' ··· 117 115 - 'drm_client_for_each_connector_iter' 118 116 - 'drm_client_for_each_modeset' 119 117 - 'drm_connector_for_each_possible_encoder' 118 + - 'drm_for_each_bridge_in_chain' 120 119 - 'drm_for_each_connector_iter' 121 120 - 'drm_for_each_crtc' 122 121 - 'drm_for_each_encoder' ··· 139 136 - 'for_each_bio' 140 137 - 'for_each_board_func_rsrc' 141 138 - 'for_each_bvec' 139 + - 'for_each_card_auxs' 140 + - 'for_each_card_auxs_safe' 142 141 - 'for_each_card_components' 143 - - 'for_each_card_links' 144 - - 'for_each_card_links_safe' 142 + - 'for_each_card_pre_auxs' 145 143 - 'for_each_card_prelinks' 146 144 - 'for_each_card_rtds' 147 145 - 'for_each_card_rtds_safe' ··· 170 166 - 'for_each_dpcm_fe' 171 167 - 'for_each_drhd_unit' 172 168 - 'for_each_dss_dev' 169 + - 'for_each_efi_handle' 173 170 - 'for_each_efi_memory_desc' 174 171 - 'for_each_efi_memory_desc_in_map' 175 172 - 'for_each_element' ··· 195 190 - 'for_each_lru' 196 191 - 'for_each_matching_node' 197 192 - 'for_each_matching_node_and_match' 193 + - 'for_each_member' 198 194 - 'for_each_memblock' 199 195 - 'for_each_memblock_type' 200 196 - 'for_each_memcg_cache_index' ··· 206 200 - 'for_each_msi_entry' 207 201 - 'for_each_msi_entry_safe' 208 202 - 'for_each_net' 203 + - 'for_each_net_continue_reverse' 209 204 - 'for_each_netdev' 210 205 - 'for_each_netdev_continue' 211 206 - 'for_each_netdev_continue_rcu' 207 + - 'for_each_netdev_continue_reverse' 212 208 - 'for_each_netdev_feature' 213 209 - 'for_each_netdev_in_bond_rcu' 214 210 - 'for_each_netdev_rcu' ··· 262 254 - 'for_each_reserved_mem_region' 263 255 - 'for_each_rtd_codec_dai' 264 256 - 'for_each_rtd_codec_dai_rollback' 265 - - 'for_each_rtdcom' 266 - - 'for_each_rtdcom_safe' 257 + - 'for_each_rtd_components' 267 258 - 'for_each_set_bit' 268 259 - 'for_each_set_bit_from' 260 + - 'for_each_set_clump8' 269 261 - 'for_each_sg' 270 262 - 'for_each_sg_dma_page' 271 263 - 'for_each_sg_page' ··· 275 267 - 'for_each_subelement_id' 276 268 - '__for_each_thread' 277 269 - 'for_each_thread' 270 + - 'for_each_wakeup_source' 278 271 - 'for_each_zone' 279 272 - 'for_each_zone_zonelist' 280 273 - 'for_each_zone_zonelist_nodemask' ··· 339 330 - 'list_for_each' 340 331 - 'list_for_each_codec' 341 332 - 'list_for_each_codec_safe' 333 + - 'list_for_each_continue' 342 334 - 'list_for_each_entry' 343 335 - 'list_for_each_entry_continue' 344 336 - 'list_for_each_entry_continue_rcu' ··· 361 351 - 'llist_for_each_entry' 362 352 - 'llist_for_each_entry_safe' 363 353 - 'llist_for_each_safe' 354 + - 'mci_for_each_dimm' 364 355 - 'media_device_for_each_entity' 365 356 - 'media_device_for_each_intf' 366 357 - 'media_device_for_each_link' ··· 455 444 - 'virtio_device_for_each_vq' 456 445 - 'xa_for_each' 457 446 - 'xa_for_each_marked' 447 + - 'xa_for_each_range' 458 448 - 'xa_for_each_start' 459 449 - 'xas_for_each' 460 450 - 'xas_for_each_conflict' 461 451 - 'xas_for_each_marked' 452 + - 'xbc_array_for_each_value' 453 + - 'xbc_for_each_key_value' 454 + - 'xbc_node_for_each_array_value' 455 + - 'xbc_node_for_each_child' 456 + - 'xbc_node_for_each_key_value' 462 457 - 'zorro_for_each_dev' 463 458 464 459 #IncludeBlocks: Preserve # Unknown to clang-format-5.0
+2
Documentation/arm64/silicon-errata.rst
··· 110 110 +----------------+-----------------+-----------------+-----------------------------+ 111 111 | Cavium | ThunderX GICv3 | #23154 | CAVIUM_ERRATUM_23154 | 112 112 +----------------+-----------------+-----------------+-----------------------------+ 113 + | Cavium | ThunderX GICv3 | #38539 | N/A | 114 + +----------------+-----------------+-----------------+-----------------------------+ 113 115 | Cavium | ThunderX Core | #27456 | CAVIUM_ERRATUM_27456 | 114 116 +----------------+-----------------+-----------------+-----------------------------+ 115 117 | Cavium | ThunderX Core | #30115 | CAVIUM_ERRATUM_30115 |
+7
Documentation/devicetree/bindings/net/fsl-fman.txt
··· 110 110 Usage: required 111 111 Definition: See soc/fsl/qman.txt and soc/fsl/bman.txt 112 112 113 + - fsl,erratum-a050385 114 + Usage: optional 115 + Value type: boolean 116 + Definition: A boolean property. Indicates the presence of the 117 + erratum A050385 which indicates that DMA transactions that are 118 + split can result in a FMan lock. 119 + 113 120 ============================================================================= 114 121 FMan MURAM Node 115 122
+8
Documentation/filesystems/porting.rst
··· 850 850 d_alloc_pseudo() is internal-only; uses outside of alloc_file_pseudo() are 851 851 very suspect (and won't work in modules). Such uses are very likely to 852 852 be misspelled d_alloc_anon(). 853 + 854 + --- 855 + 856 + **mandatory** 857 + 858 + [should've been added in 2016] stale comment in finish_open() nonwithstanding, 859 + failure exits in ->atomic_open() instances should *NOT* fput() the file, 860 + no matter what. Everything is handled by the caller.
+1 -1
Documentation/kbuild/kbuild.rst
··· 237 237 KBUILD_EXTRA_SYMBOLS 238 238 -------------------- 239 239 For modules that use symbols from other modules. 240 - See more details in modules.txt. 240 + See more details in modules.rst. 241 241 242 242 ALLSOURCE_ARCHS 243 243 ---------------
+1 -1
Documentation/kbuild/kconfig-macro-language.rst
··· 44 44 def_bool y 45 45 46 46 Then, Kconfig moves onto the evaluation stage to resolve inter-symbol 47 - dependency as explained in kconfig-language.txt. 47 + dependency as explained in kconfig-language.rst. 48 48 49 49 50 50 Variables
+3 -3
Documentation/kbuild/makefiles.rst
··· 924 924 $(KBUILD_AFLAGS_MODULE) is used to add arch-specific options that 925 925 are used for assembler. 926 926 927 - From commandline AFLAGS_MODULE shall be used (see kbuild.txt). 927 + From commandline AFLAGS_MODULE shall be used (see kbuild.rst). 928 928 929 929 KBUILD_CFLAGS_KERNEL 930 930 $(CC) options specific for built-in ··· 937 937 938 938 $(KBUILD_CFLAGS_MODULE) is used to add arch-specific options that 939 939 are used for $(CC). 940 - From commandline CFLAGS_MODULE shall be used (see kbuild.txt). 940 + From commandline CFLAGS_MODULE shall be used (see kbuild.rst). 941 941 942 942 KBUILD_LDFLAGS_MODULE 943 943 Options for $(LD) when linking modules ··· 945 945 $(KBUILD_LDFLAGS_MODULE) is used to add arch-specific options 946 946 used when linking modules. This is often a linker script. 947 947 948 - From commandline LDFLAGS_MODULE shall be used (see kbuild.txt). 948 + From commandline LDFLAGS_MODULE shall be used (see kbuild.rst). 949 949 950 950 KBUILD_LDS 951 951
+2 -2
Documentation/kbuild/modules.rst
··· 470 470 471 471 The syntax of the Module.symvers file is:: 472 472 473 - <CRC> <Symbol> <Namespace> <Module> <Export Type> 473 + <CRC> <Symbol> <Module> <Export Type> <Namespace> 474 474 475 - 0xe1cc2a05 usb_stor_suspend USB_STORAGE drivers/usb/storage/usb-storage EXPORT_SYMBOL_GPL 475 + 0xe1cc2a05 usb_stor_suspend drivers/usb/storage/usb-storage EXPORT_SYMBOL_GPL USB_STORAGE 476 476 477 477 The fields are separated by tabs and values may be empty (e.g. 478 478 if no namespace is defined for an exported symbol).
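The documentation fix above reflects that the namespace moved to the last column. As a minimal userspace sketch (not kernel code; `split_symvers` and the five-field layout are illustrative assumptions based on the text above), a line in the new order can be split like this — `strsep()` rather than `strtok()`, because fields such as the namespace may be empty:

```c
#define _DEFAULT_SOURCE
#include <string.h>

/* Split one Module.symvers line into its five tab-separated fields,
 * in the new order documented above:
 *   <CRC> <Symbol> <Module> <Export Type> <Namespace>
 * strsep() preserves empty fields (e.g. a symbol with no namespace),
 * which strtok() would silently collapse. Returns the field count. */
static int split_symvers(char *line, char *fields[5])
{
	int n = 0;
	char *tok;

	while (n < 5 && (tok = strsep(&line, "\t")) != NULL)
		fields[n++] = tok;
	return n;
}
```

With the example line from the diff, `fields[4]` comes out as `USB_STORAGE`; with a trailing tab and no namespace, it comes out as an empty string rather than being dropped.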
+3 -3
Documentation/networking/net_failover.rst
··· 8 8 ======== 9 9 10 10 The net_failover driver provides an automated failover mechanism via APIs 11 - to create and destroy a failover master netdev and mananges a primary and 11 + to create and destroy a failover master netdev and manages a primary and 12 12 standby slave netdevs that get registered via the generic failover 13 - infrastructrure. 13 + infrastructure. 14 14 15 15 The failover netdev acts a master device and controls 2 slave devices. The 16 16 original paravirtual interface is registered as 'standby' slave netdev and ··· 29 29 ============================================= 30 30 31 31 net_failover enables hypervisor controlled accelerated datapath to virtio-net 32 - enabled VMs in a transparent manner with no/minimal guest userspace chanages. 32 + enabled VMs in a transparent manner with no/minimal guest userspace changes. 33 33 34 34 To support this, the hypervisor needs to enable VIRTIO_NET_F_STANDBY 35 35 feature on the virtio-net interface and assign the same MAC address to both
+1 -1
Documentation/networking/rds.txt
··· 159 159 set SO_RDS_TRANSPORT on a socket for which the transport has 160 160 been previously attached explicitly (by SO_RDS_TRANSPORT) or 161 161 implicitly (via bind(2)) will return an error of EOPNOTSUPP. 162 - An attempt to set SO_RDS_TRANSPPORT to RDS_TRANS_NONE will 162 + An attempt to set SO_RDS_TRANSPORT to RDS_TRANS_NONE will 163 163 always return EINVAL. 164 164 165 165 RDMA for RDS
+2 -4
MAINTAINERS
··· 4076 4076 CISCO VIC ETHERNET NIC DRIVER 4077 4077 M: Christian Benvenuti <benve@cisco.com> 4078 4078 M: Govindarajulu Varadarajan <_govind@gmx.com> 4079 - M: Parvi Kaustubhi <pkaustub@cisco.com> 4080 4079 S: Supported 4081 4080 F: drivers/net/ethernet/cisco/enic/ 4082 4081 ··· 4574 4575 F: include/uapi/rdma/cxgb4-abi.h 4575 4576 4576 4577 CXGB4VF ETHERNET DRIVER (CXGB4VF) 4577 - M: Casey Leedom <leedom@chelsio.com> 4578 + M: Vishal Kulkarni <vishal@gmail.com> 4578 4579 L: netdev@vger.kernel.org 4579 4580 W: http://www.chelsio.com 4580 4581 S: Supported ··· 6200 6201 F: drivers/scsi/be2iscsi/ 6201 6202 6202 6203 Emulex 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER (be2net) 6203 - M: Sathya Perla <sathya.perla@broadcom.com> 6204 6204 M: Ajit Khaparde <ajit.khaparde@broadcom.com> 6205 6205 M: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> 6206 6206 M: Somnath Kotur <somnath.kotur@broadcom.com> ··· 11130 11132 L: linux-mips@vger.kernel.org 11131 11133 W: http://www.linux-mips.org/ 11132 11134 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux.git 11133 - Q: http://patchwork.linux-mips.org/project/linux-mips/list/ 11135 + Q: https://patchwork.kernel.org/project/linux-mips/list/ 11134 11136 S: Maintained 11135 11137 F: Documentation/devicetree/bindings/mips/ 11136 11138 F: Documentation/mips/
+2 -2
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 6 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc5 5 + EXTRAVERSION = -rc7 6 6 NAME = Kleptomaniac Octopus 7 7 8 8 # *DOCUMENTATION* ··· 1804 1804 1805 1805 -include $(foreach f,$(existing-targets),$(dir $(f)).$(notdir $(f)).cmd) 1806 1806 1807 - endif # config-targets 1807 + endif # config-build 1808 1808 endif # mixed-build 1809 1809 endif # need-sub-make 1810 1810
+2 -2
arch/arc/Kconfig
··· 154 154 help 155 155 Support for ARC HS38x Cores based on ARCv2 ISA 156 156 The notable features are: 157 - - SMP configurations of upto 4 core with coherency 157 + - SMP configurations of up to 4 cores with coherency 158 158 - Optional L2 Cache and IO-Coherency 159 159 - Revised Interrupt Architecture (multiple priorites, reg banks, 160 160 auto stack switch, auto regfile save/restore) ··· 192 192 help 193 193 In SMP configuration cores can be configured as Halt-on-reset 194 194 or they could all start at same time. For Halt-on-reset, non 195 - masters are parked until Master kicks them so they can start of 195 + masters are parked until Master kicks them so they can start off 196 196 at designated entry point. For other case, all jump to common 197 197 entry point and spin wait for Master's signal. 198 198
-2
arch/arc/configs/nps_defconfig
··· 21 21 CONFIG_MODULE_FORCE_LOAD=y 22 22 CONFIG_MODULE_UNLOAD=y 23 23 # CONFIG_BLK_DEV_BSG is not set 24 - # CONFIG_IOSCHED_DEADLINE is not set 25 - # CONFIG_IOSCHED_CFQ is not set 26 24 CONFIG_ARC_PLAT_EZNPS=y 27 25 CONFIG_SMP=y 28 26 CONFIG_NR_CPUS=4096
-2
arch/arc/configs/nsimosci_defconfig
··· 20 20 CONFIG_KPROBES=y 21 21 CONFIG_MODULES=y 22 22 # CONFIG_BLK_DEV_BSG is not set 23 - # CONFIG_IOSCHED_DEADLINE is not set 24 - # CONFIG_IOSCHED_CFQ is not set 25 23 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci" 26 24 # CONFIG_COMPACTION is not set 27 25 CONFIG_NET=y
-2
arch/arc/configs/nsimosci_hs_defconfig
··· 19 19 CONFIG_KPROBES=y 20 20 CONFIG_MODULES=y 21 21 # CONFIG_BLK_DEV_BSG is not set 22 - # CONFIG_IOSCHED_DEADLINE is not set 23 - # CONFIG_IOSCHED_CFQ is not set 24 22 CONFIG_ISA_ARCV2=y 25 23 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci_hs" 26 24 # CONFIG_COMPACTION is not set
-2
arch/arc/configs/nsimosci_hs_smp_defconfig
··· 14 14 CONFIG_KPROBES=y 15 15 CONFIG_MODULES=y 16 16 # CONFIG_BLK_DEV_BSG is not set 17 - # CONFIG_IOSCHED_DEADLINE is not set 18 - # CONFIG_IOSCHED_CFQ is not set 19 17 CONFIG_ISA_ARCV2=y 20 18 CONFIG_SMP=y 21 19 # CONFIG_ARC_TIMERS_64BIT is not set
+2
arch/arc/include/asm/fpu.h
··· 43 43 44 44 #endif /* !CONFIG_ISA_ARCOMPACT */ 45 45 46 + struct task_struct; 47 + 46 48 extern void fpu_save_restore(struct task_struct *p, struct task_struct *n); 47 49 48 50 #else /* !CONFIG_ARC_FPU_SAVE_RESTORE */
+2
arch/arc/include/asm/linkage.h
··· 29 29 .endm 30 30 31 31 #define ASM_NL ` /* use '`' to mark new line in macro */ 32 + #define __ALIGN .align 4 33 + #define __ALIGN_STR __stringify(__ALIGN) 32 34 33 35 /* annotation for data we want in DCCM - if enabled in .config */ 34 36 .macro ARCFP_DATA nm
+1 -1
arch/arc/kernel/setup.c
··· 8 8 #include <linux/delay.h> 9 9 #include <linux/root_dev.h> 10 10 #include <linux/clk.h> 11 - #include <linux/clk-provider.h> 12 11 #include <linux/clocksource.h> 13 12 #include <linux/console.h> 14 13 #include <linux/module.h> 15 14 #include <linux/cpu.h> 15 + #include <linux/of_clk.h> 16 16 #include <linux/of_fdt.h> 17 17 #include <linux/of.h> 18 18 #include <linux/cache.h>
+12 -15
arch/arc/kernel/troubleshoot.c
··· 104 104 if (IS_ERR(nm)) 105 105 nm = "?"; 106 106 } 107 - pr_info(" @off 0x%lx in [%s]\n" 108 - " VMA: 0x%08lx to 0x%08lx\n", 107 + pr_info(" @off 0x%lx in [%s] VMA: 0x%08lx to 0x%08lx\n", 109 108 vma->vm_start < TASK_UNMAPPED_BASE ? 110 109 address : address - vma->vm_start, 111 110 nm, vma->vm_start, vma->vm_end); ··· 119 120 unsigned int vec, cause_code; 120 121 unsigned long address; 121 122 122 - pr_info("\n[ECR ]: 0x%08lx => ", regs->event); 123 - 124 123 /* For Data fault, this is data address not instruction addr */ 125 124 address = current->thread.fault_address; 126 125 ··· 127 130 128 131 /* For DTLB Miss or ProtV, display the memory involved too */ 129 132 if (vec == ECR_V_DTLB_MISS) { 130 - pr_cont("Invalid %s @ 0x%08lx by insn @ 0x%08lx\n", 133 + pr_cont("Invalid %s @ 0x%08lx by insn @ %pS\n", 131 134 (cause_code == 0x01) ? "Read" : 132 135 ((cause_code == 0x02) ? "Write" : "EX"), 133 - address, regs->ret); 136 + address, (void *)regs->ret); 134 137 } else if (vec == ECR_V_ITLB_MISS) { 135 138 pr_cont("Insn could not be fetched\n"); 136 139 } else if (vec == ECR_V_MACH_CHK) { ··· 188 191 189 192 show_ecr_verbose(regs); 190 193 191 - pr_info("[EFA ]: 0x%08lx\n[BLINK ]: %pS\n[ERET ]: %pS\n", 192 - current->thread.fault_address, 193 - (void *)regs->blink, (void *)regs->ret); 194 - 195 194 if (user_mode(regs)) 196 195 show_faulting_vma(regs->ret); /* faulting code, not data */ 197 196 198 - pr_info("[STAT32]: 0x%08lx", regs->status32); 197 + pr_info("ECR: 0x%08lx EFA: 0x%08lx ERET: 0x%08lx\n", 198 + regs->event, current->thread.fault_address, regs->ret); 199 + 200 + pr_info("STAT32: 0x%08lx", regs->status32); 199 201 200 202 #define STS_BIT(r, bit) r->status32 & STATUS_##bit##_MASK ? #bit" " : "" 201 203 202 204 #ifdef CONFIG_ISA_ARCOMPACT 203 - pr_cont(" : %2s%2s%2s%2s%2s%2s%2s\n", 205 + pr_cont(" [%2s%2s%2s%2s%2s%2s%2s]", 204 206 (regs->status32 & STATUS_U_MASK) ? "U " : "K ", 205 207 STS_BIT(regs, DE), STS_BIT(regs, AE), 206 208 STS_BIT(regs, A2), STS_BIT(regs, A1), 207 209 STS_BIT(regs, E2), STS_BIT(regs, E1)); 208 210 #else 209 - pr_cont(" : %2s%2s%2s%2s\n", 211 + pr_cont(" [%2s%2s%2s%2s]", 210 212 STS_BIT(regs, IE), 211 213 (regs->status32 & STATUS_U_MASK) ? "U " : "K ", 212 214 STS_BIT(regs, DE), STS_BIT(regs, AE)); 213 215 #endif 214 - pr_info("BTA: 0x%08lx\t SP: 0x%08lx\t FP: 0x%08lx\n", 215 - regs->bta, regs->sp, regs->fp); 216 + pr_cont(" BTA: 0x%08lx\n", regs->bta); 217 + pr_info("BLK: %pS\n SP: 0x%08lx FP: 0x%08lx\n", 218 + (void *)regs->blink, regs->sp, regs->fp); 216 219 pr_info("LPS: 0x%08lx\tLPE: 0x%08lx\tLPC: 0x%08lx\n", 217 220 regs->lp_start, regs->lp_end, regs->lp_count); 218 221
+3 -1
arch/arm/Makefile
··· 307 307 ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y) 308 308 prepare: stack_protector_prepare 309 309 stack_protector_prepare: prepare0 310 - $(eval KBUILD_CFLAGS += \ 310 + $(eval SSP_PLUGIN_CFLAGS := \ 311 311 -fplugin-arg-arm_ssp_per_task_plugin-tso=$(shell \ 312 312 awk '{if ($$2 == "THREAD_SZ_ORDER") print $$3;}'\ 313 313 include/generated/asm-offsets.h) \ 314 314 -fplugin-arg-arm_ssp_per_task_plugin-offset=$(shell \ 315 315 awk '{if ($$2 == "TI_STACK_CANARY") print $$3;}'\ 316 316 include/generated/asm-offsets.h)) 317 + $(eval KBUILD_CFLAGS += $(SSP_PLUGIN_CFLAGS)) 318 + $(eval GCC_PLUGINS_CFLAGS += $(SSP_PLUGIN_CFLAGS)) 317 319 endif 318 320 319 321 all: $(notdir $(KBUILD_IMAGE))
+2 -2
arch/arm/boot/compressed/Makefile
··· 101 101 $(libfdt) $(libfdt_hdrs) hyp-stub.S 102 102 103 103 KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING 104 - KBUILD_CFLAGS += $(DISABLE_ARM_SSP_PER_TASK_PLUGIN) 105 104 106 105 ifeq ($(CONFIG_FUNCTION_TRACER),y) 107 106 ORIG_CFLAGS := $(KBUILD_CFLAGS) ··· 116 117 CFLAGS_fdt_rw.o := $(nossp-flags-y) 117 118 CFLAGS_fdt_wip.o := $(nossp-flags-y) 118 119 119 - ccflags-y := -fpic $(call cc-option,-mno-single-pic-base,) -fno-builtin -I$(obj) 120 + ccflags-y := -fpic $(call cc-option,-mno-single-pic-base,) -fno-builtin \ 121 + -I$(obj) $(DISABLE_ARM_SSP_PER_TASK_PLUGIN) 120 122 asflags-y := -DZIMAGE 121 123 122 124 # Supply kernel BSS size to the decompressor via a linker symbol.
+2
arch/arm/kernel/vdso.c
··· 95 95 */ 96 96 np = of_find_compatible_node(NULL, NULL, "arm,armv7-timer"); 97 97 if (!np) 98 + np = of_find_compatible_node(NULL, NULL, "arm,armv8-timer"); 99 + if (!np) 98 100 goto out_put; 99 101 100 102 if (of_property_read_bool(np, "arm,cpu-registers-not-fw-configured"))
+1 -1
arch/arm/lib/copy_from_user.S
··· 118 118 119 119 ENDPROC(arm_copy_from_user) 120 120 121 - .pushsection .fixup,"ax" 121 + .pushsection .text.fixup,"ax" 122 122 .align 0 123 123 copy_abort_preamble 124 124 ldmfd sp!, {r1, r2, r3}
+2
arch/arm64/boot/dts/freescale/fsl-ls1043-post.dtsi
··· 20 20 }; 21 21 22 22 &fman0 { 23 + fsl,erratum-a050385; 24 + 23 25 /* these aliases provide the FMan ports mapping */ 24 26 enet0: ethernet@e0000 { 25 27 };
+1 -3
arch/arm64/include/asm/mmu.h
··· 29 29 */ 30 30 #define ASID(mm) ((mm)->context.id.counter & 0xffff) 31 31 32 - extern bool arm64_use_ng_mappings; 33 - 34 32 static inline bool arm64_kernel_unmapped_at_el0(void) 35 33 { 36 - return arm64_use_ng_mappings; 34 + return cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0); 37 35 } 38 36 39 37 typedef void (*bp_hardening_cb_t)(void);
+4 -2
arch/arm64/include/asm/pgtable-prot.h
··· 23 23 24 24 #include <asm/pgtable-types.h> 25 25 26 + extern bool arm64_use_ng_mappings; 27 + 26 28 #define _PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED) 27 29 #define _PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S) 28 30 29 - #define PTE_MAYBE_NG (arm64_kernel_unmapped_at_el0() ? PTE_NG : 0) 30 - #define PMD_MAYBE_NG (arm64_kernel_unmapped_at_el0() ? PMD_SECT_NG : 0) 31 + #define PTE_MAYBE_NG (arm64_use_ng_mappings ? PTE_NG : 0) 32 + #define PMD_MAYBE_NG (arm64_use_ng_mappings ? PMD_SECT_NG : 0) 31 33 32 34 #define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG) 33 35 #define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
+1 -1
arch/arm64/include/asm/unistd.h
··· 25 25 #define __NR_compat_gettimeofday 78 26 26 #define __NR_compat_sigreturn 119 27 27 #define __NR_compat_rt_sigreturn 173 28 - #define __NR_compat_clock_getres 247 29 28 #define __NR_compat_clock_gettime 263 29 + #define __NR_compat_clock_getres 264 30 30 #define __NR_compat_clock_gettime64 403 31 31 #define __NR_compat_clock_getres_time64 406 32 32
+20 -5
arch/arm64/kernel/smp.c
··· 958 958 } 959 959 #endif 960 960 961 + /* 962 + * The number of CPUs online, not counting this CPU (which may not be 963 + * fully online and so not counted in num_online_cpus()). 964 + */ 965 + static inline unsigned int num_other_online_cpus(void) 966 + { 967 + unsigned int this_cpu_online = cpu_online(smp_processor_id()); 968 + 969 + return num_online_cpus() - this_cpu_online; 970 + } 971 + 961 972 void smp_send_stop(void) 962 973 { 963 974 unsigned long timeout; 964 975 965 - if (num_online_cpus() > 1) { 976 + if (num_other_online_cpus()) { 966 977 cpumask_t mask; 967 978 968 979 cpumask_copy(&mask, cpu_online_mask); ··· 986 975 987 976 /* Wait up to one second for other CPUs to stop */ 988 977 timeout = USEC_PER_SEC; 989 - while (num_online_cpus() > 1 && timeout--) 978 + while (num_other_online_cpus() && timeout--) 990 979 udelay(1); 991 980 992 - if (num_online_cpus() > 1) 981 + if (num_other_online_cpus()) 993 982 pr_warn("SMP: failed to stop secondary CPUs %*pbl\n", 994 983 cpumask_pr_args(cpu_online_mask)); 995 984 ··· 1012 1001 1013 1002 cpus_stopped = 1; 1014 1003 1015 - if (num_online_cpus() == 1) { 1004 + /* 1005 + * If this cpu is the only one alive at this point in time, online or 1006 + * not, there are no stop messages to be sent around, so just back out. 1007 + */ 1008 + if (num_other_online_cpus() == 0) { 1016 1009 sdei_mask_local_cpu(); 1017 1010 return; 1018 1011 } ··· 1024 1009 cpumask_copy(&mask, cpu_online_mask); 1025 1010 cpumask_clear_cpu(smp_processor_id(), &mask); 1026 1011 1027 - atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1); 1012 + atomic_set(&waiting_for_crash_ipi, num_other_online_cpus()); 1028 1013 1029 1014 pr_crit("SMP: stopping secondary CPUs\n"); 1030 1015 smp_cross_call(&mask, IPI_CPU_CRASH_STOP);
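The subtlety the new helper captures is that the CPU sending the stop IPIs may not itself be marked online (early boot, kdump on a not-yet-online CPU), so "everyone else" is not always `num_online_cpus() - 1`. A minimal userspace sketch of that counting logic — the names mirror the kernel helpers, but the `online` array is only a stand-in for `cpu_online_mask`:

```c
#include <stdbool.h>

/* Userspace model: count online CPUs other than 'this_cpu'.
 * Subtract 1 only if this CPU is itself counted as online,
 * instead of unconditionally assuming it is. */
static unsigned int num_other_online_cpus(const bool *online,
					  unsigned int ncpus,
					  unsigned int this_cpu)
{
	unsigned int n = 0, cpu;

	for (cpu = 0; cpu < ncpus; cpu++)	/* num_online_cpus() */
		if (online[cpu])
			n++;
	return n - (online[this_cpu] ? 1 : 0);	/* cpu_online(this cpu) */
}
```

When the calling CPU is offline in the mask, the old `num_online_cpus() - 1` would undercount by one and the stop/crash paths could skip a still-running CPU; the model above returns the full online count in that case.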
+28 -16
arch/mips/boot/dts/ingenic/ci20.dts
··· 4 4 #include "jz4780.dtsi" 5 5 #include <dt-bindings/clock/ingenic,tcu.h> 6 6 #include <dt-bindings/gpio/gpio.h> 7 + #include <dt-bindings/interrupt-controller/irq.h> 8 + #include <dt-bindings/regulator/active-semi,8865-regulator.h> 7 9 8 10 / { 9 11 compatible = "img,ci20", "ingenic,jz4780"; ··· 165 163 166 164 regulators { 167 165 vddcore: SUDCDC1 { 168 - regulator-name = "VDDCORE"; 166 + regulator-name = "DCDC_REG1"; 169 167 regulator-min-microvolt = <1100000>; 170 168 regulator-max-microvolt = <1100000>; 171 169 regulator-always-on; 172 170 }; 173 171 vddmem: SUDCDC2 { 174 - regulator-name = "VDDMEM"; 172 + regulator-name = "DCDC_REG2"; 175 173 regulator-min-microvolt = <1500000>; 176 174 regulator-max-microvolt = <1500000>; 177 175 regulator-always-on; 178 176 }; 179 177 vcc_33: SUDCDC3 { 180 - regulator-name = "VCC33"; 178 + regulator-name = "DCDC_REG3"; 181 179 regulator-min-microvolt = <3300000>; 182 180 regulator-max-microvolt = <3300000>; 183 181 regulator-always-on; 184 182 }; 185 183 vcc_50: SUDCDC4 { 186 - regulator-name = "VCC50"; 184 + regulator-name = "SUDCDC_REG4"; 187 185 regulator-min-microvolt = <5000000>; 188 186 regulator-max-microvolt = <5000000>; 189 187 regulator-always-on; 190 188 }; 191 189 vcc_25: LDO_REG5 { 192 - regulator-name = "VCC25"; 190 + regulator-name = "LDO_REG5"; 193 191 regulator-min-microvolt = <2500000>; 194 192 regulator-max-microvolt = <2500000>; 195 193 regulator-always-on; 196 194 }; 197 195 wifi_io: LDO_REG6 { 198 - regulator-name = "WIFIIO"; 196 + regulator-name = "LDO_REG6"; 199 197 regulator-min-microvolt = <2500000>; 200 198 regulator-max-microvolt = <2500000>; 201 199 regulator-always-on; 202 200 }; 203 201 vcc_28: LDO_REG7 { 204 - regulator-name = "VCC28"; 202 + regulator-name = "LDO_REG7"; 205 203 regulator-min-microvolt = <2800000>; 206 204 regulator-max-microvolt = <2800000>; 207 205 regulator-always-on; 208 206 }; 209 207 vcc_15: LDO_REG8 { 210 - regulator-name = "VCC15"; 208 + regulator-name = "LDO_REG8"; 211 209 regulator-min-microvolt = <1500000>; 212 210 regulator-max-microvolt = <1500000>; 213 211 regulator-always-on; 214 212 }; 215 - vcc_18: LDO_REG9 { 216 - regulator-name = "VCC18"; 217 - regulator-min-microvolt = <1800000>; 218 - regulator-max-microvolt = <1800000>; 213 + vrtc_18: LDO_REG9 { 214 + regulator-name = "LDO_REG9"; 215 + /* Despite the datasheet stating 3.3V 216 + * for REG9 and the driver expecting that, 217 + * REG9 outputs 1.8V. 218 + * Likely the CI20 uses a proprietary 219 + * factory programmed chip variant. 220 + * Since this is a simple on/off LDO the 221 + * exact values do not matter. 222 + */ 223 + regulator-min-microvolt = <3300000>; 224 + regulator-max-microvolt = <3300000>; 219 225 regulator-always-on; 220 226 }; 221 227 vcc_11: LDO_REG10 { 222 - regulator-name = "VCC11"; 223 - regulator-min-microvolt = <1100000>; 224 - regulator-max-microvolt = <1100000>; 228 + regulator-name = "LDO_REG10"; 229 + regulator-min-microvolt = <1200000>; 230 + regulator-max-microvolt = <1200000>; 225 231 regulator-always-on; 226 232 }; 227 233 }; ··· 271 261 rtc@51 { 272 262 compatible = "nxp,pcf8563"; 273 263 reg = <0x51>; 274 - interrupts = <110>; 264 + 265 + interrupt-parent = <&gpf>; 266 + interrupts = <30 IRQ_TYPE_LEVEL_LOW>; 275 267 }; 276 268 }; 277 269
+2 -1
arch/mips/kernel/setup.c
··· 605 605 * If we're configured to take boot arguments from DT, look for those 606 606 * now. 607 607 */ 608 - if (IS_ENABLED(CONFIG_MIPS_CMDLINE_FROM_DTB)) 608 + if (IS_ENABLED(CONFIG_MIPS_CMDLINE_FROM_DTB) || 609 + IS_ENABLED(CONFIG_MIPS_CMDLINE_DTB_EXTEND)) 609 610 of_scan_flat_dt(bootcmdline_scan_chosen, &dt_bootargs); 610 611 #endif 611 612
+1
arch/powerpc/kvm/book3s_pr.c
··· 1817 1817 { 1818 1818 struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu); 1819 1819 1820 + kvmppc_mmu_destroy_pr(vcpu); 1820 1821 free_page((unsigned long)vcpu->arch.shared & PAGE_MASK); 1821 1822 #ifdef CONFIG_KVM_BOOK3S_32_HANDLER 1822 1823 kfree(vcpu->arch.shadow_vcpu);
-2
arch/powerpc/kvm/powerpc.c
··· 759 759 return 0; 760 760 761 761 out_vcpu_uninit: 762 - kvmppc_mmu_destroy(vcpu); 763 762 kvmppc_subarch_vcpu_uninit(vcpu); 764 763 return err; 765 764 } ··· 791 792 792 793 kvmppc_core_vcpu_free(vcpu); 793 794 794 - kvmppc_mmu_destroy(vcpu); 795 795 kvmppc_subarch_vcpu_uninit(vcpu); 796 796 } 797 797
+2 -7
arch/powerpc/mm/kasan/kasan_init_32.c
··· 120 120 unsigned long k_cur; 121 121 phys_addr_t pa = __pa(kasan_early_shadow_page); 122 122 123 - if (!early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) { 124 - int ret = kasan_init_shadow_page_tables(k_start, k_end); 125 - 126 - if (ret) 127 - panic("kasan: kasan_init_shadow_page_tables() failed"); 128 - } 129 123 for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) { 130 124 pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur); 131 125 pte_t *ptep = pte_offset_kernel(pmd, k_cur); ··· 137 143 int ret; 138 144 struct memblock_region *reg; 139 145 140 - if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) { 146 + if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE) || 147 + IS_ENABLED(CONFIG_KASAN_VMALLOC)) { 141 148 ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END); 142 149 143 150 if (ret)
+17 -1
arch/s390/kvm/kvm-s390.c
··· 3268 3268 /* Initial reset is a superset of the normal reset */ 3269 3269 kvm_arch_vcpu_ioctl_normal_reset(vcpu); 3270 3270 3271 - /* this equals initial cpu reset in pop, but we don't switch to ESA */ 3271 + /* 3272 + * This equals initial cpu reset in pop, but we don't switch to ESA. 3273 + * We do not only reset the internal data, but also ... 3274 + */ 3272 3275 vcpu->arch.sie_block->gpsw.mask = 0; 3273 3276 vcpu->arch.sie_block->gpsw.addr = 0; 3274 3277 kvm_s390_set_prefix(vcpu, 0); ··· 3281 3278 memset(vcpu->arch.sie_block->gcr, 0, sizeof(vcpu->arch.sie_block->gcr)); 3282 3279 vcpu->arch.sie_block->gcr[0] = CR0_INITIAL_MASK; 3283 3280 vcpu->arch.sie_block->gcr[14] = CR14_INITIAL_MASK; 3281 + 3282 + /* ... the data in sync regs */ 3283 + memset(vcpu->run->s.regs.crs, 0, sizeof(vcpu->run->s.regs.crs)); 3284 + vcpu->run->s.regs.ckc = 0; 3285 + vcpu->run->s.regs.crs[0] = CR0_INITIAL_MASK; 3286 + vcpu->run->s.regs.crs[14] = CR14_INITIAL_MASK; 3287 + vcpu->run->psw_addr = 0; 3288 + vcpu->run->psw_mask = 0; 3289 + vcpu->run->s.regs.todpr = 0; 3290 + vcpu->run->s.regs.cputm = 0; 3291 + vcpu->run->s.regs.ckc = 0; 3292 + vcpu->run->s.regs.pp = 0; 3293 + vcpu->run->s.regs.gbea = 1; 3284 3294 vcpu->run->s.regs.fpc = 0; 3285 3295 vcpu->arch.sie_block->gbea = 1; 3286 3296 vcpu->arch.sie_block->pp = 0;
+3 -2
arch/x86/Makefile
··· 194 194 avx512_instr :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,-DCONFIG_AS_AVX512=1) 195 195 sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=1) 196 196 sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1) 197 + adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1) 197 198 198 - KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) 199 - KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) 199 + KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr) 200 + KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr) 200 201 201 202 KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE) 202 203
+6 -1
arch/x86/crypto/Makefile
··· 11 11 avx512_supported :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,yes,no) 12 12 sha1_ni_supported :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,yes,no) 13 13 sha256_ni_supported :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,yes,no) 14 + adx_supported := $(call as-instr,adox %r10$(comma)%r10,yes,no) 14 15 15 16 obj-$(CONFIG_CRYPTO_GLUE_HELPER_X86) += glue_helper.o 16 17 ··· 40 39 41 40 obj-$(CONFIG_CRYPTO_NHPOLY1305_SSE2) += nhpoly1305-sse2.o 42 41 obj-$(CONFIG_CRYPTO_NHPOLY1305_AVX2) += nhpoly1305-avx2.o 43 - obj-$(CONFIG_CRYPTO_CURVE25519_X86) += curve25519-x86_64.o 42 + 43 + # These modules require the assembler to support ADX. 44 + ifeq ($(adx_supported),yes) 45 + obj-$(CONFIG_CRYPTO_CURVE25519_X86) += curve25519-x86_64.o 46 + endif 44 47 45 48 # These modules require assembler to support AVX. 46 49 ifeq ($(avx_supported),yes)
+7 -10
arch/x86/events/amd/uncore.c
··· 190 190 191 191 /* 192 192 * NB and Last level cache counters (MSRs) are shared across all cores 193 - * that share the same NB / Last level cache. Interrupts can be directed 194 - * to a single target core, however, event counts generated by processes 195 - * running on other cores cannot be masked out. So we do not support 196 - * sampling and per-thread events. 193 + * that share the same NB / Last level cache. On family 16h and below, 194 + * Interrupts can be directed to a single target core, however, event 195 + * counts generated by processes running on other cores cannot be masked 196 + * out. So we do not support sampling and per-thread events via 197 + * CAP_NO_INTERRUPT, and we do not enable counter overflow interrupts: 197 198 */ 198 - if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK) 199 - return -EINVAL; 200 - 201 - /* and we do not enable counter overflow interrupts */ 202 199 hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB; 203 200 hwc->idx = -1; 204 201 ··· 303 306 .start = amd_uncore_start, 304 307 .stop = amd_uncore_stop, 305 308 .read = amd_uncore_read, 306 - .capabilities = PERF_PMU_CAP_NO_EXCLUDE, 309 + .capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT, 307 310 }; 308 311 309 312 static struct pmu amd_llc_pmu = { ··· 314 317 .start = amd_uncore_start, 315 318 .stop = amd_uncore_stop, 316 319 .read = amd_uncore_read, 317 - .capabilities = PERF_PMU_CAP_NO_EXCLUDE, 320 + .capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT, 318 321 }; 319 322 320 323 static struct amd_uncore *amd_uncore_alloc(unsigned int cpu)
-1
arch/x86/include/asm/kvm_emulate.h
··· 360 360 u64 d; 361 361 unsigned long _eip; 362 362 struct operand memop; 363 - /* Fields above regs are cleared together. */ 364 363 unsigned long _regs[NR_VCPU_REGS]; 365 364 struct operand *memopp; 366 365 struct fetch_cache fetch;
+8 -6
arch/x86/kernel/apic/vector.c
··· 838 838 bool managed = apicd->is_managed; 839 839 840 840 /* 841 - * This should never happen. Managed interrupts are not 842 - * migrated except on CPU down, which does not involve the 843 - * cleanup vector. But try to keep the accounting correct 844 - * nevertheless. 841 + * Managed interrupts are usually not migrated away 842 + * from an online CPU, but CPU isolation 'managed_irq' 843 + * can make that happen. 844 + * 1) Activation does not take the isolation into account 845 + * to keep the code simple 846 + * 2) Migration away from an isolated CPU can happen when 847 + * a non-isolated CPU which is in the calculated 848 + * affinity mask comes online. 845 849 */ 846 - WARN_ON_ONCE(managed); 847 - 848 850 trace_vector_free_moved(apicd->irq, cpu, vector, managed); 849 851 irq_matrix_free(vector_matrix, cpu, vector, managed); 850 852 per_cpu(vector_irq, cpu)[vector] = VECTOR_UNUSED;
+5 -4
arch/x86/kernel/cpu/mce/intel.c
··· 493 493 return; 494 494 495 495 if ((val & 3UL) == 1UL) { 496 - /* PPIN available but disabled: */ 496 + /* PPIN locked in disabled mode */ 497 497 return; 498 498 } 499 499 500 - /* If PPIN is disabled, but not locked, try to enable: */ 501 - if (!(val & 3UL)) { 500 + /* If PPIN is disabled, try to enable */ 501 + if (!(val & 2UL)) { 502 502 wrmsrl_safe(MSR_PPIN_CTL, val | 2UL); 503 503 rdmsrl_safe(MSR_PPIN_CTL, &val); 504 504 } 505 505 506 - if ((val & 3UL) == 2UL) 506 + /* Is the enable bit set? */ 507 + if (val & 2UL) 507 508 set_cpu_cap(c, X86_FEATURE_INTEL_PPIN); 508 509 } 509 510 }
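The mce/intel.c hunk above reworks the MSR_PPIN_CTL decode: bit 0 is the lock-out bit and bit 1 is the enable bit, so "locked in disabled mode" is value 1, and usability only requires the enable bit. A minimal standalone sketch of that two-bit decode (macro and function names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* MSR_PPIN_CTL layout as used in the patch: bit 0 = LockOut, bit 1 = Enable. */
#define PPIN_LOCKOUT (1ULL << 0)
#define PPIN_ENABLE  (1ULL << 1)

/* PPIN is usable whenever the enable bit is set (the patch checks val & 2UL). */
static bool ppin_usable(uint64_t val)
{
	return val & PPIN_ENABLE;
}

/* LockOut set with Enable clear means PPIN can never be enabled again
 * until reset (the "(val & 3UL) == 1UL" early return in the patch). */
static bool ppin_locked_disabled(uint64_t val)
{
	return (val & (PPIN_LOCKOUT | PPIN_ENABLE)) == PPIN_LOCKOUT;
}
```

This mirrors why the patch changed the retry condition from `!(val & 3UL)` to `!(val & 2UL)`: only the enable bit matters for the write attempt, and the locked-disabled case was already filtered out.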
+7 -2
arch/x86/kernel/cpu/mce/therm_throt.c
··· 486 486 { 487 487 struct thermal_state *state = &per_cpu(thermal_state, cpu); 488 488 struct device *dev = get_cpu_device(cpu); 489 + u32 l; 489 490 490 - cancel_delayed_work(&state->package_throttle.therm_work); 491 - cancel_delayed_work(&state->core_throttle.therm_work); 491 + /* Mask the thermal vector before draining any pending work */ 492 + l = apic_read(APIC_LVTTHMR); 493 + apic_write(APIC_LVTTHMR, l | APIC_LVT_MASKED); 494 + 495 + cancel_delayed_work_sync(&state->package_throttle.therm_work); 496 + cancel_delayed_work_sync(&state->core_throttle.therm_work); 492 497 493 498 state->package_throttle.rate_control_active = false; 494 499 state->core_throttle.rate_control_active = false;
+1 -1
arch/x86/kvm/Kconfig
··· 68 68 depends on (X86_64 && !KASAN) || !COMPILE_TEST 69 69 depends on EXPERT 70 70 help 71 - Add -Werror to the build flags for (and only for) i915.ko. 71 + Add -Werror to the build flags for KVM. 72 72 73 73 If in doubt, say "N". 74 74
+1
arch/x86/kvm/emulate.c
··· 5173 5173 ctxt->fetch.ptr = ctxt->fetch.data; 5174 5174 ctxt->fetch.end = ctxt->fetch.data + insn_len; 5175 5175 ctxt->opcode_len = 1; 5176 + ctxt->intercept = x86_intercept_none; 5176 5177 if (insn_len > 0) 5177 5178 memcpy(ctxt->fetch.data, insn, insn_len); 5178 5179 else {
+5 -2
arch/x86/kvm/ioapic.c
··· 378 378 if (e->fields.delivery_mode == APIC_DM_FIXED) { 379 379 struct kvm_lapic_irq irq; 380 380 381 - irq.shorthand = APIC_DEST_NOSHORT; 382 381 irq.vector = e->fields.vector; 383 382 irq.delivery_mode = e->fields.delivery_mode << 8; 384 - irq.dest_id = e->fields.dest_id; 385 383 irq.dest_mode = 386 384 kvm_lapic_irq_dest_mode(!!e->fields.dest_mode); 385 + irq.level = false; 386 + irq.trig_mode = e->fields.trig_mode; 387 + irq.shorthand = APIC_DEST_NOSHORT; 388 + irq.dest_id = e->fields.dest_id; 389 + irq.msi_redir_hint = false; 387 390 bitmap_zero(&vcpu_bitmap, 16); 388 391 kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq, 389 392 &vcpu_bitmap);
+2 -1
arch/x86/kvm/svm.c
··· 6312 6312 enum exit_fastpath_completion *exit_fastpath) 6313 6313 { 6314 6314 if (!is_guest_mode(vcpu) && 6315 - to_svm(vcpu)->vmcb->control.exit_code == EXIT_REASON_MSR_WRITE) 6315 + to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR && 6316 + to_svm(vcpu)->vmcb->control.exit_info_1) 6316 6317 *exit_fastpath = handle_fastpath_set_msr_irqoff(vcpu); 6317 6318 } 6318 6319
+3 -2
arch/x86/kvm/vmx/nested.c
··· 224 224 return; 225 225 226 226 kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map, true); 227 - vmx->nested.hv_evmcs_vmptr = -1ull; 227 + vmx->nested.hv_evmcs_vmptr = 0; 228 228 vmx->nested.hv_evmcs = NULL; 229 229 } 230 230 ··· 1923 1923 if (!nested_enlightened_vmentry(vcpu, &evmcs_gpa)) 1924 1924 return 1; 1925 1925 1926 - if (unlikely(evmcs_gpa != vmx->nested.hv_evmcs_vmptr)) { 1926 + if (unlikely(!vmx->nested.hv_evmcs || 1927 + evmcs_gpa != vmx->nested.hv_evmcs_vmptr)) { 1927 1928 if (!vmx->nested.hv_evmcs) 1928 1929 vmx->nested.current_vmptr = -1ull; 1929 1930
+14 -2
arch/x86/kvm/vmx/vmx.c
··· 2338 2338 kvm_cpu_vmxoff(); 2339 2339 } 2340 2340 2341 + /* 2342 + * There is no X86_FEATURE for SGX yet, but anyway we need to query CPUID 2343 + * directly instead of going through cpu_has(), to ensure KVM is trapping 2344 + * ENCLS whenever it's supported in hardware. It does not matter whether 2345 + * the host OS supports or has enabled SGX. 2346 + */ 2347 + static bool cpu_has_sgx(void) 2348 + { 2349 + return cpuid_eax(0) >= 0x12 && (cpuid_eax(0x12) & BIT(0)); 2350 + } 2351 + 2341 2352 static __init int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt, 2342 2353 u32 msr, u32 *result) 2343 2354 { ··· 2429 2418 SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE | 2430 2419 SECONDARY_EXEC_PT_USE_GPA | 2431 2420 SECONDARY_EXEC_PT_CONCEAL_VMX | 2432 - SECONDARY_EXEC_ENABLE_VMFUNC | 2433 - SECONDARY_EXEC_ENCLS_EXITING; 2421 + SECONDARY_EXEC_ENABLE_VMFUNC; 2422 + if (cpu_has_sgx()) 2423 + opt2 |= SECONDARY_EXEC_ENCLS_EXITING; 2434 2424 if (adjust_vmx_controls(min2, opt2, 2435 2425 MSR_IA32_VMX_PROCBASED_CTLS2, 2436 2426 &_cpu_based_2nd_exec_control) < 0)
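The `cpu_has_sgx()` helper added above reduces to two CPUID facts: the maximum basic leaf must cover 0x12, and bit 0 of leaf 0x12 EAX (SGX1) must be set. A hedged sketch of just that predicate, with the CPUID values passed in as plain integers rather than executed (so it is illustrative logic only, not a hardware probe):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Decide whether ENCLS exiting should be requested, given
 * cpuid_eax(0) and cpuid_eax(0x12) as inputs. */
static bool cpu_has_sgx_from(uint32_t max_basic_leaf, uint32_t leaf12_eax)
{
	/* Leaf 0x12 is only valid if the CPU enumerates that far,
	 * and bit 0 of its EAX reports SGX1 support. */
	return max_basic_leaf >= 0x12 && (leaf12_eax & 1u);
}
```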
+5 -3
arch/x86/kvm/x86.c
··· 7195 7195 7196 7196 cpu = get_cpu(); 7197 7197 policy = cpufreq_cpu_get(cpu); 7198 - if (policy && policy->cpuinfo.max_freq) 7199 - max_tsc_khz = policy->cpuinfo.max_freq; 7198 + if (policy) { 7199 + if (policy->cpuinfo.max_freq) 7200 + max_tsc_khz = policy->cpuinfo.max_freq; 7201 + cpufreq_cpu_put(policy); 7202 + } 7200 7203 put_cpu(); 7201 - cpufreq_cpu_put(policy); 7202 7204 #endif 7203 7205 cpufreq_register_notifier(&kvmclock_cpufreq_notifier_block, 7204 7206 CPUFREQ_TRANSITION_NOTIFIER);
+24 -2
arch/x86/mm/fault.c
··· 190 190 return pmd_k; 191 191 } 192 192 193 - void vmalloc_sync_all(void) 193 + static void vmalloc_sync(void) 194 194 { 195 195 unsigned long address; 196 196 ··· 215 215 } 216 216 spin_unlock(&pgd_lock); 217 217 } 218 + } 219 + 220 + void vmalloc_sync_mappings(void) 221 + { 222 + vmalloc_sync(); 223 + } 224 + 225 + void vmalloc_sync_unmappings(void) 226 + { 227 + vmalloc_sync(); 218 228 } 219 229 220 230 /* ··· 329 319 330 320 #else /* CONFIG_X86_64: */ 331 321 332 - void vmalloc_sync_all(void) 322 + void vmalloc_sync_mappings(void) 333 323 { 324 + /* 325 + * 64-bit mappings might allocate new p4d/pud pages 326 + * that need to be propagated to all tasks' PGDs. 327 + */ 334 328 sync_global_pgds(VMALLOC_START & PGDIR_MASK, VMALLOC_END); 329 + } 330 + 331 + void vmalloc_sync_unmappings(void) 332 + { 333 + /* 334 + * Unmappings never allocate or free p4d/pud pages. 335 + * No work is required here. 336 + */ 335 337 } 336 338 337 339 /*
+18
arch/x86/mm/ioremap.c
··· 106 106 return 0; 107 107 } 108 108 109 + /* 110 + * The EFI runtime services data area is not covered by walk_mem_res(), but must 111 + * be mapped encrypted when SEV is active. 112 + */ 113 + static void __ioremap_check_other(resource_size_t addr, struct ioremap_desc *desc) 114 + { 115 + if (!sev_active()) 116 + return; 117 + 118 + if (efi_mem_type(addr) == EFI_RUNTIME_SERVICES_DATA) 119 + desc->flags |= IORES_MAP_ENCRYPTED; 120 + } 121 + 109 122 static int __ioremap_collect_map_flags(struct resource *res, void *arg) 110 123 { 111 124 struct ioremap_desc *desc = arg; ··· 137 124 * To avoid multiple resource walks, this function walks resources marked as 138 125 * IORESOURCE_MEM and IORESOURCE_BUSY and looking for system RAM and/or a 139 126 * resource described not as IORES_DESC_NONE (e.g. IORES_DESC_ACPI_TABLES). 127 + * 128 + * After that, deal with misc other ranges in __ioremap_check_other() which do 129 + * not fall into the above category. 140 130 */ 141 131 static void __ioremap_check_mem(resource_size_t addr, unsigned long size, 142 132 struct ioremap_desc *desc) ··· 151 135 memset(desc, 0, sizeof(struct ioremap_desc)); 152 136 153 137 walk_mem_res(start, end, desc, __ioremap_collect_map_flags); 138 + 139 + __ioremap_check_other(addr, desc); 154 140 } 155 141 156 142 /*
+1 -1
block/blk-iocost.c
··· 1318 1318 return false; 1319 1319 1320 1320 /* is something in flight? */ 1321 - if (atomic64_read(&iocg->done_vtime) < atomic64_read(&iocg->vtime)) 1321 + if (atomic64_read(&iocg->done_vtime) != atomic64_read(&iocg->vtime)) 1322 1322 return false; 1323 1323 1324 1324 return true;
+22
block/blk-mq-sched.c
··· 398 398 WARN_ON(e && (rq->tag != -1)); 399 399 400 400 if (blk_mq_sched_bypass_insert(hctx, !!e, rq)) { 401 + /* 402 + * Firstly normal IO request is inserted to scheduler queue or 403 + * sw queue, meantime we add flush request to dispatch queue( 404 + * hctx->dispatch) directly and there is at most one in-flight 405 + * flush request for each hw queue, so it doesn't matter to add 406 + * flush request to tail or front of the dispatch queue. 407 + * 408 + * Secondly in case of NCQ, flush request belongs to non-NCQ 409 + * command, and queueing it will fail when there is any 410 + * in-flight normal IO request(NCQ command). When adding flush 411 + * rq to the front of hctx->dispatch, it is easier to introduce 412 + * extra time to flush rq's latency because of S_SCHED_RESTART 413 + * compared with adding to the tail of dispatch queue, then 414 + * chance of flush merge is increased, and less flush requests 415 + * will be issued to controller. It is observed that ~10% time 416 + * is saved in blktests block/004 on disk attached to AHCI/NCQ 417 + * drive when adding flush rq to the front of hctx->dispatch. 418 + * 419 + * Simply queue flush rq to the front of hctx->dispatch so that 420 + * intensive flush workloads can benefit in case of NCQ HW. 421 + */ 422 + at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head; 401 423 blk_mq_request_bypass_insert(rq, at_head, false); 402 424 goto run; 403 425 }
+36
block/genhd.c
··· 301 301 } 302 302 EXPORT_SYMBOL_GPL(disk_map_sector_rcu); 303 303 304 + /** 305 + * disk_has_partitions 306 + * @disk: gendisk of interest 307 + * 308 + * Walk through the partition table and check if valid partition exists. 309 + * 310 + * CONTEXT: 311 + * Don't care. 312 + * 313 + * RETURNS: 314 + * True if the gendisk has at least one valid non-zero size partition. 315 + * Otherwise false. 316 + */ 317 + bool disk_has_partitions(struct gendisk *disk) 318 + { 319 + struct disk_part_tbl *ptbl; 320 + int i; 321 + bool ret = false; 322 + 323 + rcu_read_lock(); 324 + ptbl = rcu_dereference(disk->part_tbl); 325 + 326 + /* Iterate partitions skipping the whole device at index 0 */ 327 + for (i = 1; i < ptbl->len; i++) { 328 + if (rcu_dereference(ptbl->part[i])) { 329 + ret = true; 330 + break; 331 + } 332 + } 333 + 334 + rcu_read_unlock(); 335 + 336 + return ret; 337 + } 338 + EXPORT_SYMBOL_GPL(disk_has_partitions); 339 + 304 340 /* 305 341 * Can be deleted altogether. Later. 306 342 *
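The new `disk_has_partitions()` above walks the partition slot array under RCU, skipping index 0 (the whole-device entry). A simplified, RCU-free sketch of the same walk over a plain pointer array (names are illustrative; the real code uses `rcu_dereference` on `disk->part_tbl`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Report whether any slot past index 0 (the whole-device entry)
 * is populated, i.e. whether the disk has a real partition. */
static bool has_partitions(void *const *slots, size_t len)
{
	for (size_t i = 1; i < len; i++)
		if (slots[i])
			return true;
	return false;
}
```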
+1 -1
drivers/acpi/apei/ghes.c
··· 171 171 * New allocation must be visible in all pgd before it can be found by 172 172 * an NMI allocating from the pool. 173 173 */ 174 - vmalloc_sync_all(); 174 + vmalloc_sync_mappings(); 175 175 176 176 rc = gen_pool_add(ghes_estatus_pool, addr, PAGE_ALIGN(len), -1); 177 177 if (rc)
+1
drivers/android/binderfs.c
··· 439 439 inode->i_uid = info->root_uid; 440 440 inode->i_gid = info->root_gid; 441 441 442 + refcount_set(&device->ref, 1); 442 443 device->binderfs_inode = inode; 443 444 device->miscdev.minor = minor; 444 445
+1 -1
drivers/atm/nicstar.c
··· 91 91 #ifdef GENERAL_DEBUG 92 92 #define PRINTK(args...) printk(args) 93 93 #else 94 - #define PRINTK(args...) 94 + #define PRINTK(args...) do {} while (0) 95 95 #endif /* GENERAL_DEBUG */ 96 96 97 97 #ifdef EXTRA_DEBUG
+8 -8
drivers/auxdisplay/Kconfig
··· 111 111 If unsure, say N. 112 112 113 113 config CFAG12864B_RATE 114 - int "Refresh rate (hertz)" 114 + int "Refresh rate (hertz)" 115 115 depends on CFAG12864B 116 116 default "20" 117 117 ---help--- ··· 329 329 330 330 config PANEL_LCD_PIN_E 331 331 depends on PANEL_PROFILE="0" && PANEL_LCD="1" && PANEL_LCD_PROTO="0" 332 - int "Parallel port pin number & polarity connected to the LCD E signal (-17...17) " 332 + int "Parallel port pin number & polarity connected to the LCD E signal (-17...17) " 333 333 range -17 17 334 334 default 14 335 335 ---help--- ··· 344 344 345 345 config PANEL_LCD_PIN_RS 346 346 depends on PANEL_PROFILE="0" && PANEL_LCD="1" && PANEL_LCD_PROTO="0" 347 - int "Parallel port pin number & polarity connected to the LCD RS signal (-17...17) " 347 + int "Parallel port pin number & polarity connected to the LCD RS signal (-17...17) " 348 348 range -17 17 349 349 default 17 350 350 ---help--- ··· 359 359 360 360 config PANEL_LCD_PIN_RW 361 361 depends on PANEL_PROFILE="0" && PANEL_LCD="1" && PANEL_LCD_PROTO="0" 362 - int "Parallel port pin number & polarity connected to the LCD RW signal (-17...17) " 362 + int "Parallel port pin number & polarity connected to the LCD RW signal (-17...17) " 363 363 range -17 17 364 364 default 16 365 365 ---help--- ··· 374 374 375 375 config PANEL_LCD_PIN_SCL 376 376 depends on PANEL_PROFILE="0" && PANEL_LCD="1" && PANEL_LCD_PROTO!="0" 377 - int "Parallel port pin number & polarity connected to the LCD SCL signal (-17...17) " 377 + int "Parallel port pin number & polarity connected to the LCD SCL signal (-17...17) " 378 378 range -17 17 379 379 default 1 380 380 ---help--- ··· 389 389 390 390 config PANEL_LCD_PIN_SDA 391 391 depends on PANEL_PROFILE="0" && PANEL_LCD="1" && PANEL_LCD_PROTO!="0" 392 - int "Parallel port pin number & polarity connected to the LCD SDA signal (-17...17) " 392 + int "Parallel port pin number & polarity connected to the LCD SDA signal (-17...17) " 393 393 range -17 17 394 394 default 2 
395 395 ---help--- ··· 404 404 405 405 config PANEL_LCD_PIN_BL 406 406 depends on PANEL_PROFILE="0" && PANEL_LCD="1" 407 - int "Parallel port pin number & polarity connected to the LCD backlight signal (-17...17) " 407 + int "Parallel port pin number & polarity connected to the LCD backlight signal (-17...17) " 408 408 range -17 17 409 409 default 0 410 410 ---help--- 411 411 This describes the number of the parallel port pin to which the LCD 'BL' signal 412 - has been connected. It can be : 412 + has been connected. It can be : 413 413 414 414 0 : no connection (eg: connected to ground) 415 415 1..17 : directly connected to any of these pins on the DB25 plug
+1 -1
drivers/auxdisplay/charlcd.c
··· 86 86 int len; 87 87 } esc_seq; 88 88 89 - unsigned long long drvdata[0]; 89 + unsigned long long drvdata[]; 90 90 }; 91 91 92 92 #define charlcd_to_priv(p) container_of(p, struct charlcd_priv, lcd)
+1 -3
drivers/auxdisplay/img-ascii-lcd.c
··· 356 356 const struct of_device_id *match; 357 357 const struct img_ascii_lcd_config *cfg; 358 358 struct img_ascii_lcd_ctx *ctx; 359 - struct resource *res; 360 359 int err; 361 360 362 361 match = of_match_device(img_ascii_lcd_matches, &pdev->dev); ··· 377 378 &ctx->offset)) 378 379 return -EINVAL; 379 380 } else { 380 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 381 - ctx->base = devm_ioremap_resource(&pdev->dev, res); 381 + ctx->base = devm_platform_ioremap_resource(pdev, 0); 382 382 if (IS_ERR(ctx->base)) 383 383 return PTR_ERR(ctx->base); 384 384 }
+6 -19
drivers/base/platform.c
··· 363 363 { 364 364 if (!pdev->dev.coherent_dma_mask) 365 365 pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32); 366 - if (!pdev->dma_mask) 367 - pdev->dma_mask = DMA_BIT_MASK(32); 368 - if (!pdev->dev.dma_mask) 369 - pdev->dev.dma_mask = &pdev->dma_mask; 366 + if (!pdev->dev.dma_mask) { 367 + pdev->platform_dma_mask = DMA_BIT_MASK(32); 368 + pdev->dev.dma_mask = &pdev->platform_dma_mask; 369 + } 370 370 }; 371 371 372 372 /** ··· 662 662 pdev->dev.of_node_reused = pdevinfo->of_node_reused; 663 663 664 664 if (pdevinfo->dma_mask) { 665 - /* 666 - * This memory isn't freed when the device is put, 667 - * I don't have a nice idea for that though. Conceptually 668 - * dma_mask in struct device should not be a pointer. 669 - * See http://thread.gmane.org/gmane.linux.kernel.pci/9081 670 - */ 671 - pdev->dev.dma_mask = 672 - kmalloc(sizeof(*pdev->dev.dma_mask), GFP_KERNEL); 673 - if (!pdev->dev.dma_mask) 674 - goto err; 675 - 676 - kmemleak_ignore(pdev->dev.dma_mask); 677 - 678 - *pdev->dev.dma_mask = pdevinfo->dma_mask; 665 + pdev->platform_dma_mask = pdevinfo->dma_mask; 666 + pdev->dev.dma_mask = &pdev->platform_dma_mask; 679 667 pdev->dev.coherent_dma_mask = pdevinfo->dma_mask; 680 668 } 681 669 ··· 688 700 if (ret) { 689 701 err: 690 702 ACPI_COMPANION_SET(&pdev->dev, NULL); 691 - kfree(pdev->dev.dma_mask); 692 703 platform_device_put(pdev); 693 704 return ERR_PTR(ret); 694 705 }
+12 -5
drivers/block/virtio_blk.c
··· 245 245 err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num); 246 246 if (err) { 247 247 virtqueue_kick(vblk->vqs[qid].vq); 248 - blk_mq_stop_hw_queue(hctx); 248 + /* Don't stop the queue if -ENOMEM: we may have failed to 249 + * bounce the buffer due to global resource outage. 250 + */ 251 + if (err == -ENOSPC) 252 + blk_mq_stop_hw_queue(hctx); 249 253 spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags); 250 - /* Out of mem doesn't actually happen, since we fall back 251 - * to direct descriptors */ 252 - if (err == -ENOMEM || err == -ENOSPC) 254 + switch (err) { 255 + case -ENOSPC: 253 256 return BLK_STS_DEV_RESOURCE; 254 - return BLK_STS_IOERR; 257 + case -ENOMEM: 258 + return BLK_STS_RESOURCE; 259 + default: 260 + return BLK_STS_IOERR; 261 + } 255 262 } 256 263 257 264 if (bd->last && virtqueue_kick_prepare(vblk->vqs[qid].vq))
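The virtio_blk hunk above distinguishes the two failure modes of `virtblk_add_req()`: `-ENOSPC` (virtqueue full, a per-device resource, so stop the hw queue) versus `-ENOMEM` (a global memory outage, retried without stopping the queue). A minimal sketch of just that errno-to-status mapping, using stand-in enum values rather than the kernel's `blk_status_t`:

```c
#include <assert.h>
#include <errno.h>

enum blk_status {
	BLK_STS_OK,
	BLK_STS_DEV_RESOURCE,	/* per-device resource exhausted */
	BLK_STS_RESOURCE,	/* global resource exhausted, retry later */
	BLK_STS_IOERR,
};

static enum blk_status virtblk_map_err(int err)
{
	switch (err) {
	case 0:
		return BLK_STS_OK;
	case -ENOSPC:	/* virtqueue has no free descriptors */
		return BLK_STS_DEV_RESOURCE;
	case -ENOMEM:	/* e.g. bounce-buffer allocation failed */
		return BLK_STS_RESOURCE;
	default:
		return BLK_STS_IOERR;
	}
}
```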
+2 -2
drivers/char/ipmi/ipmi_si_platform.c
··· 194 194 else 195 195 io.slave_addr = slave_addr; 196 196 197 - io.irq = platform_get_irq(pdev, 0); 197 + io.irq = platform_get_irq_optional(pdev, 0); 198 198 if (io.irq > 0) 199 199 io.irq_setup = ipmi_std_irq_setup; 200 200 else ··· 378 378 io.irq = tmp; 379 379 io.irq_setup = acpi_gpe_irq_setup; 380 380 } else { 381 - int irq = platform_get_irq(pdev, 0); 381 + int irq = platform_get_irq_optional(pdev, 0); 382 382 383 383 if (irq > 0) { 384 384 io.irq = irq;
+2 -2
drivers/clk/clk.c
··· 4713 4713 * 4714 4714 * Returns: The number of clocks that are possible parents of this node 4715 4715 */ 4716 - unsigned int of_clk_get_parent_count(struct device_node *np) 4716 + unsigned int of_clk_get_parent_count(const struct device_node *np) 4717 4717 { 4718 4718 int count; 4719 4719 ··· 4725 4725 } 4726 4726 EXPORT_SYMBOL_GPL(of_clk_get_parent_count); 4727 4727 4728 - const char *of_clk_get_parent_name(struct device_node *np, int index) 4728 + const char *of_clk_get_parent_name(const struct device_node *np, int index) 4729 4729 { 4730 4730 struct of_phandle_args clkspec; 4731 4731 struct property *prop;
-19
drivers/clk/qcom/dispcc-sc7180.c
··· 592 592 }, 593 593 }; 594 594 595 - static struct clk_branch disp_cc_mdss_rscc_ahb_clk = { 596 - .halt_reg = 0x400c, 597 - .halt_check = BRANCH_HALT, 598 - .clkr = { 599 - .enable_reg = 0x400c, 600 - .enable_mask = BIT(0), 601 - .hw.init = &(struct clk_init_data){ 602 - .name = "disp_cc_mdss_rscc_ahb_clk", 603 - .parent_data = &(const struct clk_parent_data){ 604 - .hw = &disp_cc_mdss_ahb_clk_src.clkr.hw, 605 - }, 606 - .num_parents = 1, 607 - .flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT, 608 - .ops = &clk_branch2_ops, 609 - }, 610 - }, 611 - }; 612 - 613 595 static struct clk_branch disp_cc_mdss_rscc_vsync_clk = { 614 596 .halt_reg = 0x4008, 615 597 .halt_check = BRANCH_HALT, ··· 669 687 [DISP_CC_MDSS_PCLK0_CLK_SRC] = &disp_cc_mdss_pclk0_clk_src.clkr, 670 688 [DISP_CC_MDSS_ROT_CLK] = &disp_cc_mdss_rot_clk.clkr, 671 689 [DISP_CC_MDSS_ROT_CLK_SRC] = &disp_cc_mdss_rot_clk_src.clkr, 672 - [DISP_CC_MDSS_RSCC_AHB_CLK] = &disp_cc_mdss_rscc_ahb_clk.clkr, 673 690 [DISP_CC_MDSS_RSCC_VSYNC_CLK] = &disp_cc_mdss_rscc_vsync_clk.clkr, 674 691 [DISP_CC_MDSS_VSYNC_CLK] = &disp_cc_mdss_vsync_clk.clkr, 675 692 [DISP_CC_MDSS_VSYNC_CLK_SRC] = &disp_cc_mdss_vsync_clk_src.clkr,
+1 -1
drivers/clk/qcom/videocc-sc7180.c
··· 97 97 98 98 static struct clk_branch video_cc_vcodec0_core_clk = { 99 99 .halt_reg = 0x890, 100 - .halt_check = BRANCH_HALT, 100 + .halt_check = BRANCH_HALT_VOTED, 101 101 .clkr = { 102 102 .enable_reg = 0x890, 103 103 .enable_mask = BIT(0),
+23 -9
drivers/firmware/efi/efivars.c
··· 83 83 efivar_attr_read(struct efivar_entry *entry, char *buf) 84 84 { 85 85 struct efi_variable *var = &entry->var; 86 + unsigned long size = sizeof(var->Data); 86 87 char *str = buf; 88 + int ret; 87 89 88 90 if (!entry || !buf) 89 91 return -EINVAL; 90 92 91 - var->DataSize = 1024; 92 - if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data)) 93 + ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data); 94 + var->DataSize = size; 95 + if (ret) 93 96 return -EIO; 94 97 95 98 if (var->Attributes & EFI_VARIABLE_NON_VOLATILE) ··· 119 116 efivar_size_read(struct efivar_entry *entry, char *buf) 120 117 { 121 118 struct efi_variable *var = &entry->var; 119 + unsigned long size = sizeof(var->Data); 122 120 char *str = buf; 121 + int ret; 123 122 124 123 if (!entry || !buf) 125 124 return -EINVAL; 126 125 127 - var->DataSize = 1024; 128 - if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data)) 126 + ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data); 127 + var->DataSize = size; 128 + if (ret) 129 129 return -EIO; 130 130 131 131 str += sprintf(str, "0x%lx\n", var->DataSize); ··· 139 133 efivar_data_read(struct efivar_entry *entry, char *buf) 140 134 { 141 135 struct efi_variable *var = &entry->var; 136 + unsigned long size = sizeof(var->Data); 137 + int ret; 142 138 143 139 if (!entry || !buf) 144 140 return -EINVAL; 145 141 146 - var->DataSize = 1024; 147 - if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data)) 142 + ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data); 143 + var->DataSize = size; 144 + if (ret) 148 145 return -EIO; 149 146 150 147 memcpy(buf, var->Data, var->DataSize); ··· 208 199 u8 *data; 209 200 int err; 210 201 202 + if (!entry || !buf) 203 + return -EINVAL; 204 + 211 205 if (in_compat_syscall()) { 212 206 struct compat_efi_variable *compat; 213 207 ··· 262 250 { 263 251 struct efi_variable *var = &entry->var; 264 252 struct compat_efi_variable *compat; 
253 + unsigned long datasize = sizeof(var->Data); 265 254 size_t size; 255 + int ret; 266 256 267 257 if (!entry || !buf) 268 258 return 0; 269 259 270 - var->DataSize = 1024; 271 - if (efivar_entry_get(entry, &entry->var.Attributes, 272 - &entry->var.DataSize, entry->var.Data)) 260 + ret = efivar_entry_get(entry, &var->Attributes, &datasize, var->Data); 261 + var->DataSize = datasize; 262 + if (ret) 273 263 return -EIO; 274 264 275 265 if (in_compat_syscall()) {
+3 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
··· 781 781 ssize_t result = 0; 782 782 uint32_t offset, se, sh, cu, wave, simd, thread, bank, *data; 783 783 784 - if (size & 3 || *pos & 3) 784 + if (size > 4096 || size & 3 || *pos & 3) 785 785 return -EINVAL; 786 786 787 787 /* decode offset */ 788 - offset = *pos & GENMASK_ULL(11, 0); 788 + offset = (*pos & GENMASK_ULL(11, 0)) >> 2; 789 789 se = (*pos & GENMASK_ULL(19, 12)) >> 12; 790 790 sh = (*pos & GENMASK_ULL(27, 20)) >> 20; 791 791 cu = (*pos & GENMASK_ULL(35, 28)) >> 28; ··· 823 823 while (size) { 824 824 uint32_t value; 825 825 826 - value = data[offset++]; 826 + value = data[result >> 2]; 827 827 r = put_user(value, (uint32_t *)buf); 828 828 if (r) { 829 829 result = r;
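The amdgpu_debugfs fix above is a bitfield-decode correction: the register offset in bits [11:0] of `*pos` is a byte offset, so it must be shifted down by 2 before indexing the 32-bit `data` array. A standalone sketch of the decode, with `GENMASK_ULL` re-derived locally for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Bits h..l set, as in the kernel's GENMASK_ULL. */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

/* Byte offset in bits [11:0], shifted by 2 to index 32-bit words. */
static uint32_t wave_reg_word_offset(uint64_t pos)
{
	return (uint32_t)((pos & GENMASK_ULL(11, 0)) >> 2);
}

/* Shader engine index in bits [19:12]. */
static uint32_t wave_se(uint64_t pos)
{
	return (uint32_t)((pos & GENMASK_ULL(19, 12)) >> 12);
}
```

Without the `>> 2`, a file position of 0x10 would index word 16 instead of word 4, which is the out-of-bounds read the patch closes (together with the 4096-byte size cap).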
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3913 3913 if (r) 3914 3914 goto out; 3915 3915 3916 + amdgpu_fbdev_set_suspend(tmp_adev, 0); 3917 + 3916 3918 /* must succeed. */ 3917 3919 amdgpu_ras_resume(tmp_adev); 3918 3920 ··· 4087 4085 * And add them back after reset completed 4088 4086 */ 4089 4087 amdgpu_unregister_gpu_instance(tmp_adev); 4088 + 4089 + amdgpu_fbdev_set_suspend(adev, 1); 4090 4090 4091 4091 /* disable ras on ALL IPs */ 4092 4092 if (!(in_ras_intr && !use_baco) &&
+1 -1
drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
··· 693 693 bool enable = (state == AMD_CG_STATE_GATE); 694 694 695 695 if (enable) { 696 - if (jpeg_v2_0_is_idle(handle)) 696 + if (!jpeg_v2_0_is_idle(handle)) 697 697 return -EBUSY; 698 698 jpeg_v2_0_enable_clock_gating(adev); 699 699 } else {
+1 -1
drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
··· 477 477 continue; 478 478 479 479 if (enable) { 480 - if (jpeg_v2_5_is_idle(handle)) 480 + if (!jpeg_v2_5_is_idle(handle)) 481 481 return -EBUSY; 482 482 jpeg_v2_5_enable_clock_gating(adev, i); 483 483 } else {
+23 -2
drivers/gpu/drm/amd/amdgpu/soc15.c
··· 89 89 #define HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK 0x00010000L 90 90 #define HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK 0x00020000L 91 91 #define mmHDP_MEM_POWER_CTRL_BASE_IDX 0 92 + 93 + /* for Vega20/arcturus regiter offset change */ 94 + #define mmROM_INDEX_VG20 0x00e4 95 + #define mmROM_INDEX_VG20_BASE_IDX 0 96 + #define mmROM_DATA_VG20 0x00e5 97 + #define mmROM_DATA_VG20_BASE_IDX 0 98 + 92 99 /* 93 100 * Indirect registers accessor 94 101 */ ··· 316 309 { 317 310 u32 *dw_ptr; 318 311 u32 i, length_dw; 312 + uint32_t rom_index_offset; 313 + uint32_t rom_data_offset; 319 314 320 315 if (bios == NULL) 321 316 return false; ··· 330 321 dw_ptr = (u32 *)bios; 331 322 length_dw = ALIGN(length_bytes, 4) / 4; 332 323 324 + switch (adev->asic_type) { 325 + case CHIP_VEGA20: 326 + case CHIP_ARCTURUS: 327 + rom_index_offset = SOC15_REG_OFFSET(SMUIO, 0, mmROM_INDEX_VG20); 328 + rom_data_offset = SOC15_REG_OFFSET(SMUIO, 0, mmROM_DATA_VG20); 329 + break; 330 + default: 331 + rom_index_offset = SOC15_REG_OFFSET(SMUIO, 0, mmROM_INDEX); 332 + rom_data_offset = SOC15_REG_OFFSET(SMUIO, 0, mmROM_DATA); 333 + break; 334 + } 335 + 333 336 /* set rom index to 0 */ 334 - WREG32(SOC15_REG_OFFSET(SMUIO, 0, mmROM_INDEX), 0); 337 + WREG32(rom_index_offset, 0); 335 338 /* read out the rom data */ 336 339 for (i = 0; i < length_dw; i++) 337 - dw_ptr[i] = RREG32(SOC15_REG_OFFSET(SMUIO, 0, mmROM_DATA)); 340 + dw_ptr[i] = RREG32(rom_data_offset); 338 341 339 342 return true; 340 343 }
+1 -1
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
··· 1352 1352 1353 1353 if (enable) { 1354 1354 /* wait for STATUS to clear */ 1355 - if (vcn_v1_0_is_idle(handle)) 1355 + if (!vcn_v1_0_is_idle(handle)) 1356 1356 return -EBUSY; 1357 1357 vcn_v1_0_enable_clock_gating(adev); 1358 1358 } else {
+1 -1
drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
··· 1217 1217 1218 1218 if (enable) { 1219 1219 /* wait for STATUS to clear */ 1220 - if (vcn_v2_0_is_idle(handle)) 1220 + if (!vcn_v2_0_is_idle(handle)) 1221 1221 return -EBUSY; 1222 1222 vcn_v2_0_enable_clock_gating(adev); 1223 1223 } else {
+1 -1
drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
··· 1672 1672 return 0; 1673 1673 1674 1674 if (enable) { 1675 - if (vcn_v2_5_is_idle(handle)) 1675 + if (!vcn_v2_5_is_idle(handle)) 1676 1676 return -EBUSY; 1677 1677 vcn_v2_5_enable_clock_gating(adev); 1678 1678 } else {
+15 -3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 522 522 523 523 acrtc_state = to_dm_crtc_state(acrtc->base.state); 524 524 525 - DRM_DEBUG_DRIVER("crtc:%d, vupdate-vrr:%d\n", acrtc->crtc_id, 526 - amdgpu_dm_vrr_active(acrtc_state)); 525 + DRM_DEBUG_DRIVER("crtc:%d, vupdate-vrr:%d, planes:%d\n", acrtc->crtc_id, 526 + amdgpu_dm_vrr_active(acrtc_state), 527 + acrtc_state->active_planes); 527 528 528 529 amdgpu_dm_crtc_handle_crc_irq(&acrtc->base); 529 530 drm_crtc_handle_vblank(&acrtc->base); ··· 544 543 &acrtc_state->vrr_params.adjust); 545 544 } 546 545 547 - if (acrtc->pflip_status == AMDGPU_FLIP_SUBMITTED) { 546 + /* 547 + * If there aren't any active_planes then DCH HUBP may be clock-gated. 548 + * In that case, pageflip completion interrupts won't fire and pageflip 549 + * completion events won't get delivered. Prevent this by sending 550 + * pending pageflip events from here if a flip is still pending. 551 + * 552 + * If any planes are enabled, use dm_pflip_high_irq() instead, to 553 + * avoid race conditions between flip programming and completion, 554 + * which could cause too early flip completion events. 555 + */ 556 + if (acrtc->pflip_status == AMDGPU_FLIP_SUBMITTED && 557 + acrtc_state->active_planes == 0) { 548 558 if (acrtc->event) { 549 559 drm_crtc_send_vblank_event(&acrtc->base, acrtc->event); 550 560 acrtc->event = NULL;
-1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
··· 108 108 .enable_power_gating_plane = dcn20_enable_power_gating_plane, 109 109 .dpp_pg_control = dcn20_dpp_pg_control, 110 110 .hubp_pg_control = dcn20_hubp_pg_control, 111 - .dsc_pg_control = NULL, 112 111 .update_odm = dcn20_update_odm, 113 112 .dsc_pg_control = dcn20_dsc_pg_control, 114 113 .get_surface_visual_confirm_color = dcn10_get_surface_visual_confirm_color,
+114
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
··· 335 335 .use_urgent_burst_bw = 0 336 336 }; 337 337 338 + struct _vcs_dpi_soc_bounding_box_st dcn2_0_nv14_soc = { 339 + .clock_limits = { 340 + { 341 + .state = 0, 342 + .dcfclk_mhz = 560.0, 343 + .fabricclk_mhz = 560.0, 344 + .dispclk_mhz = 513.0, 345 + .dppclk_mhz = 513.0, 346 + .phyclk_mhz = 540.0, 347 + .socclk_mhz = 560.0, 348 + .dscclk_mhz = 171.0, 349 + .dram_speed_mts = 8960.0, 350 + }, 351 + { 352 + .state = 1, 353 + .dcfclk_mhz = 694.0, 354 + .fabricclk_mhz = 694.0, 355 + .dispclk_mhz = 642.0, 356 + .dppclk_mhz = 642.0, 357 + .phyclk_mhz = 600.0, 358 + .socclk_mhz = 694.0, 359 + .dscclk_mhz = 214.0, 360 + .dram_speed_mts = 11104.0, 361 + }, 362 + { 363 + .state = 2, 364 + .dcfclk_mhz = 875.0, 365 + .fabricclk_mhz = 875.0, 366 + .dispclk_mhz = 734.0, 367 + .dppclk_mhz = 734.0, 368 + .phyclk_mhz = 810.0, 369 + .socclk_mhz = 875.0, 370 + .dscclk_mhz = 245.0, 371 + .dram_speed_mts = 14000.0, 372 + }, 373 + { 374 + .state = 3, 375 + .dcfclk_mhz = 1000.0, 376 + .fabricclk_mhz = 1000.0, 377 + .dispclk_mhz = 1100.0, 378 + .dppclk_mhz = 1100.0, 379 + .phyclk_mhz = 810.0, 380 + .socclk_mhz = 1000.0, 381 + .dscclk_mhz = 367.0, 382 + .dram_speed_mts = 16000.0, 383 + }, 384 + { 385 + .state = 4, 386 + .dcfclk_mhz = 1200.0, 387 + .fabricclk_mhz = 1200.0, 388 + .dispclk_mhz = 1284.0, 389 + .dppclk_mhz = 1284.0, 390 + .phyclk_mhz = 810.0, 391 + .socclk_mhz = 1200.0, 392 + .dscclk_mhz = 428.0, 393 + .dram_speed_mts = 16000.0, 394 + }, 395 + /*Extra state, no dispclk ramping*/ 396 + { 397 + .state = 5, 398 + .dcfclk_mhz = 1200.0, 399 + .fabricclk_mhz = 1200.0, 400 + .dispclk_mhz = 1284.0, 401 + .dppclk_mhz = 1284.0, 402 + .phyclk_mhz = 810.0, 403 + .socclk_mhz = 1200.0, 404 + .dscclk_mhz = 428.0, 405 + .dram_speed_mts = 16000.0, 406 + }, 407 + }, 408 + .num_states = 5, 409 + .sr_exit_time_us = 8.6, 410 + .sr_enter_plus_exit_time_us = 10.9, 411 + .urgent_latency_us = 4.0, 412 + .urgent_latency_pixel_data_only_us = 4.0, 413 + .urgent_latency_pixel_mixed_with_vm_data_us = 
4.0, 414 + .urgent_latency_vm_data_only_us = 4.0, 415 + .urgent_out_of_order_return_per_channel_pixel_only_bytes = 4096, 416 + .urgent_out_of_order_return_per_channel_pixel_and_vm_bytes = 4096, 417 + .urgent_out_of_order_return_per_channel_vm_only_bytes = 4096, 418 + .pct_ideal_dram_sdp_bw_after_urgent_pixel_only = 40.0, 419 + .pct_ideal_dram_sdp_bw_after_urgent_pixel_and_vm = 40.0, 420 + .pct_ideal_dram_sdp_bw_after_urgent_vm_only = 40.0, 421 + .max_avg_sdp_bw_use_normal_percent = 40.0, 422 + .max_avg_dram_bw_use_normal_percent = 40.0, 423 + .writeback_latency_us = 12.0, 424 + .ideal_dram_bw_after_urgent_percent = 40.0, 425 + .max_request_size_bytes = 256, 426 + .dram_channel_width_bytes = 2, 427 + .fabric_datapath_to_dcn_data_return_bytes = 64, 428 + .dcn_downspread_percent = 0.5, 429 + .downspread_percent = 0.38, 430 + .dram_page_open_time_ns = 50.0, 431 + .dram_rw_turnaround_time_ns = 17.5, 432 + .dram_return_buffer_per_channel_bytes = 8192, 433 + .round_trip_ping_latency_dcfclk_cycles = 131, 434 + .urgent_out_of_order_return_per_channel_bytes = 256, 435 + .channel_interleave_bytes = 256, 436 + .num_banks = 8, 437 + .num_chans = 8, 438 + .vmm_page_size_bytes = 4096, 439 + .dram_clock_change_latency_us = 404.0, 440 + .dummy_pstate_latency_us = 5.0, 441 + .writeback_dram_clock_change_latency_us = 23.0, 442 + .return_bus_width_bytes = 64, 443 + .dispclk_dppclk_vco_speed_mhz = 3850, 444 + .xfc_bus_transport_time_us = 20, 445 + .xfc_xbuf_latency_tolerance_us = 4, 446 + .use_urgent_burst_bw = 0 447 + }; 448 + 338 449 struct _vcs_dpi_soc_bounding_box_st dcn2_0_nv12_soc = { 0 }; 339 450 340 451 #ifndef mmDP0_DP_DPHY_INTERNAL_CTRL ··· 3402 3291 static struct _vcs_dpi_soc_bounding_box_st *get_asic_rev_soc_bb( 3403 3292 uint32_t hw_internal_rev) 3404 3293 { 3294 + if (ASICREV_IS_NAVI14_M(hw_internal_rev)) 3295 + return &dcn2_0_nv14_soc; 3296 + 3405 3297 if (ASICREV_IS_NAVI12_P(hw_internal_rev)) 3406 3298 return &dcn2_0_nv12_soc; 3407 3299
+0 -1
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
··· 116 116 .enable_power_gating_plane = dcn20_enable_power_gating_plane, 117 117 .dpp_pg_control = dcn20_dpp_pg_control, 118 118 .hubp_pg_control = dcn20_hubp_pg_control, 119 - .dsc_pg_control = NULL, 120 119 .update_odm = dcn20_update_odm, 121 120 .dsc_pg_control = dcn20_dsc_pg_control, 122 121 .get_surface_visual_confirm_color = dcn10_get_surface_visual_confirm_color,
+5 -2
drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
··· 2006 2006 smu_feature_is_enabled(smu, SMU_FEATURE_DPM_DCEFCLK_BIT) && 2007 2007 smu_feature_is_enabled(smu, SMU_FEATURE_DPM_SOCCLK_BIT)) { 2008 2008 smu_set_watermarks_table(smu, table, clock_ranges); 2009 - smu->watermarks_bitmap |= WATERMARKS_EXIST; 2010 - smu->watermarks_bitmap &= ~WATERMARKS_LOADED; 2009 + 2010 + if (!(smu->watermarks_bitmap & WATERMARKS_EXIST)) { 2011 + smu->watermarks_bitmap |= WATERMARKS_EXIST; 2012 + smu->watermarks_bitmap &= ~WATERMARKS_LOADED; 2013 + } 2011 2014 } 2012 2015 2013 2016 mutex_unlock(&smu->mutex);
+13 -9
drivers/gpu/drm/amd/powerplay/navi10_ppt.c
··· 1063 1063 int ret = 0; 1064 1064 1065 1065 if ((smu->watermarks_bitmap & WATERMARKS_EXIST) && 1066 - !(smu->watermarks_bitmap & WATERMARKS_LOADED)) { 1067 - ret = smu_write_watermarks_table(smu); 1068 - if (ret) 1069 - return ret; 1070 - 1071 - smu->watermarks_bitmap |= WATERMARKS_LOADED; 1072 - } 1073 - 1074 - if ((smu->watermarks_bitmap & WATERMARKS_EXIST) && 1075 1066 smu_feature_is_supported(smu, SMU_FEATURE_DPM_DCEFCLK_BIT) && 1076 1067 smu_feature_is_supported(smu, SMU_FEATURE_DPM_SOCCLK_BIT)) { 1077 1068 ret = smu_send_smc_msg_with_param(smu, SMU_MSG_NumOfDisplays, ··· 1484 1493 *clock_ranges) 1485 1494 { 1486 1495 int i; 1496 + int ret = 0; 1487 1497 Watermarks_t *table = watermarks; 1488 1498 1489 1499 if (!table || !clock_ranges) ··· 1534 1542 1000)); 1535 1543 table->WatermarkRow[0][i].WmSetting = (uint8_t) 1536 1544 clock_ranges->wm_mcif_clocks_ranges[i].wm_set_id; 1545 + } 1546 + 1547 + smu->watermarks_bitmap |= WATERMARKS_EXIST; 1548 + 1549 + /* pass data to smu controller */ 1550 + if (!(smu->watermarks_bitmap & WATERMARKS_LOADED)) { 1551 + ret = smu_write_watermarks_table(smu); 1552 + if (ret) { 1553 + pr_err("Failed to update WMTABLE!"); 1554 + return ret; 1555 + } 1556 + smu->watermarks_bitmap |= WATERMARKS_LOADED; 1537 1557 } 1538 1558 1539 1559 return 0;
+3 -2
drivers/gpu/drm/amd/powerplay/renoir_ppt.c
··· 806 806 clock_ranges->wm_mcif_clocks_ranges[i].wm_set_id; 807 807 } 808 808 809 + smu->watermarks_bitmap |= WATERMARKS_EXIST; 810 + 809 811 /* pass data to smu controller */ 810 - if ((smu->watermarks_bitmap & WATERMARKS_EXIST) && 811 - !(smu->watermarks_bitmap & WATERMARKS_LOADED)) { 812 + if (!(smu->watermarks_bitmap & WATERMARKS_LOADED)) { 812 813 ret = smu_write_watermarks_table(smu); 813 814 if (ret) { 814 815 pr_err("Failed to update WMTABLE!");
+2 -2
drivers/gpu/drm/arm/display/komeda/komeda_drv.c
··· 146 146 147 147 MODULE_DEVICE_TABLE(of, komeda_of_match); 148 148 149 - static int komeda_rt_pm_suspend(struct device *dev) 149 + static int __maybe_unused komeda_rt_pm_suspend(struct device *dev) 150 150 { 151 151 struct komeda_drv *mdrv = dev_get_drvdata(dev); 152 152 153 153 return komeda_dev_suspend(mdrv->mdev); 154 154 } 155 155 156 - static int komeda_rt_pm_resume(struct device *dev) 156 + static int __maybe_unused komeda_rt_pm_resume(struct device *dev) 157 157 { 158 158 struct komeda_drv *mdrv = dev_get_drvdata(dev); 159 159
+2 -4
drivers/gpu/drm/bochs/bochs_hw.c
··· 156 156 size = min(size, mem); 157 157 } 158 158 159 - if (pci_request_region(pdev, 0, "bochs-drm") != 0) { 160 - DRM_ERROR("Cannot request framebuffer\n"); 161 - return -EBUSY; 162 - } 159 + if (pci_request_region(pdev, 0, "bochs-drm") != 0) 160 + DRM_WARN("Cannot request framebuffer, boot fb still active?\n"); 163 161 164 162 bochs->fb_map = ioremap(addr, size); 165 163 if (bochs->fb_map == NULL) {
+26 -20
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
··· 1624 1624 frame.colorspace = HDMI_COLORSPACE_RGB; 1625 1625 1626 1626 /* Set up colorimetry */ 1627 - switch (hdmi->hdmi_data.enc_out_encoding) { 1628 - case V4L2_YCBCR_ENC_601: 1629 - if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV601) 1630 - frame.colorimetry = HDMI_COLORIMETRY_EXTENDED; 1631 - else 1627 + if (!hdmi_bus_fmt_is_rgb(hdmi->hdmi_data.enc_out_bus_format)) { 1628 + switch (hdmi->hdmi_data.enc_out_encoding) { 1629 + case V4L2_YCBCR_ENC_601: 1630 + if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV601) 1631 + frame.colorimetry = HDMI_COLORIMETRY_EXTENDED; 1632 + else 1633 + frame.colorimetry = HDMI_COLORIMETRY_ITU_601; 1634 + frame.extended_colorimetry = 1635 + HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; 1636 + break; 1637 + case V4L2_YCBCR_ENC_709: 1638 + if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV709) 1639 + frame.colorimetry = HDMI_COLORIMETRY_EXTENDED; 1640 + else 1641 + frame.colorimetry = HDMI_COLORIMETRY_ITU_709; 1642 + frame.extended_colorimetry = 1643 + HDMI_EXTENDED_COLORIMETRY_XV_YCC_709; 1644 + break; 1645 + default: /* Carries no data */ 1632 1646 frame.colorimetry = HDMI_COLORIMETRY_ITU_601; 1647 + frame.extended_colorimetry = 1648 + HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; 1649 + break; 1650 + } 1651 + } else { 1652 + frame.colorimetry = HDMI_COLORIMETRY_NONE; 1633 1653 frame.extended_colorimetry = 1634 - HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; 1635 - break; 1636 - case V4L2_YCBCR_ENC_709: 1637 - if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV709) 1638 - frame.colorimetry = HDMI_COLORIMETRY_EXTENDED; 1639 - else 1640 - frame.colorimetry = HDMI_COLORIMETRY_ITU_709; 1641 - frame.extended_colorimetry = 1642 - HDMI_EXTENDED_COLORIMETRY_XV_YCC_709; 1643 - break; 1644 - default: /* Carries no data */ 1645 - frame.colorimetry = HDMI_COLORIMETRY_ITU_601; 1646 - frame.extended_colorimetry = 1647 - HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; 1648 - break; 1654 + HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; 1649 1655 } 1650 1656 1651 
1657 frame.scan_mode = HDMI_SCAN_MODE_NONE;
+128 -60
drivers/gpu/drm/drm_dp_mst_topology.c
··· 1935 1935 return parent_lct + 1; 1936 1936 } 1937 1937 1938 - static bool drm_dp_mst_is_dp_mst_end_device(u8 pdt, bool mcs) 1938 + static bool drm_dp_mst_is_end_device(u8 pdt, bool mcs) 1939 1939 { 1940 1940 switch (pdt) { 1941 1941 case DP_PEER_DEVICE_DP_LEGACY_CONV: ··· 1965 1965 1966 1966 /* Teardown the old pdt, if there is one */ 1967 1967 if (port->pdt != DP_PEER_DEVICE_NONE) { 1968 - if (drm_dp_mst_is_dp_mst_end_device(port->pdt, port->mcs)) { 1968 + if (drm_dp_mst_is_end_device(port->pdt, port->mcs)) { 1969 1969 /* 1970 1970 * If the new PDT would also have an i2c bus, 1971 1971 * don't bother with reregistering it 1972 1972 */ 1973 1973 if (new_pdt != DP_PEER_DEVICE_NONE && 1974 - drm_dp_mst_is_dp_mst_end_device(new_pdt, new_mcs)) { 1974 + drm_dp_mst_is_end_device(new_pdt, new_mcs)) { 1975 1975 port->pdt = new_pdt; 1976 1976 port->mcs = new_mcs; 1977 1977 return 0; ··· 1991 1991 port->mcs = new_mcs; 1992 1992 1993 1993 if (port->pdt != DP_PEER_DEVICE_NONE) { 1994 - if (drm_dp_mst_is_dp_mst_end_device(port->pdt, port->mcs)) { 1994 + if (drm_dp_mst_is_end_device(port->pdt, port->mcs)) { 1995 1995 /* add i2c over sideband */ 1996 1996 ret = drm_dp_mst_register_i2c_bus(&port->aux); 1997 1997 } else { ··· 2172 2172 } 2173 2173 2174 2174 if (port->pdt != DP_PEER_DEVICE_NONE && 2175 - drm_dp_mst_is_dp_mst_end_device(port->pdt, port->mcs)) { 2175 + drm_dp_mst_is_end_device(port->pdt, port->mcs)) { 2176 2176 port->cached_edid = drm_get_edid(port->connector, 2177 2177 &port->aux.ddc); 2178 2178 drm_connector_set_tile_property(port->connector); ··· 2302 2302 mutex_unlock(&mgr->lock); 2303 2303 } 2304 2304 2305 - if (old_ddps != port->ddps) { 2306 - if (port->ddps) { 2307 - if (!port->input) { 2308 - drm_dp_send_enum_path_resources(mgr, mstb, 2309 - port); 2310 - } 2305 + /* 2306 + * Reprobe PBN caps on both hotplug, and when re-probing the link 2307 + * for our parent mstb 2308 + */ 2309 + if (old_ddps != port->ddps || !created) { 2310 + if (port->ddps && 
!port->input) { 2311 + ret = drm_dp_send_enum_path_resources(mgr, mstb, 2312 + port); 2313 + if (ret == 1) 2314 + changed = true; 2311 2315 } else { 2312 - port->available_pbn = 0; 2316 + port->full_pbn = 0; 2313 2317 } 2314 2318 } 2315 2319 ··· 2405 2401 port->ddps = conn_stat->displayport_device_plug_status; 2406 2402 2407 2403 if (old_ddps != port->ddps) { 2408 - if (port->ddps) { 2409 - dowork = true; 2410 - } else { 2411 - port->available_pbn = 0; 2412 - } 2404 + if (port->ddps && !port->input) 2405 + drm_dp_send_enum_path_resources(mgr, mstb, port); 2406 + else 2407 + port->full_pbn = 0; 2413 2408 } 2414 2409 2415 2410 new_pdt = port->input ? DP_PEER_DEVICE_NONE : conn_stat->peer_device_type; ··· 2558 2555 2559 2556 if (port->input || !port->ddps) 2560 2557 continue; 2561 - 2562 - if (!port->available_pbn) { 2563 - drm_modeset_lock(&mgr->base.lock, NULL); 2564 - drm_dp_send_enum_path_resources(mgr, mstb, port); 2565 - drm_modeset_unlock(&mgr->base.lock); 2566 - changed = true; 2567 - } 2568 2558 2569 2559 if (port->mstb) 2570 2560 mstb_child = drm_dp_mst_topology_get_mstb_validated( ··· 2986 2990 2987 2991 ret = drm_dp_mst_wait_tx_reply(mstb, txmsg); 2988 2992 if (ret > 0) { 2993 + ret = 0; 2989 2994 path_res = &txmsg->reply.u.path_resources; 2990 2995 2991 2996 if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) { ··· 2999 3002 path_res->port_number, 3000 3003 path_res->full_payload_bw_number, 3001 3004 path_res->avail_payload_bw_number); 3002 - port->available_pbn = 3003 - path_res->avail_payload_bw_number; 3005 + 3006 + /* 3007 + * If something changed, make sure we send a 3008 + * hotplug 3009 + */ 3010 + if (port->full_pbn != path_res->full_payload_bw_number || 3011 + port->fec_capable != path_res->fec_capable) 3012 + ret = 1; 3013 + 3014 + port->full_pbn = path_res->full_payload_bw_number; 3004 3015 port->fec_capable = path_res->fec_capable; 3005 3016 } 3006 3017 } 3007 3018 3008 3019 kfree(txmsg); 3009 - return 0; 3020 + return ret; 3010 3021 } 3011 
3022 3012 3023 static struct drm_dp_mst_port *drm_dp_get_last_connected_port_to_mstb(struct drm_dp_mst_branch *mstb) ··· 3601 3596 /* The link address will need to be re-sent on resume */ 3602 3597 mstb->link_address_sent = false; 3603 3598 3604 - list_for_each_entry(port, &mstb->ports, next) { 3605 - /* The PBN for each port will also need to be re-probed */ 3606 - port->available_pbn = 0; 3607 - 3599 + list_for_each_entry(port, &mstb->ports, next) 3608 3600 if (port->mstb) 3609 3601 drm_dp_mst_topology_mgr_invalidate_mstb(port->mstb); 3610 - } 3611 3602 } 3612 3603 3613 3604 /** ··· 4830 4829 return false; 4831 4830 } 4832 4831 4833 - static inline 4834 - int drm_dp_mst_atomic_check_bw_limit(struct drm_dp_mst_branch *branch, 4835 - struct drm_dp_mst_topology_state *mst_state) 4832 + static int 4833 + drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port, 4834 + struct drm_dp_mst_topology_state *state); 4835 + 4836 + static int 4837 + drm_dp_mst_atomic_check_mstb_bw_limit(struct drm_dp_mst_branch *mstb, 4838 + struct drm_dp_mst_topology_state *state) 4836 4839 { 4837 - struct drm_dp_mst_port *port; 4838 4840 struct drm_dp_vcpi_allocation *vcpi; 4839 - int pbn_limit = 0, pbn_used = 0; 4841 + struct drm_dp_mst_port *port; 4842 + int pbn_used = 0, ret; 4843 + bool found = false; 4840 4844 4841 - list_for_each_entry(port, &branch->ports, next) { 4842 - if (port->mstb) 4843 - if (drm_dp_mst_atomic_check_bw_limit(port->mstb, mst_state)) 4844 - return -ENOSPC; 4845 - 4846 - if (port->available_pbn > 0) 4847 - pbn_limit = port->available_pbn; 4848 - } 4849 - DRM_DEBUG_ATOMIC("[MST BRANCH:%p] branch has %d PBN available\n", 4850 - branch, pbn_limit); 4851 - 4852 - list_for_each_entry(vcpi, &mst_state->vcpis, next) { 4853 - if (!vcpi->pbn) 4845 + /* Check that we have at least one port in our state that's downstream 4846 + * of this branch, otherwise we can skip this branch 4847 + */ 4848 + list_for_each_entry(vcpi, &state->vcpis, next) { 4849 + if (!vcpi->pbn 
|| 4850 + !drm_dp_mst_port_downstream_of_branch(vcpi->port, mstb)) 4854 4851 continue; 4855 4852 4856 - if (drm_dp_mst_port_downstream_of_branch(vcpi->port, branch)) 4857 - pbn_used += vcpi->pbn; 4853 + found = true; 4854 + break; 4858 4855 } 4859 - DRM_DEBUG_ATOMIC("[MST BRANCH:%p] branch used %d PBN\n", 4860 - branch, pbn_used); 4856 + if (!found) 4857 + return 0; 4861 4858 4862 - if (pbn_used > pbn_limit) { 4863 - DRM_DEBUG_ATOMIC("[MST BRANCH:%p] No available bandwidth\n", 4864 - branch); 4859 + if (mstb->port_parent) 4860 + DRM_DEBUG_ATOMIC("[MSTB:%p] [MST PORT:%p] Checking bandwidth limits on [MSTB:%p]\n", 4861 + mstb->port_parent->parent, mstb->port_parent, 4862 + mstb); 4863 + else 4864 + DRM_DEBUG_ATOMIC("[MSTB:%p] Checking bandwidth limits\n", 4865 + mstb); 4866 + 4867 + list_for_each_entry(port, &mstb->ports, next) { 4868 + ret = drm_dp_mst_atomic_check_port_bw_limit(port, state); 4869 + if (ret < 0) 4870 + return ret; 4871 + 4872 + pbn_used += ret; 4873 + } 4874 + 4875 + return pbn_used; 4876 + } 4877 + 4878 + static int 4879 + drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port, 4880 + struct drm_dp_mst_topology_state *state) 4881 + { 4882 + struct drm_dp_vcpi_allocation *vcpi; 4883 + int pbn_used = 0; 4884 + 4885 + if (port->pdt == DP_PEER_DEVICE_NONE) 4886 + return 0; 4887 + 4888 + if (drm_dp_mst_is_end_device(port->pdt, port->mcs)) { 4889 + bool found = false; 4890 + 4891 + list_for_each_entry(vcpi, &state->vcpis, next) { 4892 + if (vcpi->port != port) 4893 + continue; 4894 + if (!vcpi->pbn) 4895 + return 0; 4896 + 4897 + found = true; 4898 + break; 4899 + } 4900 + if (!found) 4901 + return 0; 4902 + 4903 + /* This should never happen, as it means we tried to 4904 + * set a mode before querying the full_pbn 4905 + */ 4906 + if (WARN_ON(!port->full_pbn)) 4907 + return -EINVAL; 4908 + 4909 + pbn_used = vcpi->pbn; 4910 + } else { 4911 + pbn_used = drm_dp_mst_atomic_check_mstb_bw_limit(port->mstb, 4912 + state); 4913 + if (pbn_used <= 0) 
4914 + return pbn_used; 4915 + } 4916 + 4917 + if (pbn_used > port->full_pbn) { 4918 + DRM_DEBUG_ATOMIC("[MSTB:%p] [MST PORT:%p] required PBN of %d exceeds port limit of %d\n", 4919 + port->parent, port, pbn_used, 4920 + port->full_pbn); 4865 4921 return -ENOSPC; 4866 4922 } 4867 - return 0; 4923 + 4924 + DRM_DEBUG_ATOMIC("[MSTB:%p] [MST PORT:%p] uses %d out of %d PBN\n", 4925 + port->parent, port, pbn_used, port->full_pbn); 4926 + 4927 + return pbn_used; 4868 4928 } 4869 4929 4870 4930 static inline int ··· 5123 5061 ret = drm_dp_mst_atomic_check_vcpi_alloc_limit(mgr, mst_state); 5124 5062 if (ret) 5125 5063 break; 5126 - ret = drm_dp_mst_atomic_check_bw_limit(mgr->mst_primary, mst_state); 5127 - if (ret) 5064 + 5065 + mutex_lock(&mgr->lock); 5066 + ret = drm_dp_mst_atomic_check_mstb_bw_limit(mgr->mst_primary, 5067 + mst_state); 5068 + mutex_unlock(&mgr->lock); 5069 + if (ret < 0) 5128 5070 break; 5071 + else 5072 + ret = 0; 5129 5073 } 5130 5074 5131 5075 return ret;
+2 -1
drivers/gpu/drm/drm_lease.c
··· 542 542 } 543 543 544 544 DRM_DEBUG_LEASE("Creating lease\n"); 545 + /* lessee will take the ownership of leases */ 545 546 lessee = drm_lease_create(lessor, &leases); 546 547 547 548 if (IS_ERR(lessee)) { 548 549 ret = PTR_ERR(lessee); 550 + idr_destroy(&leases); 549 551 goto out_leases; 550 552 } 551 553 ··· 582 580 583 581 out_leases: 584 582 put_unused_fd(fd); 585 - idr_destroy(&leases); 586 583 587 584 DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl failed: %d\n", ret); 588 585 return ret;
+3 -2
drivers/gpu/drm/exynos/exynos5433_drm_decon.c
··· 55 55 struct decon_context { 56 56 struct device *dev; 57 57 struct drm_device *drm_dev; 58 + void *dma_priv; 58 59 struct exynos_drm_crtc *crtc; 59 60 struct exynos_drm_plane planes[WINDOWS_NR]; 60 61 struct exynos_drm_plane_config configs[WINDOWS_NR]; ··· 645 644 646 645 decon_clear_channels(ctx->crtc); 647 646 648 - return exynos_drm_register_dma(drm_dev, dev); 647 + return exynos_drm_register_dma(drm_dev, dev, &ctx->dma_priv); 649 648 } 650 649 651 650 static void decon_unbind(struct device *dev, struct device *master, void *data) ··· 655 654 decon_atomic_disable(ctx->crtc); 656 655 657 656 /* detach this sub driver from iommu mapping if supported. */ 658 - exynos_drm_unregister_dma(ctx->drm_dev, ctx->dev); 657 + exynos_drm_unregister_dma(ctx->drm_dev, ctx->dev, &ctx->dma_priv); 659 658 } 660 659 661 660 static const struct component_ops decon_component_ops = {
+3 -2
drivers/gpu/drm/exynos/exynos7_drm_decon.c
··· 40 40 struct decon_context { 41 41 struct device *dev; 42 42 struct drm_device *drm_dev; 43 + void *dma_priv; 43 44 struct exynos_drm_crtc *crtc; 44 45 struct exynos_drm_plane planes[WINDOWS_NR]; 45 46 struct exynos_drm_plane_config configs[WINDOWS_NR]; ··· 128 127 129 128 decon_clear_channels(ctx->crtc); 130 129 131 - return exynos_drm_register_dma(drm_dev, ctx->dev); 130 + return exynos_drm_register_dma(drm_dev, ctx->dev, &ctx->dma_priv); 132 131 } 133 132 134 133 static void decon_ctx_remove(struct decon_context *ctx) 135 134 { 136 135 /* detach this sub driver from iommu mapping if supported. */ 137 - exynos_drm_unregister_dma(ctx->drm_dev, ctx->dev); 136 + exynos_drm_unregister_dma(ctx->drm_dev, ctx->dev, &ctx->dma_priv); 138 137 } 139 138 140 139 static u32 decon_calc_clkdiv(struct decon_context *ctx,
+19 -9
drivers/gpu/drm/exynos/exynos_drm_dma.c
··· 58 58 * mapping. 59 59 */ 60 60 static int drm_iommu_attach_device(struct drm_device *drm_dev, 61 - struct device *subdrv_dev) 61 + struct device *subdrv_dev, void **dma_priv) 62 62 { 63 63 struct exynos_drm_private *priv = drm_dev->dev_private; 64 64 int ret; ··· 74 74 return ret; 75 75 76 76 if (IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)) { 77 - if (to_dma_iommu_mapping(subdrv_dev)) 77 + /* 78 + * Keep the original DMA mapping of the sub-device and 79 + * restore it on Exynos DRM detach, otherwise the DMA 80 + * framework considers it as IOMMU-less during the next 81 + * probe (in case of deferred probe or modular build) 82 + */ 83 + *dma_priv = to_dma_iommu_mapping(subdrv_dev); 84 + if (*dma_priv) 78 85 arm_iommu_detach_device(subdrv_dev); 79 86 80 87 ret = arm_iommu_attach_device(subdrv_dev, priv->mapping); ··· 105 98 * mapping 106 99 */ 107 100 static void drm_iommu_detach_device(struct drm_device *drm_dev, 108 - struct device *subdrv_dev) 101 + struct device *subdrv_dev, void **dma_priv) 109 102 { 110 103 struct exynos_drm_private *priv = drm_dev->dev_private; 111 104 112 - if (IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)) 105 + if (IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)) { 113 106 arm_iommu_detach_device(subdrv_dev); 114 - else if (IS_ENABLED(CONFIG_IOMMU_DMA)) 107 + arm_iommu_attach_device(subdrv_dev, *dma_priv); 108 + } else if (IS_ENABLED(CONFIG_IOMMU_DMA)) 115 109 iommu_detach_device(priv->mapping, subdrv_dev); 116 110 117 111 clear_dma_max_seg_size(subdrv_dev); 118 112 } 119 113 120 - int exynos_drm_register_dma(struct drm_device *drm, struct device *dev) 114 + int exynos_drm_register_dma(struct drm_device *drm, struct device *dev, 115 + void **dma_priv) 121 116 { 122 117 struct exynos_drm_private *priv = drm->dev_private; 123 118 ··· 146 137 priv->mapping = mapping; 147 138 } 148 139 149 - return drm_iommu_attach_device(drm, dev); 140 + return drm_iommu_attach_device(drm, dev, dma_priv); 150 141 } 151 142 152 - void exynos_drm_unregister_dma(struct drm_device 
*drm, struct device *dev) 143 + void exynos_drm_unregister_dma(struct drm_device *drm, struct device *dev, 144 + void **dma_priv) 153 145 { 154 146 if (IS_ENABLED(CONFIG_EXYNOS_IOMMU)) 155 - drm_iommu_detach_device(drm, dev); 147 + drm_iommu_detach_device(drm, dev, dma_priv); 156 148 } 157 149 158 150 void exynos_drm_cleanup_dma(struct drm_device *drm)
+4 -2
drivers/gpu/drm/exynos/exynos_drm_drv.h
··· 223 223 return priv->mapping ? true : false; 224 224 } 225 225 226 - int exynos_drm_register_dma(struct drm_device *drm, struct device *dev); 227 - void exynos_drm_unregister_dma(struct drm_device *drm, struct device *dev); 226 + int exynos_drm_register_dma(struct drm_device *drm, struct device *dev, 227 + void **dma_priv); 228 + void exynos_drm_unregister_dma(struct drm_device *drm, struct device *dev, 229 + void **dma_priv); 228 230 void exynos_drm_cleanup_dma(struct drm_device *drm); 229 231 230 232 #ifdef CONFIG_DRM_EXYNOS_DPI
+3 -2
drivers/gpu/drm/exynos/exynos_drm_fimc.c
··· 97 97 struct fimc_context { 98 98 struct exynos_drm_ipp ipp; 99 99 struct drm_device *drm_dev; 100 + void *dma_priv; 100 101 struct device *dev; 101 102 struct exynos_drm_ipp_task *task; 102 103 struct exynos_drm_ipp_formats *formats; ··· 1134 1133 1135 1134 ctx->drm_dev = drm_dev; 1136 1135 ipp->drm_dev = drm_dev; 1137 - exynos_drm_register_dma(drm_dev, dev); 1136 + exynos_drm_register_dma(drm_dev, dev, &ctx->dma_priv); 1138 1137 1139 1138 exynos_drm_ipp_register(dev, ipp, &ipp_funcs, 1140 1139 DRM_EXYNOS_IPP_CAP_CROP | DRM_EXYNOS_IPP_CAP_ROTATE | ··· 1154 1153 struct exynos_drm_ipp *ipp = &ctx->ipp; 1155 1154 1156 1155 exynos_drm_ipp_unregister(dev, ipp); 1157 - exynos_drm_unregister_dma(drm_dev, dev); 1156 + exynos_drm_unregister_dma(drm_dev, dev, &ctx->dma_priv); 1158 1157 } 1159 1158 1160 1159 static const struct component_ops fimc_component_ops = {
+3 -2
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 167 167 struct fimd_context { 168 168 struct device *dev; 169 169 struct drm_device *drm_dev; 170 + void *dma_priv; 170 171 struct exynos_drm_crtc *crtc; 171 172 struct exynos_drm_plane planes[WINDOWS_NR]; 172 173 struct exynos_drm_plane_config configs[WINDOWS_NR]; ··· 1091 1090 if (is_drm_iommu_supported(drm_dev)) 1092 1091 fimd_clear_channels(ctx->crtc); 1093 1092 1094 - return exynos_drm_register_dma(drm_dev, dev); 1093 + return exynos_drm_register_dma(drm_dev, dev, &ctx->dma_priv); 1095 1094 } 1096 1095 1097 1096 static void fimd_unbind(struct device *dev, struct device *master, ··· 1101 1100 1102 1101 fimd_atomic_disable(ctx->crtc); 1103 1102 1104 - exynos_drm_unregister_dma(ctx->drm_dev, ctx->dev); 1103 + exynos_drm_unregister_dma(ctx->drm_dev, ctx->dev, &ctx->dma_priv); 1105 1104 1106 1105 if (ctx->encoder) 1107 1106 exynos_dpi_remove(ctx->encoder);
+3 -2
drivers/gpu/drm/exynos/exynos_drm_g2d.c
··· 232 232 233 233 struct g2d_data { 234 234 struct device *dev; 235 + void *dma_priv; 235 236 struct clk *gate_clk; 236 237 void __iomem *regs; 237 238 int irq; ··· 1410 1409 return ret; 1411 1410 } 1412 1411 1413 - ret = exynos_drm_register_dma(drm_dev, dev); 1412 + ret = exynos_drm_register_dma(drm_dev, dev, &g2d->dma_priv); 1414 1413 if (ret < 0) { 1415 1414 dev_err(dev, "failed to enable iommu.\n"); 1416 1415 g2d_fini_cmdlist(g2d); ··· 1435 1434 priv->g2d_dev = NULL; 1436 1435 1437 1436 cancel_work_sync(&g2d->runqueue_work); 1438 - exynos_drm_unregister_dma(g2d->drm_dev, dev); 1437 + exynos_drm_unregister_dma(g2d->drm_dev, dev, &g2d->dma_priv); 1439 1438 } 1440 1439 1441 1440 static const struct component_ops g2d_component_ops = {
+3 -2
drivers/gpu/drm/exynos/exynos_drm_gsc.c
··· 97 97 struct gsc_context { 98 98 struct exynos_drm_ipp ipp; 99 99 struct drm_device *drm_dev; 100 + void *dma_priv; 100 101 struct device *dev; 101 102 struct exynos_drm_ipp_task *task; 102 103 struct exynos_drm_ipp_formats *formats; ··· 1170 1169 1171 1170 ctx->drm_dev = drm_dev; 1172 1171 ctx->drm_dev = drm_dev; 1173 - exynos_drm_register_dma(drm_dev, dev); 1172 + exynos_drm_register_dma(drm_dev, dev, &ctx->dma_priv); 1174 1173 1175 1174 exynos_drm_ipp_register(dev, ipp, &ipp_funcs, 1176 1175 DRM_EXYNOS_IPP_CAP_CROP | DRM_EXYNOS_IPP_CAP_ROTATE | ··· 1190 1189 struct exynos_drm_ipp *ipp = &ctx->ipp; 1191 1190 1192 1191 exynos_drm_ipp_unregister(dev, ipp); 1193 - exynos_drm_unregister_dma(drm_dev, dev); 1192 + exynos_drm_unregister_dma(drm_dev, dev, &ctx->dma_priv); 1194 1193 } 1195 1194 1196 1195 static const struct component_ops gsc_component_ops = {
+3 -2
drivers/gpu/drm/exynos/exynos_drm_rotator.c
··· 56 56 struct rot_context { 57 57 struct exynos_drm_ipp ipp; 58 58 struct drm_device *drm_dev; 59 + void *dma_priv; 59 60 struct device *dev; 60 61 void __iomem *regs; 61 62 struct clk *clock; ··· 244 243 245 244 rot->drm_dev = drm_dev; 246 245 ipp->drm_dev = drm_dev; 247 - exynos_drm_register_dma(drm_dev, dev); 246 + exynos_drm_register_dma(drm_dev, dev, &rot->dma_priv); 248 247 249 248 exynos_drm_ipp_register(dev, ipp, &ipp_funcs, 250 249 DRM_EXYNOS_IPP_CAP_CROP | DRM_EXYNOS_IPP_CAP_ROTATE, ··· 262 261 struct exynos_drm_ipp *ipp = &rot->ipp; 263 262 264 263 exynos_drm_ipp_unregister(dev, ipp); 265 - exynos_drm_unregister_dma(rot->drm_dev, rot->dev); 264 + exynos_drm_unregister_dma(rot->drm_dev, rot->dev, &rot->dma_priv); 266 265 } 267 266 268 267 static const struct component_ops rotator_component_ops = {
+4 -2
drivers/gpu/drm/exynos/exynos_drm_scaler.c
··· 39 39 struct scaler_context { 40 40 struct exynos_drm_ipp ipp; 41 41 struct drm_device *drm_dev; 42 + void *dma_priv; 42 43 struct device *dev; 43 44 void __iomem *regs; 44 45 struct clk *clock[SCALER_MAX_CLK]; ··· 451 450 452 451 scaler->drm_dev = drm_dev; 453 452 ipp->drm_dev = drm_dev; 454 - exynos_drm_register_dma(drm_dev, dev); 453 + exynos_drm_register_dma(drm_dev, dev, &scaler->dma_priv); 455 454 456 455 exynos_drm_ipp_register(dev, ipp, &ipp_funcs, 457 456 DRM_EXYNOS_IPP_CAP_CROP | DRM_EXYNOS_IPP_CAP_ROTATE | ··· 471 470 struct exynos_drm_ipp *ipp = &scaler->ipp; 472 471 473 472 exynos_drm_ipp_unregister(dev, ipp); 474 - exynos_drm_unregister_dma(scaler->drm_dev, scaler->dev); 473 + exynos_drm_unregister_dma(scaler->drm_dev, scaler->dev, 474 + &scaler->dma_priv); 475 475 } 476 476 477 477 static const struct component_ops scaler_component_ops = {
+5 -2
drivers/gpu/drm/exynos/exynos_mixer.c
··· 94 94 struct platform_device *pdev; 95 95 struct device *dev; 96 96 struct drm_device *drm_dev; 97 + void *dma_priv; 97 98 struct exynos_drm_crtc *crtc; 98 99 struct exynos_drm_plane planes[MIXER_WIN_NR]; 99 100 unsigned long flags; ··· 895 894 } 896 895 } 897 896 898 - return exynos_drm_register_dma(drm_dev, mixer_ctx->dev); 897 + return exynos_drm_register_dma(drm_dev, mixer_ctx->dev, 898 + &mixer_ctx->dma_priv); 899 899 } 900 900 901 901 static void mixer_ctx_remove(struct mixer_context *mixer_ctx) 902 902 { 903 - exynos_drm_unregister_dma(mixer_ctx->drm_dev, mixer_ctx->dev); 903 + exynos_drm_unregister_dma(mixer_ctx->drm_dev, mixer_ctx->dev, 904 + &mixer_ctx->dma_priv); 904 905 } 905 906 906 907 static int mixer_enable_vblank(struct exynos_drm_crtc *crtc)
+2 -1
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
··· 423 423 if (unlikely(entry->flags & eb->invalid_flags)) 424 424 return -EINVAL; 425 425 426 - if (unlikely(entry->alignment && !is_power_of_2(entry->alignment))) 426 + if (unlikely(entry->alignment && 427 + !is_power_of_2_u64(entry->alignment))) 427 428 return -EINVAL; 428 429 429 430 /*
+30 -51
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 1600 1600 spin_unlock(&old->breadcrumbs.irq_lock); 1601 1601 } 1602 1602 1603 - static struct i915_request * 1604 - last_active(const struct intel_engine_execlists *execlists) 1605 - { 1606 - struct i915_request * const *last = READ_ONCE(execlists->active); 1607 - 1608 - while (*last && i915_request_completed(*last)) 1609 - last++; 1610 - 1611 - return *last; 1612 - } 1613 - 1614 1603 #define for_each_waiter(p__, rq__) \ 1615 1604 list_for_each_entry_lockless(p__, \ 1616 1605 &(rq__)->sched.waiters_list, \ ··· 1668 1679 if (!intel_engine_has_timeslices(engine)) 1669 1680 return false; 1670 1681 1671 - if (list_is_last(&rq->sched.link, &engine->active.requests)) 1672 - return false; 1673 - 1674 - hint = max(rq_prio(list_next_entry(rq, sched.link)), 1675 - engine->execlists.queue_priority_hint); 1682 + hint = engine->execlists.queue_priority_hint; 1683 + if (!list_is_last(&rq->sched.link, &engine->active.requests)) 1684 + hint = max(hint, rq_prio(list_next_entry(rq, sched.link))); 1676 1685 1677 1686 return hint >= effective_prio(rq); 1678 1687 } ··· 1712 1725 set_timer_ms(&engine->execlists.timer, active_timeslice(engine)); 1713 1726 } 1714 1727 1728 + static void start_timeslice(struct intel_engine_cs *engine) 1729 + { 1730 + struct intel_engine_execlists *execlists = &engine->execlists; 1731 + 1732 + execlists->switch_priority_hint = execlists->queue_priority_hint; 1733 + 1734 + if (timer_pending(&execlists->timer)) 1735 + return; 1736 + 1737 + set_timer_ms(&execlists->timer, timeslice(engine)); 1738 + } 1739 + 1715 1740 static void record_preemption(struct intel_engine_execlists *execlists) 1716 1741 { 1717 1742 (void)I915_SELFTEST_ONLY(execlists->preempt_hang.count++); 1718 1743 } 1719 1744 1720 - static unsigned long active_preempt_timeout(struct intel_engine_cs *engine) 1745 + static unsigned long active_preempt_timeout(struct intel_engine_cs *engine, 1746 + const struct i915_request *rq) 1721 1747 { 1722 - struct i915_request *rq; 1723 - 1724 - rq = 
last_active(&engine->execlists); 1725 1748 if (!rq) 1726 1749 return 0; 1727 1750 ··· 1742 1745 return READ_ONCE(engine->props.preempt_timeout_ms); 1743 1746 } 1744 1747 1745 - static void set_preempt_timeout(struct intel_engine_cs *engine) 1748 + static void set_preempt_timeout(struct intel_engine_cs *engine, 1749 + const struct i915_request *rq) 1746 1750 { 1747 1751 if (!intel_engine_has_preempt_reset(engine)) 1748 1752 return; 1749 1753 1750 1754 set_timer_ms(&engine->execlists.preempt, 1751 - active_preempt_timeout(engine)); 1755 + active_preempt_timeout(engine, rq)); 1752 1756 } 1753 1757 1754 1758 static inline void clear_ports(struct i915_request **ports, int count) ··· 1762 1764 struct intel_engine_execlists * const execlists = &engine->execlists; 1763 1765 struct i915_request **port = execlists->pending; 1764 1766 struct i915_request ** const last_port = port + execlists->port_mask; 1767 + struct i915_request * const *active; 1765 1768 struct i915_request *last; 1766 1769 struct rb_node *rb; 1767 1770 bool submit = false; ··· 1817 1818 * i.e. we will retrigger preemption following the ack in case 1818 1819 * of trouble. 1819 1820 */ 1820 - last = last_active(execlists); 1821 + active = READ_ONCE(execlists->active); 1822 + while ((last = *active) && i915_request_completed(last)) 1823 + active++; 1824 + 1821 1825 if (last) { 1822 1826 if (need_preempt(engine, last, rb)) { 1823 1827 ENGINE_TRACE(engine, ··· 1890 1888 * Even if ELSP[1] is occupied and not worthy 1891 1889 * of timeslices, our queue might be. 
1892 1890 */ 1893 - if (!execlists->timer.expires && 1894 - need_timeslice(engine, last)) 1895 - set_timer_ms(&execlists->timer, 1896 - timeslice(engine)); 1897 - 1891 + start_timeslice(engine); 1898 1892 return; 1899 1893 } 1900 1894 } ··· 1925 1927 1926 1928 if (last && !can_merge_rq(last, rq)) { 1927 1929 spin_unlock(&ve->base.active.lock); 1928 - return; /* leave this for another */ 1930 + start_timeslice(engine); 1931 + return; /* leave this for another sibling */ 1929 1932 } 1930 1933 1931 1934 ENGINE_TRACE(engine, ··· 2102 2103 * Skip if we ended up with exactly the same set of requests, 2103 2104 * e.g. trying to timeslice a pair of ordered contexts 2104 2105 */ 2105 - if (!memcmp(execlists->active, execlists->pending, 2106 + if (!memcmp(active, execlists->pending, 2106 2107 (port - execlists->pending + 1) * sizeof(*port))) { 2107 2108 do 2108 2109 execlists_schedule_out(fetch_and_zero(port)); ··· 2113 2114 clear_ports(port + 1, last_port - port); 2114 2115 2115 2116 execlists_submit_ports(engine); 2116 - set_preempt_timeout(engine); 2117 + set_preempt_timeout(engine, *active); 2117 2118 } else { 2118 2119 skip_submit: 2119 2120 ring_set_paused(engine, 0); ··· 4000 4001 4001 4002 *cs++ = preparser_disable(false); 4002 4003 intel_ring_advance(request, cs); 4003 - 4004 - /* 4005 - * Wa_1604544889:tgl 4006 - */ 4007 - if (IS_TGL_REVID(request->i915, TGL_REVID_A0, TGL_REVID_A0)) { 4008 - flags = 0; 4009 - flags |= PIPE_CONTROL_CS_STALL; 4010 - flags |= PIPE_CONTROL_HDC_PIPELINE_FLUSH; 4011 - 4012 - flags |= PIPE_CONTROL_STORE_DATA_INDEX; 4013 - flags |= PIPE_CONTROL_QW_WRITE; 4014 - 4015 - cs = intel_ring_begin(request, 6); 4016 - if (IS_ERR(cs)) 4017 - return PTR_ERR(cs); 4018 - 4019 - cs = gen8_emit_pipe_control(cs, flags, 4020 - LRC_PPHWSP_SCRATCH_ADDR); 4021 - intel_ring_advance(request, cs); 4022 - } 4023 4004 } 4024 4005 4025 4006 return 0;
+6 -2
drivers/gpu/drm/i915/gt/intel_timeline.c
··· 192 192 193 193 static void cacheline_free(struct intel_timeline_cacheline *cl) 194 194 { 195 + if (!i915_active_acquire_if_busy(&cl->active)) { 196 + __idle_cacheline_free(cl); 197 + return; 198 + } 199 + 195 200 GEM_BUG_ON(ptr_test_bit(cl->vaddr, CACHELINE_FREE)); 196 201 cl->vaddr = ptr_set_bit(cl->vaddr, CACHELINE_FREE); 197 202 198 - if (i915_active_is_idle(&cl->active)) 199 - __idle_cacheline_free(cl); 203 + i915_active_release(&cl->active); 200 204 } 201 205 202 206 int intel_timeline_init(struct intel_timeline *timeline,
+22 -3
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 1529 1529 return ERR_PTR(err); 1530 1530 } 1531 1531 1532 + static const struct { 1533 + u32 start; 1534 + u32 end; 1535 + } mcr_ranges_gen8[] = { 1536 + { .start = 0x5500, .end = 0x55ff }, 1537 + { .start = 0x7000, .end = 0x7fff }, 1538 + { .start = 0x9400, .end = 0x97ff }, 1539 + { .start = 0xb000, .end = 0xb3ff }, 1540 + { .start = 0xe000, .end = 0xe7ff }, 1541 + {}, 1542 + }; 1543 + 1532 1544 static bool mcr_range(struct drm_i915_private *i915, u32 offset) 1533 1545 { 1546 + int i; 1547 + 1548 + if (INTEL_GEN(i915) < 8) 1549 + return false; 1550 + 1534 1551 /* 1535 - * Registers in this range are affected by the MCR selector 1552 + * Registers in these ranges are affected by the MCR selector 1536 1553 * which only controls CPU initiated MMIO. Routing does not 1537 1554 * work for CS access so we cannot verify them on this path. 1538 1555 */ 1539 - if (INTEL_GEN(i915) >= 8 && (offset >= 0xb000 && offset <= 0xb4ff)) 1540 - return true; 1556 + for (i = 0; mcr_ranges_gen8[i].start; i++) 1557 + if (offset >= mcr_ranges_gen8[i].start && 1558 + offset <= mcr_ranges_gen8[i].end) 1559 + return true; 1541 1560 1542 1561 return false; 1543 1562 }
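The `mcr_range()` rework above replaces a single hard-coded register range with a sentinel-terminated table. A minimal userspace sketch of that lookup pattern (names and values copied from the patch; `in_mcr_range` is a hypothetical standalone wrapper, not the kernel function):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sentinel-terminated range table, as in the patch: the array ends
 * with a zeroed entry, so the walk stops at .start == 0 instead of
 * needing an explicit element count. */
struct mcr_range { uint32_t start, end; };

static const struct mcr_range mcr_ranges_gen8[] = {
	{ 0x5500, 0x55ff },
	{ 0x7000, 0x7fff },
	{ 0x9400, 0x97ff },
	{ 0xb000, 0xb3ff },
	{ 0xe000, 0xe7ff },
	{ 0, 0 },               /* sentinel terminates the walk */
};

static bool in_mcr_range(uint32_t offset)
{
	int i;

	for (i = 0; mcr_ranges_gen8[i].start; i++)
		if (offset >= mcr_ranges_gen8[i].start &&
		    offset <= mcr_ranges_gen8[i].end)
			return true;
	return false;
}
```

Note that the new table also narrows the old check: the previous single range covered `0xb000-0xb4ff`, while the table entry stops at `0xb3ff`.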
+2 -1
drivers/gpu/drm/i915/gvt/display.c
··· 457 457 struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; 458 458 459 459 /* TODO: add more platforms support */ 460 - if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) { 460 + if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv) || 461 + IS_COFFEELAKE(dev_priv)) { 461 462 if (connected) { 462 463 vgpu_vreg_t(vgpu, SFUSE_STRAP) |= 463 464 SFUSE_STRAP_DDID_DETECTED;
+2 -3
drivers/gpu/drm/i915/gvt/opregion.c
··· 147 147 /* there's features depending on version! */ 148 148 v->header.version = 155; 149 149 v->header.header_size = sizeof(v->header); 150 - v->header.vbt_size = sizeof(struct vbt) - sizeof(v->header); 150 + v->header.vbt_size = sizeof(struct vbt); 151 151 v->header.bdb_offset = offsetof(struct vbt, bdb_header); 152 152 153 153 strcpy(&v->bdb_header.signature[0], "BIOS_DATA_BLOCK"); 154 154 v->bdb_header.version = 186; /* child_dev_size = 33 */ 155 155 v->bdb_header.header_size = sizeof(v->bdb_header); 156 156 157 - v->bdb_header.bdb_size = sizeof(struct vbt) - sizeof(struct vbt_header) 158 - - sizeof(struct bdb_header); 157 + v->bdb_header.bdb_size = sizeof(struct vbt) - sizeof(struct vbt_header); 159 158 160 159 /* general features */ 161 160 v->general_features_header.id = BDB_GENERAL_FEATURES;
+9 -3
drivers/gpu/drm/i915/gvt/vgpu.c
··· 272 272 { 273 273 struct intel_gvt *gvt = vgpu->gvt; 274 274 275 - mutex_lock(&vgpu->vgpu_lock); 276 - 277 275 WARN(vgpu->active, "vGPU is still active!\n"); 278 276 277 + /* 278 + * remove idr first so later clean can judge if need to stop 279 + * service if no active vgpu. 280 + */ 281 + mutex_lock(&gvt->lock); 282 + idr_remove(&gvt->vgpu_idr, vgpu->id); 283 + mutex_unlock(&gvt->lock); 284 + 285 + mutex_lock(&vgpu->vgpu_lock); 279 286 intel_gvt_debugfs_remove_vgpu(vgpu); 280 287 intel_vgpu_clean_sched_policy(vgpu); 281 288 intel_vgpu_clean_submission(vgpu); ··· 297 290 mutex_unlock(&vgpu->vgpu_lock); 298 291 299 292 mutex_lock(&gvt->lock); 300 - idr_remove(&gvt->vgpu_idr, vgpu->id); 301 293 if (idr_is_empty(&gvt->vgpu_idr)) 302 294 intel_gvt_clean_irq(gvt); 303 295 intel_gvt_update_vgpu_types(gvt);
+20 -8
drivers/gpu/drm/i915/i915_request.c
··· 527 527 return NOTIFY_DONE; 528 528 } 529 529 530 + static void irq_semaphore_cb(struct irq_work *wrk) 531 + { 532 + struct i915_request *rq = 533 + container_of(wrk, typeof(*rq), semaphore_work); 534 + 535 + i915_schedule_bump_priority(rq, I915_PRIORITY_NOSEMAPHORE); 536 + i915_request_put(rq); 537 + } 538 + 530 539 static int __i915_sw_fence_call 531 540 semaphore_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state) 532 541 { 533 - struct i915_request *request = 534 - container_of(fence, typeof(*request), semaphore); 542 + struct i915_request *rq = container_of(fence, typeof(*rq), semaphore); 535 543 536 544 switch (state) { 537 545 case FENCE_COMPLETE: 538 - i915_schedule_bump_priority(request, I915_PRIORITY_NOSEMAPHORE); 546 + if (!(READ_ONCE(rq->sched.attr.priority) & I915_PRIORITY_NOSEMAPHORE)) { 547 + i915_request_get(rq); 548 + init_irq_work(&rq->semaphore_work, irq_semaphore_cb); 549 + irq_work_queue(&rq->semaphore_work); 550 + } 539 551 break; 540 552 541 553 case FENCE_FREE: 542 - i915_request_put(request); 554 + i915_request_put(rq); 543 555 break; 544 556 } 545 557 ··· 788 776 struct dma_fence *fence; 789 777 int err; 790 778 791 - GEM_BUG_ON(i915_request_timeline(rq) == 792 - rcu_access_pointer(signal->timeline)); 779 + if (i915_request_timeline(rq) == rcu_access_pointer(signal->timeline)) 780 + return 0; 793 781 794 782 if (i915_request_started(signal)) 795 783 return 0; ··· 833 821 return 0; 834 822 835 823 err = 0; 836 - if (intel_timeline_sync_is_later(i915_request_timeline(rq), fence)) 824 + if (!intel_timeline_sync_is_later(i915_request_timeline(rq), fence)) 837 825 err = i915_sw_fence_await_dma_fence(&rq->submit, 838 826 fence, 0, 839 827 I915_FENCE_GFP); ··· 1330 1318 * decide whether to preempt the entire chain so that it is ready to 1331 1319 * run at the earliest possible convenience. 
1332 1320 */ 1333 - i915_sw_fence_commit(&rq->semaphore); 1334 1321 if (attr && rq->engine->schedule) 1335 1322 rq->engine->schedule(rq, attr); 1323 + i915_sw_fence_commit(&rq->semaphore); 1336 1324 i915_sw_fence_commit(&rq->submit); 1337 1325 } 1338 1326
+2
drivers/gpu/drm/i915/i915_request.h
··· 26 26 #define I915_REQUEST_H 27 27 28 28 #include <linux/dma-fence.h> 29 + #include <linux/irq_work.h> 29 30 #include <linux/lockdep.h> 30 31 31 32 #include "gem/i915_gem_context_types.h" ··· 209 208 }; 210 209 struct list_head execute_cb; 211 210 struct i915_sw_fence semaphore; 211 + struct irq_work semaphore_work; 212 212 213 213 /* 214 214 * A list of everyone we wait upon, and everyone who waits upon us.
+5
drivers/gpu/drm/i915/i915_utils.h
··· 234 234 __idx; \ 235 235 }) 236 236 237 + static inline bool is_power_of_2_u64(u64 n) 238 + { 239 + return (n != 0 && ((n & (n - 1)) == 0)); 240 + } 241 + 237 242 static inline void __list_del_many(struct list_head *head, 238 243 struct list_head *first) 239 244 {
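The new `is_power_of_2_u64()` helper uses the classic clear-lowest-set-bit trick. A standalone sketch of the same logic, using `<stdint.h>` types in place of the kernel's `u64`:

```c
#include <stdbool.h>
#include <stdint.h>

/* A power of two has exactly one bit set, so clearing the lowest set
 * bit (n & (n - 1)) leaves zero. The n != 0 guard excludes zero,
 * which would otherwise pass the test. */
static bool is_power_of_2_u64(uint64_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}
```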
+2
drivers/hid/hid-google-hammer.c
··· 533 533 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 534 534 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MASTERBALL) }, 535 535 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 536 + USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MOONBALL) }, 537 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 536 538 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_STAFF) }, 537 539 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 538 540 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_WAND) },
+2
drivers/hid/hid-ids.h
··· 478 478 #define USB_DEVICE_ID_GOOGLE_WHISKERS 0x5030 479 479 #define USB_DEVICE_ID_GOOGLE_MASTERBALL 0x503c 480 480 #define USB_DEVICE_ID_GOOGLE_MAGNEMITE 0x503d 481 + #define USB_DEVICE_ID_GOOGLE_MOONBALL 0x5044 481 482 482 483 #define USB_VENDOR_ID_GOTOP 0x08f2 483 484 #define USB_DEVICE_ID_SUPER_Q2 0x007f ··· 727 726 #define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085 728 727 #define USB_DEVICE_ID_LENOVO_X1_TAB 0x60a3 729 728 #define USB_DEVICE_ID_LENOVO_X1_TAB3 0x60b5 729 + #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D 0x608d 730 730 731 731 #define USB_VENDOR_ID_LG 0x1fd2 732 732 #define USB_DEVICE_ID_LG_MULTITOUCH 0x0064
+2 -2
drivers/hid/hid-picolcd_fb.c
··· 458 458 if (ret >= PAGE_SIZE) 459 459 break; 460 460 else if (i == fb_update_rate) 461 - ret += snprintf(buf+ret, PAGE_SIZE-ret, "[%u] ", i); 461 + ret += scnprintf(buf+ret, PAGE_SIZE-ret, "[%u] ", i); 462 462 else 463 - ret += snprintf(buf+ret, PAGE_SIZE-ret, "%u ", i); 463 + ret += scnprintf(buf+ret, PAGE_SIZE-ret, "%u ", i); 464 464 if (ret > 0) 465 465 buf[min(ret, (size_t)PAGE_SIZE)-1] = '\n'; 466 466 return ret;
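The `snprintf` → `scnprintf` conversions in this and the following sysfs-show fixes hinge on the two functions' return values. A userspace sketch of the kernel's `scnprintf()` semantics (the real one lives in `lib/vsprintf.c`; `my_scnprintf` is a hypothetical stand-in):

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* snprintf() returns the length the output *would* have taken, so
 * accumulating its return value can push the write offset past the
 * end of the buffer. scnprintf() returns the number of characters
 * actually written (excluding the NUL), so the offset stays valid. */
static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);

	if (size == 0)
		return 0;
	return i < (int)size ? i : (int)(size - 1);
}
```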
+1
drivers/hid/hid-quirks.c
··· 103 103 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_PENSKETCH_M912), HID_QUIRK_MULTI_INPUT }, 104 104 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M406XE), HID_QUIRK_MULTI_INPUT }, 105 105 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE_ID2), HID_QUIRK_ALWAYS_POLL }, 106 + { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D), HID_QUIRK_ALWAYS_POLL }, 106 107 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C007), HID_QUIRK_ALWAYS_POLL }, 107 108 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077), HID_QUIRK_ALWAYS_POLL }, 108 109 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_KEYBOARD_G710_PLUS), HID_QUIRK_NOGET },
+3 -3
drivers/hid/hid-sensor-custom.c
··· 313 313 314 314 while (i < ret) { 315 315 if (i + attribute->size > ret) { 316 - len += snprintf(&buf[len], 316 + len += scnprintf(&buf[len], 317 317 PAGE_SIZE - len, 318 318 "%d ", values[i]); 319 319 break; ··· 336 336 ++i; 337 337 break; 338 338 } 339 - len += snprintf(&buf[len], PAGE_SIZE - len, 339 + len += scnprintf(&buf[len], PAGE_SIZE - len, 340 340 "%lld ", value); 341 341 } 342 - len += snprintf(&buf[len], PAGE_SIZE - len, "\n"); 342 + len += scnprintf(&buf[len], PAGE_SIZE - len, "\n"); 343 343 344 344 return len; 345 345 } else if (input)
+7 -6
drivers/hwtracing/intel_th/msu.c
··· 719 719 720 720 if (old != expect) { 721 721 ret = -EINVAL; 722 - dev_warn_ratelimited(msc_dev(win->msc), 723 - "expected lockout state %d, got %d\n", 724 - expect, old); 725 722 goto unlock; 726 723 } 727 724 ··· 739 742 /* from intel_th_msc_window_unlock(), don't warn if not locked */ 740 743 if (expect == WIN_LOCKED && old == new) 741 744 return 0; 745 + 746 + dev_warn_ratelimited(msc_dev(win->msc), 747 + "expected lockout state %d, got %d\n", 748 + expect, old); 742 749 } 743 750 744 751 return ret; ··· 762 761 lockdep_assert_held(&msc->buf_mutex); 763 762 764 763 if (msc->mode > MSC_MODE_MULTI) 765 - return -ENOTSUPP; 764 + return -EINVAL; 766 765 767 766 if (msc->mode == MSC_MODE_MULTI) { 768 767 if (msc_win_set_lockout(msc->cur_win, WIN_READY, WIN_INUSE)) ··· 1296 1295 } else if (msc->mode == MSC_MODE_MULTI) { 1297 1296 ret = msc_buffer_multi_alloc(msc, nr_pages, nr_wins); 1298 1297 } else { 1299 - ret = -ENOTSUPP; 1298 + ret = -EINVAL; 1300 1299 } 1301 1300 1302 1301 if (!ret) { ··· 1532 1531 if (ret >= 0) 1533 1532 *ppos = iter->offset; 1534 1533 } else { 1535 - ret = -ENOTSUPP; 1534 + ret = -EINVAL; 1536 1535 } 1537 1536 1538 1537 put_count:
+5
drivers/hwtracing/intel_th/pci.c
··· 239 239 .driver_data = (kernel_ulong_t)&intel_th_2x, 240 240 }, 241 241 { 242 + /* Elkhart Lake CPU */ 243 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4529), 244 + .driver_data = (kernel_ulong_t)&intel_th_2x, 245 + }, 246 + { 242 247 /* Elkhart Lake */ 243 248 PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4b26), 244 249 .driver_data = (kernel_ulong_t)&intel_th_2x,
+3 -3
drivers/hwtracing/stm/p_sys-t.c
··· 238 238 static inline bool sys_t_need_ts(struct sys_t_output *op) 239 239 { 240 240 if (op->node.ts_interval && 241 - time_after(op->ts_jiffies + op->node.ts_interval, jiffies)) { 241 + time_after(jiffies, op->ts_jiffies + op->node.ts_interval)) { 242 242 op->ts_jiffies = jiffies; 243 243 244 244 return true; ··· 250 250 static bool sys_t_need_clock_sync(struct sys_t_output *op) 251 251 { 252 252 if (op->node.clocksync_interval && 253 - time_after(op->clocksync_jiffies + op->node.clocksync_interval, 254 - jiffies)) { 253 + time_after(jiffies, 254 + op->clocksync_jiffies + op->node.clocksync_interval)) { 255 255 op->clocksync_jiffies = jiffies; 256 256 257 257 return true;
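Both hunks above fix swapped `time_after()` arguments, which inverted the "has the interval elapsed?" test. A sketch of the macro's logic for a 32-bit jiffies counter (the kernel version in `include/linux/jiffies.h` is a macro over `unsigned long`):

```c
#include <stdbool.h>
#include <stdint.h>

/* time_after(a, b) is true when a is *later* than b. The signed
 * subtraction makes the comparison correct even when the counter has
 * wrapped around, as long as the two values are within half the
 * counter range of each other. */
static bool time_after32(uint32_t a, uint32_t b)
{
	return (int32_t)(b - a) < 0;
}
```

With the arguments in the fixed order, `time_after(jiffies, op->ts_jiffies + interval)` becomes true once the deadline has passed.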
+1
drivers/i2c/busses/i2c-designware-pcidrv.c
··· 313 313 pm_runtime_get_noresume(&pdev->dev); 314 314 315 315 i2c_del_adapter(&dev->adapter); 316 + devm_free_irq(&pdev->dev, dev->irq, dev); 316 317 pci_free_irq_vectors(pdev); 317 318 } 318 319
+1 -1
drivers/i2c/busses/i2c-gpio.c
··· 348 348 if (ret == -ENOENT) 349 349 retdesc = ERR_PTR(-EPROBE_DEFER); 350 350 351 - if (ret != -EPROBE_DEFER) 351 + if (PTR_ERR(retdesc) != -EPROBE_DEFER) 352 352 dev_err(dev, "error trying to get descriptor: %d\n", ret); 353 353 354 354 return retdesc;
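The i2c-gpio fix works because the code remaps `-ENOENT` to an `ERR_PTR(-EPROBE_DEFER)`, so the deferral check must look at the encoded pointer, not the original errno. A userspace sketch of the kernel's error-pointer convention from `include/linux/err.h` (constants and casts simplified for illustration):

```c
#include <stdint.h>

#define MAX_ERRNO 4095

/* Small negative errnos occupy the top MAX_ERRNO values of the
 * address space, which no valid kernel pointer uses, so an errno can
 * be smuggled through a pointer return value. */
static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```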
+12 -33
drivers/i2c/busses/i2c-i801.c
··· 132 132 #define TCOBASE 0x050 133 133 #define TCOCTL 0x054 134 134 135 - #define ACPIBASE 0x040 136 - #define ACPIBASE_SMI_OFF 0x030 137 - #define ACPICTRL 0x044 138 - #define ACPICTRL_EN 0x080 139 - 140 135 #define SBREG_BAR 0x10 141 136 #define SBREG_SMBCTRL 0xc6000c 142 137 #define SBREG_SMBCTRL_DNV 0xcf000c ··· 1548 1553 pci_bus_write_config_byte(pci_dev->bus, devfn, 0xe1, hidden); 1549 1554 spin_unlock(&p2sb_spinlock); 1550 1555 1551 - res = &tco_res[ICH_RES_MEM_OFF]; 1556 + res = &tco_res[1]; 1552 1557 if (pci_dev->device == PCI_DEVICE_ID_INTEL_DNV_SMBUS) 1553 1558 res->start = (resource_size_t)base64_addr + SBREG_SMBCTRL_DNV; 1554 1559 else ··· 1558 1563 res->flags = IORESOURCE_MEM; 1559 1564 1560 1565 return platform_device_register_resndata(&pci_dev->dev, "iTCO_wdt", -1, 1561 - tco_res, 3, &spt_tco_platform_data, 1566 + tco_res, 2, &spt_tco_platform_data, 1562 1567 sizeof(spt_tco_platform_data)); 1563 1568 } 1564 1569 ··· 1571 1576 i801_add_tco_cnl(struct i801_priv *priv, struct pci_dev *pci_dev, 1572 1577 struct resource *tco_res) 1573 1578 { 1574 - return platform_device_register_resndata(&pci_dev->dev, "iTCO_wdt", -1, 1575 - tco_res, 2, &cnl_tco_platform_data, 1576 - sizeof(cnl_tco_platform_data)); 1579 + return platform_device_register_resndata(&pci_dev->dev, 1580 + "iTCO_wdt", -1, tco_res, 1, &cnl_tco_platform_data, 1581 + sizeof(cnl_tco_platform_data)); 1577 1582 } 1578 1583 1579 1584 static void i801_add_tco(struct i801_priv *priv) 1580 1585 { 1581 - u32 base_addr, tco_base, tco_ctl, ctrl_val; 1582 1586 struct pci_dev *pci_dev = priv->pci_dev; 1583 - struct resource tco_res[3], *res; 1584 - unsigned int devfn; 1587 + struct resource tco_res[2], *res; 1588 + u32 tco_base, tco_ctl; 1585 1589 1586 1590 /* If we have ACPI based watchdog use that instead */ 1587 1591 if (acpi_has_watchdog()) ··· 1595 1601 return; 1596 1602 1597 1603 memset(tco_res, 0, sizeof(tco_res)); 1598 - 1599 - res = &tco_res[ICH_RES_IO_TCO]; 1604 + /* 1605 + * Always populate 
the main iTCO IO resource here. The second entry 1606 + * for NO_REBOOT MMIO is filled by the SPT specific function. 1607 + */ 1608 + res = &tco_res[0]; 1600 1609 res->start = tco_base & ~1; 1601 1610 res->end = res->start + 32 - 1; 1602 1611 res->flags = IORESOURCE_IO; 1603 - 1604 - /* 1605 - * Power Management registers. 1606 - */ 1607 - devfn = PCI_DEVFN(PCI_SLOT(pci_dev->devfn), 2); 1608 - pci_bus_read_config_dword(pci_dev->bus, devfn, ACPIBASE, &base_addr); 1609 - 1610 - res = &tco_res[ICH_RES_IO_SMI]; 1611 - res->start = (base_addr & ~1) + ACPIBASE_SMI_OFF; 1612 - res->end = res->start + 3; 1613 - res->flags = IORESOURCE_IO; 1614 - 1615 - /* 1616 - * Enable the ACPI I/O space. 1617 - */ 1618 - pci_bus_read_config_dword(pci_dev->bus, devfn, ACPICTRL, &ctrl_val); 1619 - ctrl_val |= ACPICTRL_EN; 1620 - pci_bus_write_config_dword(pci_dev->bus, devfn, ACPICTRL, ctrl_val); 1621 1612 1622 1613 if (priv->features & FEATURE_TCO_CNL) 1623 1614 priv->tco_pdev = i801_add_tco_cnl(priv, pci_dev, tco_res);
+9 -1
drivers/i2c/i2c-core-acpi.c
··· 394 394 static struct i2c_client *i2c_acpi_find_client_by_adev(struct acpi_device *adev) 395 395 { 396 396 struct device *dev; 397 + struct i2c_client *client; 397 398 398 399 dev = bus_find_device_by_acpi_dev(&i2c_bus_type, adev); 399 - return dev ? i2c_verify_client(dev) : NULL; 400 + if (!dev) 401 + return NULL; 402 + 403 + client = i2c_verify_client(dev); 404 + if (!client) 405 + put_device(dev); 406 + 407 + return client; 400 408 } 401 409 402 410 static int i2c_acpi_notify(struct notifier_block *nb, unsigned long value,
+1
drivers/iio/accel/adxl372.c
··· 237 237 .realbits = 12, \ 238 238 .storagebits = 16, \ 239 239 .shift = 4, \ 240 + .endianness = IIO_BE, \ 240 241 }, \ 241 242 } 242 243
+1 -1
drivers/iio/accel/st_accel_i2c.c
··· 110 110 111 111 #ifdef CONFIG_ACPI 112 112 static const struct acpi_device_id st_accel_acpi_match[] = { 113 - {"SMO8840", (kernel_ulong_t)LNG2DM_ACCEL_DEV_NAME}, 113 + {"SMO8840", (kernel_ulong_t)LIS2DH12_ACCEL_DEV_NAME}, 114 114 {"SMO8A90", (kernel_ulong_t)LNG2DM_ACCEL_DEV_NAME}, 115 115 { }, 116 116 };
+15
drivers/iio/adc/at91-sama5d2_adc.c
··· 723 723 724 724 for_each_set_bit(bit, indio->active_scan_mask, indio->num_channels) { 725 725 struct iio_chan_spec const *chan = at91_adc_chan_get(indio, bit); 726 + u32 cor; 726 727 727 728 if (!chan) 728 729 continue; ··· 731 730 if (chan->type == IIO_POSITIONRELATIVE || 732 731 chan->type == IIO_PRESSURE) 733 732 continue; 733 + 734 + if (state) { 735 + cor = at91_adc_readl(st, AT91_SAMA5D2_COR); 736 + 737 + if (chan->differential) 738 + cor |= (BIT(chan->channel) | 739 + BIT(chan->channel2)) << 740 + AT91_SAMA5D2_COR_DIFF_OFFSET; 741 + else 742 + cor &= ~(BIT(chan->channel) << 743 + AT91_SAMA5D2_COR_DIFF_OFFSET); 744 + 745 + at91_adc_writel(st, AT91_SAMA5D2_COR, cor); 746 + } 734 747 735 748 if (state) { 736 749 at91_adc_writel(st, AT91_SAMA5D2_CHER,
+10 -33
drivers/iio/adc/stm32-dfsdm-adc.c
··· 842 842 } 843 843 } 844 844 845 - static irqreturn_t stm32_dfsdm_adc_trigger_handler(int irq, void *p) 846 - { 847 - struct iio_poll_func *pf = p; 848 - struct iio_dev *indio_dev = pf->indio_dev; 849 - struct stm32_dfsdm_adc *adc = iio_priv(indio_dev); 850 - int available = stm32_dfsdm_adc_dma_residue(adc); 851 - 852 - while (available >= indio_dev->scan_bytes) { 853 - s32 *buffer = (s32 *)&adc->rx_buf[adc->bufi]; 854 - 855 - stm32_dfsdm_process_data(adc, buffer); 856 - 857 - iio_push_to_buffers_with_timestamp(indio_dev, buffer, 858 - pf->timestamp); 859 - available -= indio_dev->scan_bytes; 860 - adc->bufi += indio_dev->scan_bytes; 861 - if (adc->bufi >= adc->buf_sz) 862 - adc->bufi = 0; 863 - } 864 - 865 - iio_trigger_notify_done(indio_dev->trig); 866 - 867 - return IRQ_HANDLED; 868 - } 869 - 870 845 static void stm32_dfsdm_dma_buffer_done(void *data) 871 846 { 872 847 struct iio_dev *indio_dev = data; 873 848 struct stm32_dfsdm_adc *adc = iio_priv(indio_dev); 874 849 int available = stm32_dfsdm_adc_dma_residue(adc); 875 850 size_t old_pos; 876 - 877 - if (indio_dev->currentmode & INDIO_BUFFER_TRIGGERED) { 878 - iio_trigger_poll_chained(indio_dev->trig); 879 - return; 880 - } 881 851 882 852 /* 883 853 * FIXME: In Kernel interface does not support cyclic DMA buffer,and ··· 876 906 adc->bufi = 0; 877 907 old_pos = 0; 878 908 } 879 - /* regular iio buffer without trigger */ 909 + /* 910 + * In DMA mode the trigger services of IIO are not used 911 + * (e.g. no call to iio_trigger_poll). 912 + * Calling irq handler associated to the hardware trigger is not 913 + * relevant as the conversions have already been done. Data 914 + * transfers are performed directly in DMA callback instead. 915 + * This implementation avoids to call trigger irq handler that 916 + * may sleep, in an atomic context (DMA irq handler context). 
917 + */ 880 918 if (adc->dev_data->type == DFSDM_IIO) 881 919 iio_push_to_buffers(indio_dev, buffer); 882 920 } ··· 1514 1536 } 1515 1537 1516 1538 ret = iio_triggered_buffer_setup(indio_dev, 1517 - &iio_pollfunc_store_time, 1518 - &stm32_dfsdm_adc_trigger_handler, 1539 + &iio_pollfunc_store_time, NULL, 1519 1540 &stm32_dfsdm_buffer_setup_ops); 1520 1541 if (ret) { 1521 1542 stm32_dfsdm_dma_release(indio_dev);
+2
drivers/iio/chemical/Kconfig
··· 91 91 tristate "SPS30 particulate matter sensor" 92 92 depends on I2C 93 93 select CRC8 94 + select IIO_BUFFER 95 + select IIO_TRIGGERED_BUFFER 94 96 help 95 97 Say Y here to build support for the Sensirion SPS30 particulate 96 98 matter sensor.
+8 -7
drivers/iio/light/vcnl4000.c
··· 167 167 data->vcnl4200_ps.reg = VCNL4200_PS_DATA; 168 168 switch (id) { 169 169 case VCNL4200_PROD_ID: 170 - /* Integration time is 50ms, but the experiments */ 171 - /* show 54ms in total. */ 172 - data->vcnl4200_al.sampling_rate = ktime_set(0, 54000 * 1000); 173 - data->vcnl4200_ps.sampling_rate = ktime_set(0, 4200 * 1000); 170 + /* Default wait time is 50ms, add 20% tolerance. */ 171 + data->vcnl4200_al.sampling_rate = ktime_set(0, 60000 * 1000); 172 + /* Default wait time is 4.8ms, add 20% tolerance. */ 173 + data->vcnl4200_ps.sampling_rate = ktime_set(0, 5760 * 1000); 174 174 data->al_scale = 24000; 175 175 break; 176 176 case VCNL4040_PROD_ID: 177 - /* Integration time is 80ms, add 10ms. */ 178 - data->vcnl4200_al.sampling_rate = ktime_set(0, 100000 * 1000); 179 - data->vcnl4200_ps.sampling_rate = ktime_set(0, 100000 * 1000); 177 + /* Default wait time is 80ms, add 20% tolerance. */ 178 + data->vcnl4200_al.sampling_rate = ktime_set(0, 96000 * 1000); 179 + /* Default wait time is 5ms, add 20% tolerance. */ 180 + data->vcnl4200_ps.sampling_rate = ktime_set(0, 6000 * 1000); 180 181 data->al_scale = 120000; 181 182 break; 182 183 }
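The new vcnl4000 sampling periods are the datasheet wait times plus a 20% margin: 50 ms → 60 ms, 4.8 ms → 5.76 ms, 80 ms → 96 ms, 5 ms → 6 ms. A hypothetical helper making the derivation explicit (microseconds keep the 4.8 ms case integral):

```c
#include <stdint.h>

/* Add a 20% tolerance to a datasheet wait time given in microseconds. */
static uint64_t with_tolerance_us(uint64_t wait_us)
{
	return wait_us + wait_us / 5;    /* +20% */
}
```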
+1 -1
drivers/iio/magnetometer/ak8974.c
··· 564 564 * We read all axes and discard all but one, for optimized 565 565 * reading, use the triggered buffer. 566 566 */ 567 - *val = le16_to_cpu(hw_values[chan->address]); 567 + *val = (s16)le16_to_cpu(hw_values[chan->address]); 568 568 569 569 ret = IIO_VAL_INT; 570 570 }
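The ak8974 one-liner adds an `(s16)` cast before widening. A sketch of why that matters, assuming the register holds a two's-complement 16-bit reading (`decode_axis` is an illustrative name, not the driver's):

```c
#include <stdint.h>

/* le16_to_cpu() yields an unsigned 16-bit value; assigning it
 * directly to a wider int keeps e.g. 0xffff as 65535. Casting
 * through int16_t first sign-extends, recovering the negative
 * reading the sensor actually produced. */
static int decode_axis(uint16_t raw)
{
	return (int16_t)raw;
}
```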
+1 -1
drivers/iio/proximity/ping.c
··· 269 269 270 270 static const struct of_device_id of_ping_match[] = { 271 271 { .compatible = "parallax,ping", .data = &pa_ping_cfg}, 272 - { .compatible = "parallax,laserping", .data = &pa_ping_cfg}, 272 + { .compatible = "parallax,laserping", .data = &pa_laser_ping_cfg}, 273 273 {}, 274 274 }; 275 275
+9 -2
drivers/iio/trigger/stm32-timer-trigger.c
··· 161 161 return 0; 162 162 } 163 163 164 - static void stm32_timer_stop(struct stm32_timer_trigger *priv) 164 + static void stm32_timer_stop(struct stm32_timer_trigger *priv, 165 + struct iio_trigger *trig) 165 166 { 166 167 u32 ccer, cr1; 167 168 ··· 179 178 regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, 0); 180 179 regmap_write(priv->regmap, TIM_PSC, 0); 181 180 regmap_write(priv->regmap, TIM_ARR, 0); 181 + 182 + /* Force disable master mode */ 183 + if (stm32_timer_is_trgo2_name(trig->name)) 184 + regmap_update_bits(priv->regmap, TIM_CR2, TIM_CR2_MMS2, 0); 185 + else 186 + regmap_update_bits(priv->regmap, TIM_CR2, TIM_CR2_MMS, 0); 182 187 183 188 /* Make sure that registers are updated */ 184 189 regmap_update_bits(priv->regmap, TIM_EGR, TIM_EGR_UG, TIM_EGR_UG); ··· 204 197 return ret; 205 198 206 199 if (freq == 0) { 207 - stm32_timer_stop(priv); 200 + stm32_timer_stop(priv, trig); 208 201 } else { 209 202 ret = stm32_timer_start(priv, trig, freq); 210 203 if (ret)
+2 -2
drivers/iommu/amd_iommu.c
··· 3826 3826 entry->lo.fields_vapic.ga_tag = ir_data->ga_tag; 3827 3827 3828 3828 return modify_irte_ga(ir_data->irq_2_irte.devid, 3829 - ir_data->irq_2_irte.index, entry, NULL); 3829 + ir_data->irq_2_irte.index, entry, ir_data); 3830 3830 } 3831 3831 EXPORT_SYMBOL(amd_iommu_activate_guest_mode); 3832 3832 ··· 3852 3852 APICID_TO_IRTE_DEST_HI(cfg->dest_apicid); 3853 3853 3854 3854 return modify_irte_ga(ir_data->irq_2_irte.devid, 3855 - ir_data->irq_2_irte.index, entry, NULL); 3855 + ir_data->irq_2_irte.index, entry, ir_data); 3856 3856 } 3857 3857 EXPORT_SYMBOL(amd_iommu_deactivate_guest_mode); 3858 3858
+8 -8
drivers/iommu/dma-iommu.c
··· 177 177 start -= iova_offset(iovad, start); 178 178 num_pages = iova_align(iovad, end - start) >> iova_shift(iovad); 179 179 180 - msi_page = kcalloc(num_pages, sizeof(*msi_page), GFP_KERNEL); 181 - if (!msi_page) 182 - return -ENOMEM; 183 - 184 180 for (i = 0; i < num_pages; i++) { 185 - msi_page[i].phys = start; 186 - msi_page[i].iova = start; 187 - INIT_LIST_HEAD(&msi_page[i].list); 188 - list_add(&msi_page[i].list, &cookie->msi_page_list); 181 + msi_page = kmalloc(sizeof(*msi_page), GFP_KERNEL); 182 + if (!msi_page) 183 + return -ENOMEM; 184 + 185 + msi_page->phys = start; 186 + msi_page->iova = start; 187 + INIT_LIST_HEAD(&msi_page->list); 188 + list_add(&msi_page->list, &cookie->msi_page_list); 189 189 start += iovad->granule; 190 190 } 191 191
+17 -7
drivers/iommu/dmar.c
··· 28 28 #include <linux/slab.h> 29 29 #include <linux/iommu.h> 30 30 #include <linux/numa.h> 31 + #include <linux/limits.h> 31 32 #include <asm/irq_remapping.h> 32 33 #include <asm/iommu_table.h> 33 34 ··· 128 127 struct dmar_pci_notify_info *info; 129 128 130 129 BUG_ON(dev->is_virtfn); 130 + 131 + /* 132 + * Ignore devices that have a domain number higher than what can 133 + * be looked up in DMAR, e.g. VMD subdevices with domain 0x10000 134 + */ 135 + if (pci_domain_nr(dev->bus) > U16_MAX) 136 + return NULL; 131 137 132 138 /* Only generate path[] for device addition event */ 133 139 if (event == BUS_NOTIFY_ADD_DEVICE) ··· 371 363 { 372 364 struct dmar_drhd_unit *dmaru; 373 365 374 - list_for_each_entry_rcu(dmaru, &dmar_drhd_units, list) 366 + list_for_each_entry_rcu(dmaru, &dmar_drhd_units, list, 367 + dmar_rcu_check()) 375 368 if (dmaru->segment == drhd->segment && 376 369 dmaru->reg_base_addr == drhd->address) 377 370 return dmaru; ··· 449 440 450 441 /* Check for NUL termination within the designated length */ 451 442 if (strnlen(andd->device_name, header->length - 8) == header->length - 8) { 452 - WARN_TAINT(1, TAINT_FIRMWARE_WORKAROUND, 443 + pr_warn(FW_BUG 453 444 "Your BIOS is broken; ANDD object name is not NUL-terminated\n" 454 445 "BIOS vendor: %s; Ver: %s; Product Version: %s\n", 455 446 dmi_get_system_info(DMI_BIOS_VENDOR), 456 447 dmi_get_system_info(DMI_BIOS_VERSION), 457 448 dmi_get_system_info(DMI_PRODUCT_VERSION)); 449 + add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK); 458 450 return -EINVAL; 459 451 } 460 452 pr_info("ANDD device: %x name: %s\n", andd->device_number, ··· 481 471 return 0; 482 472 } 483 473 } 484 - WARN_TAINT( 485 - 1, TAINT_FIRMWARE_WORKAROUND, 474 + pr_warn(FW_BUG 486 475 "Your BIOS is broken; RHSA refers to non-existent DMAR unit at %llx\n" 487 476 "BIOS vendor: %s; Ver: %s; Product Version: %s\n", 488 - drhd->reg_base_addr, 477 + rhsa->base_address, 489 478 dmi_get_system_info(DMI_BIOS_VENDOR), 490 479 
dmi_get_system_info(DMI_BIOS_VERSION), 491 480 dmi_get_system_info(DMI_PRODUCT_VERSION)); 481 + add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK); 492 482 493 483 return 0; 494 484 } ··· 837 827 838 828 static void warn_invalid_dmar(u64 addr, const char *message) 839 829 { 840 - WARN_TAINT_ONCE( 841 - 1, TAINT_FIRMWARE_WORKAROUND, 830 + pr_warn_once(FW_BUG 842 831 "Your BIOS is broken; DMAR reported at address %llx%s!\n" 843 832 "BIOS vendor: %s; Ver: %s; Product Version: %s\n", 844 833 addr, message, 845 834 dmi_get_system_info(DMI_BIOS_VENDOR), 846 835 dmi_get_system_info(DMI_BIOS_VERSION), 847 836 dmi_get_system_info(DMI_PRODUCT_VERSION)); 837 + add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK); 848 838 } 849 839 850 840 static int __ref
+38 -19
drivers/iommu/intel-iommu-debugfs.c
··· 33 33 34 34 #define IOMMU_REGSET_ENTRY(_reg_) \ 35 35 { DMAR_##_reg_##_REG, __stringify(_reg_) } 36 - static const struct iommu_regset iommu_regs[] = { 36 + 37 + static const struct iommu_regset iommu_regs_32[] = { 37 38 IOMMU_REGSET_ENTRY(VER), 38 - IOMMU_REGSET_ENTRY(CAP), 39 - IOMMU_REGSET_ENTRY(ECAP), 40 39 IOMMU_REGSET_ENTRY(GCMD), 41 40 IOMMU_REGSET_ENTRY(GSTS), 42 - IOMMU_REGSET_ENTRY(RTADDR), 43 - IOMMU_REGSET_ENTRY(CCMD), 44 41 IOMMU_REGSET_ENTRY(FSTS), 45 42 IOMMU_REGSET_ENTRY(FECTL), 46 43 IOMMU_REGSET_ENTRY(FEDATA), 47 44 IOMMU_REGSET_ENTRY(FEADDR), 48 45 IOMMU_REGSET_ENTRY(FEUADDR), 49 - IOMMU_REGSET_ENTRY(AFLOG), 50 46 IOMMU_REGSET_ENTRY(PMEN), 51 47 IOMMU_REGSET_ENTRY(PLMBASE), 52 48 IOMMU_REGSET_ENTRY(PLMLIMIT), 53 - IOMMU_REGSET_ENTRY(PHMBASE), 54 - IOMMU_REGSET_ENTRY(PHMLIMIT), 55 - IOMMU_REGSET_ENTRY(IQH), 56 - IOMMU_REGSET_ENTRY(IQT), 57 - IOMMU_REGSET_ENTRY(IQA), 58 49 IOMMU_REGSET_ENTRY(ICS), 59 - IOMMU_REGSET_ENTRY(IRTA), 60 - IOMMU_REGSET_ENTRY(PQH), 61 - IOMMU_REGSET_ENTRY(PQT), 62 - IOMMU_REGSET_ENTRY(PQA), 63 50 IOMMU_REGSET_ENTRY(PRS), 64 51 IOMMU_REGSET_ENTRY(PECTL), 65 52 IOMMU_REGSET_ENTRY(PEDATA), 66 53 IOMMU_REGSET_ENTRY(PEADDR), 67 54 IOMMU_REGSET_ENTRY(PEUADDR), 55 + }; 56 + 57 + static const struct iommu_regset iommu_regs_64[] = { 58 + IOMMU_REGSET_ENTRY(CAP), 59 + IOMMU_REGSET_ENTRY(ECAP), 60 + IOMMU_REGSET_ENTRY(RTADDR), 61 + IOMMU_REGSET_ENTRY(CCMD), 62 + IOMMU_REGSET_ENTRY(AFLOG), 63 + IOMMU_REGSET_ENTRY(PHMBASE), 64 + IOMMU_REGSET_ENTRY(PHMLIMIT), 65 + IOMMU_REGSET_ENTRY(IQH), 66 + IOMMU_REGSET_ENTRY(IQT), 67 + IOMMU_REGSET_ENTRY(IQA), 68 + IOMMU_REGSET_ENTRY(IRTA), 69 + IOMMU_REGSET_ENTRY(PQH), 70 + IOMMU_REGSET_ENTRY(PQT), 71 + IOMMU_REGSET_ENTRY(PQA), 68 72 IOMMU_REGSET_ENTRY(MTRRCAP), 69 73 IOMMU_REGSET_ENTRY(MTRRDEF), 70 74 IOMMU_REGSET_ENTRY(MTRR_FIX64K_00000), ··· 131 127 * by adding the offset to the pointer (virtual address). 
132 128 */ 133 129 raw_spin_lock_irqsave(&iommu->register_lock, flag); 134 - for (i = 0 ; i < ARRAY_SIZE(iommu_regs); i++) { 135 - value = dmar_readq(iommu->reg + iommu_regs[i].offset); 130 + for (i = 0 ; i < ARRAY_SIZE(iommu_regs_32); i++) { 131 + value = dmar_readl(iommu->reg + iommu_regs_32[i].offset); 136 132 seq_printf(m, "%-16s\t0x%02x\t\t0x%016llx\n", 137 - iommu_regs[i].regs, iommu_regs[i].offset, 133 + iommu_regs_32[i].regs, iommu_regs_32[i].offset, 134 + value); 135 + } 136 + for (i = 0 ; i < ARRAY_SIZE(iommu_regs_64); i++) { 137 + value = dmar_readq(iommu->reg + iommu_regs_64[i].offset); 138 + seq_printf(m, "%-16s\t0x%02x\t\t0x%016llx\n", 139 + iommu_regs_64[i].regs, iommu_regs_64[i].offset, 138 140 value); 139 141 } 140 142 raw_spin_unlock_irqrestore(&iommu->register_lock, flag); ··· 282 272 { 283 273 struct dmar_drhd_unit *drhd; 284 274 struct intel_iommu *iommu; 275 + u32 sts; 285 276 286 277 rcu_read_lock(); 287 278 for_each_active_iommu(iommu, drhd) { 279 + sts = dmar_readl(iommu->reg + DMAR_GSTS_REG); 280 + if (!(sts & DMA_GSTS_TES)) { 281 + seq_printf(m, "DMA Remapping is not enabled on %s\n", 282 + iommu->name); 283 + continue; 284 + } 288 285 root_tbl_walk(m, iommu); 289 286 seq_putc(m, '\n'); 290 287 } ··· 432 415 struct dmar_drhd_unit *drhd; 433 416 struct intel_iommu *iommu; 434 417 u64 irta; 418 + u32 sts; 435 419 436 420 rcu_read_lock(); 437 421 for_each_active_iommu(iommu, drhd) { ··· 442 424 seq_printf(m, "Remapped Interrupt supported on IOMMU: %s\n", 443 425 iommu->name); 444 426 445 - if (iommu->ir_table) { 427 + sts = dmar_readl(iommu->reg + DMAR_GSTS_REG); 428 + if (iommu->ir_table && (sts & DMA_GSTS_IRES)) { 446 429 irta = virt_to_phys(iommu->ir_table->base); 447 430 seq_printf(m, " IR table address:%llx\n", irta); 448 431 ir_tbl_remap_entry_show(m, iommu);
+19 -9
drivers/iommu/intel-iommu.c
··· 4261 4261 4262 4262 /* we know that the this iommu should be at offset 0xa000 from vtbar */ 4263 4263 drhd = dmar_find_matched_drhd_unit(pdev); 4264 - if (WARN_TAINT_ONCE(!drhd || drhd->reg_base_addr - vtbar != 0xa000, 4265 - TAINT_FIRMWARE_WORKAROUND, 4266 - "BIOS assigned incorrect VT-d unit for Intel(R) QuickData Technology device\n")) 4264 + if (!drhd || drhd->reg_base_addr - vtbar != 0xa000) { 4265 + pr_warn_once(FW_BUG "BIOS assigned incorrect VT-d unit for Intel(R) QuickData Technology device\n"); 4266 + add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK); 4267 4267 pdev->dev.archdata.iommu = DUMMY_DEVICE_DOMAIN_INFO; 4268 + } 4268 4269 } 4269 4270 DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB, quirk_ioat_snb_local_iommu); 4270 4271 ··· 4461 4460 struct dmar_rmrr_unit *rmrru; 4462 4461 4463 4462 rmrr = (struct acpi_dmar_reserved_memory *)header; 4464 - if (rmrr_sanity_check(rmrr)) 4465 - WARN_TAINT(1, TAINT_FIRMWARE_WORKAROUND, 4463 + if (rmrr_sanity_check(rmrr)) { 4464 + pr_warn(FW_BUG 4466 4465 "Your BIOS is broken; bad RMRR [%#018Lx-%#018Lx]\n" 4467 4466 "BIOS vendor: %s; Ver: %s; Product Version: %s\n", 4468 4467 rmrr->base_address, rmrr->end_address, 4469 4468 dmi_get_system_info(DMI_BIOS_VENDOR), 4470 4469 dmi_get_system_info(DMI_BIOS_VERSION), 4471 4470 dmi_get_system_info(DMI_PRODUCT_VERSION)); 4471 + add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK); 4472 + } 4472 4473 4473 4474 rmrru = kzalloc(sizeof(*rmrru), GFP_KERNEL); 4474 4475 if (!rmrru) ··· 5133 5130 5134 5131 down_write(&dmar_global_lock); 5135 5132 5133 + if (!no_iommu) 5134 + intel_iommu_debugfs_init(); 5135 + 5136 5136 if (no_iommu || dmar_disabled) { 5137 5137 /* 5138 5138 * We exit the function here to ensure IOMMU's remapping and ··· 5199 5193 5200 5194 init_iommu_pm_ops(); 5201 5195 5196 + down_read(&dmar_global_lock); 5202 5197 for_each_active_iommu(iommu, drhd) { 5203 5198 iommu_device_sysfs_add(&iommu->iommu, NULL, 5204 5199 
intel_iommu_groups, ··· 5207 5200 iommu_device_set_ops(&iommu->iommu, &intel_iommu_ops); 5208 5201 iommu_device_register(&iommu->iommu); 5209 5202 } 5203 + up_read(&dmar_global_lock); 5210 5204 5211 5205 bus_set_iommu(&pci_bus_type, &intel_iommu_ops); 5212 5206 if (si_domain && !hw_pass_through) ··· 5218 5210 down_read(&dmar_global_lock); 5219 5211 if (probe_acpi_namespace_devices()) 5220 5212 pr_warn("ACPI name space devices didn't probe correctly\n"); 5221 - up_read(&dmar_global_lock); 5222 5213 5223 5214 /* Finally, we enable the DMA remapping hardware. */ 5224 5215 for_each_iommu(iommu, drhd) { ··· 5226 5219 5227 5220 iommu_disable_protect_mem_regions(iommu); 5228 5221 } 5222 + up_read(&dmar_global_lock); 5223 + 5229 5224 pr_info("Intel(R) Virtualization Technology for Directed I/O\n"); 5230 5225 5231 5226 intel_iommu_enabled = 1; 5232 - intel_iommu_debugfs_init(); 5233 5227 5234 5228 return 0; 5235 5229 ··· 5708 5700 u64 phys = 0; 5709 5701 5710 5702 pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level); 5711 - if (pte) 5712 - phys = dma_pte_addr(pte); 5703 + if (pte && dma_pte_present(pte)) 5704 + phys = dma_pte_addr(pte) + 5705 + (iova & (BIT_MASK(level_to_offset_bits(level) + 5706 + VTD_PAGE_SHIFT) - 1)); 5713 5707 5714 5708 return phys; 5715 5709 }
+2 -2
drivers/iommu/io-pgtable-arm.c
··· 468 468 arm_lpae_iopte *ptep = data->pgd; 469 469 int ret, lvl = data->start_level; 470 470 arm_lpae_iopte prot; 471 - long iaext = (long)iova >> cfg->ias; 471 + long iaext = (s64)iova >> cfg->ias; 472 472 473 473 /* If no access, then nothing to do */ 474 474 if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE))) ··· 645 645 struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops); 646 646 struct io_pgtable_cfg *cfg = &data->iop.cfg; 647 647 arm_lpae_iopte *ptep = data->pgd; 648 - long iaext = (long)iova >> cfg->ias; 648 + long iaext = (s64)iova >> cfg->ias; 649 649 650 650 if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size)) 651 651 return 0;
+29 -1
drivers/irqchip/irq-gic-v3.c
··· 34 34 #define GICD_INT_NMI_PRI (GICD_INT_DEF_PRI & ~0x80) 35 35 36 36 #define FLAGS_WORKAROUND_GICR_WAKER_MSM8996 (1ULL << 0) 37 + #define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539 (1ULL << 1) 37 38 38 39 struct redist_region { 39 40 void __iomem *redist_base; ··· 1465 1464 return true; 1466 1465 } 1467 1466 1467 + static bool gic_enable_quirk_cavium_38539(void *data) 1468 + { 1469 + struct gic_chip_data *d = data; 1470 + 1471 + d->flags |= FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539; 1472 + 1473 + return true; 1474 + } 1475 + 1468 1476 static bool gic_enable_quirk_hip06_07(void *data) 1469 1477 { 1470 1478 struct gic_chip_data *d = data; ··· 1511 1501 .iidr = 0x00000000, 1512 1502 .mask = 0xffffffff, 1513 1503 .init = gic_enable_quirk_hip06_07, 1504 + }, 1505 + { 1506 + /* 1507 + * Reserved register accesses generate a Synchronous 1508 + * External Abort. This erratum applies to: 1509 + * - ThunderX: CN88xx 1510 + * - OCTEON TX: CN83xx, CN81xx 1511 + * - OCTEON TX2: CN93xx, CN96xx, CN98xx, CNF95xx* 1512 + */ 1513 + .desc = "GICv3: Cavium erratum 38539", 1514 + .iidr = 0xa000034c, 1515 + .mask = 0xe8f00fff, 1516 + .init = gic_enable_quirk_cavium_38539, 1514 1517 }, 1515 1518 { 1516 1519 } ··· 1600 1577 pr_info("%d SPIs implemented\n", GIC_LINE_NR - 32); 1601 1578 pr_info("%d Extended SPIs implemented\n", GIC_ESPI_NR); 1602 1579 1603 - gic_data.rdists.gicd_typer2 = readl_relaxed(gic_data.dist_base + GICD_TYPER2); 1580 + /* 1581 + * ThunderX1 explodes on reading GICD_TYPER2, in violation of the 1582 + * architecture spec (which says that reserved registers are RES0). 1583 + */ 1584 + if (!(gic_data.flags & FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539)) 1585 + gic_data.rdists.gicd_typer2 = readl_relaxed(gic_data.dist_base + GICD_TYPER2); 1604 1586 1605 1587 gic_data.domain = irq_domain_create_tree(handle, &gic_irq_domain_ops, 1606 1588 &gic_data);
+7
drivers/macintosh/windfarm_ad7417_sensor.c
··· 312 312 }; 313 313 MODULE_DEVICE_TABLE(i2c, wf_ad7417_id); 314 314 315 + static const struct of_device_id wf_ad7417_of_id[] = { 316 + { .compatible = "ad7417", }, 317 + { } 318 + }; 319 + MODULE_DEVICE_TABLE(of, wf_ad7417_of_id); 320 + 315 321 static struct i2c_driver wf_ad7417_driver = { 316 322 .driver = { 317 323 .name = "wf_ad7417", 324 + .of_match_table = wf_ad7417_of_id, 318 325 }, 319 326 .probe = wf_ad7417_probe, 320 327 .remove = wf_ad7417_remove,
+7
drivers/macintosh/windfarm_fcu_controls.c
··· 580 580 }; 581 581 MODULE_DEVICE_TABLE(i2c, wf_fcu_id); 582 582 583 + static const struct of_device_id wf_fcu_of_id[] = { 584 + { .compatible = "fcu", }, 585 + { } 586 + }; 587 + MODULE_DEVICE_TABLE(of, wf_fcu_of_id); 588 + 583 589 static struct i2c_driver wf_fcu_driver = { 584 590 .driver = { 585 591 .name = "wf_fcu", 592 + .of_match_table = wf_fcu_of_id, 586 593 }, 587 594 .probe = wf_fcu_probe, 588 595 .remove = wf_fcu_remove,
+15 -1
drivers/macintosh/windfarm_lm75_sensor.c
··· 14 14 #include <linux/init.h> 15 15 #include <linux/wait.h> 16 16 #include <linux/i2c.h> 17 + #include <linux/of_device.h> 17 18 #include <asm/prom.h> 18 19 #include <asm/machdep.h> 19 20 #include <asm/io.h> ··· 92 91 const struct i2c_device_id *id) 93 92 { 94 93 struct wf_lm75_sensor *lm; 95 - int rc, ds1775 = id->driver_data; 94 + int rc, ds1775; 96 95 const char *name, *loc; 96 + 97 + if (id) 98 + ds1775 = id->driver_data; 99 + else 100 + ds1775 = !!of_device_get_match_data(&client->dev); 97 101 98 102 DBG("wf_lm75: creating %s device at address 0x%02x\n", 99 103 ds1775 ? "ds1775" : "lm75", client->addr); ··· 170 164 }; 171 165 MODULE_DEVICE_TABLE(i2c, wf_lm75_id); 172 166 167 + static const struct of_device_id wf_lm75_of_id[] = { 168 + { .compatible = "lm75", .data = (void *)0}, 169 + { .compatible = "ds1775", .data = (void *)1 }, 170 + { } 171 + }; 172 + MODULE_DEVICE_TABLE(of, wf_lm75_of_id); 173 + 173 174 static struct i2c_driver wf_lm75_driver = { 174 175 .driver = { 175 176 .name = "wf_lm75", 177 + .of_match_table = wf_lm75_of_id, 176 178 }, 177 179 .probe = wf_lm75_probe, 178 180 .remove = wf_lm75_remove,
+7
drivers/macintosh/windfarm_lm87_sensor.c
··· 166 166 }; 167 167 MODULE_DEVICE_TABLE(i2c, wf_lm87_id); 168 168 169 + static const struct of_device_id wf_lm87_of_id[] = { 170 + { .compatible = "lm87cimt", }, 171 + { } 172 + }; 173 + MODULE_DEVICE_TABLE(of, wf_lm87_of_id); 174 + 169 175 static struct i2c_driver wf_lm87_driver = { 170 176 .driver = { 171 177 .name = "wf_lm87", 178 + .of_match_table = wf_lm87_of_id, 172 179 }, 173 180 .probe = wf_lm87_probe, 174 181 .remove = wf_lm87_remove,
+7
drivers/macintosh/windfarm_max6690_sensor.c
··· 120 120 }; 121 121 MODULE_DEVICE_TABLE(i2c, wf_max6690_id); 122 122 123 + static const struct of_device_id wf_max6690_of_id[] = { 124 + { .compatible = "max6690", }, 125 + { } 126 + }; 127 + MODULE_DEVICE_TABLE(of, wf_max6690_of_id); 128 + 123 129 static struct i2c_driver wf_max6690_driver = { 124 130 .driver = { 125 131 .name = "wf_max6690", 132 + .of_match_table = wf_max6690_of_id, 126 133 }, 127 134 .probe = wf_max6690_probe, 128 135 .remove = wf_max6690_remove,
+7
drivers/macintosh/windfarm_smu_sat.c
··· 341 341 }; 342 342 MODULE_DEVICE_TABLE(i2c, wf_sat_id); 343 343 344 + static const struct of_device_id wf_sat_of_id[] = { 345 + { .compatible = "smu-sat", }, 346 + { } 347 + }; 348 + MODULE_DEVICE_TABLE(of, wf_sat_of_id); 349 + 344 350 static struct i2c_driver wf_sat_driver = { 345 351 .driver = { 346 352 .name = "wf_smu_sat", 353 + .of_match_table = wf_sat_of_id, 347 354 }, 348 355 .probe = wf_sat_probe, 349 356 .remove = wf_sat_remove,
+1 -1
drivers/misc/cardreader/rts5227.c
··· 394 394 void rts522a_init_params(struct rtsx_pcr *pcr) 395 395 { 396 396 rts5227_init_params(pcr); 397 - 397 + pcr->tx_initial_phase = SET_CLOCK_PHASE(20, 20, 11); 398 398 pcr->reg_pm_ctrl3 = RTS522A_PM_CTRL3; 399 399 400 400 pcr->option.ocp_en = 1;
+2
drivers/misc/cardreader/rts5249.c
··· 618 618 void rts524a_init_params(struct rtsx_pcr *pcr) 619 619 { 620 620 rts5249_init_params(pcr); 621 + pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 29, 11); 621 622 pcr->option.ltr_l1off_sspwrgate = LTR_L1OFF_SSPWRGATE_5250_DEF; 622 623 pcr->option.ltr_l1off_snooze_sspwrgate = 623 624 LTR_L1OFF_SNOOZE_SSPWRGATE_5250_DEF; ··· 734 733 void rts525a_init_params(struct rtsx_pcr *pcr) 735 734 { 736 735 rts5249_init_params(pcr); 736 + pcr->tx_initial_phase = SET_CLOCK_PHASE(25, 29, 11); 737 737 pcr->option.ltr_l1off_sspwrgate = LTR_L1OFF_SSPWRGATE_5250_DEF; 738 738 pcr->option.ltr_l1off_snooze_sspwrgate = 739 739 LTR_L1OFF_SNOOZE_SSPWRGATE_5250_DEF;
+1 -1
drivers/misc/cardreader/rts5260.c
··· 662 662 pcr->sd30_drive_sel_1v8 = CFG_DRIVER_TYPE_B; 663 663 pcr->sd30_drive_sel_3v3 = CFG_DRIVER_TYPE_B; 664 664 pcr->aspm_en = ASPM_L1_EN; 665 - pcr->tx_initial_phase = SET_CLOCK_PHASE(1, 29, 16); 665 + pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 29, 11); 666 666 pcr->rx_initial_phase = SET_CLOCK_PHASE(24, 6, 5); 667 667 668 668 pcr->ic_version = rts5260_get_ic_version(pcr);
+1 -1
drivers/misc/cardreader/rts5261.c
··· 764 764 pcr->sd30_drive_sel_1v8 = CFG_DRIVER_TYPE_B; 765 765 pcr->sd30_drive_sel_3v3 = CFG_DRIVER_TYPE_B; 766 766 pcr->aspm_en = ASPM_L1_EN; 767 - pcr->tx_initial_phase = SET_CLOCK_PHASE(20, 27, 16); 767 + pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 27, 11); 768 768 pcr->rx_initial_phase = SET_CLOCK_PHASE(24, 6, 5); 769 769 770 770 pcr->ic_version = rts5261_get_ic_version(pcr);
+2 -1
drivers/misc/eeprom/at24.c
··· 712 712 * chip is functional. 713 713 */ 714 714 err = at24_read(at24, 0, &test_byte, 1); 715 - pm_runtime_idle(dev); 716 715 if (err) { 717 716 pm_runtime_disable(dev); 718 717 regulator_disable(at24->vcc_reg); 719 718 return -ENODEV; 720 719 } 720 + 721 + pm_runtime_idle(dev); 721 722 722 723 if (writable) 723 724 dev_info(dev, "%u byte %s EEPROM, writable, %u bytes/write\n",
+4 -1
drivers/mmc/core/core.c
··· 1732 1732 * the erase operation does not exceed the max_busy_timeout, we should 1733 1733 * use R1B response. Or we need to prevent the host from doing hw busy 1734 1734 * detection, which is done by converting to a R1 response instead. 1735 + * Note, some hosts require R1B, which also means they are on their own 1736 + * when it comes to dealing with the busy timeout. 1735 1737 */ 1736 - if (card->host->max_busy_timeout && 1738 + if (!(card->host->caps & MMC_CAP_NEED_RSP_BUSY) && 1739 + card->host->max_busy_timeout && 1737 1740 busy_timeout > card->host->max_busy_timeout) { 1738 1741 cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_AC; 1739 1742 } else {
+5 -2
drivers/mmc/core/mmc.c
··· 1910 1910 * If the max_busy_timeout of the host is specified, validate it against 1911 1911 * the sleep cmd timeout. A failure means we need to prevent the host 1912 1912 * from doing hw busy detection, which is done by converting to a R1 1913 - * response instead of a R1B. 1913 + * response instead of a R1B. Note, some hosts require R1B, which also 1914 + * means they are on their own when it comes to dealing with the busy 1915 + * timeout. 1914 1916 */ 1915 - if (host->max_busy_timeout && (timeout_ms > host->max_busy_timeout)) { 1917 + if (!(host->caps & MMC_CAP_NEED_RSP_BUSY) && host->max_busy_timeout && 1918 + (timeout_ms > host->max_busy_timeout)) { 1916 1919 cmd.flags = MMC_RSP_R1 | MMC_CMD_AC; 1917 1920 } else { 1918 1921 cmd.flags = MMC_RSP_R1B | MMC_CMD_AC;
+4 -2
drivers/mmc/core/mmc_ops.c
··· 542 542 * If the max_busy_timeout of the host is specified, make sure it's 543 543 * enough to fit the used timeout_ms. In case it's not, let's instruct 544 544 * the host to avoid HW busy detection, by converting to a R1 response 545 - * instead of a R1B. 545 + * instead of a R1B. Note, some hosts require R1B, which also means 546 + * they are on their own when it comes to dealing with the busy timeout. 546 547 */ 547 - if (host->max_busy_timeout && (timeout_ms > host->max_busy_timeout)) 548 + if (!(host->caps & MMC_CAP_NEED_RSP_BUSY) && host->max_busy_timeout && 549 + (timeout_ms > host->max_busy_timeout)) 548 550 use_r1b_resp = false; 549 551 550 552 cmd.opcode = MMC_SWITCH;
+8 -5
drivers/mmc/host/rtsx_pci_sdmmc.c
··· 606 606 u8 sample_point, bool rx) 607 607 { 608 608 struct rtsx_pcr *pcr = host->pcr; 609 - 609 + u16 SD_VP_CTL = 0; 610 610 dev_dbg(sdmmc_dev(host), "%s(%s): sample_point = %d\n", 611 611 __func__, rx ? "RX" : "TX", sample_point); 612 612 613 613 rtsx_pci_write_register(pcr, CLK_CTL, CHANGE_CLK, CHANGE_CLK); 614 - if (rx) 614 + if (rx) { 615 + SD_VP_CTL = SD_VPRX_CTL; 615 616 rtsx_pci_write_register(pcr, SD_VPRX_CTL, 616 617 PHASE_SELECT_MASK, sample_point); 617 - else 618 + } else { 619 + SD_VP_CTL = SD_VPTX_CTL; 618 620 rtsx_pci_write_register(pcr, SD_VPTX_CTL, 619 621 PHASE_SELECT_MASK, sample_point); 620 - rtsx_pci_write_register(pcr, SD_VPCLK0_CTL, PHASE_NOT_RESET, 0); 621 - rtsx_pci_write_register(pcr, SD_VPCLK0_CTL, PHASE_NOT_RESET, 622 + } 623 + rtsx_pci_write_register(pcr, SD_VP_CTL, PHASE_NOT_RESET, 0); 624 + rtsx_pci_write_register(pcr, SD_VP_CTL, PHASE_NOT_RESET, 622 625 PHASE_NOT_RESET); 623 626 rtsx_pci_write_register(pcr, CLK_CTL, CHANGE_CLK, 0); 624 627 rtsx_pci_write_register(pcr, SD_CFG1, SD_ASYNC_FIFO_NOT_RST, 0);
+82 -2
drivers/mmc/host/sdhci-acpi.c
··· 23 23 #include <linux/pm.h> 24 24 #include <linux/pm_runtime.h> 25 25 #include <linux/delay.h> 26 + #include <linux/dmi.h> 26 27 27 28 #include <linux/mmc/host.h> 28 29 #include <linux/mmc/pm.h> ··· 73 72 const struct sdhci_acpi_slot *slot; 74 73 struct platform_device *pdev; 75 74 bool use_runtime_pm; 75 + bool is_intel; 76 + bool reset_signal_volt_on_suspend; 76 77 unsigned long private[0] ____cacheline_aligned; 78 + }; 79 + 80 + enum { 81 + DMI_QUIRK_RESET_SD_SIGNAL_VOLT_ON_SUSP = BIT(0), 82 + DMI_QUIRK_SD_NO_WRITE_PROTECT = BIT(1), 77 83 }; 78 84 79 85 static inline void *sdhci_acpi_priv(struct sdhci_acpi_host *c) ··· 399 391 host->mmc_host_ops.start_signal_voltage_switch = 400 392 intel_start_signal_voltage_switch; 401 393 394 + c->is_intel = true; 395 + 402 396 return 0; 403 397 } 404 398 ··· 657 647 }; 658 648 MODULE_DEVICE_TABLE(acpi, sdhci_acpi_ids); 659 649 650 + static const struct dmi_system_id sdhci_acpi_quirks[] = { 651 + { 652 + /* 653 + * The Lenovo Miix 320-10ICR has a bug in the _PS0 method of 654 + * the SHC1 ACPI device, this bug causes it to reprogram the 655 + * wrong LDO (DLDO3) to 1.8V if 1.8V modes are used and the 656 + * card is (runtime) suspended + resumed. DLDO3 is used for 657 + * the LCD and setting it to 1.8V causes the LCD to go black. 658 + */ 659 + .matches = { 660 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 661 + DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"), 662 + }, 663 + .driver_data = (void *)DMI_QUIRK_RESET_SD_SIGNAL_VOLT_ON_SUSP, 664 + }, 665 + { 666 + /* 667 + * The Acer Aspire Switch 10 (SW5-012) microSD slot always 668 + * reports the card being write-protected even though microSD 669 + * cards do not have a write-protect switch at all. 
670 + */ 671 + .matches = { 672 + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 673 + DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW5-012"), 674 + }, 675 + .driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT, 676 + }, 677 + {} /* Terminating entry */ 678 + }; 679 + 660 680 static const struct sdhci_acpi_slot *sdhci_acpi_get_slot(struct acpi_device *adev) 661 681 { 662 682 const struct sdhci_acpi_uid_slot *u; ··· 703 663 struct device *dev = &pdev->dev; 704 664 const struct sdhci_acpi_slot *slot; 705 665 struct acpi_device *device, *child; 666 + const struct dmi_system_id *id; 706 667 struct sdhci_acpi_host *c; 707 668 struct sdhci_host *host; 708 669 struct resource *iomem; 709 670 resource_size_t len; 710 671 size_t priv_size; 672 + int quirks = 0; 711 673 int err; 712 674 713 675 device = ACPI_COMPANION(dev); 714 676 if (!device) 715 677 return -ENODEV; 678 + 679 + id = dmi_first_match(sdhci_acpi_quirks); 680 + if (id) 681 + quirks = (long)id->driver_data; 716 682 717 683 slot = sdhci_acpi_get_slot(device); 718 684 ··· 805 759 dev_warn(dev, "failed to setup card detect gpio\n"); 806 760 c->use_runtime_pm = false; 807 761 } 762 + 763 + if (quirks & DMI_QUIRK_RESET_SD_SIGNAL_VOLT_ON_SUSP) 764 + c->reset_signal_volt_on_suspend = true; 765 + 766 + if (quirks & DMI_QUIRK_SD_NO_WRITE_PROTECT) 767 + host->mmc->caps2 |= MMC_CAP2_NO_WRITE_PROTECT; 808 768 } 809 769 810 770 err = sdhci_setup_host(host); ··· 875 823 return 0; 876 824 } 877 825 826 + static void __maybe_unused sdhci_acpi_reset_signal_voltage_if_needed( 827 + struct device *dev) 828 + { 829 + struct sdhci_acpi_host *c = dev_get_drvdata(dev); 830 + struct sdhci_host *host = c->host; 831 + 832 + if (c->is_intel && c->reset_signal_volt_on_suspend && 833 + host->mmc->ios.signal_voltage != MMC_SIGNAL_VOLTAGE_330) { 834 + struct intel_host *intel_host = sdhci_acpi_priv(c); 835 + unsigned int fn = INTEL_DSM_V33_SWITCH; 836 + u32 result = 0; 837 + 838 + intel_dsm(intel_host, dev, fn, &result); 839 + } 840 + } 841 + 878 842 #ifdef 
CONFIG_PM_SLEEP 879 843 880 844 static int sdhci_acpi_suspend(struct device *dev) 881 845 { 882 846 struct sdhci_acpi_host *c = dev_get_drvdata(dev); 883 847 struct sdhci_host *host = c->host; 848 + int ret; 884 849 885 850 if (host->tuning_mode != SDHCI_TUNING_MODE_3) 886 851 mmc_retune_needed(host->mmc); 887 852 888 - return sdhci_suspend_host(host); 853 + ret = sdhci_suspend_host(host); 854 + if (ret) 855 + return ret; 856 + 857 + sdhci_acpi_reset_signal_voltage_if_needed(dev); 858 + return 0; 889 859 } 890 860 891 861 static int sdhci_acpi_resume(struct device *dev) ··· 927 853 { 928 854 struct sdhci_acpi_host *c = dev_get_drvdata(dev); 929 855 struct sdhci_host *host = c->host; 856 + int ret; 930 857 931 858 if (host->tuning_mode != SDHCI_TUNING_MODE_3) 932 859 mmc_retune_needed(host->mmc); 933 860 934 - return sdhci_runtime_suspend_host(host); 861 + ret = sdhci_runtime_suspend_host(host); 862 + if (ret) 863 + return ret; 864 + 865 + sdhci_acpi_reset_signal_voltage_if_needed(dev); 866 + return 0; 935 867 } 936 868 937 869 static int sdhci_acpi_runtime_resume(struct device *dev)
+16 -2
drivers/mmc/host/sdhci-cadence.c
··· 11 11 #include <linux/mmc/host.h> 12 12 #include <linux/mmc/mmc.h> 13 13 #include <linux/of.h> 14 + #include <linux/of_device.h> 14 15 15 16 #include "sdhci-pltfm.h" 16 17 ··· 236 235 .set_uhs_signaling = sdhci_cdns_set_uhs_signaling, 237 236 }; 238 237 238 + static const struct sdhci_pltfm_data sdhci_cdns_uniphier_pltfm_data = { 239 + .ops = &sdhci_cdns_ops, 240 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN, 241 + }; 242 + 239 243 static const struct sdhci_pltfm_data sdhci_cdns_pltfm_data = { 240 244 .ops = &sdhci_cdns_ops, 241 245 }; ··· 340 334 static int sdhci_cdns_probe(struct platform_device *pdev) 341 335 { 342 336 struct sdhci_host *host; 337 + const struct sdhci_pltfm_data *data; 343 338 struct sdhci_pltfm_host *pltfm_host; 344 339 struct sdhci_cdns_priv *priv; 345 340 struct clk *clk; ··· 357 350 if (ret) 358 351 return ret; 359 352 353 + data = of_device_get_match_data(dev); 354 + if (!data) 355 + data = &sdhci_cdns_pltfm_data; 356 + 360 357 nr_phy_params = sdhci_cdns_phy_param_count(dev->of_node); 361 - host = sdhci_pltfm_init(pdev, &sdhci_cdns_pltfm_data, 358 + host = sdhci_pltfm_init(pdev, data, 362 359 struct_size(priv, phy_params, nr_phy_params)); 363 360 if (IS_ERR(host)) { 364 361 ret = PTR_ERR(host); ··· 442 431 }; 443 432 444 433 static const struct of_device_id sdhci_cdns_match[] = { 445 - { .compatible = "socionext,uniphier-sd4hc" }, 434 + { 435 + .compatible = "socionext,uniphier-sd4hc", 436 + .data = &sdhci_cdns_uniphier_pltfm_data, 437 + }, 446 438 { .compatible = "cdns,sd4hc" }, 447 439 { /* sentinel */ } 448 440 };
+1 -1
drivers/mmc/host/sdhci-msm.c
··· 1590 1590 return 0; 1591 1591 } 1592 1592 1593 - void sdhci_msm_cqe_disable(struct mmc_host *mmc, bool recovery) 1593 + static void sdhci_msm_cqe_disable(struct mmc_host *mmc, bool recovery) 1594 1594 { 1595 1595 struct sdhci_host *host = mmc_priv(mmc); 1596 1596 unsigned long flags;
+6 -2
drivers/mmc/host/sdhci-of-at91.c
··· 132 132 133 133 sdhci_reset(host, mask); 134 134 135 - if (host->mmc->caps & MMC_CAP_NONREMOVABLE) 135 + if ((host->mmc->caps & MMC_CAP_NONREMOVABLE) 136 + || mmc_gpio_get_cd(host->mmc) >= 0) 136 137 sdhci_at91_set_force_card_detect(host); 137 138 138 139 if (priv->cal_always_on && (mask & SDHCI_RESET_ALL)) ··· 428 427 * detection procedure using the SDMCC_CD signal is bypassed. 429 428 * This bit is reset when a software reset for all command is performed 430 429 * so we need to implement our own reset function to set back this bit. 430 + * 431 + * WA: SAMA5D2 doesn't drive CMD if using CD GPIO line. 431 432 */ 432 - if (host->mmc->caps & MMC_CAP_NONREMOVABLE) 433 + if ((host->mmc->caps & MMC_CAP_NONREMOVABLE) 434 + || mmc_gpio_get_cd(host->mmc) >= 0) 433 435 sdhci_at91_set_force_card_detect(host); 434 436 435 437 pm_runtime_put_autosuspend(&pdev->dev);
+3
drivers/mmc/host/sdhci-omap.c
··· 1192 1192 if (of_find_property(dev->of_node, "dmas", NULL)) 1193 1193 sdhci_switch_external_dma(host, true); 1194 1194 1195 + /* R1B responses are required to properly manage HW busy detection. */ 1196 + mmc->caps |= MMC_CAP_NEED_RSP_BUSY; 1197 + 1195 1198 ret = sdhci_setup_host(host); 1196 1199 if (ret) 1197 1200 goto err_put_sync;
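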
+17
drivers/mmc/host/sdhci-pci-gli.c
··· 262 262 return 0; 263 263 } 264 264 265 + static void gli_pcie_enable_msi(struct sdhci_pci_slot *slot) 266 + { 267 + int ret; 268 + 269 + ret = pci_alloc_irq_vectors(slot->chip->pdev, 1, 1, 270 + PCI_IRQ_MSI | PCI_IRQ_MSIX); 271 + if (ret < 0) { 272 + pr_warn("%s: enable PCI MSI failed, error=%d\n", 273 + mmc_hostname(slot->host->mmc), ret); 274 + return; 275 + } 276 + 277 + slot->host->irq = pci_irq_vector(slot->chip->pdev, 0); 278 + } 279 + 265 280 static int gli_probe_slot_gl9750(struct sdhci_pci_slot *slot) 266 281 { 267 282 struct sdhci_host *host = slot->host; 268 283 284 + gli_pcie_enable_msi(slot); 269 285 slot->host->mmc->caps2 |= MMC_CAP2_NO_SDIO; 270 286 sdhci_enable_v4_mode(host); 271 287 ··· 292 276 { 293 277 struct sdhci_host *host = slot->host; 294 278 279 + gli_pcie_enable_msi(slot); 295 280 slot->host->mmc->caps2 |= MMC_CAP2_NO_SDIO; 296 281 sdhci_enable_v4_mode(host); 297 282
+3
drivers/mmc/host/sdhci-tegra.c
··· 1552 1552 if (tegra_host->soc_data->nvquirks & NVQUIRK_ENABLE_DDR50) 1553 1553 host->mmc->caps |= MMC_CAP_1_8V_DDR; 1554 1554 1555 + /* R1B responses are required to properly manage HW busy detection. */ 1556 + host->mmc->caps |= MMC_CAP_NEED_RSP_BUSY; 1557 + 1555 1558 tegra_sdhci_parse_dt(host); 1556 1559 1557 1560 tegra_host->power_gpio = devm_gpiod_get_optional(&pdev->dev, "power",
+10 -10
drivers/net/bonding/bond_alb.c
··· 50 50 }; 51 51 #pragma pack() 52 52 53 - static inline struct arp_pkt *arp_pkt(const struct sk_buff *skb) 54 - { 55 - return (struct arp_pkt *)skb_network_header(skb); 56 - } 57 - 58 53 /* Forward declaration */ 59 54 static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[], 60 55 bool strict_match); ··· 548 553 spin_unlock(&bond->mode_lock); 549 554 } 550 555 551 - static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bond) 556 + static struct slave *rlb_choose_channel(struct sk_buff *skb, 557 + struct bonding *bond, 558 + const struct arp_pkt *arp) 552 559 { 553 560 struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond)); 554 - struct arp_pkt *arp = arp_pkt(skb); 555 561 struct slave *assigned_slave, *curr_active_slave; 556 562 struct rlb_client_info *client_info; 557 563 u32 hash_index = 0; ··· 649 653 */ 650 654 static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond) 651 655 { 652 - struct arp_pkt *arp = arp_pkt(skb); 653 656 struct slave *tx_slave = NULL; 657 + struct arp_pkt *arp; 658 + 659 + if (!pskb_network_may_pull(skb, sizeof(*arp))) 660 + return NULL; 661 + arp = (struct arp_pkt *)skb_network_header(skb); 654 662 655 663 /* Don't modify or load balance ARPs that do not originate locally 656 664 * (e.g.,arrive via a bridge). ··· 664 664 665 665 if (arp->op_code == htons(ARPOP_REPLY)) { 666 666 /* the arp must be sent on the selected rx channel */ 667 - tx_slave = rlb_choose_channel(skb, bond); 667 + tx_slave = rlb_choose_channel(skb, bond, arp); 668 668 if (tx_slave) 669 669 bond_hw_addr_copy(arp->mac_src, tx_slave->dev->dev_addr, 670 670 tx_slave->dev->addr_len); ··· 676 676 * When the arp reply is received the entry will be updated 677 677 * with the correct unicast address of the client. 
678 678 */ 679 - tx_slave = rlb_choose_channel(skb, bond); 679 + tx_slave = rlb_choose_channel(skb, bond, arp); 680 680 681 681 /* The ARP reply packets must be delayed so that 682 682 * they can cancel out the influence of the ARP request.
+1
drivers/net/can/dev.c
··· 883 883 = { .len = sizeof(struct can_bittiming) }, 884 884 [IFLA_CAN_DATA_BITTIMING_CONST] 885 885 = { .len = sizeof(struct can_bittiming_const) }, 886 + [IFLA_CAN_TERMINATION] = { .type = NLA_U16 }, 886 887 }; 887 888 888 889 static int can_validate(struct nlattr *tb[], struct nlattr *data[],
+2
drivers/net/dsa/mv88e6xxx/chip.c
··· 2769 2769 goto unlock; 2770 2770 } 2771 2771 2772 + occupancy &= MV88E6XXX_G2_ATU_STATS_MASK; 2773 + 2772 2774 unlock: 2773 2775 mv88e6xxx_reg_unlock(chip); 2774 2776
+7 -1
drivers/net/dsa/mv88e6xxx/global2.c
··· 1099 1099 { 1100 1100 int err, irq, virq; 1101 1101 1102 + chip->g2_irq.masked = ~0; 1103 + mv88e6xxx_reg_lock(chip); 1104 + err = mv88e6xxx_g2_int_mask(chip, ~chip->g2_irq.masked); 1105 + mv88e6xxx_reg_unlock(chip); 1106 + if (err) 1107 + return err; 1108 + 1102 1109 chip->g2_irq.domain = irq_domain_add_simple( 1103 1110 chip->dev->of_node, 16, 0, &mv88e6xxx_g2_irq_domain_ops, chip); 1104 1111 if (!chip->g2_irq.domain) ··· 1115 1108 irq_create_mapping(chip->g2_irq.domain, irq); 1116 1109 1117 1110 chip->g2_irq.chip = mv88e6xxx_g2_irq_chip; 1118 - chip->g2_irq.masked = ~0; 1119 1111 1120 1112 chip->device_irq = irq_find_mapping(chip->g1_irq.domain, 1121 1113 MV88E6XXX_G1_STS_IRQ_DEVICE);
+2 -1
drivers/net/dsa/sja1105/sja1105_main.c
··· 1741 1741 if (!dsa_is_user_port(ds, port)) 1742 1742 continue; 1743 1743 1744 - kthread_destroy_worker(sp->xmit_worker); 1744 + if (sp->xmit_worker) 1745 + kthread_destroy_worker(sp->xmit_worker); 1745 1746 } 1746 1747 1747 1748 sja1105_tas_teardown(ds);
+1 -1
drivers/net/ethernet/broadcom/bcmsysport.c
··· 2135 2135 return -ENOSPC; 2136 2136 2137 2137 index = find_first_zero_bit(priv->filters, RXCHK_BRCM_TAG_MAX); 2138 - if (index > RXCHK_BRCM_TAG_MAX) 2138 + if (index >= RXCHK_BRCM_TAG_MAX) 2139 2139 return -ENOSPC; 2140 2140 2141 2141 /* Location is the classification ID, and index is the position
+2 -2
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 10982 10982 struct bnxt *bp = netdev_priv(dev); 10983 10983 10984 10984 if (netif_running(dev)) 10985 - bnxt_close_nic(bp, false, false); 10985 + bnxt_close_nic(bp, true, false); 10986 10986 10987 10987 dev->mtu = new_mtu; 10988 10988 bnxt_set_ring_params(bp); 10989 10989 10990 10990 if (netif_running(dev)) 10991 - return bnxt_open_nic(bp, false, false); 10991 + return bnxt_open_nic(bp, true, false); 10992 10992 10993 10993 return 0; 10994 10994 }
+11 -13
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 2007 2007 struct hwrm_nvm_install_update_output *resp = bp->hwrm_cmd_resp_addr; 2008 2008 struct hwrm_nvm_install_update_input install = {0}; 2009 2009 const struct firmware *fw; 2010 - int rc, hwrm_err = 0; 2011 2010 u32 item_len; 2011 + int rc = 0; 2012 2012 u16 index; 2013 2013 2014 2014 bnxt_hwrm_fw_set_time(bp); ··· 2052 2052 memcpy(kmem, fw->data, fw->size); 2053 2053 modify.host_src_addr = cpu_to_le64(dma_handle); 2054 2054 2055 - hwrm_err = hwrm_send_message(bp, &modify, 2056 - sizeof(modify), 2057 - FLASH_PACKAGE_TIMEOUT); 2055 + rc = hwrm_send_message(bp, &modify, sizeof(modify), 2056 + FLASH_PACKAGE_TIMEOUT); 2058 2057 dma_free_coherent(&bp->pdev->dev, fw->size, kmem, 2059 2058 dma_handle); 2060 2059 } 2061 2060 } 2062 2061 release_firmware(fw); 2063 - if (rc || hwrm_err) 2062 + if (rc) 2064 2063 goto err_exit; 2065 2064 2066 2065 if ((install_type & 0xffff) == 0) ··· 2068 2069 install.install_type = cpu_to_le32(install_type); 2069 2070 2070 2071 mutex_lock(&bp->hwrm_cmd_lock); 2071 - hwrm_err = _hwrm_send_message(bp, &install, sizeof(install), 2072 - INSTALL_PACKAGE_TIMEOUT); 2073 - if (hwrm_err) { 2072 + rc = _hwrm_send_message(bp, &install, sizeof(install), 2073 + INSTALL_PACKAGE_TIMEOUT); 2074 + if (rc) { 2074 2075 u8 error_code = ((struct hwrm_err_output *)resp)->cmd_err; 2075 2076 2076 2077 if (resp->error_code && error_code == 2077 2078 NVM_INSTALL_UPDATE_CMD_ERR_CODE_FRAG_ERR) { 2078 2079 install.flags |= cpu_to_le16( 2079 2080 NVM_INSTALL_UPDATE_REQ_FLAGS_ALLOWED_TO_DEFRAG); 2080 - hwrm_err = _hwrm_send_message(bp, &install, 2081 - sizeof(install), 2082 - INSTALL_PACKAGE_TIMEOUT); 2081 + rc = _hwrm_send_message(bp, &install, sizeof(install), 2082 + INSTALL_PACKAGE_TIMEOUT); 2083 2083 } 2084 - if (hwrm_err) 2084 + if (rc) 2085 2085 goto flash_pkg_exit; 2086 2086 } 2087 2087 ··· 2092 2094 flash_pkg_exit: 2093 2095 mutex_unlock(&bp->hwrm_cmd_lock); 2094 2096 err_exit: 2095 - if (hwrm_err == -EACCES) 2097 + if (rc == -EACCES) 2096 2098 
bnxt_print_admin_err(bp); 2097 2099 return rc; 2098 2100 }
+27 -22
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 5381 5381 static int cfg_queues(struct adapter *adap) 5382 5382 { 5383 5383 u32 avail_qsets, avail_eth_qsets, avail_uld_qsets; 5384 + u32 i, n10g = 0, qidx = 0, n1g = 0; 5385 + u32 ncpus = num_online_cpus(); 5384 5386 u32 niqflint, neq, num_ulds; 5385 5387 struct sge *s = &adap->sge; 5386 - u32 i, n10g = 0, qidx = 0; 5387 - #ifndef CONFIG_CHELSIO_T4_DCB 5388 - int q10g = 0; 5389 - #endif 5388 + u32 q10g = 0, q1g; 5390 5389 5391 5390 /* Reduce memory usage in kdump environment, disable all offload. */ 5392 5391 if (is_kdump_kernel() || (is_uld(adap) && t4_uld_mem_alloc(adap))) { ··· 5423 5424 n10g += is_x_10g_port(&adap2pinfo(adap, i)->link_cfg); 5424 5425 5425 5426 avail_eth_qsets = min_t(u32, avail_qsets, MAX_ETH_QSETS); 5427 + 5428 + /* We default to 1 queue per non-10G port and up to # of cores queues 5429 + * per 10G port. 5430 + */ 5431 + if (n10g) 5432 + q10g = (avail_eth_qsets - (adap->params.nports - n10g)) / n10g; 5433 + 5434 + n1g = adap->params.nports - n10g; 5426 5435 #ifdef CONFIG_CHELSIO_T4_DCB 5427 5436 /* For Data Center Bridging support we need to be able to support up 5428 5437 * to 8 Traffic Priorities; each of which will be assigned to its 5429 5438 * own TX Queue in order to prevent Head-Of-Line Blocking. 5430 5439 */ 5440 + q1g = 8; 5431 5441 if (adap->params.nports * 8 > avail_eth_qsets) { 5432 5442 dev_err(adap->pdev_dev, "DCB avail_eth_qsets=%d < %d!\n", 5433 5443 avail_eth_qsets, adap->params.nports * 8); 5434 5444 return -ENOMEM; 5435 5445 } 5436 5446 5437 - for_each_port(adap, i) { 5438 - struct port_info *pi = adap2pinfo(adap, i); 5447 + if (adap->params.nports * ncpus < avail_eth_qsets) 5448 + q10g = max(8U, ncpus); 5449 + else 5450 + q10g = max(8U, q10g); 5439 5451 5440 - pi->first_qset = qidx; 5441 - pi->nqsets = is_kdump_kernel() ? 
1 : 8; 5442 - qidx += pi->nqsets; 5443 - } 5452 + while ((q10g * n10g) > (avail_eth_qsets - n1g * q1g)) 5453 + q10g--; 5454 + 5444 5455 #else /* !CONFIG_CHELSIO_T4_DCB */ 5445 - /* We default to 1 queue per non-10G port and up to # of cores queues 5446 - * per 10G port. 5447 - */ 5448 - if (n10g) 5449 - q10g = (avail_eth_qsets - (adap->params.nports - n10g)) / n10g; 5450 - if (q10g > netif_get_num_default_rss_queues()) 5451 - q10g = netif_get_num_default_rss_queues(); 5452 - 5453 - if (is_kdump_kernel()) 5456 + q1g = 1; 5457 + q10g = min(q10g, ncpus); 5458 + #endif /* !CONFIG_CHELSIO_T4_DCB */ 5459 + if (is_kdump_kernel()) { 5454 5460 q10g = 1; 5461 + q1g = 1; 5462 + } 5455 5463 5456 5464 for_each_port(adap, i) { 5457 5465 struct port_info *pi = adap2pinfo(adap, i); 5458 5466 5459 5467 pi->first_qset = qidx; 5460 - pi->nqsets = is_x_10g_port(&pi->link_cfg) ? q10g : 1; 5468 + pi->nqsets = is_x_10g_port(&pi->link_cfg) ? q10g : q1g; 5461 5469 qidx += pi->nqsets; 5462 5470 } 5463 - #endif /* !CONFIG_CHELSIO_T4_DCB */ 5464 5471 5465 5472 s->ethqsets = qidx; 5466 5473 s->max_ethqsets = qidx; /* MSI-X may lower it later */ ··· 5478 5473 * capped by the number of available cores. 5479 5474 */ 5480 5475 num_ulds = adap->num_uld + adap->num_ofld_uld; 5481 - i = min_t(u32, MAX_OFLD_QSETS, num_online_cpus()); 5476 + i = min_t(u32, MAX_OFLD_QSETS, ncpus); 5482 5477 avail_uld_qsets = roundup(i, adap->params.nports); 5483 5478 if (avail_qsets < num_ulds * adap->params.nports) { 5484 5479 adap->params.offload = 0;
+108 -6
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
··· 1 1 /* Copyright 2008 - 2016 Freescale Semiconductor Inc. 2 + * Copyright 2020 NXP 2 3 * 3 4 * Redistribution and use in source and binary forms, with or without 4 5 * modification, are permitted provided that the following conditions are met: ··· 124 123 #define FSL_QMAN_MAX_OAL 127 125 124 126 125 /* Default alignment for start of data in an Rx FD */ 126 + #ifdef CONFIG_DPAA_ERRATUM_A050385 127 + /* aligning data start to 64 avoids DMA transaction splits, unless the buffer 128 + * is crossing a 4k page boundary 129 + */ 130 + #define DPAA_FD_DATA_ALIGNMENT (fman_has_errata_a050385() ? 64 : 16) 131 + /* aligning to 256 avoids DMA transaction splits caused by 4k page boundary 132 + * crossings; also, all SG fragments except the last must have a size multiple 133 + * of 256 to avoid DMA transaction splits 134 + */ 135 + #define DPAA_A050385_ALIGN 256 136 + #define DPAA_FD_RX_DATA_ALIGNMENT (fman_has_errata_a050385() ? \ 137 + DPAA_A050385_ALIGN : 16) 138 + #else 127 139 #define DPAA_FD_DATA_ALIGNMENT 16 140 + #define DPAA_FD_RX_DATA_ALIGNMENT DPAA_FD_DATA_ALIGNMENT 141 + #endif 128 142 129 143 /* The DPAA requires 256 bytes reserved and mapped for the SGT */ 130 144 #define DPAA_SGT_SIZE 256 ··· 174 158 #define DPAA_PARSE_RESULTS_SIZE sizeof(struct fman_prs_result) 175 159 #define DPAA_TIME_STAMP_SIZE 8 176 160 #define DPAA_HASH_RESULTS_SIZE 8 161 + #ifdef CONFIG_DPAA_ERRATUM_A050385 162 + #define DPAA_RX_PRIV_DATA_SIZE (DPAA_A050385_ALIGN - (DPAA_PARSE_RESULTS_SIZE\ 163 + + DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE)) 164 + #else 177 165 #define DPAA_RX_PRIV_DATA_SIZE (u16)(DPAA_TX_PRIV_DATA_SIZE + \ 178 166 dpaa_rx_extra_headroom) 167 + #endif 179 168 180 169 #define DPAA_ETH_PCD_RXQ_NUM 128 181 170 ··· 201 180 202 181 #define DPAA_BP_RAW_SIZE 4096 203 182 183 + #ifdef CONFIG_DPAA_ERRATUM_A050385 184 + #define dpaa_bp_size(raw_size) (SKB_WITH_OVERHEAD(raw_size) & \ 185 + ~(DPAA_A050385_ALIGN - 1)) 186 + #else 204 187 #define dpaa_bp_size(raw_size) 
SKB_WITH_OVERHEAD(raw_size) 188 + #endif 205 189 206 190 static int dpaa_max_frm; 207 191 ··· 1218 1192 buf_prefix_content.pass_prs_result = true; 1219 1193 buf_prefix_content.pass_hash_result = true; 1220 1194 buf_prefix_content.pass_time_stamp = true; 1221 - buf_prefix_content.data_align = DPAA_FD_DATA_ALIGNMENT; 1195 + buf_prefix_content.data_align = DPAA_FD_RX_DATA_ALIGNMENT; 1222 1196 1223 1197 rx_p = &params.specific_params.rx_params; 1224 1198 rx_p->err_fqid = errq->fqid; ··· 1688 1662 return CHECKSUM_NONE; 1689 1663 } 1690 1664 1665 + #define PTR_IS_ALIGNED(x, a) (IS_ALIGNED((unsigned long)(x), (a))) 1666 + 1691 1667 /* Build a linear skb around the received buffer. 1692 1668 * We are guaranteed there is enough room at the end of the data buffer to 1693 1669 * accommodate the shared info area of the skb. ··· 1761 1733 1762 1734 sg_addr = qm_sg_addr(&sgt[i]); 1763 1735 sg_vaddr = phys_to_virt(sg_addr); 1764 - WARN_ON(!IS_ALIGNED((unsigned long)sg_vaddr, 1765 - SMP_CACHE_BYTES)); 1736 + WARN_ON(!PTR_IS_ALIGNED(sg_vaddr, SMP_CACHE_BYTES)); 1766 1737 1767 1738 dma_unmap_page(priv->rx_dma_dev, sg_addr, 1768 1739 DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE); ··· 2049 2022 return 0; 2050 2023 } 2051 2024 2025 + #ifdef CONFIG_DPAA_ERRATUM_A050385 2026 + int dpaa_a050385_wa(struct net_device *net_dev, struct sk_buff **s) 2027 + { 2028 + struct dpaa_priv *priv = netdev_priv(net_dev); 2029 + struct sk_buff *new_skb, *skb = *s; 2030 + unsigned char *start, i; 2031 + 2032 + /* check linear buffer alignment */ 2033 + if (!PTR_IS_ALIGNED(skb->data, DPAA_A050385_ALIGN)) 2034 + goto workaround; 2035 + 2036 + /* linear buffers just need to have an aligned start */ 2037 + if (!skb_is_nonlinear(skb)) 2038 + return 0; 2039 + 2040 + /* linear data size for nonlinear skbs needs to be aligned */ 2041 + if (!IS_ALIGNED(skb_headlen(skb), DPAA_A050385_ALIGN)) 2042 + goto workaround; 2043 + 2044 + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { 2045 + skb_frag_t *frag = 
&skb_shinfo(skb)->frags[i]; 2046 + 2047 + /* all fragments need to have aligned start addresses */ 2048 + if (!IS_ALIGNED(skb_frag_off(frag), DPAA_A050385_ALIGN)) 2049 + goto workaround; 2050 + 2051 + /* all but last fragment need to have aligned sizes */ 2052 + if (!IS_ALIGNED(skb_frag_size(frag), DPAA_A050385_ALIGN) && 2053 + (i < skb_shinfo(skb)->nr_frags - 1)) 2054 + goto workaround; 2055 + } 2056 + 2057 + return 0; 2058 + 2059 + workaround: 2060 + /* copy all the skb content into a new linear buffer */ 2061 + new_skb = netdev_alloc_skb(net_dev, skb->len + DPAA_A050385_ALIGN - 1 + 2062 + priv->tx_headroom); 2063 + if (!new_skb) 2064 + return -ENOMEM; 2065 + 2066 + /* NET_SKB_PAD bytes already reserved, adding up to tx_headroom */ 2067 + skb_reserve(new_skb, priv->tx_headroom - NET_SKB_PAD); 2068 + 2069 + /* Workaround for DPAA_A050385 requires data start to be aligned */ 2070 + start = PTR_ALIGN(new_skb->data, DPAA_A050385_ALIGN); 2071 + if (start - new_skb->data != 0) 2072 + skb_reserve(new_skb, start - new_skb->data); 2073 + 2074 + skb_put(new_skb, skb->len); 2075 + skb_copy_bits(skb, 0, new_skb->data, skb->len); 2076 + skb_copy_header(new_skb, skb); 2077 + new_skb->dev = skb->dev; 2078 + 2079 + /* We move the headroom when we align it so we have to reset the 2080 + * network and transport header offsets relative to the new data 2081 + * pointer. The checksum offload relies on these offsets. 2082 + */ 2083 + skb_set_network_header(new_skb, skb_network_offset(skb)); 2084 + skb_set_transport_header(new_skb, skb_transport_offset(skb)); 2085 + 2086 + /* TODO: does timestamping need the result in the old skb? 
*/ 2087 + dev_kfree_skb(skb); 2088 + *s = new_skb; 2089 + 2090 + return 0; 2091 + } 2092 + #endif 2093 + 2052 2094 static netdev_tx_t 2053 2095 dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev) 2054 2096 { ··· 2163 2067 2164 2068 nonlinear = skb_is_nonlinear(skb); 2165 2069 } 2070 + 2071 + #ifdef CONFIG_DPAA_ERRATUM_A050385 2072 + if (unlikely(fman_has_errata_a050385())) { 2073 + if (dpaa_a050385_wa(net_dev, &skb)) 2074 + goto enomem; 2075 + nonlinear = skb_is_nonlinear(skb); 2076 + } 2077 + #endif 2166 2078 2167 2079 if (nonlinear) { 2168 2080 /* Just create a S/G fd based on the skb */ ··· 2845 2741 headroom = (u16)(bl->priv_data_size + DPAA_PARSE_RESULTS_SIZE + 2846 2742 DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE); 2847 2743 2848 - return DPAA_FD_DATA_ALIGNMENT ? ALIGN(headroom, 2849 - DPAA_FD_DATA_ALIGNMENT) : 2850 - headroom; 2744 + return ALIGN(headroom, DPAA_FD_DATA_ALIGNMENT); 2851 2745 } 2852 2746 2853 2747 static int dpaa_eth_probe(struct platform_device *pdev)
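The alignment checks in dpaa_a050385_wa() above can be condensed into a single predicate. The sketch below is a simplified, userspace model (hypothetical function name, raw offsets/lengths instead of skb fragments): a frame avoids the workaround copy only if its data start is 256-byte aligned, every fragment start is aligned, and every fragment except the last has a size that is a multiple of 256:

```c
#include <stdbool.h>
#include <stdint.h>

#define DPAA_A050385_ALIGN 256

/* Simplified model of the checks in dpaa_a050385_wa(): returns true
 * when the frame layout violates the erratum constraints and must be
 * copied into a freshly aligned linear buffer. */
static bool a050385_needs_copy(uintptr_t data, const unsigned int *frag_off,
			       const unsigned int *frag_len, int nr_frags)
{
	int i;

	/* linear data start must be 256-byte aligned */
	if (data % DPAA_A050385_ALIGN)
		return true;

	for (i = 0; i < nr_frags; i++) {
		/* all fragments need aligned start addresses */
		if (frag_off[i] % DPAA_A050385_ALIGN)
			return true;
		/* all but the last fragment need aligned sizes */
		if (i < nr_frags - 1 && frag_len[i] % DPAA_A050385_ALIGN)
			return true;
	}
	return false;
}
```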
+3 -3
drivers/net/ethernet/freescale/fec_main.c
··· 2529 2529 return -EINVAL; 2530 2530 } 2531 2531 2532 - cycle = fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr); 2532 + cycle = fec_enet_us_to_itr_clock(ndev, ec->rx_coalesce_usecs); 2533 2533 if (cycle > 0xFFFF) { 2534 2534 dev_err(dev, "Rx coalesced usec exceed hardware limitation\n"); 2535 2535 return -EINVAL; 2536 2536 } 2537 2537 2538 - cycle = fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr); 2538 + cycle = fec_enet_us_to_itr_clock(ndev, ec->tx_coalesce_usecs); 2539 2539 if (cycle > 0xFFFF) { 2540 - dev_err(dev, "Rx coalesced usec exceed hardware limitation\n"); 2540 + dev_err(dev, "Tx coalesced usec exceed hardware limitation\n"); 2541 2541 return -EINVAL; 2542 2542 } 2543 2543
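The fec_main.c fix above checks the hardware limit against the *new* user-requested values (`ec->rx_coalesce_usecs` / `ec->tx_coalesce_usecs`) rather than the currently programmed ones. A minimal sketch of the range check, assuming an illustrative 64 MHz ITR clock (the real driver derives the rate from its AHB clock):

```c
#include <assert.h>

/* Sketch of the coalescing range check in fec_enet_set_coalesce():
 * the converted cycle count must fit the 16-bit hardware field.
 * A 64 MHz ITR clock is an assumption for illustration only. */
static int itr_cycles_ok(unsigned int usecs)
{
	unsigned long long cycle = (unsigned long long)usecs * 64;

	return cycle <= 0xFFFF;	/* hardware field is 16 bits wide */
}
```

At 64 cycles per microsecond, 1000 us (64000 cycles) fits, while 2000 us (128000 cycles) exceeds the 16-bit limit and must be rejected with -EINVAL.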
+28
drivers/net/ethernet/freescale/fman/Kconfig
··· 8 8 help 9 9 Freescale Data-Path Acceleration Architecture Frame Manager 10 10 (FMan) support 11 + 12 + config DPAA_ERRATUM_A050385 13 + bool 14 + depends on ARM64 && FSL_DPAA 15 + default y 16 + help 17 + DPAA FMan erratum A050385 software workaround implementation: 18 + align buffers, data start, SG fragment length to avoid FMan DMA 19 + splits. 20 + FMAN DMA read or writes under heavy traffic load may cause FMAN 21 + internal resource leak thus stopping further packet processing. 22 + The FMAN internal queue can overflow when FMAN splits single 23 + read or write transactions into multiple smaller transactions 24 + such that more than 17 AXI transactions are in flight from FMAN 25 + to interconnect. When the FMAN internal queue overflows, it can 26 + stall further packet processing. The issue can occur with any 27 + one of the following three conditions: 28 + 1. FMAN AXI transaction crosses 4K address boundary (Errata 29 + A010022) 30 + 2. FMAN DMA address for an AXI transaction is not 16 byte 31 + aligned, i.e. the last 4 bits of an address are non-zero 32 + 3. Scatter Gather (SG) frames have more than one SG buffer in 33 + the SG list and any one of the buffers, except the last 34 + buffer in the SG list has data size that is not a multiple 35 + of 16 bytes, i.e., other than 16, 32, 48, 64, etc. 36 + With any one of the above three conditions present, there is 37 + likelihood of stalled FMAN packet processing, especially under 38 + stress with multiple ports injecting line-rate traffic.
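The three trigger conditions listed in the Kconfig help text above can be expressed as one predicate. This is an illustrative sketch only (real FMan DMA transactions carry more state than a start address and a length):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative predicate for the three A050385 conditions from the
 * help text: 4K boundary crossing, non-16-byte-aligned address, or a
 * non-final S/G buffer whose size is not a multiple of 16. */
static bool a050385_condition(uint64_t addr, uint32_t len, bool last_sg)
{
	/* 1. transaction crosses a 4K address boundary */
	if (addr / 4096 != (addr + len - 1) / 4096)
		return true;
	/* 2. DMA address is not 16-byte aligned */
	if (addr & 0xF)
		return true;
	/* 3. non-final S/G buffer size is not a multiple of 16 */
	if (!last_sg && (len & 0xF))
		return true;
	return false;
}
```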
+18
drivers/net/ethernet/freescale/fman/fman.c
··· 1 1 /* 2 2 * Copyright 2008-2015 Freescale Semiconductor Inc. 3 + * Copyright 2020 NXP 3 4 * 4 5 * Redistribution and use in source and binary forms, with or without 5 6 * modification, are permitted provided that the following conditions are met: ··· 566 565 u32 total_num_of_tasks; 567 566 u32 qmi_def_tnums_thresh; 568 567 }; 568 + 569 + #ifdef CONFIG_DPAA_ERRATUM_A050385 570 + static bool fman_has_err_a050385; 571 + #endif 569 572 570 573 static irqreturn_t fman_exceptions(struct fman *fman, 571 574 enum fman_exceptions exception) ··· 2523 2518 } 2524 2519 EXPORT_SYMBOL(fman_bind); 2525 2520 2521 + #ifdef CONFIG_DPAA_ERRATUM_A050385 2522 + bool fman_has_errata_a050385(void) 2523 + { 2524 + return fman_has_err_a050385; 2525 + } 2526 + EXPORT_SYMBOL(fman_has_errata_a050385); 2527 + #endif 2528 + 2526 2529 static irqreturn_t fman_err_irq(int irq, void *handle) 2527 2530 { 2528 2531 struct fman *fman = (struct fman *)handle; ··· 2857 2844 __func__); 2858 2845 goto fman_free; 2859 2846 } 2847 + 2848 + #ifdef CONFIG_DPAA_ERRATUM_A050385 2849 + fman_has_err_a050385 = 2850 + of_property_read_bool(fm_node, "fsl,erratum-a050385"); 2851 + #endif 2860 2852 2861 2853 return fman; 2862 2854
+5
drivers/net/ethernet/freescale/fman/fman.h
··· 1 1 /* 2 2 * Copyright 2008-2015 Freescale Semiconductor Inc. 3 + * Copyright 2020 NXP 3 4 * 4 5 * Redistribution and use in source and binary forms, with or without 5 6 * modification, are permitted provided that the following conditions are met: ··· 398 397 u16 fman_get_max_frm(void); 399 398 400 399 int fman_get_rx_extra_headroom(void); 400 + 401 + #ifdef CONFIG_DPAA_ERRATUM_A050385 402 + bool fman_has_errata_a050385(void); 403 + #endif 401 404 402 405 struct fman *fman_bind(struct device *dev); 403 406
+1
drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
··· 46 46 HCLGE_MBX_PUSH_VLAN_INFO, /* (PF -> VF) push port base vlan */ 47 47 HCLGE_MBX_GET_MEDIA_TYPE, /* (VF -> PF) get media type */ 48 48 HCLGE_MBX_PUSH_PROMISC_INFO, /* (PF -> VF) push vf promisc info */ 49 + HCLGE_MBX_VF_UNINIT, /* (VF -> PF) vf is uninitializing */ 49 50 50 51 HCLGE_MBX_GET_VF_FLR_STATUS = 200, /* (M7 -> PF) get vf flr status */ 51 52 HCLGE_MBX_PUSH_LINK_STATUS, /* (M7 -> PF) get port link status */
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 1711 1711 netif_dbg(h, drv, netdev, "setup tc: num_tc=%u\n", tc); 1712 1712 1713 1713 return (kinfo->dcb_ops && kinfo->dcb_ops->setup_tc) ? 1714 - kinfo->dcb_ops->setup_tc(h, tc, prio_tc) : -EOPNOTSUPP; 1714 + kinfo->dcb_ops->setup_tc(h, tc ? tc : 1, prio_tc) : -EOPNOTSUPP; 1715 1715 } 1716 1716 1717 1717 static int hns3_nic_setup_tc(struct net_device *dev, enum tc_setup_type type,
+42 -5
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 2446 2446 2447 2447 int hclge_cfg_mac_speed_dup(struct hclge_dev *hdev, int speed, u8 duplex) 2448 2448 { 2449 + struct hclge_mac *mac = &hdev->hw.mac; 2449 2450 int ret; 2450 2451 2451 2452 duplex = hclge_check_speed_dup(duplex, speed); 2452 - if (hdev->hw.mac.speed == speed && hdev->hw.mac.duplex == duplex) 2453 + if (!mac->support_autoneg && mac->speed == speed && 2454 + mac->duplex == duplex) 2453 2455 return 0; 2454 2456 2455 2457 ret = hclge_cfg_mac_speed_dup_hw(hdev, speed, duplex); ··· 7745 7743 struct hclge_desc desc; 7746 7744 int ret; 7747 7745 7748 - hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_VLAN_FILTER_CTRL, false); 7749 - 7746 + /* read current vlan filter parameter */ 7747 + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_VLAN_FILTER_CTRL, true); 7750 7748 req = (struct hclge_vlan_filter_ctrl_cmd *)desc.data; 7751 7749 req->vlan_type = vlan_type; 7752 - req->vlan_fe = filter_en ? fe_type : 0; 7753 7750 req->vf_id = vf_id; 7754 7751 7755 7752 ret = hclge_cmd_send(&hdev->hw, &desc, 1); 7753 + if (ret) { 7754 + dev_err(&hdev->pdev->dev, 7755 + "failed to get vlan filter config, ret = %d.\n", ret); 7756 + return ret; 7757 + } 7758 + 7759 + /* modify and write new config parameter */ 7760 + hclge_cmd_reuse_desc(&desc, false); 7761 + req->vlan_fe = filter_en ? 
7762 + (req->vlan_fe | fe_type) : (req->vlan_fe & ~fe_type); 7763 + 7764 + ret = hclge_cmd_send(&hdev->hw, &desc, 1); 7756 7765 if (ret) 7757 - dev_err(&hdev->pdev->dev, "set vlan filter fail, ret =%d.\n", 7766 + dev_err(&hdev->pdev->dev, "failed to set vlan filter, ret = %d.\n", 7758 7767 ret); 7759 7768 7760 7769 return ret; ··· 8283 8270 kfree(vlan); 8284 8271 } 8285 8272 } 8273 + clear_bit(vport->vport_id, hdev->vf_vlan_full); 8286 8274 } 8287 8275 8288 8276 void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev) ··· 8497 8483 vlan, qos, 8498 8484 ntohs(proto)); 8499 8485 return ret; 8486 + } 8487 + } 8488 + 8489 + static void hclge_clear_vf_vlan(struct hclge_dev *hdev) 8490 + { 8491 + struct hclge_vlan_info *vlan_info; 8492 + struct hclge_vport *vport; 8493 + int ret; 8494 + int vf; 8495 + 8496 + /* clear port base vlan for all vf */ 8497 + for (vf = HCLGE_VF_VPORT_START_NUM; vf < hdev->num_alloc_vport; vf++) { 8498 + vport = &hdev->vport[vf]; 8499 + vlan_info = &vport->port_base_vlan_cfg.vlan_info; 8500 + 8501 + ret = hclge_set_vlan_filter_hw(hdev, htons(ETH_P_8021Q), 8502 + vport->vport_id, 8503 + vlan_info->vlan_tag, true); 8504 + if (ret) 8505 + dev_err(&hdev->pdev->dev, 8506 + "failed to clear vf vlan for vf%d, ret = %d\n", 8507 + vf - HCLGE_VF_VPORT_START_NUM, ret); 8500 8508 } 8501 8509 } 8502 8510 ··· 9931 9895 struct hclge_mac *mac = &hdev->hw.mac; 9932 9896 9933 9897 hclge_reset_vf_rate(hdev); 9898 + hclge_clear_vf_vlan(hdev); 9934 9899 hclge_misc_affinity_teardown(hdev); 9935 9900 hclge_state_uninit(hdev); 9936 9901
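The VLAN filter fix in the hclge_main.c hunk above turns a blind overwrite into a read-modify-write: the current `vlan_fe` value is read back first, then only the requested filter-enable bits (`fe_type`) are set or cleared. The bit manipulation can be sketched on its own:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the read-modify-write in hclge_set_vlan_filter_ctrl():
 * only the fe_type bits change; the other enable bits read back from
 * the firmware are preserved. */
static uint8_t update_vlan_fe(uint8_t current_fe, uint8_t fe_type,
			      int filter_en)
{
	return filter_en ? (current_fe | fe_type)
			 : (current_fe & ~fe_type);
}
```

Enabling bit 0x2 on a register that already holds 0x5 yields 0x7 instead of clobbering the register to 0x2, which was the bug being fixed.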
+1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
··· 799 799 hclge_get_link_mode(vport, req); 800 800 break; 801 801 case HCLGE_MBX_GET_VF_FLR_STATUS: 802 + case HCLGE_MBX_VF_UNINIT: 802 803 hclge_rm_vport_all_mac_table(vport, true, 803 804 HCLGE_MAC_ADDR_UC); 804 805 hclge_rm_vport_all_mac_table(vport, true,
+3
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
··· 2803 2803 { 2804 2804 hclgevf_state_uninit(hdev); 2805 2805 2806 + hclgevf_send_mbx_msg(hdev, HCLGE_MBX_VF_UNINIT, 0, NULL, 0, 2807 + false, NULL, 0); 2808 + 2806 2809 if (test_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state)) { 2807 2810 hclgevf_misc_irq_uninit(hdev); 2808 2811 hclgevf_uninit_msi(hdev);
+22 -2
drivers/net/ethernet/ibm/ibmvnic.c
··· 2142 2142 { 2143 2143 struct ibmvnic_rwi *rwi; 2144 2144 struct ibmvnic_adapter *adapter; 2145 + bool saved_state = false; 2146 + unsigned long flags; 2145 2147 u32 reset_state; 2146 2148 int rc = 0; 2147 2149 ··· 2155 2153 return; 2156 2154 } 2157 2155 2158 - reset_state = adapter->state; 2159 - 2160 2156 rwi = get_next_rwi(adapter); 2161 2157 while (rwi) { 2158 + spin_lock_irqsave(&adapter->state_lock, flags); 2159 + 2162 2160 if (adapter->state == VNIC_REMOVING || 2163 2161 adapter->state == VNIC_REMOVED) { 2162 + spin_unlock_irqrestore(&adapter->state_lock, flags); 2164 2163 kfree(rwi); 2165 2164 rc = EBUSY; 2166 2165 break; 2167 2166 } 2167 + 2168 + if (!saved_state) { 2169 + reset_state = adapter->state; 2170 + adapter->state = VNIC_RESETTING; 2171 + saved_state = true; 2172 + } 2173 + spin_unlock_irqrestore(&adapter->state_lock, flags); 2168 2174 2169 2175 if (rwi->reset_reason == VNIC_RESET_CHANGE_PARAM) { 2170 2176 /* CHANGE_PARAM requestor holds rtnl_lock */ ··· 5101 5091 __ibmvnic_delayed_reset); 5102 5092 INIT_LIST_HEAD(&adapter->rwi_list); 5103 5093 spin_lock_init(&adapter->rwi_lock); 5094 + spin_lock_init(&adapter->state_lock); 5104 5095 mutex_init(&adapter->fw_lock); 5105 5096 init_completion(&adapter->init_done); 5106 5097 init_completion(&adapter->fw_done); ··· 5174 5163 { 5175 5164 struct net_device *netdev = dev_get_drvdata(&dev->dev); 5176 5165 struct ibmvnic_adapter *adapter = netdev_priv(netdev); 5166 + unsigned long flags; 5167 + 5168 + spin_lock_irqsave(&adapter->state_lock, flags); 5169 + if (adapter->state == VNIC_RESETTING) { 5170 + spin_unlock_irqrestore(&adapter->state_lock, flags); 5171 + return -EBUSY; 5172 + } 5177 5173 5178 5174 adapter->state = VNIC_REMOVING; 5175 + spin_unlock_irqrestore(&adapter->state_lock, flags); 5176 + 5179 5177 rtnl_lock(); 5180 5178 unregister_netdevice(netdev); 5181 5179
+5 -1
drivers/net/ethernet/ibm/ibmvnic.h
··· 941 941 VNIC_CLOSING, 942 942 VNIC_CLOSED, 943 943 VNIC_REMOVING, 944 - VNIC_REMOVED}; 944 + VNIC_REMOVED, 945 + VNIC_RESETTING}; 945 946 946 947 enum ibmvnic_reset_reason {VNIC_RESET_FAILOVER = 1, 947 948 VNIC_RESET_MOBILITY, ··· 1091 1090 1092 1091 struct ibmvnic_tunables desired; 1093 1092 struct ibmvnic_tunables fallback; 1093 + 1094 + /* Used for serialization of state field */ 1095 + spinlock_t state_lock; 1094 1096 };
+3 -3
drivers/net/ethernet/marvell/mvmdio.c
··· 347 347 } 348 348 349 349 350 - dev->err_interrupt = platform_get_irq(pdev, 0); 350 + dev->err_interrupt = platform_get_irq_optional(pdev, 0); 351 351 if (dev->err_interrupt > 0 && 352 352 resource_size(r) < MVMDIO_ERR_INT_MASK + 4) { 353 353 dev_err(&pdev->dev, ··· 364 364 writel(MVMDIO_ERR_INT_SMI_DONE, 365 365 dev->regs + MVMDIO_ERR_INT_MASK); 366 366 367 - } else if (dev->err_interrupt == -EPROBE_DEFER) { 368 - ret = -EPROBE_DEFER; 367 + } else if (dev->err_interrupt < 0) { 368 + ret = dev->err_interrupt; 369 369 goto out_mdio; 370 370 } 371 371
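The mvmdio change above switches to `platform_get_irq_optional()` and, in the error branch, propagates *any* negative return code instead of special-casing -EPROBE_DEFER. A trivial sketch of the resulting mapping (errno value shown is illustrative):

```c
#include <assert.h>

#define EPROBE_DEFER 517	/* illustrative errno value */

/* Sketch of the error handling after the switch to
 * platform_get_irq_optional(): every negative code is propagated,
 * while a missing (optional) interrupt is not treated as an error. */
static int err_irq_to_ret(int err_interrupt)
{
	return err_interrupt < 0 ? err_interrupt : 0;
}
```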
+17 -11
drivers/net/ethernet/mscc/ocelot.c
··· 2176 2176 return 0; 2177 2177 } 2178 2178 2179 - static void ocelot_port_set_mtu(struct ocelot *ocelot, int port, size_t mtu) 2179 + /* Configure the maximum SDU (L2 payload) on RX to the value specified in @sdu. 2180 + * The length of VLAN tags is accounted for automatically via DEV_MAC_TAGS_CFG. 2181 + */ 2182 + static void ocelot_port_set_maxlen(struct ocelot *ocelot, int port, size_t sdu) 2180 2183 { 2181 2184 struct ocelot_port *ocelot_port = ocelot->ports[port]; 2185 + int maxlen = sdu + ETH_HLEN + ETH_FCS_LEN; 2182 2186 int atop_wm; 2183 2187 2184 - ocelot_port_writel(ocelot_port, mtu, DEV_MAC_MAXLEN_CFG); 2188 + ocelot_port_writel(ocelot_port, maxlen, DEV_MAC_MAXLEN_CFG); 2185 2189 2186 2190 /* Set Pause WM hysteresis 2187 - * 152 = 6 * mtu / OCELOT_BUFFER_CELL_SZ 2188 - * 101 = 4 * mtu / OCELOT_BUFFER_CELL_SZ 2191 + * 152 = 6 * maxlen / OCELOT_BUFFER_CELL_SZ 2192 + * 101 = 4 * maxlen / OCELOT_BUFFER_CELL_SZ 2189 2193 */ 2190 2194 ocelot_write_rix(ocelot, SYS_PAUSE_CFG_PAUSE_ENA | 2191 2195 SYS_PAUSE_CFG_PAUSE_STOP(101) | 2192 2196 SYS_PAUSE_CFG_PAUSE_START(152), SYS_PAUSE_CFG, port); 2193 2197 2194 2198 /* Tail dropping watermark */ 2195 - atop_wm = (ocelot->shared_queue_sz - 9 * mtu) / OCELOT_BUFFER_CELL_SZ; 2196 - ocelot_write_rix(ocelot, ocelot_wm_enc(9 * mtu), 2199 + atop_wm = (ocelot->shared_queue_sz - 9 * maxlen) / 2200 + OCELOT_BUFFER_CELL_SZ; 2201 + ocelot_write_rix(ocelot, ocelot_wm_enc(9 * maxlen), 2197 2202 SYS_ATOP, port); 2198 2203 ocelot_write(ocelot, ocelot_wm_enc(atop_wm), SYS_ATOP_TOT_CFG); 2199 2204 } ··· 2227 2222 DEV_MAC_HDX_CFG); 2228 2223 2229 2224 /* Set Max Length and maximum tags allowed */ 2230 - ocelot_port_set_mtu(ocelot, port, VLAN_ETH_FRAME_LEN); 2225 + ocelot_port_set_maxlen(ocelot, port, ETH_DATA_LEN); 2231 2226 ocelot_port_writel(ocelot_port, DEV_MAC_TAGS_CFG_TAG_ID(ETH_P_8021AD) | 2232 2227 DEV_MAC_TAGS_CFG_VLAN_AWR_ENA | 2228 + DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA | 2233 2229 DEV_MAC_TAGS_CFG_VLAN_LEN_AWR_ENA, 2234 2230 
DEV_MAC_TAGS_CFG); 2235 2231 ··· 2316 2310 * Only one port can be an NPI at the same time. 2317 2311 */ 2318 2312 if (cpu < ocelot->num_phys_ports) { 2319 - int mtu = VLAN_ETH_FRAME_LEN + OCELOT_TAG_LEN; 2313 + int sdu = ETH_DATA_LEN + OCELOT_TAG_LEN; 2320 2314 2321 2315 ocelot_write(ocelot, QSYS_EXT_CPU_CFG_EXT_CPUQ_MSK_M | 2322 2316 QSYS_EXT_CPU_CFG_EXT_CPU_PORT(cpu), 2323 2317 QSYS_EXT_CPU_CFG); 2324 2318 2325 2319 if (injection == OCELOT_TAG_PREFIX_SHORT) 2326 - mtu += OCELOT_SHORT_PREFIX_LEN; 2320 + sdu += OCELOT_SHORT_PREFIX_LEN; 2327 2321 else if (injection == OCELOT_TAG_PREFIX_LONG) 2328 - mtu += OCELOT_LONG_PREFIX_LEN; 2322 + sdu += OCELOT_LONG_PREFIX_LEN; 2329 2323 2330 - ocelot_port_set_mtu(ocelot, cpu, mtu); 2324 + ocelot_port_set_maxlen(ocelot, cpu, sdu); 2331 2325 } 2332 2326 2333 2327 /* CPU port Injection/Extraction configuration */
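The ocelot rename above clarifies the units: callers now pass the L2 payload size (SDU), and the helper adds the Ethernet header and FCS to obtain the on-wire frame length programmed into DEV_MAC_MAXLEN_CFG. The arithmetic is simply:

```c
#include <assert.h>

#define ETH_HLEN	14	/* Ethernet header length */
#define ETH_FCS_LEN	4	/* frame check sequence length */

/* Sketch of the maxlen computation in ocelot_port_set_maxlen():
 * frame length = SDU (L2 payload) + L2 header + FCS.  VLAN tag
 * lengths are handled separately by the hardware via
 * DEV_MAC_TAGS_CFG, so they are not added here. */
static int ocelot_maxlen(int sdu)
{
	return sdu + ETH_HLEN + ETH_FCS_LEN;
}
```

For the standard 1500-byte SDU (ETH_DATA_LEN) this yields the familiar 1518-byte maximum frame size.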
+4 -4
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 1688 1688 if (!(is_zero_ether_addr(mac) || is_valid_ether_addr(mac))) 1689 1689 return -EINVAL; 1690 1690 1691 - down_read(&ionic->vf_op_lock); 1691 + down_write(&ionic->vf_op_lock); 1692 1692 1693 1693 if (vf >= pci_num_vf(ionic->pdev) || !ionic->vfs) { 1694 1694 ret = -EINVAL; ··· 1698 1698 ether_addr_copy(ionic->vfs[vf].macaddr, mac); 1699 1699 } 1700 1700 1701 - up_read(&ionic->vf_op_lock); 1701 + up_write(&ionic->vf_op_lock); 1702 1702 return ret; 1703 1703 } 1704 1704 ··· 1719 1719 if (proto != htons(ETH_P_8021Q)) 1720 1720 return -EPROTONOSUPPORT; 1721 1721 1722 - down_read(&ionic->vf_op_lock); 1722 + down_write(&ionic->vf_op_lock); 1723 1723 1724 1724 if (vf >= pci_num_vf(ionic->pdev) || !ionic->vfs) { 1725 1725 ret = -EINVAL; ··· 1730 1730 ionic->vfs[vf].vlanid = vlan; 1731 1731 } 1732 1732 1733 - up_read(&ionic->vf_op_lock); 1733 + up_write(&ionic->vf_op_lock); 1734 1734 return ret; 1735 1735 } 1736 1736
+1 -1
drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
··· 2277 2277 if (!str || !*str) 2278 2278 return -EINVAL; 2279 2279 while ((opt = strsep(&str, ",")) != NULL) { 2280 - if (!strncmp(opt, "eee_timer:", 6)) { 2280 + if (!strncmp(opt, "eee_timer:", 10)) { 2281 2281 if (kstrtoint(opt + 10, 0, &eee_timer)) 2282 2282 goto err; 2283 2283 }
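The sxgbe one-liner above fixes a classic `strncmp()` length bug: with n = 6 only the prefix `"eee_ti"` was compared, so any option starting with those six characters matched, and the fix compares the full 10 characters of the `"eee_timer:"` literal. A small demonstration:

```c
#include <string.h>

/* Demonstrates the strncmp() length bug fixed above: with n = 6 an
 * unrelated option such as "eee_timeout:5" (a made-up name, for
 * illustration) wrongly matches; with n = 10 it does not. */
static int matches_eee_timer(const char *opt, size_t n)
{
	return strncmp(opt, "eee_timer:", n) == 0;
}
```

A good habit is `strncmp(opt, "eee_timer:", strlen("eee_timer:"))` so the length can never drift out of sync with the literal.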
+17 -15
drivers/net/ethernet/sfc/ef10.c
··· 2853 2853 } 2854 2854 2855 2855 /* Transmit timestamps are only available for 8XXX series. They result 2856 - * in three events per packet. These occur in order, and are: 2857 - * - the normal completion event 2856 + * in up to three events per packet. These occur in order, and are: 2857 + * - the normal completion event (may be omitted) 2858 2858 * - the low part of the timestamp 2859 2859 * - the high part of the timestamp 2860 + * 2861 + * It's possible for multiple completion events to appear before the 2862 + * corresponding timestamps. So we can for example get: 2863 + * COMP N 2864 + * COMP N+1 2865 + * TS_LO N 2866 + * TS_HI N 2867 + * TS_LO N+1 2868 + * TS_HI N+1 2869 + * 2870 + * In addition it's also possible for the adjacent completions to be 2871 + * merged, so we may not see COMP N above. As such, the completion 2872 + * events are not very useful here. 2860 2873 * 2861 2874 * Each part of the timestamp is itself split across two 16 bit 2862 2875 * fields in the event. ··· 2878 2865 2879 2866 switch (tx_ev_type) { 2880 2867 case TX_TIMESTAMP_EVENT_TX_EV_COMPLETION: 2881 - /* In case of Queue flush or FLR, we might have received 2882 - * the previous TX completion event but not the Timestamp 2883 - * events. 2884 - */ 2885 - if (tx_queue->completed_desc_ptr != tx_queue->ptr_mask) 2886 - efx_xmit_done(tx_queue, tx_queue->completed_desc_ptr); 2887 - 2888 - tx_ev_desc_ptr = EFX_QWORD_FIELD(*event, 2889 - ESF_DZ_TX_DESCR_INDX); 2890 - tx_queue->completed_desc_ptr = 2891 - tx_ev_desc_ptr & tx_queue->ptr_mask; 2868 + /* Ignore this event - see above. */ 2892 2869 break; 2893 2870 2894 2871 case TX_TIMESTAMP_EVENT_TX_EV_TSTAMP_LO: ··· 2890 2887 ts_part = efx_ef10_extract_event_ts(event); 2891 2888 tx_queue->completed_timestamp_major = ts_part; 2892 2889 2893 - efx_xmit_done(tx_queue, tx_queue->completed_desc_ptr); 2894 - tx_queue->completed_desc_ptr = tx_queue->ptr_mask; 2890 + efx_xmit_done_single(tx_queue); 2895 2891 break; 2896 2892 2897 2893 default:
+1
drivers/net/ethernet/sfc/efx.h
··· 20 20 struct net_device *net_dev); 21 21 netdev_tx_t efx_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb); 22 22 void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index); 23 + void efx_xmit_done_single(struct efx_tx_queue *tx_queue); 23 24 int efx_setup_tc(struct net_device *net_dev, enum tc_setup_type type, 24 25 void *type_data); 25 26 extern unsigned int efx_piobuf_size;
+1
drivers/net/ethernet/sfc/efx_channels.c
··· 583 583 if (tx_queue->channel) 584 584 tx_queue->channel = channel; 585 585 tx_queue->buffer = NULL; 586 + tx_queue->cb_page = NULL; 586 587 memset(&tx_queue->txd, 0, sizeof(tx_queue->txd)); 587 588 } 588 589
-3
drivers/net/ethernet/sfc/net_driver.h
··· 208 208 * avoid cache-line ping-pong between the xmit path and the 209 209 * completion path. 210 210 * @merge_events: Number of TX merged completion events 211 - * @completed_desc_ptr: Most recent completed pointer - only used with 212 - * timestamping. 213 211 * @completed_timestamp_major: Top part of the most recent tx timestamp. 214 212 * @completed_timestamp_minor: Low part of the most recent tx timestamp. 215 213 * @insert_count: Current insert pointer ··· 267 269 unsigned int merge_events; 268 270 unsigned int bytes_compl; 269 271 unsigned int pkts_compl; 270 - unsigned int completed_desc_ptr; 271 272 u32 completed_timestamp_major; 272 273 u32 completed_timestamp_minor; 273 274
+38
drivers/net/ethernet/sfc/tx.c
··· 535 535 return efx_enqueue_skb(tx_queue, skb); 536 536 } 537 537 538 + void efx_xmit_done_single(struct efx_tx_queue *tx_queue) 539 + { 540 + unsigned int pkts_compl = 0, bytes_compl = 0; 541 + unsigned int read_ptr; 542 + bool finished = false; 543 + 544 + read_ptr = tx_queue->read_count & tx_queue->ptr_mask; 545 + 546 + while (!finished) { 547 + struct efx_tx_buffer *buffer = &tx_queue->buffer[read_ptr]; 548 + 549 + if (!efx_tx_buffer_in_use(buffer)) { 550 + struct efx_nic *efx = tx_queue->efx; 551 + 552 + netif_err(efx, hw, efx->net_dev, 553 + "TX queue %d spurious single TX completion\n", 554 + tx_queue->queue); 555 + efx_schedule_reset(efx, RESET_TYPE_TX_SKIP); 556 + return; 557 + } 558 + 559 + /* Need to check the flag before dequeueing. */ 560 + if (buffer->flags & EFX_TX_BUF_SKB) 561 + finished = true; 562 + efx_dequeue_buffer(tx_queue, buffer, &pkts_compl, &bytes_compl); 563 + 564 + ++tx_queue->read_count; 565 + read_ptr = tx_queue->read_count & tx_queue->ptr_mask; 566 + } 567 + 568 + tx_queue->pkts_compl += pkts_compl; 569 + tx_queue->bytes_compl += bytes_compl; 570 + 571 + EFX_WARN_ON_PARANOID(pkts_compl != 1); 572 + 573 + efx_xmit_done_check_empty(tx_queue); 574 + } 575 + 538 576 void efx_init_tx_queue_core_txq(struct efx_tx_queue *tx_queue) 539 577 { 540 578 struct efx_nic *efx = tx_queue->efx;
+16 -13
drivers/net/ethernet/sfc/tx_common.c
··· 80 80 tx_queue->xmit_more_available = false; 81 81 tx_queue->timestamping = (efx_ptp_use_mac_tx_timestamps(efx) && 82 82 tx_queue->channel == efx_ptp_channel(efx)); 83 - tx_queue->completed_desc_ptr = tx_queue->ptr_mask; 84 83 tx_queue->completed_timestamp_major = 0; 85 84 tx_queue->completed_timestamp_minor = 0; 86 85 ··· 209 210 while (read_ptr != stop_index) { 210 211 struct efx_tx_buffer *buffer = &tx_queue->buffer[read_ptr]; 211 212 212 - if (!(buffer->flags & EFX_TX_BUF_OPTION) && 213 - unlikely(buffer->len == 0)) { 213 + if (!efx_tx_buffer_in_use(buffer)) { 214 214 netif_err(efx, tx_err, efx->net_dev, 215 - "TX queue %d spurious TX completion id %x\n", 215 + "TX queue %d spurious TX completion id %d\n", 216 216 tx_queue->queue, read_ptr); 217 217 efx_schedule_reset(efx, RESET_TYPE_TX_SKIP); 218 218 return; ··· 221 223 222 224 ++tx_queue->read_count; 223 225 read_ptr = tx_queue->read_count & tx_queue->ptr_mask; 226 + } 227 + } 228 + 229 + void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue) 230 + { 231 + if ((int)(tx_queue->read_count - tx_queue->old_write_count) >= 0) { 232 + tx_queue->old_write_count = READ_ONCE(tx_queue->write_count); 233 + if (tx_queue->read_count == tx_queue->old_write_count) { 234 + /* Ensure that read_count is flushed. 
*/ 235 + smp_mb(); 236 + tx_queue->empty_read_count = 237 + tx_queue->read_count | EFX_EMPTY_COUNT_VALID; 238 + } 224 239 } 225 240 } 226 241 ··· 267 256 netif_tx_wake_queue(tx_queue->core_txq); 268 257 } 269 258 270 - /* Check whether the hardware queue is now empty */ 271 - if ((int)(tx_queue->read_count - tx_queue->old_write_count) >= 0) { 272 - tx_queue->old_write_count = READ_ONCE(tx_queue->write_count); 273 - if (tx_queue->read_count == tx_queue->old_write_count) { 274 - smp_mb(); 275 - tx_queue->empty_read_count = 276 - tx_queue->read_count | EFX_EMPTY_COUNT_VALID; 277 - } 278 - } 259 + efx_xmit_done_check_empty(tx_queue); 279 260 } 280 261 281 262 /* Remove buffers put into a tx_queue for the current packet.
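The queue-empty check factored out above into efx_xmit_done_check_empty() compares wrapping 32-bit counters, so the test is done on their signed difference rather than with `>=` directly. A sketch of just that comparison (the memory barrier and `empty_read_count` update are omitted):

```c
#include <assert.h>

/* Sketch of the wrap-safe counter comparison used by
 * efx_xmit_done_check_empty(): read_count has caught up with
 * old_write_count when their unsigned difference, reinterpreted as a
 * signed int, is non-negative. */
static int tx_queue_seen_all_writes(unsigned int read_count,
				    unsigned int old_write_count)
{
	return (int)(read_count - old_write_count) >= 0;
}
```

This stays correct across counter wrap-around: a read counter that has just wrapped past a large write counter still compares as "caught up".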
+6
drivers/net/ethernet/sfc/tx_common.h
··· 21 21 unsigned int *pkts_compl, 22 22 unsigned int *bytes_compl); 23 23 24 + static inline bool efx_tx_buffer_in_use(struct efx_tx_buffer *buffer) 25 + { 26 + return buffer->len || (buffer->flags & EFX_TX_BUF_OPTION); 27 + } 28 + 29 + void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue); 24 30 void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index); 25 31 26 32 void efx_enqueue_unwind(struct efx_tx_queue *tx_queue,
+2 -1
drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
··· 24 24 static void dwmac1000_core_init(struct mac_device_info *hw, 25 25 struct net_device *dev) 26 26 { 27 + struct stmmac_priv *priv = netdev_priv(dev); 27 28 void __iomem *ioaddr = hw->pcsr; 28 29 u32 value = readl(ioaddr + GMAC_CONTROL); 29 30 int mtu = dev->mtu; ··· 36 35 * Broadcom tags can look like invalid LLC/SNAP packets and cause the 37 36 * hardware to truncate packets on reception. 38 37 */ 39 - if (netdev_uses_dsa(dev)) 38 + if (netdev_uses_dsa(dev) || !priv->plat->enh_desc) 40 39 value &= ~GMAC_CONTROL_ACS; 41 40 42 41 if (mtu > 1500)
+11 -8
drivers/net/ipvlan/ipvlan_core.c
··· 293 293 } 294 294 if (dev) 295 295 dev_put(dev); 296 + cond_resched(); 296 297 } 297 298 } 298 299 ··· 499 498 struct ethhdr *ethh = eth_hdr(skb); 500 499 int ret = NET_XMIT_DROP; 501 500 502 - /* In this mode we dont care about multicast and broadcast traffic */ 503 - if (is_multicast_ether_addr(ethh->h_dest)) { 504 - pr_debug_ratelimited("Dropped {multi|broad}cast of type=[%x]\n", 505 - ntohs(skb->protocol)); 506 - kfree_skb(skb); 507 - goto out; 508 - } 509 - 510 501 /* The ipvlan is a pseudo-L2 device, so the packets that we receive 511 502 * will have L2; which need to discarded and processed further 512 503 * in the net-ns of the main-device. 513 504 */ 514 505 if (skb_mac_header_was_set(skb)) { 506 + /* In this mode we dont care about 507 + * multicast and broadcast traffic */ 508 + if (is_multicast_ether_addr(ethh->h_dest)) { 509 + pr_debug_ratelimited( 510 + "Dropped {multi|broad}cast of type=[%x]\n", 511 + ntohs(skb->protocol)); 512 + kfree_skb(skb); 513 + goto out; 514 + } 515 + 515 516 skb_pull(skb, sizeof(*ethh)); 516 517 skb->mac_header = (typeof(skb->mac_header))~0U; 517 518 skb_reset_network_header(skb);
+1 -4
drivers/net/ipvlan/ipvlan_main.c
··· 164 164 static int ipvlan_open(struct net_device *dev) 165 165 { 166 166 struct ipvl_dev *ipvlan = netdev_priv(dev); 167 - struct net_device *phy_dev = ipvlan->phy_dev; 168 167 struct ipvl_addr *addr; 169 168 170 169 if (ipvlan->port->mode == IPVLAN_MODE_L3 || ··· 177 178 ipvlan_ht_addr_add(ipvlan, addr); 178 179 rcu_read_unlock(); 179 180 180 - return dev_uc_add(phy_dev, phy_dev->dev_addr); 181 + return 0; 181 182 } 182 183 183 184 static int ipvlan_stop(struct net_device *dev) ··· 188 189 189 190 dev_uc_unsync(phy_dev, dev); 190 191 dev_mc_unsync(phy_dev, dev); 191 - 192 - dev_uc_del(phy_dev, phy_dev->dev_addr); 193 192 194 193 rcu_read_lock(); 195 194 list_for_each_entry_rcu(addr, &ipvlan->addrs, anode)
+20 -5
drivers/net/macsec.c
··· 424 424 return (struct macsec_eth_header *)skb_mac_header(skb); 425 425 } 426 426 427 + static sci_t dev_to_sci(struct net_device *dev, __be16 port) 428 + { 429 + return make_sci(dev->dev_addr, port); 430 + } 431 + 427 432 static void __macsec_pn_wrapped(struct macsec_secy *secy, 428 433 struct macsec_tx_sa *tx_sa) 429 434 { ··· 3273 3268 3274 3269 out: 3275 3270 ether_addr_copy(dev->dev_addr, addr->sa_data); 3271 + macsec->secy.sci = dev_to_sci(dev, MACSEC_PORT_ES); 3272 + 3273 + /* If h/w offloading is available, propagate to the device */ 3274 + if (macsec_is_offloaded(macsec)) { 3275 + const struct macsec_ops *ops; 3276 + struct macsec_context ctx; 3277 + 3278 + ops = macsec_get_ops(macsec, &ctx); 3279 + if (ops) { 3280 + ctx.secy = &macsec->secy; 3281 + macsec_offload(ops->mdo_upd_secy, &ctx); 3282 + } 3283 + } 3284 + 3276 3285 return 0; 3277 3286 } 3278 3287 ··· 3361 3342 3362 3343 static const struct nla_policy macsec_rtnl_policy[IFLA_MACSEC_MAX + 1] = { 3363 3344 [IFLA_MACSEC_SCI] = { .type = NLA_U64 }, 3345 + [IFLA_MACSEC_PORT] = { .type = NLA_U16 }, 3364 3346 [IFLA_MACSEC_ICV_LEN] = { .type = NLA_U8 }, 3365 3347 [IFLA_MACSEC_CIPHER_SUITE] = { .type = NLA_U64 }, 3366 3348 [IFLA_MACSEC_WINDOW] = { .type = NLA_U32 }, ··· 3610 3590 } 3611 3591 3612 3592 return false; 3613 - } 3614 - 3615 - static sci_t dev_to_sci(struct net_device *dev, __be16 port) 3616 - { 3617 - return make_sci(dev->dev_addr, port); 3618 3593 } 3619 3594 3620 3595 static int macsec_add_dev(struct net_device *dev, sci_t sci, u8 icv_len)
+2
drivers/net/macvlan.c
··· 334 334 if (src) 335 335 dev_put(src->dev); 336 336 consume_skb(skb); 337 + 338 + cond_resched(); 337 339 } 338 340 } 339 341
+1
drivers/net/phy/bcm63xx.c
··· 73 73 /* same phy as above, with just a different OUI */ 74 74 .phy_id = 0x002bdc00, 75 75 .phy_id_mask = 0xfffffc00, 76 + .name = "Broadcom BCM63XX (2)", 76 77 /* PHY_BASIC_FEATURES */ 77 78 .flags = PHY_IS_INTERNAL, 78 79 .config_init = bcm63xx_config_init,
+2 -1
drivers/net/phy/phy.c
··· 727 727 phy_trigger_machine(phydev); 728 728 } 729 729 730 - if (phy_clear_interrupt(phydev)) 730 + /* did_interrupt() may have cleared the interrupt already */ 731 + if (!phydev->drv->did_interrupt && phy_clear_interrupt(phydev)) 731 732 goto phy_err; 732 733 return IRQ_HANDLED; 733 734
+5 -1
drivers/net/phy/phy_device.c
··· 286 286 if (!mdio_bus_phy_may_suspend(phydev)) 287 287 return 0; 288 288 289 + phydev->suspended_by_mdio_bus = 1; 290 + 289 291 return phy_suspend(phydev); 290 292 } 291 293 ··· 296 294 struct phy_device *phydev = to_phy_device(dev); 297 295 int ret; 298 296 299 - if (!mdio_bus_phy_may_suspend(phydev)) 297 + if (!phydev->suspended_by_mdio_bus) 300 298 goto no_resume; 299 + 300 + phydev->suspended_by_mdio_bus = 0; 301 301 302 302 ret = phy_resume(phydev); 303 303 if (ret < 0)
+7 -1
drivers/net/phy/phylink.c
··· 761 761 config.interface = interface; 762 762 763 763 ret = phylink_validate(pl, supported, &config); 764 - if (ret) 764 + if (ret) { 765 + phylink_warn(pl, "validation of %s with support %*pb and advertisement %*pb failed: %d\n", 766 + phy_modes(config.interface), 767 + __ETHTOOL_LINK_MODE_MASK_NBITS, phy->supported, 768 + __ETHTOOL_LINK_MODE_MASK_NBITS, config.advertising, 769 + ret); 765 770 return ret; 771 + } 766 772 767 773 phy->phylink = pl; 768 774 phy->phy_link_change = phylink_phy_change;
+10 -4
drivers/net/slip/slhc.c
··· 232 232 struct cstate *cs = lcs->next; 233 233 unsigned long deltaS, deltaA; 234 234 short changes = 0; 235 - int hlen; 235 + int nlen, hlen; 236 236 unsigned char new_seq[16]; 237 237 unsigned char *cp = new_seq; 238 238 struct iphdr *ip; ··· 248 248 return isize; 249 249 250 250 ip = (struct iphdr *) icp; 251 + if (ip->version != 4 || ip->ihl < 5) 252 + return isize; 251 253 252 254 /* Bail if this packet isn't TCP, or is an IP fragment */ 253 255 if (ip->protocol != IPPROTO_TCP || (ntohs(ip->frag_off) & 0x3fff)) { ··· 260 258 comp->sls_o_tcp++; 261 259 return isize; 262 260 } 263 - /* Extract TCP header */ 261 + nlen = ip->ihl * 4; 262 + if (isize < nlen + sizeof(*th)) 263 + return isize; 264 264 265 - th = (struct tcphdr *)(((unsigned char *)ip) + ip->ihl*4); 266 - hlen = ip->ihl*4 + th->doff*4; 265 + th = (struct tcphdr *)(icp + nlen); 266 + if (th->doff < sizeof(struct tcphdr) / 4) 267 + return isize; 268 + hlen = nlen + th->doff * 4; 267 269 268 270 /* Bail if the TCP packet isn't `compressible' (i.e., ACK isn't set or 269 271 * some other control bit is set). Also uncompressible if
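The slhc fix adds four sanity checks before the header arithmetic: IP version must be 4, `ihl` at least 5 words, the packet must be long enough to contain a minimal TCP header after the IP header, and `doff` must be at least 5 words. A standalone sketch of the same validation on a raw byte buffer (pure illustration, not the kernel code):

```python
TCP_MIN_HDR = 20  # sizeof(struct tcphdr)

def tcp_header_span(pkt: bytes):
    """Return (ip_hdr_len, total_hdr_len) for an IPv4+TCP packet, or
    None if it fails the same sanity checks the slhc fix adds."""
    if len(pkt) < 20:
        return None
    version, ihl = pkt[0] >> 4, pkt[0] & 0x0F
    if version != 4 or ihl < 5:            # ip->version != 4 || ip->ihl < 5
        return None
    nlen = ihl * 4
    if len(pkt) < nlen + TCP_MIN_HDR:      # isize < nlen + sizeof(*th)
        return None
    doff = pkt[nlen + 12] >> 4
    if doff < 5:                           # th->doff < sizeof(struct tcphdr) / 4
        return None
    return nlen, nlen + doff * 4           # hlen = nlen + th->doff * 4
```

Without these checks a crafted packet with `ihl`/`doff` larger than the actual buffer would make `hlen` point past the end of the data, which is exactly the out-of-bounds access the patch closes.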
+2
drivers/net/team/team.c
··· 2240 2240 [TEAM_ATTR_OPTION_CHANGED] = { .type = NLA_FLAG }, 2241 2241 [TEAM_ATTR_OPTION_TYPE] = { .type = NLA_U8 }, 2242 2242 [TEAM_ATTR_OPTION_DATA] = { .type = NLA_BINARY }, 2243 + [TEAM_ATTR_OPTION_PORT_IFINDEX] = { .type = NLA_U32 }, 2244 + [TEAM_ATTR_OPTION_ARRAY_INDEX] = { .type = NLA_U32 }, 2243 2245 }; 2244 2246 2245 2247 static int team_nl_cmd_noop(struct sk_buff *skb, struct genl_info *info)
+8
drivers/net/usb/r8152.c
··· 3221 3221 } 3222 3222 3223 3223 msleep(20); 3224 + if (test_bit(RTL8152_UNPLUG, &tp->flags)) 3225 + break; 3224 3226 } 3225 3227 3226 3228 return data; ··· 5404 5402 if (ocp_read_word(tp, MCU_TYPE_PLA, PLA_BOOT_CTRL) & 5405 5403 AUTOLOAD_DONE) 5406 5404 break; 5405 + 5407 5406 msleep(20); 5407 + if (test_bit(RTL8152_UNPLUG, &tp->flags)) 5408 + break; 5408 5409 } 5409 5410 5410 5411 data = r8153_phy_status(tp, 0); ··· 5544 5539 if (ocp_read_word(tp, MCU_TYPE_PLA, PLA_BOOT_CTRL) & 5545 5540 AUTOLOAD_DONE) 5546 5541 break; 5542 + 5547 5543 msleep(20); 5544 + if (test_bit(RTL8152_UNPLUG, &tp->flags)) 5545 + break; 5548 5546 } 5549 5547 5550 5548 data = r8153_phy_status(tp, 0);
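The r8152 change adds an early exit to its polling loops: instead of sleeping through every remaining iteration after the device is unplugged, each pass re-checks the unplug flag. The pattern, sketched generically (names and the try count are illustrative):

```python
def poll_until(ready, unplugged, max_tries=20):
    """Poll ready() up to max_tries times, bailing out as soon as
    unplugged() reports the device is gone -- the shape of the
    AUTOLOAD_DONE wait loops after this fix."""
    for _ in range(max_tries):
        if ready():
            return True
        # msleep(20) would go here in the driver
        if unplugged():   # test_bit(RTL8152_UNPLUG, &tp->flags)
            return False
    return False
```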
+1 -1
drivers/net/veth.c
··· 328 328 rcu_read_lock(); 329 329 peer = rcu_dereference(priv->peer); 330 330 if (peer) { 331 - tot->rx_dropped += veth_stats_tx(peer, &packets, &bytes); 331 + veth_stats_tx(peer, &packets, &bytes); 332 332 tot->rx_bytes += bytes; 333 333 tot->rx_packets += packets; 334 334
+2 -1
drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
··· 308 308 } 309 309 310 310 /* PHY_SKU section is mandatory in B0 */ 311 - if (!mvm->nvm_sections[NVM_SECTION_TYPE_PHY_SKU].data) { 311 + if (mvm->trans->cfg->nvm_type == IWL_NVM_EXT && 312 + !mvm->nvm_sections[NVM_SECTION_TYPE_PHY_SKU].data) { 312 313 IWL_ERR(mvm, 313 314 "Can't parse phy_sku in B0, empty sections\n"); 314 315 return NULL;
+6 -3
drivers/net/wireless/mediatek/mt76/dma.c
··· 447 447 struct page *page = virt_to_head_page(data); 448 448 int offset = data - page_address(page); 449 449 struct sk_buff *skb = q->rx_head; 450 + struct skb_shared_info *shinfo = skb_shinfo(skb); 450 451 451 - offset += q->buf_offset; 452 - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, offset, len, 453 - q->buf_size); 452 + if (shinfo->nr_frags < ARRAY_SIZE(shinfo->frags)) { 453 + offset += q->buf_offset; 454 + skb_add_rx_frag(skb, shinfo->nr_frags, page, offset, len, 455 + q->buf_size); 456 + } 454 457 455 458 if (more) 456 459 return;
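The mt76 hunk is a capacity guard: only append a fragment while `nr_frags` is below the size of the `frags` array, silently dropping the excess rather than writing past the end. The same pattern in miniature (the capacity value here is illustrative, not the kernel's configured `MAX_SKB_FRAGS`):

```python
MAX_FRAGS = 17  # illustrative capacity; the real limit depends on kernel config

def add_rx_frag(frags: list, frag) -> bool:
    """Append only while there is room, mirroring the check against
    ARRAY_SIZE(shinfo->frags) before skb_add_rx_frag()."""
    if len(frags) < MAX_FRAGS:
        frags.append(frag)
        return True
    return False
```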
+5 -3
drivers/nvme/host/rdma.c
··· 850 850 if (new) 851 851 blk_mq_free_tag_set(ctrl->ctrl.admin_tagset); 852 852 out_free_async_qe: 853 - nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe, 854 - sizeof(struct nvme_command), DMA_TO_DEVICE); 855 - ctrl->async_event_sqe.data = NULL; 853 + if (ctrl->async_event_sqe.data) { 854 + nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe, 855 + sizeof(struct nvme_command), DMA_TO_DEVICE); 856 + ctrl->async_event_sqe.data = NULL; 857 + } 856 858 out_free_queue: 857 859 nvme_rdma_free_queue(&ctrl->queues[0]); 858 860 return error;
+9 -3
drivers/nvme/target/tcp.c
··· 515 515 return 1; 516 516 } 517 517 518 - static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd) 518 + static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd, bool last_in_batch) 519 519 { 520 520 struct nvmet_tcp_queue *queue = cmd->queue; 521 521 int ret; ··· 523 523 while (cmd->cur_sg) { 524 524 struct page *page = sg_page(cmd->cur_sg); 525 525 u32 left = cmd->cur_sg->length - cmd->offset; 526 + int flags = MSG_DONTWAIT; 527 + 528 + if ((!last_in_batch && cmd->queue->send_list_len) || 529 + cmd->wbytes_done + left < cmd->req.transfer_len || 530 + queue->data_digest || !queue->nvme_sq.sqhd_disabled) 531 + flags |= MSG_MORE; 526 532 527 533 ret = kernel_sendpage(cmd->queue->sock, page, cmd->offset, 528 - left, MSG_DONTWAIT | MSG_MORE); 534 + left, flags); 529 535 if (ret <= 0) 530 536 return ret; 531 537 ··· 666 660 } 667 661 668 662 if (cmd->state == NVMET_TCP_SEND_DATA) { 669 - ret = nvmet_try_send_data(cmd); 663 + ret = nvmet_try_send_data(cmd, last_in_batch); 670 664 if (ret <= 0) 671 665 goto done_send; 672 666 }
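The nvmet-tcp change builds the `sendpage` flags per iteration: `MSG_MORE` stays set whenever more sends are queued, more of this transfer remains, a data digest will follow, or a response capsule is still due — only the truly final send omits it and flushes the socket. The decision, isolated as a pure function (flag values are the Linux ones; argument names are illustrative):

```python
MSG_DONTWAIT, MSG_MORE = 0x40, 0x8000  # Linux socket flag values

def send_flags(last_in_batch, send_list_len, bytes_done_after_this,
               transfer_len, data_digest, sqhd_disabled):
    """Mirror the MSG_MORE decision in nvmet_try_send_data(): cork the
    socket unless this is the last send of the last queued command and
    nothing (digest or response) follows."""
    flags = MSG_DONTWAIT
    if ((not last_in_batch and send_list_len) or
            bytes_done_after_this < transfer_len or
            data_digest or not sqhd_disabled):
        flags |= MSG_MORE
    return flags
```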
+1
drivers/of/of_mdio.c
··· 306 306 rc = of_mdiobus_register_phy(mdio, child, addr); 307 307 if (rc && rc != -ENODEV) 308 308 goto unregister; 309 + break; 309 310 } 310 311 } 311 312 }
+13
drivers/pinctrl/cirrus/pinctrl-madera-core.c
··· 1073 1073 return ret; 1074 1074 } 1075 1075 1076 + platform_set_drvdata(pdev, priv); 1077 + 1076 1078 dev_dbg(priv->dev, "pinctrl probed ok\n"); 1079 + 1080 + return 0; 1081 + } 1082 + 1083 + static int madera_pin_remove(struct platform_device *pdev) 1084 + { 1085 + struct madera_pin_private *priv = platform_get_drvdata(pdev); 1086 + 1087 + if (priv->madera->pdata.gpio_configs) 1088 + pinctrl_unregister_mappings(priv->madera->pdata.gpio_configs); 1077 1089 1078 1090 return 0; 1079 1091 } 1080 1092 1081 1093 static struct platform_driver madera_pin_driver = { 1082 1094 .probe = madera_pin_probe, 1095 + .remove = madera_pin_remove, 1083 1096 .driver = { 1084 1097 .name = "madera-pinctrl", 1085 1098 },
-1
drivers/pinctrl/core.c
··· 2021 2021 return PTR_ERR(pctldev->p); 2022 2022 } 2023 2023 2024 - kref_get(&pctldev->p->users); 2025 2024 pctldev->hog_default = 2026 2025 pinctrl_lookup_state(pctldev->p, PINCTRL_STATE_DEFAULT); 2027 2026 if (IS_ERR(pctldev->hog_default)) {
+2 -2
drivers/pinctrl/freescale/pinctrl-scu.c
··· 23 23 struct imx_sc_rpc_msg hdr; 24 24 u32 val; 25 25 u16 pad; 26 - } __packed; 26 + } __packed __aligned(4); 27 27 28 28 struct imx_sc_msg_req_pad_get { 29 29 struct imx_sc_rpc_msg hdr; 30 30 u16 pad; 31 - } __packed; 31 + } __packed __aligned(4); 32 32 33 33 struct imx_sc_msg_resp_pad_get { 34 34 struct imx_sc_rpc_msg hdr;
+2 -2
drivers/pinctrl/meson/pinctrl-meson-gxl.c
··· 147 147 static const unsigned int sdio_d1_pins[] = { GPIOX_1 }; 148 148 static const unsigned int sdio_d2_pins[] = { GPIOX_2 }; 149 149 static const unsigned int sdio_d3_pins[] = { GPIOX_3 }; 150 - static const unsigned int sdio_cmd_pins[] = { GPIOX_4 }; 151 - static const unsigned int sdio_clk_pins[] = { GPIOX_5 }; 150 + static const unsigned int sdio_clk_pins[] = { GPIOX_4 }; 151 + static const unsigned int sdio_cmd_pins[] = { GPIOX_5 }; 152 152 static const unsigned int sdio_irq_pins[] = { GPIOX_7 }; 153 153 154 154 static const unsigned int nand_ce0_pins[] = { BOOT_8 };
+1 -1
drivers/pinctrl/pinctrl-falcon.c
··· 451 451 falcon_info.clk[*bank] = clk_get(&ppdev->dev, NULL); 452 452 if (IS_ERR(falcon_info.clk[*bank])) { 453 453 dev_err(&ppdev->dev, "failed to get clock\n"); 454 - of_node_put(np) 454 + of_node_put(np); 455 455 return PTR_ERR(falcon_info.clk[*bank]); 456 456 } 457 457 falcon_info.membase[*bank] = devm_ioremap_resource(&pdev->dev,
+1 -2
drivers/pinctrl/qcom/pinctrl-msm.c
··· 1104 1104 pctrl->irq_chip.irq_mask = msm_gpio_irq_mask; 1105 1105 pctrl->irq_chip.irq_unmask = msm_gpio_irq_unmask; 1106 1106 pctrl->irq_chip.irq_ack = msm_gpio_irq_ack; 1107 - pctrl->irq_chip.irq_eoi = irq_chip_eoi_parent; 1108 1107 pctrl->irq_chip.irq_set_type = msm_gpio_irq_set_type; 1109 1108 pctrl->irq_chip.irq_set_wake = msm_gpio_irq_set_wake; 1110 1109 pctrl->irq_chip.irq_request_resources = msm_gpio_irq_reqres; ··· 1117 1118 if (!chip->irq.parent_domain) 1118 1119 return -EPROBE_DEFER; 1119 1120 chip->irq.child_to_parent_hwirq = msm_gpio_wakeirq; 1120 - 1121 + pctrl->irq_chip.irq_eoi = irq_chip_eoi_parent; 1121 1122 /* 1122 1123 * Let's skip handling the GPIOs, if the parent irqchip 1123 1124 * is handling the direct connect IRQ of the GPIO.
+1 -1
drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
··· 794 794 girq->fwnode = of_node_to_fwnode(pctrl->dev->of_node); 795 795 girq->parent_domain = parent_domain; 796 796 girq->child_to_parent_hwirq = pm8xxx_child_to_parent_hwirq; 797 - girq->populate_parent_alloc_arg = gpiochip_populate_parent_fwspec_fourcell; 797 + girq->populate_parent_alloc_arg = gpiochip_populate_parent_fwspec_twocell; 798 798 girq->child_offset_to_irq = pm8xxx_child_offset_to_irq; 799 799 girq->child_irq_domain_ops.translate = pm8xxx_domain_translate; 800 800
+1
drivers/rtc/Kconfig
··· 327 327 config RTC_DRV_MAX8907 328 328 tristate "Maxim MAX8907" 329 329 depends on MFD_MAX8907 || COMPILE_TEST 330 + select REGMAP_IRQ 330 331 help 331 332 If you say yes here you will get support for the 332 333 RTC of Maxim MAX8907 PMIC.
+24 -3
drivers/s390/block/dasd.c
··· 178 178 (unsigned long) block); 179 179 INIT_LIST_HEAD(&block->ccw_queue); 180 180 spin_lock_init(&block->queue_lock); 181 + INIT_LIST_HEAD(&block->format_list); 182 + spin_lock_init(&block->format_lock); 181 183 timer_setup(&block->timer, dasd_block_timeout, 0); 182 184 spin_lock_init(&block->profile.lock); 183 185 ··· 1781 1779 1782 1780 if (dasd_ese_needs_format(cqr->block, irb)) { 1783 1781 if (rq_data_dir((struct request *)cqr->callback_data) == READ) { 1784 - device->discipline->ese_read(cqr); 1782 + device->discipline->ese_read(cqr, irb); 1785 1783 cqr->status = DASD_CQR_SUCCESS; 1786 1784 cqr->stopclk = now; 1787 1785 dasd_device_clear_timer(device); 1788 1786 dasd_schedule_device_bh(device); 1789 1787 return; 1790 1788 } 1791 - fcqr = device->discipline->ese_format(device, cqr); 1789 + fcqr = device->discipline->ese_format(device, cqr, irb); 1792 1790 if (IS_ERR(fcqr)) { 1791 + if (PTR_ERR(fcqr) == -EINVAL) { 1792 + cqr->status = DASD_CQR_ERROR; 1793 + return; 1794 + } 1793 1795 /* 1794 1796 * If we can't format now, let the request go 1795 1797 * one extra round. Maybe we can format later. 
1796 1798 */ 1797 1799 cqr->status = DASD_CQR_QUEUED; 1800 + dasd_schedule_device_bh(device); 1801 + return; 1798 1802 } else { 1799 1803 fcqr->status = DASD_CQR_QUEUED; 1800 1804 cqr->status = DASD_CQR_QUEUED; ··· 2756 2748 { 2757 2749 struct request *req; 2758 2750 blk_status_t error = BLK_STS_OK; 2751 + unsigned int proc_bytes; 2759 2752 int status; 2760 2753 2761 2754 req = (struct request *) cqr->callback_data; 2762 2755 dasd_profile_end(cqr->block, cqr, req); 2763 2756 2757 + proc_bytes = cqr->proc_bytes; 2764 2758 status = cqr->block->base->discipline->free_cp(cqr, req); 2765 2759 if (status < 0) 2766 2760 error = errno_to_blk_status(status); ··· 2793 2783 blk_mq_end_request(req, error); 2794 2784 blk_mq_run_hw_queues(req->q, true); 2795 2785 } else { 2796 - blk_mq_complete_request(req); 2786 + /* 2787 + * Partial completed requests can happen with ESE devices. 2788 + * During read we might have gotten a NRF error and have to 2789 + * complete a request partially. 2790 + */ 2791 + if (proc_bytes) { 2792 + blk_update_request(req, BLK_STS_OK, 2793 + blk_rq_bytes(req) - proc_bytes); 2794 + blk_mq_requeue_request(req, true); 2795 + } else { 2796 + blk_mq_complete_request(req); 2797 + } 2797 2798 } 2798 2799 } 2799 2800
+156 -7
drivers/s390/block/dasd_eckd.c
··· 207 207 geo->head |= head; 208 208 } 209 209 210 + /* 211 + * calculate failing track from sense data depending if 212 + * it is an EAV device or not 213 + */ 214 + static int dasd_eckd_track_from_irb(struct irb *irb, struct dasd_device *device, 215 + sector_t *track) 216 + { 217 + struct dasd_eckd_private *private = device->private; 218 + u8 *sense = NULL; 219 + u32 cyl; 220 + u8 head; 221 + 222 + sense = dasd_get_sense(irb); 223 + if (!sense) { 224 + DBF_DEV_EVENT(DBF_WARNING, device, "%s", 225 + "ESE error no sense data\n"); 226 + return -EINVAL; 227 + } 228 + if (!(sense[27] & DASD_SENSE_BIT_2)) { 229 + DBF_DEV_EVENT(DBF_WARNING, device, "%s", 230 + "ESE error no valid track data\n"); 231 + return -EINVAL; 232 + } 233 + 234 + if (sense[27] & DASD_SENSE_BIT_3) { 235 + /* enhanced addressing */ 236 + cyl = sense[30] << 20; 237 + cyl |= (sense[31] & 0xF0) << 12; 238 + cyl |= sense[28] << 8; 239 + cyl |= sense[29]; 240 + } else { 241 + cyl = sense[29] << 8; 242 + cyl |= sense[30]; 243 + } 244 + head = sense[31] & 0x0F; 245 + *track = cyl * private->rdc_data.trk_per_cyl + head; 246 + return 0; 247 + } 248 + 210 249 static int set_timestamp(struct ccw1 *ccw, struct DE_eckd_data *data, 211 250 struct dasd_device *device) 212 251 { ··· 3025 2986 0, NULL); 3026 2987 } 3027 2988 2989 + static bool test_and_set_format_track(struct dasd_format_entry *to_format, 2990 + struct dasd_block *block) 2991 + { 2992 + struct dasd_format_entry *format; 2993 + unsigned long flags; 2994 + bool rc = false; 2995 + 2996 + spin_lock_irqsave(&block->format_lock, flags); 2997 + list_for_each_entry(format, &block->format_list, list) { 2998 + if (format->track == to_format->track) { 2999 + rc = true; 3000 + goto out; 3001 + } 3002 + } 3003 + list_add_tail(&to_format->list, &block->format_list); 3004 + 3005 + out: 3006 + spin_unlock_irqrestore(&block->format_lock, flags); 3007 + return rc; 3008 + } 3009 + 3010 + static void clear_format_track(struct dasd_format_entry *format, 3011 + struct 
dasd_block *block) 3012 + { 3013 + unsigned long flags; 3014 + 3015 + spin_lock_irqsave(&block->format_lock, flags); 3016 + list_del_init(&format->list); 3017 + spin_unlock_irqrestore(&block->format_lock, flags); 3018 + } 3019 + 3028 3020 /* 3029 3021 * Callback function to free ESE format requests. 3030 3022 */ ··· 3063 2993 { 3064 2994 struct dasd_device *device = cqr->startdev; 3065 2995 struct dasd_eckd_private *private = device->private; 2996 + struct dasd_format_entry *format = data; 3066 2997 2998 + clear_format_track(format, cqr->basedev->block); 3067 2999 private->count--; 3068 3000 dasd_ffree_request(cqr, device); 3069 3001 } 3070 3002 3071 3003 static struct dasd_ccw_req * 3072 - dasd_eckd_ese_format(struct dasd_device *startdev, struct dasd_ccw_req *cqr) 3004 + dasd_eckd_ese_format(struct dasd_device *startdev, struct dasd_ccw_req *cqr, 3005 + struct irb *irb) 3073 3006 { 3074 3007 struct dasd_eckd_private *private; 3008 + struct dasd_format_entry *format; 3075 3009 struct format_data_t fdata; 3076 3010 unsigned int recs_per_trk; 3077 3011 struct dasd_ccw_req *fcqr; ··· 3085 3011 struct request *req; 3086 3012 sector_t first_trk; 3087 3013 sector_t last_trk; 3014 + sector_t curr_trk; 3088 3015 int rc; 3089 3016 3090 3017 req = cqr->callback_data; 3091 - base = cqr->block->base; 3018 + block = cqr->block; 3019 + base = block->base; 3092 3020 private = base->private; 3093 - block = base->block; 3094 3021 blksize = block->bp_block; 3095 3022 recs_per_trk = recs_per_track(&private->rdc_data, 0, blksize); 3023 + format = &startdev->format_entry; 3096 3024 3097 3025 first_trk = blk_rq_pos(req) >> block->s2b_shift; 3098 3026 sector_div(first_trk, recs_per_trk); 3099 3027 last_trk = 3100 3028 (blk_rq_pos(req) + blk_rq_sectors(req) - 1) >> block->s2b_shift; 3101 3029 sector_div(last_trk, recs_per_trk); 3030 + rc = dasd_eckd_track_from_irb(irb, base, &curr_trk); 3031 + if (rc) 3032 + return ERR_PTR(rc); 3102 3033 3103 - fdata.start_unit = first_trk; 3104 - 
fdata.stop_unit = last_trk; 3034 + if (curr_trk < first_trk || curr_trk > last_trk) { 3035 + DBF_DEV_EVENT(DBF_WARNING, startdev, 3036 + "ESE error track %llu not within range %llu - %llu\n", 3037 + curr_trk, first_trk, last_trk); 3038 + return ERR_PTR(-EINVAL); 3039 + } 3040 + format->track = curr_trk; 3041 + /* test if track is already in formatting by another thread */ 3042 + if (test_and_set_format_track(format, block)) 3043 + return ERR_PTR(-EEXIST); 3044 + 3045 + fdata.start_unit = curr_trk; 3046 + fdata.stop_unit = curr_trk; 3105 3047 fdata.blksize = blksize; 3106 3048 fdata.intensity = private->uses_cdl ? DASD_FMT_INT_COMPAT : 0; 3107 3049 ··· 3134 3044 return fcqr; 3135 3045 3136 3046 fcqr->callback = dasd_eckd_ese_format_cb; 3047 + fcqr->callback_data = (void *) format; 3137 3048 3138 3049 return fcqr; 3139 3050 } ··· 3142 3051 /* 3143 3052 * When data is read from an unformatted area of an ESE volume, this function 3144 3053 * returns zeroed data and thereby mimics a read of zero data. 3054 + * 3055 + * The first unformatted track is the one that got the NRF error, the address is 3056 + * encoded in the sense data. 3057 + * 3058 + * All tracks before have returned valid data and should not be touched. 3059 + * All tracks after the unformatted track might be formatted or not. This is 3060 + * currently not known, remember the processed data and return the remainder of 3061 + * the request to the blocklayer in __dasd_cleanup_cqr(). 
3145 3062 */ 3146 - static void dasd_eckd_ese_read(struct dasd_ccw_req *cqr) 3063 + static int dasd_eckd_ese_read(struct dasd_ccw_req *cqr, struct irb *irb) 3147 3064 { 3065 + struct dasd_eckd_private *private; 3066 + sector_t first_trk, last_trk; 3067 + sector_t first_blk, last_blk; 3148 3068 unsigned int blksize, off; 3069 + unsigned int recs_per_trk; 3149 3070 struct dasd_device *base; 3150 3071 struct req_iterator iter; 3072 + struct dasd_block *block; 3073 + unsigned int skip_block; 3074 + unsigned int blk_count; 3151 3075 struct request *req; 3152 3076 struct bio_vec bv; 3077 + sector_t curr_trk; 3078 + sector_t end_blk; 3153 3079 char *dst; 3080 + int rc; 3154 3081 3155 3082 req = (struct request *) cqr->callback_data; 3156 3083 base = cqr->block->base; 3157 3084 blksize = base->block->bp_block; 3085 + block = cqr->block; 3086 + private = base->private; 3087 + skip_block = 0; 3088 + blk_count = 0; 3089 + 3090 + recs_per_trk = recs_per_track(&private->rdc_data, 0, blksize); 3091 + first_trk = first_blk = blk_rq_pos(req) >> block->s2b_shift; 3092 + sector_div(first_trk, recs_per_trk); 3093 + last_trk = last_blk = 3094 + (blk_rq_pos(req) + blk_rq_sectors(req) - 1) >> block->s2b_shift; 3095 + sector_div(last_trk, recs_per_trk); 3096 + rc = dasd_eckd_track_from_irb(irb, base, &curr_trk); 3097 + if (rc) 3098 + return rc; 3099 + 3100 + /* sanity check if the current track from sense data is valid */ 3101 + if (curr_trk < first_trk || curr_trk > last_trk) { 3102 + DBF_DEV_EVENT(DBF_WARNING, base, 3103 + "ESE error track %llu not within range %llu - %llu\n", 3104 + curr_trk, first_trk, last_trk); 3105 + return -EINVAL; 3106 + } 3107 + 3108 + /* 3109 + * if not the first track got the NRF error we have to skip over valid 3110 + * blocks 3111 + */ 3112 + if (curr_trk != first_trk) 3113 + skip_block = curr_trk * recs_per_trk - first_blk; 3114 + 3115 + /* we have no information beyond the current track */ 3116 + end_blk = (curr_trk + 1) * recs_per_trk; 3158 3117 3159 
3118 rq_for_each_segment(bv, req, iter) { 3160 3119 dst = page_address(bv.bv_page) + bv.bv_offset; 3161 3120 for (off = 0; off < bv.bv_len; off += blksize) { 3162 - if (dst && rq_data_dir(req) == READ) { 3121 + if (first_blk + blk_count >= end_blk) { 3122 + cqr->proc_bytes = blk_count * blksize; 3123 + return 0; 3124 + } 3125 + if (dst && !skip_block) { 3163 3126 dst += off; 3164 3127 memset(dst, 0, blksize); 3128 + } else { 3129 + skip_block--; 3165 3130 } 3131 + blk_count++; 3166 3132 } 3167 3133 } 3134 + return 0; 3168 3135 } 3169 3136 3170 3137 /*
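The failing-track computation that `dasd_eckd_track_from_irb()` introduces splits the cylinder number across sense bytes 28–31, with a wider layout when the enhanced-addressing bit is set. The bit-twiddling can be reproduced outside the kernel (the sense-bit constants follow the usual MSB-first numbering but are stated here as an assumption, not quoted from the DASD headers):

```python
DASD_SENSE_BIT_2 = 0x20   # assumed values: bits numbered from the MSB,
DASD_SENSE_BIT_3 = 0x10   # as in the kernel's DASD sense definitions

def track_from_sense(sense: bytes, trk_per_cyl: int):
    """Recompute the failing track as in dasd_eckd_track_from_irb():
    None when no valid track data is present, otherwise
    cylinder * tracks-per-cylinder + head."""
    if not sense or not (sense[27] & DASD_SENSE_BIT_2):
        return None                        # no valid track data
    if sense[27] & DASD_SENSE_BIT_3:       # enhanced addressing (EAV)
        cyl = ((sense[30] << 20) | ((sense[31] & 0xF0) << 12)
               | (sense[28] << 8) | sense[29])
    else:
        cyl = (sense[29] << 8) | sense[30]
    head = sense[31] & 0x0F
    return cyl * trk_per_cyl + head
```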
+13 -2
drivers/s390/block/dasd_int.h
··· 187 187 188 188 void (*callback)(struct dasd_ccw_req *, void *data); 189 189 void *callback_data; 190 + unsigned int proc_bytes; /* bytes for partial completion */ 190 191 }; 191 192 192 193 /* ··· 388 387 int (*ext_pool_warn_thrshld)(struct dasd_device *); 389 388 int (*ext_pool_oos)(struct dasd_device *); 390 389 int (*ext_pool_exhaust)(struct dasd_device *, struct dasd_ccw_req *); 391 - struct dasd_ccw_req *(*ese_format)(struct dasd_device *, struct dasd_ccw_req *); 392 - void (*ese_read)(struct dasd_ccw_req *); 390 + struct dasd_ccw_req *(*ese_format)(struct dasd_device *, 391 + struct dasd_ccw_req *, struct irb *); 392 + int (*ese_read)(struct dasd_ccw_req *, struct irb *); 393 393 }; 394 394 395 395 extern struct dasd_discipline *dasd_diag_discipline_pointer; ··· 476 474 spinlock_t lock; 477 475 }; 478 476 477 + struct dasd_format_entry { 478 + struct list_head list; 479 + sector_t track; 480 + }; 481 + 479 482 struct dasd_device { 480 483 /* Block device stuff. */ 481 484 struct dasd_block *block; ··· 546 539 struct dentry *debugfs_dentry; 547 540 struct dentry *hosts_dentry; 548 541 struct dasd_profile profile; 542 + struct dasd_format_entry format_entry; 549 543 }; 550 544 551 545 struct dasd_block { ··· 572 564 573 565 struct dentry *debugfs_dentry; 574 566 struct dasd_profile profile; 567 + 568 + struct list_head format_list; 569 + spinlock_t format_lock; 575 570 }; 576 571 577 572 struct dasd_attention_data {
+2 -2
drivers/s390/net/qeth_core.h
··· 369 369 struct qeth_buffer_pool_entry { 370 370 struct list_head list; 371 371 struct list_head init_list; 372 - void *elements[QDIO_MAX_ELEMENTS_PER_BUFFER]; 372 + struct page *elements[QDIO_MAX_ELEMENTS_PER_BUFFER]; 373 373 }; 374 374 375 375 struct qeth_qdio_buffer_pool { ··· 983 983 extern const struct device_type qeth_generic_devtype; 984 984 985 985 const char *qeth_get_cardname_short(struct qeth_card *); 986 - int qeth_realloc_buffer_pool(struct qeth_card *, int); 986 + int qeth_resize_buffer_pool(struct qeth_card *card, unsigned int count); 987 987 int qeth_core_load_discipline(struct qeth_card *, enum qeth_discipline_id); 988 988 void qeth_core_free_discipline(struct qeth_card *); 989 989
+118 -58
drivers/s390/net/qeth_core_main.c
··· 65 65 static void qeth_issue_next_read_cb(struct qeth_card *card, 66 66 struct qeth_cmd_buffer *iob, 67 67 unsigned int data_length); 68 - static void qeth_free_buffer_pool(struct qeth_card *); 69 68 static int qeth_qdio_establish(struct qeth_card *); 70 69 static void qeth_free_qdio_queues(struct qeth_card *card); 71 70 static void qeth_notify_skbs(struct qeth_qdio_out_q *queue, ··· 211 212 } 212 213 EXPORT_SYMBOL_GPL(qeth_clear_working_pool_list); 213 214 215 + static void qeth_free_pool_entry(struct qeth_buffer_pool_entry *entry) 216 + { 217 + unsigned int i; 218 + 219 + for (i = 0; i < ARRAY_SIZE(entry->elements); i++) { 220 + if (entry->elements[i]) 221 + __free_page(entry->elements[i]); 222 + } 223 + 224 + kfree(entry); 225 + } 226 + 227 + static void qeth_free_buffer_pool(struct qeth_card *card) 228 + { 229 + struct qeth_buffer_pool_entry *entry, *tmp; 230 + 231 + list_for_each_entry_safe(entry, tmp, &card->qdio.init_pool.entry_list, 232 + init_list) { 233 + list_del(&entry->init_list); 234 + qeth_free_pool_entry(entry); 235 + } 236 + } 237 + 238 + static struct qeth_buffer_pool_entry *qeth_alloc_pool_entry(unsigned int pages) 239 + { 240 + struct qeth_buffer_pool_entry *entry; 241 + unsigned int i; 242 + 243 + entry = kzalloc(sizeof(*entry), GFP_KERNEL); 244 + if (!entry) 245 + return NULL; 246 + 247 + for (i = 0; i < pages; i++) { 248 + entry->elements[i] = alloc_page(GFP_KERNEL); 249 + 250 + if (!entry->elements[i]) { 251 + qeth_free_pool_entry(entry); 252 + return NULL; 253 + } 254 + } 255 + 256 + return entry; 257 + } 258 + 214 259 static int qeth_alloc_buffer_pool(struct qeth_card *card) 215 260 { 216 - struct qeth_buffer_pool_entry *pool_entry; 217 - void *ptr; 218 - int i, j; 261 + unsigned int buf_elements = QETH_MAX_BUFFER_ELEMENTS(card); 262 + unsigned int i; 219 263 220 264 QETH_CARD_TEXT(card, 5, "alocpool"); 221 265 for (i = 0; i < card->qdio.init_pool.buf_count; ++i) { 222 - pool_entry = kzalloc(sizeof(*pool_entry), GFP_KERNEL); 223 - if 
(!pool_entry) { 266 + struct qeth_buffer_pool_entry *entry; 267 + 268 + entry = qeth_alloc_pool_entry(buf_elements); 269 + if (!entry) { 224 270 qeth_free_buffer_pool(card); 225 271 return -ENOMEM; 226 272 } 227 - for (j = 0; j < QETH_MAX_BUFFER_ELEMENTS(card); ++j) { 228 - ptr = (void *) __get_free_page(GFP_KERNEL); 229 - if (!ptr) { 230 - while (j > 0) 231 - free_page((unsigned long) 232 - pool_entry->elements[--j]); 233 - kfree(pool_entry); 234 - qeth_free_buffer_pool(card); 235 - return -ENOMEM; 236 - } 237 - pool_entry->elements[j] = ptr; 238 - } 239 - list_add(&pool_entry->init_list, 240 - &card->qdio.init_pool.entry_list); 273 + 274 + list_add(&entry->init_list, &card->qdio.init_pool.entry_list); 241 275 } 242 276 return 0; 243 277 } 244 278 245 - int qeth_realloc_buffer_pool(struct qeth_card *card, int bufcnt) 279 + int qeth_resize_buffer_pool(struct qeth_card *card, unsigned int count) 246 280 { 281 + unsigned int buf_elements = QETH_MAX_BUFFER_ELEMENTS(card); 282 + struct qeth_qdio_buffer_pool *pool = &card->qdio.init_pool; 283 + struct qeth_buffer_pool_entry *entry, *tmp; 284 + int delta = count - pool->buf_count; 285 + LIST_HEAD(entries); 286 + 247 287 QETH_CARD_TEXT(card, 2, "realcbp"); 248 288 249 - /* TODO: steel/add buffers from/to a running card's buffer pool (?) 
*/ 250 - qeth_clear_working_pool_list(card); 251 - qeth_free_buffer_pool(card); 252 - card->qdio.in_buf_pool.buf_count = bufcnt; 253 - card->qdio.init_pool.buf_count = bufcnt; 254 - return qeth_alloc_buffer_pool(card); 289 + /* Defer until queue is allocated: */ 290 + if (!card->qdio.in_q) 291 + goto out; 292 + 293 + /* Remove entries from the pool: */ 294 + while (delta < 0) { 295 + entry = list_first_entry(&pool->entry_list, 296 + struct qeth_buffer_pool_entry, 297 + init_list); 298 + list_del(&entry->init_list); 299 + qeth_free_pool_entry(entry); 300 + 301 + delta++; 302 + } 303 + 304 + /* Allocate additional entries: */ 305 + while (delta > 0) { 306 + entry = qeth_alloc_pool_entry(buf_elements); 307 + if (!entry) { 308 + list_for_each_entry_safe(entry, tmp, &entries, 309 + init_list) { 310 + list_del(&entry->init_list); 311 + qeth_free_pool_entry(entry); 312 + } 313 + 314 + return -ENOMEM; 315 + } 316 + 317 + list_add(&entry->init_list, &entries); 318 + 319 + delta--; 320 + } 321 + 322 + list_splice(&entries, &pool->entry_list); 323 + 324 + out: 325 + card->qdio.in_buf_pool.buf_count = count; 326 + pool->buf_count = count; 327 + return 0; 255 328 } 256 - EXPORT_SYMBOL_GPL(qeth_realloc_buffer_pool); 329 + EXPORT_SYMBOL_GPL(qeth_resize_buffer_pool); 257 330 258 331 static void qeth_free_qdio_queue(struct qeth_qdio_q *q) 259 332 { ··· 1241 1170 } 1242 1171 EXPORT_SYMBOL_GPL(qeth_drain_output_queues); 1243 1172 1244 - static void qeth_free_buffer_pool(struct qeth_card *card) 1245 - { 1246 - struct qeth_buffer_pool_entry *pool_entry, *tmp; 1247 - int i = 0; 1248 - list_for_each_entry_safe(pool_entry, tmp, 1249 - &card->qdio.init_pool.entry_list, init_list){ 1250 - for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) 1251 - free_page((unsigned long)pool_entry->elements[i]); 1252 - list_del(&pool_entry->init_list); 1253 - kfree(pool_entry); 1254 - } 1255 - } 1256 - 1257 1173 static int qeth_osa_set_output_queues(struct qeth_card *card, bool single) 1258 1174 { 1259 
1175 unsigned int count = single ? 1 : card->dev->num_tx_queues; ··· 1262 1204 if (count == 1) 1263 1205 dev_info(&card->gdev->dev, "Priority Queueing not supported\n"); 1264 1206 1265 - card->qdio.default_out_queue = single ? 0 : QETH_DEFAULT_QUEUE; 1266 1207 card->qdio.no_out_queues = count; 1267 1208 return 0; 1268 1209 } ··· 2450 2393 return; 2451 2394 2452 2395 qeth_free_cq(card); 2453 - cancel_delayed_work_sync(&card->buffer_reclaim_work); 2454 2396 for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) { 2455 2397 if (card->qdio.in_q->bufs[j].rx_skb) 2456 2398 dev_kfree_skb_any(card->qdio.in_q->bufs[j].rx_skb); ··· 2631 2575 struct list_head *plh; 2632 2576 struct qeth_buffer_pool_entry *entry; 2633 2577 int i, free; 2634 - struct page *page; 2635 2578 2636 2579 if (list_empty(&card->qdio.in_buf_pool.entry_list)) 2637 2580 return NULL; ··· 2639 2584 entry = list_entry(plh, struct qeth_buffer_pool_entry, list); 2640 2585 free = 1; 2641 2586 for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { 2642 - if (page_count(virt_to_page(entry->elements[i])) > 1) { 2587 + if (page_count(entry->elements[i]) > 1) { 2643 2588 free = 0; 2644 2589 break; 2645 2590 } ··· 2654 2599 entry = list_entry(card->qdio.in_buf_pool.entry_list.next, 2655 2600 struct qeth_buffer_pool_entry, list); 2656 2601 for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { 2657 - if (page_count(virt_to_page(entry->elements[i])) > 1) { 2658 - page = alloc_page(GFP_ATOMIC); 2659 - if (!page) { 2602 + if (page_count(entry->elements[i]) > 1) { 2603 + struct page *page = alloc_page(GFP_ATOMIC); 2604 + 2605 + if (!page) 2660 2606 return NULL; 2661 - } else { 2662 - free_page((unsigned long)entry->elements[i]); 2663 - entry->elements[i] = page_address(page); 2664 - QETH_CARD_STAT_INC(card, rx_sg_alloc_page); 2665 - } 2607 + 2608 + __free_page(entry->elements[i]); 2609 + entry->elements[i] = page; 2610 + QETH_CARD_STAT_INC(card, rx_sg_alloc_page); 2666 2611 } 2667 2612 } 2668 2613 list_del_init(&entry->list); ··· 
2680 2625 ETH_HLEN + 2681 2626 sizeof(struct ipv6hdr)); 2682 2627 if (!buf->rx_skb) 2683 - return 1; 2628 + return -ENOMEM; 2684 2629 } 2685 2630 2686 2631 pool_entry = qeth_find_free_buffer_pool_entry(card); 2687 2632 if (!pool_entry) 2688 - return 1; 2633 + return -ENOBUFS; 2689 2634 2690 2635 /* 2691 2636 * since the buffer is accessed only from the input_tasklet ··· 2698 2643 for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { 2699 2644 buf->buffer->element[i].length = PAGE_SIZE; 2700 2645 buf->buffer->element[i].addr = 2701 - virt_to_phys(pool_entry->elements[i]); 2646 + page_to_phys(pool_entry->elements[i]); 2702 2647 if (i == QETH_MAX_BUFFER_ELEMENTS(card) - 1) 2703 2648 buf->buffer->element[i].eflags = SBAL_EFLAGS_LAST_ENTRY; 2704 2649 else ··· 2730 2675 /* inbound queue */ 2731 2676 qdio_reset_buffers(card->qdio.in_q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q); 2732 2677 memset(&card->rx, 0, sizeof(struct qeth_rx)); 2678 + 2733 2679 qeth_initialize_working_pool_list(card); 2734 2680 /*give only as many buffers to hardware as we have buffer pool entries*/ 2735 - for (i = 0; i < card->qdio.in_buf_pool.buf_count - 1; ++i) 2736 - qeth_init_input_buffer(card, &card->qdio.in_q->bufs[i]); 2681 + for (i = 0; i < card->qdio.in_buf_pool.buf_count - 1; i++) { 2682 + rc = qeth_init_input_buffer(card, &card->qdio.in_q->bufs[i]); 2683 + if (rc) 2684 + return rc; 2685 + } 2686 + 2737 2687 card->qdio.in_q->next_buf_to_init = 2738 2688 card->qdio.in_buf_pool.buf_count - 1; 2739 2689 rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0, 0,
+4 -5
drivers/s390/net/qeth_core_sys.c
··· 247 247 struct device_attribute *attr, const char *buf, size_t count) 248 248 { 249 249 struct qeth_card *card = dev_get_drvdata(dev); 250 + unsigned int cnt; 250 251 char *tmp; 251 - int cnt, old_cnt; 252 252 int rc = 0; 253 253 254 254 mutex_lock(&card->conf_mutex); ··· 257 257 goto out; 258 258 } 259 259 260 - old_cnt = card->qdio.in_buf_pool.buf_count; 261 260 cnt = simple_strtoul(buf, &tmp, 10); 262 261 cnt = (cnt < QETH_IN_BUF_COUNT_MIN) ? QETH_IN_BUF_COUNT_MIN : 263 262 ((cnt > QETH_IN_BUF_COUNT_MAX) ? QETH_IN_BUF_COUNT_MAX : cnt); 264 - if (old_cnt != cnt) { 265 - rc = qeth_realloc_buffer_pool(card, cnt); 266 - } 263 + 264 + rc = qeth_resize_buffer_pool(card, cnt); 265 + 267 266 out: 268 267 mutex_unlock(&card->conf_mutex); 269 268 return rc ? rc : count;
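The store handler above clamps the requested buffer count into `[QETH_IN_BUF_COUNT_MIN, QETH_IN_BUF_COUNT_MAX]` with nested ternaries before calling the resize helper. The same clamp-to-range idiom, sketched as a standalone function (the function name is hypothetical; current kernels also provide a generic `clamp()` macro for this):

```c
#include <assert.h>

/* Clamp a requested value into [lo, hi]; mirrors the nested-ternary
 * idiom used by the buf_count store handler above. */
static unsigned int clamp_uint(unsigned int v, unsigned int lo, unsigned int hi)
{
	return (v < lo) ? lo : ((v > hi) ? hi : v);
}
```

Assuming the qeth limits of 8 and 128 buffers, a request of 200 is reduced to 128 and a request of 3 is raised to 8, so `qeth_resize_buffer_pool()` never sees an out-of-range count.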
+1
drivers/s390/net/qeth_l2_main.c
··· 284 284 if (card->state == CARD_STATE_SOFTSETUP) { 285 285 qeth_clear_ipacmd_list(card); 286 286 qeth_drain_output_queues(card); 287 + cancel_delayed_work_sync(&card->buffer_reclaim_work); 287 288 card->state = CARD_STATE_DOWN; 288 289 } 289 290
+1
drivers/s390/net/qeth_l3_main.c
··· 1178 1178 qeth_l3_clear_ip_htable(card, 1); 1179 1179 qeth_clear_ipacmd_list(card); 1180 1180 qeth_drain_output_queues(card); 1181 + cancel_delayed_work_sync(&card->buffer_reclaim_work); 1181 1182 card->state = CARD_STATE_DOWN; 1182 1183 } 1183 1184
+4 -5
drivers/s390/net/qeth_l3_sys.c
··· 206 206 qdio_get_ssqd_desc(CARD_DDEV(card), &card->ssqd); 207 207 if (card->ssqd.qdioac2 & CHSC_AC2_SNIFFER_AVAILABLE) { 208 208 card->options.sniffer = i; 209 - if (card->qdio.init_pool.buf_count != 210 - QETH_IN_BUF_COUNT_MAX) 211 - qeth_realloc_buffer_pool(card, 212 - QETH_IN_BUF_COUNT_MAX); 213 - } else 209 + qeth_resize_buffer_pool(card, QETH_IN_BUF_COUNT_MAX); 210 + } else { 214 211 rc = -EPERM; 212 + } 213 + 215 214 break; 216 215 default: 217 216 rc = -EINVAL;
+2 -1
drivers/scsi/ipr.c
··· 9950 9950 ioa_cfg->max_devs_supported = ipr_max_devs; 9951 9951 9952 9952 if (ioa_cfg->sis64) { 9953 + host->max_channel = IPR_MAX_SIS64_BUSES; 9953 9954 host->max_id = IPR_MAX_SIS64_TARGETS_PER_BUS; 9954 9955 host->max_lun = IPR_MAX_SIS64_LUNS_PER_TARGET; 9955 9956 if (ipr_max_devs > IPR_MAX_SIS64_DEVS) ··· 9959 9958 + ((sizeof(struct ipr_config_table_entry64) 9960 9959 * ioa_cfg->max_devs_supported))); 9961 9960 } else { 9961 + host->max_channel = IPR_VSET_BUS; 9962 9962 host->max_id = IPR_MAX_NUM_TARGETS_PER_BUS; 9963 9963 host->max_lun = IPR_MAX_NUM_LUNS_PER_TARGET; 9964 9964 if (ipr_max_devs > IPR_MAX_PHYSICAL_DEVS) ··· 9969 9967 * ioa_cfg->max_devs_supported))); 9970 9968 } 9971 9969 9972 - host->max_channel = IPR_VSET_BUS; 9973 9970 host->unique_id = host->host_no; 9974 9971 host->max_cmd_len = IPR_MAX_CDB_LEN; 9975 9972 host->can_queue = ioa_cfg->max_cmds;
+1
drivers/scsi/ipr.h
··· 1300 1300 #define IPR_ARRAY_VIRTUAL_BUS 0x1 1301 1301 #define IPR_VSET_VIRTUAL_BUS 0x2 1302 1302 #define IPR_IOAFP_VIRTUAL_BUS 0x3 1303 + #define IPR_MAX_SIS64_BUSES 0x4 1303 1304 1304 1305 #define IPR_GET_RES_PHYS_LOC(res) \ 1305 1306 (((res)->bus << 24) | ((res)->target << 8) | (res)->lun)
+14 -7
drivers/scsi/ufs/ufshcd.c
··· 3884 3884 void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) 3885 3885 { 3886 3886 unsigned long flags; 3887 + bool update = false; 3887 3888 3888 - if (!(hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT)) 3889 + if (!ufshcd_is_auto_hibern8_supported(hba)) 3889 3890 return; 3890 3891 3891 3892 spin_lock_irqsave(hba->host->host_lock, flags); 3892 - if (hba->ahit == ahit) 3893 - goto out_unlock; 3894 - hba->ahit = ahit; 3895 - if (!pm_runtime_suspended(hba->dev)) 3896 - ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); 3897 - out_unlock: 3893 + if (hba->ahit != ahit) { 3894 + hba->ahit = ahit; 3895 + update = true; 3896 + } 3898 3897 spin_unlock_irqrestore(hba->host->host_lock, flags); 3898 + 3899 + if (update && !pm_runtime_suspended(hba->dev)) { 3900 + pm_runtime_get_sync(hba->dev); 3901 + ufshcd_hold(hba, false); 3902 + ufshcd_auto_hibern8_enable(hba); 3903 + ufshcd_release(hba); 3904 + pm_runtime_put(hba->dev); 3905 + } 3899 3906 } 3900 3907 EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update); 3901 3908
+3
drivers/slimbus/qcom-ngd-ctrl.c
··· 1320 1320 { 1321 1321 .compatible = "qcom,slim-ngd-v1.5.0", 1322 1322 .data = &ngd_v1_5_offset_info, 1323 + },{ 1324 + .compatible = "qcom,slim-ngd-v2.1.0", 1325 + .data = &ngd_v1_5_offset_info, 1323 1326 }, 1324 1327 {} 1325 1328 };
+11 -10
drivers/staging/greybus/tools/loopback_test.c
··· 19 19 #include <signal.h> 20 20 21 21 #define MAX_NUM_DEVICES 10 22 + #define MAX_SYSFS_PREFIX 0x80 22 23 #define MAX_SYSFS_PATH 0x200 23 24 #define CSV_MAX_LINE 0x1000 24 25 #define SYSFS_MAX_INT 0x20 ··· 68 67 }; 69 68 70 69 struct loopback_device { 71 - char name[MAX_SYSFS_PATH]; 70 + char name[MAX_STR_LEN]; 72 71 char sysfs_entry[MAX_SYSFS_PATH]; 73 72 char debugfs_entry[MAX_SYSFS_PATH]; 74 73 struct loopback_results results; ··· 94 93 int stop_all; 95 94 int poll_count; 96 95 char test_name[MAX_STR_LEN]; 97 - char sysfs_prefix[MAX_SYSFS_PATH]; 98 - char debugfs_prefix[MAX_SYSFS_PATH]; 96 + char sysfs_prefix[MAX_SYSFS_PREFIX]; 97 + char debugfs_prefix[MAX_SYSFS_PREFIX]; 99 98 struct timespec poll_timeout; 100 99 struct loopback_device devices[MAX_NUM_DEVICES]; 101 100 struct loopback_results aggregate_results; ··· 638 637 static int open_poll_files(struct loopback_test *t) 639 638 { 640 639 struct loopback_device *dev; 641 - char buf[MAX_STR_LEN]; 640 + char buf[MAX_SYSFS_PATH + MAX_STR_LEN]; 642 641 char dummy; 643 642 int fds_idx = 0; 644 643 int i; ··· 656 655 goto err; 657 656 } 658 657 read(t->fds[fds_idx].fd, &dummy, 1); 659 - t->fds[fds_idx].events = EPOLLERR|EPOLLPRI; 658 + t->fds[fds_idx].events = POLLERR | POLLPRI; 660 659 t->fds[fds_idx].revents = 0; 661 660 fds_idx++; 662 661 } ··· 749 748 } 750 749 751 750 for (i = 0; i < t->poll_count; i++) { 752 - if (t->fds[i].revents & EPOLLPRI) { 751 + if (t->fds[i].revents & POLLPRI) { 753 752 /* Dummy read to clear the event */ 754 753 read(t->fds[i].fd, &dummy, 1); 755 754 number_of_events++; ··· 908 907 t.iteration_max = atoi(optarg); 909 908 break; 910 909 case 'S': 911 - snprintf(t.sysfs_prefix, MAX_SYSFS_PATH, "%s", optarg); 910 + snprintf(t.sysfs_prefix, MAX_SYSFS_PREFIX, "%s", optarg); 912 911 break; 913 912 case 'D': 914 - snprintf(t.debugfs_prefix, MAX_SYSFS_PATH, "%s", optarg); 913 + snprintf(t.debugfs_prefix, MAX_SYSFS_PREFIX, "%s", optarg); 915 914 break; 916 915 case 'm': 917 916 t.mask = 
atol(optarg); ··· 962 961 } 963 962 964 963 if (!strcmp(t.sysfs_prefix, "")) 965 - snprintf(t.sysfs_prefix, MAX_SYSFS_PATH, "%s", sysfs_prefix); 964 + snprintf(t.sysfs_prefix, MAX_SYSFS_PREFIX, "%s", sysfs_prefix); 966 965 967 966 if (!strcmp(t.debugfs_prefix, "")) 968 - snprintf(t.debugfs_prefix, MAX_SYSFS_PATH, "%s", debugfs_prefix); 967 + snprintf(t.debugfs_prefix, MAX_SYSFS_PREFIX, "%s", debugfs_prefix); 969 968 970 969 ret = find_loopback_devices(&t); 971 970 if (ret)
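The hunk above swaps the epoll constants for their `poll()` counterparts: the loopback test waits with `poll(2)`, and sysfs attributes signal a change via `POLLERR | POLLPRI`, not `POLLIN`. A minimal userspace sketch of that wait-then-rearm loop (the helper name is hypothetical; the dummy read re-arms the sysfs notification, as the tool itself does):

```c
#include <poll.h>
#include <unistd.h>

/* Wait for a sysfs-style notification: pollers are woken with
 * POLLERR|POLLPRI, and a dummy read clears the pending event. */
static int wait_sysfs_event(int fd, int timeout_ms)
{
	struct pollfd pfd = { .fd = fd, .events = POLLERR | POLLPRI };
	char dummy;
	int ret = poll(&pfd, 1, timeout_ms);

	if (ret > 0 && (pfd.revents & POLLPRI))
		(void)read(fd, &dummy, 1);	/* re-arm the notification */
	return ret;
}
```

With a zero timeout and no pending event, `poll()` returns 0 immediately, which is how the original mixed-up `EPOLL*` constants could go unnoticed on glibc (where the values happen to coincide).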
+1
drivers/staging/rtl8188eu/os_dep/usb_intf.c
··· 38 38 {USB_DEVICE(0x2001, 0x331B)}, /* D-Link DWA-121 rev B1 */ 39 39 {USB_DEVICE(0x2357, 0x010c)}, /* TP-Link TL-WN722N v2 */ 40 40 {USB_DEVICE(0x2357, 0x0111)}, /* TP-Link TL-WN727N v5.21 */ 41 + {USB_DEVICE(0x2C4E, 0x0102)}, /* MERCUSYS MW150US v2 */ 41 42 {USB_DEVICE(0x0df6, 0x0076)}, /* Sitecom N150 v2 */ 42 43 {USB_DEVICE(USB_VENDER_ID_REALTEK, 0xffef)}, /* Rosewill RNX-N150NUB */ 43 44 {} /* Terminating entry */
+1 -1
drivers/staging/speakup/main.c
··· 561 561 return 0; 562 562 } else if (tmpx < vc->vc_cols - 2 && 563 563 (ch == SPACE || ch == 0 || (ch < 0x100 && IS_WDLM(ch))) && 564 - get_char(vc, (u_short *)&tmp_pos + 1, &temp) > SPACE) { 564 + get_char(vc, (u_short *)tmp_pos + 1, &temp) > SPACE) { 565 565 tmp_pos += 2; 566 566 tmpx++; 567 567 } else {
+8 -7
drivers/staging/wfx/hif_tx.c
··· 140 140 else 141 141 control_reg_write(wdev, 0); 142 142 mutex_unlock(&wdev->hif_cmd.lock); 143 + mutex_unlock(&wdev->hif_cmd.key_renew_lock); 143 144 kfree(hif); 144 145 return ret; 145 146 } ··· 290 289 } 291 290 292 291 int hif_join(struct wfx_vif *wvif, const struct ieee80211_bss_conf *conf, 293 - const struct ieee80211_channel *channel, const u8 *ssidie) 292 + struct ieee80211_channel *channel, const u8 *ssid, int ssidlen) 294 293 { 295 294 int ret; 296 295 struct hif_msg *hif; ··· 308 307 body->basic_rate_set = 309 308 cpu_to_le32(wfx_rate_mask_to_hw(wvif->wdev, conf->basic_rates)); 310 309 memcpy(body->bssid, conf->bssid, sizeof(body->bssid)); 311 - if (!conf->ibss_joined && ssidie) { 312 - body->ssid_length = cpu_to_le32(ssidie[1]); 313 - memcpy(body->ssid, &ssidie[2], ssidie[1]); 310 + if (!conf->ibss_joined && ssid) { 311 + body->ssid_length = cpu_to_le32(ssidlen); 312 + memcpy(body->ssid, ssid, ssidlen); 314 313 } 315 314 wfx_fill_header(hif, wvif->id, HIF_REQ_ID_JOIN, sizeof(*body)); 316 315 ret = wfx_cmd_send(wvif->wdev, hif, NULL, 0, false); ··· 428 427 struct hif_msg *hif; 429 428 struct hif_req_start *body = wfx_alloc_hif(sizeof(*body), &hif); 430 429 431 - body->dtim_period = conf->dtim_period, 432 - body->short_preamble = conf->use_short_preamble, 433 - body->channel_number = cpu_to_le16(channel->hw_value), 430 + body->dtim_period = conf->dtim_period; 431 + body->short_preamble = conf->use_short_preamble; 432 + body->channel_number = cpu_to_le16(channel->hw_value); 434 433 body->beacon_interval = cpu_to_le32(conf->beacon_int); 435 434 body->basic_rate_set = 436 435 cpu_to_le32(wfx_rate_mask_to_hw(wvif->wdev, conf->basic_rates));
+1 -1
drivers/staging/wfx/hif_tx.h
··· 46 46 int chan_start, int chan_num); 47 47 int hif_stop_scan(struct wfx_vif *wvif); 48 48 int hif_join(struct wfx_vif *wvif, const struct ieee80211_bss_conf *conf, 49 - const struct ieee80211_channel *channel, const u8 *ssidie); 49 + struct ieee80211_channel *channel, const u8 *ssid, int ssidlen); 50 50 int hif_set_pm(struct wfx_vif *wvif, bool ps, int dynamic_ps_timeout); 51 51 int hif_set_bss_params(struct wfx_vif *wvif, 52 52 const struct hif_req_set_bss_params *arg);
+10 -5
drivers/staging/wfx/hif_tx_mib.h
··· 191 191 } 192 192 193 193 static inline int hif_set_association_mode(struct wfx_vif *wvif, 194 - struct ieee80211_bss_conf *info, 195 - struct ieee80211_sta_ht_cap *ht_cap) 194 + struct ieee80211_bss_conf *info) 196 195 { 197 196 int basic_rates = wfx_rate_mask_to_hw(wvif->wdev, info->basic_rates); 197 + struct ieee80211_sta *sta = NULL; 198 198 struct hif_mib_set_association_mode val = { 199 199 .preambtype_use = 1, 200 200 .mode = 1, ··· 204 204 .basic_rate_set = cpu_to_le32(basic_rates) 205 205 }; 206 206 207 + rcu_read_lock(); // protect sta 208 + if (info->bssid && !info->ibss_joined) 209 + sta = ieee80211_find_sta(wvif->vif, info->bssid); 210 + 207 211 // FIXME: it is strange to not retrieve all information from bss_info 208 - if (ht_cap && ht_cap->ht_supported) { 209 - val.mpdu_start_spacing = ht_cap->ampdu_density; 212 + if (sta && sta->ht_cap.ht_supported) { 213 + val.mpdu_start_spacing = sta->ht_cap.ampdu_density; 210 214 if (!(info->ht_operation_mode & IEEE80211_HT_OP_MODE_NON_GF_STA_PRSNT)) 211 - val.greenfield = !!(ht_cap->cap & IEEE80211_HT_CAP_GRN_FLD); 215 + val.greenfield = !!(sta->ht_cap.cap & IEEE80211_HT_CAP_GRN_FLD); 212 216 } 217 + rcu_read_unlock(); 213 218 214 219 return hif_write_mib(wvif->wdev, wvif->id, 215 220 HIF_MIB_ID_SET_ASSOCIATION_MODE, &val, sizeof(val));
+15 -10
drivers/staging/wfx/sta.c
··· 491 491 static void wfx_do_join(struct wfx_vif *wvif) 492 492 { 493 493 int ret; 494 - const u8 *ssidie; 495 494 struct ieee80211_bss_conf *conf = &wvif->vif->bss_conf; 496 495 struct cfg80211_bss *bss = NULL; 496 + u8 ssid[IEEE80211_MAX_SSID_LEN]; 497 + const u8 *ssidie = NULL; 498 + int ssidlen = 0; 497 499 498 500 wfx_tx_lock_flush(wvif->wdev); 499 501 ··· 516 514 if (!wvif->beacon_int) 517 515 wvif->beacon_int = 1; 518 516 519 - rcu_read_lock(); 517 + rcu_read_lock(); // protect ssidie 520 518 if (!conf->ibss_joined) 521 519 ssidie = ieee80211_bss_get_ie(bss, WLAN_EID_SSID); 522 - else 523 - ssidie = NULL; 520 + if (ssidie) { 521 + ssidlen = ssidie[1]; 522 + memcpy(ssid, &ssidie[2], ssidie[1]); 523 + } 524 + rcu_read_unlock(); 524 525 525 526 wfx_tx_flush(wvif->wdev); 526 527 ··· 532 527 533 528 wfx_set_mfp(wvif, bss); 534 529 535 - /* Perform actual join */ 536 530 wvif->wdev->tx_burst_idx = -1; 537 - ret = hif_join(wvif, conf, wvif->channel, ssidie); 538 - rcu_read_unlock(); 531 + ret = hif_join(wvif, conf, wvif->channel, ssid, ssidlen); 539 532 if (ret) { 540 533 ieee80211_connection_loss(wvif->vif); 541 534 wvif->join_complete_status = -1; ··· 608 605 int i; 609 606 610 607 for (i = 0; i < ARRAY_SIZE(sta_priv->buffered); i++) 611 - WARN(sta_priv->buffered[i], "release station while Tx is in progress"); 608 + if (sta_priv->buffered[i]) 609 + dev_warn(wvif->wdev->dev, "release station while %d pending frame on queue %d", 610 + sta_priv->buffered[i], i); 612 611 // FIXME: see note in wfx_sta_add() 613 612 if (vif->type == NL80211_IFTYPE_STATION) 614 613 return 0; ··· 694 689 wfx_rate_mask_to_hw(wvif->wdev, sta->supp_rates[wvif->channel->band]); 695 690 else 696 691 wvif->bss_params.operational_rate_set = -1; 692 + rcu_read_unlock(); 697 693 if (sta && 698 694 info->ht_operation_mode & IEEE80211_HT_OP_MODE_NON_GF_STA_PRSNT) 699 695 hif_dual_cts_protection(wvif, true); ··· 707 701 wvif->bss_params.beacon_lost_count = 20; 708 702 wvif->bss_params.aid = 
info->aid; 709 703 710 - hif_set_association_mode(wvif, info, sta ? &sta->ht_cap : NULL); 711 - rcu_read_unlock(); 704 + hif_set_association_mode(wvif, info); 712 705 713 706 if (!info->ibss_joined) { 714 707 hif_keep_alive_period(wvif, 30 /* sec */);
+1 -1
drivers/thunderbolt/switch.c
··· 954 954 ret = tb_port_read(port, &phy, TB_CFG_PORT, 955 955 port->cap_phy + LANE_ADP_CS_0, 1); 956 956 if (ret) 957 - return ret; 957 + return false; 958 958 959 959 widths = (phy & LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK) >> 960 960 LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT;
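This one-line change matters because the surrounding function returns `bool`: `return ret;` with a negative errno is implicitly converted to `true`, so a failed register read would report "lane bonding supported". A compact illustration of the conversion pitfall (function names hypothetical, `-5` standing in for `-EIO`):

```c
#include <assert.h>
#include <stdbool.h>

/* Any nonzero int - including a negative errno - converts to true,
 * which is why an errno must never escape a bool-valued predicate. */
static bool buggy_predicate(int err)
{
	return err;		/* -EIO (-5) silently becomes true */
}

static bool fixed_predicate(int err)
{
	if (err)
		return false;	/* treat any failure as "not supported" */
	return true;
}
```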
+6 -8
drivers/tty/tty_io.c
··· 1589 1589 tty_debug_hangup(tty, "freeing structure\n"); 1590 1590 /* 1591 1591 * The release_tty function takes care of the details of clearing 1592 - * the slots and preserving the termios structure. The tty_unlock_pair 1593 - * should be safe as we keep a kref while the tty is locked (so the 1594 - * unlock never unlocks a freed tty). 1592 + * the slots and preserving the termios structure. 1595 1593 */ 1596 1594 mutex_lock(&tty_mutex); 1597 1595 tty_port_set_kopened(tty->port, 0); ··· 1619 1621 tty_debug_hangup(tty, "freeing structure\n"); 1620 1622 /* 1621 1623 * The release_tty function takes care of the details of clearing 1622 - * the slots and preserving the termios structure. The tty_unlock_pair 1623 - * should be safe as we keep a kref while the tty is locked (so the 1624 - * unlock never unlocks a freed tty). 1624 + * the slots and preserving the termios structure. 1625 1625 */ 1626 1626 mutex_lock(&tty_mutex); 1627 1627 release_tty(tty, idx); ··· 2730 2734 struct serial_struct32 v32; 2731 2735 struct serial_struct v; 2732 2736 int err; 2733 - memset(&v, 0, sizeof(struct serial_struct)); 2734 2737 2735 - if (!tty->ops->set_serial) 2738 + memset(&v, 0, sizeof(v)); 2739 + memset(&v32, 0, sizeof(v32)); 2740 + 2741 + if (!tty->ops->get_serial) 2736 2742 return -ENOTTY; 2737 2743 err = tty->ops->get_serial(tty, &v); 2738 2744 if (!err) {
+4 -3
drivers/usb/chipidea/udc.c
··· 1530 1530 static void ci_hdrc_gadget_connect(struct usb_gadget *_gadget, int is_active) 1531 1531 { 1532 1532 struct ci_hdrc *ci = container_of(_gadget, struct ci_hdrc, gadget); 1533 - unsigned long flags; 1534 1533 1535 1534 if (is_active) { 1536 1535 pm_runtime_get_sync(&_gadget->dev); 1537 1536 hw_device_reset(ci); 1538 - spin_lock_irqsave(&ci->lock, flags); 1537 + spin_lock_irq(&ci->lock); 1539 1538 if (ci->driver) { 1540 1539 hw_device_state(ci, ci->ep0out->qh.dma); 1541 1540 usb_gadget_set_state(_gadget, USB_STATE_POWERED); 1541 + spin_unlock_irq(&ci->lock); 1542 1542 usb_udc_vbus_handler(_gadget, true); 1543 + } else { 1544 + spin_unlock_irq(&ci->lock); 1543 1545 } 1544 - spin_unlock_irqrestore(&ci->lock, flags); 1545 1546 } else { 1546 1547 usb_udc_vbus_handler(_gadget, false); 1547 1548 if (ci->driver)
+21 -13
drivers/usb/class/cdc-acm.c
··· 896 896 897 897 ss->xmit_fifo_size = acm->writesize; 898 898 ss->baud_base = le32_to_cpu(acm->line.dwDTERate); 899 - ss->close_delay = acm->port.close_delay / 10; 899 + ss->close_delay = jiffies_to_msecs(acm->port.close_delay) / 10; 900 900 ss->closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ? 901 901 ASYNC_CLOSING_WAIT_NONE : 902 - acm->port.closing_wait / 10; 902 + jiffies_to_msecs(acm->port.closing_wait) / 10; 903 903 return 0; 904 904 } 905 905 ··· 907 907 { 908 908 struct acm *acm = tty->driver_data; 909 909 unsigned int closing_wait, close_delay; 910 + unsigned int old_closing_wait, old_close_delay; 910 911 int retval = 0; 911 912 912 - close_delay = ss->close_delay * 10; 913 + close_delay = msecs_to_jiffies(ss->close_delay * 10); 913 914 closing_wait = ss->closing_wait == ASYNC_CLOSING_WAIT_NONE ? 914 - ASYNC_CLOSING_WAIT_NONE : ss->closing_wait * 10; 915 + ASYNC_CLOSING_WAIT_NONE : 916 + msecs_to_jiffies(ss->closing_wait * 10); 917 + 918 + /* we must redo the rounding here, so that the values match */ 919 + old_close_delay = jiffies_to_msecs(acm->port.close_delay) / 10; 920 + old_closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ? 921 + ASYNC_CLOSING_WAIT_NONE : 922 + jiffies_to_msecs(acm->port.closing_wait) / 10; 915 923 916 924 mutex_lock(&acm->port.mutex); 917 925 918 - if (!capable(CAP_SYS_ADMIN)) { 919 - if ((close_delay != acm->port.close_delay) || 920 - (closing_wait != acm->port.closing_wait)) 926 + if ((ss->close_delay != old_close_delay) || 927 + (ss->closing_wait != old_closing_wait)) { 928 + if (!capable(CAP_SYS_ADMIN)) 921 929 retval = -EPERM; 922 - else 923 - retval = -EOPNOTSUPP; 924 - } else { 925 - acm->port.close_delay = close_delay; 926 - acm->port.closing_wait = closing_wait; 927 - } 930 + else { 931 + acm->port.close_delay = close_delay; 932 + acm->port.closing_wait = closing_wait; 933 + } 934 + } else 935 + retval = -EOPNOTSUPP; 928 936 929 937 mutex_unlock(&acm->port.mutex); 930 938 return retval;
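The "we must redo the rounding" comment exists because `close_delay` is stored in jiffies but exposed through TIOCSSERIAL in centiseconds: each direction of the conversion can round, so the only stable comparison is the user's value against the stored value converted back exactly as the getter converts it. A userspace model of that round trip (HZ is assumed for illustration; the kernel's `msecs_to_jiffies()` rounds up, `jiffies_to_msecs()` effectively rounds down):

```c
#include <assert.h>

#define HZ 250	/* assumed tick rate, for illustration only */

/* Round up, as the kernel's msecs_to_jiffies() does. */
static unsigned int msecs_to_jiffies_m(unsigned int ms)
{
	return (ms * HZ + 999) / 1000;
}

static unsigned int jiffies_to_msecs_m(unsigned int j)
{
	return j * 1000 / HZ;
}

/* What the TIOCGSERIAL getter reports for a stored jiffies value,
 * in the ioctl's centisecond unit. */
static unsigned int reported_cs(unsigned int jiffies)
{
	return jiffies_to_msecs_m(jiffies) / 10;
}
```

At HZ=250, storing 1 centisecond (10 ms) yields 3 jiffies, which reads back as 12 ms. Comparing raw milliseconds (10 != 12) would wrongly flag an unchanged setting as a privileged change; comparing in the ioctl's own unit (1 == 1), as the patched code does, does not.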
+6
drivers/usb/core/quirks.c
··· 378 378 { USB_DEVICE(0x0b05, 0x17e0), .driver_info = 379 379 USB_QUIRK_IGNORE_REMOTE_WAKEUP }, 380 380 381 + /* Realtek hub in Dell WD19 (Type-C) */ 382 + { USB_DEVICE(0x0bda, 0x0487), .driver_info = USB_QUIRK_NO_LPM }, 383 + 384 + /* Generic RTL8153 based ethernet adapters */ 385 + { USB_DEVICE(0x0bda, 0x8153), .driver_info = USB_QUIRK_NO_LPM }, 386 + 381 387 /* Action Semiconductor flash disk */ 382 388 { USB_DEVICE(0x10d6, 0x2200), .driver_info = 383 389 USB_QUIRK_STRING_FETCH_255 },
+2 -1
drivers/usb/host/xhci-pci.c
··· 136 136 xhci->quirks |= XHCI_AMD_PLL_FIX; 137 137 138 138 if (pdev->vendor == PCI_VENDOR_ID_AMD && 139 - (pdev->device == 0x15e0 || 139 + (pdev->device == 0x145c || 140 + pdev->device == 0x15e0 || 140 141 pdev->device == 0x15e1 || 141 142 pdev->device == 0x43bb)) 142 143 xhci->quirks |= XHCI_SUSPEND_DELAY;
+1
drivers/usb/host/xhci-plat.c
··· 445 445 static struct platform_driver usb_xhci_driver = { 446 446 .probe = xhci_plat_probe, 447 447 .remove = xhci_plat_remove, 448 + .shutdown = usb_hcd_platform_shutdown, 448 449 .driver = { 449 450 .name = "xhci-hcd", 450 451 .pm = &xhci_plat_pm_ops,
+6 -17
drivers/usb/host/xhci-trace.h
··· 289 289 ), 290 290 TP_printk("ep%d%s-%s: urb %p pipe %u slot %d length %d/%d sgs %d/%d stream %d flags %08x", 291 291 __entry->epnum, __entry->dir_in ? "in" : "out", 292 - ({ char *s; 293 - switch (__entry->type) { 294 - case USB_ENDPOINT_XFER_INT: 295 - s = "intr"; 296 - break; 297 - case USB_ENDPOINT_XFER_CONTROL: 298 - s = "control"; 299 - break; 300 - case USB_ENDPOINT_XFER_BULK: 301 - s = "bulk"; 302 - break; 303 - case USB_ENDPOINT_XFER_ISOC: 304 - s = "isoc"; 305 - break; 306 - default: 307 - s = "UNKNOWN"; 308 - } s; }), __entry->urb, __entry->pipe, __entry->slot_id, 292 + __print_symbolic(__entry->type, 293 + { USB_ENDPOINT_XFER_INT, "intr" }, 294 + { USB_ENDPOINT_XFER_CONTROL, "control" }, 295 + { USB_ENDPOINT_XFER_BULK, "bulk" }, 296 + { USB_ENDPOINT_XFER_ISOC, "isoc" }), 297 + __entry->urb, __entry->pipe, __entry->slot_id, 309 298 __entry->actual, __entry->length, __entry->num_mapped_sgs, 310 299 __entry->num_sgs, __entry->stream, __entry->flags 311 300 )
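`__print_symbolic()` replaces the open-coded switch with a value-to-string table that the tracing infrastructure can also export to userspace tools. The equivalent table-driven lookup in plain C (the values 0-3 follow the USB endpoint transfer-type encoding used by the kernel; the fallback string is an assumption, since the tracing macro prints the raw value when nothing matches):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct sym { int val; const char *name; };

static const struct sym xfer_names[] = {
	{ 0, "control" },	/* USB_ENDPOINT_XFER_CONTROL */
	{ 1, "isoc" },		/* USB_ENDPOINT_XFER_ISOC */
	{ 2, "bulk" },		/* USB_ENDPOINT_XFER_BULK */
	{ 3, "intr" },		/* USB_ENDPOINT_XFER_INT */
};

/* Table-driven analog of __print_symbolic(): one table shared by
 * every call site, instead of a switch per trace event. */
static const char *sym_name(int val)
{
	size_t i;

	for (i = 0; i < sizeof(xfer_names) / sizeof(xfer_names[0]); i++)
		if (xfer_names[i].val == val)
			return xfer_names[i].name;
	return "UNKNOWN";
}
```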
+2
drivers/usb/serial/option.c
··· 1183 1183 .driver_info = NCTRL(0) }, 1184 1184 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x110a, 0xff), /* Telit ME910G1 */ 1185 1185 .driver_info = NCTRL(0) | RSVD(3) }, 1186 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x110b, 0xff), /* Telit ME910G1 (ECM) */ 1187 + .driver_info = NCTRL(0) }, 1186 1188 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910), 1187 1189 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1188 1190 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),
+1
drivers/usb/serial/pl2303.c
··· 99 99 { USB_DEVICE(SUPERIAL_VENDOR_ID, SUPERIAL_PRODUCT_ID) }, 100 100 { USB_DEVICE(HP_VENDOR_ID, HP_LD220_PRODUCT_ID) }, 101 101 { USB_DEVICE(HP_VENDOR_ID, HP_LD220TA_PRODUCT_ID) }, 102 + { USB_DEVICE(HP_VENDOR_ID, HP_LD381_PRODUCT_ID) }, 102 103 { USB_DEVICE(HP_VENDOR_ID, HP_LD960_PRODUCT_ID) }, 103 104 { USB_DEVICE(HP_VENDOR_ID, HP_LD960TA_PRODUCT_ID) }, 104 105 { USB_DEVICE(HP_VENDOR_ID, HP_LCM220_PRODUCT_ID) },
+1
drivers/usb/serial/pl2303.h
··· 130 130 #define HP_LM920_PRODUCT_ID 0x026b 131 131 #define HP_TD620_PRODUCT_ID 0x0956 132 132 #define HP_LD960_PRODUCT_ID 0x0b39 133 + #define HP_LD381_PRODUCT_ID 0x0f7f 133 134 #define HP_LCM220_PRODUCT_ID 0x3139 134 135 #define HP_LCM960_PRODUCT_ID 0x3239 135 136 #define HP_LD220_PRODUCT_ID 0x3524
+11 -1
drivers/usb/typec/ucsi/displayport.c
··· 271 271 return; 272 272 273 273 dp = typec_altmode_get_drvdata(alt); 274 + if (!dp) 275 + return; 276 + 274 277 dp->data.conf = 0; 275 278 dp->data.status = 0; 276 279 dp->initialized = false; ··· 288 285 struct typec_altmode *alt; 289 286 struct ucsi_dp *dp; 290 287 288 + mutex_lock(&con->lock); 289 + 291 290 /* We can't rely on the firmware with the capabilities. */ 292 291 desc->vdo |= DP_CAP_DP_SIGNALING | DP_CAP_RECEPTACLE; 293 292 ··· 298 293 desc->vdo |= all_assignments << 16; 299 294 300 295 alt = typec_port_register_altmode(con->port, desc); 301 - if (IS_ERR(alt)) 296 + if (IS_ERR(alt)) { 297 + mutex_unlock(&con->lock); 302 298 return alt; 299 + } 303 300 304 301 dp = devm_kzalloc(&alt->dev, sizeof(*dp), GFP_KERNEL); 305 302 if (!dp) { 306 303 typec_unregister_altmode(alt); 304 + mutex_unlock(&con->lock); 307 305 return ERR_PTR(-ENOMEM); 308 306 } 309 307 ··· 318 310 319 311 alt->ops = &ucsi_displayport_ops; 320 312 typec_altmode_set_drvdata(alt, dp); 313 + 314 + mutex_unlock(&con->lock); 321 315 322 316 return alt; 323 317 }
+1 -1
drivers/virtio/virtio_balloon.c
··· 959 959 iput(vb->vb_dev_info.inode); 960 960 out_kern_unmount: 961 961 kern_unmount(balloon_mnt); 962 - #endif 963 962 out_del_vqs: 963 + #endif 964 964 vdev->config->del_vqs(vdev); 965 965 out_free_vb: 966 966 kfree(vb);
+2 -2
drivers/virtio/virtio_ring.c
··· 2203 2203 vq->split.queue_size_in_bytes, 2204 2204 vq->split.vring.desc, 2205 2205 vq->split.queue_dma_addr); 2206 - 2207 - kfree(vq->split.desc_state); 2208 2206 } 2209 2207 } 2208 + if (!vq->packed_ring) 2209 + kfree(vq->split.desc_state); 2210 2210 list_del(&_vq->list); 2211 2211 kfree(vq); 2212 2212 }
+2
drivers/watchdog/iTCO_vendor.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* iTCO Vendor Specific Support hooks */ 3 3 #ifdef CONFIG_ITCO_VENDOR_SUPPORT 4 + extern int iTCO_vendorsupport; 4 5 extern void iTCO_vendor_pre_start(struct resource *, unsigned int); 5 6 extern void iTCO_vendor_pre_stop(struct resource *); 6 7 extern int iTCO_vendor_check_noreboot_on(void); 7 8 #else 9 + #define iTCO_vendorsupport 0 8 10 #define iTCO_vendor_pre_start(acpibase, heartbeat) {} 9 11 #define iTCO_vendor_pre_stop(acpibase) {} 10 12 #define iTCO_vendor_check_noreboot_on() 1
+9 -7
drivers/watchdog/iTCO_vendor_support.c
··· 39 39 /* Broken BIOS */ 40 40 #define BROKEN_BIOS 911 41 41 42 - static int vendorsupport; 43 - module_param(vendorsupport, int, 0); 42 + int iTCO_vendorsupport; 43 + EXPORT_SYMBOL(iTCO_vendorsupport); 44 + 45 + module_param_named(vendorsupport, iTCO_vendorsupport, int, 0); 44 46 MODULE_PARM_DESC(vendorsupport, "iTCO vendor specific support mode, default=" 45 47 "0 (none), 1=SuperMicro Pent3, 911=Broken SMI BIOS"); 46 48 ··· 154 152 void iTCO_vendor_pre_start(struct resource *smires, 155 153 unsigned int heartbeat) 156 154 { 157 - switch (vendorsupport) { 155 + switch (iTCO_vendorsupport) { 158 156 case SUPERMICRO_OLD_BOARD: 159 157 supermicro_old_pre_start(smires); 160 158 break; ··· 167 165 168 166 void iTCO_vendor_pre_stop(struct resource *smires) 169 167 { 170 - switch (vendorsupport) { 168 + switch (iTCO_vendorsupport) { 171 169 case SUPERMICRO_OLD_BOARD: 172 170 supermicro_old_pre_stop(smires); 173 171 break; ··· 180 178 181 179 int iTCO_vendor_check_noreboot_on(void) 182 180 { 183 - switch (vendorsupport) { 181 + switch (iTCO_vendorsupport) { 184 182 case SUPERMICRO_OLD_BOARD: 185 183 return 0; 186 184 default: ··· 191 189 192 190 static int __init iTCO_vendor_init_module(void) 193 191 { 194 - if (vendorsupport == SUPERMICRO_NEW_BOARD) { 192 + if (iTCO_vendorsupport == SUPERMICRO_NEW_BOARD) { 195 193 pr_warn("Option vendorsupport=%d is no longer supported, " 196 194 "please use the w83627hf_wdt driver instead\n", 197 195 SUPERMICRO_NEW_BOARD); 198 196 return -EINVAL; 199 197 } 200 - pr_info("vendor-support=%d\n", vendorsupport); 198 + pr_info("vendor-support=%d\n", iTCO_vendorsupport); 201 199 return 0; 202 200 } 203 201
+16 -12
drivers/watchdog/iTCO_wdt.c
··· 459 459 if (!p->tco_res) 460 460 return -ENODEV; 461 461 462 - p->smi_res = platform_get_resource(pdev, IORESOURCE_IO, ICH_RES_IO_SMI); 463 - if (!p->smi_res) 464 - return -ENODEV; 465 - 466 462 p->iTCO_version = pdata->version; 467 463 p->pci_dev = to_pci_dev(dev->parent); 464 + 465 + p->smi_res = platform_get_resource(pdev, IORESOURCE_IO, ICH_RES_IO_SMI); 466 + if (p->smi_res) { 467 + /* The TCO logic uses the TCO_EN bit in the SMI_EN register */ 468 + if (!devm_request_region(dev, p->smi_res->start, 469 + resource_size(p->smi_res), 470 + pdev->name)) { 471 + pr_err("I/O address 0x%04llx already in use, device disabled\n", 472 + (u64)SMI_EN(p)); 473 + return -EBUSY; 474 + } 475 + } else if (iTCO_vendorsupport || 476 + turn_SMI_watchdog_clear_off >= p->iTCO_version) { 477 + pr_err("SMI I/O resource is missing\n"); 478 + return -ENODEV; 479 + } 468 480 469 481 iTCO_wdt_no_reboot_bit_setup(p, pdata); 470 482 ··· 504 492 /* Set the NO_REBOOT bit to prevent later reboots, just for sure */ 505 493 p->update_no_reboot_bit(p->no_reboot_priv, true); 506 494 507 - /* The TCO logic uses the TCO_EN bit in the SMI_EN register */ 508 - if (!devm_request_region(dev, p->smi_res->start, 509 - resource_size(p->smi_res), 510 - pdev->name)) { 511 - pr_err("I/O address 0x%04llx already in use, device disabled\n", 512 - (u64)SMI_EN(p)); 513 - return -EBUSY; 514 - } 515 495 if (turn_SMI_watchdog_clear_off >= p->iTCO_version) { 516 496 /* 517 497 * Bit 13: TCO_EN -> 0
+1 -1
fs/afs/addr_list.c
··· 19 19 void afs_put_addrlist(struct afs_addr_list *alist) 20 20 { 21 21 if (alist && refcount_dec_and_test(&alist->usage)) 22 - call_rcu(&alist->rcu, (rcu_callback_t)kfree); 22 + kfree_rcu(alist, rcu); 23 23 } 24 24 25 25 /*
+1 -1
fs/afs/internal.h
··· 81 81 * List of server addresses. 82 82 */ 83 83 struct afs_addr_list { 84 - struct rcu_head rcu; /* Must be first */ 84 + struct rcu_head rcu; 85 85 refcount_t usage; 86 86 u32 version; /* Version */ 87 87 unsigned char max_addrs;
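The comment change in internal.h goes with the addr_list.c hunk above it: casting `kfree` to `rcu_callback_t` only freed the right pointer while `rcu` was the first member, whereas `kfree_rcu()` subtracts the member's offset itself, so the field may now live anywhere in the struct. That offset arithmetic, modeled in userspace (struct and function names are hypothetical stand-ins):

```c
#include <assert.h>
#include <stddef.h>

struct rcu_head_m { void *next; void (*func)(void *); };

/* rcu no longer needs to be the first member once the callback
 * recovers the enclosing object by offset, container_of()-style. */
struct afs_addr_list_m {
	unsigned int usage;
	struct rcu_head_m rcu;
};

/* What kfree_rcu() effectively does: step back from the embedded
 * rcu_head to the start of the allocation before freeing it. */
static void *outer_object(struct rcu_head_m *head)
{
	return (char *)head - offsetof(struct afs_addr_list_m, rcu);
}
```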
+2 -2
fs/btrfs/block-group.c
··· 856 856 found_raid1c34 = true; 857 857 up_read(&sinfo->groups_sem); 858 858 } 859 - if (found_raid56) 859 + if (!found_raid56) 860 860 btrfs_clear_fs_incompat(fs_info, RAID56); 861 - if (found_raid1c34) 861 + if (!found_raid1c34) 862 862 btrfs_clear_fs_incompat(fs_info, RAID1C34); 863 863 } 864 864 }
+4
fs/btrfs/inode.c
··· 9496 9496 ret = btrfs_sync_log(trans, BTRFS_I(old_inode)->root, &ctx); 9497 9497 if (ret) 9498 9498 commit_transaction = true; 9499 + } else if (sync_log) { 9500 + mutex_lock(&root->log_mutex); 9501 + list_del(&ctx.list); 9502 + mutex_unlock(&root->log_mutex); 9499 9503 } 9500 9504 if (commit_transaction) { 9501 9505 ret = btrfs_commit_transaction(trans);
-1
fs/cifs/dir.c
··· 555 555 if (server->ops->close) 556 556 server->ops->close(xid, tcon, &fid); 557 557 cifs_del_pending_open(&open); 558 - fput(file); 559 558 rc = -ENOMEM; 560 559 } 561 560
+2 -1
fs/cifs/file.c
··· 1169 1169 rc = posix_lock_file(file, flock, NULL); 1170 1170 up_write(&cinode->lock_sem); 1171 1171 if (rc == FILE_LOCK_DEFERRED) { 1172 - rc = wait_event_interruptible(flock->fl_wait, !flock->fl_blocker); 1172 + rc = wait_event_interruptible(flock->fl_wait, 1173 + list_empty(&flock->fl_blocked_member)); 1173 1174 if (!rc) 1174 1175 goto try_again; 1175 1176 locks_delete_block(flock);
+1 -1
fs/cifs/inode.c
··· 2191 2191 if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_GID)) 2192 2192 stat->gid = current_fsgid(); 2193 2193 } 2194 - return rc; 2194 + return 0; 2195 2195 } 2196 2196 2197 2197 int cifs_fiemap(struct inode *inode, struct fiemap_extent_info *fei, u64 start,
+3 -1
fs/cifs/smb2ops.c
··· 2222 2222 goto qdf_free; 2223 2223 } 2224 2224 2225 + atomic_inc(&tcon->num_remote_opens); 2226 + 2225 2227 qd_rsp = (struct smb2_query_directory_rsp *)rsp_iov[1].iov_base; 2226 2228 if (qd_rsp->sync_hdr.Status == STATUS_NO_MORE_FILES) { 2227 2229 trace_smb3_query_dir_done(xid, fid->persistent_fid, ··· 3419 3417 if (rc) 3420 3418 goto out; 3421 3419 3422 - if (out_data_len < sizeof(struct file_allocated_range_buffer)) { 3420 + if (out_data_len && out_data_len < sizeof(struct file_allocated_range_buffer)) { 3423 3421 rc = -EINVAL; 3424 3422 goto out; 3425 3423 }
+9
fs/crypto/keysetup.c
··· 539 539 mk = ci->ci_master_key->payload.data[0]; 540 540 541 541 /* 542 + * With proper, non-racy use of FS_IOC_REMOVE_ENCRYPTION_KEY, all inodes 543 + * protected by the key were cleaned by sync_filesystem(). But if 544 + * userspace is still using the files, inodes can be dirtied between 545 + * then and now. We mustn't lose any writes, so skip dirty inodes here. 546 + */ 547 + if (inode->i_state & I_DIRTY_ALL) 548 + return 0; 549 + 550 + /* 542 551 * Note: since we aren't holding ->mk_secret_sem, the result here can 543 552 * immediately become outdated. But there's no correctness problem with 544 553 * unnecessarily evicting. Nor is there a correctness problem with not
+4 -4
fs/eventpoll.c
··· 1854 1854 waiter = true; 1855 1855 init_waitqueue_entry(&wait, current); 1856 1856 1857 - spin_lock_irq(&ep->wq.lock); 1857 + write_lock_irq(&ep->lock); 1858 1858 __add_wait_queue_exclusive(&ep->wq, &wait); 1859 - spin_unlock_irq(&ep->wq.lock); 1859 + write_unlock_irq(&ep->lock); 1860 1860 } 1861 1861 1862 1862 for (;;) { ··· 1904 1904 goto fetch_events; 1905 1905 1906 1906 if (waiter) { 1907 - spin_lock_irq(&ep->wq.lock); 1907 + write_lock_irq(&ep->lock); 1908 1908 __remove_wait_queue(&ep->wq, &wait); 1909 - spin_unlock_irq(&ep->wq.lock); 1909 + write_unlock_irq(&ep->lock); 1910 1910 } 1911 1911 1912 1912 return res;
+6 -1
fs/file.c
··· 540 540 return __alloc_fd(current->files, start, rlimit(RLIMIT_NOFILE), flags); 541 541 } 542 542 543 + int __get_unused_fd_flags(unsigned flags, unsigned long nofile) 544 + { 545 + return __alloc_fd(current->files, 0, nofile, flags); 546 + } 547 + 543 548 int get_unused_fd_flags(unsigned flags) 544 549 { 545 - return __alloc_fd(current->files, 0, rlimit(RLIMIT_NOFILE), flags); 550 + return __get_unused_fd_flags(flags, rlimit(RLIMIT_NOFILE)); 546 551 } 547 552 EXPORT_SYMBOL(get_unused_fd_flags); 548 553
+3 -3
fs/fuse/dev.c
··· 276 276 void fuse_request_end(struct fuse_conn *fc, struct fuse_req *req) 277 277 { 278 278 struct fuse_iqueue *fiq = &fc->iq; 279 - bool async; 280 279 281 280 if (test_and_set_bit(FR_FINISHED, &req->flags)) 282 281 goto put_request; 283 282 284 - async = req->args->end; 285 283 /* 286 284 * test_and_set_bit() implies smp_mb() between bit 287 285 * changing and below intr_entry check. Pairs with ··· 322 324 wake_up(&req->waitq); 323 325 } 324 326 325 - if (async) 327 + if (test_bit(FR_ASYNC, &req->flags)) 326 328 req->args->end(fc, req->args, req->out.h.error); 327 329 put_request: 328 330 fuse_put_request(fc, req); ··· 469 471 req->in.h.opcode = args->opcode; 470 472 req->in.h.nodeid = args->nodeid; 471 473 req->args = args; 474 + if (args->end) 475 + __set_bit(FR_ASYNC, &req->flags); 472 476 } 473 477 474 478 ssize_t fuse_simple_request(struct fuse_conn *fc, struct fuse_args *args)
+2
fs/fuse/fuse_i.h
··· 301 301 * FR_SENT: request is in userspace, waiting for an answer 302 302 * FR_FINISHED: request is finished 303 303 * FR_PRIVATE: request is on private list 304 + * FR_ASYNC: request is asynchronous 304 305 */ 305 306 enum fuse_req_flag { 306 307 FR_ISREPLY, ··· 315 314 FR_SENT, 316 315 FR_FINISHED, 317 316 FR_PRIVATE, 317 + FR_ASYNC, 318 318 }; 319 319 320 320 /**
+1 -1
fs/gfs2/inode.c
··· 1248 1248 if (!(file->f_mode & FMODE_OPENED)) 1249 1249 return finish_no_open(file, d); 1250 1250 dput(d); 1251 - return 0; 1251 + return excl && (flags & O_CREAT) ? -EEXIST : 0; 1252 1252 } 1253 1253 1254 1254 BUG_ON(d != NULL);
+1
fs/inode.c
··· 138 138 inode->i_sb = sb; 139 139 inode->i_blkbits = sb->s_blocksize_bits; 140 140 inode->i_flags = 0; 141 + atomic64_set(&inode->i_sequence, 0); 141 142 atomic_set(&inode->i_count, 1); 142 143 inode->i_op = &empty_iops; 143 144 inode->i_fop = &no_open_fops;
+30 -19
fs/io_uring.c
···
 	struct llist_head	put_llist;
 	struct work_struct	ref_work;
 	struct completion	done;
-	struct rcu_head		rcu;
 };
 
 struct io_ring_ctx {
···
 	struct sockaddr __user	*addr;
 	int __user		*addr_len;
 	int			flags;
+	unsigned long		nofile;
 };
 
 struct io_sync {
···
 	struct filename		*filename;
 	struct statx __user	*buffer;
 	struct open_how		how;
+	unsigned long		nofile;
 };
 
 struct io_files_update {
···
 		return ret;
 	}
 
+	req->open.nofile = rlimit(RLIMIT_NOFILE);
 	req->flags |= REQ_F_NEED_CLEANUP;
 	return 0;
 }
···
 		return ret;
 	}
 
+	req->open.nofile = rlimit(RLIMIT_NOFILE);
 	req->flags |= REQ_F_NEED_CLEANUP;
 	return 0;
 }
···
 	if (ret)
 		goto err;
 
-	ret = get_unused_fd_flags(req->open.how.flags);
+	ret = __get_unused_fd_flags(req->open.how.flags, req->open.nofile);
 	if (ret < 0)
 		goto err;
···
 	accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
 	accept->addr_len = u64_to_user_ptr(READ_ONCE(sqe->addr2));
 	accept->flags = READ_ONCE(sqe->accept_flags);
+	accept->nofile = rlimit(RLIMIT_NOFILE);
 	return 0;
 #else
 	return -EOPNOTSUPP;
···
 	file_flags = force_nonblock ? O_NONBLOCK : 0;
 	ret = __sys_accept4_file(req->file, file_flags, accept->addr,
-					accept->addr_len, accept->flags);
+					accept->addr_len, accept->flags,
+					accept->nofile);
 	if (ret == -EAGAIN && force_nonblock)
 		return -EAGAIN;
 	if (ret == -ERESTARTSYS)
···
 {
 	ssize_t ret = 0;
 
+	if (!sqe)
+		return 0;
+
 	if (io_op_defs[req->opcode].file_table) {
 		ret = io_grab_files(req);
 		if (unlikely(ret))
···
 	if (sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK)) {
 		req->flags |= REQ_F_LINK;
 		INIT_LIST_HEAD(&req->link_list);
+
+		if (io_alloc_async_ctx(req)) {
+			ret = -EAGAIN;
+			goto err_req;
+		}
 		ret = io_req_defer_prep(req, sqe);
 		if (ret)
 			req->flags |= REQ_F_FAIL_LINK;
···
 	complete(&data->done);
 }
 
-static void __io_file_ref_exit_and_free(struct rcu_head *rcu)
+static void io_file_ref_exit_and_free(struct work_struct *work)
 {
-	struct fixed_file_data *data = container_of(rcu, struct fixed_file_data,
-							rcu);
+	struct fixed_file_data *data;
+
+	data = container_of(work, struct fixed_file_data, ref_work);
+
+	/*
+	 * Ensure any percpu-ref atomic switch callback has run, it could have
+	 * been in progress when the files were being unregistered. Once
+	 * that's done, we can safely exit and free the ref and containing
+	 * data structure.
+	 */
+	rcu_barrier();
 	percpu_ref_exit(&data->refs);
 	kfree(data);
-}
-
-static void io_file_ref_exit_and_free(struct rcu_head *rcu)
-{
-	/*
-	 * We need to order our exit+free call against the potentially
-	 * existing call_rcu() for switching to atomic. One way to do that
-	 * is to have this rcu callback queue the final put and free, as we
-	 * could otherwise have a pre-existing atomic switch complete _after_
-	 * the free callback we queued.
-	 */
-	call_rcu(rcu, __io_file_ref_exit_and_free);
 }
 
 static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
···
 	for (i = 0; i < nr_tables; i++)
 		kfree(data->table[i].files);
 	kfree(data->table);
-	call_rcu(&data->rcu, io_file_ref_exit_and_free);
+	INIT_WORK(&data->ref_work, io_file_ref_exit_and_free);
+	queue_work(system_wq, &data->ref_work);
 	ctx->file_data = NULL;
 	ctx->nr_user_files = 0;
 	return 0;
+48 -6
fs/locks.c
···
 {
 	locks_delete_global_blocked(waiter);
 	list_del_init(&waiter->fl_blocked_member);
-	waiter->fl_blocker = NULL;
 }
 
 static void __locks_wake_up_blocks(struct file_lock *blocker)
···
 			waiter->fl_lmops->lm_notify(waiter);
 		else
 			wake_up(&waiter->fl_wait);
+
+		/*
+		 * The setting of fl_blocker to NULL marks the "done"
+		 * point in deleting a block. Paired with acquire at the top
+		 * of locks_delete_block().
+		 */
+		smp_store_release(&waiter->fl_blocker, NULL);
 	}
 }
···
 {
 	int status = -ENOENT;
 
+	/*
+	 * If fl_blocker is NULL, it won't be set again as this thread "owns"
+	 * the lock and is the only one that might try to claim the lock.
+	 *
+	 * We use acquire/release to manage fl_blocker so that we can
+	 * optimize away taking the blocked_lock_lock in many cases.
+	 *
+	 * The smp_load_acquire guarantees two things:
+	 *
+	 * 1/ that fl_blocked_requests can be tested locklessly. If something
+	 * was recently added to that list it must have been in a locked region
+	 * *before* the locked region when fl_blocker was set to NULL.
+	 *
+	 * 2/ that no other thread is accessing 'waiter', so it is safe to free
+	 * it. __locks_wake_up_blocks is careful not to touch waiter after
+	 * fl_blocker is released.
+	 *
+	 * If a lockless check of fl_blocker shows it to be NULL, we know that
+	 * no new locks can be inserted into its fl_blocked_requests list, and
+	 * can avoid doing anything further if the list is empty.
+	 */
+	if (!smp_load_acquire(&waiter->fl_blocker) &&
+	    list_empty(&waiter->fl_blocked_requests))
+		return status;
+
 	spin_lock(&blocked_lock_lock);
 	if (waiter->fl_blocker)
 		status = 0;
 	__locks_wake_up_blocks(waiter);
 	__locks_delete_block(waiter);
+
+	/*
+	 * The setting of fl_blocker to NULL marks the "done" point in deleting
+	 * a block. Paired with acquire at the top of this function.
+	 */
+	smp_store_release(&waiter->fl_blocker, NULL);
 	spin_unlock(&blocked_lock_lock);
 	return status;
 }
···
 		error = posix_lock_inode(inode, fl, NULL);
 		if (error != FILE_LOCK_DEFERRED)
 			break;
-		error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker);
+		error = wait_event_interruptible(fl->fl_wait,
+					list_empty(&fl->fl_blocked_member));
 		if (error)
 			break;
 	}
···
 		error = posix_lock_inode(inode, &fl, NULL);
 		if (error != FILE_LOCK_DEFERRED)
 			break;
-		error = wait_event_interruptible(fl.fl_wait, !fl.fl_blocker);
+		error = wait_event_interruptible(fl.fl_wait,
+					list_empty(&fl.fl_blocked_member));
 		if (!error) {
 			/*
 			 * If we've been sleeping someone might have
···
 	locks_dispose_list(&dispose);
 	error = wait_event_interruptible_timeout(new_fl->fl_wait,
-						!new_fl->fl_blocker, break_time);
+					list_empty(&new_fl->fl_blocked_member),
+					break_time);
 
 	percpu_down_read(&file_rwsem);
 	spin_lock(&ctx->flc_lock);
···
 		error = flock_lock_inode(inode, fl);
 		if (error != FILE_LOCK_DEFERRED)
 			break;
-		error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker);
+		error = wait_event_interruptible(fl->fl_wait,
+					list_empty(&fl->fl_blocked_member));
 		if (error)
 			break;
 	}
···
 		error = vfs_lock_file(filp, cmd, fl, NULL);
 		if (error != FILE_LOCK_DEFERRED)
 			break;
-		error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker);
+		error = wait_event_interruptible(fl->fl_wait,
+					list_empty(&fl->fl_blocked_member));
 		if (error)
 			break;
 	}
+1
fs/nfs/client.c
··· 153 153 if ((clp = kzalloc(sizeof(*clp), GFP_KERNEL)) == NULL) 154 154 goto error_0; 155 155 156 + clp->cl_minorversion = cl_init->minorversion; 156 157 clp->cl_nfs_mod = cl_init->nfs_mod; 157 158 if (!try_module_get(clp->cl_nfs_mod->owner)) 158 159 goto error_dealloc;
+9
fs/nfs/fs_context.c
··· 832 832 if (len > maxnamlen) 833 833 goto out_hostname; 834 834 835 + kfree(ctx->nfs_server.hostname); 836 + 835 837 /* N.B. caller will free nfs_server.hostname in all cases */ 836 838 ctx->nfs_server.hostname = kmemdup_nul(dev_name, len, GFP_KERNEL); 837 839 if (!ctx->nfs_server.hostname) ··· 1241 1239 goto out_version_unavailable; 1242 1240 } 1243 1241 ctx->nfs_mod = nfs_mod; 1242 + } 1243 + 1244 + /* Ensure the filesystem context has the correct fs_type */ 1245 + if (fc->fs_type != ctx->nfs_mod->nfs_fs) { 1246 + module_put(fc->fs_type->owner); 1247 + __module_get(ctx->nfs_mod->nfs_fs->owner); 1248 + fc->fs_type = ctx->nfs_mod->nfs_fs; 1244 1249 } 1245 1250 return 0; 1246 1251
+2
fs/nfs/fscache.c
··· 31 31 struct nfs_server_key { 32 32 struct { 33 33 uint16_t nfsversion; /* NFS protocol version */ 34 + uint32_t minorversion; /* NFSv4 minor version */ 34 35 uint16_t family; /* address family */ 35 36 __be16 port; /* IP port */ 36 37 } hdr; ··· 56 55 57 56 memset(&key, 0, sizeof(key)); 58 57 key.hdr.nfsversion = clp->rpc_ops->version; 58 + key.hdr.minorversion = clp->cl_minorversion; 59 59 key.hdr.family = clp->cl_addr.ss_family; 60 60 61 61 switch (clp->cl_addr.ss_family) {
+1 -1
fs/nfs/namespace.c
··· 153 153 /* Open a new filesystem context, transferring parameters from the 154 154 * parent superblock, including the network namespace. 155 155 */ 156 - fc = fs_context_for_submount(&nfs_fs_type, path->dentry); 156 + fc = fs_context_for_submount(path->mnt->mnt_sb->s_type, path->dentry); 157 157 if (IS_ERR(fc)) 158 158 return ERR_CAST(fc); 159 159
-1
fs/nfs/nfs4client.c
··· 216 216 INIT_LIST_HEAD(&clp->cl_ds_clients); 217 217 rpc_init_wait_queue(&clp->cl_rpcwaitq, "NFS client"); 218 218 clp->cl_state = 1 << NFS4CLNT_LEASE_EXPIRED; 219 - clp->cl_minorversion = cl_init->minorversion; 220 219 clp->cl_mvops = nfs_v4_minor_ops[cl_init->minorversion]; 221 220 clp->cl_mig_gen = 1; 222 221 #if IS_ENABLED(CONFIG_NFS_V4_1)
-3
fs/open.c
··· 860 860 * the return value of d_splice_alias(), then the caller needs to perform dput() 861 861 * on it after finish_open(). 862 862 * 863 - * On successful return @file is a fully instantiated open file. After this, if 864 - * an error occurs in ->atomic_open(), it needs to clean up with fput(). 865 - * 866 863 * Returns zero on success or -errno if the open failed. 867 864 */ 868 865 int finish_open(struct file *file, struct dentry *dentry,
+1
fs/overlayfs/Kconfig
··· 93 93 bool "Overlayfs: auto enable inode number mapping" 94 94 default n 95 95 depends on OVERLAY_FS 96 + depends on 64BIT 96 97 help 97 98 If this config option is enabled then overlay filesystems will use 98 99 unused high bits in undelying filesystem inode numbers to map all
+6
fs/overlayfs/file.c
··· 244 244 if (iocb->ki_flags & IOCB_WRITE) { 245 245 struct inode *inode = file_inode(orig_iocb->ki_filp); 246 246 247 + /* Actually acquired in ovl_write_iter() */ 248 + __sb_writers_acquired(file_inode(iocb->ki_filp)->i_sb, 249 + SB_FREEZE_WRITE); 247 250 file_end_write(iocb->ki_filp); 248 251 ovl_copyattr(ovl_inode_real(inode), inode); 249 252 } ··· 349 346 goto out; 350 347 351 348 file_start_write(real.file); 349 + /* Pacify lockdep, same trick as done in aio_write() */ 350 + __sb_writers_release(file_inode(real.file)->i_sb, 351 + SB_FREEZE_WRITE); 352 352 aio_req->fd = real; 353 353 real.flags = 0; 354 354 aio_req->orig_iocb = iocb;
+6 -1
fs/overlayfs/overlayfs.h
··· 318 318 return ovl_same_dev(sb) ? OVL_FS(sb)->xino_mode : 0; 319 319 } 320 320 321 - static inline int ovl_inode_lock(struct inode *inode) 321 + static inline void ovl_inode_lock(struct inode *inode) 322 + { 323 + mutex_lock(&OVL_I(inode)->lock); 324 + } 325 + 326 + static inline int ovl_inode_lock_interruptible(struct inode *inode) 322 327 { 323 328 return mutex_lock_interruptible(&OVL_I(inode)->lock); 324 329 }
+8 -1
fs/overlayfs/super.c
··· 1411 1411 if (ofs->config.xino == OVL_XINO_ON) 1412 1412 pr_info("\"xino=on\" is useless with all layers on same fs, ignore.\n"); 1413 1413 ofs->xino_mode = 0; 1414 + } else if (ofs->config.xino == OVL_XINO_OFF) { 1415 + ofs->xino_mode = -1; 1414 1416 } else if (ofs->config.xino == OVL_XINO_ON && ofs->xino_mode < 0) { 1415 1417 /* 1416 1418 * This is a roundup of number of bits needed for encoding ··· 1625 1623 sb->s_stack_depth = 0; 1626 1624 sb->s_maxbytes = MAX_LFS_FILESIZE; 1627 1625 /* Assume underlaying fs uses 32bit inodes unless proven otherwise */ 1628 - if (ofs->config.xino != OVL_XINO_OFF) 1626 + if (ofs->config.xino != OVL_XINO_OFF) { 1629 1627 ofs->xino_mode = BITS_PER_LONG - 32; 1628 + if (!ofs->xino_mode) { 1629 + pr_warn("xino not supported on 32bit kernel, falling back to xino=off.\n"); 1630 + ofs->config.xino = OVL_XINO_OFF; 1631 + } 1632 + } 1630 1633 1631 1634 /* alloc/destroy_inode needed for setting up traps in inode cache */ 1632 1635 sb->s_op = &ovl_super_operations;
+2 -2
fs/overlayfs/util.c
··· 509 509 struct inode *inode = d_inode(dentry); 510 510 int err; 511 511 512 - err = ovl_inode_lock(inode); 512 + err = ovl_inode_lock_interruptible(inode); 513 513 if (!err && ovl_already_copied_up_locked(dentry, flags)) { 514 514 err = 1; /* Already copied up */ 515 515 ovl_inode_unlock(inode); ··· 764 764 return err; 765 765 } 766 766 767 - err = ovl_inode_lock(inode); 767 + err = ovl_inode_lock_interruptible(inode); 768 768 if (err) 769 769 return err; 770 770
+4 -2
include/crypto/curve25519.h
··· 33 33 const u8 secret[CURVE25519_KEY_SIZE], 34 34 const u8 basepoint[CURVE25519_KEY_SIZE]) 35 35 { 36 - if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_CURVE25519)) 36 + if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_CURVE25519) && 37 + (!IS_ENABLED(CONFIG_CRYPTO_CURVE25519_X86) || IS_ENABLED(CONFIG_AS_ADX))) 37 38 curve25519_arch(mypublic, secret, basepoint); 38 39 else 39 40 curve25519_generic(mypublic, secret, basepoint); ··· 50 49 CURVE25519_KEY_SIZE))) 51 50 return false; 52 51 53 - if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_CURVE25519)) 52 + if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_CURVE25519) && 53 + (!IS_ENABLED(CONFIG_CRYPTO_CURVE25519_X86) || IS_ENABLED(CONFIG_AS_ADX))) 54 54 curve25519_base_arch(pub, secret); 55 55 else 56 56 curve25519_generic(pub, secret, curve25519_base_point);
+2 -2
include/drm/drm_dp_mst_helper.h
··· 81 81 * &drm_dp_mst_topology_mgr.base.lock. 82 82 * @num_sdp_stream_sinks: Number of stream sinks. Protected by 83 83 * &drm_dp_mst_topology_mgr.base.lock. 84 - * @available_pbn: Available bandwidth for this port. Protected by 84 + * @full_pbn: Max possible bandwidth for this port. Protected by 85 85 * &drm_dp_mst_topology_mgr.base.lock. 86 86 * @next: link to next port on this branch device 87 87 * @aux: i2c aux transport to talk to device connected to this port, protected ··· 126 126 u8 dpcd_rev; 127 127 u8 num_sdp_streams; 128 128 u8 num_sdp_stream_sinks; 129 - uint16_t available_pbn; 129 + uint16_t full_pbn; 130 130 struct list_head next; 131 131 /** 132 132 * @mstb: the branch device connected to this port, if there is one.
+2 -2
include/dt-bindings/clock/imx8mn-clock.h
··· 122 122 #define IMX8MN_CLK_I2C1 105 123 123 #define IMX8MN_CLK_I2C2 106 124 124 #define IMX8MN_CLK_I2C3 107 125 - #define IMX8MN_CLK_I2C4 118 126 - #define IMX8MN_CLK_UART1 119 125 + #define IMX8MN_CLK_I2C4 108 126 + #define IMX8MN_CLK_UART1 109 127 127 #define IMX8MN_CLK_UART2 110 128 128 #define IMX8MN_CLK_UART3 111 129 129 #define IMX8MN_CLK_UART4 112
+1
include/linux/cgroup.h
··· 62 62 struct list_head *mg_tasks_head; 63 63 struct list_head *dying_tasks_head; 64 64 65 + struct list_head *cur_tasks_head; 65 66 struct css_set *cur_cset; 66 67 struct css_set *cur_dcset; 67 68 struct task_struct *cur_task;
+9 -5
include/linux/dmar.h
··· 69 69 extern struct rw_semaphore dmar_global_lock; 70 70 extern struct list_head dmar_drhd_units; 71 71 72 - #define for_each_drhd_unit(drhd) \ 73 - list_for_each_entry_rcu(drhd, &dmar_drhd_units, list) 72 + #define for_each_drhd_unit(drhd) \ 73 + list_for_each_entry_rcu(drhd, &dmar_drhd_units, list, \ 74 + dmar_rcu_check()) 74 75 75 76 #define for_each_active_drhd_unit(drhd) \ 76 - list_for_each_entry_rcu(drhd, &dmar_drhd_units, list) \ 77 + list_for_each_entry_rcu(drhd, &dmar_drhd_units, list, \ 78 + dmar_rcu_check()) \ 77 79 if (drhd->ignored) {} else 78 80 79 81 #define for_each_active_iommu(i, drhd) \ 80 - list_for_each_entry_rcu(drhd, &dmar_drhd_units, list) \ 82 + list_for_each_entry_rcu(drhd, &dmar_drhd_units, list, \ 83 + dmar_rcu_check()) \ 81 84 if (i=drhd->iommu, drhd->ignored) {} else 82 85 83 86 #define for_each_iommu(i, drhd) \ 84 - list_for_each_entry_rcu(drhd, &dmar_drhd_units, list) \ 87 + list_for_each_entry_rcu(drhd, &dmar_drhd_units, list, \ 88 + dmar_rcu_check()) \ 85 89 if (i=drhd->iommu, 0) {} else 86 90 87 91 static inline bool dmar_rcu_check(void)
+1
include/linux/file.h
··· 85 85 extern int replace_fd(unsigned fd, struct file *file, unsigned flags); 86 86 extern void set_close_on_exec(unsigned int fd, int flag); 87 87 extern bool get_close_on_exec(unsigned int fd); 88 + extern int __get_unused_fd_flags(unsigned flags, unsigned long nofile); 88 89 extern int get_unused_fd_flags(unsigned flags); 89 90 extern void put_unused_fd(unsigned int fd); 90 91
+1
include/linux/fs.h
··· 698 698 struct rcu_head i_rcu; 699 699 }; 700 700 atomic64_t i_version; 701 + atomic64_t i_sequence; /* see futex */ 701 702 atomic_t i_count; 702 703 atomic_t i_dio_count; 703 704 atomic_t i_writecount;
+10 -7
include/linux/futex.h
··· 31 31 32 32 union futex_key { 33 33 struct { 34 + u64 i_seq; 34 35 unsigned long pgoff; 35 - struct inode *inode; 36 - int offset; 36 + unsigned int offset; 37 37 } shared; 38 38 struct { 39 + union { 40 + struct mm_struct *mm; 41 + u64 __tmp; 42 + }; 39 43 unsigned long address; 40 - struct mm_struct *mm; 41 - int offset; 44 + unsigned int offset; 42 45 } private; 43 46 struct { 47 + u64 ptr; 44 48 unsigned long word; 45 - void *ptr; 46 - int offset; 49 + unsigned int offset; 47 50 } both; 48 51 }; 49 52 50 - #define FUTEX_KEY_INIT (union futex_key) { .both = { .ptr = NULL } } 53 + #define FUTEX_KEY_INIT (union futex_key) { .both = { .ptr = 0ULL } } 51 54 52 55 #ifdef CONFIG_FUTEX 53 56 enum {
+1 -12
include/linux/genhd.h
··· 245 245 !(disk->flags & GENHD_FL_NO_PART_SCAN); 246 246 } 247 247 248 - static inline bool disk_has_partitions(struct gendisk *disk) 249 - { 250 - bool ret = false; 251 - 252 - rcu_read_lock(); 253 - if (rcu_dereference(disk->part_tbl)->len > 1) 254 - ret = true; 255 - rcu_read_unlock(); 256 - 257 - return ret; 258 - } 259 - 260 248 static inline dev_t disk_devt(struct gendisk *disk) 261 249 { 262 250 return MKDEV(disk->major, disk->first_minor); ··· 286 298 287 299 extern struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, 288 300 sector_t sector); 301 + bool disk_has_partitions(struct gendisk *disk); 289 302 290 303 /* 291 304 * Macros to operate on percpu disk statistics:
+12 -6
include/linux/inet_diag.h
··· 2 2 #ifndef _INET_DIAG_H_ 3 3 #define _INET_DIAG_H_ 1 4 4 5 + #include <net/netlink.h> 5 6 #include <uapi/linux/inet_diag.h> 6 7 7 - struct net; 8 - struct sock; 9 8 struct inet_hashinfo; 10 - struct nlattr; 11 - struct nlmsghdr; 12 - struct sk_buff; 13 - struct netlink_callback; 14 9 15 10 struct inet_diag_handler { 16 11 void (*dump)(struct sk_buff *skb, ··· 57 62 58 63 void inet_diag_msg_common_fill(struct inet_diag_msg *r, struct sock *sk); 59 64 65 + static inline size_t inet_diag_msg_attrs_size(void) 66 + { 67 + return nla_total_size(1) /* INET_DIAG_SHUTDOWN */ 68 + + nla_total_size(1) /* INET_DIAG_TOS */ 69 + #if IS_ENABLED(CONFIG_IPV6) 70 + + nla_total_size(1) /* INET_DIAG_TCLASS */ 71 + + nla_total_size(1) /* INET_DIAG_SKV6ONLY */ 72 + #endif 73 + + nla_total_size(4) /* INET_DIAG_MARK */ 74 + + nla_total_size(4); /* INET_DIAG_CLASS_ID */ 75 + } 60 76 int inet_diag_msg_attrs_fill(struct sock *sk, struct sk_buff *skb, 61 77 struct inet_diag_msg *r, int ext, 62 78 struct user_namespace *user_ns, bool net_admin);
+2
include/linux/intel-iommu.h
··· 123 123 124 124 #define dmar_readq(a) readq(a) 125 125 #define dmar_writeq(a,v) writeq(v,a) 126 + #define dmar_readl(a) readl(a) 127 + #define dmar_writel(a, v) writel(v, a) 126 128 127 129 #define DMAR_VER_MAJOR(v) (((v) & 0xf0) >> 4) 128 130 #define DMAR_VER_MINOR(v) ((v) & 0x0f)
+1
include/linux/mmc/host.h
··· 333 333 MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 | \ 334 334 MMC_CAP_UHS_DDR50) 335 335 #define MMC_CAP_SYNC_RUNTIME_PM (1 << 21) /* Synced runtime PM suspends. */ 336 + #define MMC_CAP_NEED_RSP_BUSY (1 << 22) /* Commands with R1B can't use R1. */ 336 337 #define MMC_CAP_DRIVER_TYPE_A (1 << 23) /* Host supports Driver Type A */ 337 338 #define MMC_CAP_DRIVER_TYPE_C (1 << 24) /* Host supports Driver Type C */ 338 339 #define MMC_CAP_DRIVER_TYPE_D (1 << 25) /* Host supports Driver Type D */
+4 -4
include/linux/of_clk.h
··· 11 11 12 12 #if defined(CONFIG_COMMON_CLK) && defined(CONFIG_OF) 13 13 14 - unsigned int of_clk_get_parent_count(struct device_node *np); 15 - const char *of_clk_get_parent_name(struct device_node *np, int index); 14 + unsigned int of_clk_get_parent_count(const struct device_node *np); 15 + const char *of_clk_get_parent_name(const struct device_node *np, int index); 16 16 void of_clk_init(const struct of_device_id *matches); 17 17 18 18 #else /* !CONFIG_COMMON_CLK || !CONFIG_OF */ 19 19 20 - static inline unsigned int of_clk_get_parent_count(struct device_node *np) 20 + static inline unsigned int of_clk_get_parent_count(const struct device_node *np) 21 21 { 22 22 return 0; 23 23 } 24 - static inline const char *of_clk_get_parent_name(struct device_node *np, 24 + static inline const char *of_clk_get_parent_name(const struct device_node *np, 25 25 int index) 26 26 { 27 27 return NULL;
+1 -1
include/linux/page-flags.h
··· 311 311 312 312 __PAGEFLAG(Locked, locked, PF_NO_TAIL) 313 313 PAGEFLAG(Waiters, waiters, PF_ONLY_HEAD) __CLEARPAGEFLAG(Waiters, waiters, PF_ONLY_HEAD) 314 - PAGEFLAG(Error, error, PF_NO_COMPOUND) TESTCLEARFLAG(Error, error, PF_NO_COMPOUND) 314 + PAGEFLAG(Error, error, PF_NO_TAIL) TESTCLEARFLAG(Error, error, PF_NO_TAIL) 315 315 PAGEFLAG(Referenced, referenced, PF_HEAD) 316 316 TESTCLEARFLAG(Referenced, referenced, PF_HEAD) 317 317 __SETPAGEFLAG(Referenced, referenced, PF_HEAD)
+3
include/linux/phy.h
··· 357 357 * is_gigabit_capable: Set to true if PHY supports 1000Mbps 358 358 * has_fixups: Set to true if this phy has fixups/quirks. 359 359 * suspended: Set to true if this phy has been suspended successfully. 360 + * suspended_by_mdio_bus: Set to true if this phy was suspended by MDIO bus. 360 361 * sysfs_links: Internal boolean tracking sysfs symbolic links setup/removal. 361 362 * loopback_enabled: Set true if this phy has been loopbacked successfully. 362 363 * state: state of the PHY for management purposes ··· 397 396 unsigned is_gigabit_capable:1; 398 397 unsigned has_fixups:1; 399 398 unsigned suspended:1; 399 + unsigned suspended_by_mdio_bus:1; 400 400 unsigned sysfs_links:1; 401 401 unsigned loopback_enabled:1; 402 402 ··· 559 557 /* 560 558 * Checks if the PHY generated an interrupt. 561 559 * For multi-PHY devices with shared PHY interrupt pin 560 + * Set interrupt bits have to be cleared. 562 561 */ 563 562 int (*did_interrupt)(struct phy_device *phydev); 564 563
+1 -1
include/linux/platform_device.h
··· 24 24 int id; 25 25 bool id_auto; 26 26 struct device dev; 27 - u64 dma_mask; 27 + u64 platform_dma_mask; 28 28 u32 num_resources; 29 29 struct resource *resource; 30 30
+1 -1
include/linux/rhashtable.h
··· 972 972 /** 973 973 * rhashtable_lookup_get_insert_key - lookup and insert object into hash table 974 974 * @ht: hash table 975 + * @key: key 975 976 * @obj: pointer to hash head inside object 976 977 * @params: hash table parameters 977 - * @data: pointer to element data already in hashes 978 978 * 979 979 * Just like rhashtable_lookup_insert_key(), but this function returns the 980 980 * object if it exists, NULL if it does not and the insertion was successful,
+2 -1
include/linux/socket.h
··· 401 401 int addr_len); 402 402 extern int __sys_accept4_file(struct file *file, unsigned file_flags, 403 403 struct sockaddr __user *upeer_sockaddr, 404 - int __user *upeer_addrlen, int flags); 404 + int __user *upeer_addrlen, int flags, 405 + unsigned long nofile); 405 406 extern int __sys_accept4(int fd, struct sockaddr __user *upeer_sockaddr, 406 407 int __user *upeer_addrlen, int flags); 407 408 extern int __sys_socket(int family, int type, int protocol);
+3 -2
include/linux/vmalloc.h
··· 141 141 142 142 extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr, 143 143 unsigned long pgoff); 144 - void vmalloc_sync_all(void); 145 - 144 + void vmalloc_sync_mappings(void); 145 + void vmalloc_sync_unmappings(void); 146 + 146 147 /* 147 148 * Lowlevel-APIs (not for driver use!) 148 149 */
+16
include/linux/workqueue.h
··· 487 487 * 488 488 * We queue the work to the CPU on which it was submitted, but if the CPU dies 489 489 * it can be processed by another CPU. 490 + * 491 + * Memory-ordering properties: If it returns %true, guarantees that all stores 492 + * preceding the call to queue_work() in the program order will be visible from 493 + * the CPU which will execute @work by the time such work executes, e.g., 494 + * 495 + * { x is initially 0 } 496 + * 497 + * CPU0 CPU1 498 + * 499 + * WRITE_ONCE(x, 1); [ @work is being executed ] 500 + * r0 = queue_work(wq, work); r1 = READ_ONCE(x); 501 + * 502 + * Forbids: r0 == true && r1 == 0 490 503 */ 491 504 static inline bool queue_work(struct workqueue_struct *wq, 492 505 struct work_struct *work) ··· 559 546 * This puts a job in the kernel-global workqueue if it was not already 560 547 * queued and leaves it in the same position on the kernel-global 561 548 * workqueue otherwise. 549 + * 550 + * Shares the same memory-ordering properties of queue_work(), cf. the 551 + * DocBook header of queue_work(). 562 552 */ 563 553 static inline bool schedule_work(struct work_struct *work) 564 554 {
+1
include/net/fib_rules.h
··· 108 108 [FRA_OIFNAME] = { .type = NLA_STRING, .len = IFNAMSIZ - 1 }, \ 109 109 [FRA_PRIORITY] = { .type = NLA_U32 }, \ 110 110 [FRA_FWMARK] = { .type = NLA_U32 }, \ 111 + [FRA_TUN_ID] = { .type = NLA_U64 }, \ 111 112 [FRA_FWMASK] = { .type = NLA_U32 }, \ 112 113 [FRA_TABLE] = { .type = NLA_U32 }, \ 113 114 [FRA_SUPPRESS_PREFIXLEN] = { .type = NLA_U32 }, \
+1 -1
include/soc/mscc/ocelot_dev.h
··· 74 74 #define DEV_MAC_TAGS_CFG_TAG_ID_M GENMASK(31, 16) 75 75 #define DEV_MAC_TAGS_CFG_TAG_ID_X(x) (((x) & GENMASK(31, 16)) >> 16) 76 76 #define DEV_MAC_TAGS_CFG_VLAN_LEN_AWR_ENA BIT(2) 77 - #define DEV_MAC_TAGS_CFG_PB_ENA BIT(1) 77 + #define DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA BIT(1) 78 78 #define DEV_MAC_TAGS_CFG_VLAN_AWR_ENA BIT(0) 79 79 80 80 #define DEV_MAC_ADV_CHK_CFG 0x2c
+2
include/uapi/linux/in.h
··· 74 74 #define IPPROTO_UDPLITE IPPROTO_UDPLITE 75 75 IPPROTO_MPLS = 137, /* MPLS in IP (RFC 4023) */ 76 76 #define IPPROTO_MPLS IPPROTO_MPLS 77 + IPPROTO_ETHERNET = 143, /* Ethernet-within-IPv6 Encapsulation */ 78 + #define IPPROTO_ETHERNET IPPROTO_ETHERNET 77 79 IPPROTO_RAW = 255, /* Raw IP packets */ 78 80 #define IPPROTO_RAW IPPROTO_RAW 79 81 IPPROTO_MPTCP = 262, /* Multipath TCP connection */
+1 -2
init/Kconfig
··· 767 767 bool 768 768 769 769 config CC_HAS_INT128 770 - def_bool y 771 - depends on !$(cc-option,-D__SIZEOF_INT128__=0) 770 + def_bool !$(cc-option,$(m64-flag) -D__SIZEOF_INT128__=0) && 64BIT 772 771 773 772 # 774 773 # For architectures that know their GCC __int128 support is sound
+2 -1
kernel/cgroup/cgroup-v1.c
··· 471 471 */ 472 472 p++; 473 473 if (p >= end) { 474 + (*pos)++; 474 475 return NULL; 475 476 } else { 476 477 *pos = *p; ··· 783 782 784 783 pathbuf = kmalloc(PATH_MAX, GFP_KERNEL); 785 784 agentbuf = kstrdup(cgrp->root->release_agent_path, GFP_KERNEL); 786 - if (!pathbuf || !agentbuf) 785 + if (!pathbuf || !agentbuf || !strlen(agentbuf)) 787 786 goto out; 788 787 789 788 spin_lock_irq(&css_set_lock);
+30 -13
kernel/cgroup/cgroup.c
···
 static int cgroup_io_pressure_show(struct seq_file *seq, void *v)
 {
 	struct cgroup *cgrp = seq_css(seq)->cgroup;
-	struct psi_group *psi = cgroup_id(cgrp) == 1 ? &psi_system : &cgrp->psi;
+	struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : &cgrp->psi;
 
 	return psi_show(seq, psi, PSI_IO);
 }
 static int cgroup_memory_pressure_show(struct seq_file *seq, void *v)
 {
 	struct cgroup *cgrp = seq_css(seq)->cgroup;
-	struct psi_group *psi = cgroup_id(cgrp) == 1 ? &psi_system : &cgrp->psi;
+	struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : &cgrp->psi;
 
 	return psi_show(seq, psi, PSI_MEM);
 }
 static int cgroup_cpu_pressure_show(struct seq_file *seq, void *v)
 {
 	struct cgroup *cgrp = seq_css(seq)->cgroup;
-	struct psi_group *psi = cgroup_id(cgrp) == 1 ? &psi_system : &cgrp->psi;
+	struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : &cgrp->psi;
 
 	return psi_show(seq, psi, PSI_CPU);
 }
···
 		}
 	} while (!css_set_populated(cset) && list_empty(&cset->dying_tasks));
 
-	if (!list_empty(&cset->tasks))
+	if (!list_empty(&cset->tasks)) {
 		it->task_pos = cset->tasks.next;
-	else if (!list_empty(&cset->mg_tasks))
+		it->cur_tasks_head = &cset->tasks;
+	} else if (!list_empty(&cset->mg_tasks)) {
 		it->task_pos = cset->mg_tasks.next;
-	else
+		it->cur_tasks_head = &cset->mg_tasks;
+	} else {
 		it->task_pos = cset->dying_tasks.next;
+		it->cur_tasks_head = &cset->dying_tasks;
+	}
 
 	it->tasks_head = &cset->tasks;
 	it->mg_tasks_head = &cset->mg_tasks;
···
 	else
 		it->task_pos = it->task_pos->next;
 
-	if (it->task_pos == it->tasks_head)
+	if (it->task_pos == it->tasks_head) {
 		it->task_pos = it->mg_tasks_head->next;
-	if (it->task_pos == it->mg_tasks_head)
+		it->cur_tasks_head = it->mg_tasks_head;
+	}
+	if (it->task_pos == it->mg_tasks_head) {
 		it->task_pos = it->dying_tasks_head->next;
+		it->cur_tasks_head = it->dying_tasks_head;
+	}
 	if (it->task_pos == it->dying_tasks_head)
 		css_task_iter_advance_css_set(it);
 } else {
···
 			goto repeat;
 
 		/* and dying leaders w/o live member threads */
-		if (!atomic_read(&task->signal->live))
+		if (it->cur_tasks_head == it->dying_tasks_head &&
+		    !atomic_read(&task->signal->live))
 			goto repeat;
 	} else {
 		/* skip all dying ones */
-		if (task->flags & PF_EXITING)
+		if (it->cur_tasks_head == it->dying_tasks_head)
 			goto repeat;
 	}
 }
···
 	struct kernfs_open_file *of = s->private;
 	struct css_task_iter *it = of->priv;
 
+	if (pos)
+		(*pos)++;
+
 	return 
css_task_iter_next(it); 4608 4602 } 4609 4603 ··· 4622 4610 * from position 0, so we can simply keep iterating on !0 *pos. 4623 4611 */ 4624 4612 if (!it) { 4625 - if (WARN_ON_ONCE((*pos)++)) 4613 + if (WARN_ON_ONCE((*pos))) 4626 4614 return ERR_PTR(-EINVAL); 4627 4615 4628 4616 it = kzalloc(sizeof(*it), GFP_KERNEL); ··· 4630 4618 return ERR_PTR(-ENOMEM); 4631 4619 of->priv = it; 4632 4620 css_task_iter_start(&cgrp->self, iter_flags, it); 4633 - } else if (!(*pos)++) { 4621 + } else if (!(*pos)) { 4634 4622 css_task_iter_end(it); 4635 4623 css_task_iter_start(&cgrp->self, iter_flags, it); 4636 - } 4624 + } else 4625 + return it->cur_task; 4637 4626 4638 4627 return cgroup_procs_next(s, NULL, NULL); 4639 4628 } ··· 6270 6257 cgroup_bpf_get(sock_cgroup_ptr(skcd)); 6271 6258 return; 6272 6259 } 6260 + 6261 + /* Don't associate the sock with unrelated interrupted task's cgroup. */ 6262 + if (in_interrupt()) 6263 + return; 6273 6264 6274 6265 rcu_read_lock(); 6275 6266
+55 -38
kernel/futex.c
··· 385 385 */ 386 386 static struct futex_hash_bucket *hash_futex(union futex_key *key) 387 387 { 388 - u32 hash = jhash2((u32*)&key->both.word, 389 - (sizeof(key->both.word)+sizeof(key->both.ptr))/4, 388 + u32 hash = jhash2((u32 *)key, offsetof(typeof(*key), both.offset) / 4, 390 389 key->both.offset); 390 + 391 391 return &futex_queues[hash & (futex_hashsize - 1)]; 392 392 } 393 393 ··· 429 429 430 430 switch (key->both.offset & (FUT_OFF_INODE|FUT_OFF_MMSHARED)) { 431 431 case FUT_OFF_INODE: 432 - ihold(key->shared.inode); /* implies smp_mb(); (B) */ 432 + smp_mb(); /* explicit smp_mb(); (B) */ 433 433 break; 434 434 case FUT_OFF_MMSHARED: 435 435 futex_get_mm(key); /* implies smp_mb(); (B) */ ··· 463 463 464 464 switch (key->both.offset & (FUT_OFF_INODE|FUT_OFF_MMSHARED)) { 465 465 case FUT_OFF_INODE: 466 - iput(key->shared.inode); 467 466 break; 468 467 case FUT_OFF_MMSHARED: 469 468 mmdrop(key->private.mm); ··· 504 505 return timeout; 505 506 } 506 507 508 + /* 509 + * Generate a machine wide unique identifier for this inode. 510 + * 511 + * This relies on u64 not wrapping in the life-time of the machine; which with 512 + * 1ns resolution means almost 585 years. 513 + * 514 + * This further relies on the fact that a well formed program will not unmap 515 + * the file while it has a (shared) futex waiting on it. This mapping will have 516 + * a file reference which pins the mount and inode. 517 + * 518 + * If for some reason an inode gets evicted and read back in again, it will get 519 + * a new sequence number and will _NOT_ match, even though it is the exact same 520 + * file. 521 + * 522 + * It is important that match_futex() will never have a false-positive, esp. 523 + * for PI futexes that can mess up the state. The above argues that false-negatives 524 + * are only possible for malformed programs. 
525 + */ 526 + static u64 get_inode_sequence_number(struct inode *inode) 527 + { 528 + static atomic64_t i_seq; 529 + u64 old; 530 + 531 + /* Does the inode already have a sequence number? */ 532 + old = atomic64_read(&inode->i_sequence); 533 + if (likely(old)) 534 + return old; 535 + 536 + for (;;) { 537 + u64 new = atomic64_add_return(1, &i_seq); 538 + if (WARN_ON_ONCE(!new)) 539 + continue; 540 + 541 + old = atomic64_cmpxchg_relaxed(&inode->i_sequence, 0, new); 542 + if (old) 543 + return old; 544 + return new; 545 + } 546 + } 547 + 507 548 /** 508 549 * get_futex_key() - Get parameters which are the keys for a futex 509 550 * @uaddr: virtual address of the futex ··· 556 517 * 557 518 * The key words are stored in @key on success. 558 519 * 559 - * For shared mappings, it's (page->index, file_inode(vma->vm_file), 560 - * offset_within_page). For private mappings, it's (uaddr, current->mm). 561 - * We can usually work out the index without swapping in the page. 520 + * For shared mappings (when @fshared), the key is: 521 + * ( inode->i_sequence, page->index, offset_within_page ) 522 + * [ also see get_inode_sequence_number() ] 523 + * 524 + * For private mappings (or when !@fshared), the key is: 525 + * ( current->mm, address, 0 ) 526 + * 527 + * This allows (cross process, where applicable) identification of the futex 528 + * without keeping the page pinned for the duration of the FUTEX_WAIT. 562 529 * 563 530 * lock_page() might sleep, the caller should not hold a spinlock. 564 531 */ ··· 704 659 key->private.mm = mm; 705 660 key->private.address = address; 706 661 707 - get_futex_key_refs(key); /* implies smp_mb(); (B) */ 708 - 709 662 } else { 710 663 struct inode *inode; 711 664 ··· 735 692 goto again; 736 693 } 737 694 738 - /* 739 - * Take a reference unless it is about to be freed. Previously 740 - * this reference was taken by ihold under the page lock 741 - * pinning the inode in place so i_lock was unnecessary. 
The 742 - * only way for this check to fail is if the inode was 743 - * truncated in parallel which is almost certainly an 744 - * application bug. In such a case, just retry. 745 - * 746 - * We are not calling into get_futex_key_refs() in file-backed 747 - * cases, therefore a successful atomic_inc return below will 748 - * guarantee that get_futex_key() will still imply smp_mb(); (B). 749 - */ 750 - if (!atomic_inc_not_zero(&inode->i_count)) { 751 - rcu_read_unlock(); 752 - put_page(page); 753 - 754 - goto again; 755 - } 756 - 757 - /* Should be impossible but lets be paranoid for now */ 758 - if (WARN_ON_ONCE(inode->i_mapping != mapping)) { 759 - err = -EFAULT; 760 - rcu_read_unlock(); 761 - iput(inode); 762 - 763 - goto out; 764 - } 765 - 766 695 key->both.offset |= FUT_OFF_INODE; /* inode-based key */ 767 - key->shared.inode = inode; 696 + key->shared.i_seq = get_inode_sequence_number(inode); 768 697 key->shared.pgoff = basepage_index(tail); 769 698 rcu_read_unlock(); 770 699 } 700 + 701 + get_futex_key_refs(key); /* implies smp_mb(); (B) */ 771 702 772 703 out: 773 704 put_page(page);
+1 -1
kernel/notifier.c
··· 519 519 520 520 int register_die_notifier(struct notifier_block *nb) 521 521 { 522 - vmalloc_sync_all(); 522 + vmalloc_sync_mappings(); 523 523 return atomic_notifier_chain_register(&die_chain, nb); 524 524 } 525 525 EXPORT_SYMBOL_GPL(register_die_notifier);
+10
kernel/pid.c
··· 247 247 tmp = tmp->parent; 248 248 } 249 249 250 + /* 251 + * ENOMEM is not the most obvious choice especially for the case 252 + * where the child subreaper has already exited and the pid 253 + * namespace denies the creation of any new processes. But ENOMEM 254 + * is what we have exposed to userspace for a long time and it is 255 + * documented behavior for pid namespaces. So we can't easily 256 + * change it even if there were an error code better suited. 257 + */ 258 + retval = -ENOMEM; 259 + 250 260 if (unlikely(is_child_reaper(pid))) { 251 261 if (pid_ns_prepare_proc(ns)) 252 262 goto out_free;
+2
kernel/sys.c
··· 47 47 #include <linux/syscalls.h> 48 48 #include <linux/kprobes.h> 49 49 #include <linux/user_namespace.h> 50 + #include <linux/time_namespace.h> 50 51 #include <linux/binfmts.h> 51 52 52 53 #include <linux/sched.h> ··· 2547 2546 memset(info, 0, sizeof(struct sysinfo)); 2548 2547 2549 2548 ktime_get_boottime_ts64(&tp); 2549 + timens_add_boottime(&tp); 2550 2550 info->uptime = tp.tv_sec + (tp.tv_nsec ? 1 : 0); 2551 2551 2552 2552 get_avenrun(info->loads, 0, SI_LOAD_SHIFT - FSHIFT);
+2
kernel/trace/ftrace.c
··· 1547 1547 rec = bsearch(&key, pg->records, pg->index, 1548 1548 sizeof(struct dyn_ftrace), 1549 1549 ftrace_cmp_recs); 1550 + if (rec) 1551 + break; 1550 1552 } 1551 1553 return rec; 1552 1554 }
+8 -6
kernel/workqueue.c
··· 1411 1411 return; 1412 1412 rcu_read_lock(); 1413 1413 retry: 1414 - if (req_cpu == WORK_CPU_UNBOUND) 1415 - cpu = wq_select_unbound_cpu(raw_smp_processor_id()); 1416 - 1417 1414 /* pwq which will be used unless @work is executing elsewhere */ 1418 - if (!(wq->flags & WQ_UNBOUND)) 1419 - pwq = per_cpu_ptr(wq->cpu_pwqs, cpu); 1420 - else 1415 + if (wq->flags & WQ_UNBOUND) { 1416 + if (req_cpu == WORK_CPU_UNBOUND) 1417 + cpu = wq_select_unbound_cpu(raw_smp_processor_id()); 1421 1418 pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu)); 1419 + } else { 1420 + if (req_cpu == WORK_CPU_UNBOUND) 1421 + cpu = raw_smp_processor_id(); 1422 + pwq = per_cpu_ptr(wq->cpu_pwqs, cpu); 1423 + } 1422 1424 1423 1425 /* 1424 1426 * If @work was previously on a different pool, it might still be
+9 -3
mm/madvise.c
··· 335 335 } 336 336 337 337 page = pmd_page(orig_pmd); 338 + 339 + /* Do not interfere with other mappings of this page */ 340 + if (page_mapcount(page) != 1) 341 + goto huge_unlock; 342 + 338 343 if (next - addr != HPAGE_PMD_SIZE) { 339 344 int err; 340 - 341 - if (page_mapcount(page) != 1) 342 - goto huge_unlock; 343 345 344 346 get_page(page); 345 347 spin_unlock(ptl); ··· 427 425 addr -= PAGE_SIZE; 428 426 continue; 429 427 } 428 + 429 + /* Do not interfere with other mappings of this page */ 430 + if (page_mapcount(page) != 1) 431 + continue; 430 432 431 433 VM_BUG_ON_PAGE(PageTransCompound(page), page); 432 434
+68 -49
mm/memcontrol.c
··· 2297 2297 #define MEMCG_DELAY_SCALING_SHIFT 14 2298 2298 2299 2299 /* 2300 - * Scheduled by try_charge() to be executed from the userland return path 2301 - * and reclaims memory over the high limit. 2300 + * Get the number of jiffies that we should penalise a mischievous cgroup which 2301 + * is exceeding its memory.high by checking both it and its ancestors. 2302 2302 */ 2303 - void mem_cgroup_handle_over_high(void) 2303 + static unsigned long calculate_high_delay(struct mem_cgroup *memcg, 2304 + unsigned int nr_pages) 2304 2305 { 2305 - unsigned long usage, high, clamped_high; 2306 - unsigned long pflags; 2307 - unsigned long penalty_jiffies, overage; 2308 - unsigned int nr_pages = current->memcg_nr_pages_over_high; 2309 - struct mem_cgroup *memcg; 2306 + unsigned long penalty_jiffies; 2307 + u64 max_overage = 0; 2310 2308 2311 - if (likely(!nr_pages)) 2312 - return; 2309 + do { 2310 + unsigned long usage, high; 2311 + u64 overage; 2313 2312 2314 - memcg = get_mem_cgroup_from_mm(current->mm); 2315 - reclaim_high(memcg, nr_pages, GFP_KERNEL); 2316 - current->memcg_nr_pages_over_high = 0; 2313 + usage = page_counter_read(&memcg->memory); 2314 + high = READ_ONCE(memcg->high); 2315 + 2316 + /* 2317 + * Prevent division by 0 in overage calculation by acting as if 2318 + * it was a threshold of 1 page 2319 + */ 2320 + high = max(high, 1UL); 2321 + 2322 + overage = usage - high; 2323 + overage <<= MEMCG_DELAY_PRECISION_SHIFT; 2324 + overage = div64_u64(overage, high); 2325 + 2326 + if (overage > max_overage) 2327 + max_overage = overage; 2328 + } while ((memcg = parent_mem_cgroup(memcg)) && 2329 + !mem_cgroup_is_root(memcg)); 2330 + 2331 + if (!max_overage) 2332 + return 0; 2317 2333 2318 2334 /* 2319 - * memory.high is breached and reclaim is unable to keep up. Throttle 2320 - * allocators proactively to slow down excessive growth. 2321 - * 2322 2335 * We use overage compared to memory.high to calculate the number of 2323 2336 * jiffies to sleep (penalty_jiffies). 
Ideally this value should be 2324 2337 * fairly lenient on small overages, and increasingly harsh when the ··· 2339 2326 * its crazy behaviour, so we exponentially increase the delay based on 2340 2327 * overage amount. 2341 2328 */ 2342 - 2343 - usage = page_counter_read(&memcg->memory); 2344 - high = READ_ONCE(memcg->high); 2345 - 2346 - if (usage <= high) 2347 - goto out; 2348 - 2349 - /* 2350 - * Prevent division by 0 in overage calculation by acting as if it was a 2351 - * threshold of 1 page 2352 - */ 2353 - clamped_high = max(high, 1UL); 2354 - 2355 - overage = div_u64((u64)(usage - high) << MEMCG_DELAY_PRECISION_SHIFT, 2356 - clamped_high); 2357 - 2358 - penalty_jiffies = ((u64)overage * overage * HZ) 2359 - >> (MEMCG_DELAY_PRECISION_SHIFT + MEMCG_DELAY_SCALING_SHIFT); 2329 + penalty_jiffies = max_overage * max_overage * HZ; 2330 + penalty_jiffies >>= MEMCG_DELAY_PRECISION_SHIFT; 2331 + penalty_jiffies >>= MEMCG_DELAY_SCALING_SHIFT; 2360 2332 2361 2333 /* 2362 2334 * Factor in the task's own contribution to the overage, such that four ··· 2358 2360 * application moving forwards and also permit diagnostics, albeit 2359 2361 * extremely slowly. 2360 2362 */ 2361 - penalty_jiffies = min(penalty_jiffies, MEMCG_MAX_HIGH_DELAY_JIFFIES); 2363 + return min(penalty_jiffies, MEMCG_MAX_HIGH_DELAY_JIFFIES); 2364 + } 2365 + 2366 + /* 2367 + * Scheduled by try_charge() to be executed from the userland return path 2368 + * and reclaims memory over the high limit. 
2369 + */ 2370 + void mem_cgroup_handle_over_high(void) 2371 + { 2372 + unsigned long penalty_jiffies; 2373 + unsigned long pflags; 2374 + unsigned int nr_pages = current->memcg_nr_pages_over_high; 2375 + struct mem_cgroup *memcg; 2376 + 2377 + if (likely(!nr_pages)) 2378 + return; 2379 + 2380 + memcg = get_mem_cgroup_from_mm(current->mm); 2381 + reclaim_high(memcg, nr_pages, GFP_KERNEL); 2382 + current->memcg_nr_pages_over_high = 0; 2383 + 2384 + /* 2385 + * memory.high is breached and reclaim is unable to keep up. Throttle 2386 + * allocators proactively to slow down excessive growth. 2387 + */ 2388 + penalty_jiffies = calculate_high_delay(memcg, nr_pages); 2362 2389 2363 2390 /* 2364 2391 * Don't sleep if the amount of jiffies this memcg owes us is so low ··· 4050 4027 struct mem_cgroup_thresholds *thresholds; 4051 4028 struct mem_cgroup_threshold_ary *new; 4052 4029 unsigned long usage; 4053 - int i, j, size; 4030 + int i, j, size, entries; 4054 4031 4055 4032 mutex_lock(&memcg->thresholds_lock); 4056 4033 ··· 4070 4047 __mem_cgroup_threshold(memcg, type == _MEMSWAP); 4071 4048 4072 4049 /* Calculate new number of threshold */ 4073 - size = 0; 4050 + size = entries = 0; 4074 4051 for (i = 0; i < thresholds->primary->size; i++) { 4075 4052 if (thresholds->primary->entries[i].eventfd != eventfd) 4076 4053 size++; 4054 + else 4055 + entries++; 4077 4056 } 4078 4057 4079 4058 new = thresholds->spare; 4059 + 4060 + /* If no items related to eventfd have been cleared, nothing to do */ 4061 + if (!entries) 4062 + goto unlock; 4080 4063 4081 4064 /* Set thresholds array to NULL if we don't have thresholds */ 4082 4065 if (!size) { ··· 6711 6682 if (!mem_cgroup_sockets_enabled) 6712 6683 return; 6713 6684 6714 - /* 6715 - * Socket cloning can throw us here with sk_memcg already 6716 - * filled. It won't however, necessarily happen from 6717 - * process context. So the test for root memcg given 6718 - * the current task's memcg won't help us in this case. 
6719 - * 6720 - * Respecting the original socket's memcg is a better 6721 - * decision in this case. 6722 - */ 6723 - if (sk->sk_memcg) { 6724 - css_get(&sk->sk_memcg->css); 6685 + /* Do not associate the sock with unrelated interrupted task's memcg. */ 6686 + if (in_interrupt()) 6725 6687 return; 6726 - } 6727 6688 6728 6689 rcu_read_lock(); 6729 6690 memcg = mem_cgroup_from_task(current);
+18 -9
mm/mmu_notifier.c
··· 307 307 * ->release returns. 308 308 */ 309 309 id = srcu_read_lock(&srcu); 310 - hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist) 310 + hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist, 311 + srcu_read_lock_held(&srcu)) 311 312 /* 312 313 * If ->release runs before mmu_notifier_unregister it must be 313 314 * handled, as it's the only way for the driver to flush all ··· 371 370 372 371 id = srcu_read_lock(&srcu); 373 372 hlist_for_each_entry_rcu(subscription, 374 - &mm->notifier_subscriptions->list, hlist) { 373 + &mm->notifier_subscriptions->list, hlist, 374 + srcu_read_lock_held(&srcu)) { 375 375 if (subscription->ops->clear_flush_young) 376 376 young |= subscription->ops->clear_flush_young( 377 377 subscription, mm, start, end); ··· 391 389 392 390 id = srcu_read_lock(&srcu); 393 391 hlist_for_each_entry_rcu(subscription, 394 - &mm->notifier_subscriptions->list, hlist) { 392 + &mm->notifier_subscriptions->list, hlist, 393 + srcu_read_lock_held(&srcu)) { 395 394 if (subscription->ops->clear_young) 396 395 young |= subscription->ops->clear_young(subscription, 397 396 mm, start, end); ··· 410 407 411 408 id = srcu_read_lock(&srcu); 412 409 hlist_for_each_entry_rcu(subscription, 413 - &mm->notifier_subscriptions->list, hlist) { 410 + &mm->notifier_subscriptions->list, hlist, 411 + srcu_read_lock_held(&srcu)) { 414 412 if (subscription->ops->test_young) { 415 413 young = subscription->ops->test_young(subscription, mm, 416 414 address); ··· 432 428 433 429 id = srcu_read_lock(&srcu); 434 430 hlist_for_each_entry_rcu(subscription, 435 - &mm->notifier_subscriptions->list, hlist) { 431 + &mm->notifier_subscriptions->list, hlist, 432 + srcu_read_lock_held(&srcu)) { 436 433 if (subscription->ops->change_pte) 437 434 subscription->ops->change_pte(subscription, mm, address, 438 435 pte); ··· 481 476 int id; 482 477 483 478 id = srcu_read_lock(&srcu); 484 - hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist) { 479 + 
hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist, 480 + srcu_read_lock_held(&srcu)) { 485 481 const struct mmu_notifier_ops *ops = subscription->ops; 486 482 487 483 if (ops->invalidate_range_start) { ··· 534 528 int id; 535 529 536 530 id = srcu_read_lock(&srcu); 537 - hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist) { 531 + hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist, 532 + srcu_read_lock_held(&srcu)) { 538 533 /* 539 534 * Call invalidate_range here too to avoid the need for the 540 535 * subsystem of having to register an invalidate_range_end ··· 589 582 590 583 id = srcu_read_lock(&srcu); 591 584 hlist_for_each_entry_rcu(subscription, 592 - &mm->notifier_subscriptions->list, hlist) { 585 + &mm->notifier_subscriptions->list, hlist, 586 + srcu_read_lock_held(&srcu)) { 593 587 if (subscription->ops->invalidate_range) 594 588 subscription->ops->invalidate_range(subscription, mm, 595 589 start, end); ··· 722 714 723 715 spin_lock(&mm->notifier_subscriptions->lock); 724 716 hlist_for_each_entry_rcu(subscription, 725 - &mm->notifier_subscriptions->list, hlist) { 717 + &mm->notifier_subscriptions->list, hlist, 718 + lockdep_is_held(&mm->notifier_subscriptions->lock)) { 726 719 if (subscription->ops != ops) 727 720 continue; 728 721
+7 -3
mm/nommu.c
··· 370 370 EXPORT_SYMBOL_GPL(vm_unmap_aliases); 371 371 372 372 /* 373 - * Implement a stub for vmalloc_sync_all() if the architecture chose not to 374 - * have one. 373 + * Implement a stub for vmalloc_sync_[un]mapping() if the architecture 374 + * chose not to have one. 375 375 */ 376 - void __weak vmalloc_sync_all(void) 376 + void __weak vmalloc_sync_mappings(void) 377 + { 378 + } 379 + 380 + void __weak vmalloc_sync_unmappings(void) 377 381 { 378 382 } 379 383
+30 -11
mm/slub.c
··· 1973 1973 1974 1974 if (node == NUMA_NO_NODE) 1975 1975 searchnode = numa_mem_id(); 1976 - else if (!node_present_pages(node)) 1977 - searchnode = node_to_mem_node(node); 1978 1976 1979 1977 object = get_partial_node(s, get_node(s, searchnode), c, flags); 1980 1978 if (object || node != NUMA_NO_NODE) ··· 2561 2563 struct page *page; 2562 2564 2563 2565 page = c->page; 2564 - if (!page) 2566 + if (!page) { 2567 + /* 2568 + * if the node is not online or has no normal memory, just 2569 + * ignore the node constraint 2570 + */ 2571 + if (unlikely(node != NUMA_NO_NODE && 2572 + !node_state(node, N_NORMAL_MEMORY))) 2573 + node = NUMA_NO_NODE; 2565 2574 goto new_slab; 2575 + } 2566 2576 redo: 2567 2577 2568 2578 if (unlikely(!node_match(page, node))) { 2569 - int searchnode = node; 2570 - 2571 - if (node != NUMA_NO_NODE && !node_present_pages(node)) 2572 - searchnode = node_to_mem_node(node); 2573 - 2574 - if (unlikely(!node_match(page, searchnode))) { 2579 + /* 2580 + * same as above but node_match() being false already 2581 + * implies node != NUMA_NO_NODE 2582 + */ 2583 + if (!node_state(node, N_NORMAL_MEMORY)) { 2584 + node = NUMA_NO_NODE; 2585 + goto redo; 2586 + } else { 2575 2587 stat(s, ALLOC_NODE_MISMATCH); 2576 2588 deactivate_slab(s, page, c->freelist, c); 2577 2589 goto new_slab; ··· 3005 2997 barrier(); 3006 2998 3007 2999 if (likely(page == c->page)) { 3008 - set_freepointer(s, tail_obj, c->freelist); 3000 + void **freelist = READ_ONCE(c->freelist); 3001 + 3002 + set_freepointer(s, tail_obj, freelist); 3009 3003 3010 3004 if (unlikely(!this_cpu_cmpxchg_double( 3011 3005 s->cpu_slab->freelist, s->cpu_slab->tid, 3012 - c->freelist, tid, 3006 + freelist, tid, 3013 3007 head, next_tid(tid)))) { 3014 3008 3015 3009 note_cmpxchg_failure("slab_free", s, tid); ··· 3184 3174 void *object = c->freelist; 3185 3175 3186 3176 if (unlikely(!object)) { 3177 + /* 3178 + * We may have removed an object from c->freelist using 3179 + * the fastpath in the previous 
iteration; in that case, 3180 + * c->tid has not been bumped yet. 3181 + * Since ___slab_alloc() may reenable interrupts while 3182 + * allocating memory, we should bump c->tid now. 3183 + */ 3184 + c->tid = next_tid(c->tid); 3185 + 3187 3186 /* 3188 3187 * Invoking slow path likely have side-effect 3189 3188 * of re-populating per CPU c->freelist
+6 -2
mm/sparse.c
··· 734 734 struct mem_section *ms = __pfn_to_section(pfn); 735 735 bool section_is_early = early_section(ms); 736 736 struct page *memmap = NULL; 737 + bool empty; 737 738 unsigned long *subsection_map = ms->usage 738 739 ? &ms->usage->subsection_map[0] : NULL; 739 740 ··· 765 764 * For 2/ and 3/ the SPARSEMEM_VMEMMAP={y,n} cases are unified 766 765 */ 767 766 bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION); 768 - if (bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION)) { 767 + empty = bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION); 768 + if (empty) { 769 769 unsigned long section_nr = pfn_to_section_nr(pfn); 770 770 771 771 /* ··· 781 779 ms->usage = NULL; 782 780 } 783 781 memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); 784 - ms->section_mem_map = (unsigned long)NULL; 785 782 } 786 783 787 784 if (section_is_early && memmap) 788 785 free_map_bootmem(memmap); 789 786 else 790 787 depopulate_section_memmap(pfn, nr_pages, altmap); 788 + 789 + if (empty) 790 + ms->section_mem_map = (unsigned long)NULL; 791 791 } 792 792 793 793 static struct page * __meminit section_activate(int nid, unsigned long pfn,
+7 -4
mm/vmalloc.c
··· 1295 1295 * First make sure the mappings are removed from all page-tables 1296 1296 * before they are freed. 1297 1297 */ 1298 - vmalloc_sync_all(); 1298 + vmalloc_sync_unmappings(); 1299 1299 1300 1300 /* 1301 1301 * TODO: to calculate a flush range without looping. ··· 3128 3128 EXPORT_SYMBOL(remap_vmalloc_range); 3129 3129 3130 3130 /* 3131 - * Implement a stub for vmalloc_sync_all() if the architecture chose not to 3132 - * have one. 3131 + * Implement stubs for vmalloc_sync_[un]mappings () if the architecture chose 3132 + * not to have one. 3133 3133 * 3134 3134 * The purpose of this function is to make sure the vmalloc area 3135 3135 * mappings are identical in all page-tables in the system. 3136 3136 */ 3137 - void __weak vmalloc_sync_all(void) 3137 + void __weak vmalloc_sync_mappings(void) 3138 3138 { 3139 3139 } 3140 3140 3141 + void __weak vmalloc_sync_unmappings(void) 3142 + { 3143 + } 3141 3144 3142 3145 static int f(pte_t *pte, unsigned long addr, void *data) 3143 3146 {
+4
net/batman-adv/bat_iv_ogm.c
··· 789 789 790 790 lockdep_assert_held(&hard_iface->bat_iv.ogm_buff_mutex); 791 791 792 + /* interface already disabled by batadv_iv_ogm_iface_disable */ 793 + if (!*ogm_buff) 794 + return; 795 + 792 796 /* the interface gets activated here to avoid race conditions between 793 797 * the moment of activating the interface in 794 798 * hardif_activate_interface() where the originator mac is set and
+2 -1
net/caif/caif_dev.c
··· 112 112 caif_device_list(dev_net(dev)); 113 113 struct caif_device_entry *caifd; 114 114 115 - list_for_each_entry_rcu(caifd, &caifdevs->list, list) { 115 + list_for_each_entry_rcu(caifd, &caifdevs->list, list, 116 + lockdep_rtnl_is_held()) { 116 117 if (caifd->netdev == dev) 117 118 return caifd; 118 119 }
+21 -12
net/core/devlink.c
··· 3352 3352 struct genl_info *info, 3353 3353 union devlink_param_value *value) 3354 3354 { 3355 + struct nlattr *param_data; 3355 3356 int len; 3356 3357 3357 - if (param->type != DEVLINK_PARAM_TYPE_BOOL && 3358 - !info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]) 3358 + param_data = info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]; 3359 + 3360 + if (param->type != DEVLINK_PARAM_TYPE_BOOL && !param_data) 3359 3361 return -EINVAL; 3360 3362 3361 3363 switch (param->type) { 3362 3364 case DEVLINK_PARAM_TYPE_U8: 3363 - value->vu8 = nla_get_u8(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]); 3365 + if (nla_len(param_data) != sizeof(u8)) 3366 + return -EINVAL; 3367 + value->vu8 = nla_get_u8(param_data); 3364 3368 break; 3365 3369 case DEVLINK_PARAM_TYPE_U16: 3366 - value->vu16 = nla_get_u16(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]); 3370 + if (nla_len(param_data) != sizeof(u16)) 3371 + return -EINVAL; 3372 + value->vu16 = nla_get_u16(param_data); 3367 3373 break; 3368 3374 case DEVLINK_PARAM_TYPE_U32: 3369 - value->vu32 = nla_get_u32(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]); 3375 + if (nla_len(param_data) != sizeof(u32)) 3376 + return -EINVAL; 3377 + value->vu32 = nla_get_u32(param_data); 3370 3378 break; 3371 3379 case DEVLINK_PARAM_TYPE_STRING: 3372 - len = strnlen(nla_data(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]), 3373 - nla_len(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA])); 3374 - if (len == nla_len(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]) || 3380 + len = strnlen(nla_data(param_data), nla_len(param_data)); 3381 + if (len == nla_len(param_data) || 3375 3382 len >= __DEVLINK_PARAM_MAX_STRING_VALUE) 3376 3383 return -EINVAL; 3377 - strcpy(value->vstr, 3378 - nla_data(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA])); 3384 + strcpy(value->vstr, nla_data(param_data)); 3379 3385 break; 3380 3386 case DEVLINK_PARAM_TYPE_BOOL: 3381 - value->vbool = info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA] ? 
3382 - true : false; 3387 + if (param_data && nla_len(param_data)) 3388 + return -EINVAL; 3389 + value->vbool = nla_get_flag(param_data); 3383 3390 break; 3384 3391 } 3385 3392 return 0; ··· 5958 5951 [DEVLINK_ATTR_PARAM_VALUE_CMODE] = { .type = NLA_U8 }, 5959 5952 [DEVLINK_ATTR_REGION_NAME] = { .type = NLA_NUL_STRING }, 5960 5953 [DEVLINK_ATTR_REGION_SNAPSHOT_ID] = { .type = NLA_U32 }, 5954 + [DEVLINK_ATTR_REGION_CHUNK_ADDR] = { .type = NLA_U64 }, 5955 + [DEVLINK_ATTR_REGION_CHUNK_LEN] = { .type = NLA_U64 }, 5961 5956 [DEVLINK_ATTR_HEALTH_REPORTER_NAME] = { .type = NLA_NUL_STRING }, 5962 5957 [DEVLINK_ATTR_HEALTH_REPORTER_GRACEFUL_PERIOD] = { .type = NLA_U64 }, 5963 5958 [DEVLINK_ATTR_HEALTH_REPORTER_AUTO_RECOVER] = { .type = NLA_U8 },
+37 -10
net/core/netclassid_cgroup.c
··· 53 53 kfree(css_cls_state(css)); 54 54 } 55 55 56 + /* 57 + * To avoid freezing of sockets creation for tasks with big number of threads 58 + * and opened sockets lets release file_lock every 1000 iterated descriptors. 59 + * New sockets will already have been created with new classid. 60 + */ 61 + 62 + struct update_classid_context { 63 + u32 classid; 64 + unsigned int batch; 65 + }; 66 + 67 + #define UPDATE_CLASSID_BATCH 1000 68 + 56 69 static int update_classid_sock(const void *v, struct file *file, unsigned n) 57 70 { 58 71 int err; 72 + struct update_classid_context *ctx = (void *)v; 59 73 struct socket *sock = sock_from_file(file, &err); 60 74 61 75 if (sock) { 62 76 spin_lock(&cgroup_sk_update_lock); 63 - sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, 64 - (unsigned long)v); 77 + sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, ctx->classid); 65 78 spin_unlock(&cgroup_sk_update_lock); 66 79 } 80 + if (--ctx->batch == 0) { 81 + ctx->batch = UPDATE_CLASSID_BATCH; 82 + return n + 1; 83 + } 67 84 return 0; 85 + } 86 + 87 + static void update_classid_task(struct task_struct *p, u32 classid) 88 + { 89 + struct update_classid_context ctx = { 90 + .classid = classid, 91 + .batch = UPDATE_CLASSID_BATCH 92 + }; 93 + unsigned int fd = 0; 94 + 95 + do { 96 + task_lock(p); 97 + fd = iterate_fd(p->files, fd, update_classid_sock, &ctx); 98 + task_unlock(p); 99 + cond_resched(); 100 + } while (fd); 68 101 } 69 102 70 103 static void cgrp_attach(struct cgroup_taskset *tset) ··· 106 73 struct task_struct *p; 107 74 108 75 cgroup_taskset_for_each(p, css, tset) { 109 - task_lock(p); 110 - iterate_fd(p->files, 0, update_classid_sock, 111 - (void *)(unsigned long)css_cls_state(css)->classid); 112 - task_unlock(p); 76 + update_classid_task(p, css_cls_state(css)->classid); 113 77 } 114 78 } 115 79 ··· 128 98 129 99 css_task_iter_start(css, 0, &it); 130 100 while ((p = css_task_iter_next(&it))) { 131 - task_lock(p); 132 - iterate_fd(p->files, 0, update_classid_sock, 133 - 
(void *)(unsigned long)cs->classid); 134 - task_unlock(p); 101 + update_classid_task(p, cs->classid); 135 102 cond_resched(); 136 103 } 137 104 css_task_iter_end(&it);
+4 -1
net/core/sock.c
··· 1830 1830 atomic_set(&newsk->sk_zckey, 0); 1831 1831 1832 1832 sock_reset_flag(newsk, SOCK_DONE); 1833 - mem_cgroup_sk_alloc(newsk); 1833 + 1834 + /* sk->sk_memcg will be populated at accept() time */ 1835 + newsk->sk_memcg = NULL; 1836 + 1834 1837 cgroup_sk_alloc(&newsk->sk_cgrp_data); 1835 1838 1836 1839 rcu_read_lock();
+2
net/dsa/dsa_priv.h
··· 117 117 /* port.c */ 118 118 int dsa_port_set_state(struct dsa_port *dp, u8 state, 119 119 struct switchdev_trans *trans); 120 + int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy); 120 121 int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy); 122 + void dsa_port_disable_rt(struct dsa_port *dp); 121 123 void dsa_port_disable(struct dsa_port *dp); 122 124 int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br); 123 125 void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br);
+35 -9
net/dsa/port.c
··· 63 63 pr_err("DSA: failed to set STP state %u (%d)\n", state, err); 64 64 } 65 65 66 - int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy) 66 + int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy) 67 67 { 68 68 struct dsa_switch *ds = dp->ds; 69 69 int port = dp->index; ··· 78 78 if (!dp->bridge_dev) 79 79 dsa_port_set_state_now(dp, BR_STATE_FORWARDING); 80 80 81 + if (dp->pl) 82 + phylink_start(dp->pl); 83 + 81 84 return 0; 82 85 } 83 86 84 - void dsa_port_disable(struct dsa_port *dp) 87 + int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy) 88 + { 89 + int err; 90 + 91 + rtnl_lock(); 92 + err = dsa_port_enable_rt(dp, phy); 93 + rtnl_unlock(); 94 + 95 + return err; 96 + } 97 + 98 + void dsa_port_disable_rt(struct dsa_port *dp) 85 99 { 86 100 struct dsa_switch *ds = dp->ds; 87 101 int port = dp->index; 102 + 103 + if (dp->pl) 104 + phylink_stop(dp->pl); 88 105 89 106 if (!dp->bridge_dev) 90 107 dsa_port_set_state_now(dp, BR_STATE_DISABLED); 91 108 92 109 if (ds->ops->port_disable) 93 110 ds->ops->port_disable(ds, port); 111 + } 112 + 113 + void dsa_port_disable(struct dsa_port *dp) 114 + { 115 + rtnl_lock(); 116 + dsa_port_disable_rt(dp); 117 + rtnl_unlock(); 94 118 } 95 119 96 120 int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br) ··· 638 614 goto err_phy_connect; 639 615 } 640 616 641 - rtnl_lock(); 642 - phylink_start(dp->pl); 643 - rtnl_unlock(); 644 - 645 617 return 0; 646 618 647 619 err_phy_connect: ··· 648 628 int dsa_port_link_register_of(struct dsa_port *dp) 649 629 { 650 630 struct dsa_switch *ds = dp->ds; 631 + struct device_node *phy_np; 651 632 652 - if (!ds->ops->adjust_link) 653 - return dsa_port_phylink_register(dp); 633 + if (!ds->ops->adjust_link) { 634 + phy_np = of_parse_phandle(dp->dn, "phy-handle", 0); 635 + if (of_phy_is_fixed_link(dp->dn) || phy_np) 636 + return dsa_port_phylink_register(dp); 637 + return 0; 638 + } 654 639 655 640 dev_warn(ds->dev, 656 641 "Using legacy PHYLIB 
callbacks. Please migrate to PHYLINK!\n"); ··· 670 645 { 671 646 struct dsa_switch *ds = dp->ds; 672 647 673 - if (!ds->ops->adjust_link) { 648 + if (!ds->ops->adjust_link && dp->pl) { 674 649 rtnl_lock(); 675 650 phylink_disconnect_phy(dp->pl); 676 651 rtnl_unlock(); 677 652 phylink_destroy(dp->pl); 653 + dp->pl = NULL; 678 654 return; 679 655 } 680 656
+2 -6
net/dsa/slave.c
··· 88 88 goto clear_allmulti; 89 89 } 90 90 91 - err = dsa_port_enable(dp, dev->phydev); 91 + err = dsa_port_enable_rt(dp, dev->phydev); 92 92 if (err) 93 93 goto clear_promisc; 94 - 95 - phylink_start(dp->pl); 96 94 97 95 return 0; 98 96 ··· 112 114 struct net_device *master = dsa_slave_to_master(dev); 113 115 struct dsa_port *dp = dsa_slave_to_port(dev); 114 116 115 - phylink_stop(dp->pl); 116 - 117 - dsa_port_disable(dp); 117 + dsa_port_disable_rt(dp); 118 118 119 119 dev_mc_unsync(master, dev); 120 120 dev_uc_unsync(master, dev);
+6
net/ieee802154/nl_policy.c
··· 21 21 [IEEE802154_ATTR_HW_ADDR] = { .type = NLA_HW_ADDR, }, 22 22 [IEEE802154_ATTR_PAN_ID] = { .type = NLA_U16, }, 23 23 [IEEE802154_ATTR_CHANNEL] = { .type = NLA_U8, }, 24 + [IEEE802154_ATTR_BCN_ORD] = { .type = NLA_U8, }, 25 + [IEEE802154_ATTR_SF_ORD] = { .type = NLA_U8, }, 26 + [IEEE802154_ATTR_PAN_COORD] = { .type = NLA_U8, }, 27 + [IEEE802154_ATTR_BAT_EXT] = { .type = NLA_U8, }, 28 + [IEEE802154_ATTR_COORD_REALIGN] = { .type = NLA_U8, }, 24 29 [IEEE802154_ATTR_PAGE] = { .type = NLA_U8, }, 30 + [IEEE802154_ATTR_DEV_TYPE] = { .type = NLA_U8, }, 25 31 [IEEE802154_ATTR_COORD_SHORT_ADDR] = { .type = NLA_U16, }, 26 32 [IEEE802154_ATTR_COORD_HW_ADDR] = { .type = NLA_HW_ADDR, }, 27 33 [IEEE802154_ATTR_COORD_PAN_ID] = { .type = NLA_U16, },
+10 -2
net/ipv4/gre_demux.c
··· 56 56 } 57 57 EXPORT_SYMBOL_GPL(gre_del_protocol); 58 58 59 - /* Fills in tpi and returns header length to be pulled. */ 59 + /* Fills in tpi and returns header length to be pulled. 60 + * Note that caller must use pskb_may_pull() before pulling GRE header. 61 + */ 60 62 int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi, 61 63 bool *csum_err, __be16 proto, int nhs) 62 64 { ··· 112 110 * - When dealing with WCCPv2, Skip extra 4 bytes in GRE header 113 111 */ 114 112 if (greh->flags == 0 && tpi->proto == htons(ETH_P_WCCP)) { 113 + u8 _val, *val; 114 + 115 + val = skb_header_pointer(skb, nhs + hdr_len, 116 + sizeof(_val), &_val); 117 + if (!val) 118 + return -EINVAL; 115 119 tpi->proto = proto; 116 - if ((*(u8 *)options & 0xF0) != 0x40) 120 + if ((*val & 0xF0) != 0x40) 117 121 hdr_len += 4; 118 122 } 119 123 tpi->hdr_len = hdr_len;
+20
net/ipv4/inet_connection_sock.c
··· 482 482 } 483 483 spin_unlock_bh(&queue->fastopenq.lock); 484 484 } 485 + 485 486 out: 486 487 release_sock(sk); 488 + if (newsk && mem_cgroup_sockets_enabled) { 489 + int amt; 490 + 491 + /* atomically get the memory usage, set and charge the 492 + * newsk->sk_memcg. 493 + */ 494 + lock_sock(newsk); 495 + 496 + /* The socket has not been accepted yet, no need to look at 497 + * newsk->sk_wmem_queued. 498 + */ 499 + amt = sk_mem_pages(newsk->sk_forward_alloc + 500 + atomic_read(&newsk->sk_rmem_alloc)); 501 + mem_cgroup_sk_alloc(newsk); 502 + if (newsk->sk_memcg && amt) 503 + mem_cgroup_charge_skmem(newsk->sk_memcg, amt); 504 + 505 + release_sock(newsk); 506 + } 487 507 if (req) 488 508 reqsk_put(req); 489 509 return newsk;
+20 -24
net/ipv4/inet_diag.c
··· 100 100 aux = handler->idiag_get_aux_size(sk, net_admin); 101 101 102 102 return nla_total_size(sizeof(struct tcp_info)) 103 - + nla_total_size(1) /* INET_DIAG_SHUTDOWN */ 104 - + nla_total_size(1) /* INET_DIAG_TOS */ 105 - + nla_total_size(1) /* INET_DIAG_TCLASS */ 106 - + nla_total_size(4) /* INET_DIAG_MARK */ 107 - + nla_total_size(4) /* INET_DIAG_CLASS_ID */ 108 - + nla_total_size(sizeof(struct inet_diag_meminfo)) 109 103 + nla_total_size(sizeof(struct inet_diag_msg)) 104 + + inet_diag_msg_attrs_size() 105 + + nla_total_size(sizeof(struct inet_diag_meminfo)) 110 106 + nla_total_size(SK_MEMINFO_VARS * sizeof(u32)) 111 107 + nla_total_size(TCP_CA_NAME_MAX) 112 108 + nla_total_size(sizeof(struct tcpvegas_info)) ··· 142 146 143 147 if (net_admin && nla_put_u32(skb, INET_DIAG_MARK, sk->sk_mark)) 144 148 goto errout; 149 + 150 + if (ext & (1 << (INET_DIAG_CLASS_ID - 1)) || 151 + ext & (1 << (INET_DIAG_TCLASS - 1))) { 152 + u32 classid = 0; 153 + 154 + #ifdef CONFIG_SOCK_CGROUP_DATA 155 + classid = sock_cgroup_classid(&sk->sk_cgrp_data); 156 + #endif 157 + /* Fallback to socket priority if class id isn't set. 158 + * Classful qdiscs use it as direct reference to class. 159 + * For cgroup2 classid is always zero. 160 + */ 161 + if (!classid) 162 + classid = sk->sk_priority; 163 + 164 + if (nla_put_u32(skb, INET_DIAG_CLASS_ID, classid)) 165 + goto errout; 166 + } 145 167 146 168 r->idiag_uid = from_kuid_munged(user_ns, sock_i_uid(sk)); 147 169 r->idiag_inode = sock_i_ino(sk); ··· 295 281 sz = ca_ops->get_info(sk, ext, &attr, &info); 296 282 rcu_read_unlock(); 297 283 if (sz && nla_put(skb, attr, sz, &info) < 0) 298 - goto errout; 299 - } 300 - 301 - if (ext & (1 << (INET_DIAG_CLASS_ID - 1)) || 302 - ext & (1 << (INET_DIAG_TCLASS - 1))) { 303 - u32 classid = 0; 304 - 305 - #ifdef CONFIG_SOCK_CGROUP_DATA 306 - classid = sock_cgroup_classid(&sk->sk_cgrp_data); 307 - #endif 308 - /* Fallback to socket priority if class id isn't set. 
309 - * Classful qdiscs use it as direct reference to class. 310 - * For cgroup2 classid is always zero. 311 - */ 312 - if (!classid) 313 - classid = sk->sk_priority; 314 - 315 - if (nla_put_u32(skb, INET_DIAG_CLASS_ID, classid)) 316 284 goto errout; 317 285 } 318 286
+3 -2
net/ipv4/raw_diag.c
··· 100 100 if (IS_ERR(sk)) 101 101 return PTR_ERR(sk); 102 102 103 - rep = nlmsg_new(sizeof(struct inet_diag_msg) + 104 - sizeof(struct inet_diag_meminfo) + 64, 103 + rep = nlmsg_new(nla_total_size(sizeof(struct inet_diag_msg)) + 104 + inet_diag_msg_attrs_size() + 105 + nla_total_size(sizeof(struct inet_diag_meminfo)) + 64, 105 106 GFP_KERNEL); 106 107 if (!rep) { 107 108 sock_put(sk);
+3 -2
net/ipv4/udp_diag.c
··· 64 64 goto out; 65 65 66 66 err = -ENOMEM; 67 - rep = nlmsg_new(sizeof(struct inet_diag_msg) + 68 - sizeof(struct inet_diag_meminfo) + 64, 67 + rep = nlmsg_new(nla_total_size(sizeof(struct inet_diag_msg)) + 68 + inet_diag_msg_attrs_size() + 69 + nla_total_size(sizeof(struct inet_diag_meminfo)) + 64, 69 70 GFP_KERNEL); 70 71 if (!rep) 71 72 goto out;
+40 -11
net/ipv6/addrconf.c
··· 1226 1226 } 1227 1227 1228 1228 static void 1229 - cleanup_prefix_route(struct inet6_ifaddr *ifp, unsigned long expires, bool del_rt) 1229 + cleanup_prefix_route(struct inet6_ifaddr *ifp, unsigned long expires, 1230 + bool del_rt, bool del_peer) 1230 1231 { 1231 1232 struct fib6_info *f6i; 1232 1233 1233 - f6i = addrconf_get_prefix_route(&ifp->addr, ifp->prefix_len, 1234 + f6i = addrconf_get_prefix_route(del_peer ? &ifp->peer_addr : &ifp->addr, 1235 + ifp->prefix_len, 1234 1236 ifp->idev->dev, 0, RTF_DEFAULT, true); 1235 1237 if (f6i) { 1236 1238 if (del_rt) ··· 1295 1293 1296 1294 if (action != CLEANUP_PREFIX_RT_NOP) { 1297 1295 cleanup_prefix_route(ifp, expires, 1298 - action == CLEANUP_PREFIX_RT_DEL); 1296 + action == CLEANUP_PREFIX_RT_DEL, false); 1299 1297 } 1300 1298 1301 1299 /* clean up prefsrc entries */ ··· 3347 3345 (dev->type != ARPHRD_NONE) && 3348 3346 (dev->type != ARPHRD_RAWIP)) { 3349 3347 /* Alas, we support only Ethernet autoconfiguration. */ 3348 + idev = __in6_dev_get(dev); 3349 + if (!IS_ERR_OR_NULL(idev) && dev->flags & IFF_UP && 3350 + dev->flags & IFF_MULTICAST) 3351 + ipv6_mc_up(idev); 3350 3352 return; 3351 3353 } 3352 3354 ··· 4592 4586 } 4593 4587 4594 4588 static int modify_prefix_route(struct inet6_ifaddr *ifp, 4595 - unsigned long expires, u32 flags) 4589 + unsigned long expires, u32 flags, 4590 + bool modify_peer) 4596 4591 { 4597 4592 struct fib6_info *f6i; 4598 4593 u32 prio; 4599 4594 4600 - f6i = addrconf_get_prefix_route(&ifp->addr, ifp->prefix_len, 4595 + f6i = addrconf_get_prefix_route(modify_peer ? &ifp->peer_addr : &ifp->addr, 4596 + ifp->prefix_len, 4601 4597 ifp->idev->dev, 0, RTF_DEFAULT, true); 4602 4598 if (!f6i) 4603 4599 return -ENOENT; ··· 4610 4602 ip6_del_rt(dev_net(ifp->idev->dev), f6i); 4611 4603 4612 4604 /* add new one */ 4613 - addrconf_prefix_route(&ifp->addr, ifp->prefix_len, 4605 + addrconf_prefix_route(modify_peer ? 
&ifp->peer_addr : &ifp->addr, 4606 + ifp->prefix_len, 4614 4607 ifp->rt_priority, ifp->idev->dev, 4615 4608 expires, flags, GFP_KERNEL); 4616 4609 } else { ··· 4633 4624 unsigned long timeout; 4634 4625 bool was_managetempaddr; 4635 4626 bool had_prefixroute; 4627 + bool new_peer = false; 4636 4628 4637 4629 ASSERT_RTNL(); 4638 4630 ··· 4665 4655 cfg->preferred_lft = timeout; 4666 4656 } 4667 4657 4658 + if (cfg->peer_pfx && 4659 + memcmp(&ifp->peer_addr, cfg->peer_pfx, sizeof(struct in6_addr))) { 4660 + if (!ipv6_addr_any(&ifp->peer_addr)) 4661 + cleanup_prefix_route(ifp, expires, true, true); 4662 + new_peer = true; 4663 + } 4664 + 4668 4665 spin_lock_bh(&ifp->lock); 4669 4666 was_managetempaddr = ifp->flags & IFA_F_MANAGETEMPADDR; 4670 4667 had_prefixroute = ifp->flags & IFA_F_PERMANENT && ··· 4687 4670 if (cfg->rt_priority && cfg->rt_priority != ifp->rt_priority) 4688 4671 ifp->rt_priority = cfg->rt_priority; 4689 4672 4673 + if (new_peer) 4674 + ifp->peer_addr = *cfg->peer_pfx; 4675 + 4690 4676 spin_unlock_bh(&ifp->lock); 4691 4677 if (!(ifp->flags&IFA_F_TENTATIVE)) 4692 4678 ipv6_ifa_notify(0, ifp); ··· 4698 4678 int rc = -ENOENT; 4699 4679 4700 4680 if (had_prefixroute) 4701 - rc = modify_prefix_route(ifp, expires, flags); 4681 + rc = modify_prefix_route(ifp, expires, flags, false); 4702 4682 4703 4683 /* prefix route could have been deleted; if so restore it */ 4704 4684 if (rc == -ENOENT) { 4705 4685 addrconf_prefix_route(&ifp->addr, ifp->prefix_len, 4686 + ifp->rt_priority, ifp->idev->dev, 4687 + expires, flags, GFP_KERNEL); 4688 + } 4689 + 4690 + if (had_prefixroute && !ipv6_addr_any(&ifp->peer_addr)) 4691 + rc = modify_prefix_route(ifp, expires, flags, true); 4692 + 4693 + if (rc == -ENOENT && !ipv6_addr_any(&ifp->peer_addr)) { 4694 + addrconf_prefix_route(&ifp->peer_addr, ifp->prefix_len, 4706 4695 ifp->rt_priority, ifp->idev->dev, 4707 4696 expires, flags, GFP_KERNEL); 4708 4697 } ··· 4725 4696 4726 4697 if (action != CLEANUP_PREFIX_RT_NOP) { 4727 
4698 cleanup_prefix_route(ifp, rt_expires, 4728 - action == CLEANUP_PREFIX_RT_DEL); 4699 + action == CLEANUP_PREFIX_RT_DEL, false); 4729 4700 } 4730 4701 } 4731 4702 ··· 6012 5983 if (ifp->idev->cnf.forwarding) 6013 5984 addrconf_join_anycast(ifp); 6014 5985 if (!ipv6_addr_any(&ifp->peer_addr)) 6015 - addrconf_prefix_route(&ifp->peer_addr, 128, 0, 6016 - ifp->idev->dev, 0, 0, 6017 - GFP_ATOMIC); 5986 + addrconf_prefix_route(&ifp->peer_addr, 128, 5987 + ifp->rt_priority, ifp->idev->dev, 5988 + 0, 0, GFP_ATOMIC); 6018 5989 break; 6019 5990 case RTM_DELADDR: 6020 5991 if (ifp->idev->cnf.forwarding)
+1 -1
net/ipv6/seg6_iptunnel.c
··· 268 268 skb_mac_header_rebuild(skb); 269 269 skb_push(skb, skb->mac_len); 270 270 271 - err = seg6_do_srh_encap(skb, tinfo->srh, NEXTHDR_NONE); 271 + err = seg6_do_srh_encap(skb, tinfo->srh, IPPROTO_ETHERNET); 272 272 if (err) 273 273 return err; 274 274
+1 -1
net/ipv6/seg6_local.c
··· 282 282 struct net_device *odev; 283 283 struct ethhdr *eth; 284 284 285 - if (!decap_and_validate(skb, NEXTHDR_NONE)) 285 + if (!decap_and_validate(skb, IPPROTO_ETHERNET)) 286 286 goto drop; 287 287 288 288 if (!pskb_may_pull(skb, ETH_HLEN))
+2 -1
net/mac80211/mesh_hwmp.c
··· 1152 1152 } 1153 1153 } 1154 1154 1155 - if (!(mpath->flags & MESH_PATH_RESOLVING)) 1155 + if (!(mpath->flags & MESH_PATH_RESOLVING) && 1156 + mesh_path_sel_is_hwmp(sdata)) 1156 1157 mesh_queue_preq(mpath, PREQ_Q_F_START); 1157 1158 1158 1159 if (skb_queue_len(&mpath->frame_queue) >= MESH_FRAME_QUEUE_LEN)
+17 -2
net/mptcp/options.c
··· 334 334 struct mptcp_sock *msk; 335 335 unsigned int ack_size; 336 336 bool ret = false; 337 + bool can_ack; 338 + u64 ack_seq; 337 339 u8 tcp_fin; 338 340 339 341 if (skb) { ··· 362 360 ret = true; 363 361 } 364 362 363 + /* passive sockets msk will set the 'can_ack' after accept(), even 364 + * if the first subflow may have the already the remote key handy 365 + */ 366 + can_ack = true; 365 367 opts->ext_copy.use_ack = 0; 366 368 msk = mptcp_sk(subflow->conn); 367 - if (!msk || !READ_ONCE(msk->can_ack)) { 369 + if (likely(msk && READ_ONCE(msk->can_ack))) { 370 + ack_seq = msk->ack_seq; 371 + } else if (subflow->can_ack) { 372 + mptcp_crypto_key_sha(subflow->remote_key, NULL, &ack_seq); 373 + ack_seq++; 374 + } else { 375 + can_ack = false; 376 + } 377 + 378 + if (unlikely(!can_ack)) { 368 379 *size = ALIGN(dss_size, 4); 369 380 return ret; 370 381 } ··· 390 375 391 376 dss_size += ack_size; 392 377 393 - opts->ext_copy.data_ack = msk->ack_seq; 378 + opts->ext_copy.data_ack = ack_seq; 394 379 opts->ext_copy.ack64 = 1; 395 380 opts->ext_copy.use_ack = 1; 396 381
+1 -1
net/netfilter/nf_conntrack_standalone.c
··· 411 411 *pos = cpu + 1; 412 412 return per_cpu_ptr(net->ct.stat, cpu); 413 413 } 414 - 414 + (*pos)++; 415 415 return NULL; 416 416 } 417 417
+1 -1
net/netfilter/nf_synproxy_core.c
··· 267 267 *pos = cpu + 1; 268 268 return per_cpu_ptr(snet->stats, cpu); 269 269 } 270 - 270 + (*pos)++; 271 271 return NULL; 272 272 } 273 273
+14 -8
net/netfilter/nf_tables_api.c
··· 1405 1405 lockdep_commit_lock_is_held(net)); 1406 1406 if (nft_dump_stats(skb, stats)) 1407 1407 goto nla_put_failure; 1408 + 1409 + if ((chain->flags & NFT_CHAIN_HW_OFFLOAD) && 1410 + nla_put_be32(skb, NFTA_CHAIN_FLAGS, 1411 + htonl(NFT_CHAIN_HW_OFFLOAD))) 1412 + goto nla_put_failure; 1408 1413 } 1409 1414 1410 1415 if (nla_put_be32(skb, NFTA_CHAIN_USE, htonl(chain->use))) ··· 6305 6300 goto err4; 6306 6301 6307 6302 err = nft_register_flowtable_net_hooks(ctx.net, table, flowtable); 6308 - if (err < 0) 6303 + if (err < 0) { 6304 + list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) { 6305 + list_del_rcu(&hook->list); 6306 + kfree_rcu(hook, rcu); 6307 + } 6309 6308 goto err4; 6309 + } 6310 6310 6311 6311 err = nft_trans_flowtable_add(&ctx, NFT_MSG_NEWFLOWTABLE, flowtable); 6312 6312 if (err < 0) ··· 7388 7378 list_splice_init(&net->nft.module_list, &module_list); 7389 7379 mutex_unlock(&net->nft.commit_mutex); 7390 7380 list_for_each_entry_safe(req, next, &module_list, list) { 7391 - if (req->done) { 7392 - list_del(&req->list); 7393 - kfree(req); 7394 - } else { 7395 - request_module("%s", req->module); 7396 - req->done = true; 7397 - } 7381 + request_module("%s", req->module); 7382 + req->done = true; 7398 7383 } 7399 7384 mutex_lock(&net->nft.commit_mutex); 7400 7385 list_splice(&module_list, &net->nft.module_list); ··· 8172 8167 __nft_release_tables(net); 8173 8168 mutex_unlock(&net->nft.commit_mutex); 8174 8169 WARN_ON_ONCE(!list_empty(&net->nft.tables)); 8170 + WARN_ON_ONCE(!list_empty(&net->nft.module_list)); 8175 8171 } 8176 8172 8177 8173 static struct pernet_operations nf_tables_net_ops = {
+1
net/netfilter/nft_chain_nat.c
··· 89 89 .name = "nat", 90 90 .type = NFT_CHAIN_T_NAT, 91 91 .family = NFPROTO_INET, 92 + .owner = THIS_MODULE, 92 93 .hook_mask = (1 << NF_INET_PRE_ROUTING) | 93 94 (1 << NF_INET_LOCAL_IN) | 94 95 (1 << NF_INET_LOCAL_OUT) |
+1
net/netfilter/nft_payload.c
··· 129 129 [NFTA_PAYLOAD_LEN] = { .type = NLA_U32 }, 130 130 [NFTA_PAYLOAD_CSUM_TYPE] = { .type = NLA_U32 }, 131 131 [NFTA_PAYLOAD_CSUM_OFFSET] = { .type = NLA_U32 }, 132 + [NFTA_PAYLOAD_CSUM_FLAGS] = { .type = NLA_U32 }, 132 133 }; 133 134 134 135 static int nft_payload_init(const struct nft_ctx *ctx,
+2
net/netfilter/nft_tunnel.c
··· 339 339 [NFTA_TUNNEL_KEY_FLAGS] = { .type = NLA_U32, }, 340 340 [NFTA_TUNNEL_KEY_TOS] = { .type = NLA_U8, }, 341 341 [NFTA_TUNNEL_KEY_TTL] = { .type = NLA_U8, }, 342 + [NFTA_TUNNEL_KEY_SPORT] = { .type = NLA_U16, }, 343 + [NFTA_TUNNEL_KEY_DPORT] = { .type = NLA_U16, }, 342 344 [NFTA_TUNNEL_KEY_OPTS] = { .type = NLA_NESTED, }, 343 345 }; 344 346
+3 -3
net/netfilter/x_tables.c
··· 1551 1551 uint8_t nfproto = (unsigned long)PDE_DATA(file_inode(seq->file)); 1552 1552 struct nf_mttg_trav *trav = seq->private; 1553 1553 1554 + if (ppos != NULL) 1555 + ++(*ppos); 1556 + 1554 1557 switch (trav->class) { 1555 1558 case MTTG_TRAV_INIT: 1556 1559 trav->class = MTTG_TRAV_NFP_UNSPEC; ··· 1579 1576 default: 1580 1577 return NULL; 1581 1578 } 1582 - 1583 - if (ppos != NULL) 1584 - ++*ppos; 1585 1579 return trav; 1586 1580 } 1587 1581
+1 -1
net/netfilter/xt_recent.c
··· 492 492 const struct recent_entry *e = v; 493 493 const struct list_head *head = e->list.next; 494 494 495 + (*pos)++; 495 496 while (head == &t->iphash[st->bucket]) { 496 497 if (++st->bucket >= ip_list_hash_size) 497 498 return NULL; 498 499 head = t->iphash[st->bucket].next; 499 500 } 500 - (*pos)++; 501 501 return list_entry(head, struct recent_entry, list); 502 502 } 503 503
+1 -1
net/netlink/af_netlink.c
··· 2434 2434 in_skb->len)) 2435 2435 WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_OFFS, 2436 2436 (u8 *)extack->bad_attr - 2437 - in_skb->data)); 2437 + (u8 *)nlh)); 2438 2438 } else { 2439 2439 if (extack->cookie_len) 2440 2440 WARN_ON(nla_put(skb, NLMSGERR_ATTR_COOKIE,
+16 -3
net/nfc/hci/core.c
··· 181 181 void nfc_hci_cmd_received(struct nfc_hci_dev *hdev, u8 pipe, u8 cmd, 182 182 struct sk_buff *skb) 183 183 { 184 - u8 gate = hdev->pipes[pipe].gate; 185 184 u8 status = NFC_HCI_ANY_OK; 186 185 struct hci_create_pipe_resp *create_info; 187 186 struct hci_delete_pipe_noti *delete_info; 188 187 struct hci_all_pipe_cleared_noti *cleared_info; 188 + u8 gate; 189 189 190 - pr_debug("from gate %x pipe %x cmd %x\n", gate, pipe, cmd); 190 + pr_debug("from pipe %x cmd %x\n", pipe, cmd); 191 + 192 + if (pipe >= NFC_HCI_MAX_PIPES) { 193 + status = NFC_HCI_ANY_E_NOK; 194 + goto exit; 195 + } 196 + 197 + gate = hdev->pipes[pipe].gate; 191 198 192 199 switch (cmd) { 193 200 case NFC_HCI_ADM_NOTIFY_PIPE_CREATED: ··· 382 375 struct sk_buff *skb) 383 376 { 384 377 int r = 0; 385 - u8 gate = hdev->pipes[pipe].gate; 378 + u8 gate; 386 379 380 + if (pipe >= NFC_HCI_MAX_PIPES) { 381 + pr_err("Discarded event %x to invalid pipe %x\n", event, pipe); 382 + goto exit; 383 + } 384 + 385 + gate = hdev->pipes[pipe].gate; 387 386 if (gate == NFC_HCI_INVALID_GATE) { 388 387 pr_err("Discarded event %x to unopened pipe %x\n", event, pipe); 389 388 goto exit;
+4
net/nfc/netlink.c
··· 32 32 [NFC_ATTR_DEVICE_NAME] = { .type = NLA_STRING, 33 33 .len = NFC_DEVICE_NAME_MAXSIZE }, 34 34 [NFC_ATTR_PROTOCOLS] = { .type = NLA_U32 }, 35 + [NFC_ATTR_TARGET_INDEX] = { .type = NLA_U32 }, 35 36 [NFC_ATTR_COMM_MODE] = { .type = NLA_U8 }, 36 37 [NFC_ATTR_RF_MODE] = { .type = NLA_U8 }, 37 38 [NFC_ATTR_DEVICE_POWERED] = { .type = NLA_U8 }, ··· 44 43 [NFC_ATTR_LLC_SDP] = { .type = NLA_NESTED }, 45 44 [NFC_ATTR_FIRMWARE_NAME] = { .type = NLA_STRING, 46 45 .len = NFC_FIRMWARE_NAME_MAXSIZE }, 46 + [NFC_ATTR_SE_INDEX] = { .type = NLA_U32 }, 47 47 [NFC_ATTR_SE_APDU] = { .type = NLA_BINARY }, 48 + [NFC_ATTR_VENDOR_ID] = { .type = NLA_U32 }, 49 + [NFC_ATTR_VENDOR_SUBCMD] = { .type = NLA_U32 }, 48 50 [NFC_ATTR_VENDOR_DATA] = { .type = NLA_BINARY }, 49 51 50 52 };
+1
net/openvswitch/datapath.c
··· 645 645 [OVS_PACKET_ATTR_ACTIONS] = { .type = NLA_NESTED }, 646 646 [OVS_PACKET_ATTR_PROBE] = { .type = NLA_FLAG }, 647 647 [OVS_PACKET_ATTR_MRU] = { .type = NLA_U16 }, 648 + [OVS_PACKET_ATTR_HASH] = { .type = NLA_U64 }, 648 649 }; 649 650 650 651 static const struct genl_ops dp_packet_genl_ops[] = {
+7 -6
net/packet/af_packet.c
··· 2274 2274 TP_STATUS_KERNEL, (macoff+snaplen)); 2275 2275 if (!h.raw) 2276 2276 goto drop_n_account; 2277 + 2278 + if (do_vnet && 2279 + virtio_net_hdr_from_skb(skb, h.raw + macoff - 2280 + sizeof(struct virtio_net_hdr), 2281 + vio_le(), true, 0)) 2282 + goto drop_n_account; 2283 + 2277 2284 if (po->tp_version <= TPACKET_V2) { 2278 2285 packet_increment_rx_head(po, &po->rx_ring); 2279 2286 /* ··· 2292 2285 if (atomic_read(&po->tp_drops)) 2293 2286 status |= TP_STATUS_LOSING; 2294 2287 } 2295 - 2296 - if (do_vnet && 2297 - virtio_net_hdr_from_skb(skb, h.raw + macoff - 2298 - sizeof(struct virtio_net_hdr), 2299 - vio_le(), true, 0)) 2300 - goto drop_n_account; 2301 2288 2302 2289 po->stats.stats1.tp_packets++; 2303 2290 if (copy_skb) {
+1
net/sched/sch_fq.c
··· 744 744 [TCA_FQ_FLOW_MAX_RATE] = { .type = NLA_U32 }, 745 745 [TCA_FQ_BUCKETS_LOG] = { .type = NLA_U32 }, 746 746 [TCA_FQ_FLOW_REFILL_DELAY] = { .type = NLA_U32 }, 747 + [TCA_FQ_ORPHAN_MASK] = { .type = NLA_U32 }, 747 748 [TCA_FQ_LOW_RATE_THRESHOLD] = { .type = NLA_U32 }, 748 749 [TCA_FQ_CE_THRESHOLD] = { .type = NLA_U32 }, 749 750 };
+10 -3
net/sched/sch_taprio.c
··· 564 564 prio = skb->priority; 565 565 tc = netdev_get_prio_tc_map(dev, prio); 566 566 567 - if (!(gate_mask & BIT(tc))) 567 + if (!(gate_mask & BIT(tc))) { 568 + skb = NULL; 568 569 continue; 570 + } 569 571 570 572 len = qdisc_pkt_len(skb); 571 573 guard = ktime_add_ns(taprio_get_time(q), ··· 577 575 * guard band ... 578 576 */ 579 577 if (gate_mask != TAPRIO_ALL_GATES_OPEN && 580 - ktime_after(guard, entry->close_time)) 578 + ktime_after(guard, entry->close_time)) { 579 + skb = NULL; 581 580 continue; 581 + } 582 582 583 583 /* ... and no budget. */ 584 584 if (gate_mask != TAPRIO_ALL_GATES_OPEN && 585 - atomic_sub_return(len, &entry->budget) < 0) 585 + atomic_sub_return(len, &entry->budget) < 0) { 586 + skb = NULL; 586 587 continue; 588 + } 587 589 588 590 skb = child->ops->dequeue(child); 589 591 if (unlikely(!skb)) ··· 774 768 [TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME] = { .type = NLA_S64 }, 775 769 [TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION] = { .type = NLA_S64 }, 776 770 [TCA_TAPRIO_ATTR_FLAGS] = { .type = NLA_U32 }, 771 + [TCA_TAPRIO_ATTR_TXTIME_DELAY] = { .type = NLA_U32 }, 777 772 }; 778 773 779 774 static int fill_sched_entry(struct nlattr **tb, struct sched_entry *entry,
+2 -6
net/sctp/diag.c
··· 237 237 addrcnt++; 238 238 239 239 return nla_total_size(sizeof(struct sctp_info)) 240 - + nla_total_size(1) /* INET_DIAG_SHUTDOWN */ 241 - + nla_total_size(1) /* INET_DIAG_TOS */ 242 - + nla_total_size(1) /* INET_DIAG_TCLASS */ 243 - + nla_total_size(4) /* INET_DIAG_MARK */ 244 - + nla_total_size(4) /* INET_DIAG_CLASS_ID */ 245 240 + nla_total_size(addrlen * asoc->peer.transport_count) 246 241 + nla_total_size(addrlen * addrcnt) 247 - + nla_total_size(sizeof(struct inet_diag_meminfo)) 248 242 + nla_total_size(sizeof(struct inet_diag_msg)) 243 + + inet_diag_msg_attrs_size() 244 + + nla_total_size(sizeof(struct inet_diag_meminfo)) 249 245 + 64; 250 246 } 251 247
+1
net/smc/smc_ib.c
··· 582 582 smc_smcr_terminate_all(smcibdev); 583 583 smc_ib_cleanup_per_ibdev(smcibdev); 584 584 ib_unregister_event_handler(&smcibdev->event_handler); 585 + cancel_work_sync(&smcibdev->port_event_work); 585 586 kfree(smcibdev); 586 587 } 587 588
+5 -3
net/socket.c
··· 1707 1707 1708 1708 int __sys_accept4_file(struct file *file, unsigned file_flags, 1709 1709 struct sockaddr __user *upeer_sockaddr, 1710 - int __user *upeer_addrlen, int flags) 1710 + int __user *upeer_addrlen, int flags, 1711 + unsigned long nofile) 1711 1712 { 1712 1713 struct socket *sock, *newsock; 1713 1714 struct file *newfile; ··· 1739 1738 */ 1740 1739 __module_get(newsock->ops->owner); 1741 1740 1742 - newfd = get_unused_fd_flags(flags); 1741 + newfd = __get_unused_fd_flags(flags, nofile); 1743 1742 if (unlikely(newfd < 0)) { 1744 1743 err = newfd; 1745 1744 sock_release(newsock); ··· 1808 1807 f = fdget(fd); 1809 1808 if (f.file) { 1810 1809 ret = __sys_accept4_file(f.file, 0, upeer_sockaddr, 1811 - upeer_addrlen, flags); 1810 + upeer_addrlen, flags, 1811 + rlimit(RLIMIT_NOFILE)); 1812 1812 if (f.flags) 1813 1813 fput(f.file); 1814 1814 }
+1
net/tipc/netlink.c
··· 116 116 [TIPC_NLA_PROP_PRIO] = { .type = NLA_U32 }, 117 117 [TIPC_NLA_PROP_TOL] = { .type = NLA_U32 }, 118 118 [TIPC_NLA_PROP_WIN] = { .type = NLA_U32 }, 119 + [TIPC_NLA_PROP_MTU] = { .type = NLA_U32 }, 119 120 [TIPC_NLA_PROP_BROADCAST] = { .type = NLA_U32 }, 120 121 [TIPC_NLA_PROP_BROADCAST_RATIO] = { .type = NLA_U32 } 121 122 };
+5
net/wireless/nl80211.c
··· 470 470 [NL80211_ATTR_WOWLAN_TRIGGERS] = { .type = NLA_NESTED }, 471 471 [NL80211_ATTR_STA_PLINK_STATE] = 472 472 NLA_POLICY_MAX(NLA_U8, NUM_NL80211_PLINK_STATES - 1), 473 + [NL80211_ATTR_MEASUREMENT_DURATION] = { .type = NLA_U16 }, 474 + [NL80211_ATTR_MEASUREMENT_DURATION_MANDATORY] = { .type = NLA_FLAG }, 473 475 [NL80211_ATTR_MESH_PEER_AID] = 474 476 NLA_POLICY_RANGE(NLA_U16, 1, IEEE80211_MAX_AID), 475 477 [NL80211_ATTR_SCHED_SCAN_INTERVAL] = { .type = NLA_U32 }, ··· 533 531 [NL80211_ATTR_MDID] = { .type = NLA_U16 }, 534 532 [NL80211_ATTR_IE_RIC] = { .type = NLA_BINARY, 535 533 .len = IEEE80211_MAX_DATA_LEN }, 534 + [NL80211_ATTR_CRIT_PROT_ID] = { .type = NLA_U16 }, 535 + [NL80211_ATTR_MAX_CRIT_PROT_DURATION] = { .type = NLA_U16 }, 536 536 [NL80211_ATTR_PEER_AID] = 537 537 NLA_POLICY_RANGE(NLA_U16, 1, IEEE80211_MAX_AID), 538 538 [NL80211_ATTR_CH_SWITCH_COUNT] = { .type = NLA_U32 }, ··· 565 561 NLA_POLICY_MAX(NLA_U8, IEEE80211_NUM_UPS - 1), 566 562 [NL80211_ATTR_ADMITTED_TIME] = { .type = NLA_U16 }, 567 563 [NL80211_ATTR_SMPS_MODE] = { .type = NLA_U8 }, 564 + [NL80211_ATTR_OPER_CLASS] = { .type = NLA_U8 }, 568 565 [NL80211_ATTR_MAC_MASK] = { 569 566 .type = NLA_EXACT_LEN_WARN, 570 567 .len = ETH_ALEN
+7
scripts/Kconfig.include
··· 44 44 45 45 # gcc version including patch level 46 46 gcc-version := $(shell,$(srctree)/scripts/gcc-version.sh $(CC)) 47 + 48 + # machine bit flags 49 + # $(m32-flag): -m32 if the compiler supports it, or an empty string otherwise. 50 + # $(m64-flag): -m64 if the compiler supports it, or an empty string otherwise. 51 + cc-option-bit = $(if-success,$(CC) -Werror $(1) -E -x c /dev/null -o /dev/null,$(1)) 52 + m32-flag := $(cc-option-bit,-m32) 53 + m64-flag := $(cc-option-bit,-m64)
+1
scripts/Makefile.extrawarn
··· 48 48 KBUILD_CFLAGS += -Wno-format 49 49 KBUILD_CFLAGS += -Wno-sign-compare 50 50 KBUILD_CFLAGS += -Wno-format-zero-length 51 + KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast) 51 52 endif 52 53 53 54 endif
+1 -1
scripts/export_report.pl
··· 94 94 # 95 95 while ( <$module_symvers> ) { 96 96 chomp; 97 - my (undef, $symbol, $namespace, $module, $gpl) = split('\t'); 97 + my (undef, $symbol, $module, $gpl, $namespace) = split('\t'); 98 98 $SYMBOL { $symbol } = [ $module , "0" , $symbol, $gpl]; 99 99 } 100 100 close($module_symvers);
+4 -4
scripts/kallsyms.c
··· 195 195 return NULL; 196 196 } 197 197 198 - if (is_ignored_symbol(name, type)) 199 - return NULL; 200 - 201 - /* Ignore most absolute/undefined (?) symbols. */ 202 198 if (strcmp(name, "_text") == 0) 203 199 _text = addr; 200 + 201 + /* Ignore most absolute/undefined (?) symbols. */ 202 + if (is_ignored_symbol(name, type)) 203 + return NULL; 204 204 205 205 check_symbol_range(name, addr, text_ranges, ARRAY_SIZE(text_ranges)); 206 206 check_symbol_range(name, addr, &percpu_range, 1);
+14 -13
scripts/mod/modpost.c
··· 308 308 309 309 static void *sym_get_data(const struct elf_info *info, const Elf_Sym *sym) 310 310 { 311 - Elf_Shdr *sechdr = &info->sechdrs[sym->st_shndx]; 311 + unsigned int secindex = get_secindex(info, sym); 312 + Elf_Shdr *sechdr = &info->sechdrs[secindex]; 312 313 unsigned long offset; 313 314 314 315 offset = sym->st_value; ··· 2428 2427 } 2429 2428 2430 2429 /* parse Module.symvers file. line format: 2431 - * 0x12345678<tab>symbol<tab>module[[<tab>export]<tab>something] 2430 + * 0x12345678<tab>symbol<tab>module<tab>export<tab>namespace 2432 2431 **/ 2433 2432 static void read_dump(const char *fname, unsigned int kernel) 2434 2433 { ··· 2441 2440 return; 2442 2441 2443 2442 while ((line = get_next_line(&pos, file, size))) { 2444 - char *symname, *namespace, *modname, *d, *export, *end; 2443 + char *symname, *namespace, *modname, *d, *export; 2445 2444 unsigned int crc; 2446 2445 struct module *mod; 2447 2446 struct symbol *s; ··· 2449 2448 if (!(symname = strchr(line, '\t'))) 2450 2449 goto fail; 2451 2450 *symname++ = '\0'; 2452 - if (!(namespace = strchr(symname, '\t'))) 2453 - goto fail; 2454 - *namespace++ = '\0'; 2455 - if (!(modname = strchr(namespace, '\t'))) 2451 + if (!(modname = strchr(symname, '\t'))) 2456 2452 goto fail; 2457 2453 *modname++ = '\0'; 2458 - if ((export = strchr(modname, '\t')) != NULL) 2459 - *export++ = '\0'; 2460 - if (export && ((end = strchr(export, '\t')) != NULL)) 2461 - *end = '\0'; 2454 + if (!(export = strchr(modname, '\t'))) 2455 + goto fail; 2456 + *export++ = '\0'; 2457 + if (!(namespace = strchr(export, '\t'))) 2458 + goto fail; 2459 + *namespace++ = '\0'; 2460 + 2462 2461 crc = strtoul(line, &d, 16); 2463 2462 if (*symname == '\0' || *modname == '\0' || *d != '\0') 2464 2463 goto fail; ··· 2509 2508 namespace = symbol->namespace; 2510 2509 buf_printf(&buf, "0x%08x\t%s\t%s\t%s\t%s\n", 2511 2510 symbol->crc, symbol->name, 2512 - namespace ? 
namespace : "", 2513 2511 symbol->module->name, 2514 - export_str(symbol->export)); 2512 + export_str(symbol->export), 2513 + namespace ? namespace : ""); 2515 2514 } 2516 2515 symbol = symbol->next; 2517 2516 }
+10 -2
sound/core/oss/pcm_plugin.c
··· 111 111 while (plugin->next) { 112 112 if (plugin->dst_frames) 113 113 frames = plugin->dst_frames(plugin, frames); 114 - if (snd_BUG_ON((snd_pcm_sframes_t)frames <= 0)) 114 + if ((snd_pcm_sframes_t)frames <= 0) 115 115 return -ENXIO; 116 116 plugin = plugin->next; 117 117 err = snd_pcm_plugin_alloc(plugin, frames); ··· 123 123 while (plugin->prev) { 124 124 if (plugin->src_frames) 125 125 frames = plugin->src_frames(plugin, frames); 126 - if (snd_BUG_ON((snd_pcm_sframes_t)frames <= 0)) 126 + if ((snd_pcm_sframes_t)frames <= 0) 127 127 return -ENXIO; 128 128 plugin = plugin->prev; 129 129 err = snd_pcm_plugin_alloc(plugin, frames); ··· 209 209 if (stream == SNDRV_PCM_STREAM_PLAYBACK) { 210 210 plugin = snd_pcm_plug_last(plug); 211 211 while (plugin && drv_frames > 0) { 212 + if (drv_frames > plugin->buf_frames) 213 + drv_frames = plugin->buf_frames; 212 214 plugin_prev = plugin->prev; 213 215 if (plugin->src_frames) 214 216 drv_frames = plugin->src_frames(plugin, drv_frames); ··· 222 220 plugin_next = plugin->next; 223 221 if (plugin->dst_frames) 224 222 drv_frames = plugin->dst_frames(plugin, drv_frames); 223 + if (drv_frames > plugin->buf_frames) 224 + drv_frames = plugin->buf_frames; 225 225 plugin = plugin_next; 226 226 } 227 227 } else ··· 252 248 if (frames < 0) 253 249 return frames; 254 250 } 251 + if (frames > plugin->buf_frames) 252 + frames = plugin->buf_frames; 255 253 plugin = plugin_next; 256 254 } 257 255 } else if (stream == SNDRV_PCM_STREAM_CAPTURE) { 258 256 plugin = snd_pcm_plug_last(plug); 259 257 while (plugin) { 258 + if (frames > plugin->buf_frames) 259 + frames = plugin->buf_frames; 260 260 plugin_prev = plugin->prev; 261 261 if (plugin->src_frames) { 262 262 frames = plugin->src_frames(plugin, frames);
+1
sound/core/seq/oss/seq_oss_midi.c
··· 602 602 len = snd_seq_oss_timer_start(dp->timer); 603 603 if (ev->type == SNDRV_SEQ_EVENT_SYSEX) { 604 604 snd_seq_oss_readq_sysex(dp->readq, mdev->seq_device, ev); 605 + snd_midi_event_reset_decode(mdev->coder); 605 606 } else { 606 607 len = snd_midi_event_decode(mdev->coder, msg, sizeof(msg), ev); 607 608 if (len > 0)
+1
sound/core/seq/seq_virmidi.c
··· 81 81 if ((ev->flags & SNDRV_SEQ_EVENT_LENGTH_MASK) != SNDRV_SEQ_EVENT_LENGTH_VARIABLE) 82 82 continue; 83 83 snd_seq_dump_var_event(ev, (snd_seq_dump_func_t)snd_rawmidi_receive, vmidi->substream); 84 + snd_midi_event_reset_decode(vmidi->parser); 84 85 } else { 85 86 len = snd_midi_event_decode(vmidi->parser, msg, sizeof(msg), ev); 86 87 if (len > 0)
+25
sound/pci/hda/patch_realtek.c
··· 8051 8051 spec->gen.mixer_nid = 0; 8052 8052 break; 8053 8053 case 0x10ec0225: 8054 + codec->power_save_node = 1; 8055 + /* fall through */ 8054 8056 case 0x10ec0295: 8055 8057 case 0x10ec0299: 8056 8058 spec->codec_variant = ALC269_TYPE_ALC225; ··· 8612 8610 ALC669_FIXUP_ACER_ASPIRE_ETHOS, 8613 8611 ALC669_FIXUP_ACER_ASPIRE_ETHOS_HEADSET, 8614 8612 ALC671_FIXUP_HP_HEADSET_MIC2, 8613 + ALC662_FIXUP_ACER_X2660G_HEADSET_MODE, 8614 + ALC662_FIXUP_ACER_NITRO_HEADSET_MODE, 8615 8615 }; 8616 8616 8617 8617 static const struct hda_fixup alc662_fixups[] = { ··· 8959 8955 .type = HDA_FIXUP_FUNC, 8960 8956 .v.func = alc671_fixup_hp_headset_mic2, 8961 8957 }, 8958 + [ALC662_FIXUP_ACER_X2660G_HEADSET_MODE] = { 8959 + .type = HDA_FIXUP_PINS, 8960 + .v.pins = (const struct hda_pintbl[]) { 8961 + { 0x1a, 0x02a1113c }, /* use as headset mic, without its own jack detect */ 8962 + { } 8963 + }, 8964 + .chained = true, 8965 + .chain_id = ALC662_FIXUP_USI_FUNC 8966 + }, 8967 + [ALC662_FIXUP_ACER_NITRO_HEADSET_MODE] = { 8968 + .type = HDA_FIXUP_PINS, 8969 + .v.pins = (const struct hda_pintbl[]) { 8970 + { 0x1a, 0x01a11140 }, /* use as headset mic, without its own jack detect */ 8971 + { 0x1b, 0x0221144f }, 8972 + { } 8973 + }, 8974 + .chained = true, 8975 + .chain_id = ALC662_FIXUP_USI_FUNC 8976 + }, 8962 8977 }; 8963 8978 8964 8979 static const struct snd_pci_quirk alc662_fixup_tbl[] = { ··· 8989 8966 SND_PCI_QUIRK(0x1025, 0x0349, "eMachines eM250", ALC662_FIXUP_INV_DMIC), 8990 8967 SND_PCI_QUIRK(0x1025, 0x034a, "Gateway LT27", ALC662_FIXUP_INV_DMIC), 8991 8968 SND_PCI_QUIRK(0x1025, 0x038b, "Acer Aspire 8943G", ALC662_FIXUP_ASPIRE), 8969 + SND_PCI_QUIRK(0x1025, 0x123c, "Acer Nitro N50-600", ALC662_FIXUP_ACER_NITRO_HEADSET_MODE), 8970 + SND_PCI_QUIRK(0x1025, 0x124e, "Acer 2660G", ALC662_FIXUP_ACER_X2660G_HEADSET_MODE), 8992 8971 SND_PCI_QUIRK(0x1028, 0x05d8, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 8993 8972 SND_PCI_QUIRK(0x1028, 0x05db, "Dell", 
ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 8994 8973 SND_PCI_QUIRK(0x1028, 0x05fe, "Dell XPS 15", ALC668_FIXUP_DELL_XPS13),
+1 -1
sound/usb/line6/driver.c
··· 305 305 line6_midibuf_read(mb, line6->buffer_message, 306 306 LINE6_MIDI_MESSAGE_MAXLEN); 307 307 308 - if (done == 0) 308 + if (done <= 0) 309 309 break; 310 310 311 311 line6->message_length = done;
+1 -1
sound/usb/line6/midibuf.c
··· 159 159 int midi_length_prev = 160 160 midibuf_message_length(this->command_prev); 161 161 162 - if (midi_length_prev > 0) { 162 + if (midi_length_prev > 1) { 163 163 midi_length = midi_length_prev - 1; 164 164 repeat = 1; 165 165 } else
+7 -7
tools/include/uapi/asm/errno.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 #if defined(__i386__) || defined(__x86_64__) 3 - #include "../../arch/x86/include/uapi/asm/errno.h" 3 + #include "../../../arch/x86/include/uapi/asm/errno.h" 4 4 #elif defined(__powerpc__) 5 - #include "../../arch/powerpc/include/uapi/asm/errno.h" 5 + #include "../../../arch/powerpc/include/uapi/asm/errno.h" 6 6 #elif defined(__sparc__) 7 - #include "../../arch/sparc/include/uapi/asm/errno.h" 7 + #include "../../../arch/sparc/include/uapi/asm/errno.h" 8 8 #elif defined(__alpha__) 9 - #include "../../arch/alpha/include/uapi/asm/errno.h" 9 + #include "../../../arch/alpha/include/uapi/asm/errno.h" 10 10 #elif defined(__mips__) 11 - #include "../../arch/mips/include/uapi/asm/errno.h" 11 + #include "../../../arch/mips/include/uapi/asm/errno.h" 12 12 #elif defined(__ia64__) 13 - #include "../../arch/ia64/include/uapi/asm/errno.h" 13 + #include "../../../arch/ia64/include/uapi/asm/errno.h" 14 14 #elif defined(__xtensa__) 15 - #include "../../arch/xtensa/include/uapi/asm/errno.h" 15 + #include "../../../arch/xtensa/include/uapi/asm/errno.h" 16 16 #else 17 17 #include <asm-generic/errno.h> 18 18 #endif
+10 -10
tools/perf/arch/arm64/util/arm-spe.c
··· 11 11 #include <linux/zalloc.h> 12 12 #include <time.h> 13 13 14 - #include "../../util/cpumap.h" 15 - #include "../../util/event.h" 16 - #include "../../util/evsel.h" 17 - #include "../../util/evlist.h" 18 - #include "../../util/session.h" 14 + #include "../../../util/cpumap.h" 15 + #include "../../../util/event.h" 16 + #include "../../../util/evsel.h" 17 + #include "../../../util/evlist.h" 18 + #include "../../../util/session.h" 19 19 #include <internal/lib.h> // page_size 20 - #include "../../util/pmu.h" 21 - #include "../../util/debug.h" 22 - #include "../../util/auxtrace.h" 23 - #include "../../util/record.h" 24 - #include "../../util/arm-spe.h" 20 + #include "../../../util/pmu.h" 21 + #include "../../../util/debug.h" 22 + #include "../../../util/auxtrace.h" 23 + #include "../../../util/record.h" 24 + #include "../../../util/arm-spe.h" 25 25 26 26 #define KiB(x) ((x) * 1024) 27 27 #define MiB(x) ((x) * 1024 * 1024)
+1 -1
tools/perf/arch/arm64/util/perf_regs.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - #include "../../util/perf_regs.h" 2 + #include "../../../util/perf_regs.h" 3 3 4 4 const struct sample_reg sample_reg_masks[] = { 5 5 SMPL_REG_END
+2 -2
tools/perf/arch/powerpc/util/perf_regs.c
··· 4 4 #include <regex.h> 5 5 #include <linux/zalloc.h> 6 6 7 - #include "../../util/perf_regs.h" 8 - #include "../../util/debug.h" 7 + #include "../../../util/perf_regs.h" 8 + #include "../../../util/debug.h" 9 9 10 10 #include <linux/kernel.h> 11 11
+7 -7
tools/perf/arch/x86/util/auxtrace.c
··· 7 7 #include <errno.h> 8 8 #include <stdbool.h> 9 9 10 - #include "../../util/header.h" 11 - #include "../../util/debug.h" 12 - #include "../../util/pmu.h" 13 - #include "../../util/auxtrace.h" 14 - #include "../../util/intel-pt.h" 15 - #include "../../util/intel-bts.h" 16 - #include "../../util/evlist.h" 10 + #include "../../../util/header.h" 11 + #include "../../../util/debug.h" 12 + #include "../../../util/pmu.h" 13 + #include "../../../util/auxtrace.h" 14 + #include "../../../util/intel-pt.h" 15 + #include "../../../util/intel-bts.h" 16 + #include "../../../util/evlist.h" 17 17 18 18 static 19 19 struct auxtrace_record *auxtrace_record__init_intel(struct evlist *evlist,
+6 -6
tools/perf/arch/x86/util/event.c
··· 3 3 #include <linux/string.h> 4 4 #include <linux/zalloc.h> 5 5 6 - #include "../../util/event.h" 7 - #include "../../util/synthetic-events.h" 8 - #include "../../util/machine.h" 9 - #include "../../util/tool.h" 10 - #include "../../util/map.h" 11 - #include "../../util/debug.h" 6 + #include "../../../util/event.h" 7 + #include "../../../util/synthetic-events.h" 8 + #include "../../../util/machine.h" 9 + #include "../../../util/tool.h" 10 + #include "../../../util/map.h" 11 + #include "../../../util/debug.h" 12 12 13 13 #if defined(__x86_64__) 14 14
+2 -2
tools/perf/arch/x86/util/header.c
··· 7 7 #include <string.h> 8 8 #include <regex.h> 9 9 10 - #include "../../util/debug.h" 11 - #include "../../util/header.h" 10 + #include "../../../util/debug.h" 11 + #include "../../../util/header.h" 12 12 13 13 static inline void 14 14 cpuid(unsigned int op, unsigned int *a, unsigned int *b, unsigned int *c,
+12 -12
tools/perf/arch/x86/util/intel-bts.c
··· 11 11 #include <linux/log2.h> 12 12 #include <linux/zalloc.h> 13 13 14 - #include "../../util/cpumap.h" 15 - #include "../../util/event.h" 16 - #include "../../util/evsel.h" 17 - #include "../../util/evlist.h" 18 - #include "../../util/mmap.h" 19 - #include "../../util/session.h" 20 - #include "../../util/pmu.h" 21 - #include "../../util/debug.h" 22 - #include "../../util/record.h" 23 - #include "../../util/tsc.h" 24 - #include "../../util/auxtrace.h" 25 - #include "../../util/intel-bts.h" 14 + #include "../../../util/cpumap.h" 15 + #include "../../../util/event.h" 16 + #include "../../../util/evsel.h" 17 + #include "../../../util/evlist.h" 18 + #include "../../../util/mmap.h" 19 + #include "../../../util/session.h" 20 + #include "../../../util/pmu.h" 21 + #include "../../../util/debug.h" 22 + #include "../../../util/record.h" 23 + #include "../../../util/tsc.h" 24 + #include "../../../util/auxtrace.h" 25 + #include "../../../util/intel-bts.h" 26 26 #include <internal/lib.h> // page_size 27 27 28 28 #define KiB(x) ((x) * 1024)
+15 -15
tools/perf/arch/x86/util/intel-pt.c
··· 13 13 #include <linux/zalloc.h> 14 14 #include <cpuid.h> 15 15 16 - #include "../../util/session.h" 17 - #include "../../util/event.h" 18 - #include "../../util/evlist.h" 19 - #include "../../util/evsel.h" 20 - #include "../../util/evsel_config.h" 21 - #include "../../util/cpumap.h" 22 - #include "../../util/mmap.h" 16 + #include "../../../util/session.h" 17 + #include "../../../util/event.h" 18 + #include "../../../util/evlist.h" 19 + #include "../../../util/evsel.h" 20 + #include "../../../util/evsel_config.h" 21 + #include "../../../util/cpumap.h" 22 + #include "../../../util/mmap.h" 23 23 #include <subcmd/parse-options.h> 24 - #include "../../util/parse-events.h" 25 - #include "../../util/pmu.h" 26 - #include "../../util/debug.h" 27 - #include "../../util/auxtrace.h" 28 - #include "../../util/record.h" 29 - #include "../../util/target.h" 30 - #include "../../util/tsc.h" 24 + #include "../../../util/parse-events.h" 25 + #include "../../../util/pmu.h" 26 + #include "../../../util/debug.h" 27 + #include "../../../util/auxtrace.h" 28 + #include "../../../util/record.h" 29 + #include "../../../util/target.h" 30 + #include "../../../util/tsc.h" 31 31 #include <internal/lib.h> // page_size 32 - #include "../../util/intel-pt.h" 32 + #include "../../../util/intel-pt.h" 33 33 34 34 #define KiB(x) ((x) * 1024) 35 35 #define MiB(x) ((x) * 1024 * 1024)
+3 -3
tools/perf/arch/x86/util/machine.c
··· 5 5 #include <stdlib.h> 6 6 7 7 #include <internal/lib.h> // page_size 8 - #include "../../util/machine.h" 9 - #include "../../util/map.h" 10 - #include "../../util/symbol.h" 8 + #include "../../../util/machine.h" 9 + #include "../../../util/map.h" 10 + #include "../../../util/symbol.h" 11 11 #include <linux/ctype.h> 12 12 13 13 #include <symbol/kallsyms.h>
+4 -4
tools/perf/arch/x86/util/perf_regs.c
··· 5 5 #include <linux/kernel.h> 6 6 #include <linux/zalloc.h> 7 7 8 - #include "../../perf-sys.h" 9 - #include "../../util/perf_regs.h" 10 - #include "../../util/debug.h" 11 - #include "../../util/event.h" 8 + #include "../../../perf-sys.h" 9 + #include "../../../util/perf_regs.h" 10 + #include "../../../util/debug.h" 11 + #include "../../../util/event.h" 12 12 13 13 const struct sample_reg sample_reg_masks[] = { 14 14 SMPL_REG(AX, PERF_REG_X86_AX),
+3 -3
tools/perf/arch/x86/util/pmu.c
··· 4 4 #include <linux/stddef.h> 5 5 #include <linux/perf_event.h> 6 6 7 - #include "../../util/intel-pt.h" 8 - #include "../../util/intel-bts.h" 9 - #include "../../util/pmu.h" 7 + #include "../../../util/intel-pt.h" 8 + #include "../../../util/intel-bts.h" 9 + #include "../../../util/pmu.h" 10 10 11 11 struct perf_event_attr *perf_pmu__get_default_config(struct perf_pmu *pmu __maybe_unused) 12 12 {
+4
tools/perf/bench/bench.h
··· 2 2 #ifndef BENCH_H 3 3 #define BENCH_H 4 4 5 + #include <sys/time.h> 6 + 7 + extern struct timeval bench__start, bench__end, bench__runtime; 8 + 5 9 /* 6 10 * The madvise transparent hugepage constants were added in glibc 7 11 * 2.13. For compatibility with older versions of glibc, define these
+4 -4
tools/perf/bench/epoll-ctl.c
··· 35 35 36 36 static unsigned int nthreads = 0; 37 37 static unsigned int nsecs = 8; 38 - struct timeval start, end, runtime; 39 38 static bool done, __verbose, randomize; 40 39 41 40 /* ··· 93 94 { 94 95 /* inform all threads that we're done for the day */ 95 96 done = true; 96 - gettimeofday(&end, NULL); 97 - timersub(&end, &start, &runtime); 97 + gettimeofday(&bench__end, NULL); 98 + timersub(&bench__end, &bench__start, &bench__runtime); 98 99 } 99 100 100 101 static void nest_epollfd(void) ··· 312 313 exit(EXIT_FAILURE); 313 314 } 314 315 316 + memset(&act, 0, sizeof(act)); 315 317 sigfillset(&act.sa_mask); 316 318 act.sa_sigaction = toggle_done; 317 319 sigaction(SIGINT, &act, NULL); ··· 361 361 362 362 threads_starting = nthreads; 363 363 364 - gettimeofday(&start, NULL); 364 + gettimeofday(&bench__start, NULL); 365 365 366 366 do_threads(worker, cpu); 367 367
+6 -6
tools/perf/bench/epoll-wait.c
··· 90 90 91 91 static unsigned int nthreads = 0; 92 92 static unsigned int nsecs = 8; 93 - struct timeval start, end, runtime; 94 93 static bool wdone, done, __verbose, randomize, nonblocking; 95 94 96 95 /* ··· 275 276 { 276 277 /* inform all threads that we're done for the day */ 277 278 done = true; 278 - gettimeofday(&end, NULL); 279 - timersub(&end, &start, &runtime); 279 + gettimeofday(&bench__end, NULL); 280 + timersub(&bench__end, &bench__start, &bench__runtime); 280 281 } 281 282 282 283 static void print_summary(void) ··· 286 287 287 288 printf("\nAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n", 288 289 avg, rel_stddev_stats(stddev, avg), 289 - (int) runtime.tv_sec); 290 + (int)bench__runtime.tv_sec); 290 291 } 291 292 292 293 static int do_threads(struct worker *worker, struct perf_cpu_map *cpu) ··· 426 427 exit(EXIT_FAILURE); 427 428 } 428 429 430 + memset(&act, 0, sizeof(act)); 429 431 sigfillset(&act.sa_mask); 430 432 act.sa_sigaction = toggle_done; 431 433 sigaction(SIGINT, &act, NULL); ··· 479 479 480 480 threads_starting = nthreads; 481 481 482 - gettimeofday(&start, NULL); 482 + gettimeofday(&bench__start, NULL); 483 483 484 484 do_threads(worker, cpu); 485 485 ··· 519 519 qsort(worker, nthreads, sizeof(struct worker), cmpworker); 520 520 521 521 for (i = 0; i < nthreads; i++) { 522 - unsigned long t = worker[i].ops/runtime.tv_sec; 522 + unsigned long t = worker[i].ops / bench__runtime.tv_sec; 523 523 524 524 update_stats(&throughput_stats, t); 525 525
+7 -6
tools/perf/bench/futex-hash.c
··· 37 37 static bool fshared = false, done = false, silent = false; 38 38 static int futex_flag = 0; 39 39 40 - struct timeval start, end, runtime; 40 + struct timeval bench__start, bench__end, bench__runtime; 41 41 static pthread_mutex_t thread_lock; 42 42 static unsigned int threads_starting; 43 43 static struct stats throughput_stats; ··· 103 103 { 104 104 /* inform all threads that we're done for the day */ 105 105 done = true; 106 - gettimeofday(&end, NULL); 107 - timersub(&end, &start, &runtime); 106 + gettimeofday(&bench__end, NULL); 107 + timersub(&bench__end, &bench__start, &bench__runtime); 108 108 } 109 109 110 110 static void print_summary(void) ··· 114 114 115 115 printf("%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n", 116 116 !silent ? "\n" : "", avg, rel_stddev_stats(stddev, avg), 117 - (int) runtime.tv_sec); 117 + (int)bench__runtime.tv_sec); 118 118 } 119 119 120 120 int bench_futex_hash(int argc, const char **argv) ··· 137 137 if (!cpu) 138 138 goto errmem; 139 139 140 + memset(&act, 0, sizeof(act)); 140 141 sigfillset(&act.sa_mask); 141 142 act.sa_sigaction = toggle_done; 142 143 sigaction(SIGINT, &act, NULL); ··· 162 161 163 162 threads_starting = nthreads; 164 163 pthread_attr_init(&thread_attr); 165 - gettimeofday(&start, NULL); 164 + gettimeofday(&bench__start, NULL); 166 165 for (i = 0; i < nthreads; i++) { 167 166 worker[i].tid = i; 168 167 worker[i].futex = calloc(nfutexes, sizeof(*worker[i].futex)); ··· 205 204 pthread_mutex_destroy(&thread_lock); 206 205 207 206 for (i = 0; i < nthreads; i++) { 208 - unsigned long t = worker[i].ops/runtime.tv_sec; 207 + unsigned long t = worker[i].ops / bench__runtime.tv_sec; 209 208 update_stats(&throughput_stats, t); 210 209 if (!silent) { 211 210 if (nfutexes == 1)
+6 -6
tools/perf/bench/futex-lock-pi.c
··· 37 37 static bool done = false, fshared = false; 38 38 static unsigned int nthreads = 0; 39 39 static int futex_flag = 0; 40 - struct timeval start, end, runtime; 41 40 static pthread_mutex_t thread_lock; 42 41 static unsigned int threads_starting; 43 42 static struct stats throughput_stats; ··· 63 64 64 65 printf("%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n", 65 66 !silent ? "\n" : "", avg, rel_stddev_stats(stddev, avg), 66 - (int) runtime.tv_sec); 67 + (int)bench__runtime.tv_sec); 67 68 } 68 69 69 70 static void toggle_done(int sig __maybe_unused, ··· 72 73 { 73 74 /* inform all threads that we're done for the day */ 74 75 done = true; 75 - gettimeofday(&end, NULL); 76 - timersub(&end, &start, &runtime); 76 + gettimeofday(&bench__end, NULL); 77 + timersub(&bench__end, &bench__start, &bench__runtime); 77 78 } 78 79 79 80 static void *workerfn(void *arg) ··· 160 161 if (!cpu) 161 162 err(EXIT_FAILURE, "calloc"); 162 163 164 + memset(&act, 0, sizeof(act)); 163 165 sigfillset(&act.sa_mask); 164 166 act.sa_sigaction = toggle_done; 165 167 sigaction(SIGINT, &act, NULL); ··· 185 185 186 186 threads_starting = nthreads; 187 187 pthread_attr_init(&thread_attr); 188 - gettimeofday(&start, NULL); 188 + gettimeofday(&bench__start, NULL); 189 189 190 190 create_threads(worker, thread_attr, cpu); 191 191 pthread_attr_destroy(&thread_attr); ··· 211 211 pthread_mutex_destroy(&thread_lock); 212 212 213 213 for (i = 0; i < nthreads; i++) { 214 - unsigned long t = worker[i].ops/runtime.tv_sec; 214 + unsigned long t = worker[i].ops / bench__runtime.tv_sec; 215 215 216 216 update_stats(&throughput_stats, t); 217 217 if (!silent)
+1
tools/perf/bench/futex-requeue.c
··· 128 128 if (!cpu) 129 129 err(EXIT_FAILURE, "cpu_map__new"); 130 130 131 + memset(&act, 0, sizeof(act)); 131 132 sigfillset(&act.sa_mask); 132 133 act.sa_sigaction = toggle_done; 133 134 sigaction(SIGINT, &act, NULL);
+1
tools/perf/bench/futex-wake-parallel.c
··· 234 234 exit(EXIT_FAILURE); 235 235 } 236 236 237 + memset(&act, 0, sizeof(act)); 237 238 sigfillset(&act.sa_mask); 238 239 act.sa_sigaction = toggle_done; 239 240 sigaction(SIGINT, &act, NULL);
+3 -2
tools/perf/bench/futex-wake.c
··· 43 43 static pthread_mutex_t thread_lock; 44 44 static pthread_cond_t thread_parent, thread_worker; 45 45 static struct stats waketime_stats, wakeup_stats; 46 - static unsigned int ncpus, threads_starting, nthreads = 0; 46 + static unsigned int threads_starting, nthreads = 0; 47 47 static int futex_flag = 0; 48 48 49 49 static const struct option options[] = { ··· 136 136 if (!cpu) 137 137 err(EXIT_FAILURE, "calloc"); 138 138 139 + memset(&act, 0, sizeof(act)); 139 140 sigfillset(&act.sa_mask); 140 141 act.sa_sigaction = toggle_done; 141 142 sigaction(SIGINT, &act, NULL); 142 143 143 144 if (!nthreads) 144 - nthreads = ncpus; 145 + nthreads = cpu->nr; 145 146 146 147 worker = calloc(nthreads, sizeof(*worker)); 147 148 if (!worker)
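The futex-wake fix above replaces a never-assigned `ncpus` (so `nthreads` silently defaulted to 0) with the CPU map's count. A minimal sketch of the same defaulting logic, using portable `sysconf` rather than perf's `perf_cpu_map` (an assumption for self-containment, not the patch's actual API):

```c
#include <unistd.h>

/* Return the requested thread count, defaulting to the number of
 * online CPUs when the caller passed 0 (i.e. no explicit option). */
static unsigned int resolve_nthreads(unsigned int requested)
{
	if (requested)
		return requested;

	long n = sysconf(_SC_NPROCESSORS_ONLN);	/* online CPUs */
	return n > 0 ? (unsigned int)n : 1;	/* fall back to 1 thread */
}
```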
+2 -1
tools/perf/builtin-diff.c
··· 1312 1312 end_line = map__srcline(he->ms.map, bi->sym->start + bi->end, 1313 1313 he->ms.sym); 1314 1314 1315 - if ((start_line != SRCLINE_UNKNOWN) && (end_line != SRCLINE_UNKNOWN)) { 1315 + if ((strncmp(start_line, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0) && 1316 + (strncmp(end_line, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0)) { 1316 1317 scnprintf(buf, sizeof(buf), "[%s -> %s] %4ld", 1317 1318 start_line, end_line, block_he->diff.cycles); 1318 1319 } else {
+3 -1
tools/perf/builtin-top.c
··· 684 684 delay_msecs = top->delay_secs * MSEC_PER_SEC; 685 685 set_term_quiet_input(&save); 686 686 /* trash return*/ 687 - getc(stdin); 687 + clearerr(stdin); 688 + if (poll(&stdin_poll, 1, 0) > 0) 689 + getc(stdin); 688 690 689 691 while (!done) { 690 692 perf_top__print_sym_table(top);
+9 -6
tools/perf/pmu-events/jevents.c
··· 1082 1082 */ 1083 1083 int main(int argc, char *argv[]) 1084 1084 { 1085 - int rc; 1085 + int rc, ret = 0; 1086 1086 int maxfds; 1087 1087 char ldirname[PATH_MAX]; 1088 - 1089 1088 const char *arch; 1090 1089 const char *output_file; 1091 1090 const char *start_dirname; ··· 1155 1156 /* Make build fail */ 1156 1157 fclose(eventsfp); 1157 1158 free_arch_std_events(); 1158 - return 1; 1159 + ret = 1; 1160 + goto out_free_mapfile; 1159 1161 } else if (rc) { 1160 1162 goto empty_map; 1161 1163 } ··· 1174 1174 /* Make build fail */ 1175 1175 fclose(eventsfp); 1176 1176 free_arch_std_events(); 1177 - return 1; 1177 + ret = 1; 1178 1178 } 1179 1179 1180 - return 0; 1180 + 1181 + goto out_free_mapfile; 1181 1182 1182 1183 empty_map: 1183 1184 fclose(eventsfp); 1184 1185 create_empty_mapping(output_file); 1185 1186 free_arch_std_events(); 1186 - return 0; 1187 + out_free_mapfile: 1188 + free(mapfile); 1189 + return ret; 1187 1190 }
+1 -1
tools/perf/tests/bp_account.c
··· 19 19 #include "../perf-sys.h" 20 20 #include "cloexec.h" 21 21 22 - volatile long the_var; 22 + static volatile long the_var; 23 23 24 24 static noinline int test_function(void) 25 25 {
+2 -1
tools/perf/util/block-info.c
··· 295 295 end_line = map__srcline(he->ms.map, bi->sym->start + bi->end, 296 296 he->ms.sym); 297 297 298 - if ((start_line != SRCLINE_UNKNOWN) && (end_line != SRCLINE_UNKNOWN)) { 298 + if ((strncmp(start_line, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0) && 299 + (strncmp(end_line, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0)) { 299 300 scnprintf(buf, sizeof(buf), "[%s -> %s]", 300 301 start_line, end_line); 301 302 } else {
+2 -2
tools/perf/util/env.c
··· 343 343 344 344 const char *perf_env__arch(struct perf_env *env) 345 345 { 346 - struct utsname uts; 347 346 char *arch_name; 348 347 349 348 if (!env || !env->arch) { /* Assume local operation */ 350 - if (uname(&uts) < 0) 349 + static struct utsname uts = { .machine[0] = '\0', }; 350 + if (uts.machine[0] == '\0' && uname(&uts) < 0) 351 351 return NULL; 352 352 arch_name = uts.machine; 353 353 } else
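The `perf_env__arch()` change above fixes two things at once: the stack-local `struct utsname` meant `arch_name = uts.machine` pointed at memory that died when the function returned, and `uname(2)` was re-run on every call. Making the buffer `static` gives it program lifetime and caches the first result. A minimal sketch of the pattern (a static struct is zero-initialized, so an empty `machine` string marks "not yet fetched"):

```c
#include <sys/utsname.h>
#include <stddef.h>

/* Return the machine architecture string, calling uname(2) at most
 * once. The static buffer outlives the call, unlike a stack-local
 * struct utsname whose fields would dangle after return. */
static const char *local_arch(void)
{
	static struct utsname uts;	/* zero-initialized: machine[0] == '\0' */

	if (uts.machine[0] == '\0' && uname(&uts) < 0)
		return NULL;	/* uname failed and nothing is cached */
	return uts.machine;
}
```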
+1 -1
tools/perf/util/map.c
··· 431 431 432 432 if (map && map->dso) { 433 433 char *srcline = map__srcline(map, addr, NULL); 434 - if (srcline != SRCLINE_UNKNOWN) 434 + if (strncmp(srcline, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0) 435 435 ret = fprintf(fp, "%s%s", prefix, srcline); 436 436 free_srcline(srcline); 437 437 }
+2 -8
tools/perf/util/parse-events.c
··· 257 257 path = zalloc(sizeof(*path)); 258 258 if (!path) 259 259 return NULL; 260 - path->system = malloc(MAX_EVENT_LENGTH); 261 - if (!path->system) { 260 + if (asprintf(&path->system, "%.*s", MAX_EVENT_LENGTH, sys_dirent->d_name) < 0) { 262 261 free(path); 263 262 return NULL; 264 263 } 265 - path->name = malloc(MAX_EVENT_LENGTH); 266 - if (!path->name) { 264 + if (asprintf(&path->name, "%.*s", MAX_EVENT_LENGTH, evt_dirent->d_name) < 0) { 267 265 zfree(&path->system); 268 266 free(path); 269 267 return NULL; 270 268 } 271 - strncpy(path->system, sys_dirent->d_name, 272 - MAX_EVENT_LENGTH); 273 - strncpy(path->name, evt_dirent->d_name, 274 - MAX_EVENT_LENGTH); 275 269 return path; 276 270 } 277 271 }
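The parse-events change above addresses a classic `strncpy()` hazard: when the source is at least `MAX_EVENT_LENGTH` bytes long, no NUL terminator is written, while `asprintf("%.*s", …)` allocates exactly enough and always terminates. A sketch of both behaviours with an illustrative 4-byte limit (portable `snprintf` is used for the safe variant in place of the GNU `asprintf` extension):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAXLEN 4	/* illustrative stand-in for MAX_EVENT_LENGTH */

/* Hazard: when strlen(src) >= MAXLEN, strncpy() copies MAXLEN bytes
 * and writes no NUL terminator into dst. */
static void unsafe_copy(char *dst, const char *src)
{
	strncpy(dst, src, MAXLEN);
}

/* Safe: allocate exactly enough and always terminate, mirroring the
 * patch's asprintf(&p, "%.*s", MAX_EVENT_LENGTH, name). */
static char *bounded_copy(const char *src)
{
	char *out = malloc(MAXLEN + 1);
	if (!out)
		return NULL;
	snprintf(out, MAXLEN + 1, "%.*s", MAXLEN, src);
	return out;
}
```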
+6 -7
tools/perf/util/symbol.c
··· 1622 1622 goto out; 1623 1623 } 1624 1624 1625 - if (dso->kernel) { 1625 + kmod = dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE || 1626 + dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE_COMP || 1627 + dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE || 1628 + dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE_COMP; 1629 + 1630 + if (dso->kernel && !kmod) { 1626 1631 if (dso->kernel == DSO_TYPE_KERNEL) 1627 1632 ret = dso__load_kernel_sym(dso, map); 1628 1633 else if (dso->kernel == DSO_TYPE_GUEST_KERNEL) ··· 1654 1649 name = malloc(PATH_MAX); 1655 1650 if (!name) 1656 1651 goto out; 1657 - 1658 - kmod = dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE || 1659 - dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE_COMP || 1660 - dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE || 1661 - dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE_COMP; 1662 - 1663 1652 1664 1653 /* 1665 1654 * Read the build id if possible. This is required for
+1 -1
tools/power/cpupower/utils/idle_monitor/amd_fam14h_idle.c
··· 82 82 static struct pci_dev *amd_fam14h_pci_dev; 83 83 static int nbp1_entered; 84 84 85 - struct timespec start_time; 85 + static struct timespec start_time; 86 86 static unsigned long long timediff; 87 87 88 88 #ifdef DEBUG
+1 -1
tools/power/cpupower/utils/idle_monitor/cpuidle_sysfs.c
··· 19 19 20 20 static unsigned long long **previous_count; 21 21 static unsigned long long **current_count; 22 - struct timespec start_time; 22 + static struct timespec start_time; 23 23 static unsigned long long timediff; 24 24 25 25 static int cpuidle_get_count_percent(unsigned int id, double *percent,
+2
tools/power/cpupower/utils/idle_monitor/cpupower-monitor.c
··· 27 27 0 28 28 }; 29 29 30 + int cpu_count; 31 + 30 32 static struct cpuidle_monitor *monitors[MONITORS_MAX]; 31 33 static unsigned int avail_monitors; 32 34
+1 -1
tools/power/cpupower/utils/idle_monitor/cpupower-monitor.h
··· 25 25 #endif 26 26 #define CSTATE_DESC_LEN 60 27 27 28 - int cpu_count; 28 + extern int cpu_count; 29 29 30 30 /* Hard to define the right names ...: */ 31 31 enum power_range_e {
+1 -1
tools/power/x86/turbostat/Makefile
··· 16 16 17 17 %: %.c 18 18 @mkdir -p $(BUILD_OUTPUT) 19 - $(CC) $(CFLAGS) $< -o $(BUILD_OUTPUT)/$@ $(LDFLAGS) 19 + $(CC) $(CFLAGS) $< -o $(BUILD_OUTPUT)/$@ $(LDFLAGS) -lcap 20 20 21 21 .PHONY : clean 22 22 clean :
+114 -30
tools/power/x86/turbostat/turbostat.c
··· 30 30 #include <sched.h> 31 31 #include <time.h> 32 32 #include <cpuid.h> 33 - #include <linux/capability.h> 33 + #include <sys/capability.h> 34 34 #include <errno.h> 35 35 #include <math.h> 36 36 ··· 303 303 int *irqs_per_cpu; /* indexed by cpu_num */ 304 304 305 305 void setup_all_buffers(void); 306 + 307 + char *sys_lpi_file; 308 + char *sys_lpi_file_sysfs = "/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us"; 309 + char *sys_lpi_file_debugfs = "/sys/kernel/debug/pmc_core/slp_s0_residency_usec"; 306 310 307 311 int cpu_is_not_present(int cpu) 308 312 { ··· 2920 2916 * 2921 2917 * record snapshot of 2922 2918 * /sys/devices/system/cpu/cpuidle/low_power_idle_cpu_residency_us 2923 - * 2924 - * return 1 if config change requires a restart, else return 0 2925 2919 */ 2926 2920 int snapshot_cpu_lpi_us(void) 2927 2921 { ··· 2943 2941 /* 2944 2942 * snapshot_sys_lpi() 2945 2943 * 2946 - * record snapshot of 2947 - * /sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us 2948 - * 2949 - * return 1 if config change requires a restart, else return 0 2944 + * record snapshot of sys_lpi_file 2950 2945 */ 2951 2946 int snapshot_sys_lpi_us(void) 2952 2947 { 2953 2948 FILE *fp; 2954 2949 int retval; 2955 2950 2956 - fp = fopen_or_die("/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us", "r"); 2951 + fp = fopen_or_die(sys_lpi_file, "r"); 2957 2952 2958 2953 retval = fscanf(fp, "%lld", &cpuidle_cur_sys_lpi_us); 2959 2954 if (retval != 1) { ··· 3150 3151 err(-5, "no /dev/cpu/0/msr, Try \"# modprobe msr\" "); 3151 3152 } 3152 3153 3153 - void check_permissions() 3154 + /* 3155 + * check for CAP_SYS_RAWIO 3156 + * return 0 on success 3157 + * return 1 on fail 3158 + */ 3159 + int check_for_cap_sys_rawio(void) 3154 3160 { 3155 - struct __user_cap_header_struct cap_header_data; 3156 - cap_user_header_t cap_header = &cap_header_data; 3157 - struct __user_cap_data_struct cap_data_data; 3158 - cap_user_data_t cap_data = &cap_data_data; 
3159 - extern int capget(cap_user_header_t hdrp, cap_user_data_t datap); 3161 + cap_t caps; 3162 + cap_flag_value_t cap_flag_value; 3163 + 3164 + caps = cap_get_proc(); 3165 + if (caps == NULL) 3166 + err(-6, "cap_get_proc\n"); 3167 + 3168 + if (cap_get_flag(caps, CAP_SYS_RAWIO, CAP_EFFECTIVE, &cap_flag_value)) 3169 + err(-6, "cap_get\n"); 3170 + 3171 + if (cap_flag_value != CAP_SET) { 3172 + warnx("capget(CAP_SYS_RAWIO) failed," 3173 + " try \"# setcap cap_sys_rawio=ep %s\"", progname); 3174 + return 1; 3175 + } 3176 + 3177 + if (cap_free(caps) == -1) 3178 + err(-6, "cap_free\n"); 3179 + 3180 + return 0; 3181 + } 3182 + void check_permissions(void) 3183 + { 3160 3184 int do_exit = 0; 3161 3185 char pathname[32]; 3162 3186 3163 3187 /* check for CAP_SYS_RAWIO */ 3164 - cap_header->pid = getpid(); 3165 - cap_header->version = _LINUX_CAPABILITY_VERSION; 3166 - if (capget(cap_header, cap_data) < 0) 3167 - err(-6, "capget(2) failed"); 3168 - 3169 - if ((cap_data->effective & (1 << CAP_SYS_RAWIO)) == 0) { 3170 - do_exit++; 3171 - warnx("capget(CAP_SYS_RAWIO) failed," 3172 - " try \"# setcap cap_sys_rawio=ep %s\"", progname); 3173 - } 3188 + do_exit += check_for_cap_sys_rawio(); 3174 3189 3175 3190 /* test file permissions */ 3176 3191 sprintf(pathname, "/dev/cpu/%d/msr", base_cpu); ··· 3278 3265 case INTEL_FAM6_ATOM_GOLDMONT: /* BXT */ 3279 3266 case INTEL_FAM6_ATOM_GOLDMONT_PLUS: 3280 3267 case INTEL_FAM6_ATOM_GOLDMONT_D: /* DNV */ 3268 + case INTEL_FAM6_ATOM_TREMONT: /* EHL */ 3281 3269 pkg_cstate_limits = glm_pkg_cstate_limits; 3282 3270 break; 3283 3271 default: ··· 3346 3332 3347 3333 switch (model) { 3348 3334 case INTEL_FAM6_SKYLAKE_X: 3335 + return 1; 3336 + } 3337 + return 0; 3338 + } 3339 + int is_ehl(unsigned int family, unsigned int model) 3340 + { 3341 + if (!genuine_intel) 3342 + return 0; 3343 + 3344 + switch (model) { 3345 + case INTEL_FAM6_ATOM_TREMONT: 3349 3346 return 1; 3350 3347 } 3351 3348 return 0; ··· 3503 3478 dump_nhm_cst_cfg(); 3504 3479 } 
3505 3480 3481 + static void dump_sysfs_file(char *path) 3482 + { 3483 + FILE *input; 3484 + char cpuidle_buf[64]; 3485 + 3486 + input = fopen(path, "r"); 3487 + if (input == NULL) { 3488 + if (debug) 3489 + fprintf(outf, "NSFOD %s\n", path); 3490 + return; 3491 + } 3492 + if (!fgets(cpuidle_buf, sizeof(cpuidle_buf), input)) 3493 + err(1, "%s: failed to read file", path); 3494 + fclose(input); 3495 + 3496 + fprintf(outf, "%s: %s", strrchr(path, '/') + 1, cpuidle_buf); 3497 + } 3506 3498 static void 3507 3499 dump_sysfs_cstate_config(void) 3508 3500 { ··· 3532 3490 3533 3491 if (!DO_BIC(BIC_sysfs)) 3534 3492 return; 3493 + 3494 + if (access("/sys/devices/system/cpu/cpuidle", R_OK)) { 3495 + fprintf(outf, "cpuidle not loaded\n"); 3496 + return; 3497 + } 3498 + 3499 + dump_sysfs_file("/sys/devices/system/cpu/cpuidle/current_driver"); 3500 + dump_sysfs_file("/sys/devices/system/cpu/cpuidle/current_governor"); 3501 + dump_sysfs_file("/sys/devices/system/cpu/cpuidle/current_governor_ro"); 3535 3502 3536 3503 for (state = 0; state < 10; ++state) { 3537 3504 ··· 3945 3894 else 3946 3895 BIC_PRESENT(BIC_PkgWatt); 3947 3896 break; 3897 + case INTEL_FAM6_ATOM_TREMONT: /* EHL */ 3898 + do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_DRAM | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_GFX | RAPL_PKG_POWER_INFO; 3899 + if (rapl_joules) { 3900 + BIC_PRESENT(BIC_Pkg_J); 3901 + BIC_PRESENT(BIC_Cor_J); 3902 + BIC_PRESENT(BIC_RAM_J); 3903 + BIC_PRESENT(BIC_GFX_J); 3904 + } else { 3905 + BIC_PRESENT(BIC_PkgWatt); 3906 + BIC_PRESENT(BIC_CorWatt); 3907 + BIC_PRESENT(BIC_RAMWatt); 3908 + BIC_PRESENT(BIC_GFXWatt); 3909 + } 3910 + break; 3948 3911 case INTEL_FAM6_SKYLAKE_L: /* SKL */ 3949 3912 case INTEL_FAM6_CANNONLAKE_L: /* CNL */ 3950 3913 do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_DRAM | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_GFX | RAPL_PKG_POWER_INFO; ··· 4360 4295 case INTEL_FAM6_ATOM_GOLDMONT: /* BXT */ 4361 4296 case 
INTEL_FAM6_ATOM_GOLDMONT_PLUS: 4362 4297 case INTEL_FAM6_ATOM_GOLDMONT_D: /* DNV */ 4298 + case INTEL_FAM6_ATOM_TREMONT: /* EHL */ 4363 4299 return 1; 4364 4300 } 4365 4301 return 0; ··· 4390 4324 case INTEL_FAM6_CANNONLAKE_L: /* CNL */ 4391 4325 case INTEL_FAM6_ATOM_GOLDMONT: /* BXT */ 4392 4326 case INTEL_FAM6_ATOM_GOLDMONT_PLUS: 4327 + case INTEL_FAM6_ATOM_TREMONT: /* EHL */ 4393 4328 return 1; 4394 4329 } 4395 4330 return 0; ··· 4677 4610 case INTEL_FAM6_SKYLAKE: 4678 4611 case INTEL_FAM6_KABYLAKE_L: 4679 4612 case INTEL_FAM6_KABYLAKE: 4613 + case INTEL_FAM6_COMETLAKE_L: 4614 + case INTEL_FAM6_COMETLAKE: 4680 4615 return INTEL_FAM6_SKYLAKE_L; 4681 4616 4682 4617 case INTEL_FAM6_ICELAKE_L: 4683 4618 case INTEL_FAM6_ICELAKE_NNPI: 4619 + case INTEL_FAM6_TIGERLAKE_L: 4620 + case INTEL_FAM6_TIGERLAKE: 4684 4621 return INTEL_FAM6_CANNONLAKE_L; 4685 4622 4686 4623 case INTEL_FAM6_ATOM_TREMONT_D: 4687 4624 return INTEL_FAM6_ATOM_GOLDMONT_D; 4625 + 4626 + case INTEL_FAM6_ATOM_TREMONT_L: 4627 + return INTEL_FAM6_ATOM_TREMONT; 4628 + 4629 + case INTEL_FAM6_ICELAKE_X: 4630 + return INTEL_FAM6_SKYLAKE_X; 4688 4631 } 4689 4632 return model; 4690 4633 } ··· 4949 4872 do_slm_cstates = is_slm(family, model); 4950 4873 do_knl_cstates = is_knl(family, model); 4951 4874 4952 - if (do_slm_cstates || do_knl_cstates || is_cnl(family, model)) 4875 + if (do_slm_cstates || do_knl_cstates || is_cnl(family, model) || 4876 + is_ehl(family, model)) 4953 4877 BIC_NOT_PRESENT(BIC_CPU_c3); 4954 4878 4955 4879 if (!quiet) ··· 4985 4907 else 4986 4908 BIC_NOT_PRESENT(BIC_CPU_LPI); 4987 4909 4988 - if (!access("/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us", R_OK)) 4910 + if (!access(sys_lpi_file_sysfs, R_OK)) { 4911 + sys_lpi_file = sys_lpi_file_sysfs; 4989 4912 BIC_PRESENT(BIC_SYS_LPI); 4990 - else 4913 + } else if (!access(sys_lpi_file_debugfs, R_OK)) { 4914 + sys_lpi_file = sys_lpi_file_debugfs; 4915 + BIC_PRESENT(BIC_SYS_LPI); 4916 + } else { 4917 + sys_lpi_file_sysfs = 
NULL; 4991 4918 BIC_NOT_PRESENT(BIC_SYS_LPI); 4919 + } 4992 4920 4993 4921 if (!quiet) 4994 4922 decode_misc_feature_control(); ··· 5390 5306 } 5391 5307 5392 5308 void print_version() { 5393 - fprintf(outf, "turbostat version 19.08.31" 5309 + fprintf(outf, "turbostat version 20.03.20" 5394 5310 " - Len Brown <lenb@kernel.org>\n"); 5395 5311 } 5396 5312 ··· 5407 5323 } 5408 5324 5409 5325 msrp->msr_num = msr_num; 5410 - strncpy(msrp->name, name, NAME_BYTES); 5326 + strncpy(msrp->name, name, NAME_BYTES - 1); 5411 5327 if (path) 5412 - strncpy(msrp->path, path, PATH_BYTES); 5328 + strncpy(msrp->path, path, PATH_BYTES - 1); 5413 5329 msrp->width = width; 5414 5330 msrp->type = type; 5415 5331 msrp->format = format;
+8 -8
tools/testing/ktest/ktest.pl
··· 30 30 "EMAIL_WHEN_STARTED" => 0, 31 31 "NUM_TESTS" => 1, 32 32 "TEST_TYPE" => "build", 33 - "BUILD_TYPE" => "randconfig", 33 + "BUILD_TYPE" => "oldconfig", 34 34 "MAKE_CMD" => "make", 35 35 "CLOSE_CONSOLE_SIGNAL" => "INT", 36 36 "TIMEOUT" => 120, ··· 1030 1030 } 1031 1031 1032 1032 if (!$skip && $rest !~ /^\s*$/) { 1033 - die "$name: $.: Gargbage found after $type\n$_"; 1033 + die "$name: $.: Garbage found after $type\n$_"; 1034 1034 } 1035 1035 1036 1036 if ($skip && $type eq "TEST_START") { ··· 1063 1063 } 1064 1064 1065 1065 if ($rest !~ /^\s*$/) { 1066 - die "$name: $.: Gargbage found after DEFAULTS\n$_"; 1066 + die "$name: $.: Garbage found after DEFAULTS\n$_"; 1067 1067 } 1068 1068 1069 1069 } elsif (/^\s*INCLUDE\s+(\S+)/) { ··· 1154 1154 # on of these sections that have SKIP defined. 1155 1155 # The save variable can be 1156 1156 # defined multiple times and the new one simply overrides 1157 - # the prevous one. 1157 + # the previous one. 1158 1158 set_variable($lvalue, $rvalue); 1159 1159 1160 1160 } else { ··· 1234 1234 foreach my $option (keys %not_used) { 1235 1235 print "$option\n"; 1236 1236 } 1237 - print "Set IGRNORE_UNUSED = 1 to have ktest ignore unused variables\n"; 1237 + print "Set IGNORE_UNUSED = 1 to have ktest ignore unused variables\n"; 1238 1238 if (!read_yn "Do you want to continue?") { 1239 1239 exit -1; 1240 1240 } ··· 1345 1345 # Check for recursive evaluations. 1346 1346 # 100 deep should be more than enough. 1347 1347 if ($r++ > 100) { 1348 - die "Over 100 evaluations accurred with $option\n" . 1348 + die "Over 100 evaluations occurred with $option\n" . 
1349 1349 "Check for recursive variables\n"; 1350 1350 } 1351 1351 $prev = $option; ··· 1383 1383 1384 1384 } else { 1385 1385 # Make sure everything has been written to disk 1386 - run_ssh("sync"); 1386 + run_ssh("sync", 10); 1387 1387 1388 1388 if (defined($time)) { 1389 1389 start_monitor; ··· 1461 1461 1462 1462 sub dodie { 1463 1463 1464 - # avoid recusion 1464 + # avoid recursion 1465 1465 return if ($in_die); 1466 1466 $in_die = 1; 1467 1467
+11 -11
tools/testing/ktest/sample.conf
··· 10 10 # 11 11 12 12 # Options set in the beginning of the file are considered to be 13 - # default options. These options can be overriden by test specific 13 + # default options. These options can be overridden by test specific 14 14 # options, with the following exceptions: 15 15 # 16 16 # LOG_FILE ··· 204 204 # 205 205 # This config file can also contain "config variables". 206 206 # These are assigned with ":=" instead of the ktest option 207 - # assigment "=". 207 + # assignment "=". 208 208 # 209 209 # The difference between ktest options and config variables 210 210 # is that config variables can be used multiple times, ··· 263 263 #### Using options in other options #### 264 264 # 265 265 # Options that are defined in the config file may also be used 266 - # by other options. All options are evaulated at time of 266 + # by other options. All options are evaluated at time of 267 267 # use (except that config variables are evaluated at config 268 268 # processing time). 269 269 # ··· 505 505 #TEST = ssh user@machine /root/run_test 506 506 507 507 # The build type is any make config type or special command 508 - # (default randconfig) 508 + # (default oldconfig) 509 509 # nobuild - skip the clean and build step 510 510 # useconfig:/path/to/config - use the given config and run 511 511 # oldconfig on it. ··· 707 707 708 708 # Line to define a successful boot up in console output. 709 709 # This is what the line contains, not the entire line. 
If you need 710 - # the entire line to match, then use regural expression syntax like: 710 + # the entire line to match, then use regular expression syntax like: 711 711 # (do not add any quotes around it) 712 712 # 713 713 # SUCCESS_LINE = ^MyBox Login:$ ··· 839 839 # (ignored if POWEROFF_ON_SUCCESS is set) 840 840 #REBOOT_ON_SUCCESS = 1 841 841 842 - # In case there are isses with rebooting, you can specify this 842 + # In case there are issues with rebooting, you can specify this 843 843 # to always powercycle after this amount of time after calling 844 844 # reboot. 845 845 # Note, POWERCYCLE_AFTER_REBOOT = 0 does NOT disable it. It just ··· 848 848 # (default undefined) 849 849 #POWERCYCLE_AFTER_REBOOT = 5 850 850 851 - # In case there's isses with halting, you can specify this 851 + # In case there's issues with halting, you can specify this 852 852 # to always poweroff after this amount of time after calling 853 853 # halt. 854 854 # Note, POWEROFF_AFTER_HALT = 0 does NOT disable it. It just ··· 972 972 # 973 973 # PATCHCHECK_START is required and is the first patch to 974 974 # test (the SHA1 of the commit). You may also specify anything 975 - # that git checkout allows (branch name, tage, HEAD~3). 975 + # that git checkout allows (branch name, tag, HEAD~3). 976 976 # 977 977 # PATCHCHECK_END is the last patch to check (default HEAD) 978 978 # ··· 994 994 # IGNORE_WARNINGS is set for the given commit's sha1 995 995 # 996 996 # IGNORE_WARNINGS can be used to disable the failure of patchcheck 997 - # on a particuler commit (SHA1). You can add more than one commit 997 + # on a particular commit (SHA1). You can add more than one commit 998 998 # by adding a list of SHA1s that are space delimited. 999 999 # 1000 1000 # If BUILD_NOCLEAN is set, then make mrproper will not be run on ··· 1093 1093 # whatever reason. 
(Can't reboot, want to inspect each iteration) 1094 1094 # Doing a BISECT_MANUAL will have the test wait for you to 1095 1095 # tell it if the test passed or failed after each iteration. 1096 - # This is basicall the same as running git bisect yourself 1096 + # This is basically the same as running git bisect yourself 1097 1097 # but ktest will rebuild and install the kernel for you. 1098 1098 # 1099 1099 # BISECT_CHECK = 1 (optional, default 0) ··· 1239 1239 # 1240 1240 # CONFIG_BISECT_EXEC (optional) 1241 1241 # The config bisect is a separate program that comes with ktest.pl. 1242 - # By befault, it will look for: 1242 + # By default, it will look for: 1243 1243 # `pwd`/config-bisect.pl # the location ktest.pl was executed from. 1244 1244 # If it does not find it there, it will look for: 1245 1245 # `dirname <ktest.pl>`/config-bisect.pl # The directory that holds ktest.pl
+31 -3
tools/testing/selftests/net/fib_tests.sh
··· 1041 1041 fi 1042 1042 log_test $rc 0 "Prefix route with metric on link up" 1043 1043 1044 + # verify peer metric added correctly 1045 + set -e 1046 + run_cmd "$IP -6 addr flush dev dummy2" 1047 + run_cmd "$IP -6 addr add dev dummy2 2001:db8:104::1 peer 2001:db8:104::2 metric 260" 1048 + set +e 1049 + 1050 + check_route6 "2001:db8:104::1 dev dummy2 proto kernel metric 260" 1051 + log_test $? 0 "Set metric with peer route on local side" 1052 + log_test $? 0 "User specified metric on local address" 1053 + check_route6 "2001:db8:104::2 dev dummy2 proto kernel metric 260" 1054 + log_test $? 0 "Set metric with peer route on peer side" 1055 + 1056 + set -e 1057 + run_cmd "$IP -6 addr change dev dummy2 2001:db8:104::1 peer 2001:db8:104::3 metric 261" 1058 + set +e 1059 + 1060 + check_route6 "2001:db8:104::1 dev dummy2 proto kernel metric 261" 1061 + log_test $? 0 "Modify metric and peer address on local side" 1062 + check_route6 "2001:db8:104::3 dev dummy2 proto kernel metric 261" 1063 + log_test $? 0 "Modify metric and peer address on peer side" 1064 + 1044 1065 $IP li del dummy1 1045 1066 $IP li del dummy2 1046 1067 cleanup ··· 1478 1457 1479 1458 run_cmd "$IP addr flush dev dummy2" 1480 1459 run_cmd "$IP addr add dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 260" 1481 - run_cmd "$IP addr change dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 261" 1482 1460 rc=$? 1483 1461 if [ $rc -eq 0 ]; then 1484 - check_route "172.16.104.2 dev dummy2 proto kernel scope link src 172.16.104.1 metric 261" 1462 + check_route "172.16.104.2 dev dummy2 proto kernel scope link src 172.16.104.1 metric 260" 1485 1463 rc=$? 1486 1464 fi 1487 - log_test $rc 0 "Modify metric of address with peer route" 1465 + log_test $rc 0 "Set metric of address with peer route" 1466 + 1467 + run_cmd "$IP addr change dev dummy2 172.16.104.1/32 peer 172.16.104.3 metric 261" 1468 + rc=$? 
1469 + if [ $rc -eq 0 ]; then 1470 + check_route "172.16.104.3 dev dummy2 proto kernel scope link src 172.16.104.1 metric 261" 1471 + rc=$? 1472 + fi 1473 + log_test $rc 0 "Modify metric and peer address for peer route" 1488 1474 1489 1475 $IP li del dummy1 1490 1476 $IP li del dummy2
+1
tools/testing/selftests/tc-testing/config
··· 57 57 CONFIG_NET_IFE_SKBPRIO=m 58 58 CONFIG_NET_IFE_SKBTCINDEX=m 59 59 CONFIG_NET_SCH_FIFO=y 60 + CONFIG_NET_SCH_ETS=m
+11 -11
usr/Kconfig
··· 124 124 125 125 If in doubt, select 'None' 126 126 127 - config INITRAMFS_COMPRESSION_NONE 128 - bool "None" 129 - help 130 - Do not compress the built-in initramfs at all. This may sound wasteful 131 - in space, but, you should be aware that the built-in initramfs will be 132 - compressed at a later stage anyways along with the rest of the kernel, 133 - on those architectures that support this. However, not compressing the 134 - initramfs may lead to slightly higher memory consumption during a 135 - short time at boot, while both the cpio image and the unpacked 136 - filesystem image will be present in memory simultaneously 137 - 138 127 config INITRAMFS_COMPRESSION_GZIP 139 128 bool "Gzip" 140 129 depends on RD_GZIP ··· 195 206 196 207 If you choose this, keep in mind that most distros don't provide lz4 197 208 by default which could cause a build failure. 209 + 210 + config INITRAMFS_COMPRESSION_NONE 211 + bool "None" 212 + help 213 + Do not compress the built-in initramfs at all. This may sound wasteful 214 + in space, but, you should be aware that the built-in initramfs will be 215 + compressed at a later stage anyways along with the rest of the kernel, 216 + on those architectures that support this. However, not compressing the 217 + initramfs may lead to slightly higher memory consumption during a 218 + short time at boot, while both the cpio image and the unpacked 219 + filesystem image will be present in memory simultaneously 198 220 199 221 endchoice
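The usr/Kconfig hunk above moves `INITRAMFS_COMPRESSION_NONE` from the top of the `choice` block to the bottom without changing its text. The reordering matters because, when no applicable `default` is set, Kconfig selects the first visible entry of a `choice`, so the effective default shifts from no compression to Gzip. A reduced sketch of the mechanism (prompts, help text, and `depends on` lines trimmed; not the full kernel fragment):

```kconfig
choice
	prompt "Built-in initramfs compression mode"
	# no explicit "default" here: the first visible entry wins,
	# so listing GZIP first makes it the effective default

config INITRAMFS_COMPRESSION_GZIP
	bool "Gzip"

config INITRAMFS_COMPRESSION_NONE
	bool "None"

endchoice
```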