···
 		size in 512B sectors of the zones of the device, with
 		the eventual exception of the last zone of the device
 		which may be smaller.
+
+What:		/sys/block/<disk>/queue/io_timeout
+Date:		November 2018
+Contact:	Weiping Zhang <zhangweiping@didiglobal.com>
+Description:
+		io_timeout is the request timeout in milliseconds. If a request
+		does not complete in this time then the block driver timeout
+		handler is invoked. That timeout handler can decide to retry
+		the request, to fail it or to start a device recovery strategy.
+9-2
Documentation/ABI/testing/sysfs-block-zram
···
 		statistics (bd_count, bd_reads, bd_writes) in a format
 		similar to block layer statistics file format.
 
+What:		/sys/block/zram<id>/writeback_limit_enable
+Date:		November 2018
+Contact:	Minchan Kim <minchan@kernel.org>
+Description:
+		The writeback_limit_enable file is read-write and specifies
+		whether the writeback_limit feature is enabled. "1" means
+		enable the feature. No limit ("0") is the initial state.
+
 What:		/sys/block/zram<id>/writeback_limit
 Date:		November 2018
 Contact:	Minchan Kim <minchan@kernel.org>
 Description:
 		The writeback_limit file is read-write and specifies the maximum
 		amount of writeback ZRAM can do. The limit could be changed
-		in run time and "0" means disable the limit.
-		No limit is the initial state.
+		in run time.
+7
Documentation/block/bfq-iosched.txt
···
 than maximum throughput. In these cases, consider setting the
 strict_guarantees parameter.
 
+slice_idle_us
+-------------
+
+Controls the same tuning parameter as slice_idle, but in microseconds.
+Either tunable can be used to set idling behavior. Afterwards, the
+other tunable will reflect the newly set value in sysfs.
+
 strict_guarantees
 -----------------
 
+2-1
Documentation/block/null_blk.txt
···
 
 zoned=[0/1]: Default: 0
   0: Block device is exposed as a random-access block device.
-  1: Block device is exposed as a host-managed zoned block device.
+  1: Block device is exposed as a host-managed zoned block device. Requires
+     CONFIG_BLK_DEV_ZONED.
 
 zone_size=[MB]: Default: 256
   Per zone size when exposed as a zoned block device. Must be a power of two.
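As a quick illustration of the zoned mode documented in the hunk above, the module could be loaded like this (requires root and a kernel built with CONFIG_BLK_DEV_ZONED; the device name `nullb0` is the driver's default):

```shell
# Create a null_blk device exposed as a host-managed zoned block device,
# using the documented default zone size of 256 MB.
modprobe null_blk zoned=1 zone_size=256

# Inspect the resulting zone layout (blkzone ships with util-linux).
blkzone report /dev/nullb0
```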
+7
Documentation/block/queue-sysfs.txt
···
 IO to sleep for this amount of microseconds before entering classic
 polling.
 
+io_timeout (RW)
+---------------
+io_timeout is the request timeout in milliseconds. If a request does not
+complete in this time then the block driver timeout handler is invoked.
+That timeout handler can decide to retry the request, to fail it or to start
+a device recovery strategy.
+
 iostats (RW)
 -------------
 This file is used to control (on/off) the iostats accounting of the
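The new attribute can be exercised directly from the shell (the device name `sda` is a placeholder; writing requires root):

```shell
# Read the current request timeout of a block device, in milliseconds.
cat /sys/block/sda/queue/io_timeout

# Raise it to 60 seconds, e.g. for a device behind a slow fabric.
echo 60000 > /sys/block/sda/queue/io_timeout
```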
+47-27
Documentation/blockdev/zram.txt
···
 A brief description of exported device attributes. For more details please
 read Documentation/ABI/testing/sysfs-block-zram.
 
-Name             access  description
-----             ------  -----------
-disksize         RW      show and set the device's disk size
-initstate        RO      shows the initialization state of the device
-reset            WO      trigger device reset
-mem_used_max     WO      reset the `mem_used_max' counter (see later)
-mem_limit        WO      specifies the maximum amount of memory ZRAM can use
-                         to store the compressed data
-writeback_limit  WO      specifies the maximum amount of write IO zram can
-                         write out to backing device as 4KB unit
-max_comp_streams RW      the number of possible concurrent compress operations
-comp_algorithm   RW      show and change the compression algorithm
-compact          WO      trigger memory compaction
-debug_stat       RO      this file is used for zram debugging purposes
-backing_dev      RW      set up backend storage for zram to write out
-idle             WO      mark allocated slot as idle
+Name                    access  description
+----                    ------  -----------
+disksize                RW      show and set the device's disk size
+initstate               RO      shows the initialization state of the device
+reset                   WO      trigger device reset
+mem_used_max            WO      reset the `mem_used_max' counter (see later)
+mem_limit               WO      specifies the maximum amount of memory ZRAM can use
+                                to store the compressed data
+writeback_limit         WO      specifies the maximum amount of write IO zram can
+                                write out to backing device as 4KB unit
+writeback_limit_enable  RW      show and set writeback_limit feature
+max_comp_streams        RW      the number of possible concurrent compress operations
+comp_algorithm          RW      show and change the compression algorithm
+compact                 WO      trigger memory compaction
+debug_stat              RO      this file is used for zram debugging purposes
+backing_dev             RW      set up backend storage for zram to write out
+idle                    WO      mark allocated slot as idle
 
 
 User space is advised to use the following files to read the device statistics.
···
 If there are lots of write IO with flash device, potentially, it has
 flash wearout problem so that admin needs to design write limitation
 to guarantee storage health for entire product life.
-To overcome the concern, zram supports "writeback_limit".
-The "writeback_limit"'s default value is 0 so that it doesn't limit
-any writeback. If admin want to measure writeback count in a certain
-period, he could know it via /sys/block/zram0/bd_stat's 3rd column.
+
+To overcome the concern, zram supports the "writeback_limit" feature.
+The default value of "writeback_limit_enable" is 0, so it doesn't limit
+any writeback. In other words, if admin wants to apply a writeback budget,
+writeback_limit_enable should first be enabled via
+
+	$ echo 1 > /sys/block/zramX/writeback_limit_enable
+
+Once writeback_limit_enable is set, zram doesn't allow any writeback
+until admin sets the budget via /sys/block/zramX/writeback_limit.
+
+(If admin doesn't enable writeback_limit_enable, any value assigned
+via /sys/block/zramX/writeback_limit is meaningless.)
 
 If admin want to limit writeback as per-day 400M, he could do it
 like below.
 
-	MB_SHIFT=20
-	4K_SHIFT=12
-	echo $((400<<MB_SHIFT>>4K_SHIFT)) > \
-		/sys/block/zram0/writeback_limit.
+	$ MB_SHIFT=20
+	$ 4K_SHIFT=12
+	$ echo $((400<<MB_SHIFT>>4K_SHIFT)) > \
+		/sys/block/zram0/writeback_limit.
+	$ echo 1 > /sys/block/zram0/writeback_limit_enable
 
-If admin want to allow further write again, he could do it like below
+If admin wants to allow further writes again once the budget is
+exhausted, it can be done like below
 
-	echo 0 > /sys/block/zram0/writeback_limit
+	$ echo $((400<<MB_SHIFT>>4K_SHIFT)) > \
+		/sys/block/zram0/writeback_limit
 
 If admin want to see remaining writeback budget since he set,
 
-	cat /sys/block/zram0/writeback_limit
+	$ cat /sys/block/zramX/writeback_limit
+
+If admin wants to disable the writeback limit, it can be done with
+
+	$ echo 0 > /sys/block/zramX/writeback_limit_enable
 
 The writeback_limit count will reset whenever you reset zram(e.g.,
 system reboot, echo 1 > /sys/block/zramX/reset) so keeping how many of
 writeback happened until you reset the zram to allocate extra writeback
 budget in next setting is user's job.
+
+If admin wants to measure the writeback count in a certain period, it
+can be read from the 3rd column of /sys/block/zram0/bd_stat.
 
 = memory tracking
 
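The shift arithmetic in the example above can be checked in isolation. Note that `4K_SHIFT` as written in the patch is not actually a valid shell variable name, so a legal name is used in this sketch:

```shell
MB_SHIFT=20
PAGE_SHIFT_4K=12
# 400 MB expressed as a count of 4 KB writeback units:
# (400 << 20) bytes, divided by 4096 bytes per page.
budget=$(( (400 << MB_SHIFT) >> PAGE_SHIFT_4K ))
echo "$budget"   # 102400 pages = 400 MB / 4 KB
```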
···
 Altera SOCFPGA Reset Manager
 
 Required properties:
-- compatible : "altr,rst-mgr"
+- compatible : "altr,rst-mgr" for (Cyclone5/Arria5/Arria10)
+	       "altr,stratix10-rst-mgr","altr,rst-mgr" for Stratix10 ARM64 SoC
 - reg : Should contain 1 register ranges(address and length)
 - altr,modrst-offset : Should contain the offset of the first modrst register.
 - #reset-cells: 1
···
 	};
 
 
-USB3 core reset
----------------
+Peripheral core reset in glue layer
+-----------------------------------
 
-USB3 core reset belongs to USB3 glue layer. Before using the core reset,
-it is necessary to control the clocks and resets to enable this layer.
-These clocks and resets should be described in each property.
+Some peripheral core resets belong to their own glue layer. Before using
+such a core reset, it is necessary to control the clocks and resets to
+enable this layer. These clocks and resets should be described in each
+property.
 
 Required properties:
 - compatible: Should be
-	"socionext,uniphier-pro4-usb3-reset" - for Pro4 SoC
-	"socionext,uniphier-pxs2-usb3-reset" - for PXs2 SoC
-	"socionext,uniphier-ld20-usb3-reset" - for LD20 SoC
-	"socionext,uniphier-pxs3-usb3-reset" - for PXs3 SoC
+	"socionext,uniphier-pro4-usb3-reset" - for Pro4 SoC USB3
+	"socionext,uniphier-pxs2-usb3-reset" - for PXs2 SoC USB3
+	"socionext,uniphier-ld20-usb3-reset" - for LD20 SoC USB3
+	"socionext,uniphier-pxs3-usb3-reset" - for PXs3 SoC USB3
+	"socionext,uniphier-pro4-ahci-reset" - for Pro4 SoC AHCI
+	"socionext,uniphier-pxs2-ahci-reset" - for PXs2 SoC AHCI
+	"socionext,uniphier-pxs3-ahci-reset" - for PXs3 SoC AHCI
 - #reset-cells: Should be 1.
 - reg: Specifies offset and length of the register set for the device.
-- clocks: A list of phandles to the clock gate for USB3 glue layer.
+- clocks: A list of phandles to the clock gate for the glue layer.
 	According to the clock-names, appropriate clocks are required.
 - clock-names: Should contain
 	"gio", "link" - for Pro4 SoC
 	"link" - for others
-- resets: A list of phandles to the reset control for USB3 glue layer.
+- resets: A list of phandles to the reset control for the glue layer.
 	According to the reset-names, appropriate resets are required.
 - reset-names: Should contain
 	"gio", "link" - for Pro4 SoC
+4-4
Documentation/driver-model/bus.txt
···
 	ssize_t (*store)(struct bus_type *, const char * buf, size_t count);
 };
 
-Bus drivers can export attributes using the BUS_ATTR macro that works
-similarly to the DEVICE_ATTR macro for devices. For example, a definition
-like this:
+Bus drivers can export attributes using the BUS_ATTR_RW macro that works
+similarly to the DEVICE_ATTR_RW macro for devices. For example, a
+definition like this:
 
-static BUS_ATTR(debug,0644,show_debug,store_debug);
+static BUS_ATTR_RW(debug);
 
 is equivalent to declaring:
 
···
 The same can also be done from an application program.
 
 Disable specific CPU's specific idle state from cpuidle sysfs (see
-Documentation/cpuidle/sysfs.txt):
+Documentation/admin-guide/pm/cpuidle.rst):
 # echo 1 > /sys/devices/system/cpu/cpu$cpu/cpuidle/state$state/disable
 
···
 Tony Luck <tony.luck@intel.com>
 Vikas Shivappa <vikas.shivappa@intel.com>
 
-This feature is enabled by the CONFIG_RESCTRL and the X86 /proc/cpuinfo
+This feature is enabled by the CONFIG_X86_RESCTRL and the x86 /proc/cpuinfo
 flag bits:
 RDT (Resource Director Technology) Allocation - "rdt_a"
 CAT (Cache Allocation Technology) - "cat_l3", "cat_l2"
···
 #ifndef __ASM_PROTOTYPES_H
 #define __ASM_PROTOTYPES_H
 /*
- * CONFIG_MODEVERIONS requires a C declaration to generate the appropriate CRC
+ * CONFIG_MODVERSIONS requires a C declaration to generate the appropriate CRC
  * for each symbol. Since commit:
  *
  *	4efca4ed05cbdfd1 ("kbuild: modversions for EXPORT_SYMBOL() for asm")
···
 #ifndef __ASM_MMU_H
 #define __ASM_MMU_H
 
+#include <asm/cputype.h>
+
 #define MMCF_AARCH32	0x1	/* mm context flag for AArch32 executables */
 #define USER_ASID_BIT	48
 #define USER_ASID_FLAG	(UL(1) << USER_ASID_BIT)
···
 {
 	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
 	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
+}
+
+static inline bool arm64_kernel_use_ng_mappings(void)
+{
+	bool tx1_bug;
+
+	/* What's a kpti? Use global mappings if we don't know. */
+	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
+		return false;
+
+	/*
+	 * Note: this function is called before the CPU capabilities have
+	 * been configured, so our early mappings will be global. If we
+	 * later determine that kpti is required, then
+	 * kpti_install_ng_mappings() will make them non-global.
+	 */
+	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+		return arm64_kernel_unmapped_at_el0();
+
+	/*
+	 * KASLR is enabled so we're going to be enabling kpti on non-broken
+	 * CPUs regardless of their susceptibility to Meltdown. Rather
+	 * than force everybody to go through the G -> nG dance later on,
+	 * just put down non-global mappings from the beginning.
+	 */
+	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
+		tx1_bug = false;
+#ifndef MODULE
+	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
+		extern const struct midr_range cavium_erratum_27456_cpus[];
+
+		tx1_bug = is_midr_in_range_list(read_cpuid_id(),
+						cavium_erratum_27456_cpus);
+#endif
+	} else {
+		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
+	}
+
+	return !tx1_bug && kaslr_offset() > 0;
 }
 
 typedef void (*bp_hardening_cb_t)(void);
···
 
 	/* Useful for KASLR robustness */
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
-		return true;
+		return kaslr_offset() > 0;
 
 	/* Don't force KPTI for CPUs that are not vulnerable */
 	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
···
 	static bool kpti_applied = false;
 	int cpu = smp_processor_id();
 
-	if (kpti_applied)
+	/*
+	 * We don't need to rewrite the page-tables if either we've done
+	 * it already or we have KASLR enabled and therefore have not
+	 * created any global mappings at all.
+	 */
+	if (kpti_applied || kaslr_offset() > 0)
 		return;
 
 	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
+1
arch/arm64/kernel/head.S
···
 
 ENTRY(kimage_vaddr)
 	.quad		_text - TEXT_OFFSET
+EXPORT_SYMBOL(kimage_vaddr)
 
 /*
  * If we're fortunate enough to boot at EL2, ensure that the world is
+3-1
arch/arm64/kernel/machine_kexec_file.c
···
 
 	/* add kaslr-seed */
 	ret = fdt_delprop(dtb, off, FDT_PROP_KASLR_SEED);
-	if (ret && (ret != -FDT_ERR_NOTFOUND))
+	if (ret == -FDT_ERR_NOTFOUND)
+		ret = 0;
+	else if (ret)
 		goto out;
 
 	if (rng_is_initialized()) {
···
 	REG_S s2, PT_SEPC(sp)
 	/* Trace syscalls, but only if requested by the user. */
 	REG_L t0, TASK_TI_FLAGS(tp)
-	andi t0, t0, _TIF_SYSCALL_TRACE
+	andi t0, t0, _TIF_SYSCALL_WORK
 	bnez t0, handle_syscall_trace_enter
 check_syscall_nr:
 	/* Check to make sure we don't jump to a bogus syscall number. */
···
 	REG_S a0, PT_A0(sp)
 	/* Trace syscalls, but only if requested by the user. */
 	REG_L t0, TASK_TI_FLAGS(tp)
-	andi t0, t0, _TIF_SYSCALL_TRACE
+	andi t0, t0, _TIF_SYSCALL_WORK
 	bnez t0, handle_syscall_trace_exit
 
 ret_from_exception:
+16-14
arch/riscv/kernel/module-sections.c
···
 #include <linux/kernel.h>
 #include <linux/module.h>
 
-u64 module_emit_got_entry(struct module *mod, u64 val)
+unsigned long module_emit_got_entry(struct module *mod, unsigned long val)
 {
 	struct mod_section *got_sec = &mod->arch.got;
 	int i = got_sec->num_entries;
 	struct got_entry *got = get_got_entry(val, got_sec);
 
 	if (got)
-		return (u64)got;
+		return (unsigned long)got;
 
 	/* There is no duplicate entry, create a new one */
 	got = (struct got_entry *)got_sec->shdr->sh_addr;
···
 	got_sec->num_entries++;
 	BUG_ON(got_sec->num_entries > got_sec->max_entries);
 
-	return (u64)&got[i];
+	return (unsigned long)&got[i];
 }
 
-u64 module_emit_plt_entry(struct module *mod, u64 val)
+unsigned long module_emit_plt_entry(struct module *mod, unsigned long val)
 {
 	struct mod_section *got_plt_sec = &mod->arch.got_plt;
 	struct got_entry *got_plt;
···
 	int i = plt_sec->num_entries;
 
 	if (plt)
-		return (u64)plt;
+		return (unsigned long)plt;
 
 	/* There is no duplicate entry, create a new one */
 	got_plt = (struct got_entry *)got_plt_sec->shdr->sh_addr;
 	got_plt[i] = emit_got_entry(val);
 	plt = (struct plt_entry *)plt_sec->shdr->sh_addr;
-	plt[i] = emit_plt_entry(val, (u64)&plt[i], (u64)&got_plt[i]);
+	plt[i] = emit_plt_entry(val,
+				(unsigned long)&plt[i],
+				(unsigned long)&got_plt[i]);
 
 	plt_sec->num_entries++;
 	got_plt_sec->num_entries++;
 	BUG_ON(plt_sec->num_entries > plt_sec->max_entries);
 
-	return (u64)&plt[i];
+	return (unsigned long)&plt[i];
 }
 
-static int is_rela_equal(const Elf64_Rela *x, const Elf64_Rela *y)
+static int is_rela_equal(const Elf_Rela *x, const Elf_Rela *y)
 {
 	return x->r_info == y->r_info && x->r_addend == y->r_addend;
 }
 
-static bool duplicate_rela(const Elf64_Rela *rela, int idx)
+static bool duplicate_rela(const Elf_Rela *rela, int idx)
 {
 	int i;
 	for (i = 0; i < idx; i++) {
···
 	return false;
 }
 
-static void count_max_entries(Elf64_Rela *relas, int num,
+static void count_max_entries(Elf_Rela *relas, int num,
 			      unsigned int *plts, unsigned int *gots)
 {
 	unsigned int type, i;
 
 	for (i = 0; i < num; i++) {
-		type = ELF64_R_TYPE(relas[i].r_info);
+		type = ELF_RISCV_R_TYPE(relas[i].r_info);
 		if (type == R_RISCV_CALL_PLT) {
 			if (!duplicate_rela(relas, i))
 				(*plts)++;
···
 
 	/* Calculate the maxinum number of entries */
 	for (i = 0; i < ehdr->e_shnum; i++) {
-		Elf64_Rela *relas = (void *)ehdr + sechdrs[i].sh_offset;
-		int num_rela = sechdrs[i].sh_size / sizeof(Elf64_Rela);
-		Elf64_Shdr *dst_sec = sechdrs + sechdrs[i].sh_info;
+		Elf_Rela *relas = (void *)ehdr + sechdrs[i].sh_offset;
+		int num_rela = sechdrs[i].sh_size / sizeof(Elf_Rela);
+		Elf_Shdr *dst_sec = sechdrs + sechdrs[i].sh_info;
 
 		if (sechdrs[i].sh_type != SHT_RELA)
 			continue;
···
 	  branches. Requires a compiler with -mindirect-branch=thunk-extern
 	  support for full protection. The kernel may run slower.
 
-config RESCTRL
+config X86_RESCTRL
 	bool "Resource Control support"
 	depends on X86 && (CPU_SUP_INTEL || CPU_SUP_AMD)
 	select KERNFS
···
 		 * given physical address won't match the required
 		 * VMCS12_REVISION identifier.
 		 */
-		nested_vmx_failValid(vcpu,
+		return nested_vmx_failValid(vcpu,
 			VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID);
-		return kvm_skip_emulated_instruction(vcpu);
 	}
 	new_vmcs12 = kmap(page);
 	if (new_vmcs12->hdr.revision_id != VMCS12_REVISION ||
+2-2
arch/x86/kvm/vmx/vmx.c
···
 			  struct kvm_tlb_range *range)
 {
 	struct kvm_vcpu *vcpu;
-	int ret = -ENOTSUPP, i;
+	int ret = 0, i;
 
 	spin_lock(&to_kvm_vmx(kvm)->ept_pointer_lock);
···
 
 	/* unmask address range configure area */
 	for (i = 0; i < vmx->pt_desc.addr_range; i++)
-		vmx->pt_desc.ctl_bitmask &= ~(0xf << (32 + i * 4));
+		vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));
 }
 
 static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
+19-1
block/blk-core.c
···
  * blk_attempt_plug_merge - try to merge with %current's plugged list
  * @q: request_queue new bio is being queued at
  * @bio: new bio being queued
- * @request_count: out parameter for number of traversed plugged requests
  * @same_queue_rq: pointer to &struct request that gets filled in when
  *                 another request associated with @q is found on the plug list
  *                 (optional, may be %NULL)
···
  * @plug:	The &struct blk_plug that needs to be initialized
  *
  * Description:
+ *   blk_start_plug() indicates to the block layer an intent by the caller
+ *   to submit multiple I/O requests in a batch. The block layer may use
+ *   this hint to defer submitting I/Os from the caller until blk_finish_plug()
+ *   is called. However, the block layer may choose to submit requests
+ *   before a call to blk_finish_plug() if the number of queued I/Os
+ *   exceeds %BLK_MAX_REQUEST_COUNT, or if the size of the I/O is larger than
+ *   %BLK_PLUG_FLUSH_SIZE. The queued I/Os may also be submitted early if
+ *   the task schedules (see below).
+ *
  *   Tracking blk_plug inside the task_struct will help with auto-flushing the
  *   pending I/O should the task end up blocking between blk_start_plug() and
  *   blk_finish_plug(). This is important from a performance perspective, but
···
 	blk_mq_flush_plug_list(plug, from_schedule);
 }
 
+/**
+ * blk_finish_plug - mark the end of a batch of submitted I/O
+ * @plug:	The &struct blk_plug passed to blk_start_plug()
+ *
+ * Description:
+ *   Indicate that a batch of I/O submissions is complete. This function
+ *   must be paired with an initial call to blk_start_plug(). The intent
+ *   is to allow the block layer to optimize I/O submission. See the
+ *   documentation for blk_start_plug() for more information.
+ */
 void blk_finish_plug(struct blk_plug *plug)
 {
 	if (plug != current->plug)
+1
drivers/acpi/Kconfig
···
 	bool "ACPI (Advanced Configuration and Power Interface) Support"
 	depends on ARCH_SUPPORTS_ACPI
 	select PNP
+	select NLS
 	default y if X86
 	help
 	  Advanced Configuration and Power Interface (ACPI) support for
···
 #define GPI1_LDO_ON		(3 << 0)
 #define GPI1_LDO_OFF		(4 << 0)
 
-#define AXP288_ADC_TS_PIN_GPADC	0xf2
-#define AXP288_ADC_TS_PIN_ON	0xf3
+#define AXP288_ADC_TS_CURRENT_ON_OFF_MASK		GENMASK(1, 0)
+#define AXP288_ADC_TS_CURRENT_OFF			(0 << 0)
+#define AXP288_ADC_TS_CURRENT_ON_WHEN_CHARGING		(1 << 0)
+#define AXP288_ADC_TS_CURRENT_ON_ONDEMAND		(2 << 0)
+#define AXP288_ADC_TS_CURRENT_ON			(3 << 0)
 
 static struct pmic_table power_table[] = {
 	{
···
  */
 static int intel_xpower_pmic_get_raw_temp(struct regmap *regmap, int reg)
 {
+	int ret, adc_ts_pin_ctrl;
 	u8 buf[2];
-	int ret;
 
-	ret = regmap_write(regmap, AXP288_ADC_TS_PIN_CTRL,
-			   AXP288_ADC_TS_PIN_GPADC);
+	/*
+	 * The current-source used for the battery temp-sensor (TS) is shared
+	 * with the GPADC. For proper fuel-gauge and charger operation the TS
+	 * current-source needs to be permanently on. But to read the GPADC we
+	 * need to temporarily switch the TS current-source to ondemand, so
+	 * that the GPADC can use it, otherwise we will always read an all 0
+	 * value.
+	 *
+	 * Note that the switching from on to on-ondemand is not necessary
+	 * when the TS current-source is off (this happens on devices which
+	 * do not use the TS-pin).
+	 */
+	ret = regmap_read(regmap, AXP288_ADC_TS_PIN_CTRL, &adc_ts_pin_ctrl);
 	if (ret)
 		return ret;
 
-	/* After switching to the GPADC pin give things some time to settle */
-	usleep_range(6000, 10000);
+	if (adc_ts_pin_ctrl & AXP288_ADC_TS_CURRENT_ON_OFF_MASK) {
+		ret = regmap_update_bits(regmap, AXP288_ADC_TS_PIN_CTRL,
+					 AXP288_ADC_TS_CURRENT_ON_OFF_MASK,
+					 AXP288_ADC_TS_CURRENT_ON_ONDEMAND);
+		if (ret)
+			return ret;
+
+		/* Wait a bit after switching the current-source */
+		usleep_range(6000, 10000);
+	}
 
 	ret = regmap_bulk_read(regmap, AXP288_GP_ADC_H, buf, 2);
 	if (ret == 0)
 		ret = (buf[0] << 4) + ((buf[1] >> 4) & 0x0f);
 
-	regmap_write(regmap, AXP288_ADC_TS_PIN_CTRL, AXP288_ADC_TS_PIN_ON);
+	if (adc_ts_pin_ctrl & AXP288_ADC_TS_CURRENT_ON_OFF_MASK) {
+		regmap_update_bits(regmap, AXP288_ADC_TS_PIN_CTRL,
+				   AXP288_ADC_TS_CURRENT_ON_OFF_MASK,
+				   AXP288_ADC_TS_CURRENT_ON);
+	}
 
 	return ret;
 }
+22
drivers/acpi/power.c
···
 	}
 }
 
+static bool acpi_power_resource_is_dup(union acpi_object *package,
+				       unsigned int start, unsigned int i)
+{
+	acpi_handle rhandle, dup;
+	unsigned int j;
+
+	/* The caller is expected to check the package element types */
+	rhandle = package->package.elements[i].reference.handle;
+	for (j = start; j < i; j++) {
+		dup = package->package.elements[j].reference.handle;
+		if (dup == rhandle)
+			return true;
+	}
+
+	return false;
+}
+
 int acpi_extract_power_resources(union acpi_object *package, unsigned int start,
 				 struct list_head *list)
 {
···
 			err = -ENODEV;
 			break;
 		}
+
+		/* Some ACPI tables contain duplicate power resource references */
+		if (acpi_power_resource_is_dup(package, start, i))
+			continue;
+
 		err = acpi_add_power_resource(rhandle);
 		if (err)
 			break;
···
  * Compute the autosuspend-delay expiration time based on the device's
  * power.last_busy time. If the delay has already expired or is disabled
  * (negative) or the power.use_autosuspend flag isn't set, return 0.
- * Otherwise return the expiration time in jiffies (adjusted to be nonzero).
+ * Otherwise return the expiration time in nanoseconds (adjusted to be nonzero).
  *
  * This function may be called either with or without dev->power.lock held.
  * Either way it can be racy, since power.last_busy may be updated at any time.
···
 
 	last_busy = READ_ONCE(dev->power.last_busy);
 
-	expires = last_busy + autosuspend_delay * NSEC_PER_MSEC;
+	expires = last_busy + (u64)autosuspend_delay * NSEC_PER_MSEC;
 	if (expires <= now)
 		expires = 0;	/* Already expired. */
···
 		 * We add a slack of 25% to gather wakeups
 		 * without sacrificing the granularity.
 		 */
-		u64 slack = READ_ONCE(dev->power.autosuspend_delay) *
+		u64 slack = (u64)READ_ONCE(dev->power.autosuspend_delay) *
 				(NSEC_PER_MSEC >> 2);
 
 		dev->power.timer_expires = expires;
···
 	spin_lock_irqsave(&dev->power.lock, flags);
 
 	expires = dev->power.timer_expires;
-	/* If 'expire' is after 'jiffies' we've been called too early. */
+	/*
+	 * If 'expires' is after the current time, we've been called
+	 * too early.
+	 */
 	if (expires > 0 && expires < ktime_to_ns(ktime_get())) {
 		dev->power.timer_expires = 0;
 		rpm_suspend(dev, dev->power.timer_autosuspends ?
+33-2
drivers/block/loop.c
···
 		goto out_unlock;
 	}
 
+	if (lo->lo_offset != info->lo_offset ||
+	    lo->lo_sizelimit != info->lo_sizelimit) {
+		sync_blockdev(lo->lo_device);
+		kill_bdev(lo->lo_device);
+	}
+
 	/* I/O need to be drained during transfer transition */
 	blk_mq_freeze_queue(lo->lo_queue);
···
 
 	if (lo->lo_offset != info->lo_offset ||
 	    lo->lo_sizelimit != info->lo_sizelimit) {
+		/* kill_bdev should have truncated all the pages */
+		if (lo->lo_device->bd_inode->i_mapping->nrpages) {
+			err = -EAGAIN;
+			pr_warn("%s: loop%d (%s) has still dirty pages (nrpages=%lu)\n",
+				__func__, lo->lo_number, lo->lo_file_name,
+				lo->lo_device->bd_inode->i_mapping->nrpages);
+			goto out_unfreeze;
+		}
 		if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit)) {
 			err = -EFBIG;
 			goto out_unfreeze;
···
 
 static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
 {
+	int err = 0;
+
 	if (lo->lo_state != Lo_bound)
 		return -ENXIO;
 
 	if (arg < 512 || arg > PAGE_SIZE || !is_power_of_2(arg))
 		return -EINVAL;
 
+	if (lo->lo_queue->limits.logical_block_size != arg) {
+		sync_blockdev(lo->lo_device);
+		kill_bdev(lo->lo_device);
+	}
+
 	blk_mq_freeze_queue(lo->lo_queue);
+
+	/* kill_bdev should have truncated all the pages */
+	if (lo->lo_queue->limits.logical_block_size != arg &&
+	    lo->lo_device->bd_inode->i_mapping->nrpages) {
+		err = -EAGAIN;
+		pr_warn("%s: loop%d (%s) has still dirty pages (nrpages=%lu)\n",
+			__func__, lo->lo_number, lo->lo_file_name,
+			lo->lo_device->bd_inode->i_mapping->nrpages);
+		goto out_unfreeze;
+	}
 
 	blk_queue_logical_block_size(lo->lo_queue, arg);
 	blk_queue_physical_block_size(lo->lo_queue, arg);
 	blk_queue_io_min(lo->lo_queue, arg);
 	loop_update_dio(lo);
-
+out_unfreeze:
 	blk_mq_unfreeze_queue(lo->lo_queue);
 
-	return 0;
+	return err;
 }
 
 static int lo_simple_ioctl(struct loop_device *lo, unsigned int cmd,
+1
drivers/block/null_blk.h
···
 #else
 static inline int null_zone_init(struct nullb_device *dev)
 {
+	pr_err("null_blk: CONFIG_BLK_DEV_ZONED not enabled\n");
 	return -EINVAL;
 }
 static inline void null_zone_exit(struct nullb_device *dev) {}
+4-5
drivers/block/rbd.c
···
 	struct list_head *tmp;
 	int dev_id;
 	char opt_buf[6];
-	bool already = false;
 	bool force = false;
 	int ret;
···
 		spin_lock_irq(&rbd_dev->lock);
 		if (rbd_dev->open_count && !force)
 			ret = -EBUSY;
-		else
-			already = test_and_set_bit(RBD_DEV_FLAG_REMOVING,
-						   &rbd_dev->flags);
+		else if (test_and_set_bit(RBD_DEV_FLAG_REMOVING,
+					  &rbd_dev->flags))
+			ret = -EINPROGRESS;
 		spin_unlock_irq(&rbd_dev->lock);
 	}
 	spin_unlock(&rbd_dev_list_lock);
-	if (ret < 0 || already)
+	if (ret)
 		return ret;
 
 	if (force) {
···
 	atomic64_t bd_count;		/* no. of pages in backing device */
 	atomic64_t bd_reads;		/* no. of reads from backing device */
 	atomic64_t bd_writes;		/* no. of writes from backing device */
-	atomic64_t bd_wb_limit;		/* writeback limit of backing device */
 #endif
 };
···
 	 */
 	bool claim; /* Protected by bdev->bd_mutex */
 	struct file *backing_dev;
-	bool stop_writeback;
 #ifdef CONFIG_ZRAM_WRITEBACK
+	spinlock_t wb_limit_lock;
+	bool wb_limit_enable;
+	u64 bd_wb_limit;
 	struct block_device *bdev;
 	unsigned int old_block_size;
 	unsigned long *bitmap;
+4-8
drivers/cpufreq/cpufreq.c
···
 {
 	unsigned int ret_freq = 0;
 
-	if (!cpufreq_driver->get)
+	if (unlikely(policy_is_inactive(policy)) || !cpufreq_driver->get)
 		return ret_freq;
 
 	ret_freq = cpufreq_driver->get(policy->cpu);
 
 	/*
-	 * Updating inactive policies is invalid, so avoid doing that. Also
-	 * if fast frequency switching is used with the given policy, the check
+	 * If fast frequency switching is used with the given policy, the check
 	 * against policy->cur is pointless, so skip it in that case too.
 	 */
-	if (unlikely(policy_is_inactive(policy)) || policy->fast_switch_enabled)
+	if (policy->fast_switch_enabled)
 		return ret_freq;
 
 	if (ret_freq && policy->cur &&
···
 
 	if (policy) {
 		down_read(&policy->rwsem);
-
-		if (!policy_is_inactive(policy))
-			ret_freq = __cpufreq_get(policy);
-
+		ret_freq = __cpufreq_get(policy);
 		up_read(&policy->rwsem);
 
 		cpufreq_cpu_put(policy);
···
 	struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
 
 	dce_virtual_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
-	if (crtc->primary->fb) {
-		int r;
-		struct amdgpu_bo *abo;
-
-		abo = gem_to_amdgpu_bo(crtc->primary->fb->obj[0]);
-		r = amdgpu_bo_reserve(abo, true);
-		if (unlikely(r))
-			DRM_ERROR("failed to reserve abo before unpin\n");
-		else {
-			amdgpu_bo_unpin(abo);
-			amdgpu_bo_unreserve(abo);
-		}
-	}
 
 	amdgpu_crtc->pll_id = ATOM_PPLL_INVALID;
 	amdgpu_crtc->encoder = NULL;
···
 	spin_unlock_irqrestore(&adev->ddev->event_lock, flags);
 
 	drm_crtc_vblank_put(&amdgpu_crtc->base);
-	schedule_work(&works->unpin_work);
+	amdgpu_bo_unref(&works->old_abo);
+	kfree(works->shared);
+	kfree(works);
 
 	return 0;
 }
+35-15
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
···42334233 u32 tmp;42344234 u32 rb_bufsz;42354235 u64 rb_addr, rptr_addr, wptr_gpu_addr;42364236- int r;4237423642384237 /* Set the write pointer delay */42394238 WREG32(mmCP_RB_WPTR_DELAY, 0);···42774278 amdgpu_ring_clear_ring(ring);42784279 gfx_v8_0_cp_gfx_start(adev);42794280 ring->sched.ready = true;42804280- r = amdgpu_ring_test_helper(ring);4281428142824282- return r;42824282+ return 0;42834283}4284428442854285static void gfx_v8_0_cp_compute_enable(struct amdgpu_device *adev, bool enable)···43674369 amdgpu_ring_write(kiq_ring, upper_32_bits(wptr_addr));43684370 }4369437143704370- r = amdgpu_ring_test_helper(kiq_ring);43714371- if (r)43724372- DRM_ERROR("KCQ enable failed\n");43734373- return r;43724372+ amdgpu_ring_commit(kiq_ring);43734373+43744374+ return 0;43744375}4375437643764377static int gfx_v8_0_deactivate_hqd(struct amdgpu_device *adev, u32 req)···47064709 if (r)47074710 goto done;4708471147094709- /* Test KCQs - reversing the order of rings seems to fix ring test failure47104710- * after GPU reset47114711- */47124712- for (i = adev->gfx.num_compute_rings - 1; i >= 0; i--) {47134713- ring = &adev->gfx.compute_ring[i];47144714- r = amdgpu_ring_test_helper(ring);47154715- }47164716-47174712done:47184713 return r;47144714+}47154715+47164716+static int gfx_v8_0_cp_test_all_rings(struct amdgpu_device *adev)47174717+{47184718+ int r, i;47194719+ struct amdgpu_ring *ring;47204720+47214721+ /* collect all the ring_tests here, gfx, kiq, compute */47224722+ ring = &adev->gfx.gfx_ring[0];47234723+ r = amdgpu_ring_test_helper(ring);47244724+ if (r)47254725+ return r;47264726+47274727+ ring = &adev->gfx.kiq.ring;47284728+ r = amdgpu_ring_test_helper(ring);47294729+ if (r)47304730+ return r;47314731+47324732+ for (i = 0; i < adev->gfx.num_compute_rings; i++) {47334733+ ring = &adev->gfx.compute_ring[i];47344734+ amdgpu_ring_test_helper(ring);47354735+ }47364736+47374737+ return 0;47194738}4720473947214740static int gfx_v8_0_cp_resume(struct amdgpu_device 
*adev)···47524739 r = gfx_v8_0_kcq_resume(adev);47534740 if (r)47544741 return r;47424742+47434743+ r = gfx_v8_0_cp_test_all_rings(adev);47444744+ if (r)47454745+ return r;47464746+47554747 gfx_v8_0_enable_gui_idle_interrupt(adev, true);4756474847574749 return 0;···51035085 if (REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_CP) ||51045086 REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_GFX))51055087 gfx_v8_0_cp_gfx_resume(adev);50885088+50895089+ gfx_v8_0_cp_test_all_rings(adev);5106509051075091 adev->gfx.rlc.funcs->start(adev);51085092
drivers/gpu/drm/i915/gvt/scheduler.c
···356356 return 0;357357}358358359359+static int360360+intel_gvt_workload_req_alloc(struct intel_vgpu_workload *workload)361361+{362362+ struct intel_vgpu *vgpu = workload->vgpu;363363+ struct intel_vgpu_submission *s = &vgpu->submission;364364+ struct i915_gem_context *shadow_ctx = s->shadow_ctx;365365+ struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;366366+ struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id];367367+ struct i915_request *rq;368368+ int ret = 0;369369+370370+ lockdep_assert_held(&dev_priv->drm.struct_mutex);371371+372372+ if (workload->req)373373+ goto out;374374+375375+ rq = i915_request_alloc(engine, shadow_ctx);376376+ if (IS_ERR(rq)) {377377+ gvt_vgpu_err("fail to allocate gem request\n");378378+ ret = PTR_ERR(rq);379379+ goto out;380380+ }381381+ workload->req = i915_request_get(rq);382382+out:383383+ return ret;384384+}385385+359386/**360387 * intel_gvt_scan_and_shadow_workload - audit the workload by scanning and361388 * shadow it as well, include ringbuffer,wa_ctx and ctx.···399372 struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;400373 struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id];401374 struct intel_context *ce;402402- struct i915_request *rq;403375 int ret;404376405377 lockdep_assert_held(&dev_priv->drm.struct_mutex);406378407407- if (workload->req)379379+ if (workload->shadow)408380 return 0;409381410382 ret = set_context_ppgtt_from_shadow(workload, shadow_ctx);···443417 goto err_shadow;444418 }445419446446- rq = i915_request_alloc(engine, shadow_ctx);447447- if (IS_ERR(rq)) {448448- gvt_vgpu_err("fail to allocate gem request\n");449449- ret = PTR_ERR(rq);450450- goto err_shadow;451451- }452452- workload->req = i915_request_get(rq);453453-454454- ret = populate_shadow_context(workload);455455- if (ret)456456- goto err_req;457457-420420+ workload->shadow = true;458421 return 0;459459-err_req:460460- rq = fetch_and_zero(&workload->req);461461- 
i915_request_put(rq);462422err_shadow:463423 release_shadow_wa_ctx(&workload->wa_ctx);464424err_unpin:···683671 mutex_lock(&vgpu->vgpu_lock);684672 mutex_lock(&dev_priv->drm.struct_mutex);685673674674+ ret = intel_gvt_workload_req_alloc(workload);675675+ if (ret)676676+ goto err_req;677677+686678 ret = intel_gvt_scan_and_shadow_workload(workload);687679 if (ret)688680 goto out;689681682682+ ret = populate_shadow_context(workload);683683+ if (ret) {684684+ release_shadow_wa_ctx(&workload->wa_ctx);685685+ goto out;686686+ }687687+690688 ret = prepare_workload(workload);691691-692689out:693693- if (ret)694694- workload->status = ret;695695-696690 if (!IS_ERR_OR_NULL(workload->req)) {697691 gvt_dbg_sched("ring id %d submit workload to i915 %p\n",698692 ring_id, workload->req);699693 i915_request_add(workload->req);700694 workload->dispatched = true;701695 }702702-696696+err_req:697697+ if (ret)698698+ workload->status = ret;703699 mutex_unlock(&dev_priv->drm.struct_mutex);704700 mutex_unlock(&vgpu->vgpu_lock);705701 return ret;
+1
drivers/gpu/drm/i915/gvt/scheduler.h
···8383 struct i915_request *req;8484 /* if this workload has been dispatched to i915? */8585 bool dispatched;8686+ bool shadow; /* if workload has done shadow of guest request */8687 int status;87888889 struct intel_vgpu_mm *shadow_mm;
drivers/gpu/drm/i915/i915_gem_gtt.c
···20752075int gen6_ppgtt_pin(struct i915_hw_ppgtt *base)20762076{20772077 struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(base);20782078+ int err;2078207920792080 /*20802081 * Workaround the limited maximum vma->pin_count and the aliasing_ppgtt···20912090 * allocator works in address space sizes, so it's multiplied by page20922091 * size. We allocate at the top of the GTT to avoid fragmentation.20932092 */20942094- return i915_vma_pin(ppgtt->vma,20952095- 0, GEN6_PD_ALIGN,20962096- PIN_GLOBAL | PIN_HIGH);20932093+ err = i915_vma_pin(ppgtt->vma,20942094+ 0, GEN6_PD_ALIGN,20952095+ PIN_GLOBAL | PIN_HIGH);20962096+ if (err)20972097+ goto unpin;20982098+20992099+ return 0;21002100+21012101+unpin:21022102+ ppgtt->pin_count = 0;21032103+ return err;20972104}2098210520992106void gen6_ppgtt_unpin(struct i915_hw_ppgtt *base)
+14-9
drivers/gpu/drm/i915/i915_gpu_error.c
···19071907{19081908 struct i915_gpu_state *error;1909190919101910+ /* Check if GPU capture has been disabled */19111911+ error = READ_ONCE(i915->gpu_error.first_error);19121912+ if (IS_ERR(error))19131913+ return error;19141914+19101915 error = kzalloc(sizeof(*error), GFP_ATOMIC);19111911- if (!error)19121912- return NULL;19161916+ if (!error) {19171917+ i915_disable_error_state(i915, -ENOMEM);19181918+ return ERR_PTR(-ENOMEM);19191919+ }1913192019141921 kref_init(&error->ref);19151922 error->i915 = i915;···19521945 return;1953194619541947 error = i915_capture_gpu_state(i915);19551955- if (!error) {19561956- DRM_DEBUG_DRIVER("out of memory, not capturing error state\n");19571957- i915_disable_error_state(i915, -ENOMEM);19481948+ if (IS_ERR(error))19581949 return;19591959- }1960195019611951 i915_error_capture_msg(i915, error, engine_mask, error_msg);19621952 DRM_INFO("%s\n", error->error_msg);···1991198719921988 spin_lock_irq(&i915->gpu_error.lock);19931989 error = i915->gpu_error.first_error;19941994- if (error)19901990+ if (!IS_ERR_OR_NULL(error))19951991 i915_gpu_state_get(error);19961992 spin_unlock_irq(&i915->gpu_error.lock);19971993···2004200020052001 spin_lock_irq(&i915->gpu_error.lock);20062002 error = i915->gpu_error.first_error;20072007- i915->gpu_error.first_error = NULL;20032003+ if (error != ERR_PTR(-ENODEV)) /* if disabled, always disabled */20042004+ i915->gpu_error.first_error = NULL;20082005 spin_unlock_irq(&i915->gpu_error.lock);2009200620102010- if (!IS_ERR(error))20072007+ if (!IS_ERR_OR_NULL(error))20112008 i915_gpu_state_put(error);20122009}20132010
+3-1
drivers/gpu/drm/i915/i915_sysfs.c
···521521 ssize_t ret;522522523523 gpu = i915_first_error_state(i915);524524- if (gpu) {524524+ if (IS_ERR(gpu)) {525525+ ret = PTR_ERR(gpu);526526+ } else if (gpu) {525527 ret = i915_gpu_state_copy_to_buffer(gpu, buf, off, count);526528 i915_gpu_state_put(gpu);527529 } else {
drivers/gpu/drm/i915/intel_psr.c
···274274 DRM_DEBUG_KMS("eDP panel supports PSR version %x\n",275275 intel_dp->psr_dpcd[0]);276276277277+ if (drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_NO_PSR)) {278278+ DRM_DEBUG_KMS("PSR support not currently available for this panel\n");279279+ return;280280+ }281281+277282 if (!(intel_dp->edp_dpcd[1] & DP_EDP_SET_POWER_CAP)) {278283 DRM_DEBUG_KMS("Panel lacks power state control, PSR cannot be enabled\n");279284 return;280285 }286286+281287 dev_priv->psr.sink_support = true;282288 dev_priv->psr.sink_sync_latency =283289 intel_dp_get_sink_sync_latency(intel_dp);
+3
drivers/gpu/drm/nouveau/nouveau_backlight.c
···253253 case NV_DEVICE_INFO_V0_FERMI:254254 case NV_DEVICE_INFO_V0_KEPLER:255255 case NV_DEVICE_INFO_V0_MAXWELL:256256+ case NV_DEVICE_INFO_V0_PASCAL:257257+ case NV_DEVICE_INFO_V0_VOLTA:258258+ case NV_DEVICE_INFO_V0_TURING:256259 ret = nv50_backlight_init(nv_encoder, &props, &ops);257260 break;258261 default:
+5-2
drivers/gpu/drm/nouveau/nvkm/engine/falcon.c
···2222#include <engine/falcon.h>23232424#include <core/gpuobj.h>2525+#include <subdev/mc.h>2526#include <subdev/timer.h>2627#include <engine/fifo.h>2728···108107 }109108 }110109111111- nvkm_mask(device, base + 0x048, 0x00000003, 0x00000000);112112- nvkm_wr32(device, base + 0x014, 0xffffffff);110110+ if (nvkm_mc_enabled(device, engine->subdev.index)) {111111+ nvkm_mask(device, base + 0x048, 0x00000003, 0x00000000);112112+ nvkm_wr32(device, base + 0x014, 0xffffffff);113113+ }113114 return 0;114115}115116
drivers/i2c/i2c-dev.c
···470470 data_arg.data);471471 }472472 case I2C_RETRIES:473473+ if (arg > INT_MAX)474474+ return -EINVAL;475475+473476 client->adapter->retries = arg;474477 break;475478 case I2C_TIMEOUT:479479+ if (arg > INT_MAX)480480+ return -EINVAL;481481+476482 /* For historical reasons, user-space sets the timeout477483 * value in units of 10 ms.478484 */
···504504 INIT_LIST_HEAD(&ctx->mm_head);505505 mutex_init(&ctx->mm_list_lock);506506507507- ctx->ah_tbl.va = dma_zalloc_coherent(&pdev->dev, map_len,508508- &ctx->ah_tbl.pa, GFP_KERNEL);507507+ ctx->ah_tbl.va = dma_alloc_coherent(&pdev->dev, map_len,508508+ &ctx->ah_tbl.pa, GFP_KERNEL);509509 if (!ctx->ah_tbl.va) {510510 kfree(ctx);511511 return ERR_PTR(-ENOMEM);···838838 return -ENOMEM;839839840840 for (i = 0; i < mr->num_pbls; i++) {841841- va = dma_zalloc_coherent(&pdev->dev, dma_len, &pa, GFP_KERNEL);841841+ va = dma_alloc_coherent(&pdev->dev, dma_len, &pa, GFP_KERNEL);842842 if (!va) {843843 ocrdma_free_mr_pbl_tbl(dev, mr);844844 status = -ENOMEM;
+2-2
drivers/infiniband/hw/qedr/verbs.c
···556556 return ERR_PTR(-ENOMEM);557557558558 for (i = 0; i < pbl_info->num_pbls; i++) {559559- va = dma_zalloc_coherent(&pdev->dev, pbl_info->pbl_size,560560- &pa, flags);559559+ va = dma_alloc_coherent(&pdev->dev, pbl_info->pbl_size, &pa,560560+ flags);561561 if (!va)562562 goto err;563563
+2-2
drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
···890890 dev_info(&pdev->dev, "device version %d, driver version %d\n",891891 dev->dsr_version, PVRDMA_VERSION);892892893893- dev->dsr = dma_zalloc_coherent(&pdev->dev, sizeof(*dev->dsr),894894- &dev->dsrbase, GFP_KERNEL);893893+ dev->dsr = dma_alloc_coherent(&pdev->dev, sizeof(*dev->dsr),894894+ &dev->dsrbase, GFP_KERNEL);895895 if (!dev->dsr) {896896 dev_err(&pdev->dev, "failed to allocate shared region\n");897897 ret = -ENOMEM;
drivers/mmc/host/sdhci.c
···37633763 * Use zalloc to zero the reserved high 32-bits of 128-bit37643764 * descriptors so that they never need to be written.37653765 */37663766- buf = dma_zalloc_coherent(mmc_dev(mmc), host->align_buffer_sz +37673767- host->adma_table_sz, &dma, GFP_KERNEL);37663766+ buf = dma_alloc_coherent(mmc_dev(mmc),37673767+ host->align_buffer_sz + host->adma_table_sz,37683768+ &dma, GFP_KERNEL);37683769 if (!buf) {37693770 pr_warn("%s: Unable to allocate ADMA buffers - falling back to standard DMA\n",37703771 mmc_hostname(mmc));
+1-1
drivers/mtd/mtdcore.c
···522522 mtd->nvmem = nvmem_register(&config);523523 if (IS_ERR(mtd->nvmem)) {524524 /* Just ignore if there is no NVMEM support in the kernel */525525- if (PTR_ERR(mtd->nvmem) == -ENOSYS) {525525+ if (PTR_ERR(mtd->nvmem) == -EOPNOTSUPP) {526526 mtd->nvmem = NULL;527527 } else {528528 dev_err(&mtd->dev, "Failed to register NVMEM device\n");
drivers/net/ethernet/atheros/atl1c/atl1c_main.c
···10191019 sizeof(struct atl1c_recv_ret_status) * rx_desc_count +10201020 8 * 4;1021102110221022- ring_header->desc = dma_zalloc_coherent(&pdev->dev, ring_header->size,10231023- &ring_header->dma, GFP_KERNEL);10221022+ ring_header->desc = dma_alloc_coherent(&pdev->dev, ring_header->size,10231023+ &ring_header->dma, GFP_KERNEL);10241024 if (unlikely(!ring_header->desc)) {10251025 dev_err(&pdev->dev, "could not get memory for DMA buffer\n");10261026 goto err_nomem;
+4-4
drivers/net/ethernet/broadcom/bcm63xx_enet.c
···936936937937 /* allocate rx dma ring */938938 size = priv->rx_ring_size * sizeof(struct bcm_enet_desc);939939- p = dma_zalloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL);939939+ p = dma_alloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL);940940 if (!p) {941941 ret = -ENOMEM;942942 goto out_freeirq_tx;···947947948948 /* allocate tx dma ring */949949 size = priv->tx_ring_size * sizeof(struct bcm_enet_desc);950950- p = dma_zalloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL);950950+ p = dma_alloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL);951951 if (!p) {952952 ret = -ENOMEM;953953 goto out_free_rx_ring;···2120212021212121 /* allocate rx dma ring */21222122 size = priv->rx_ring_size * sizeof(struct bcm_enet_desc);21232123- p = dma_zalloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL);21232123+ p = dma_alloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL);21242124 if (!p) {21252125 dev_err(kdev, "cannot allocate rx ring %u\n", size);21262126 ret = -ENOMEM;···2132213221332133 /* allocate tx dma ring */21342134 size = priv->tx_ring_size * sizeof(struct bcm_enet_desc);21352135- p = dma_zalloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL);21352135+ p = dma_alloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL);21362136 if (!p) {21372137 dev_err(kdev, "cannot allocate tx ring\n");21382138 ret = -ENOMEM;
+2-2
drivers/net/ethernet/broadcom/bcmsysport.c
···15061506 /* We just need one DMA descriptor which is DMA-able, since writing to15071507 * the port will allocate a new descriptor in its internal linked-list15081508 */15091509- p = dma_zalloc_coherent(kdev, sizeof(struct dma_desc), &ring->desc_dma,15101510- GFP_KERNEL);15091509+ p = dma_alloc_coherent(kdev, sizeof(struct dma_desc), &ring->desc_dma,15101510+ GFP_KERNEL);15111511 if (!p) {15121512 netif_err(priv, hw, priv->netdev, "DMA alloc failed\n");15131513 return -ENOMEM;
+6-6
drivers/net/ethernet/broadcom/bgmac.c
···634634635635 /* Alloc ring of descriptors */636636 size = BGMAC_TX_RING_SLOTS * sizeof(struct bgmac_dma_desc);637637- ring->cpu_base = dma_zalloc_coherent(dma_dev, size,638638- &ring->dma_base,639639- GFP_KERNEL);637637+ ring->cpu_base = dma_alloc_coherent(dma_dev, size,638638+ &ring->dma_base,639639+ GFP_KERNEL);640640 if (!ring->cpu_base) {641641 dev_err(bgmac->dev, "Allocation of TX ring 0x%X failed\n",642642 ring->mmio_base);···659659660660 /* Alloc ring of descriptors */661661 size = BGMAC_RX_RING_SLOTS * sizeof(struct bgmac_dma_desc);662662- ring->cpu_base = dma_zalloc_coherent(dma_dev, size,663663- &ring->dma_base,664664- GFP_KERNEL);662662+ ring->cpu_base = dma_alloc_coherent(dma_dev, size,663663+ &ring->dma_base,664664+ GFP_KERNEL);665665 if (!ring->cpu_base) {666666 dev_err(bgmac->dev, "Allocation of RX ring 0x%X failed\n",667667 ring->mmio_base);
···20812081 bool is_pf);2082208220832083#define BNX2X_ILT_ZALLOC(x, y, size) \20842084- x = dma_zalloc_coherent(&bp->pdev->dev, size, y, GFP_KERNEL)20842084+ x = dma_alloc_coherent(&bp->pdev->dev, size, y, GFP_KERNEL)2085208520862086#define BNX2X_ILT_FREE(x, y, size) \20872087 do { \
···9999 /* Status messages are allocated and initialized to 0. This is necessary100100 * since DR bit should be initialized to 0.101101 */102102- sring->va = dma_zalloc_coherent(dev, sz, &sring->pa, GFP_KERNEL);102102+ sring->va = dma_alloc_coherent(dev, sz, &sring->pa, GFP_KERNEL);103103 if (!sring->va)104104 return -ENOMEM;105105···381381 if (!ring->ctx)382382 goto err;383383384384- ring->va = dma_zalloc_coherent(dev, sz, &ring->pa, GFP_KERNEL);384384+ ring->va = dma_alloc_coherent(dev, sz, &ring->pa, GFP_KERNEL);385385 if (!ring->va)386386 goto err_free_ctx;387387388388 if (ring->is_rx) {389389 sz = sizeof(*ring->edma_rx_swtail.va);390390 ring->edma_rx_swtail.va =391391- dma_zalloc_coherent(dev, sz, &ring->edma_rx_swtail.pa,392392- GFP_KERNEL);391391+ dma_alloc_coherent(dev, sz, &ring->edma_rx_swtail.pa,392392+ GFP_KERNEL);393393 if (!ring->edma_rx_swtail.va)394394 goto err_free_va;395395 }
drivers/rapidio/devices/tsi721_dma.c
···9090 * Allocate space for DMA descriptors9191 * (add an extra element for link descriptor)9292 */9393- bd_ptr = dma_zalloc_coherent(dev,9494- (bd_num + 1) * sizeof(struct tsi721_dma_desc),9595- &bd_phys, GFP_ATOMIC);9393+ bd_ptr = dma_alloc_coherent(dev,9494+ (bd_num + 1) * sizeof(struct tsi721_dma_desc),9595+ &bd_phys, GFP_ATOMIC);9696 if (!bd_ptr)9797 return -ENOMEM;9898···108108 sts_size = ((bd_num + 1) >= TSI721_DMA_MINSTSSZ) ?109109 (bd_num + 1) : TSI721_DMA_MINSTSSZ;110110 sts_size = roundup_pow_of_two(sts_size);111111- sts_ptr = dma_zalloc_coherent(dev,111111+ sts_ptr = dma_alloc_coherent(dev,112112 sts_size * sizeof(struct tsi721_dma_sts),113113 &sts_phys, GFP_ATOMIC);114114 if (!sts_ptr) {
+14-6
drivers/reset/Kconfig
···109109110110config RESET_SIMPLE111111 bool "Simple Reset Controller Driver" if COMPILE_TEST112112- default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED112112+ default ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED113113 help114114 This enables a simple reset controller driver for reset lines115115 that can be asserted and deasserted by toggling bits in a contiguous,···127127 default MACH_STM32MP157128128 help129129 This enables the RCC reset controller driver for STM32 MPUs.130130+131131+config RESET_SOCFPGA132132+ bool "SoCFPGA Reset Driver" if COMPILE_TEST && !ARCH_SOCFPGA133133+ default ARCH_SOCFPGA134134+ select RESET_SIMPLE135135+ help136136+ This enables the reset driver for the SoCFPGA ARMv7 platforms. This137137+ driver gets initialized early during platform init calls.130138131139config RESET_SUNXI132140 bool "Allwinner SoCs Reset Driver" if COMPILE_TEST && !ARCH_SUNXI···171163 Say Y if you want to control reset signals provided by System Control172164 block, Media I/O block, Peripheral Block.173165174174-config RESET_UNIPHIER_USB3175175- tristate "USB3 reset driver for UniPhier SoCs"166166+config RESET_UNIPHIER_GLUE167167+ tristate "Reset driver in glue layer for UniPhier SoCs"176168 depends on (ARCH_UNIPHIER || COMPILE_TEST) && OF177169 default ARCH_UNIPHIER178170 select RESET_SIMPLE179171 help180180- Support for the USB3 core reset on UniPhier SoCs.181181- Say Y if you want to control reset signals provided by182182- USB3 glue layer.172172+ Support for peripheral core reset included in its own glue layer173173+ on UniPhier SoCs. Say Y if you want to control reset signals174174+ provided by the glue layer.183175184176config RESET_ZYNQ185177 bool "ZYNQ Reset Driver" if COMPILE_TEST
drivers/reset/core.c
···795795 return rstc;796796}797797EXPORT_SYMBOL_GPL(devm_reset_control_array_get);798798+799799+static int reset_control_get_count_from_lookup(struct device *dev)800800+{801801+ const struct reset_control_lookup *lookup;802802+ const char *dev_id;803803+ int count = 0;804804+805805+ if (!dev)806806+ return -EINVAL;807807+808808+ dev_id = dev_name(dev);809809+ mutex_lock(&reset_lookup_mutex);810810+811811+ list_for_each_entry(lookup, &reset_lookup_list, list) {812812+ if (!strcmp(lookup->dev_id, dev_id))813813+ count++;814814+ }815815+816816+ mutex_unlock(&reset_lookup_mutex);817817+818818+ if (count == 0)819819+ count = -ENOENT;820820+821821+ return count;822822+}823823+824824+/**825825+ * reset_control_get_count - Count number of resets available with a device826826+ *827827+ * @dev: device for which to return the number of resets828828+ *829829+ * Returns positive reset count on success, or error number on failure and830830+ * on count being zero.831831+ */832832+int reset_control_get_count(struct device *dev)833833+{834834+ if (dev->of_node)835835+ return of_reset_control_get_count(dev->of_node);836836+837837+ return reset_control_get_count_from_lookup(dev);838838+}839839+EXPORT_SYMBOL_GPL(reset_control_get_count);
···18271827 * page, this is used as a priori size of SLI4_PAGE_SIZE for18281828 * the later DMA memory free.18291829 */18301830- viraddr = dma_zalloc_coherent(&phba->pcidev->dev,18311831- SLI4_PAGE_SIZE, &phyaddr,18321832- GFP_KERNEL);18301830+ viraddr = dma_alloc_coherent(&phba->pcidev->dev,18311831+ SLI4_PAGE_SIZE, &phyaddr,18321832+ GFP_KERNEL);18331833 /* If the allocation fails, proceed with whatever we have */18341834 if (!viraddr)18351835 break;
drivers/scsi/mesh.c
···19151915 /* We use the PCI APIs for now until the generic one gets fixed19161916 * enough or until we get some macio-specific versions19171917 */19181918- dma_cmd_space = dma_zalloc_coherent(&macio_get_pci_dev(mdev)->dev,19191919- ms->dma_cmd_size, &dma_cmd_bus, GFP_KERNEL);19181918+ dma_cmd_space = dma_alloc_coherent(&macio_get_pci_dev(mdev)->dev,19191919+ ms->dma_cmd_size, &dma_cmd_bus,19201920+ GFP_KERNEL);19201921 if (dma_cmd_space == NULL) {19211922 printk(KERN_ERR "mesh: can't allocate DMA table\n");19221923 goto out_unmap;
+5-4
drivers/scsi/mvumi.c
···143143144144 case RESOURCE_UNCACHED_MEMORY:145145 size = round_up(size, 8);146146- res->virt_addr = dma_zalloc_coherent(&mhba->pdev->dev, size,147147- &res->bus_addr, GFP_KERNEL);146146+ res->virt_addr = dma_alloc_coherent(&mhba->pdev->dev, size,147147+ &res->bus_addr,148148+ GFP_KERNEL);148149 if (!res->virt_addr) {149150 dev_err(&mhba->pdev->dev,150151 "unable to allocate consistent mem,"···247246 if (size == 0)248247 return 0;249248250250- virt_addr = dma_zalloc_coherent(&mhba->pdev->dev, size, &phy_addr,251251- GFP_KERNEL);249249+ virt_addr = dma_alloc_coherent(&mhba->pdev->dev, size, &phy_addr,250250+ GFP_KERNEL);252251 if (!virt_addr)253252 return -1;254253
drivers/tty/serial/Kconfig
···8585 with "earlycon=smh" on the kernel command line. The console is8686 enabled when early_param is processed.87878888+config SERIAL_EARLYCON_RISCV_SBI8989+ bool "Early console using RISC-V SBI"9090+ depends on RISCV9191+ select SERIAL_CORE9292+ select SERIAL_CORE_CONSOLE9393+ select SERIAL_EARLYCON9494+ help9595+ Support for an early debug console using RISC-V SBI. This enables9696+ the console before the standard serial driver is probed. This is9797+ enabled with "earlycon=sbi" on the kernel command line. The console9898+ is enabled when early_param is processed.9999+88100config SERIAL_SB1250_DUART89101 tristate "BCM1xxx on-chip DUART serial support"90102 depends on SIBYTE_SB1xxx_SOC=y
+1
drivers/tty/serial/Makefile
···7788obj-$(CONFIG_SERIAL_EARLYCON) += earlycon.o99obj-$(CONFIG_SERIAL_EARLYCON_ARM_SEMIHOST) += earlycon-arm-semihost.o1010+obj-$(CONFIG_SERIAL_EARLYCON_RISCV_SBI) += earlycon-riscv-sbi.o10111112# These Sparc drivers have to appear before others such as 82501213# which share ttySx minor node space. Otherwise console device
+28
drivers/tty/serial/earlycon-riscv-sbi.c
···11+// SPDX-License-Identifier: GPL-2.022+/*33+ * RISC-V SBI based earlycon44+ *55+ * Copyright (C) 2018 Anup Patel <anup@brainfault.org>66+ */77+#include <linux/kernel.h>88+#include <linux/console.h>99+#include <linux/init.h>1010+#include <linux/serial_core.h>1111+#include <asm/sbi.h>1212+1313+static void sbi_console_write(struct console *con,1414+ const char *s, unsigned int n)1515+{1616+ int i;1717+1818+ for (i = 0; i < n; ++i)1919+ sbi_console_putchar(s[i]);2020+}2121+2222+static int __init early_sbi_setup(struct earlycon_device *device,2323+ const char *opt)2424+{2525+ device->con->write = sbi_console_write;2626+ return 0;2727+}2828+EARLYCON_DECLARE(sbi, early_sbi_setup);
drivers/tty/tty_io.c
···12561256static int tty_reopen(struct tty_struct *tty)12571257{12581258 struct tty_driver *driver = tty->driver;12591259- int retval;12591259+ struct tty_ldisc *ld;12601260+ int retval = 0;1260126112611262 if (driver->type == TTY_DRIVER_TYPE_PTY &&12621263 driver->subtype == PTY_TYPE_MASTER)···12691268 if (test_bit(TTY_EXCLUSIVE, &tty->flags) && !capable(CAP_SYS_ADMIN))12701269 return -EBUSY;1271127012721272- retval = tty_ldisc_lock(tty, 5 * HZ);12731273- if (retval)12741274- return retval;12711271+ ld = tty_ldisc_ref_wait(tty);12721272+ if (ld) {12731273+ tty_ldisc_deref(ld);12741274+ } else {12751275+ retval = tty_ldisc_lock(tty, 5 * HZ);12761276+ if (retval)12771277+ return retval;1275127812761276- if (!tty->ldisc)12771277- retval = tty_ldisc_reinit(tty, tty->termios.c_line);12781278- tty_ldisc_unlock(tty);12791279+ if (!tty->ldisc)12801280+ retval = tty_ldisc_reinit(tty, tty->termios.c_line);12811281+ tty_ldisc_unlock(tty);12821282+ }1279128312801284 if (retval == 0)12811285 tty->count++;
+7
drivers/usb/class/cdc-acm.c
···18651865 .driver_info = IGNORE_DEVICE,18661866 },1867186718681868+ { USB_DEVICE(0x1bc7, 0x0021), /* Telit 3G ACM only composition */18691869+ .driver_info = SEND_ZERO_PACKET,18701870+ },18711871+ { USB_DEVICE(0x1bc7, 0x0023), /* Telit 3G ACM + ECM composition */18721872+ .driver_info = SEND_ZERO_PACKET,18731873+ },18741874+18681875 /* control interfaces without any protocol set */18691876 { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,18701877 USB_CDC_PROTO_NONE) },
+6-3
drivers/usb/core/generic.c
···143143 continue;144144 }145145146146- if (i > 0 && desc && is_audio(desc) && is_uac3_config(desc)) {147147- best = c;148148- break;146146+ if (i > 0 && desc && is_audio(desc)) {147147+ if (is_uac3_config(desc)) {148148+ best = c;149149+ break;150150+ }151151+ continue;149152 }150153151154 /* From the remaining configs, choose the first one whose
drivers/usb/storage/scsiglue.c
···235235 if (!(us->fflags & US_FL_NEEDS_CAP16))236236 sdev->try_rc_10_first = 1;237237238238- /* assume SPC3 or later devices support sense size > 18 */239239- if (sdev->scsi_level > SCSI_SPC_2)238238+ /*239239+ * assume SPC3 or later devices support sense size > 18240240+ * unless the US_FL_BAD_SENSE quirk is specified.241241+ */242242+ if (sdev->scsi_level > SCSI_SPC_2 &&243243+ !(us->fflags & US_FL_BAD_SENSE))240244 us->fflags |= US_FL_SANE_SENSE;241245242246 /*
+12
drivers/usb/storage/unusual_devs.h
···12661266 US_FL_FIX_CAPACITY ),1267126712681268/*12691269+ * Reported by Icenowy Zheng <icenowy@aosc.io>12701270+ * The SMI SM3350 USB-UFS bridge controller will enter a wrong state12711271+ * in which it does not process read/write commands if a long sense is12721272+ * requested, so force it to use an 18-byte sense.12731273+ */12741274+UNUSUAL_DEV( 0x090c, 0x3350, 0x0000, 0xffff,12751275+ "SMI",12761276+ "SM3350 UFS-to-USB-Mass-Storage bridge",12771277+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,12781278+ US_FL_BAD_SENSE ),12791279+12801280+/*12691281 * Reported by Paul Hartman <paul.hartman+linux@gmail.com>12701282 * This card reader returns "Illegal Request, Logical Block Address12711283 * Out of Range" for the first READ(10) after a new card is inserted.
fs/btrfs/ctree.c
···10161016 parent_start = parent->start;1017101710181018 /*10191019- * If we are COWing a node/leaf from the extent, chunk or device trees,10201020- * make sure that we do not finish block group creation of pending block10211021- * groups. We do this to avoid a deadlock.10191019+ * If we are COWing a node/leaf from the extent, chunk, device or free10201020+ * space trees, make sure that we do not finish block group creation of10211021+ * pending block groups. We do this to avoid a deadlock.10221022 * COWing can result in allocation of a new chunk, and flushing pending10231023 * block groups (btrfs_create_pending_block_groups()) can be triggered10241024 * when finishing allocation of a new chunk. Creation of a pending block10251025- * group modifies the extent, chunk and device trees, therefore we could10261026- * deadlock with ourselves since we are holding a lock on an extent10271027- * buffer that btrfs_create_pending_block_groups() may try to COW later.10251025+ * group modifies the extent, chunk, device and free space trees,10261026+ * therefore we could deadlock with ourselves since we are holding a10271027+ * lock on an extent buffer that btrfs_create_pending_block_groups() may10281028+ * try to COW later.10281029 */10291030 if (root == fs_info->extent_root ||10301031 root == fs_info->chunk_root ||10311031- root == fs_info->dev_root)10321032+ root == fs_info->dev_root ||10331033+ root == fs_info->free_space_root)10321034 trans->can_flush_pending_bgs = false;1033103510341036 cow = btrfs_alloc_tree_block(trans, root, parent_start,
+43-6
fs/btrfs/ioctl.c
···32213221 inode_lock_nested(inode2, I_MUTEX_CHILD);32223222}3223322332243224+static void btrfs_double_extent_unlock(struct inode *inode1, u64 loff1,32253225+ struct inode *inode2, u64 loff2, u64 len)32263226+{32273227+ unlock_extent(&BTRFS_I(inode1)->io_tree, loff1, loff1 + len - 1);32283228+ unlock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1);32293229+}32303230+32313231+static void btrfs_double_extent_lock(struct inode *inode1, u64 loff1,32323232+ struct inode *inode2, u64 loff2, u64 len)32333233+{32343234+ if (inode1 < inode2) {32353235+ swap(inode1, inode2);32363236+ swap(loff1, loff2);32373237+ } else if (inode1 == inode2 && loff2 < loff1) {32383238+ swap(loff1, loff2);32393239+ }32403240+ lock_extent(&BTRFS_I(inode1)->io_tree, loff1, loff1 + len - 1);32413241+ lock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1);32423242+}32433243+32243244static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 olen,32253245 struct inode *dst, u64 dst_loff)32263246{···32623242 return -EINVAL;3263324332643244 /*32653265- * Lock destination range to serialize with concurrent readpages().32453245+ * Lock destination range to serialize with concurrent readpages() and32463246+ * source range to serialize with relocation.32663247 */32673267- lock_extent(&BTRFS_I(dst)->io_tree, dst_loff, dst_loff + len - 1);32483248+ btrfs_double_extent_lock(src, loff, dst, dst_loff, len);32683249 ret = btrfs_clone(src, dst, loff, olen, len, dst_loff, 1);32693269- unlock_extent(&BTRFS_I(dst)->io_tree, dst_loff, dst_loff + len - 1);32503250+ btrfs_double_extent_unlock(src, loff, dst, dst_loff, len);3270325132713252 return ret;32723253}···39263905 len = ALIGN(src->i_size, bs) - off;3927390639283907 if (destoff > inode->i_size) {39083908+ const u64 wb_start = ALIGN_DOWN(inode->i_size, bs);39093909+39293910 ret = btrfs_cont_expand(inode, inode->i_size, destoff);39113911+ if (ret)39123912+ return ret;39133913+ /*39143914+ * We may have truncated the last block if the 
inode's size is39153915+ * not sector size aligned, so we need to wait for writeback to39163916+ * complete before proceeding further, otherwise we can race39173917+ * with cloning and attempt to increment a reference to an39183918+ * extent that no longer exists (writeback completed right after39193919+ * we found the previous extent covering eof and before we39203920+ * attempted to increment its reference count).39213921+ */39223922+ ret = btrfs_wait_ordered_range(inode, wb_start,39233923+ destoff - wb_start);39303924 if (ret)39313925 return ret;39323926 }3933392739343928 /*39353935- * Lock destination range to serialize with concurrent readpages().39293929+ * Lock destination range to serialize with concurrent readpages() and39303930+ * source range to serialize with relocation.39363931 */39373937- lock_extent(&BTRFS_I(inode)->io_tree, destoff, destoff + len - 1);39323932+ btrfs_double_extent_lock(src, off, inode, destoff, len);39383933 ret = btrfs_clone(src, inode, off, olen, len, destoff, 0);39393939- unlock_extent(&BTRFS_I(inode)->io_tree, destoff, destoff + len - 1);39343934+ btrfs_double_extent_unlock(src, off, inode, destoff, len);39403935 /*39413936 * Truncate page cache pages so that future reads will see the cloned39423937 * data immediately and not the previous data.
+12
fs/btrfs/volumes.c
···78257825 ret = -EUCLEAN;78267826 goto out;78277827 }78287828+78297829+ /* It's possible this device is a dummy for a seed device */78307830+ if (dev->disk_total_bytes == 0) {78317831+ dev = find_device(fs_info->fs_devices->seed, devid, NULL);78327832+ if (!dev) {78337833+ btrfs_err(fs_info, "failed to find seed devid %llu",78347834+ devid);78357835+ ret = -EUCLEAN;78367836+ goto out;78377837+ }78387838+ }78397839+78287840 if (physical_offset + physical_len > dev->disk_total_bytes) {78297841 btrfs_err(fs_info,78307842"dev extent devid %llu physical offset %llu len %llu is beyond device boundary %llu",
+1-4
fs/ceph/addr.c
···14941494 if (err < 0 || off >= i_size_read(inode)) {14951495 unlock_page(page);14961496 put_page(page);14971497- if (err == -ENOMEM)14981498- ret = VM_FAULT_OOM;14991499- else15001500- ret = VM_FAULT_SIGBUS;14971497+ ret = vmf_error(err);15011498 goto out_inline;15021499 }15031500 if (err < PAGE_SIZE)
+2-2
fs/ceph/super.c
···530530 seq_putc(m, ',');531531 pos = m->count;532532533533- ret = ceph_print_client_options(m, fsc->client);533533+ ret = ceph_print_client_options(m, fsc->client, false);534534 if (ret)535535 return ret;536536···640640 opt = NULL; /* fsc->client now owns this */641641642642 fsc->client->extra_mon_dispatch = extra_mon_dispatch;643643- fsc->client->osdc.abort_on_full = true;643643+ ceph_set_opt(fsc->client, ABORT_ON_FULL);644644645645 if (!fsopt->mds_namespace) {646646 ceph_monc_want_map(&fsc->client->monc, CEPH_SUB_MDSMAP,
fs/cifs/cifsglob.h
···14381438 int mid_state; /* wish this were enum but can not pass to wait_event */14391439 unsigned int mid_flags;14401440 __le16 command; /* smb command code */14411441+ unsigned int optype; /* operation type */14411442 bool large_buf:1; /* if valid response, is pointer to large buf */14421443 bool multiRsp:1; /* multiple trans2 responses for one request */14431444 bool multiEnd:1; /* both received */···15731572 kfree(param[i].node_name);15741573 }15751574 kfree(param);15751575+}15761576+15771577+static inline bool is_interrupt_error(int error)15781578+{15791579+ switch (error) {15801580+ case -EINTR:15811581+ case -ERESTARTSYS:15821582+ case -ERESTARTNOHAND:15831583+ case -ERESTARTNOINTR:15841584+ return true;15851585+ }15861586+ return false;15871587+}15881588+15891589+static inline bool is_retryable_error(int error)15901590+{15911591+ if (is_interrupt_error(error) || error == -EAGAIN)15921592+ return true;15931593+ return false;15761594}1577159515781596#define MID_FREE 0
fs/cifs/transport.c
···387387 if (rc < 0 && rc != -EINTR)388388 cifs_dbg(VFS, "Error %d sending data on socket to server\n",389389 rc);390390- else390390+ else if (rc > 0)391391 rc = 0;392392393393 return rc;···783783}784784785785static void786786-cifs_noop_callback(struct mid_q_entry *mid)786786+cifs_compound_callback(struct mid_q_entry *mid)787787{788788+ struct TCP_Server_Info *server = mid->server;789789+ unsigned int optype = mid->optype;790790+ unsigned int credits_received = 0;791791+792792+ if (mid->mid_state == MID_RESPONSE_RECEIVED) {793793+ if (mid->resp_buf)794794+ credits_received = server->ops->get_credits(mid);795795+ else796796+ cifs_dbg(FYI, "Bad state for cancelled MID\n");797797+ }798798+799799+ add_credits(server, credits_received, optype);800800+}801801+802802+static void803803+cifs_compound_last_callback(struct mid_q_entry *mid)804804+{805805+ cifs_compound_callback(mid);806806+ cifs_wake_up_task(mid);807807+}808808+809809+static void810810+cifs_cancelled_callback(struct mid_q_entry *mid)811811+{812812+ cifs_compound_callback(mid);813813+ DeleteMidQEntry(mid);788814}789815790816int···821795 int i, j, rc = 0;822796 int timeout, optype;823797 struct mid_q_entry *midQ[MAX_COMPOUND];824824- unsigned int credits = 0;798798+ bool cancelled_mid[MAX_COMPOUND] = {false};799799+ unsigned int credits[MAX_COMPOUND] = {0};825800 char *buf;826801827802 timeout = flags & CIFS_TIMEOUT_MASK;···840813 return -ENOENT;841814842815 /*843843- * Ensure that we do not send more than 50 overlapping requests844844- * to the same server. 
We may make this configurable later or845845- * use ses->maxReq.816816+ * Ensure we obtain 1 credit per request in the compound chain.817817+ * It can be optimized further by waiting for all the credits818818+ * at once but this can wait long enough if we don't have enough819819+ * credits due to some heavy operations in progress or the server820820+ * not granting us much, so a fallback to the current approach is821821+ * needed anyway.846822 */847847- rc = wait_for_free_request(ses->server, timeout, optype);848848- if (rc)849849- return rc;823823+ for (i = 0; i < num_rqst; i++) {824824+ rc = wait_for_free_request(ses->server, timeout, optype);825825+ if (rc) {826826+ /*827827+ * We haven't sent an SMB packet to the server yet but828828+ * we already obtained credits for i requests in the829829+ * compound chain - need to return those credits back830830+ * for future use. Note that we need to call add_credits831831+ * multiple times to match the way we obtained credits832832+ * in the first place and to account for in flight833833+ * requests correctly.834834+ */835835+ for (j = 0; j < i; j++)836836+ add_credits(ses->server, 1, optype);837837+ return rc;838838+ }839839+ credits[i] = 1;840840+ }850841851842 /*852843 * Make sure that we sign in the same order that we send on this socket···880835 for (j = 0; j < i; j++)881836 cifs_delete_mid(midQ[j]);882837 mutex_unlock(&ses->server->srv_mutex);838838+883839 /* Update # of requests on wire to server */884884- add_credits(ses->server, 1, optype);840840+ for (j = 0; j < num_rqst; j++)841841+ add_credits(ses->server, credits[j], optype);885842 return PTR_ERR(midQ[i]);886843 }887844888845 midQ[i]->mid_state = MID_REQUEST_SUBMITTED;846846+ midQ[i]->optype = optype;889847 /*890890- * We don't invoke the callback compounds unless it is the last891891- * request.848848+ * Invoke callback for every part of the compound chain849849+ * to calculate credits properly. 
Wake up this thread only when850850+ * the last element is received.892851 */893852 if (i < num_rqst - 1)894894- midQ[i]->callback = cifs_noop_callback;853853+ midQ[i]->callback = cifs_compound_callback;854854+ else855855+ midQ[i]->callback = cifs_compound_last_callback;895856 }896857 cifs_in_send_inc(ses->server);897858 rc = smb_send_rqst(ses->server, num_rqst, rqst, flags);···911860912861 mutex_unlock(&ses->server->srv_mutex);913862914914- if (rc < 0)863863+ if (rc < 0) {864864+ /* Sending failed for some reason - return credits back */865865+ for (i = 0; i < num_rqst; i++)866866+ add_credits(ses->server, credits[i], optype);915867 goto out;868868+ }869869+870870+ /*871871+ * At this point the request is passed to the network stack - we assume872872+ * that any credits taken from the server structure on the client have873873+ * been spent and we can't return them back. Once we receive responses874874+ * we will collect credits granted by the server in the mid callbacks875875+ * and add those credits to the server structure.876876+ */916877917878 /*918879 * Compounding is never used during session establish.···938875939876 for (i = 0; i < num_rqst; i++) {940877 rc = wait_for_response(ses->server, midQ[i]);941941- if (rc != 0) {878878+ if (rc != 0)879879+ break;880880+ }881881+ if (rc != 0) {882882+ for (; i < num_rqst; i++) {942883 cifs_dbg(VFS, "Cancelling wait for mid %llu cmd: %d\n",943884 midQ[i]->mid, le16_to_cpu(midQ[i]->command));944885 send_cancel(ses->server, &rqst[i], midQ[i]);945886 spin_lock(&GlobalMid_Lock);946887 if (midQ[i]->mid_state == MID_REQUEST_SUBMITTED) {947888 midQ[i]->mid_flags |= MID_WAIT_CANCELLED;948948- midQ[i]->callback = DeleteMidQEntry;949949- spin_unlock(&GlobalMid_Lock);950950- add_credits(ses->server, 1, optype);951951- return rc;889889+ midQ[i]->callback = cifs_cancelled_callback;890890+ cancelled_mid[i] = true;891891+ credits[i] = 0;952892 }953893 spin_unlock(&GlobalMid_Lock);954894 }955895 }956956-957957- for (i = 0; i < 
num_rqst; i++)958958- if (midQ[i]->resp_buf)959959- credits += ses->server->ops->get_credits(midQ[i]);960960- if (!credits)961961- credits = 1;962896963897 for (i = 0; i < num_rqst; i++) {964898 if (rc < 0)···963903964904 rc = cifs_sync_mid_result(midQ[i], ses->server);965905 if (rc != 0) {966966- add_credits(ses->server, credits, optype);967967- return rc;906906+ /* mark this mid as cancelled to not free it below */907907+ cancelled_mid[i] = true;908908+ goto out;968909 }969910970911 if (!midQ[i]->resp_buf ||···1012951 * This is prevented above by using a noop callback that will not1013952 * wake this thread except for the very last PDU.1014953 */10151015- for (i = 0; i < num_rqst; i++)10161016- cifs_delete_mid(midQ[i]);10171017- add_credits(ses->server, credits, optype);954954+ for (i = 0; i < num_rqst; i++) {955955+ if (!cancelled_mid[i])956956+ cifs_delete_mid(midQ[i]);957957+ }10189581019959 return rc;1020960}
+33-28
fs/hugetlbfs/inode.c
···383383 * truncation is indicated by end of range being LLONG_MAX384384 * In this case, we first scan the range and release found pages.385385 * After releasing pages, hugetlb_unreserve_pages cleans up region/reserv386386- * maps and global counts.386386+ * maps and global counts. Page faults can not race with truncation387387+ * in this routine. hugetlb_no_page() prevents page faults in the388388+ * truncated range. It checks i_size before allocation, and again after389389+ * with the page table lock for the page held. The same lock must be390390+ * acquired to unmap a page.387391 * hole punch is indicated if end is not LLONG_MAX388392 * In the hole punch case we scan the range and release found pages.389393 * Only when releasing a page is the associated region/reserv map390394 * deleted. The region/reserv map for ranges without associated391391- * pages are not modified.392392- *393393- * Callers of this routine must hold the i_mmap_rwsem in write mode to prevent394394- * races with page faults.395395- *395395+ * pages are not modified. Page faults can race with hole punch.396396+ * This is indicated if we find a mapped page.396397 * Note: If the passed end of range value is beyond the end of file, but397398 * not LLONG_MAX this routine still performs a hole punch operation.398399 */···423422424423 for (i = 0; i < pagevec_count(&pvec); ++i) {425424 struct page *page = pvec.pages[i];425425+ u32 hash;426426427427 index = page->index;428428+ hash = hugetlb_fault_mutex_hash(h, current->mm,429429+ &pseudo_vma,430430+ mapping, index, 0);431431+ mutex_lock(&hugetlb_fault_mutex_table[hash]);432432+428433 /*429429- * A mapped page is impossible as callers should unmap430430- * all references before calling. And, i_mmap_rwsem431431- * prevents the creation of additional mappings.434434+ * If page is mapped, it was faulted in after being435435+ * unmapped in caller. Unmap (again) now after taking436436+ * the fault mutex. 
The mutex will prevent faults437437+ * until we finish removing the page.438438+ *439439+ * This race can only happen in the hole punch case.440440+ * Getting here in a truncate operation is a bug.432441 */433433- VM_BUG_ON(page_mapped(page));442442+ if (unlikely(page_mapped(page))) {443443+ BUG_ON(truncate_op);444444+445445+ i_mmap_lock_write(mapping);446446+ hugetlb_vmdelete_list(&mapping->i_mmap,447447+ index * pages_per_huge_page(h),448448+ (index + 1) * pages_per_huge_page(h));449449+ i_mmap_unlock_write(mapping);450450+ }434451435452 lock_page(page);436453 /*···470451 }471452472453 unlock_page(page);454454+ mutex_unlock(&hugetlb_fault_mutex_table[hash]);473455 }474456 huge_pagevec_release(&pvec);475457 cond_resched();···482462483463static void hugetlbfs_evict_inode(struct inode *inode)484464{485485- struct address_space *mapping = inode->i_mapping;486465 struct resv_map *resv_map;487466488488- /*489489- * The vfs layer guarantees that there are no other users of this490490- * inode. Therefore, it would be safe to call remove_inode_hugepages491491- * without holding i_mmap_rwsem. We acquire and hold here to be492492- * consistent with other callers. 
Since there will be no contention493493- * on the semaphore, overhead is negligible.494494- */495495- i_mmap_lock_write(mapping);496467 remove_inode_hugepages(inode, 0, LLONG_MAX);497497- i_mmap_unlock_write(mapping);498498-499468 resv_map = (struct resv_map *)inode->i_mapping->private_data;500469 /* root inode doesn't have the resv_map, so we should check it */501470 if (resv_map)···505496 i_mmap_lock_write(mapping);506497 if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root))507498 hugetlb_vmdelete_list(&mapping->i_mmap, pgoff, 0);508508- remove_inode_hugepages(inode, offset, LLONG_MAX);509499 i_mmap_unlock_write(mapping);500500+ remove_inode_hugepages(inode, offset, LLONG_MAX);510501 return 0;511502}512503···540531 hugetlb_vmdelete_list(&mapping->i_mmap,541532 hole_start >> PAGE_SHIFT,542533 hole_end >> PAGE_SHIFT);543543- remove_inode_hugepages(inode, hole_start, hole_end);544534 i_mmap_unlock_write(mapping);535535+ remove_inode_hugepages(inode, hole_start, hole_end);545536 inode_unlock(inode);546537 }547538···624615 /* addr is the offset within the file (zero based) */625616 addr = index * hpage_size;626617627627- /*628628- * fault mutex taken here, protects against fault path629629- * and hole punch. inode_lock previously taken protects630630- * against truncation.631631- */618618+ /* mutex taken here, fault path and hole punch */632619 hash = hugetlb_fault_mutex_hash(h, mm, &pseudo_vma, mapping,633620 index, addr);634621 mutex_lock(&hugetlb_fault_mutex_table[hash]);
+2-1
fs/sysfs/dir.c
···4343 kuid_t uid;4444 kgid_t gid;45454646- BUG_ON(!kobj);4646+ if (WARN_ON(!kobj))4747+ return -EINVAL;47484849 if (kobj->parent)4950 parent = kobj->parent->sd;
···112112 kgid_t gid;113113 int error;114114115115- BUG_ON(!kobj || (!update && !kobj->sd));115115+ if (WARN_ON(!kobj || (!update && !kobj->sd)))116116+ return -EINVAL;116117117118 /* Updates may happen before the object has been instantiated */118119 if (unlikely(update && !kobj->sd))
+2-1
fs/sysfs/symlink.c
···2323{2424 struct kernfs_node *kn, *target = NULL;25252626- BUG_ON(!name || !parent);2626+ if (WARN_ON(!name || !parent))2727+ return -EINVAL;27282829 /*2930 * We don't own @target_kobj and it may be removed at any time.
+7
include/drm/drm_dp_helper.h
···13651365 * to 16 bits. So will give a constant value (0x8000) for compatability.13661366 */13671367 DP_DPCD_QUIRK_CONSTANT_N,13681368+ /**13691369+ * @DP_DPCD_QUIRK_NO_PSR:13701370+ *13711371+ * The device does not support PSR even if it reports that it does, or13721372+ * the driver still needs to implement proper handling for such a device.13731373+ */13741374+ DP_DPCD_QUIRK_NO_PSR,13681375};1369137613701377/**
include/linux/ceph/libceph.h
···3535#define CEPH_OPT_NOMSGAUTH (1<<4) /* don't require msg signing feat */3636#define CEPH_OPT_TCP_NODELAY (1<<5) /* TCP_NODELAY on TCP sockets */3737#define CEPH_OPT_NOMSGSIGN (1<<6) /* don't sign msgs */3838+#define CEPH_OPT_ABORT_ON_FULL (1<<7) /* abort w/ ENOSPC when full */38393940#define CEPH_OPT_DEFAULT (CEPH_OPT_TCP_NODELAY)4041···5453 unsigned long osd_request_timeout; /* jiffies */55545655 /*5757- * any type that can't be simply compared or doesn't need need5656+ * any type that can't be simply compared or doesn't need5857 * to be compared should go beyond this point,5958 * ceph_compare_options() should be updated accordingly6059 */···282281 const char *dev_name, const char *dev_name_end,283282 int (*parse_extra_token)(char *c, void *private),284283 void *private);285285-int ceph_print_client_options(struct seq_file *m, struct ceph_client *client);284284+int ceph_print_client_options(struct seq_file *m, struct ceph_client *client,285285+ bool show_all);286286extern void ceph_destroy_options(struct ceph_options *opt);287287extern int ceph_compare_options(struct ceph_options *new_opt,288288 struct ceph_client *client);
-1
include/linux/ceph/osd_client.h
···354354 struct rb_root linger_map_checks;355355 atomic_t num_requests;356356 atomic_t num_homeless;357357- bool abort_on_full; /* abort w/ ENOSPC when full */358357 int abort_err;359358 struct delayed_work timeout_work;360359 struct delayed_work osds_timeout_work;
+1-1
include/linux/compiler-gcc.h
···6868 */6969#define uninitialized_var(x) x = x70707171-#ifdef RETPOLINE7171+#ifdef CONFIG_RETPOLINE7272#define __noretpoline __attribute__((__indirect_branch__("keep")))7373#endif7474
-9
include/linux/dma-mapping.h
···717717}718718#endif719719720720-/*721721- * Please always use dma_alloc_coherent instead as it already zeroes the memory!722722- */723723-static inline void *dma_zalloc_coherent(struct device *dev, size_t size,724724- dma_addr_t *dma_handle, gfp_t flag)725725-{726726- return dma_alloc_coherent(dev, size, dma_handle, flag);727727-}728728-729720static inline int dma_get_cache_alignment(void)730721{731722#ifdef ARCH_DMA_MINALIGN
+6
include/linux/mmzone.h
···520520 PGDAT_RECLAIM_LOCKED, /* prevents concurrent reclaim */521521};522522523523+enum zone_flags {524524+ ZONE_BOOSTED_WATERMARK, /* zone recently boosted watermarks.525525+ * Cleared when kswapd is woken.526526+ */527527+};528528+523529static inline unsigned long zone_managed_pages(struct zone *zone)524530{525531 return (unsigned long)atomic_long_read(&zone->managed_pages);
include/linux/reset.h
···3232struct reset_control *of_reset_control_array_get(struct device_node *np,3333 bool shared, bool optional);34343535+int reset_control_get_count(struct device *dev);3636+3537#else36383739static inline int reset_control_reset(struct reset_control *rstc)···9997 return optional ? NULL : ERR_PTR(-ENOTSUPP);10098}10199100100+static inline int reset_control_get_count(struct device *dev)101101+{102102+ return -ENOENT;103103+}104104+102105#endif /* CONFIG_RESET_CONTROLLER */103106104107static inline int __must_check device_reset(struct device *dev)···145138 *146139 * Returns a struct reset_control or IS_ERR() condition containing errno.147140 * This function is intended for use with reset-controls which are shared148148- * between hardware-blocks.141141+ * between hardware blocks.149142 *150143 * When a reset-control is shared, the behavior of reset_control_assert /151144 * deassert is changed, the reset-core will keep track of a deassert_count···194187}195188196189/**197197- * of_reset_control_get_shared - Lookup and obtain an shared reference190190+ * of_reset_control_get_shared - Lookup and obtain a shared reference198191 * to a reset controller.199192 * @node: device to be reset by the controller200193 * @id: reset line name···236229}237230238231/**239239- * of_reset_control_get_shared_by_index - Lookup and obtain an shared232232+ * of_reset_control_get_shared_by_index - Lookup and obtain a shared240233 * reference to a reset controller241234 * by index.242235 * @node: device to be reset by the controller···329322330323/**331324 * devm_reset_control_get_shared_by_index - resource managed332332- * reset_control_get_shared325325+ * reset_control_get_shared333326 * @dev: device to be reset by the controller334327 * @index: index of the reset controller335328 *
+1-1
include/linux/sched.h
···995995 /* cg_list protected by css_set_lock and tsk->alloc_lock: */996996 struct list_head cg_list;997997#endif998998-#ifdef CONFIG_RESCTRL998998+#ifdef CONFIG_X86_RESCTRL999999 u32 closid;10001000 u32 rmid;10011001#endif
+2
include/uapi/linux/audit.h
···400400/* do not define AUDIT_ARCH_PPCLE since it is not supported by audit */401401#define AUDIT_ARCH_PPC64 (EM_PPC64|__AUDIT_ARCH_64BIT)402402#define AUDIT_ARCH_PPC64LE (EM_PPC64|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE)403403+#define AUDIT_ARCH_RISCV32 (EM_RISCV|__AUDIT_ARCH_LE)404404+#define AUDIT_ARCH_RISCV64 (EM_RISCV|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE)403405#define AUDIT_ARCH_S390 (EM_S390)404406#define AUDIT_ARCH_S390X (EM_S390|__AUDIT_ARCH_64BIT)405407#define AUDIT_ARCH_SH (EM_SH)
+12-2
kernel/fork.c
···217217 memset(s->addr, 0, THREAD_SIZE);218218219219 tsk->stack_vm_area = s;220220+ tsk->stack = s->addr;220221 return s->addr;221222 }222223···1834183318351834 posix_cpu_timers_init(p);1836183518371837- p->start_time = ktime_get_ns();18381838- p->real_start_time = ktime_get_boot_ns();18391836 p->io_context = NULL;18401837 audit_set_context(p, NULL);18411838 cgroup_fork(p);···19981999 retval = cgroup_can_fork(p);19992000 if (retval)20002001 goto bad_fork_free_pid;20022002+20032003+ /*20042004+ * From this point on we must avoid any synchronous user-space20052005+ * communication until we take the tasklist-lock. In particular, we do20062006+ * not want user-space to be able to predict the process start-time by20072007+ * stalling fork(2) after we recorded the start_time but before it is20082008+ * visible to the system.20092009+ */20102010+20112011+ p->start_time = ktime_get_ns();20122012+ p->real_start_time = ktime_get_boot_ns();2001201320022014 /*20032015 * Make it visible to the rest of the system, but dont wake it up yet.
+2-1
kernel/sys.c
···12071207/*12081208 * Work around broken programs that cannot handle "Linux 3.0".12091209 * Instead we map 3.x to 2.6.40+x, so e.g. 3.0 would be 2.6.4012101210- * And we map 4.x to 2.6.60+x, so 4.0 would be 2.6.60.12101210+ * And we map 4.x and later versions to 2.6.60+x, so 4.0/5.0/6.0/... would be12111211+ * 2.6.60.12111212 */12121213static int override_release(char __user *release, size_t len)12131214{
+24-57
mm/hugetlb.c
···32383238 struct page *ptepage;32393239 unsigned long addr;32403240 int cow;32413241- struct address_space *mapping = vma->vm_file->f_mapping;32423241 struct hstate *h = hstate_vma(vma);32433242 unsigned long sz = huge_page_size(h);32443243 struct mmu_notifier_range range;···32493250 mmu_notifier_range_init(&range, src, vma->vm_start,32503251 vma->vm_end);32513252 mmu_notifier_invalidate_range_start(&range);32523252- } else {32533253- /*32543254- * For shared mappings i_mmap_rwsem must be held to call32553255- * huge_pte_alloc, otherwise the returned ptep could go32563256- * away if part of a shared pmd and another thread calls32573257- * huge_pmd_unshare.32583258- */32593259- i_mmap_lock_read(mapping);32603253 }3261325432623255 for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {32633256 spinlock_t *src_ptl, *dst_ptl;32643264-32653257 src_pte = huge_pte_offset(src, addr, sz);32663258 if (!src_pte)32673259 continue;32683268-32693260 dst_pte = huge_pte_alloc(dst, addr, sz);32703261 if (!dst_pte) {32713262 ret = -ENOMEM;···3326333733273338 if (cow)33283339 mmu_notifier_invalidate_range_end(&range);33293329- else33303330- i_mmap_unlock_read(mapping);3331334033323341 return ret;33333342}···37423755 }3743375637443757 /*37453745- * We can not race with truncation due to holding i_mmap_rwsem.37463746- * Check once here for faults beyond end of file.37583758+ * Use page lock to guard against racing truncation37593759+ * before we get page_table_lock.37473760 */37483748- size = i_size_read(mapping->host) >> huge_page_shift(h);37493749- if (idx >= size)37503750- goto out;37513751-37523761retry:37533762 page = find_lock_page(mapping, idx);37543763 if (!page) {37643764+ size = i_size_read(mapping->host) >> huge_page_shift(h);37653765+ if (idx >= size)37663766+ goto out;37673767+37553768 /*37563769 * Check for page in userfault range37573770 */···37713784 };3772378537733786 /*37743774- * hugetlb_fault_mutex and i_mmap_rwsem must be37753775- * dropped before handling 
userfault. Reacquire37763776- * after handling fault to make calling code simpler.37873787+ * hugetlb_fault_mutex must be dropped before37883788+ * handling userfault. Reacquire after handling37893789+ * fault to make calling code simpler.37773790 */37783791 hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping,37793792 idx, haddr);37803793 mutex_unlock(&hugetlb_fault_mutex_table[hash]);37813781- i_mmap_unlock_read(mapping);37823782-37833794 ret = handle_userfault(&vmf, VM_UFFD_MISSING);37843784-37853785- i_mmap_lock_read(mapping);37863795 mutex_lock(&hugetlb_fault_mutex_table[hash]);37873796 goto out;37883797 }···38373854 }3838385538393856 ptl = huge_pte_lock(h, mm, ptep);38573857+ size = i_size_read(mapping->host) >> huge_page_shift(h);38583858+ if (idx >= size)38593859+ goto backout;3840386038413861 ret = 0;38423862 if (!huge_pte_none(huge_ptep_get(ptep)))···3926394039273941 ptep = huge_pte_offset(mm, haddr, huge_page_size(h));39283942 if (ptep) {39293929- /*39303930- * Since we hold no locks, ptep could be stale. That is39313931- * OK as we are only making decisions based on content and39323932- * not actually modifying content here.39333933- */39343943 entry = huge_ptep_get(ptep);39353944 if (unlikely(is_hugetlb_entry_migration(entry))) {39363945 migration_entry_wait_huge(vma, mm, ptep);···39333952 } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))39343953 return VM_FAULT_HWPOISON_LARGE |39353954 VM_FAULT_SET_HINDEX(hstate_index(h));39553955+ } else {39563956+ ptep = huge_pte_alloc(mm, haddr, huge_page_size(h));39573957+ if (!ptep)39583958+ return VM_FAULT_OOM;39363959 }3937396039383938- /*39393939- * Acquire i_mmap_rwsem before calling huge_pte_alloc and hold39403940- * until finished with ptep. 
This serves two purposes:39413941- * 1) It prevents huge_pmd_unshare from being called elsewhere39423942- * and making the ptep no longer valid.39433943- * 2) It synchronizes us with file truncation.39443944- *39453945- * ptep could have already be assigned via huge_pte_offset. That39463946- * is OK, as huge_pte_alloc will return the same value unless39473947- * something changed.39483948- */39493961 mapping = vma->vm_file->f_mapping;39503950- i_mmap_lock_read(mapping);39513951- ptep = huge_pte_alloc(mm, haddr, huge_page_size(h));39523952- if (!ptep) {39533953- i_mmap_unlock_read(mapping);39543954- return VM_FAULT_OOM;39553955- }39623962+ idx = vma_hugecache_offset(h, vma, haddr);3956396339573964 /*39583965 * Serialize hugepage allocation and instantiation, so that we don't39593966 * get spurious allocation failures if two CPUs race to instantiate39603967 * the same page in the page cache.39613968 */39623962- idx = vma_hugecache_offset(h, vma, haddr);39633969 hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping, idx, haddr);39643970 mutex_lock(&hugetlb_fault_mutex_table[hash]);39653971···40344066 }40354067out_mutex:40364068 mutex_unlock(&hugetlb_fault_mutex_table[hash]);40374037- i_mmap_unlock_read(mapping);40384069 /*40394070 * Generally it's safe to hold refcount during waiting page lock. But40404071 * here we just wait to defer the next page fault to avoid busy loop and···46384671 * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()46394672 * and returns the corresponding pte. While this is not necessary for the46404673 * !shared pmd case because we can allocate the pmd later as well, it makes the46414641- * code much cleaner.46424642- *46434643- * This routine must be called with i_mmap_rwsem held in at least read mode.46444644- * For hugetlbfs, this prevents removal of any page table entries associated46454645- * with the address space. 
This is important as we are setting up sharing46464646- * based on existing page table entries (mappings).46744674+ * code much cleaner. pmd allocation is essential for the shared case because46754675+ * pud has to be populated inside the same i_mmap_rwsem section - otherwise46764676+ * racing tasks could either miss the sharing (see huge_pte_offset) or select a46774677+ * bad pmd for sharing.46474678 */46484679pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)46494680{···46584693 if (!vma_shareable(vma, addr))46594694 return (pte_t *)pmd_alloc(mm, pud, addr);4660469546964696+ i_mmap_lock_write(mapping);46614697 vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {46624698 if (svma == vma)46634699 continue;···46884722 spin_unlock(ptl);46894723out:46904724 pte = (pte_t *)pmd_alloc(mm, pud, addr);47254725+ i_mmap_unlock_write(mapping);46914726 return pte;46924727}46934728···46994732 * indicated by page_count > 1, unmap is achieved by clearing pud and47004733 * decrementing the ref count. If count == 1, the pte page is not shared.47014734 *47024702- * Called with page table lock held and i_mmap_rwsem held in write mode.47354735+ * called with page table lock held.47034736 *47044737 * returns: 1 successfully unmapped a shared pte page47054738 * 0 the underlying pte page is not shared, or it is the last user
+44-23
mm/kasan/common.c
···298298 return;299299 }300300301301- cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);302302-303301 *flags |= SLAB_KASAN;304302}305303···347349}348350349351/*350350- * Since it's desirable to only call object contructors once during slab351351- * allocation, we preassign tags to all such objects. Also preassign tags for352352- * SLAB_TYPESAFE_BY_RCU slabs to avoid use-after-free reports.353353- * For SLAB allocator we can't preassign tags randomly since the freelist is354354- * stored as an array of indexes instead of a linked list. Assign tags based355355- * on objects indexes, so that objects that are next to each other get356356- * different tags.357357- * After a tag is assigned, the object always gets allocated with the same tag.358358- * The reason is that we can't change tags for objects with constructors on359359- * reallocation (even for non-SLAB_TYPESAFE_BY_RCU), because the constructor360360- * code can save the pointer to the object somewhere (e.g. in the object361361- * itself). Then if we retag it, the old saved pointer will become invalid.352352+ * This function assigns a tag to an object considering the following:353353+ * 1. A cache might have a constructor, which might save a pointer to a slab354354+ * object somewhere (e.g. in the object itself). We preassign a tag for355355+ * each object in caches with constructors during slab creation and reuse356356+ * the same tag each time a particular object is allocated.357357+ * 2. A cache might be SLAB_TYPESAFE_BY_RCU, which means objects can be358358+ * accessed after being freed. We preassign tags for objects in these359359+ * caches as well.360360+ * 3. For SLAB allocator we can't preassign tags randomly since the freelist361361+ * is stored as an array of indexes instead of a linked list. 
Assign tags362362+ * based on objects indexes, so that objects that are next to each other363363+ * get different tags.362364 */363363-static u8 assign_tag(struct kmem_cache *cache, const void *object, bool new)365365+static u8 assign_tag(struct kmem_cache *cache, const void *object,366366+ bool init, bool krealloc)364367{365365- if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU))366366- return new ? KASAN_TAG_KERNEL : random_tag();368368+ /* Reuse the same tag for krealloc'ed objects. */369369+ if (krealloc)370370+ return get_tag(object);367371372372+ /*373373+ * If the cache neither has a constructor nor has SLAB_TYPESAFE_BY_RCU374374+ * set, assign a tag when the object is being allocated (init == false).375375+ */376376+ if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU))377377+ return init ? KASAN_TAG_KERNEL : random_tag();378378+379379+ /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */368380#ifdef CONFIG_SLAB381381+ /* For SLAB assign tags based on the object index in the freelist. */369382 return (u8)obj_to_index(cache, virt_to_page(object), (void *)object);370383#else371371- return new ? random_tag() : get_tag(object);384384+ /*385385+ * For SLUB assign a random tag during slab creation, otherwise reuse386386+ * the already assigned tag.387387+ */388388+ return init ? 
random_tag() : get_tag(object);372389#endif373390}374391···399386 __memset(alloc_info, 0, sizeof(*alloc_info));400387401388 if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))402402- object = set_tag(object, assign_tag(cache, object, true));389389+ object = set_tag(object,390390+ assign_tag(cache, object, true, false));403391404392 return (void *)object;405393}···466452 return __kasan_slab_free(cache, object, ip, true);467453}468454469469-void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,470470- size_t size, gfp_t flags)455455+static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,456456+ size_t size, gfp_t flags, bool krealloc)471457{472458 unsigned long redzone_start;473459 unsigned long redzone_end;···485471 KASAN_SHADOW_SCALE_SIZE);486472487473 if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))488488- tag = assign_tag(cache, object, false);474474+ tag = assign_tag(cache, object, false, krealloc);489475490476 /* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */491477 kasan_unpoison_shadow(set_tag(object, tag), size);···496482 set_track(&get_alloc_info(cache, object)->alloc_track, flags);497483498484 return set_tag(object, tag);485485+}486486+487487+void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,488488+ size_t size, gfp_t flags)489489+{490490+ return __kasan_kmalloc(cache, object, size, flags, false);499491}500492EXPORT_SYMBOL(kasan_kmalloc);501493···542522 if (unlikely(!PageSlab(page)))543523 return kasan_kmalloc_large(object, size, flags);544524 else545545- return kasan_kmalloc(page->slab_cache, object, size, flags);525525+ return __kasan_kmalloc(page->slab_cache, object, size,526526+ flags, true);546527}547528548529void kasan_poison_kfree(void *ptr, unsigned long ip)
+2-14
mm/memory-failure.c
···966966 enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;967967 struct address_space *mapping;968968 LIST_HEAD(tokill);969969- bool unmap_success = true;969969+ bool unmap_success;970970 int kill = 1, forcekill;971971 struct page *hpage = *hpagep;972972 bool mlocked = PageMlocked(hpage);···10281028 if (kill)10291029 collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED);1030103010311031- if (!PageHuge(hpage)) {10321032- unmap_success = try_to_unmap(hpage, ttu);10331033- } else if (mapping) {10341034- /*10351035- * For hugetlb pages, try_to_unmap could potentially call10361036- * huge_pmd_unshare. Because of this, take semaphore in10371037- * write mode here and set TTU_RMAP_LOCKED to indicate we10381038- * have taken the lock at this higer level.10391039- */10401040- i_mmap_lock_write(mapping);10411041- unmap_success = try_to_unmap(hpage, ttu|TTU_RMAP_LOCKED);10421042- i_mmap_unlock_write(mapping);10431043- }10311031+ unmap_success = try_to_unmap(hpage, ttu);10441032 if (!unmap_success)10451033 pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n",10461034 pfn, page_mapcount(hpage));
+24-2
mm/memory.c
···29942994 struct vm_area_struct *vma = vmf->vma;29952995 vm_fault_t ret;2996299629972997+ /*29982998+ * Preallocate pte before we take page_lock because this might lead to29992999+ * deadlocks for memcg reclaim which waits for pages under writeback:30003000+ * lock_page(A)30013001+ * SetPageWriteback(A)30023002+ * unlock_page(A)30033003+ * lock_page(B)30043004+ * lock_page(B)30053005+ * pte_alloc_pne30063006+ * shrink_page_list30073007+ * wait_on_page_writeback(A)30083008+ * SetPageWriteback(B)30093009+ * unlock_page(B)30103010+ * # flush A, B to clear the writeback30113011+ */30123012+ if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) {30133013+ vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm);30143014+ if (!vmf->prealloc_pte)30153015+ return VM_FAULT_OOM;30163016+ smp_wmb(); /* See comment in __pte_alloc() */30173017+ }30183018+29973019 ret = vma->vm_ops->fault(vmf);29983020 if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY |29993021 VM_FAULT_DONE_COW)))···40994077 goto out;4100407841014079 if (range) {41024102- range->start = address & PAGE_MASK;41034103- range->end = range->start + PAGE_SIZE;40804080+ mmu_notifier_range_init(range, mm, address & PAGE_MASK,40814081+ (address & PAGE_MASK) + PAGE_SIZE);41044082 mmu_notifier_invalidate_range_start(range);41054083 }41064084 ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
+1-12
mm/migrate.c
···13241324 goto put_anon;1325132513261326 if (page_mapped(hpage)) {13271327- struct address_space *mapping = page_mapping(hpage);13281328-13291329- /*13301330- * try_to_unmap could potentially call huge_pmd_unshare.13311331- * Because of this, take semaphore in write mode here and13321332- * set TTU_RMAP_LOCKED to let lower levels know we have13331333- * taken the lock.13341334- */13351335- i_mmap_lock_write(mapping);13361327 try_to_unmap(hpage,13371337- TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS|13381338- TTU_RMAP_LOCKED);13391339- i_mmap_unlock_write(mapping);13281328+ TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);13401329 page_was_mapped = 1;13411330 }13421331
+7-1
mm/page_alloc.c
···22142214 */22152215 boost_watermark(zone);22162216 if (alloc_flags & ALLOC_KSWAPD)22172217- wakeup_kswapd(zone, 0, 0, zone_idx(zone));22172217+ set_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);2218221822192219 /* We are not allowed to try stealing from the whole block */22202220 if (!whole_block)···31023102 local_irq_restore(flags);3103310331043104out:31053105+ /* Separate test+clear to avoid unnecessary atomics */31063106+ if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) {31073107+ clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);31083108+ wakeup_kswapd(zone, 0, 0, zone_idx(zone));31093109+ }31103110+31053111 VM_BUG_ON_PAGE(page && bad_range(zone, page), page);31063112 return page;31073113
+2-6
mm/rmap.c
···2525 * page->flags PG_locked (lock_page)2626 * hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share)2727 * mapping->i_mmap_rwsem2828- * hugetlb_fault_mutex (hugetlbfs specific page fault mutex)2928 * anon_vma->rwsem3029 * mm->page_table_lock or pte_lock3130 * zone_lru_lock (in mark_page_accessed, isolate_lru_page)···13711372 * Note that the page can not be free in this function as call of13721373 * try_to_unmap() must hold a reference on the page.13731374 */13741374- mmu_notifier_range_init(&range, vma->vm_mm, vma->vm_start,13751375- min(vma->vm_end, vma->vm_start +13751375+ mmu_notifier_range_init(&range, vma->vm_mm, address,13761376+ min(vma->vm_end, address +13761377 (PAGE_SIZE << compound_order(page))));13771378 if (PageHuge(page)) {13781379 /*13791380 * If sharing is possible, start and end will be adjusted13801381 * accordingly.13811381- *13821382- * If called for a huge page, caller must hold i_mmap_rwsem13831383- * in write mode as it is possible to call huge_pmd_unshare.13841382 */13851383 adjust_range_if_pmd_sharing_possible(vma, &range.start,13861384 &range.end);
···247247/*248248 * Validates that the given object is:249249 * - not bogus address250250- * - known-safe heap or stack object250250+ * - fully contained by stack (or stack frame, when available)251251+ * - fully within SLAB object (or object whitelist area, when available)251252 * - not in kernel text252253 */253254void __check_object_size(const void *ptr, unsigned long n, bool to_user)···262261263262 /* Check for invalid addresses. */264263 check_bogus_address((const unsigned long)ptr, n, to_user);265265-266266- /* Check for bad heap object. */267267- check_heap_object(ptr, n, to_user);268264269265 /* Check for bad stack object. */270266 switch (check_stack_object(ptr, n)) {···279281 default:280282 usercopy_abort("process stack", NULL, to_user, 0, n);281283 }284284+285285+ /* Check for bad heap object. */286286+ check_heap_object(ptr, n, to_user);282287283288 /* Check for object in kernel to avoid text exposure. */284289 check_kernel_text_object((const unsigned long)ptr, n, to_user);
+2-9
mm/userfaultfd.c
···267267 VM_BUG_ON(dst_addr & ~huge_page_mask(h));268268269269 /*270270- * Serialize via i_mmap_rwsem and hugetlb_fault_mutex.271271- * i_mmap_rwsem ensures the dst_pte remains valid even272272- * in the case of shared pmds. fault mutex prevents273273- * races with other faulting threads.270270+ * Serialize via hugetlb_fault_mutex274271 */275275- mapping = dst_vma->vm_file->f_mapping;276276- i_mmap_lock_read(mapping);277272 idx = linear_page_index(dst_vma, dst_addr);273273+ mapping = dst_vma->vm_file->f_mapping;278274 hash = hugetlb_fault_mutex_hash(h, dst_mm, dst_vma, mapping,279275 idx, dst_addr);280276 mutex_lock(&hugetlb_fault_mutex_table[hash]);···279283 dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));280284 if (!dst_pte) {281285 mutex_unlock(&hugetlb_fault_mutex_table[hash]);282282- i_mmap_unlock_read(mapping);283286 goto out_unlock;284287 }285288···286291 dst_pteval = huge_ptep_get(dst_pte);287292 if (!huge_pte_none(dst_pteval)) {288293 mutex_unlock(&hugetlb_fault_mutex_table[hash]);289289- i_mmap_unlock_read(mapping);290294 goto out_unlock;291295 }292296···293299 dst_addr, src_addr, &page);294300295301 mutex_unlock(&hugetlb_fault_mutex_table[hash]);296296- i_mmap_unlock_read(mapping);297302 vm_alloc_shared = vm_shared;298303299304 cond_resched();
+1-1
mm/util.c
···478478 return true;479479 if (PageHuge(page))480480 return false;481481- for (i = 0; i < hpage_nr_pages(page); i++) {481481+ for (i = 0; i < (1 << compound_order(page)); i++) {482482 if (atomic_read(&page[i]._mapcount) >= 0)483483 return true;484484 }
···6969- x = (T)vmalloc(E1);7070+ x = (T)vzalloc(E1);7171|7272-- x = dma_alloc_coherent(E2,E1,E3,E4);7373-+ x = dma_zalloc_coherent(E2,E1,E3,E4);7474-|7575-- x = (T *)dma_alloc_coherent(E2,E1,E3,E4);7676-+ x = dma_zalloc_coherent(E2,E1,E3,E4);7777-|7878-- x = (T)dma_alloc_coherent(E2,E1,E3,E4);7979-+ x = (T)dma_zalloc_coherent(E2,E1,E3,E4);8080-|8172- x = kmalloc_node(E1,E2,E3);8273+ x = kzalloc_node(E1,E2,E3);8374|···216225x << r2.x;217226@@218227219219-msg="WARNING: dma_zalloc_coherent should be used for %s, instead of dma_alloc_coherent/memset" % (x)228228+msg="WARNING: dma_alloc_coherent use in %s already zeroes out memory, so memset is not needed" % (x)220229coccilib.report.print_report(p[0], msg)221230222231//-----------------------------------------------------------------
···4747 /* We use the PCI APIs for now until the generic one gets fixed4848 * enough or until we get some macio-specific versions4949 */5050- r->space = dma_zalloc_coherent(&macio_get_pci_dev(i2sdev->macio)->dev,5151- r->size, &r->bus_addr, GFP_KERNEL);5050+ r->space = dma_alloc_coherent(&macio_get_pci_dev(i2sdev->macio)->dev,5151+ r->size, &r->bus_addr, GFP_KERNEL);5252 if (!r->space)5353 return -ENOMEM;5454
+3
sound/pci/cs46xx/dsp_spos.c
···903903 struct dsp_spos_instance * ins = chip->dsp_spos_instance;904904 int i;905905906906+ if (!ins)907907+ return 0;908908+906909 snd_info_free_entry(ins->proc_sym_info_entry);907910 ins->proc_sym_info_entry = NULL;908911
···11+/*22+ * Copyright (C) 2012 ARM Ltd.33+ * Copyright (C) 2015 Regents of the University of California44+ *55+ * This program is free software; you can redistribute it and/or modify66+ * it under the terms of the GNU General Public License version 2 as77+ * published by the Free Software Foundation.88+ *99+ * This program is distributed in the hope that it will be useful,1010+ * but WITHOUT ANY WARRANTY; without even the implied warranty of1111+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the1212+ * GNU General Public License for more details.1313+ *1414+ * You should have received a copy of the GNU General Public License1515+ * along with this program. If not, see <http://www.gnu.org/licenses/>.1616+ */1717+1818+#ifndef _UAPI_ASM_RISCV_BITSPERLONG_H1919+#define _UAPI_ASM_RISCV_BITSPERLONG_H2020+2121+#define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8)2222+2323+#include <asm-generic/bitsperlong.h>2424+2525+#endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */
···412412 int irq_seq;413413} drm_i915_irq_wait_t;414414415415+/*416416+ * Different modes of per-process Graphics Translation Table,417417+ * see I915_PARAM_HAS_ALIASING_PPGTT418418+ */419419+#define I915_GEM_PPGTT_NONE 0420420+#define I915_GEM_PPGTT_ALIASING 1421421+#define I915_GEM_PPGTT_FULL 2422422+415423/* Ioctl to query kernel params:416424 */417425#define I915_PARAM_IRQ_ACTIVE 1
+8-52
tools/include/uapi/linux/fs.h
···1414#include <linux/ioctl.h>1515#include <linux/types.h>16161717+/* Use of MS_* flags within the kernel is restricted to core mount(2) code. */1818+#if !defined(__KERNEL__)1919+#include <linux/mount.h>2020+#endif2121+1722/*1823 * It's silly to have NR_OPEN bigger than NR_FILE, but you can change1924 * the file limit at runtime and only root can increase the per-process···105100106101107102#define NR_FILE 8192 /* this can well be larger on a larger system */108108-109109-110110-/*111111- * These are the fs-independent mount-flags: up to 32 flags are supported112112- */113113-#define MS_RDONLY 1 /* Mount read-only */114114-#define MS_NOSUID 2 /* Ignore suid and sgid bits */115115-#define MS_NODEV 4 /* Disallow access to device special files */116116-#define MS_NOEXEC 8 /* Disallow program execution */117117-#define MS_SYNCHRONOUS 16 /* Writes are synced at once */118118-#define MS_REMOUNT 32 /* Alter flags of a mounted FS */119119-#define MS_MANDLOCK 64 /* Allow mandatory locks on an FS */120120-#define MS_DIRSYNC 128 /* Directory modifications are synchronous */121121-#define MS_NOATIME 1024 /* Do not update access times. */122122-#define MS_NODIRATIME 2048 /* Do not update directory access times */123123-#define MS_BIND 4096124124-#define MS_MOVE 8192125125-#define MS_REC 16384126126-#define MS_VERBOSE 32768 /* War is peace. Verbosity is silence.127127- MS_VERBOSE is deprecated. */128128-#define MS_SILENT 32768129129-#define MS_POSIXACL (1<<16) /* VFS does not apply the umask */130130-#define MS_UNBINDABLE (1<<17) /* change to unbindable */131131-#define MS_PRIVATE (1<<18) /* change to private */132132-#define MS_SLAVE (1<<19) /* change to slave */133133-#define MS_SHARED (1<<20) /* change to shared */134134-#define MS_RELATIME (1<<21) /* Update atime relative to mtime/ctime. 
*/135135-#define MS_KERNMOUNT (1<<22) /* this is a kern_mount call */136136-#define MS_I_VERSION (1<<23) /* Update inode I_version field */137137-#define MS_STRICTATIME (1<<24) /* Always perform atime updates */138138-#define MS_LAZYTIME (1<<25) /* Update the on-disk [acm]times lazily */139139-140140-/* These sb flags are internal to the kernel */141141-#define MS_SUBMOUNT (1<<26)142142-#define MS_NOREMOTELOCK (1<<27)143143-#define MS_NOSEC (1<<28)144144-#define MS_BORN (1<<29)145145-#define MS_ACTIVE (1<<30)146146-#define MS_NOUSER (1<<31)147147-148148-/*149149- * Superblock flags that can be altered by MS_REMOUNT150150- */151151-#define MS_RMT_MASK (MS_RDONLY|MS_SYNCHRONOUS|MS_MANDLOCK|MS_I_VERSION|\152152- MS_LAZYTIME)153153-154154-/*155155- * Old magic mount flag and mask156156- */157157-#define MS_MGC_VAL 0xC0ED0000158158-#define MS_MGC_MSK 0xffff0000159103160104/*161105 * Structure for FS_IOC_FSGETXATTR[A] and FS_IOC_FSSETXATTR.···223269#define FS_POLICY_FLAGS_PAD_16 0x02224270#define FS_POLICY_FLAGS_PAD_32 0x03225271#define FS_POLICY_FLAGS_PAD_MASK 0x03226226-#define FS_POLICY_FLAGS_VALID 0x03272272+#define FS_POLICY_FLAG_DIRECT_KEY 0x04 /* use master key directly */273273+#define FS_POLICY_FLAGS_VALID 0x07227274228275/* Encryption algorithms */229276#define FS_ENCRYPTION_MODE_INVALID 0···236281#define FS_ENCRYPTION_MODE_AES_128_CTS 6237282#define FS_ENCRYPTION_MODE_SPECK128_256_XTS 7 /* Removed, do not use. */238283#define FS_ENCRYPTION_MODE_SPECK128_256_CTS 8 /* Removed, do not use. */284284+#define FS_ENCRYPTION_MODE_ADIANTUM 9239285240286struct fscrypt_policy {241287 __u8 version;
···492492 };493493};494494495495+/* for KVM_CLEAR_DIRTY_LOG */496496+struct kvm_clear_dirty_log {497497+ __u32 slot;498498+ __u32 num_pages;499499+ __u64 first_page;500500+ union {501501+ void __user *dirty_bitmap; /* one bit per page */502502+ __u64 padding2;503503+ };504504+};505505+495506/* for KVM_SET_SIGNAL_MASK */496507struct kvm_signal_mask {497508 __u32 len;···986975#define KVM_CAP_HYPERV_ENLIGHTENED_VMCS 163987976#define KVM_CAP_EXCEPTION_PAYLOAD 164988977#define KVM_CAP_ARM_VM_IPA_SIZE 165978978+#define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166979979+#define KVM_CAP_HYPERV_CPUID 167989980990981#ifdef KVM_CAP_IRQ_ROUTING991982···14331420/* Available with KVM_CAP_NESTED_STATE */14341421#define KVM_GET_NESTED_STATE _IOWR(KVMIO, 0xbe, struct kvm_nested_state)14351422#define KVM_SET_NESTED_STATE _IOW(KVMIO, 0xbf, struct kvm_nested_state)14231423+14241424+/* Available with KVM_CAP_MANUAL_DIRTY_LOG_PROTECT */14251425+#define KVM_CLEAR_DIRTY_LOG _IOWR(KVMIO, 0xc0, struct kvm_clear_dirty_log)14261426+14271427+/* Available with KVM_CAP_HYPERV_CPUID */14281428+#define KVM_GET_SUPPORTED_HV_CPUID _IOWR(KVMIO, 0xc1, struct kvm_cpuid2)1436142914371430/* Secure Encrypted Virtualization command */14381431enum sev_cmd_id {
+58
tools/include/uapi/linux/mount.h
···11+#ifndef _UAPI_LINUX_MOUNT_H22+#define _UAPI_LINUX_MOUNT_H33+44+/*55+ * These are the fs-independent mount-flags: up to 32 flags are supported66+ *77+ * Usage of these is restricted within the kernel to core mount(2) code and88+ * callers of sys_mount() only. Filesystems should be using the SB_*99+ * equivalent instead.1010+ */1111+#define MS_RDONLY 1 /* Mount read-only */1212+#define MS_NOSUID 2 /* Ignore suid and sgid bits */1313+#define MS_NODEV 4 /* Disallow access to device special files */1414+#define MS_NOEXEC 8 /* Disallow program execution */1515+#define MS_SYNCHRONOUS 16 /* Writes are synced at once */1616+#define MS_REMOUNT 32 /* Alter flags of a mounted FS */1717+#define MS_MANDLOCK 64 /* Allow mandatory locks on an FS */1818+#define MS_DIRSYNC 128 /* Directory modifications are synchronous */1919+#define MS_NOATIME 1024 /* Do not update access times. */2020+#define MS_NODIRATIME 2048 /* Do not update directory access times */2121+#define MS_BIND 40962222+#define MS_MOVE 81922323+#define MS_REC 163842424+#define MS_VERBOSE 32768 /* War is peace. Verbosity is silence.2525+ MS_VERBOSE is deprecated. */2626+#define MS_SILENT 327682727+#define MS_POSIXACL (1<<16) /* VFS does not apply the umask */2828+#define MS_UNBINDABLE (1<<17) /* change to unbindable */2929+#define MS_PRIVATE (1<<18) /* change to private */3030+#define MS_SLAVE (1<<19) /* change to slave */3131+#define MS_SHARED (1<<20) /* change to shared */3232+#define MS_RELATIME (1<<21) /* Update atime relative to mtime/ctime. 
*/3333+#define MS_KERNMOUNT (1<<22) /* this is a kern_mount call */3434+#define MS_I_VERSION (1<<23) /* Update inode I_version field */3535+#define MS_STRICTATIME (1<<24) /* Always perform atime updates */3636+#define MS_LAZYTIME (1<<25) /* Update the on-disk [acm]times lazily */3737+3838+/* These sb flags are internal to the kernel */3939+#define MS_SUBMOUNT (1<<26)4040+#define MS_NOREMOTELOCK (1<<27)4141+#define MS_NOSEC (1<<28)4242+#define MS_BORN (1<<29)4343+#define MS_ACTIVE (1<<30)4444+#define MS_NOUSER (1<<31)4545+4646+/*4747+ * Superblock flags that can be altered by MS_REMOUNT4848+ */4949+#define MS_RMT_MASK (MS_RDONLY|MS_SYNCHRONOUS|MS_MANDLOCK|MS_I_VERSION|\5050+ MS_LAZYTIME)5151+5252+/*5353+ * Old magic mount flag and mask5454+ */5555+#define MS_MGC_VAL 0xC0ED00005656+#define MS_MGC_MSK 0xffff00005757+5858+#endif /* _UAPI_LINUX_MOUNT_H */
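MS_RMT_MASK above defines which superblock flags a remount may alter; masking the caller's flag word against it is how that filtering works. A small self-contained check (redefining just the handful of MS_* values used, so the sketch stays independent of the header):

```c
#include <assert.h>

/* A few MS_* values copied from the header above. */
#define MS_RDONLY      1
#define MS_NOEXEC      8
#define MS_SYNCHRONOUS 16
#define MS_MANDLOCK    64
#define MS_I_VERSION   (1<<23)
#define MS_LAZYTIME    (1<<25)

/* Superblock flags that can be altered by MS_REMOUNT. */
#define MS_RMT_MASK (MS_RDONLY|MS_SYNCHRONOUS|MS_MANDLOCK|MS_I_VERSION|\
		     MS_LAZYTIME)

/* Keep only the remount-alterable bits of a requested flag set. */
static unsigned long remount_flags(unsigned long requested)
{
	return requested & MS_RMT_MASK;
}
```

Anything outside the mask (MS_NOEXEC, MS_NODEV, ...) is simply ignored on remount rather than rejected.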
···1111 * device configuration.1212 */13131414+#include <linux/vhost_types.h>1415#include <linux/types.h>1515-#include <linux/compiler.h>1616#include <linux/ioctl.h>1717-#include <linux/virtio_config.h>1818-#include <linux/virtio_ring.h>1919-2020-struct vhost_vring_state {2121- unsigned int index;2222- unsigned int num;2323-};2424-2525-struct vhost_vring_file {2626- unsigned int index;2727- int fd; /* Pass -1 to unbind from file. */2828-2929-};3030-3131-struct vhost_vring_addr {3232- unsigned int index;3333- /* Option flags. */3434- unsigned int flags;3535- /* Flag values: */3636- /* Whether log address is valid. If set enables logging. */3737-#define VHOST_VRING_F_LOG 03838-3939- /* Start of array of descriptors (virtually contiguous) */4040- __u64 desc_user_addr;4141- /* Used structure address. Must be 32 bit aligned */4242- __u64 used_user_addr;4343- /* Available structure address. Must be 16 bit aligned */4444- __u64 avail_user_addr;4545- /* Logging support. */4646- /* Log writes to used structure, at offset calculated from specified4747- * address. Address must be 32 bit aligned. 
*/4848- __u64 log_guest_addr;4949-};5050-5151-/* no alignment requirement */5252-struct vhost_iotlb_msg {5353- __u64 iova;5454- __u64 size;5555- __u64 uaddr;5656-#define VHOST_ACCESS_RO 0x15757-#define VHOST_ACCESS_WO 0x25858-#define VHOST_ACCESS_RW 0x35959- __u8 perm;6060-#define VHOST_IOTLB_MISS 16161-#define VHOST_IOTLB_UPDATE 26262-#define VHOST_IOTLB_INVALIDATE 36363-#define VHOST_IOTLB_ACCESS_FAIL 46464- __u8 type;6565-};6666-6767-#define VHOST_IOTLB_MSG 0x16868-#define VHOST_IOTLB_MSG_V2 0x26969-7070-struct vhost_msg {7171- int type;7272- union {7373- struct vhost_iotlb_msg iotlb;7474- __u8 padding[64];7575- };7676-};7777-7878-struct vhost_msg_v2 {7979- __u32 type;8080- __u32 reserved;8181- union {8282- struct vhost_iotlb_msg iotlb;8383- __u8 padding[64];8484- };8585-};8686-8787-struct vhost_memory_region {8888- __u64 guest_phys_addr;8989- __u64 memory_size; /* bytes */9090- __u64 userspace_addr;9191- __u64 flags_padding; /* No flags are currently specified. */9292-};9393-9494-/* All region addresses and sizes must be 4K aligned. */9595-#define VHOST_PAGE_SIZE 0x10009696-9797-struct vhost_memory {9898- __u32 nregions;9999- __u32 padding;100100- struct vhost_memory_region regions[0];101101-};1021710318/* ioctls */10419···101186 * device. This can be used to stop the ring (e.g. for migration). */102187#define VHOST_NET_SET_BACKEND _IOW(VHOST_VIRTIO, 0x30, struct vhost_vring_file)103188104104-/* Feature bits */105105-/* Log all write descriptors. Can be changed while device is active. */106106-#define VHOST_F_LOG_ALL 26107107-/* vhost-net should add virtio_net_hdr for RX, and strip for TX packets. */108108-#define VHOST_NET_F_VIRTIO_NET_HDR 27109109-110110-/* VHOST_SCSI specific definitions */111111-112112-/*113113- * Used by QEMU userspace to ensure a consistent vhost-scsi ABI.114114- *115115- * ABI Rev 0: July 2012 version starting point for v3.6-rc merge candidate +116116- * RFC-v2 vhost-scsi userspace. 
Add GET_ABI_VERSION ioctl usage117117- * ABI Rev 1: January 2013. Ignore vhost_tpgt filed in struct vhost_scsi_target.118118- * All the targets under vhost_wwpn can be seen and used by guset.119119- */120120-121121-#define VHOST_SCSI_ABI_VERSION 1122122-123123-struct vhost_scsi_target {124124- int abi_version;125125- char vhost_wwpn[224]; /* TRANSPORT_IQN_LEN */126126- unsigned short vhost_tpgt;127127- unsigned short reserved;128128-};189189+/* VHOST_SCSI specific defines */129190130191#define VHOST_SCSI_SET_ENDPOINT _IOW(VHOST_VIRTIO, 0x40, struct vhost_scsi_target)131192#define VHOST_SCSI_CLEAR_ENDPOINT _IOW(VHOST_VIRTIO, 0x41, struct vhost_scsi_target)
+2-2
tools/lib/traceevent/event-parse-api.c
···194194}195195196196/**197197- * tep_is_file_bigendian - get if the file is in big endian order197197+ * tep_file_bigendian - get if the file is in big endian order198198 * @pevent: a handle to the tep_handle199199 *200200 * This returns if the file is in big endian order201201 * If @pevent is NULL, 0 is returned.202202 */203203-int tep_is_file_bigendian(struct tep_handle *pevent)203203+int tep_file_bigendian(struct tep_handle *pevent)204204{205205 if(pevent)206206 return pevent->file_bigendian;
+2-2
tools/lib/traceevent/event-parse-local.h
···77#ifndef _PARSE_EVENTS_INT_H88#define _PARSE_EVENTS_INT_H991010-struct cmdline;1010+struct tep_cmdline;1111struct cmdline_list;1212struct func_map;1313struct func_list;···3636 int long_size;3737 int page_size;38383939- struct cmdline *cmdlines;3939+ struct tep_cmdline *cmdlines;4040 struct cmdline_list *cmdlist;4141 int cmdline_count;4242
+82-47
tools/lib/traceevent/event-parse.c
···124124 return calloc(1, sizeof(struct tep_print_arg));125125}126126127127-struct cmdline {127127+struct tep_cmdline {128128 char *comm;129129 int pid;130130};131131132132static int cmdline_cmp(const void *a, const void *b)133133{134134- const struct cmdline *ca = a;135135- const struct cmdline *cb = b;134134+ const struct tep_cmdline *ca = a;135135+ const struct tep_cmdline *cb = b;136136137137 if (ca->pid < cb->pid)138138 return -1;···152152{153153 struct cmdline_list *cmdlist = pevent->cmdlist;154154 struct cmdline_list *item;155155- struct cmdline *cmdlines;155155+ struct tep_cmdline *cmdlines;156156 int i;157157158158 cmdlines = malloc(sizeof(*cmdlines) * pevent->cmdline_count);···179179180180static const char *find_cmdline(struct tep_handle *pevent, int pid)181181{182182- const struct cmdline *comm;183183- struct cmdline key;182182+ const struct tep_cmdline *comm;183183+ struct tep_cmdline key;184184185185 if (!pid)186186 return "<idle>";···208208 */209209int tep_pid_is_registered(struct tep_handle *pevent, int pid)210210{211211- const struct cmdline *comm;212212- struct cmdline key;211211+ const struct tep_cmdline *comm;212212+ struct tep_cmdline key;213213214214 if (!pid)215215 return 1;···232232 * we must add this pid. 
This is much slower than when cmdlines233233 * are added before the array is initialized.234234 */235235-static int add_new_comm(struct tep_handle *pevent, const char *comm, int pid)235235+static int add_new_comm(struct tep_handle *pevent,236236+ const char *comm, int pid, bool override)236237{237237- struct cmdline *cmdlines = pevent->cmdlines;238238- const struct cmdline *cmdline;239239- struct cmdline key;238238+ struct tep_cmdline *cmdlines = pevent->cmdlines;239239+ struct tep_cmdline *cmdline;240240+ struct tep_cmdline key;241241+ char *new_comm;240242241243 if (!pid)242244 return 0;···249247 cmdline = bsearch(&key, pevent->cmdlines, pevent->cmdline_count,250248 sizeof(*pevent->cmdlines), cmdline_cmp);251249 if (cmdline) {252252- errno = EEXIST;253253- return -1;250250+ if (!override) {251251+ errno = EEXIST;252252+ return -1;253253+ }254254+ new_comm = strdup(comm);255255+ if (!new_comm) {256256+ errno = ENOMEM;257257+ return -1;258258+ }259259+ free(cmdline->comm);260260+ cmdline->comm = new_comm;261261+262262+ return 0;254263 }255264256265 cmdlines = realloc(cmdlines, sizeof(*cmdlines) * (pevent->cmdline_count + 1));···288275 return 0;289276}290277291291-/**292292- * tep_register_comm - register a pid / comm mapping293293- * @pevent: handle for the pevent294294- * @comm: the command line to register295295- * @pid: the pid to map the command line to296296- *297297- * This adds a mapping to search for command line names with298298- * a given pid. 
The comm is duplicated.299299- */300300-int tep_register_comm(struct tep_handle *pevent, const char *comm, int pid)278278+static int _tep_register_comm(struct tep_handle *pevent,279279+ const char *comm, int pid, bool override)301280{302281 struct cmdline_list *item;303282304283 if (pevent->cmdlines)305305- return add_new_comm(pevent, comm, pid);284284+ return add_new_comm(pevent, comm, pid, override);306285307286 item = malloc(sizeof(*item));308287 if (!item)···315310 pevent->cmdline_count++;316311317312 return 0;313313+}314314+315315+/**316316+ * tep_register_comm - register a pid / comm mapping317317+ * @pevent: handle for the pevent318318+ * @comm: the command line to register319319+ * @pid: the pid to map the command line to320320+ *321321+ * This adds a mapping to search for command line names with322322+ * a given pid. The comm is duplicated. If a command with the same pid323323+ * already exist, -1 is returned and errno is set to EEXIST324324+ */325325+int tep_register_comm(struct tep_handle *pevent, const char *comm, int pid)326326+{327327+ return _tep_register_comm(pevent, comm, pid, false);328328+}329329+330330+/**331331+ * tep_override_comm - register a pid / comm mapping332332+ * @pevent: handle for the pevent333333+ * @comm: the command line to register334334+ * @pid: the pid to map the command line to335335+ *336336+ * This adds a mapping to search for command line names with337337+ * a given pid. The comm is duplicated. 
If a command with the same pid338338+ * already exists, the command string is updated with the new one339339+ */340340+int tep_override_comm(struct tep_handle *pevent, const char *comm, int pid)341341+{342342+ if (!pevent->cmdlines && cmdline_init(pevent)) {343343+ errno = ENOMEM;344344+ return -1;345345+ }346346+ return _tep_register_comm(pevent, comm, pid, true);318347}319348320349int tep_register_trace_clock(struct tep_handle *pevent, const char *trace_clock)···52665227}5267522852685229/**52695269- * tep_data_event_from_type - find the event by a given type52705270- * @pevent: a handle to the pevent52715271- * @type: the type of the event.52725272- *52735273- * This returns the event form a given @type;52745274- */52755275-struct tep_event *tep_data_event_from_type(struct tep_handle *pevent, int type)52765276-{52775277- return tep_find_event(pevent, type);52785278-}52795279-52805280-/**52815230 * tep_data_pid - parse the PID from record52825231 * @pevent: a handle to the pevent52835232 * @rec: the record to parse···53195292 return comm;53205293}5321529453225322-static struct cmdline *53235323-pid_from_cmdlist(struct tep_handle *pevent, const char *comm, struct cmdline *next)52955295+static struct tep_cmdline *52965296+pid_from_cmdlist(struct tep_handle *pevent, const char *comm, struct tep_cmdline *next)53245297{53255298 struct cmdline_list *cmdlist = (struct cmdline_list *)next;53265299···53325305 while (cmdlist && strcmp(cmdlist->comm, comm) != 0)53335306 cmdlist = cmdlist->next;5334530753355335- return (struct cmdline *)cmdlist;53085308+ return (struct tep_cmdline *)cmdlist;53365309}5337531053385311/**···53485321 * next pid.53495322 * Also, it does a linear search, so it may be slow.53505323 */53515351-struct cmdline *tep_data_pid_from_comm(struct tep_handle *pevent, const char *comm,53525352- struct cmdline *next)53245324+struct tep_cmdline *tep_data_pid_from_comm(struct tep_handle *pevent, const char *comm,53255325+ struct tep_cmdline 
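The renamed `struct tep_cmdline` above lives in a pid-sorted array searched with bsearch(), and the new override path replaces the stored comm in place when the pid already exists. A compact userspace sketch of the sorted lookup (names simplified; the real library also handles a not-yet-sorted linked list and a pid of 0):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct tep_cmdline {
	const char *comm;
	int pid;
};

/* Same pid ordering the library uses for its sorted array. */
static int cmdline_cmp(const void *a, const void *b)
{
	const struct tep_cmdline *ca = a;
	const struct tep_cmdline *cb = b;

	if (ca->pid < cb->pid)
		return -1;
	if (ca->pid > cb->pid)
		return 1;
	return 0;
}

/* Look up a comm by pid in a pid-sorted array, as find_cmdline() does. */
static const char *find_comm(struct tep_cmdline *map, int count, int pid)
{
	struct tep_cmdline key = { .comm = NULL, .pid = pid };
	struct tep_cmdline *found;

	found = bsearch(&key, map, count, sizeof(*map), cmdline_cmp);
	return found ? found->comm : NULL;
}
```

With the array sorted by pid, the override in add_new_comm() is just "bsearch, then swap the comm string" rather than an insert.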
*next)53535326{53545354- struct cmdline *cmdline;53275327+ struct tep_cmdline *cmdline;5355532853565329 /*53575330 * If the cmdlines have not been converted yet, then use···53905363 * Returns the pid for a give cmdline. If @cmdline is NULL, then53915364 * -1 is returned.53925365 */53935393-int tep_cmdline_pid(struct tep_handle *pevent, struct cmdline *cmdline)53665366+int tep_cmdline_pid(struct tep_handle *pevent, struct tep_cmdline *cmdline)53945367{53955368 struct cmdline_list *cmdlist = (struct cmdline_list *)cmdline;53965369···66206593 *66216594 * If @id is >= 0, then it is used to find the event.66226595 * else @sys_name and @event_name are used.65966596+ *65976597+ * Returns:65986598+ * TEP_REGISTER_SUCCESS_OVERWRITE if an existing handler is overwritten65996599+ * TEP_REGISTER_SUCCESS if a new handler is registered successfully66006600+ * negative TEP_ERRNO_... in case of an error66016601+ *66236602 */66246603int tep_register_event_handler(struct tep_handle *pevent, int id,66256604 const char *sys_name, const char *event_name,···6643661066446611 event->handler = func;66456612 event->context = context;66466646- return 0;66136613+ return TEP_REGISTER_SUCCESS_OVERWRITE;6647661466486615 not_found:66496616 /* Save for later use. */···66736640 pevent->handlers = handle;66746641 handle->context = context;6675664266766676- return -1;66436643+ return TEP_REGISTER_SUCCESS;66776644}6678664566796646static int handle_matches(struct event_handler *handler, int id,···67566723{67576724 struct tep_handle *pevent = calloc(1, sizeof(*pevent));6758672567596759- if (pevent)67266726+ if (pevent) {67606727 pevent->ref_count = 1;67286728+ pevent->host_bigendian = tep_host_bigendian();67296729+ }6761673067626731 return pevent;67636732}
···389389 * We can only use the structure if file is of the same390390 * endianness.391391 */392392- if (tep_is_file_bigendian(event->pevent) ==392392+ if (tep_file_bigendian(event->pevent) ==393393 tep_is_host_bigendian(event->pevent)) {394394395395 trace_seq_printf(s, "%u q%u%s %s%s %spae %snxe %swp%s%s%s",
+12-5
tools/lib/traceevent/trace-seq.c
···100100 * @fmt: printf format string101101 *102102 * It returns 0 if the trace oversizes the buffer's free103103- * space, 1 otherwise.103103+ * space, the number of characters printed, or a negative104104+ * value in case of an error.104105 *105106 * The tracer may use either sequence operations or its own106107 * copy to user routines. To simplify formating of a trace···130129 goto try_again;131130 }132131133133- s->len += ret;132132+ if (ret > 0)133133+ s->len += ret;134134135135- return 1;135135+ return ret;136136}137137138138/**···141139 * @s: trace sequence descriptor142140 * @fmt: printf format string143141 *142142+ * It returns 0 if the trace oversizes the buffer's free143143+ * space, the number of characters printed, or a negative144144+ * value in case of an error.145145+ *144146 * The tracer may use either sequence operations or its own145147 * copy to user routines. To simplify formating of a trace146148 * trace_seq_printf is used to store strings into a special···169163 goto try_again;170164 }171165172172- s->len += ret;166166+ if (ret > 0)167167+ s->len += ret;173168174174- return len;169169+ return ret;175170}176171177172/**
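trace_seq_printf's `try_again` path above reissues vsnprintf after growing the buffer whenever the formatted string does not fit. The same grow-and-retry loop in plain userspace C (a sketch; the trace_seq structure here is simplified and the function name is illustrative):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct seq {
	char *buffer;
	int size;
	int len;
};

/* Append formatted text, doubling the buffer until vsnprintf fits,
 * mirroring the try_again loop in trace_seq_printf(). Returns the
 * number of characters printed, or a negative value on error. */
static int seq_printf(struct seq *s, const char *fmt, ...)
{
	va_list ap;
	char *tmp;
	int ret;

	for (;;) {
		va_start(ap, fmt);
		ret = vsnprintf(s->buffer + s->len, s->size - s->len, fmt, ap);
		va_end(ap);

		if (ret < 0)
			return ret;              /* formatting error */
		if (ret < s->size - s->len)
			break;                   /* it fit, including the NUL */
		tmp = realloc(s->buffer, s->size * 2);
		if (!tmp)
			return -1;               /* grow failed */
		s->buffer = tmp;
		s->size *= 2;                    /* grow and retry */
	}
	s->len += ret;
	return ret;
}
```

Note the `if (ret > 0)` guard the patch adds has the same purpose as checking `ret < 0` here: a vsnprintf error must not corrupt the stored length.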
···11+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
22+#
33+# system call numbers and entry vectors for powerpc
44+#
55+# The format is:
66+# <number> <abi> <name> <entry point> <compat entry point>
77+#
88+# The <abi> can be common, spu, nospu, 64, or 32 for this file.
99+#
1010+0 nospu restart_syscall sys_restart_syscall
1111+1 nospu exit sys_exit
1212+2 nospu fork ppc_fork
1313+3 common read sys_read
1414+4 common write sys_write
1515+5 common open sys_open compat_sys_open
1616+6 common close sys_close
1717+7 common waitpid sys_waitpid
1818+8 common creat sys_creat
1919+9 common link sys_link
2020+10 common unlink sys_unlink
2121+11 nospu execve sys_execve compat_sys_execve
2222+12 common chdir sys_chdir
2323+13 common time sys_time compat_sys_time
2424+14 common mknod sys_mknod
2525+15 common chmod sys_chmod
2626+16 common lchown sys_lchown
2727+17 common break sys_ni_syscall
2828+18 32 oldstat sys_stat sys_ni_syscall
2929+18 64 oldstat sys_ni_syscall
3030+18 spu oldstat sys_ni_syscall
3131+19 common lseek sys_lseek compat_sys_lseek
3232+20 common getpid sys_getpid
3333+21 nospu mount sys_mount compat_sys_mount
3434+22 32 umount sys_oldumount
3535+22 64 umount sys_ni_syscall
3636+22 spu umount sys_ni_syscall
3737+23 common setuid sys_setuid
3838+24 common getuid sys_getuid
3939+25 common stime sys_stime compat_sys_stime
4040+26 nospu ptrace sys_ptrace compat_sys_ptrace
4141+27 common alarm sys_alarm
4242+28 32 oldfstat sys_fstat sys_ni_syscall
4343+28 64 oldfstat sys_ni_syscall
4444+28 spu oldfstat sys_ni_syscall
4545+29 nospu pause sys_pause
4646+30 nospu utime sys_utime compat_sys_utime
4747+31 common stty sys_ni_syscall
4848+32 common gtty sys_ni_syscall
4949+33 common access sys_access
5050+34 common nice sys_nice
5151+35 common ftime sys_ni_syscall
5252+36 common sync sys_sync
5353+37 common kill sys_kill
5454+38 common rename sys_rename
5555+39 common mkdir sys_mkdir
5656+40 common rmdir sys_rmdir
5757+41 common dup sys_dup
5858+42 common pipe sys_pipe
5959+43 common times sys_times compat_sys_times
6060+44 common prof sys_ni_syscall
6161+45 common brk sys_brk
6262+46 common setgid sys_setgid
6363+47 common getgid sys_getgid
6464+48 nospu signal sys_signal
6565+49 common geteuid sys_geteuid
6666+50 common getegid sys_getegid
6767+51 nospu acct sys_acct
6868+52 nospu umount2 sys_umount
6969+53 common lock sys_ni_syscall
7070+54 common ioctl sys_ioctl compat_sys_ioctl
7171+55 common fcntl sys_fcntl compat_sys_fcntl
7272+56 common mpx sys_ni_syscall
7373+57 common setpgid sys_setpgid
7474+58 common ulimit sys_ni_syscall
7575+59 32 oldolduname sys_olduname
7676+59 64 oldolduname sys_ni_syscall
7777+59 spu oldolduname sys_ni_syscall
7878+60 common umask sys_umask
7979+61 common chroot sys_chroot
8080+62 nospu ustat sys_ustat compat_sys_ustat
8181+63 common dup2 sys_dup2
8282+64 common getppid sys_getppid
8383+65 common getpgrp sys_getpgrp
8484+66 common setsid sys_setsid
8585+67 32 sigaction sys_sigaction compat_sys_sigaction
8686+67 64 sigaction sys_ni_syscall
8787+67 spu sigaction sys_ni_syscall
8888+68 common sgetmask sys_sgetmask
8989+69 common ssetmask sys_ssetmask
9090+70 common setreuid sys_setreuid
9191+71 common setregid sys_setregid
9292+72 32 sigsuspend sys_sigsuspend
9393+72 64 sigsuspend sys_ni_syscall
9494+72 spu sigsuspend sys_ni_syscall
9595+73 32 sigpending sys_sigpending compat_sys_sigpending
9696+73 64 sigpending sys_ni_syscall
9797+73 spu sigpending sys_ni_syscall
9898+74 common sethostname sys_sethostname
9999+75 common setrlimit sys_setrlimit compat_sys_setrlimit
100100+76 32 getrlimit sys_old_getrlimit compat_sys_old_getrlimit
101101+76 64 getrlimit sys_ni_syscall
102102+76 spu getrlimit sys_ni_syscall
103103+77 common getrusage sys_getrusage compat_sys_getrusage
104104+78 common gettimeofday sys_gettimeofday compat_sys_gettimeofday
105105+79 common settimeofday sys_settimeofday compat_sys_settimeofday
106106+80 common getgroups sys_getgroups
107107+81 common setgroups sys_setgroups
108108+82 32 select ppc_select sys_ni_syscall
109109+82 64 select sys_ni_syscall
110110+82 spu select sys_ni_syscall
111111+83 common symlink sys_symlink
112112+84 32 oldlstat sys_lstat sys_ni_syscall
113113+84 64 oldlstat sys_ni_syscall
114114+84 spu oldlstat sys_ni_syscall
115115+85 common readlink sys_readlink
116116+86 nospu uselib sys_uselib
117117+87 nospu swapon sys_swapon
118118+88 nospu reboot sys_reboot
119119+89 32 readdir sys_old_readdir compat_sys_old_readdir
120120+89 64 readdir sys_ni_syscall
121121+89 spu readdir sys_ni_syscall
122122+90 common mmap sys_mmap
123123+91 common munmap sys_munmap
124124+92 common truncate sys_truncate compat_sys_truncate
125125+93 common ftruncate sys_ftruncate compat_sys_ftruncate
126126+94 common fchmod sys_fchmod
127127+95 common fchown sys_fchown
128128+96 common getpriority sys_getpriority
129129+97 common setpriority sys_setpriority
130130+98 common profil sys_ni_syscall
131131+99 nospu statfs sys_statfs compat_sys_statfs
132132+100 nospu fstatfs sys_fstatfs compat_sys_fstatfs
133133+101 common ioperm sys_ni_syscall
134134+102 common socketcall sys_socketcall compat_sys_socketcall
135135+103 common syslog sys_syslog
136136+104 common setitimer sys_setitimer compat_sys_setitimer
137137+105 common getitimer sys_getitimer compat_sys_getitimer
138138+106 common stat sys_newstat compat_sys_newstat
139139+107 common lstat sys_newlstat compat_sys_newlstat
140140+108 common fstat sys_newfstat compat_sys_newfstat
141141+109 32 olduname sys_uname
142142+109 64 olduname sys_ni_syscall
143143+109 spu olduname sys_ni_syscall
144144+110 common iopl sys_ni_syscall
145145+111 common vhangup sys_vhangup
146146+112 common idle sys_ni_syscall
147147+113 common vm86 sys_ni_syscall
148148+114 common wait4 sys_wait4 compat_sys_wait4
149149+115 nospu swapoff sys_swapoff
150150+116 common sysinfo sys_sysinfo compat_sys_sysinfo
151151+117 nospu ipc sys_ipc compat_sys_ipc
152152+118 common fsync sys_fsync
153153+119 32 sigreturn sys_sigreturn compat_sys_sigreturn
154154+119 64 sigreturn sys_ni_syscall
155155+119 spu sigreturn sys_ni_syscall
156156+120 nospu clone ppc_clone
157157+121 common setdomainname sys_setdomainname
158158+122 common uname sys_newuname
159159+123 common modify_ldt sys_ni_syscall
160160+124 common adjtimex sys_adjtimex compat_sys_adjtimex
161161+125 common mprotect sys_mprotect
162162+126 32 sigprocmask sys_sigprocmask compat_sys_sigprocmask
163163+126 64 sigprocmask sys_ni_syscall
164164+126 spu sigprocmask sys_ni_syscall
165165+127 common create_module sys_ni_syscall
166166+128 nospu init_module sys_init_module
167167+129 nospu delete_module sys_delete_module
168168+130 common get_kernel_syms sys_ni_syscall
169169+131 nospu quotactl sys_quotactl
170170+132 common getpgid sys_getpgid
171171+133 common fchdir sys_fchdir
172172+134 common bdflush sys_bdflush
173173+135 common sysfs sys_sysfs
174174+136 32 personality sys_personality ppc64_personality
175175+136 64 personality ppc64_personality
176176+136 spu personality ppc64_personality
177177+137 common afs_syscall sys_ni_syscall
178178+138 common setfsuid sys_setfsuid
179179+139 common setfsgid sys_setfsgid
180180+140 common _llseek sys_llseek
181181+141 common getdents sys_getdents compat_sys_getdents
182182+142 common _newselect sys_select compat_sys_select
183183+143 common flock sys_flock
184184+144 common msync sys_msync
185185+145 common readv sys_readv compat_sys_readv
186186+146 common writev sys_writev compat_sys_writev
187187+147 common getsid sys_getsid
188188+148 common fdatasync sys_fdatasync
189189+149 nospu _sysctl sys_sysctl compat_sys_sysctl
190190+150 common mlock sys_mlock
191191+151 common munlock sys_munlock
192192+152 common mlockall sys_mlockall
193193+153 common munlockall sys_munlockall
194194+154 common sched_setparam sys_sched_setparam
195195+155 common sched_getparam sys_sched_getparam
196196+156 common sched_setscheduler sys_sched_setscheduler
197197+157 common sched_getscheduler sys_sched_getscheduler
198198+158 common sched_yield sys_sched_yield
199199+159 common sched_get_priority_max sys_sched_get_priority_max
200200+160 common sched_get_priority_min sys_sched_get_priority_min
201201+161 common sched_rr_get_interval sys_sched_rr_get_interval compat_sys_sched_rr_get_interval
202202+162 common nanosleep sys_nanosleep compat_sys_nanosleep
203203+163 common mremap sys_mremap
204204+164 common setresuid sys_setresuid
205205+165 common getresuid sys_getresuid
206206+166 common query_module sys_ni_syscall
207207+167 common poll sys_poll
208208+168 common nfsservctl sys_ni_syscall
209209+169 common setresgid sys_setresgid
210210+170 common getresgid sys_getresgid
211211+171 common prctl sys_prctl
212212+172 nospu rt_sigreturn sys_rt_sigreturn compat_sys_rt_sigreturn
213213+173 nospu rt_sigaction sys_rt_sigaction compat_sys_rt_sigaction
214214+174 nospu rt_sigprocmask sys_rt_sigprocmask compat_sys_rt_sigprocmask
215215+175 nospu rt_sigpending sys_rt_sigpending compat_sys_rt_sigpending
216216+176 nospu rt_sigtimedwait sys_rt_sigtimedwait compat_sys_rt_sigtimedwait
217217+177 nospu rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo
218218+178 nospu rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend
219219+179 common pread64 sys_pread64 compat_sys_pread64
220220+180 common pwrite64 sys_pwrite64 compat_sys_pwrite64
221221+181 common chown sys_chown
222222+182 common getcwd sys_getcwd
223223+183 common capget sys_capget
224224+184 common capset sys_capset
225225+185 nospu sigaltstack sys_sigaltstack compat_sys_sigaltstack
226226+186 32 sendfile sys_sendfile compat_sys_sendfile
227227+186 64 sendfile sys_sendfile64
228228+186 spu sendfile sys_sendfile64
229229+187 common getpmsg sys_ni_syscall
230230+188 common putpmsg sys_ni_syscall
231231+189 nospu vfork ppc_vfork
232232+190 common ugetrlimit sys_getrlimit compat_sys_getrlimit
233233+191 common readahead sys_readahead compat_sys_readahead
234234+192 32 mmap2 sys_mmap2 compat_sys_mmap2
235235+193 32 truncate64 sys_truncate64 compat_sys_truncate64
236236+194 32 ftruncate64 sys_ftruncate64 compat_sys_ftruncate64
237237+195 32 stat64 sys_stat64
238238+196 32 lstat64 sys_lstat64
239239+197 32 fstat64 sys_fstat64
240240+198 nospu pciconfig_read sys_pciconfig_read
241241+199 nospu pciconfig_write sys_pciconfig_write
242242+200 nospu pciconfig_iobase sys_pciconfig_iobase
243243+201 common multiplexer sys_ni_syscall
244244+202 common getdents64 sys_getdents64
245245+203 common pivot_root sys_pivot_root
246246+204 32 fcntl64 sys_fcntl64 compat_sys_fcntl64
247247+205 common madvise sys_madvise
248248+206 common mincore sys_mincore
249249+207 common gettid sys_gettid
250250+208 common tkill sys_tkill
251251+209 common setxattr sys_setxattr
252252+210 common lsetxattr sys_lsetxattr
253253+211 common fsetxattr sys_fsetxattr
254254+212 common getxattr sys_getxattr
255255+213 common lgetxattr sys_lgetxattr
256256+214 common fgetxattr sys_fgetxattr
257257+215 common listxattr sys_listxattr
258258+216 common llistxattr sys_llistxattr
259259+217 common flistxattr sys_flistxattr
260260+218 common removexattr sys_removexattr
261261+219 common lremovexattr sys_lremovexattr
262262+220 common fremovexattr sys_fremovexattr
263263+221 common futex sys_futex compat_sys_futex
264264+222 common sched_setaffinity sys_sched_setaffinity compat_sys_sched_setaffinity
265265+223 common sched_getaffinity sys_sched_getaffinity compat_sys_sched_getaffinity
266266+# 224 unused
267267+225 common tuxcall sys_ni_syscall
268268+226 32 sendfile64 sys_sendfile64 compat_sys_sendfile64
269269+227 common io_setup sys_io_setup compat_sys_io_setup
270270+228 common io_destroy sys_io_destroy
271271+229 common io_getevents sys_io_getevents compat_sys_io_getevents
272272+230 common io_submit sys_io_submit compat_sys_io_submit
273273+231 common io_cancel sys_io_cancel
274274+232 nospu set_tid_address sys_set_tid_address
275275+233 common fadvise64 sys_fadvise64 ppc32_fadvise64
276276+234 nospu exit_group sys_exit_group
277277+235 nospu lookup_dcookie sys_lookup_dcookie compat_sys_lookup_dcookie
278278+236 common epoll_create sys_epoll_create
279279+237 common epoll_ctl sys_epoll_ctl
280280+238 common epoll_wait sys_epoll_wait
281281+239 common remap_file_pages sys_remap_file_pages
282282+240 common timer_create sys_timer_create compat_sys_timer_create
283283+241 common timer_settime sys_timer_settime compat_sys_timer_settime
284284+242 common timer_gettime sys_timer_gettime compat_sys_timer_gettime
285285+243 common timer_getoverrun sys_timer_getoverrun
286286+244 common timer_delete sys_timer_delete
287287+245 common clock_settime sys_clock_settime compat_sys_clock_settime
288288+246 common clock_gettime sys_clock_gettime compat_sys_clock_gettime
289289+247 common clock_getres sys_clock_getres compat_sys_clock_getres
290290+248 common clock_nanosleep sys_clock_nanosleep compat_sys_clock_nanosleep
291291+249 32 swapcontext ppc_swapcontext ppc32_swapcontext
292292+249 64 swapcontext ppc64_swapcontext
293293+249 spu swapcontext sys_ni_syscall
294294+250 common tgkill sys_tgkill
295295+251 common utimes sys_utimes compat_sys_utimes
296296+252 common statfs64 sys_statfs64 compat_sys_statfs64
297297+253 common fstatfs64 sys_fstatfs64 compat_sys_fstatfs64
298298+254 32 fadvise64_64 ppc_fadvise64_64
299299+254 spu fadvise64_64 sys_ni_syscall
300300+255 common rtas sys_rtas
301301+256 32 sys_debug_setcontext sys_debug_setcontext sys_ni_syscall
302302+256 64 sys_debug_setcontext sys_ni_syscall
303303+256 spu sys_debug_setcontext sys_ni_syscall
304304+# 257 reserved for vserver
305305+258 nospu migrate_pages sys_migrate_pages compat_sys_migrate_pages
306306+259 nospu mbind sys_mbind compat_sys_mbind
307307+260 nospu get_mempolicy sys_get_mempolicy compat_sys_get_mempolicy
308308+261 nospu set_mempolicy sys_set_mempolicy compat_sys_set_mempolicy
309309+262 nospu mq_open sys_mq_open compat_sys_mq_open
310310+263 nospu mq_unlink sys_mq_unlink
311311+264 nospu mq_timedsend sys_mq_timedsend compat_sys_mq_timedsend
312312+265 nospu mq_timedreceive sys_mq_timedreceive compat_sys_mq_timedreceive
313313+266 nospu mq_notify sys_mq_notify compat_sys_mq_notify
314314+267 nospu mq_getsetattr sys_mq_getsetattr compat_sys_mq_getsetattr
315315+268 nospu kexec_load sys_kexec_load compat_sys_kexec_load
316316+269 nospu add_key sys_add_key
317317+270 nospu request_key sys_request_key
318318+271 nospu keyctl sys_keyctl compat_sys_keyctl
319319+272 nospu waitid sys_waitid compat_sys_waitid
320320+273 nospu ioprio_set sys_ioprio_set
321321+274 nospu ioprio_get sys_ioprio_get
322322+275 nospu inotify_init sys_inotify_init
323323+276 nospu inotify_add_watch sys_inotify_add_watch
324324+277 nospu inotify_rm_watch sys_inotify_rm_watch
325325+278 nospu spu_run sys_spu_run
326326+279 nospu spu_create sys_spu_create
327327+280 nospu pselect6 sys_pselect6 compat_sys_pselect6
328328+281 nospu ppoll sys_ppoll compat_sys_ppoll
329329+282 common unshare sys_unshare
330330+283 common splice sys_splice
331331+284 common tee sys_tee
332332+285 common vmsplice sys_vmsplice compat_sys_vmsplice
333333+286 common openat sys_openat compat_sys_openat
334334+287 common mkdirat sys_mkdirat
335335+288 common mknodat sys_mknodat
336336+289 common fchownat sys_fchownat
337337+290 common futimesat sys_futimesat compat_sys_futimesat
338338+291 32 fstatat64 sys_fstatat64
339339+291 64 newfstatat sys_newfstatat
340340+291 spu newfstatat sys_newfstatat
341341+292 common unlinkat sys_unlinkat
342342+293 common renameat sys_renameat
343343+294 common linkat sys_linkat
344344+295 common symlinkat sys_symlinkat
345345+296 common readlinkat sys_readlinkat
346346+297 common fchmodat sys_fchmodat
347347+298 common faccessat sys_faccessat
348348+299 common get_robust_list sys_get_robust_list compat_sys_get_robust_list
349349+300 common set_robust_list sys_set_robust_list compat_sys_set_robust_list
350350+301 common move_pages sys_move_pages compat_sys_move_pages
351351+302 common getcpu sys_getcpu
352352+303 nospu epoll_pwait sys_epoll_pwait compat_sys_epoll_pwait
353353+304 common utimensat sys_utimensat compat_sys_utimensat
354354+305 common signalfd sys_signalfd compat_sys_signalfd
355355+306 common timerfd_create sys_timerfd_create
356356+307 common eventfd sys_eventfd
357357+308 common sync_file_range2 sys_sync_file_range2 compat_sys_sync_file_range2
358358+309 nospu fallocate sys_fallocate compat_sys_fallocate
359359+310 nospu subpage_prot sys_subpage_prot
360360+311 common timerfd_settime sys_timerfd_settime compat_sys_timerfd_settime
361361+312 common timerfd_gettime sys_timerfd_gettime compat_sys_timerfd_gettime
362362+313 common signalfd4 sys_signalfd4 compat_sys_signalfd4
363363+314 common eventfd2 sys_eventfd2
364364+315 common epoll_create1 sys_epoll_create1
365365+316 common dup3 sys_dup3
366366+317 common pipe2 sys_pipe2
367367+318 nospu inotify_init1 sys_inotify_init1
368368+319 common perf_event_open sys_perf_event_open
369369+320 common preadv sys_preadv compat_sys_preadv
370370+321 common pwritev sys_pwritev compat_sys_pwritev
371371+322 nospu rt_tgsigqueueinfo sys_rt_tgsigqueueinfo compat_sys_rt_tgsigqueueinfo
372372+323 nospu fanotify_init sys_fanotify_init
373373+324 nospu fanotify_mark sys_fanotify_mark compat_sys_fanotify_mark
374374+325 common prlimit64 sys_prlimit64
375375+326 common socket sys_socket
376376+327 common bind sys_bind
377377+328 common connect sys_connect
378378+329 common listen sys_listen
379379+330 common accept sys_accept
380380+331 common getsockname sys_getsockname
381381+332 common getpeername sys_getpeername
382382+333 common socketpair sys_socketpair
383383+334 common send sys_send
384384+335 common sendto sys_sendto
385385+336 common recv sys_recv compat_sys_recv
386386+337 common recvfrom sys_recvfrom compat_sys_recvfrom
387387+338 common shutdown sys_shutdown
388388+339 common setsockopt sys_setsockopt compat_sys_setsockopt
389389+340 common getsockopt sys_getsockopt compat_sys_getsockopt
390390+341 common sendmsg sys_sendmsg compat_sys_sendmsg
391391+342 common recvmsg sys_recvmsg compat_sys_recvmsg
392392+343 common recvmmsg sys_recvmmsg compat_sys_recvmmsg
393393+344 common accept4 sys_accept4
394394+345 common name_to_handle_at sys_name_to_handle_at
395395+346 common open_by_handle_at sys_open_by_handle_at compat_sys_open_by_handle_at
396396+347 common clock_adjtime sys_clock_adjtime compat_sys_clock_adjtime
397397+348 common syncfs sys_syncfs
398398+349 common sendmmsg sys_sendmmsg compat_sys_sendmmsg
399399+350 common setns sys_setns
400400+351 nospu process_vm_readv sys_process_vm_readv compat_sys_process_vm_readv
401401+352 nospu process_vm_writev sys_process_vm_writev compat_sys_process_vm_writev
402402+353 nospu finit_module sys_finit_module
403403+354 nospu kcmp sys_kcmp
404404+355 common sched_setattr sys_sched_setattr
405405+356 common sched_getattr sys_sched_getattr
406406+357 common renameat2 sys_renameat2
407407+358 common seccomp sys_seccomp
408408+359 common getrandom sys_getrandom
409409+360 common memfd_create sys_memfd_create
410410+361 common bpf sys_bpf
411411+362 nospu execveat sys_execveat compat_sys_execveat
412412+363 32 switch_endian sys_ni_syscall
413413+363 64 switch_endian ppc_switch_endian
414414+363 spu switch_endian sys_ni_syscall
415415+364 common userfaultfd sys_userfaultfd
416416+365 common membarrier sys_membarrier
417417+378 nospu mlock2 sys_mlock2
418418+379 nospu copy_file_range sys_copy_file_range
419419+380 common preadv2 sys_preadv2 compat_sys_preadv2
420420+381 common pwritev2 sys_pwritev2 compat_sys_pwritev2
421421+382 nospu kexec_file_load sys_kexec_file_load
422422+383 nospu statx sys_statx
423423+384 nospu pkey_alloc sys_pkey_alloc
424424+385 nospu pkey_free sys_pkey_free
425425+386 nospu pkey_mprotect sys_pkey_mprotect
426426+387 nospu rseq sys_rseq
427427+388 nospu io_pgetevents sys_io_pgetevents compat_sys_io_pgetevents
···10281028 
10291029 static int callchain_param__setup_sample_type(struct callchain_param *callchain)
10301030 {
10311031- 	if (!perf_hpp_list.sym) {
10321032- 		if (callchain->enabled) {
10331033- 			ui__error("Selected -g but \"sym\" not present in --sort/-s.");
10341034- 			return -EINVAL;
10351035- 		}
10361036- 	} else if (callchain->mode != CHAIN_NONE) {
10311031+ 	if (callchain->mode != CHAIN_NONE) {
10371032 		if (callchain_register_param(callchain) < 0) {
10381033 			ui__error("Can't register callchain params.\n");
10391034 			return -EINVAL;
···55 #define VDSO__MAP_NAME "[vdso]"
66 
77 /*
88- * Include definition of find_vdso_map() also used in util/vdso.c for
88+ * Include definition of find_map() also used in util/vdso.c for
99  * building perf.
1010  */
1111-#include "util/find-vdso-map.c"
1111+#include "util/find-map.c"
1212 
1313 int main(void)
1414 {
1515 	void *start, *end;
1616 	size_t size, written;
1717 
1818- 	if (find_vdso_map(&start, &end))
1818+ 	if (find_map(&start, &end, VDSO__MAP_NAME))
1919 		return 1;
2020 
2121 	size = end - start;
···109109 		return ret;
110110 	}
111111 	len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap_saved);
112112- 	va_end(ap_saved);
113112 	if (len > strbuf_avail(sb)) {
114113 		pr_debug("this should not happen, your vsnprintf is broken");
115114 		va_end(ap_saved);
···1818 #include "debug.h"
1919 
2020 /*
2121- * Include definition of find_vdso_map() also used in perf-read-vdso.c for
2121+ * Include definition of find_map() also used in perf-read-vdso.c for
2222  * building perf-read-vdso32 and perf-read-vdsox32.
2323  */
2424-#include "find-vdso-map.c"
2424+#include "find-map.c"
2525 
2626 #define VDSO__TEMP_FILE_NAME "/tmp/perf-vdso.so-XXXXXX"
2727 
···7676 	if (vdso_file->found)
7777 		return vdso_file->temp_file_name;
7878 
7979- 	if (vdso_file->error || find_vdso_map(&start, &end))
7979+ 	if (vdso_file->error || find_map(&start, &end, VDSO__MAP_NAME))
8080 		return NULL;
8181 
8282 	size = end - start;