···
 	that the USB device has been connected to the machine. This
 	file is read-only.
 Users:
-		PowerTOP <power@bughost.org>
-		http://www.lesswatts.org/projects/powertop/
+		PowerTOP <powertop@lists.01.org>
+		https://01.org/powertop/

 What:		/sys/bus/usb/device/.../power/active_duration
 Date:		January 2008
···
 	will give an integer percentage. Note that this does not
 	account for counter wrap.
 Users:
-		PowerTOP <power@bughost.org>
-		http://www.lesswatts.org/projects/powertop/
+		PowerTOP <powertop@lists.01.org>
+		https://01.org/powertop/

 What:		/sys/bus/usb/devices/<busnum>-<port[.port]>...:<config num>-<interface num>/supports_autosuspend
 Date:		January 2008
+16-16
Documentation/ABI/testing/sysfs-devices-power
···
 What:		/sys/devices/.../power/
 Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../power directory contains attributes
 		allowing the user space to check and modify some power
···

 What:		/sys/devices/.../power/wakeup
 Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../power/wakeup attribute allows the user
 		space to check if the device is enabled to wake up the system
···

 What:		/sys/devices/.../power/control
 Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../power/control attribute allows the user
 		space to control the run-time power management of the device.
···

 What:		/sys/devices/.../power/async
 Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../async attribute allows the user space to
 		enable or diasble the device's suspend and resume callbacks to
···

 What:		/sys/devices/.../power/wakeup_count
 Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../wakeup_count attribute contains the number
 		of signaled wakeup events associated with the device. This
···

 What:		/sys/devices/.../power/wakeup_active_count
 Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../wakeup_active_count attribute contains the
 		number of times the processing of wakeup events associated with
···

 What:		/sys/devices/.../power/wakeup_abort_count
 Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../wakeup_abort_count attribute contains the
 		number of times the processing of a wakeup event associated with
···

 What:		/sys/devices/.../power/wakeup_expire_count
 Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../wakeup_expire_count attribute contains the
 		number of times a wakeup event associated with the device has
···

 What:		/sys/devices/.../power/wakeup_active
 Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../wakeup_active attribute contains either 1,
 		or 0, depending on whether or not a wakeup event associated with
···

 What:		/sys/devices/.../power/wakeup_total_time_ms
 Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../wakeup_total_time_ms attribute contains
 		the total time of processing wakeup events associated with the
···

 What:		/sys/devices/.../power/wakeup_max_time_ms
 Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../wakeup_max_time_ms attribute contains
 		the maximum time of processing a single wakeup event associated
···

 What:		/sys/devices/.../power/wakeup_last_time_ms
 Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../wakeup_last_time_ms attribute contains
 		the value of the monotonic clock corresponding to the time of
···

 What:		/sys/devices/.../power/wakeup_prevent_sleep_time_ms
 Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../wakeup_prevent_sleep_time_ms attribute
 		contains the total time the device has been preventing
···

 What:		/sys/devices/.../power/pm_qos_latency_us
 Date:		March 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../power/pm_qos_resume_latency_us attribute
 		contains the PM QoS resume latency limit for the given device,
···

 What:		/sys/devices/.../power/pm_qos_no_power_off
 Date:		September 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../power/pm_qos_no_power_off attribute
 		is used for manipulating the PM QoS "no power off" flag. If
···

 What:		/sys/devices/.../power/pm_qos_remote_wakeup
 Date:		September 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/devices/.../power/pm_qos_remote_wakeup attribute
 		is used for manipulating the PM QoS "remote wakeup required"
+11-11
Documentation/ABI/testing/sysfs-power
···
 What:		/sys/power/
 Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power directory will contain files that will
 		provide a unified interface to the power management
···

 What:		/sys/power/state
 Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power/state file controls the system power state.
 		Reading from this file returns what states are supported,
···

 What:		/sys/power/disk
 Date:		September 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power/disk file controls the operating mode of the
 		suspend-to-disk mechanism. Reading from this file returns
···

 What:		/sys/power/image_size
 Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power/image_size file controls the size of the image
 		created by the suspend-to-disk mechanism. It can be written a
···

 What:		/sys/power/pm_trace
 Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power/pm_trace file controls the code which saves the
 		last PM event point in the RTC across reboots, so that you can
···

 What:		/sys/power/pm_async
 Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power/pm_async file controls the switch allowing the
 		user space to enable or disable asynchronous suspend and resume
···

 What:		/sys/power/wakeup_count
 Date:		July 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power/wakeup_count file allows user space to put the
 		system into a sleep state while taking into account the
···

 What:		/sys/power/reserved_size
 Date:		May 2011
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power/reserved_size file allows user space to control
 		the amount of memory reserved for allocations made by device
···

 What:		/sys/power/autosleep
 Date:		April 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power/autosleep file can be written one of the strings
 		returned by reads from /sys/power/state. If that happens, a
···

 What:		/sys/power/wake_lock
 Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power/wake_lock file allows user space to create
 		wakeup source objects and activate them on demand (if one of
···

 What:		/sys/power/wake_unlock
 Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power/wake_unlock file allows user space to deactivate
 		wakeup sources created with the help of /sys/power/wake_lock.
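As a side note, the /sys/power/wakeup_count entry above documents a read-then-write handshake that lets user space suspend without racing against wakeup events. The sketch below is an illustration, not part of the patch; `try_suspend` takes the sysfs root as a parameter only so the logic can be exercised against a fake tree without root privileges (on a real system you would pass /sys).

```shell
# Hypothetical illustration of the race-free suspend handshake described
# by the /sys/power/wakeup_count ABI entry.
try_suspend() {
	sysfs="$1"
	count=$(cat "$sysfs/power/wakeup_count") || return 1
	# The write fails if wakeup events were signaled after the read,
	# in which case user space must not enter the sleep state.
	echo "$count" > "$sysfs/power/wakeup_count" 2>/dev/null || return 1
	echo mem > "$sysfs/power/state"
}

# Exercise against a fake sysfs tree:
fake=$(mktemp -d)
mkdir -p "$fake/power"
echo 42 > "$fake/power/wakeup_count"
: > "$fake/power/state"
try_suspend "$fake"
cat "$fake/power/state"    # prints "mem"
```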
+1-1
Documentation/acpi/dsdt-override.txt
···

 When to use this method is described in detail on the
 Linux/ACPI home page:
-http://www.lesswatts.org/projects/acpi/overridingDSDT.php
+https://01.org/linux-acpi/documentation/overriding-dsdt
-168
Documentation/devicetree/bindings/memory.txt
···
-*** Memory binding ***
-
-The /memory node provides basic information about the address and size
-of the physical memory. This node is usually filled or updated by the
-bootloader, depending on the actual memory configuration of the given
-hardware.
-
-The memory layout is described by the following node:
-
-/ {
-	#address-cells = <(n)>;
-	#size-cells = <(m)>;
-	memory {
-		device_type = "memory";
-		reg =  <(baseaddr1) (size1)
-			(baseaddr2) (size2)
-			...
-			(baseaddrN) (sizeN)>;
-	};
-	...
-};
-
-A memory node follows the typical device tree rules for "reg" property:
-n:		number of cells used to store base address value
-m:		number of cells used to store size value
-baseaddrX:	defines a base address of the defined memory bank
-sizeX:		the size of the defined memory bank
-
-
-More than one memory bank can be defined.
-
-
-*** Reserved memory regions ***
-
-In /memory/reserved-memory node one can create child nodes describing
-particular reserved (excluded from normal use) memory regions. Such
-memory regions are usually designed for the special usage by various
-device drivers. A good example are contiguous memory allocations or
-memory sharing with other operating system on the same hardware board.
-Those special memory regions might depend on the board configuration and
-devices used on the target system.
-
-Parameters for each memory region can be encoded into the device tree
-with the following convention:
-
-[(label):] (name) {
-	compatible = "linux,contiguous-memory-region", "reserved-memory-region";
-	reg = <(address) (size)>;
-	(linux,default-contiguous-region);
-};
-
-compatible:	one or more of:
-	- "linux,contiguous-memory-region" - enables binding of this
-	  region to Contiguous Memory Allocator (special region for
-	  contiguous memory allocations, shared with movable system
-	  memory, Linux kernel-specific).
-	- "reserved-memory-region" - compatibility is defined, given
-	  region is assigned for exclusive usage for by the respective
-	  devices.
-
-reg:	standard property defining the base address and size of
-	the memory region
-
-linux,default-contiguous-region: property indicating that the region
-	is the default region for all contiguous memory
-	allocations, Linux specific (optional)
-
-It is optional to specify the base address, so if one wants to use
-autoconfiguration of the base address, '0' can be specified as a base
-address in the 'reg' property.
-
-The /memory/reserved-memory node must contain the same #address-cells
-and #size-cells value as the root node.
-
-
-*** Device node's properties ***
-
-Once regions in the /memory/reserved-memory node have been defined, they
-may be referenced by other device nodes. Bindings that wish to reference
-memory regions should explicitly document their use of the following
-property:
-
-memory-region = <&phandle_to_defined_region>;
-
-This property indicates that the device driver should use the memory
-region pointed by the given phandle.
-
-
-*** Example ***
-
-This example defines a memory consisting of 4 memory banks. 3 contiguous
-regions are defined for Linux kernel, one default of all device drivers
-(named contig_mem, placed at 0x72000000, 64MiB), one dedicated to the
-framebuffer device (labelled display_mem, placed at 0x78000000, 8MiB)
-and one for multimedia processing (labelled multimedia_mem, placed at
-0x77000000, 64MiB). 'display_mem' region is then assigned to fb@12300000
-device for DMA memory allocations (Linux kernel drivers will use CMA is
-available or dma-exclusive usage otherwise). 'multimedia_mem' is
-assigned to scaler@12500000 and codec@12600000 devices for contiguous
-memory allocations when CMA driver is enabled.
-
-The reason for creating a separate region for framebuffer device is to
-match the framebuffer base address to the one configured by bootloader,
-so once Linux kernel drivers starts no glitches on the displayed boot
-logo appears. Scaller and codec drivers should share the memory
-allocations.
-
-/ {
-	#address-cells = <1>;
-	#size-cells = <1>;
-
-	/* ... */
-
-	memory {
-		reg =  <0x40000000 0x10000000
-			0x50000000 0x10000000
-			0x60000000 0x10000000
-			0x70000000 0x10000000>;
-
-		reserved-memory {
-			#address-cells = <1>;
-			#size-cells = <1>;
-
-			/*
-			 * global autoconfigured region for contiguous allocations
-			 * (used only with Contiguous Memory Allocator)
-			 */
-			contig_region@0 {
-				compatible = "linux,contiguous-memory-region";
-				reg = <0x0 0x4000000>;
-				linux,default-contiguous-region;
-			};
-
-			/*
-			 * special region for framebuffer
-			 */
-			display_region: region@78000000 {
-				compatible = "linux,contiguous-memory-region", "reserved-memory-region";
-				reg = <0x78000000 0x800000>;
-			};
-
-			/*
-			 * special region for multimedia processing devices
-			 */
-			multimedia_region: region@77000000 {
-				compatible = "linux,contiguous-memory-region";
-				reg = <0x77000000 0x4000000>;
-			};
-		};
-	};
-
-	/* ... */
-
-	fb0: fb@12300000 {
-		status = "okay";
-		memory-region = <&display_region>;
-	};
-
-	scaler: scaler@12500000 {
-		status = "okay";
-		memory-region = <&multimedia_region>;
-	};
-
-	codec: codec@12600000 {
-		status = "okay";
-		memory-region = <&multimedia_region>;
-	};
-};
+1
Documentation/sound/alsa/HD-Audio-Models.txt
···
   alc269-dmic		Enable ALC269(VA) digital mic workaround
   alc271-dmic		Enable ALC271X digital mic workaround
   inv-dmic		Inverted internal mic workaround
+  headset-mic		Indicates a combined headset (headphone+mic) jack
   lenovo-dock		Enables docking station I/O for some Lenovos
   dell-headset-multi	Headset jack, which can also be used as mic-in
   dell-headset-dock	Headset jack (without mic-in), and also dock I/O
+19-13
MAINTAINERS
···

 ACPI
 M:	Len Brown <lenb@kernel.org>
-M:	Rafael J. Wysocki <rjw@sisk.pl>
+M:	Rafael J. Wysocki <rjw@rjwysocki.net>
 L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
-Q:	http://patchwork.kernel.org/project/linux-acpi/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux
+W:	https://01.org/linux-acpi
+Q:	https://patchwork.kernel.org/project/linux-acpi/list/
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
 S:	Supported
 F:	drivers/acpi/
 F:	drivers/pnp/pnpacpi/
···
 ACPI FAN DRIVER
 M:	Zhang Rui <rui.zhang@intel.com>
 L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
 S:	Supported
 F:	drivers/acpi/fan.c

 ACPI THERMAL DRIVER
 M:	Zhang Rui <rui.zhang@intel.com>
 L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
 S:	Supported
 F:	drivers/acpi/*thermal*

 ACPI VIDEO DRIVER
 M:	Zhang Rui <rui.zhang@intel.com>
 L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
 S:	Supported
 F:	drivers/acpi/video.c
···
 F:	drivers/net/ethernet/ti/cpmac.c

 CPU FREQUENCY DRIVERS
-M:	Rafael J. Wysocki <rjw@sisk.pl>
+M:	Rafael J. Wysocki <rjw@rjwysocki.net>
 M:	Viresh Kumar <viresh.kumar@linaro.org>
 L:	cpufreq@vger.kernel.org
 L:	linux-pm@vger.kernel.org
···
 F:	drivers/cpuidle/cpuidle-big_little.c

 CPUIDLE DRIVERS
-M:	Rafael J. Wysocki <rjw@sisk.pl>
+M:	Rafael J. Wysocki <rjw@rjwysocki.net>
 M:	Daniel Lezcano <daniel.lezcano@linaro.org>
 L:	linux-pm@vger.kernel.org
 S:	Maintained
···

 FREEZER
 M:	Pavel Machek <pavel@ucw.cz>
-M:	"Rafael J. Wysocki" <rjw@sisk.pl>
+M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
 L:	linux-pm@vger.kernel.org
 S:	Supported
 F:	Documentation/power/freezing-of-tasks.txt
···
 L:	linux-scsi@vger.kernel.org
 S:	Odd Fixes (e.g., new signatures)
 F:	drivers/scsi/fdomain.*
+
+GCOV BASED KERNEL PROFILING
+M:	Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
+S:	Maintained
+F:	kernel/gcov/
+F:	Documentation/gcov.txt

 GDT SCSI DISK ARRAY CONTROLLER DRIVER
 M:	Achim Leubner <achim_leubner@adaptec.com>
···
 HIBERNATION (aka Software Suspend, aka swsusp)
 M:	Pavel Machek <pavel@ucw.cz>
-M:	"Rafael J. Wysocki" <rjw@sisk.pl>
+M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
 L:	linux-pm@vger.kernel.org
 S:	Supported
 F:	arch/x86/power/
···
 INTEL MENLOW THERMAL DRIVER
 M:	Sujith Thomas <sujith.thomas@intel.com>
 L:	platform-driver-x86@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
 S:	Supported
 F:	drivers/platform/x86/intel_menlow.c
···
 SUSPEND TO RAM
 M:	Len Brown <len.brown@intel.com>
 M:	Pavel Machek <pavel@ucw.cz>
-M:	"Rafael J. Wysocki" <rjw@sisk.pl>
+M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
 L:	linux-pm@vger.kernel.org
 S:	Supported
 F:	Documentation/power/
+1-1
Makefile
···
 VERSION = 3
 PATCHLEVEL = 12
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc6
 NAME = One Giant Leap for Frogkind

 # *DOCUMENTATION*
+1-1
arch/arc/kernel/ptrace.c
···
 	REG_IGNORE_ONE(pad2);
 	REG_IN_CHUNK(callee, efa, cregs);	/* callee_regs[r25..r13] */
 	REG_IGNORE_ONE(efa);			/* efa update invalid */
-	REG_IN_ONE(stop_pc, &ptregs->ret);	/* stop_pc: PC update */
+	REG_IGNORE_ONE(stop_pc);		/* PC updated via @ret */

 	return ret;
 }
···
 		     <1 14 0xf08>,
 		     <1 11 0xf08>,
 		     <1 10 0xf08>;
+	/* Unfortunately we need this since some versions of U-Boot
+	 * on Exynos don't set the CNTFRQ register, so we need the
+	 * value from DT.
+	 */
+	clock-frequency = <24000000>;
 };

 mct@101C0000 {
···
 # $4 - default install path (blank if root directory)
 #

+verify () {
+	if [ ! -f "$1" ]; then
+		echo ""                                                   1>&2
+		echo " *** Missing file: $1"                              1>&2
+		echo ' *** You need to run "make" before "make install".' 1>&2
+		echo ""                                                   1>&2
+		exit 1
+	fi
+}
+
+# Make sure the files actually exist
+verify "$2"
+verify "$3"
+
 # User may have a custom install script
 if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
 if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
+4-2
arch/arm/common/mcpm_entry.c
···
 {
 	phys_reset_t phys_reset;

-	BUG_ON(!platform_ops);
+	if (WARN_ON_ONCE(!platform_ops || !platform_ops->power_down))
+		return;
 	BUG_ON(!irqs_disabled());

 	/*
···
 {
 	phys_reset_t phys_reset;

-	BUG_ON(!platform_ops);
+	if (WARN_ON_ONCE(!platform_ops || !platform_ops->suspend))
+		return;
 	BUG_ON(!irqs_disabled());

 	/* Very similar to mcpm_cpu_power_down() */
+4-1
arch/arm/common/sharpsl_param.c
···
 #include <linux/module.h>
 #include <linux/string.h>
 #include <asm/mach/sharpsl_param.h>
+#include <asm/memory.h>

 /*
  * Certain hardware parameters determined at the time of device manufacture,
···
  */
 #ifdef CONFIG_ARCH_SA1100
 #define PARAM_BASE	0xe8ffc000
+#define param_start(x)	(void *)(x)
 #else
 #define PARAM_BASE	0xa0000a00
+#define param_start(x)	__va(x)
 #endif
 #define MAGIC_CHG(a,b,c,d) ( ( d << 24 ) | ( c << 16 ) | ( b << 8 ) | a )
···
 void sharpsl_save_param(void)
 {
-	memcpy(&sharpsl_param, (void *)PARAM_BASE, sizeof(struct sharpsl_param_info));
+	memcpy(&sharpsl_param, param_start(PARAM_BASE), sizeof(struct sharpsl_param_info));

 	if (sharpsl_param.comadj_keyword != COMADJ_MAGIC)
 		sharpsl_param.comadj=-1;
···
  *
  * This must be called with interrupts disabled.
  *
- * This does not return. Re-entry in the kernel is expected via
- * mcpm_entry_point.
+ * On success this does not return. Re-entry in the kernel is expected
+ * via mcpm_entry_point.
+ *
+ * This will return if mcpm_platform_register() has not been called
+ * previously in which case the caller should take appropriate action.
  */
 void mcpm_cpu_power_down(void);
···
  *
  * This must be called with interrupts disabled.
  *
- * This does not return. Re-entry in the kernel is expected via
- * mcpm_entry_point.
+ * On success this does not return. Re-entry in the kernel is expected
+ * via mcpm_entry_point.
+ *
+ * This will return if mcpm_platform_register() has not been called
+ * previously in which case the caller should take appropriate action.
  */
 void mcpm_cpu_suspend(u64 expected_residency);
+6
arch/arm/include/asm/syscall.h
···
 				 unsigned int i, unsigned int n,
 				 unsigned long *args)
 {
+	if (n == 0)
+		return;
+
 	if (i + n > SYSCALL_MAX_ARGS) {
 		unsigned long *args_bad = args + SYSCALL_MAX_ARGS - i;
 		unsigned int n_bad = n + i - SYSCALL_MAX_ARGS;
···
 				 unsigned int i, unsigned int n,
 				 const unsigned long *args)
 {
+	if (n == 0)
+		return;
+
 	if (i + n > SYSCALL_MAX_ARGS) {
 		pr_warning("%s called with max args %d, handling only %d\n",
 			   __func__, i + n, SYSCALL_MAX_ARGS);
+20-1
arch/arm/kernel/head.S
···
 	mrc	p15, 0, r0, c0, c0, 5	@ read MPIDR
 	and	r0, r0, #0xc0000000	@ multiprocessing extensions and
 	teq	r0, #0x80000000		@ not part of a uniprocessor system?
-	moveq	pc, lr			@ yes, assume SMP
+	bne	__fixup_smp_on_up	@ no, assume UP
+
+	@ Core indicates it is SMP. Check for Aegis SOC where a single
+	@ Cortex-A9 CPU is present but SMP operations fault.
+	mov	r4, #0x41000000
+	orr	r4, r4, #0x0000c000
+	orr	r4, r4, #0x00000090
+	teq	r3, r4			@ Check for ARM Cortex-A9
+	movne	pc, lr			@ Not ARM Cortex-A9,
+
+	@ If a future SoC *does* use 0x0 as the PERIPH_BASE, then the
+	@ below address check will need to be #ifdef'd or equivalent
+	@ for the Aegis platform.
+	mrc	p15, 4, r0, c15, c0	@ get SCU base address
+	teq	r0, #0x0		@ '0' on actual UP A9 hardware
+	beq	__fixup_smp_on_up	@ So its an A9 UP
+	ldr	r0, [r0, #4]		@ read SCU Config
+	and	r0, r0, #0x3		@ number of CPUs
+	teq	r0, #0x0		@ is 1?
+	movne	pc, lr

 __fixup_smp_on_up:
 	adr	r0, 1f
···
 	do_exit(SIGSEGV);
 }

-int syscall_ipi(int (*syscall) (struct pt_regs *), struct pt_regs *regs)
-{
-	return syscall(regs);
-}
-
 /* gdb uses break 4,8 */
 #define GDB_BREAK_INSN 0x10004
 static void handle_gdb_break(struct pt_regs *regs, int wot)
···
 	else {

 	    /*
-	     * The kernel should never fault on its own address space.
+	     * The kernel should never fault on its own address space,
+	     * unless pagefault_disable() was called before.
 	     */

-	    if (fault_space == 0)
+	    if (fault_space == 0 && !in_atomic())
 	    {
 		pdc_chassis_send_status(PDC_CHASSIS_DIRECT_PANIC);
 		parisc_terminate("Kernel Fault", regs, code, fault_address);
-
 	    }
 	}
+14-1
arch/parisc/lib/memcpy.c
···
 #ifdef __KERNEL__
 #include <linux/module.h>
 #include <linux/compiler.h>
-#include <asm/uaccess.h>
+#include <linux/uaccess.h>
 #define s_space "%%sr1"
 #define d_space "%%sr2"
 #else
···
 EXPORT_SYMBOL(copy_from_user);
 EXPORT_SYMBOL(copy_in_user);
 EXPORT_SYMBOL(memcpy);
+
+long probe_kernel_read(void *dst, const void *src, size_t size)
+{
+	unsigned long addr = (unsigned long)src;
+
+	if (size < 0 || addr < PAGE_SIZE)
+		return -EFAULT;
+
+	/* check for I/O space F_EXTEND(0xfff00000) access as well? */
+
+	return __probe_kernel_read(dst, src, size);
+}
+
 #endif
+10-5
arch/parisc/mm/fault.c
···
 			      unsigned long address)
 {
 	struct vm_area_struct *vma, *prev_vma;
-	struct task_struct *tsk = current;
-	struct mm_struct *mm = tsk->mm;
+	struct task_struct *tsk;
+	struct mm_struct *mm;
 	unsigned long acc_type;
 	int fault;
-	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+	unsigned int flags;

-	if (in_atomic() || !mm)
+	if (in_atomic())
 		goto no_context;

+	tsk = current;
+	mm = tsk->mm;
+	if (!mm)
+		goto no_context;
+
+	flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;

 	acc_type = parisc_acctyp(code, regs->iir);
-
 	if (acc_type & VM_WRITE)
 		flags |= FAULT_FLAG_WRITE;
 retry:
···
 	unsigned long hva;
 	int pfnmap = 0;
 	int tsize = BOOK3E_PAGESZ_4K;
+	int ret = 0;
+	unsigned long mmu_seq;
+	struct kvm *kvm = vcpu_e500->vcpu.kvm;
+
+	/* used to check for invalidations in progress */
+	mmu_seq = kvm->mmu_notifier_seq;
+	smp_rmb();

 	/*
 	 * Translate guest physical to true physical, acquiring
···
 		gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
 	}

+	spin_lock(&kvm->mmu_lock);
+	if (mmu_notifier_retry(kvm, mmu_seq)) {
+		ret = -EAGAIN;
+		goto out;
+	}
+
 	kvmppc_e500_ref_setup(ref, gtlbe, pfn);

 	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
···
 	/* Clear i-cache for new pages */
 	kvmppc_mmu_flush_icache(pfn);

+out:
+	spin_unlock(&kvm->mmu_lock);
+
 	/* Drop refcount on page, so that mmu notifiers can clear it */
 	kvm_release_pfn_clean(pfn);

-	return 0;
+	return ret;
 }

 /* XXX only map the one-one case, for now use TLB0 */
···
  *
  * Atomically sets @v to @i and returns old @v
  */
-static inline u64 atomic64_xchg(atomic64_t *v, u64 n)
+static inline long long atomic64_xchg(atomic64_t *v, long long n)
 {
 	return xchg64(&v->counter, n);
 }
···
  * Atomically checks if @v holds @o and replaces it with @n if so.
  * Returns the old value at @v.
  */
-static inline u64 atomic64_cmpxchg(atomic64_t *v, u64 o, u64 n)
+static inline long long atomic64_cmpxchg(atomic64_t *v, long long o,
+					 long long n)
 {
 	return cmpxchg64(&v->counter, o, n);
 }
+15-12
arch/tile/include/asm/atomic_32.h
···
 /* A 64bit atomic type */

 typedef struct {
-	u64 __aligned(8) counter;
+	long long counter;
 } atomic64_t;

 #define ATOMIC64_INIT(val) { (val) }
···
  *
  * Atomically reads the value of @v.
  */
-static inline u64 atomic64_read(const atomic64_t *v)
+static inline long long atomic64_read(const atomic64_t *v)
 {
 	/*
 	 * Requires an atomic op to read both 32-bit parts consistently.
 	 * Casting away const is safe since the atomic support routines
 	 * do not write to memory if the value has not been modified.
 	 */
-	return _atomic64_xchg_add((u64 *)&v->counter, 0);
+	return _atomic64_xchg_add((long long *)&v->counter, 0);
 }

 /**
···
  *
  * Atomically adds @i to @v.
  */
-static inline void atomic64_add(u64 i, atomic64_t *v)
+static inline void atomic64_add(long long i, atomic64_t *v)
 {
 	_atomic64_xchg_add(&v->counter, i);
 }
···
  *
  * Atomically adds @i to @v and returns @i + @v
  */
-static inline u64 atomic64_add_return(u64 i, atomic64_t *v)
+static inline long long atomic64_add_return(long long i, atomic64_t *v)
 {
 	smp_mb();  /* barrier for proper semantics */
 	return _atomic64_xchg_add(&v->counter, i) + i;
···
  * Atomically adds @a to @v, so long as @v was not already @u.
  * Returns non-zero if @v was not @u, and zero otherwise.
  */
-static inline u64 atomic64_add_unless(atomic64_t *v, u64 a, u64 u)
+static inline long long atomic64_add_unless(atomic64_t *v, long long a,
+					    long long u)
 {
 	smp_mb();  /* barrier for proper semantics */
 	return _atomic64_xchg_add_unless(&v->counter, a, u) != u;
···
  * atomic64_set() can't be just a raw store, since it would be lost if it
  * fell between the load and store of one of the other atomic ops.
  */
-static inline void atomic64_set(atomic64_t *v, u64 n)
+static inline void atomic64_set(atomic64_t *v, long long n)
 {
 	_atomic64_xchg(&v->counter, n);
 }
···
 extern struct __get_user __atomic_or(volatile int *p, int *lock, int n);
 extern struct __get_user __atomic_andn(volatile int *p, int *lock, int n);
 extern struct __get_user __atomic_xor(volatile int *p, int *lock, int n);
-extern u64 __atomic64_cmpxchg(volatile u64 *p, int *lock, u64 o, u64 n);
-extern u64 __atomic64_xchg(volatile u64 *p, int *lock, u64 n);
-extern u64 __atomic64_xchg_add(volatile u64 *p, int *lock, u64 n);
-extern u64 __atomic64_xchg_add_unless(volatile u64 *p,
-				      int *lock, u64 o, u64 n);
+extern long long __atomic64_cmpxchg(volatile long long *p, int *lock,
+					long long o, long long n);
+extern long long __atomic64_xchg(volatile long long *p, int *lock, long long n);
+extern long long __atomic64_xchg_add(volatile long long *p, int *lock,
+					long long n);
+extern long long __atomic64_xchg_add_unless(volatile long long *p,
+					int *lock, long long o, long long n);

 /* Return failure from the atomic wrappers. */
 struct __get_user __atomic_bad_address(int __user *addr);
+17-11
arch/tile/include/asm/cmpxchg.h
···
 int _atomic_xchg_add(int *v, int i);
 int _atomic_xchg_add_unless(int *v, int a, int u);
 int _atomic_cmpxchg(int *ptr, int o, int n);
-u64 _atomic64_xchg(u64 *v, u64 n);
-u64 _atomic64_xchg_add(u64 *v, u64 i);
-u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u);
-u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n);
+long long _atomic64_xchg(long long *v, long long n);
+long long _atomic64_xchg_add(long long *v, long long i);
+long long _atomic64_xchg_add_unless(long long *v, long long a, long long u);
+long long _atomic64_cmpxchg(long long *v, long long o, long long n);

 #define xchg(ptr, n) \
 	({ \
···
 		if (sizeof(*(ptr)) != 4) \
 			__cmpxchg_called_with_bad_pointer(); \
 		smp_mb(); \
-		(typeof(*(ptr)))_atomic_cmpxchg((int *)ptr, (int)o, (int)n); \
+		(typeof(*(ptr)))_atomic_cmpxchg((int *)ptr, (int)o, \
+						(int)n); \
 	})

 #define xchg64(ptr, n) \
···
 		if (sizeof(*(ptr)) != 8) \
 			__xchg_called_with_bad_pointer(); \
 		smp_mb(); \
-		(typeof(*(ptr)))_atomic64_xchg((u64 *)(ptr), (u64)(n)); \
+		(typeof(*(ptr)))_atomic64_xchg((long long *)(ptr), \
+						(long long)(n)); \
 	})

 #define cmpxchg64(ptr, o, n) \
···
 		if (sizeof(*(ptr)) != 8) \
 			__cmpxchg_called_with_bad_pointer(); \
 		smp_mb(); \
-		(typeof(*(ptr)))_atomic64_cmpxchg((u64 *)ptr, (u64)o, (u64)n); \
+		(typeof(*(ptr)))_atomic64_cmpxchg((long long *)ptr, \
+						(long long)o, (long long)n); \
 	})

 #else
···
 	switch (sizeof(*(ptr))) { \
 	case 4: \
 		__x = (typeof(__x))(unsigned long) \
-			__insn_exch4((ptr), (u32)(unsigned long)(n)); \
+			__insn_exch4((ptr), \
+				(u32)(unsigned long)(n)); \
 		break; \
 	case 8: \
-		__x = (typeof(__x)) \
+		__x = (typeof(__x)) \
 			__insn_exch((ptr), (unsigned long)(n)); \
 		break; \
 	default: \
···
 	switch (sizeof(*(ptr))) { \
 	case 4: \
 		__x = (typeof(__x))(unsigned long) \
-			__insn_cmpexch4((ptr), (u32)(unsigned long)(n)); \
+			__insn_cmpexch4((ptr), \
+				(u32)(unsigned long)(n)); \
 		break; \
 	case 8: \
-		__x = (typeof(__x))__insn_cmpexch((ptr), (u64)(n)); \
+		__x = (typeof(__x))__insn_cmpexch((ptr), \
+						(long long)(n)); \
 		break; \
 	default: \
 		__cmpxchg_called_with_bad_pointer(); \
arch/tile/include/asm/percpu.h (+31 -3)
···
 #ifndef _ASM_TILE_PERCPU_H
 #define _ASM_TILE_PERCPU_H
 
-register unsigned long __my_cpu_offset __asm__("tp");
-#define __my_cpu_offset __my_cpu_offset
-#define set_my_cpu_offset(tp) (__my_cpu_offset = (tp))
+register unsigned long my_cpu_offset_reg asm("tp");
+
+#ifdef CONFIG_PREEMPT
+/*
+ * For full preemption, we can't just use the register variable
+ * directly, since we need barrier() to hazard against it, causing the
+ * compiler to reload anything computed from a previous "tp" value.
+ * But we also don't want to use volatile asm, since we'd like the
+ * compiler to be able to cache the value across multiple percpu reads.
+ * So we use a fake stack read as a hazard against barrier().
+ * The 'U' constraint is like 'm' but disallows postincrement.
+ */
+static inline unsigned long __my_cpu_offset(void)
+{
+	unsigned long tp;
+	register unsigned long *sp asm("sp");
+	asm("move %0, tp" : "=r" (tp) : "U" (*sp));
+	return tp;
+}
+#define __my_cpu_offset __my_cpu_offset()
+#else
+/*
+ * We don't need to hazard against barrier() since "tp" doesn't ever
+ * change with PREEMPT_NONE, and with PREEMPT_VOLUNTARY it only
+ * changes at function call points, at which we are already re-reading
+ * the value of "tp" due to "my_cpu_offset_reg" being a global variable.
+ */
+#define __my_cpu_offset my_cpu_offset_reg
+#endif
+
+#define set_my_cpu_offset(tp) (my_cpu_offset_reg = (tp))
 
 #include <asm-generic/percpu.h>
···
 #include <linux/mmzone.h>
 #include <linux/dcache.h>
 #include <linux/fs.h>
+#include <linux/string.h>
 #include <asm/backtrace.h>
 #include <asm/page.h>
 #include <asm/ucontext.h>
···
 	}
 
 	if (vma->vm_file) {
-		char *s;
 		p = d_path(&vma->vm_file->f_path, buf, bufsize);
 		if (IS_ERR(p))
 			p = "?";
-		s = strrchr(p, '/');
-		if (s)
-			p = s+1;
+		name = kbasename(p);
 	} else {
-		p = "anon";
+		name = "anon";
 	}
 
 	/* Generate a string description of the vma info. */
-	namelen = strlen(p);
+	namelen = strlen(name);
 	remaining = (bufsize - 1) - namelen;
-	memmove(buf, p, namelen);
+	memmove(buf, name, namelen);
 	snprintf(buf + namelen, remaining, "[%lx+%lx] ",
 		 vma->vm_start, vma->vm_end - vma->vm_start);
 }
arch/tile/lib/atomic_32.c (+4 -4)
···
 EXPORT_SYMBOL(_atomic_xor);
 
 
-u64 _atomic64_xchg(u64 *v, u64 n)
+long long _atomic64_xchg(long long *v, long long n)
 {
 	return __atomic64_xchg(v, __atomic_setup(v), n);
 }
 EXPORT_SYMBOL(_atomic64_xchg);
 
-u64 _atomic64_xchg_add(u64 *v, u64 i)
+long long _atomic64_xchg_add(long long *v, long long i)
 {
 	return __atomic64_xchg_add(v, __atomic_setup(v), i);
 }
 EXPORT_SYMBOL(_atomic64_xchg_add);
 
-u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u)
+long long _atomic64_xchg_add_unless(long long *v, long long a, long long u)
 {
 	/*
 	 * Note: argument order is switched here since it is easier
···
 }
 EXPORT_SYMBOL(_atomic64_xchg_add_unless);
 
-u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n)
+long long _atomic64_cmpxchg(long long *v, long long o, long long n)
 {
 	return __atomic64_cmpxchg(v, __atomic_setup(v), o, n);
 }
arch/x86/Kconfig (+4 -3)
···
 
 config X86_UP_APIC
 	bool "Local APIC support on uniprocessors"
-	depends on X86_32 && !SMP && !X86_32_NON_STANDARD
+	depends on X86_32 && !SMP && !X86_32_NON_STANDARD && !PCI_MSI
 	---help---
 	  A local APIC (Advanced Programmable Interrupt Controller) is an
 	  integrated interrupt controller in the CPU. If you have a single-CPU
···
 
 config X86_LOCAL_APIC
 	def_bool y
-	depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC
+	depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC || PCI_MSI
 
 config X86_IO_APIC
 	def_bool y
-	depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC
+	depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC || PCI_MSI
 
 config X86_VISWS_APIC
 	def_bool y
···
 
 config MICROCODE
 	tristate "CPU microcode loading support"
+	depends on CPU_SUP_AMD || CPU_SUP_INTEL
 	select FW_LOADER
 	---help---
 
arch/x86/include/asm/cpufeature.h (+3 -3)
···
 	 * Catch too early usage of this before alternatives
 	 * have run.
 	 */
-	asm goto("1: jmp %l[t_warn]\n"
+	asm_volatile_goto("1: jmp %l[t_warn]\n"
 		 "2:\n"
 		 ".section .altinstructions,\"a\"\n"
 		 " .long 1b - .\n"
···
 
 #endif
 
-	asm goto("1: jmp %l[t_no]\n"
+	asm_volatile_goto("1: jmp %l[t_no]\n"
 		 "2:\n"
 		 ".section .altinstructions,\"a\"\n"
 		 " .long 1b - .\n"
···
 	 * have. Thus, we force the jump to the widest, 4-byte, signed relative
 	 * offset even though the last would often fit in less bytes.
 	 */
-	asm goto("1: .byte 0xe9\n .long %l[t_dynamic] - 2f\n"
+	asm_volatile_goto("1: .byte 0xe9\n .long %l[t_dynamic] - 2f\n"
 		 "2:\n"
 		 ".section .altinstructions,\"a\"\n"
 		 " .long 1b - .\n" /* src offset */
···
 	   old memory can be recycled */
 	make_lowmem_page_readwrite(xen_initial_gdt);
 
+#ifdef CONFIG_X86_32
+	/*
+	 * Xen starts us with XEN_FLAT_RING1_DS, but linux code
+	 * expects __USER_DS
+	 */
+	loadsegment(ds, __USER_DS);
+	loadsegment(es, __USER_DS);
+#endif
+
 	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
 }
block/partitions/efi.c (+6 -1)
···
 	 * the disk size.
 	 *
 	 * Hybrid MBRs do not necessarily comply with this.
+	 *
+	 * Consider a bad value here to be a warning to support dd'ing
+	 * an image from a smaller disk to a larger disk.
 	 */
 	if (ret == GPT_MBR_PROTECTIVE) {
 		sz = le32_to_cpu(mbr->partition_record[part].size_in_lba);
 		if (sz != (uint32_t) total_sectors - 1 && sz != 0xFFFFFFFF)
-			ret = 0;
+			pr_debug("GPT: mbr size in lba (%u) different than whole disk (%u).\n",
+				 sz, min_t(uint32_t,
+					   total_sectors - 1, 0xFFFFFFFF));
 	}
 done:
 	return ret;
drivers/acpi/Kconfig (+4 -4)
···
 	  are configured, ACPI is used.
 
 	  The project home page for the Linux ACPI subsystem is here:
-	  <http://www.lesswatts.org/projects/acpi/>
+	  <https://01.org/linux-acpi>
 
 	  Linux support for ACPI is based on Intel Corporation's ACPI
 	  Component Architecture (ACPI CA). For more information on the
···
 	default y
 	help
 	  This driver handles events on the power, sleep, and lid buttons.
-	  A daemon reads /proc/acpi/event and perform user-defined actions
-	  such as shutting down the system. This is necessary for
-	  software-controlled poweroff.
+	  A daemon reads events from input devices or via netlink and
+	  performs user-defined actions such as shutting down the system.
+	  This is necessary for software-controlled poweroff.
 
 	  To compile this driver as a module, choose M here:
 	  the module will be called button.
drivers/acpi/device_pm.c (-56)
···
 	}
 }
 EXPORT_SYMBOL_GPL(acpi_dev_pm_detach);
-
-/**
- * acpi_dev_pm_add_dependent - Add physical device depending for PM.
- * @handle: Handle of ACPI device node.
- * @depdev: Device depending on that node for PM.
- */
-void acpi_dev_pm_add_dependent(acpi_handle handle, struct device *depdev)
-{
-	struct acpi_device_physical_node *dep;
-	struct acpi_device *adev;
-
-	if (!depdev || acpi_bus_get_device(handle, &adev))
-		return;
-
-	mutex_lock(&adev->physical_node_lock);
-
-	list_for_each_entry(dep, &adev->power_dependent, node)
-		if (dep->dev == depdev)
-			goto out;
-
-	dep = kzalloc(sizeof(*dep), GFP_KERNEL);
-	if (dep) {
-		dep->dev = depdev;
-		list_add_tail(&dep->node, &adev->power_dependent);
-	}
-
- out:
-	mutex_unlock(&adev->physical_node_lock);
-}
-EXPORT_SYMBOL_GPL(acpi_dev_pm_add_dependent);
-
-/**
- * acpi_dev_pm_remove_dependent - Remove physical device depending for PM.
- * @handle: Handle of ACPI device node.
- * @depdev: Device depending on that node for PM.
- */
-void acpi_dev_pm_remove_dependent(acpi_handle handle, struct device *depdev)
-{
-	struct acpi_device_physical_node *dep;
-	struct acpi_device *adev;
-
-	if (!depdev || acpi_bus_get_device(handle, &adev))
-		return;
-
-	mutex_lock(&adev->physical_node_lock);
-
-	list_for_each_entry(dep, &adev->power_dependent, node)
-		if (dep->dev == depdev) {
-			list_del(&dep->node);
-			kfree(dep);
-			break;
-		}
-
-	mutex_unlock(&adev->physical_node_lock);
-}
-EXPORT_SYMBOL_GPL(acpi_dev_pm_remove_dependent);
 #endif /* CONFIG_PM */
···
 		if (freq->frequency == CPUFREQ_ENTRY_INVALID)
 			continue;
 
-		dvfs = &s3c64xx_dvfs_table[freq->index];
+		dvfs = &s3c64xx_dvfs_table[freq->driver_data];
 		found = 0;
 
 		for (i = 0; i < count; i++) {
drivers/dma/edma.c (+1)
···
 						      EDMA_SLOT_ANY);
 		if (echan->slot[i] < 0) {
 			dev_err(dev, "Failed to allocate slot\n");
+			kfree(edesc);
 			return NULL;
 		}
 	}
···
 		/* Speaker Allocation Data Block */
 		if (dbl == 3) {
 			*sadb = kmalloc(dbl, GFP_KERNEL);
+			if (!*sadb)
+				return -ENOMEM;
 			memcpy(*sadb, &db[1], dbl);
 			count = dbl;
 			break;
drivers/gpu/drm/drm_fb_helper.c (-8)
···
 		return;
 
 	/*
-	 * fbdev->blank can be called from irq context in case of a panic.
-	 * Since we already have our own special panic handler which will
-	 * restore the fbdev console mode completely, just bail out early.
-	 */
-	if (oops_in_progress)
-		return;
-
-	/*
 	 * For each CRTC in this fb, turn the connectors on/off.
 	 */
 	drm_modeset_lock_all(dev);
···
 	 * then we do not take part in VGA arbitration and the
 	 * vga_client_register() fails with -ENODEV.
 	 */
-	if (!HAS_PCH_SPLIT(dev)) {
-		ret = vga_client_register(dev->pdev, dev, NULL,
-					  i915_vga_set_decode);
-		if (ret && ret != -ENODEV)
-			goto out;
-	}
+	ret = vga_client_register(dev->pdev, dev, NULL, i915_vga_set_decode);
+	if (ret && ret != -ENODEV)
+		goto out;
 
 	intel_register_dsm_handler();
···
 	 * tiny window where we will loose hotplug notifactions.
 	 */
 	intel_fbdev_initial_config(dev);
-
-	/*
-	 * Must do this after fbcon init so that
-	 * vgacon_save_screen() works during the handover.
-	 */
-	i915_disable_vga_mem(dev);
 
 	/* Only enable hotplug handling once the fbdev is fully set up. */
 	dev_priv->enable_hotplug_processing = true;
···
 	/* fglrx clears sth in AFMT_AUDIO_PACKET_CONTROL2 here */
 
 	WREG32(HDMI_ACR_PACKET_CONTROL + offset,
-	       HDMI_ACR_AUTO_SEND | /* allow hw to sent ACR packets when required */
-	       HDMI_ACR_SOURCE); /* select SW CTS value */
+	       HDMI_ACR_AUTO_SEND); /* allow hw to sent ACR packets when required */
 
 	evergreen_hdmi_update_ACR(encoder, mode->clock);
···
 	if (enable) {
 		mutex_lock(&rdev->pm.mutex);
 		rdev->pm.dpm.uvd_active = true;
+		/* disable this for now */
+#if 0
 		if ((rdev->pm.dpm.sd == 1) && (rdev->pm.dpm.hd == 0))
 			dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_SD;
 		else if ((rdev->pm.dpm.sd == 2) && (rdev->pm.dpm.hd == 0))
···
 		else if ((rdev->pm.dpm.sd == 0) && (rdev->pm.dpm.hd == 2))
 			dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD2;
 		else
+#endif
 			dpm_state = POWER_STATE_TYPE_INTERNAL_UVD;
 		rdev->pm.dpm.state = dpm_state;
 		mutex_unlock(&rdev->pm.mutex);
drivers/gpu/drm/radeon/radeon_test.c (+2 -2)
···
 	struct radeon_bo *vram_obj = NULL;
 	struct radeon_bo **gtt_obj = NULL;
 	uint64_t gtt_addr, vram_addr;
-	unsigned i, n, size;
-	int r, ring;
+	unsigned n, size;
+	int i, r, ring;
 
 	switch (flag) {
 	case RADEON_TEST_COPY_DMA:
drivers/gpu/drm/radeon/radeon_uvd.c (+2 -1)
···
 		    (rdev->pm.dpm.hd != hd)) {
 			rdev->pm.dpm.sd = sd;
 			rdev->pm.dpm.hd = hd;
-			streams_changed = true;
+			/* disable this for now */
+			/*streams_changed = true;*/
 		}
 	}
···
  * - USB ID 04d9:a067, sold as Sharkoon Drakonia and Perixx MX-2000
  * - USB ID 04d9:a04a, sold as Tracer Sniper TRM-503, NOVA Gaming Slider X200
  *   and Zalman ZM-GM1
+ * - USB ID 04d9:a081, sold as SHARKOON DarkGlider Gaming mouse
  */
 
 static __u8 *holtek_mouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,
···
 		}
 		break;
 	case USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A:
+	case USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081:
 		if (*rsize >= 113 && rdesc[106] == 0xff && rdesc[107] == 0x7f
 				&& rdesc[111] == 0xff && rdesc[112] == 0x7f) {
 			hid_info(hdev, "Fixing up report descriptor\n");
···
 			USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT,
 			USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A) },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT,
+			USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081) },
 	{ }
 };
 MODULE_DEVICE_TABLE(hid, holtek_mouse_devices);
···
 
 static int read_smc(u8 cmd, const char *key, u8 *buffer, u8 len)
 {
+	u8 status, data = 0;
 	int i;
 
 	if (send_command(cmd) || send_argument(key)) {
···
 		return -EIO;
 	}
 
+	/* This has no effect on newer (2012) SMCs */
 	if (send_byte(len, APPLESMC_DATA_PORT)) {
 		pr_warn("%.4s: read len fail\n", key);
 		return -EIO;
···
 		}
 		buffer[i] = inb(APPLESMC_DATA_PORT);
 	}
+
+	/* Read the data port until bit0 is cleared */
+	for (i = 0; i < 16; i++) {
+		udelay(APPLESMC_MIN_WAIT);
+		status = inb(APPLESMC_CMD_PORT);
+		if (!(status & 0x01))
+			break;
+		data = inb(APPLESMC_DATA_PORT);
+	}
+	if (i)
+		pr_warn("flushed %d bytes, last value is: %d\n", i, data);
 
 	return 0;
 }
···
 	select PCI_PRI
 	select PCI_PASID
 	select IOMMU_API
-	depends on X86_64 && PCI && ACPI && X86_IO_APIC
+	depends on X86_64 && PCI && ACPI
 	---help---
 	  With this option you can enable support for AMD IOMMU hardware in
 	  your system. An IOMMU is a hardware component which provides
···
 	struct device_node *cpun, *cpus;
 
 	cpus = of_find_node_by_path("/cpus");
-	if (!cpus) {
-		pr_warn("Missing cpus node, bailing out\n");
+	if (!cpus)
 		return NULL;
-	}
 
 	for_each_child_of_node(cpus, cpun) {
 		if (of_node_cmp(cpun->type, "cpu"))
drivers/of/fdt.c (-12)
···
 #include <linux/string.h>
 #include <linux/errno.h>
 #include <linux/slab.h>
-#include <linux/random.h>
 
 #include <asm/setup.h>  /* for COMMAND_LINE_SIZE */
 #ifdef CONFIG_PPC
···
 }
 
 #endif /* CONFIG_OF_EARLY_FLATTREE */
-
-/* Feed entire flattened device tree into the random pool */
-static int __init add_fdt_randomness(void)
-{
-	if (initial_boot_params)
-		add_device_randomness(initial_boot_params,
-				be32_to_cpu(initial_boot_params->totalsize));
-
-	return 0;
-}
-core_initcall(add_fdt_randomness);
drivers/of/of_reserved_mem.c (-173)
···
-/*
- * Device tree based initialization code for reserved memory.
- *
- * Copyright (c) 2013 Samsung Electronics Co., Ltd.
- *		http://www.samsung.com
- * Author: Marek Szyprowski <m.szyprowski@samsung.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- */
-
-#include <linux/memblock.h>
-#include <linux/err.h>
-#include <linux/of.h>
-#include <linux/of_fdt.h>
-#include <linux/of_platform.h>
-#include <linux/mm.h>
-#include <linux/sizes.h>
-#include <linux/mm_types.h>
-#include <linux/dma-contiguous.h>
-#include <linux/dma-mapping.h>
-#include <linux/of_reserved_mem.h>
-
-#define MAX_RESERVED_REGIONS	16
-struct reserved_mem {
-	phys_addr_t	base;
-	unsigned long	size;
-	struct cma	*cma;
-	char		name[32];
-};
-static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
-static int reserved_mem_count;
-
-static int __init fdt_scan_reserved_mem(unsigned long node, const char *uname,
-					int depth, void *data)
-{
-	struct reserved_mem *rmem = &reserved_mem[reserved_mem_count];
-	phys_addr_t base, size;
-	int is_cma, is_reserved;
-	unsigned long len;
-	const char *status;
-	__be32 *prop;
-
-	is_cma = IS_ENABLED(CONFIG_DMA_CMA) &&
-	       of_flat_dt_is_compatible(node, "linux,contiguous-memory-region");
-	is_reserved = of_flat_dt_is_compatible(node, "reserved-memory-region");
-
-	if (!is_reserved && !is_cma) {
-		/* ignore node and scan next one */
-		return 0;
-	}
-
-	status = of_get_flat_dt_prop(node, "status", &len);
-	if (status && strcmp(status, "okay") != 0) {
-		/* ignore disabled node nad scan next one */
-		return 0;
-	}
-
-	prop = of_get_flat_dt_prop(node, "reg", &len);
-	if (!prop || (len < (dt_root_size_cells + dt_root_addr_cells) *
-		      sizeof(__be32))) {
-		pr_err("Reserved mem: node %s, incorrect \"reg\" property\n",
-		       uname);
-		/* ignore node and scan next one */
-		return 0;
-	}
-	base = dt_mem_next_cell(dt_root_addr_cells, &prop);
-	size = dt_mem_next_cell(dt_root_size_cells, &prop);
-
-	if (!size) {
-		/* ignore node and scan next one */
-		return 0;
-	}
-
-	pr_info("Reserved mem: found %s, memory base %lx, size %ld MiB\n",
-		uname, (unsigned long)base, (unsigned long)size / SZ_1M);
-
-	if (reserved_mem_count == ARRAY_SIZE(reserved_mem))
-		return -ENOSPC;
-
-	rmem->base = base;
-	rmem->size = size;
-	strlcpy(rmem->name, uname, sizeof(rmem->name));
-
-	if (is_cma) {
-		struct cma *cma;
-		if (dma_contiguous_reserve_area(size, base, 0, &cma) == 0) {
-			rmem->cma = cma;
-			reserved_mem_count++;
-			if (of_get_flat_dt_prop(node,
-						"linux,default-contiguous-region",
-						NULL))
-				dma_contiguous_set_default(cma);
-		}
-	} else if (is_reserved) {
-		if (memblock_remove(base, size) == 0)
-			reserved_mem_count++;
-		else
-			pr_err("Failed to reserve memory for %s\n", uname);
-	}
-
-	return 0;
-}
-
-static struct reserved_mem *get_dma_memory_region(struct device *dev)
-{
-	struct device_node *node;
-	const char *name;
-	int i;
-
-	node = of_parse_phandle(dev->of_node, "memory-region", 0);
-	if (!node)
-		return NULL;
-
-	name = kbasename(node->full_name);
-	for (i = 0; i < reserved_mem_count; i++)
-		if (strcmp(name, reserved_mem[i].name) == 0)
-			return &reserved_mem[i];
-	return NULL;
-}
-
-/**
- * of_reserved_mem_device_init() - assign reserved memory region to given device
- *
- * This function assign memory region pointed by "memory-region" device tree
- * property to the given device.
- */
-void of_reserved_mem_device_init(struct device *dev)
-{
-	struct reserved_mem *region = get_dma_memory_region(dev);
-	if (!region)
-		return;
-
-	if (region->cma) {
-		dev_set_cma_area(dev, region->cma);
-		pr_info("Assigned CMA %s to %s device\n", region->name,
-			dev_name(dev));
-	} else {
-		if (dma_declare_coherent_memory(dev, region->base, region->base,
-		    region->size, DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE) != 0)
-			pr_info("Declared reserved memory %s to %s device\n",
-				region->name, dev_name(dev));
-	}
-}
-
-/**
- * of_reserved_mem_device_release() - release reserved memory device structures
- *
- * This function releases structures allocated for memory region handling for
- * the given device.
- */
-void of_reserved_mem_device_release(struct device *dev)
-{
-	struct reserved_mem *region = get_dma_memory_region(dev);
-	if (!region && !region->cma)
-		dma_release_declared_memory(dev);
-}
-
-/**
- * early_init_dt_scan_reserved_mem() - create reserved memory regions
- *
- * This function grabs memory from early allocator for device exclusive use
- * defined in device tree structures. It should be called by arch specific code
- * once the early allocator (memblock) has been activated and all other
- * subsystems have already allocated/reserved memory.
- */
-void __init early_init_dt_scan_reserved_mem(void)
-{
-	of_scan_flat_dt_by_path("/memory/reserved-memory",
-				fdt_scan_reserved_mem, NULL);
-}
drivers/of/platform.c (-4)
···
 #include <linux/of_device.h>
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
-#include <linux/of_reserved_mem.h>
 #include <linux/platform_device.h>
 
 const struct of_device_id of_default_bus_match_table[] = {
···
 	dev->dev.bus = &platform_bus_type;
 	dev->dev.platform_data = platform_data;
 
-	of_reserved_mem_device_init(&dev->dev);
-
 	/* We do not fill the DMA ops for platform devices by default.
 	 * This is currently the responsibility of the platform code
 	 * to do such, possibly using a device notifier
···
 
 	if (of_device_add(dev) != 0) {
 		platform_device_put(dev);
-		of_reserved_mem_device_release(&dev->dev);
 		return NULL;
 	}
 
drivers/pci/hotplug/acpiphp_glue.c (+5 -3)
···
 
 	/*
 	 * This bridge should have been registered as a hotplug function
-	 * under its parent, so the context has to be there. If not, we
-	 * are in deep goo.
+	 * under its parent, so the context should be there, unless the
+	 * parent is going to be handled by pciehp, in which case this
+	 * bridge is not interesting to us either.
 	 */
 	mutex_lock(&acpiphp_context_lock);
 	context = acpiphp_get_context(handle);
-	if (WARN_ON(!context)) {
+	if (!context) {
 		mutex_unlock(&acpiphp_context_lock);
 		put_device(&bus->dev);
+		pci_dev_put(bridge->pci_dev);
 		kfree(bridge);
 		return;
 	}
drivers/s390/char/sclp_cmd.c (+5 -3)
···
 
 	if (sccb->header.response_code != 0x20)
 		return 0;
-	if (sccb->sclp_send_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK))
-		return 1;
-	return 0;
+	if (!(sccb->sclp_send_mask & (EVTYP_OPCMD_MASK | EVTYP_PMSGCMD_MASK)))
+		return 0;
+	if (!(sccb->sclp_receive_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK)))
+		return 0;
+	return 1;
 }
 
 bool __init sclp_has_vt220(void)
drivers/s390/char/tty3270.c (+1 -1)
···
 	struct winsize ws;
 
 	screen = tty3270_alloc_screen(tp->n_rows, tp->n_cols);
-	if (!screen)
+	if (IS_ERR(screen))
 		return;
 	/* Switch to new output size */
 	spin_lock_bh(&tp->view.lock);
drivers/spi/spi-atmel.c (+2 -1)
···
 	/* Initialize the hardware */
 	ret = clk_prepare_enable(clk);
 	if (ret)
-		goto out_unmap_regs;
+		goto out_free_irq;
 	spi_writel(as, CR, SPI_BIT(SWRST));
 	spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */
 	if (as->caps.has_wdrbt) {
···
 	spi_writel(as, CR, SPI_BIT(SWRST));
 	spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */
 	clk_disable_unprepare(clk);
+out_free_irq:
 	free_irq(irq, master);
 out_unmap_regs:
 	iounmap(as->regs);
···
 	master->bus_num = bus_num;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!res) {
-		dev_err(&pdev->dev, "can't get platform resource\n");
-		ret = -EINVAL;
-		goto out_master_put;
-	}
-
 	dspi->base = devm_ioremap_resource(&pdev->dev, res);
-	if (!dspi->base) {
-		ret = -EINVAL;
+	if (IS_ERR(dspi->base)) {
+		ret = PTR_ERR(dspi->base);
 		goto out_master_put;
 	}
 
drivers/spi/spi-mpc512x-psc.c (+3 -1)
···
 	psc_num = master->bus_num;
 	snprintf(clk_name, sizeof(clk_name), "psc%d_mclk", psc_num);
 	clk = devm_clk_get(dev, clk_name);
-	if (IS_ERR(clk))
+	if (IS_ERR(clk)) {
+		ret = PTR_ERR(clk);
 		goto free_irq;
+	}
 	ret = clk_prepare_enable(clk);
 	if (ret)
 		goto free_irq;
drivers/spi/spi-pxa2xx.c (+10 -1)
···
 	if (pm_runtime_suspended(&drv_data->pdev->dev))
 		return IRQ_NONE;
 
-	sccr1_reg = read_SSCR1(reg);
+	/*
+	 * If the device is not yet in RPM suspended state and we get an
+	 * interrupt that is meant for another device, check if status bits
+	 * are all set to one. That means that the device is already
+	 * powered off.
+	 */
 	status = read_SSSR(reg);
+	if (status == ~0)
+		return IRQ_NONE;
+
+	sccr1_reg = read_SSCR1(reg);
 
 	/* Ignore possible writes if we don't need to write */
 	if (!(sccr1_reg & SSCR1_TIE))
···
 	if (!mmres || !irqres)
 		return -ENODEV;
 
-	if (np)
+	if (np) {
 		port = of_alias_get_id(np, "serial");
 		if (port >= VT8500_MAX_PORTS)
 			port = -1;
-	else
+	} else {
 		port = -1;
+	}
 
 	if (port < 0) {
 		/* calculate the port id */
drivers/usb/chipidea/host.c (+4 -2)
···
 {
 	struct usb_hcd *hcd = ci->hcd;
 
-	usb_remove_hcd(hcd);
-	usb_put_hcd(hcd);
+	if (hcd) {
+		usb_remove_hcd(hcd);
+		usb_put_hcd(hcd);
+	}
 	if (ci->platdata->reg_vbus)
 		regulator_disable(ci->platdata->reg_vbus);
 }
···
 		t1 = xhci_port_state_to_neutral(t1);
 		if (t1 != t2)
 			xhci_writel(xhci, t2, port_array[port_index]);
-
-		if (hcd->speed != HCD_USB3) {
-			/* enable remote wake up for USB 2.0 */
-			__le32 __iomem *addr;
-			u32 tmp;
-
-			/* Get the port power control register address. */
-			addr = port_array[port_index] + PORTPMSC;
-			tmp = xhci_readl(xhci, addr);
-			tmp |= PORT_RWE;
-			xhci_writel(xhci, tmp, addr);
-		}
 	}
 	hcd->state = HC_STATE_SUSPENDED;
 	bus_state->next_statechange = jiffies + msecs_to_jiffies(10);
···
 			xhci_ring_device(xhci, slot_id);
 		} else
 			xhci_writel(xhci, temp, port_array[port_index]);
-
-		if (hcd->speed != HCD_USB3) {
-			/* disable remote wake up for USB 2.0 */
-			__le32 __iomem *addr;
-			u32 tmp;
-
-			/* Add one to the port status register address to get
-			 * the port power control register address.
-			 */
-			addr = port_array[port_index] + PORTPMSC;
-			tmp = xhci_readl(xhci, addr);
-			tmp &= ~PORT_RWE;
-			xhci_writel(xhci, tmp, addr);
-		}
 	}
 
 	(void) xhci_readl(xhci, &xhci->op_regs->command);
drivers/usb/host/xhci-pci.c (+25)
···
 #define PCI_VENDOR_ID_ETRON		0x1b6f
 #define PCI_DEVICE_ID_ASROCK_P67	0x7023
 
+#define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI	0x8c31
+#define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI	0x9c31
+
 static const char hcd_name[] = "xhci_hcd";
 
 /* called after powerup, by probe or system-pm "wakeup" */
···
 		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
 				"QUIRK: Fresco Logic xHC needs configure"
 				" endpoint cmd after reset endpoint");
+	}
+	if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK &&
+			pdev->revision == 0x4) {
+		xhci->quirks |= XHCI_SLOW_SUSPEND;
+		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
+				"QUIRK: Fresco Logic xHC revision %u"
+				"must be suspended extra slowly",
+				pdev->revision);
 	}
 	/* Fresco Logic confirms: all revisions of this chip do not
 	 * support MSI, even though some of them claim to in their PCI
···
 		 */
 		xhci->quirks |= XHCI_SPURIOUS_REBOOT;
 		xhci->quirks |= XHCI_AVOID_BEI;
+	}
+	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+		(pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI ||
+		 pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI)) {
+		/* Workaround for occasional spurious wakeups from S5 (or
+		 * any other sleep) on Haswell machines with LPT and LPT-LP
+		 * with the new Intel BIOS
+		 */
+		xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
 	}
 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
 			pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
···
 		usb_put_hcd(xhci->shared_hcd);
 	}
 	usb_hcd_pci_remove(dev);
+
+	/* Workaround for spurious wakeups at shutdown with HSW */
+	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+		pci_set_power_state(dev, PCI_D3hot);
+
 	kfree(xhci);
 }
drivers/usb/host/xhci.c (+13 -1)
···
 
 	spin_lock_irq(&xhci->lock);
 	xhci_halt(xhci);
+	/* Workaround for spurious wakeups at shutdown with HSW */
+	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+		xhci_reset(xhci);
 	spin_unlock_irq(&xhci->lock);
 
 	xhci_cleanup_msix(xhci);
···
 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
 			"xhci_shutdown completed - status = %x",
 			xhci_readl(xhci, &xhci->op_regs->status));
+
+	/* Yet another workaround for spurious wakeups at shutdown with HSW */
+	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+		pci_set_power_state(to_pci_dev(hcd->self.controller), PCI_D3hot);
 }
 
 #ifdef CONFIG_PM
···
 int xhci_suspend(struct xhci_hcd *xhci)
 {
 	int			rc = 0;
+	unsigned int		delay = XHCI_MAX_HALT_USEC;
 	struct usb_hcd		*hcd = xhci_to_hcd(xhci);
 	u32			command;
···
 	command = xhci_readl(xhci, &xhci->op_regs->command);
 	command &= ~CMD_RUN;
 	xhci_writel(xhci, command, &xhci->op_regs->command);
+
+	/* Some chips from Fresco Logic need an extraordinary delay */
+	delay *= (xhci->quirks & XHCI_SLOW_SUSPEND) ? 10 : 1;
+
 	if (xhci_handshake(xhci, &xhci->op_regs->status,
-		      STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC)) {
+		      STS_HALT, STS_HALT, delay)) {
 		xhci_warn(xhci, "WARN: xHC CMD_RUN timeout\n");
 		spin_unlock_irq(&xhci->lock);
 		return -ETIMEDOUT;
drivers/usb/host/xhci.h (+2)
···
 #define XHCI_COMP_MODE_QUIRK	(1 << 14)
 #define XHCI_AVOID_BEI		(1 << 15)
 #define XHCI_PLAT		(1 << 16)
+#define XHCI_SLOW_SUSPEND	(1 << 17)
+#define XHCI_SPURIOUS_WAKEUP	(1 << 18)
 	unsigned int		num_active_eps;
 	unsigned int		limit_active_eps;
 	/* There are two roothubs to keep track of bus suspend info for */
drivers/usb/misc/Kconfig (+1 -1)
···
 config USB_HSIC_USB3503
 	tristate "USB3503 HSIC to USB20 Driver"
 	depends on I2C
-	select REGMAP
+	select REGMAP_I2C
 	help
 	  This option enables support for SMSC USB3503 HSIC to USB 2.0 Driver.
drivers/usb/musb/musb_core.c (+46)
···
 }
 
 /*
+ * Program the HDRC to start (enable interrupts, dma, etc.).
+ */
+void musb_start(struct musb *musb)
+{
+	void __iomem *regs = musb->mregs;
+	u8 devctl = musb_readb(regs, MUSB_DEVCTL);
+
+	dev_dbg(musb->controller, "<== devctl %02x\n", devctl);
+
+	/* Set INT enable registers, enable interrupts */
+	musb->intrtxe = musb->epmask;
+	musb_writew(regs, MUSB_INTRTXE, musb->intrtxe);
+	musb->intrrxe = musb->epmask & 0xfffe;
+	musb_writew(regs, MUSB_INTRRXE, musb->intrrxe);
+	musb_writeb(regs, MUSB_INTRUSBE, 0xf7);
+
+	musb_writeb(regs, MUSB_TESTMODE, 0);
+
+	/* put into basic highspeed mode and start session */
+	musb_writeb(regs, MUSB_POWER, MUSB_POWER_ISOUPDATE
+			| MUSB_POWER_HSENAB
+			/* ENSUSPEND wedges tusb */
+			/* | MUSB_POWER_ENSUSPEND */
+		   );
+
+	musb->is_active = 0;
+	devctl = musb_readb(regs, MUSB_DEVCTL);
+	devctl &= ~MUSB_DEVCTL_SESSION;
+
+	/* session started after:
+	 * (a) ID-grounded irq, host mode;
+	 * (b) vbus present/connect IRQ, peripheral mode;
+	 * (c) peripheral initiates, using SRP
+	 */
+	if (musb->port_mode != MUSB_PORT_MODE_HOST &&
+			(devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS) {
+		musb->is_active = 1;
+	} else {
+		devctl |= MUSB_DEVCTL_SESSION;
+	}
+
+	musb_platform_enable(musb);
+	musb_writeb(regs, MUSB_DEVCTL, devctl);
+}
+
+/*
  * Make the HDRC stop (disable interrupts, etc.);
  * reversible by musb_start
  * called on gadget driver unregister
···
 		musb->gadget_driver = driver;
 
 		spin_lock_irqsave(&musb->lock, flags);
+		musb->is_active = 1;
 
 		otg_set_peripheral(otg, &musb->g);
 		musb->xceiv->state = OTG_STATE_B_IDLE;
 		spin_unlock_irqrestore(&musb->lock, flags);
+
+		musb_start(musb);
 
 		/* REVISIT:  funcall to other code, which also
 		 * handles power budgeting ... this way also
···
 	/*
 	 * Many devices do not respond properly to READ_CAPACITY_16.
 	 * Tell the SCSI layer to try READ_CAPACITY_10 first.
+	 * However some USB 3.0 drive enclosures return capacity
+	 * modulo 2TB. Those must use READ_CAPACITY_16
 	 */
-	sdev->try_rc_10_first = 1;
+	if (!(us->fflags & US_FL_NEEDS_CAP16))
+		sdev->try_rc_10_first = 1;
 
 	/* assume SPC3 or latter devices support sense size > 18 */
 	if (sdev->scsi_level > SCSI_SPC_2)
···
 	long npage;
 	int ret = 0, prot = 0;
 	uint64_t mask;
+	struct vfio_dma *dma = NULL;
+	unsigned long pfn;
 
 	end = map->iova + map->size;
···
 	}
 
 	for (iova = map->iova; iova < end; iova += size, vaddr += size) {
-		struct vfio_dma *dma = NULL;
-		unsigned long pfn;
 		long i;
 
 		/* Pin a contiguous chunk of memory */
···
 		if (npage <= 0) {
 			WARN_ON(!npage);
 			ret = (int)npage;
-			break;
+			goto out;
 		}
 
 		/* Verify pages are not already mapped */
		for (i = 0; i < npage; i++) {
			if (iommu_iova_to_phys(iommu->domain,
					       iova + (i << PAGE_SHIFT))) {
-				vfio_unpin_pages(pfn, npage, prot, true);
				ret = -EBUSY;
-				break;
+				goto out_unpin;
			}
		}
···
 		if (ret) {
 			if (ret != -EBUSY ||
 			    map_try_harder(iommu, iova, pfn, npage, prot)) {
-				vfio_unpin_pages(pfn, npage, prot, true);
-				break;
+				goto out_unpin;
 			}
 		}
···
 			dma = kzalloc(sizeof(*dma), GFP_KERNEL);
 			if (!dma) {
 				iommu_unmap(iommu->domain, iova, size);
-				vfio_unpin_pages(pfn, npage, prot, true);
 				ret = -ENOMEM;
-				break;
+				goto out_unpin;
 			}
 
 			dma->size = size;
···
 		}
 	}
 
-	if (ret) {
-		struct vfio_dma *tmp;
-		iova = map->iova;
-		size = map->size;
-		while ((tmp = vfio_find_dma(iommu, iova, size))) {
-			int r = vfio_remove_dma_overlap(iommu, iova,
-							&size, tmp);
-			if (WARN_ON(r || !size))
-				break;
-		}
+	WARN_ON(ret);
+	mutex_unlock(&iommu->lock);
+	return ret;
+
+out_unpin:
+	vfio_unpin_pages(pfn, npage, prot, true);
+
+out:
+	iova = map->iova;
+	size = map->size;
+	while ((dma = vfio_find_dma(iommu, iova, size))) {
+		int r = vfio_remove_dma_overlap(iommu, iova,
+						&size, dma);
+		if (WARN_ON(r || !size))
+			break;
 	}
 
 	mutex_unlock(&iommu->lock);
+6
drivers/w1/w1.c
···
 	sl = dev_to_w1_slave(dev);
 	fops = sl->family->fops;
 
+	if (!fops)
+		return 0;
+
 	switch (action) {
 	case BUS_NOTIFY_ADD_DEVICE:
 		/* if the family driver needs to initialize something... */
···
 	atomic_set(&sl->refcnt, 0);
 	init_completion(&sl->released);
 
+	/* slave modules need to be loaded in a context with unlocked mutex */
+	mutex_unlock(&dev->mutex);
 	request_module("w1-family-0x%0x", rn->family);
+	mutex_lock(&dev->mutex);
 
 	spin_lock(&w1_flock);
 	f = w1_family_registered(rn->family);
+6
drivers/watchdog/hpwdt.c
···
 		return -ENODEV;
 	}
 
+	/*
+	 * Ignore all auxilary iLO devices with the following PCI ID
+	 */
+	if (dev->subsystem_device == 0x1979)
+		return -ENODEV;
+
 	if (pci_enable_device(dev)) {
 		dev_warn(&dev->dev,
 			"Not possible to enable PCI Device: 0x%x:0x%x.\n",
···
 		cur_start = state->end + 1;
 		node = rb_next(node);
 		total_bytes += state->end - state->start + 1;
-		if (total_bytes >= max_bytes) {
-			*end = *start + max_bytes - 1;
+		if (total_bytes >= max_bytes)
 			break;
-		}
 		if (!node)
 			break;
 	}
···
 
	/*
	 * make sure to limit the number of pages we try to lock down
-	 * if we're looping.
	 */
-	if (delalloc_end + 1 - delalloc_start > max_bytes && loops)
-		delalloc_end = delalloc_start + PAGE_CACHE_SIZE - 1;
+	if (delalloc_end + 1 - delalloc_start > max_bytes)
+		delalloc_end = delalloc_start + max_bytes - 1;
 
	/* step two, lock all the pages after the page that has start */
	ret = lock_delalloc_pages(inode, locked_page,
···
	 */
	free_extent_state(cached_state);
	if (!loops) {
-		unsigned long offset = (*start) & (PAGE_CACHE_SIZE - 1);
-		max_bytes = PAGE_CACHE_SIZE - offset;
+		max_bytes = PAGE_CACHE_SIZE;
		loops = 1;
		goto again;
	} else {
+2-1
fs/btrfs/inode.c
···
 
 	if (btrfs_extent_readonly(root, disk_bytenr))
 		goto out;
+	btrfs_release_path(path);
 
 	/*
 	 * look for other files referencing this extent, if we
···
 
 	/* check for collisions, even if the name isn't there */
-	ret = btrfs_check_dir_item_collision(root, new_dir->i_ino,
+	ret = btrfs_check_dir_item_collision(dest, new_dir->i_ino,
 			     new_dentry->d_name.name,
 			     new_dentry->d_name.len);
 
···
 			continue;
 		}
 
-		if (btrfs_root_refs(&root->root_item) == 0) {
-			btrfs_add_dead_root(root);
-			continue;
-		}
-
 		err = btrfs_init_fs_root(root);
 		if (err) {
 			btrfs_free_fs_root(root);
···
 			btrfs_free_fs_root(root);
 			break;
 		}
+
+		if (btrfs_root_refs(&root->root_item) == 0)
+			btrfs_add_dead_root(root);
 	}
 
 	btrfs_free_path(path);
+12-2
fs/buffer.c
···
 	struct buffer_head *bh;
 	sector_t end_block;
 	int ret = 0;		/* Will call free_more_memory() */
+	gfp_t gfp_mask;
 
-	page = find_or_create_page(inode->i_mapping, index,
-		(mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE);
+	gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS;
+	gfp_mask |= __GFP_MOVABLE;
+	/*
+	 * XXX: __getblk_slow() can not really deal with failure and
+	 * will endlessly loop on improvised global reclaim.  Prefer
+	 * looping in the allocator rather than here, at least that
+	 * code knows what it's doing.
+	 */
+	gfp_mask |= __GFP_NOFAIL;
+
+	page = find_or_create_page(inode->i_mapping, index, gfp_mask);
 	if (!page)
 		return ret;
+4-2
fs/cifs/cifsfs.c
···
 {
 	struct inode *inode;
 	struct cifs_sb_info *cifs_sb;
+	struct cifs_tcon *tcon;
 	int rc = 0;
 
 	cifs_sb = CIFS_SB(sb);
+	tcon = cifs_sb_master_tcon(cifs_sb);
 
 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIXACL)
 		sb->s_flags |= MS_POSIXACL;
 
-	if (cifs_sb_master_tcon(cifs_sb)->ses->capabilities & CAP_LARGE_FILES)
+	if (tcon->ses->capabilities & tcon->ses->server->vals->cap_large_files)
 		sb->s_maxbytes = MAX_LFS_FILESIZE;
 	else
 		sb->s_maxbytes = MAX_NON_LFS;
···
 		goto out_no_root;
 	}
 
-	if (cifs_sb_master_tcon(cifs_sb)->nocase)
+	if (tcon->nocase)
 		sb->s_d_op = &cifs_ci_dentry_ops;
 	else
 		sb->s_d_op = &cifs_dentry_ops;
···
 	ERRDOS, ERRnoaccess, 0xc0000290}, {
 	ERRDOS, ERRbadfunc, 0xc000029c}, {
 	ERRDOS, ERRsymlink, NT_STATUS_STOPPED_ON_SYMLINK}, {
-	ERRDOS, ERRinvlevel, 0x007c0001}, };
+	ERRDOS, ERRinvlevel, 0x007c0001}, {
+	0, 0, 0 }
+};
 
 /*****************************************************************************
  Print an error message from the status code
+2-2
fs/cifs/sess.c
···
 			return NTLMv2;
 		if (global_secflags & CIFSSEC_MAY_NTLM)
 			return NTLM;
-		/* Fallthrough */
 	default:
-		return Unspecified;
+		/* Fallthrough to attempt LANMAN authentication next */
+		break;
 	}
 	case CIFS_NEGFLAVOR_LANMAN:
 		switch (requested) {
+6
fs/cifs/smb2pdu.c
···
 	else
 		return -EIO;
 
+	/* no need to send SMB logoff if uid already closed due to reconnect */
+	if (ses->need_reconnect)
+		goto smb2_session_already_dead;
+
 	rc = small_smb2_init(SMB2_LOGOFF, NULL, (void **) &req);
 	if (rc)
 		return rc;
···
 	 * No tcon so can't do
 	 * cifs_stats_inc(&tcon->stats.smb2_stats.smb2_com_fail[SMB2...]);
 	 */
+
+smb2_session_already_dead:
 	return rc;
 }
 
+14
fs/cifs/smbfsctl.h
···
 #define FSCTL_QUERY_NETWORK_INTERFACE_INFO 0x001401FC /* BB add struct */
 #define FSCTL_SRV_READ_HASH      0x001441BB /* BB add struct */
 
+/* See FSCC 2.1.2.5 */
 #define IO_REPARSE_TAG_MOUNT_POINT 0xA0000003
 #define IO_REPARSE_TAG_HSM         0xC0000004
 #define IO_REPARSE_TAG_SIS         0x80000007
+#define IO_REPARSE_TAG_HSM2        0x80000006
+#define IO_REPARSE_TAG_DRIVER_EXTENDER 0x80000005
+/* Used by the DFS filter. See MS-DFSC */
+#define IO_REPARSE_TAG_DFS         0x8000000A
+/* Used by the DFS filter See MS-DFSC */
+#define IO_REPARSE_TAG_DFSR        0x80000012
+#define IO_REPARSE_TAG_FILTER_MANAGER 0x8000000B
+/* See section MS-FSCC 2.1.2.4 */
+#define IO_REPARSE_TAG_SYMLINK     0xA000000C
+#define IO_REPARSE_TAG_DEDUP       0x80000013
+#define IO_REPARSE_APPXSTREAM      0xC0000014
+/* NFS symlinks, Win 8/SMB3 and later */
+#define IO_REPARSE_TAG_NFS         0x80000014
 
 /* fsctl flags */
 /* If Flags is set to this value, the request is an FSCTL not ioctl request */
+7-2
fs/cifs/transport.c
···
 wait_for_free_request(struct TCP_Server_Info *server, const int timeout,
 		      const int optype)
 {
-	return wait_for_free_credits(server, timeout,
-				server->ops->get_credits_field(server, optype));
+	int *val;
+
+	val = server->ops->get_credits_field(server, optype);
+	/* Since an echo is already inflight, no need to wait to send another */
+	if (*val <= 0 && optype == CIFS_ECHO_OP)
+		return -EAGAIN;
+	return wait_for_free_credits(server, timeout, val);
 }
 
 static int allocate_mid(struct cifs_ses *ses, struct smb_hdr *in_buf,
···
 #define PULL_UP			(1 << 4)
 #define ALTELECTRICALSEL	(1 << 5)
 
-/* 34xx specific mux bit defines */
+/* omap3/4/5 specific mux bit defines */
 #define INPUT_EN		(1 << 8)
 #define OFF_EN			(1 << 9)
 #define OFFOUT_EN		(1 << 10)
···
 #define OFF_PULL_EN		(1 << 12)
 #define OFF_PULL_UP		(1 << 13)
 #define WAKEUP_EN		(1 << 14)
-
-/* 44xx specific mux bit defines */
 #define WAKEUP_EVENT		(1 << 15)
 
 /* Active pin states */
+15
include/linux/compiler-gcc4.h
···
 #define __visible __attribute__((externally_visible))
 #endif
 
+/*
+ * GCC 'asm goto' miscompiles certain code sequences:
+ *
+ *   http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670
+ *
+ * Work it around via a compiler barrier quirk suggested by Jakub Jelinek.
+ * Fixed in GCC 4.8.2 and later versions.
+ *
+ * (asm goto is automatically volatile - the naming reflects this.)
+ */
+#if GCC_VERSION <= 40801
+# define asm_volatile_goto(x...)	do { asm goto(x); asm (""); } while (0)
+#else
+# define asm_volatile_goto(x...)	do { asm goto(x); } while (0)
+#endif
 
 #ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP
 #if GCC_VERSION >= 40400
+11-39
include/linux/memcontrol.h
···
 extern void mem_cgroup_replace_page_cache(struct page *oldpage,
 					struct page *newpage);
 
-/**
- * mem_cgroup_toggle_oom - toggle the memcg OOM killer for the current task
- * @new: true to enable, false to disable
- *
- * Toggle whether a failed memcg charge should invoke the OOM killer
- * or just return -ENOMEM.  Returns the previous toggle state.
- *
- * NOTE: Any path that enables the OOM killer before charging must
- * call mem_cgroup_oom_synchronize() afterward to finalize the
- * OOM handling and clean up.
- */
-static inline bool mem_cgroup_toggle_oom(bool new)
+static inline void mem_cgroup_oom_enable(void)
 {
-	bool old;
-
-	old = current->memcg_oom.may_oom;
-	current->memcg_oom.may_oom = new;
-
-	return old;
+	WARN_ON(current->memcg_oom.may_oom);
+	current->memcg_oom.may_oom = 1;
 }
 
-static inline void mem_cgroup_enable_oom(void)
+static inline void mem_cgroup_oom_disable(void)
 {
-	bool old = mem_cgroup_toggle_oom(true);
-
-	WARN_ON(old == true);
-}
-
-static inline void mem_cgroup_disable_oom(void)
-{
-	bool old = mem_cgroup_toggle_oom(false);
-
-	WARN_ON(old == false);
+	WARN_ON(!current->memcg_oom.may_oom);
+	current->memcg_oom.may_oom = 0;
 }
 
 static inline bool task_in_memcg_oom(struct task_struct *p)
 {
-	return p->memcg_oom.in_memcg_oom;
+	return p->memcg_oom.memcg;
 }
 
-bool mem_cgroup_oom_synchronize(void);
+bool mem_cgroup_oom_synchronize(bool wait);
 
 #ifdef CONFIG_MEMCG_SWAP
 extern int do_swap_account;
···
 {
 }
 
-static inline bool mem_cgroup_toggle_oom(bool new)
-{
-	return false;
-}
-
-static inline void mem_cgroup_enable_oom(void)
+static inline void mem_cgroup_oom_enable(void)
 {
 }
 
-static inline void mem_cgroup_disable_oom(void)
+static inline void mem_cgroup_oom_disable(void)
 {
 }
 
···
 	return false;
 }
 
-static inline bool mem_cgroup_oom_synchronize(void)
+static inline bool mem_cgroup_oom_synchronize(bool wait)
 {
 	return false;
 }
···
  */
 struct perf_event {
 #ifdef CONFIG_PERF_EVENTS
-	struct list_head		group_entry;
+	/*
+	 * entry onto perf_event_context::event_list;
+	 *   modifications require ctx->lock
+	 *   RCU safe iterations.
+	 */
 	struct list_head		event_entry;
+
+	/*
+	 * XXX: group_entry and sibling_list should be mutually exclusive;
+	 * either you're a sibling on a group, or you're the group leader.
+	 * Rework the code to always use the same list element.
+	 *
+	 * Locked for modification by both ctx->mutex and ctx->lock; holding
+	 * either sufficies for read.
+	 */
+	struct list_head		group_entry;
 	struct list_head		sibling_list;
+
+	/*
+	 * We need storage to track the entries in perf_pmu_migrate_context; we
+	 * cannot use the event_entry because of RCU and we want to keep the
+	 * group in tact which avoids us using the other two entries.
+	 */
+	struct list_head		migrate_entry;
+
 	struct hlist_node		hlist_entry;
 	int				nr_siblings;
 	int				group_flags;
+1
include/linux/random.h
···
 extern void get_random_bytes(void *buf, int nbytes);
 extern void get_random_bytes_arch(void *buf, int nbytes);
 void generate_random_uuid(unsigned char uuid_out[16]);
+extern int random_int_secret_init(void);
 
 #ifndef MODULE
 extern const struct file_operations random_fops, urandom_fops;
+3-4
include/linux/sched.h
···
 	} memcg_batch;
 	unsigned int memcg_kmem_skip_account;
 	struct memcg_oom_info {
+		struct mem_cgroup *memcg;
+		gfp_t gfp_mask;
+		int order;
 		unsigned int may_oom:1;
-		unsigned int in_memcg_oom:1;
-		unsigned int oom_locked:1;
-		int wakeups;
-		struct mem_cgroup *wait_on_memcg;
 	} memcg_oom;
 #endif
 #ifdef CONFIG_UPROBES
+14
include/linux/timex.h
···
 
 #include <asm/timex.h>
 
+#ifndef random_get_entropy
+/*
+ * The random_get_entropy() function is used by the /dev/random driver
+ * in order to extract entropy via the relative unpredictability of
+ * when an interrupt takes places versus a high speed, fine-grained
+ * timing source or cycle counter.  Since it will be occurred on every
+ * single interrupt, it must have a very low cost/overhead.
+ *
+ * By default we use get_cycles() for this purpose, but individual
+ * architectures may override this in their asm/timex.h header file.
+ */
+#define random_get_entropy()	get_cycles()
+#endif
+
 /*
  * SHIFT_PLL is used as a dampening factor to define how much we
  * adjust the frequency correction for a given offset in PLL mode.
+1-1
include/linux/usb/usb_phy_gen_xceiv.h
···
 	unsigned int needs_reset:1;
 };
 
-#if IS_ENABLED(CONFIG_NOP_USB_XCEIV)
+#if defined(CONFIG_NOP_USB_XCEIV) || (defined(CONFIG_NOP_USB_XCEIV_MODULE) && defined(MODULE))
 /* sometimes transceivers are accessed only through e.g. ULPI */
 extern void usb_nop_xceiv_register(void);
 extern void usb_nop_xceiv_unregister(void);
+3-1
include/linux/usb_usual.h
···
 	US_FLAG(INITIAL_READ10,	0x00100000)			\
 		/* Initial READ(10) (and others) must be retried */	\
 	US_FLAG(WRITE_CACHE,	0x00200000)			\
-		/* Write Cache status is not available */
+		/* Write Cache status is not available */	\
+	US_FLAG(NEEDS_CAP16,	0x00400000)
+		/* cannot handle READ_CAPACITY_10 */
 
 #define US_FLAG(name, value)	US_FL_##name = value ,
 enum { US_DO_ALL_FLAGS };
-7
include/linux/vgaarb.h
···
  *     out of the arbitration process (and can be safe to take
  *     interrupts at any time.
  */
-#if defined(CONFIG_VGA_ARB)
 extern void vga_set_legacy_decoding(struct pci_dev *pdev,
 				    unsigned int decodes);
-#else
-static inline void vga_set_legacy_decoding(struct pci_dev *pdev,
-					   unsigned int decodes)
-{
-}
-#endif
 
 /**
  *     vga_get         - acquire & locks VGA resources
···
 
 	sem_lock(sma, NULL, -1);
 
+	if (sma->sem_perm.deleted) {
+		sem_unlock(sma, -1);
+		rcu_read_unlock();
+		return -EIDRM;
+	}
+
 	curr = &sma->sem_base[semnum];
 
 	ipc_assert_locked_object(&sma->sem_perm);
···
 		int i;
 
 		sem_lock(sma, NULL, -1);
+		if (sma->sem_perm.deleted) {
+			err = -EIDRM;
+			goto out_unlock;
+		}
 		if(nsems > SEMMSL_FAST) {
 			if (!ipc_rcu_getref(sma)) {
-				sem_unlock(sma, -1);
-				rcu_read_unlock();
 				err = -EIDRM;
-				goto out_free;
+				goto out_unlock;
 			}
 			sem_unlock(sma, -1);
 			rcu_read_unlock();
···
 			rcu_read_lock();
 			sem_lock_and_putref(sma);
 			if (sma->sem_perm.deleted) {
-				sem_unlock(sma, -1);
-				rcu_read_unlock();
 				err = -EIDRM;
-				goto out_free;
+				goto out_unlock;
 			}
 		}
 		for (i = 0; i < sma->sem_nsems; i++)
···
 		struct sem_undo *un;
 
 		if (!ipc_rcu_getref(sma)) {
-			rcu_read_unlock();
-			return -EIDRM;
+			err = -EIDRM;
+			goto out_rcu_wakeup;
 		}
 		rcu_read_unlock();
···
 		rcu_read_lock();
 		sem_lock_and_putref(sma);
 		if (sma->sem_perm.deleted) {
-			sem_unlock(sma, -1);
-			rcu_read_unlock();
 			err = -EIDRM;
-			goto out_free;
+			goto out_unlock;
 		}
 
 		for (i = 0; i < nsems; i++)
···
 		goto out_rcu_wakeup;
 
 	sem_lock(sma, NULL, -1);
+	if (sma->sem_perm.deleted) {
+		err = -EIDRM;
+		goto out_unlock;
+	}
 	curr = &sma->sem_base[semnum];
 
 	switch (cmd) {
···
 	if (error)
 		goto out_rcu_wakeup;
 
+	error = -EIDRM;
+	locknum = sem_lock(sma, sops, nsops);
+	if (sma->sem_perm.deleted)
+		goto out_unlock_free;
 	/*
 	 * semid identifiers are not unique - find_alloc_undo may have
 	 * allocated an undo structure, it was invalidated by an RMID
···
 	 * This case can be detected checking un->semid. The existence of
 	 * "un" itself is guaranteed by rcu.
 	 */
-	error = -EIDRM;
-	locknum = sem_lock(sma, sops, nsops);
 	if (un && un->semid == -1)
 		goto out_unlock_free;
 
···
 		}
 
 		sem_lock(sma, NULL, -1);
+		/* exit_sem raced with IPC_RMID, nothing to do */
+		if (sma->sem_perm.deleted) {
+			sem_unlock(sma, -1);
+			rcu_read_unlock();
+			continue;
+		}
 		un = __lookup_undo(ulp, semid);
 		if (un == NULL) {
 			/* exit_sem raced with IPC_RMID+semget() that created
+21-6
ipc/util.c
···
  * Pavel Emelianov <xemul@openvz.org>
  *
  * General sysv ipc locking scheme:
- *	when doing ipc id lookups, take the ids->rwsem
- *	rcu_read_lock()
- *		obtain the ipc object (kern_ipc_perm)
- *		perform security, capabilities, auditing and permission checks, etc.
- *		acquire the ipc lock (kern_ipc_perm.lock) throught ipc_lock_object()
- *		perform data updates (ie: SET, RMID, LOCK/UNLOCK commands)
+ *	rcu_read_lock()
+ *          obtain the ipc object (kern_ipc_perm) by looking up the id in an idr
+ *	    tree.
+ *	    - perform initial checks (capabilities, auditing and permission,
+ *	      etc).
+ *	    - perform read-only operations, such as STAT, INFO commands.
+ *	      acquire the ipc lock (kern_ipc_perm.lock) through
+ *              ipc_lock_object()
+ *		- perform data updates, such as SET, RMID commands and
+ *		  mechanism-specific operations (semop/semtimedop,
+ *		  msgsnd/msgrcv, shmat/shmdt).
+ *	    drop the ipc lock, through ipc_unlock_object().
+ *	rcu_read_unlock()
+ *
+ *  The ids->rwsem must be taken when:
+ *	- creating, removing and iterating the existing entries in ipc
+ *	  identifier sets.
+ *	- iterating through files under /proc/sysvipc/
+ *
+ *  Note that sems have a special fast path that avoids kern_ipc_perm.lock -
+ *  see sem_lock().
  */
 
 #include <linux/mm.h>
···
 	struct inode *inode = mapping->host;
 	pgoff_t offset = vmf->pgoff;
 	struct page *page;
-	bool memcg_oom;
 	pgoff_t size;
 	int ret = 0;
···
 		return VM_FAULT_SIGBUS;
 
 	/*
-	 * Do we have something in the page cache already? Either
-	 * way, try readahead, but disable the memcg OOM killer for it
-	 * as readahead is optional and no errors are propagated up
-	 * the fault stack. The OOM killer is enabled while trying to
-	 * instantiate the faulting page individually below.
+	 * Do we have something in the page cache already?
 	 */
 	page = find_get_page(mapping, offset);
 	if (likely(page) && !(vmf->flags & FAULT_FLAG_TRIED)) {
···
 		 * We found the page, so try async readahead before
 		 * waiting for the lock.
 		 */
-		memcg_oom = mem_cgroup_toggle_oom(false);
 		do_async_mmap_readahead(vma, ra, file, page, offset);
-		mem_cgroup_toggle_oom(memcg_oom);
 	} else if (!page) {
 		/* No page in the page cache at all */
-		memcg_oom = mem_cgroup_toggle_oom(false);
 		do_sync_mmap_readahead(vma, ra, file, offset);
-		mem_cgroup_toggle_oom(memcg_oom);
 		count_vm_event(PGMAJFAULT);
 		mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
 		ret = VM_FAULT_MAJOR;
+9-1
mm/huge_memory.c
···
 
 	mmun_start = haddr;
 	mmun_end   = haddr + HPAGE_PMD_SIZE;
+again:
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
 	spin_lock(&mm->page_table_lock);
 	if (unlikely(!pmd_trans_huge(*pmd))) {
···
 	split_huge_page(page);
 
 	put_page(page);
-	BUG_ON(pmd_trans_huge(*pmd));
+
+	/*
+	 * We don't always have down_write of mmap_sem here: a racing
+	 * do_huge_pmd_wp_page() might have copied-on-write to another
+	 * huge page before our split_huge_page() got the anon_vma lock.
+	 */
+	if (unlikely(pmd_trans_huge(*pmd)))
+		goto again;
 }
 
 void split_huge_page_pmd_mm(struct mm_struct *mm, unsigned long address,
+16-1
mm/hugetlb.c
···
 	BUG_ON(page_count(page));
 	BUG_ON(page_mapcount(page));
 	restore_reserve = PagePrivate(page);
+	ClearPagePrivate(page);
 
 	spin_lock(&hugetlb_lock);
 	hugetlb_cgroup_uncharge_page(hstate_index(h),
···
 	/* we rely on prep_new_huge_page to set the destructor */
 	set_compound_order(page, order);
 	__SetPageHead(page);
+	__ClearPageReserved(page);
 	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
 		__SetPageTail(p);
+		/*
+		 * For gigantic hugepages allocated through bootmem at
+		 * boot, it's safer to be consistent with the not-gigantic
+		 * hugepages and clear the PG_reserved bit from all tail pages
+		 * too.  Otherwse drivers using get_user_pages() to access tail
+		 * pages may get the reference counting wrong if they see
+		 * PG_reserved set on a tail page (despite the head page not
+		 * having PG_reserved set).  Enforcing this consistency between
+		 * head and tail pages allows drivers to optimize away a check
+		 * on the head page when they need know if put_page() is needed
+		 * after get_user_pages().
+		 */
+		__ClearPageReserved(p);
 		set_page_count(p, 0);
 		p->first_page = page;
 	}
···
 #else
 	page = virt_to_page(m);
 #endif
-	__ClearPageReserved(page);
 	WARN_ON(page_count(page) != 1);
 	prep_compound_huge_page(page, h->order);
+	WARN_ON(PageReserved(page));
 	prep_new_huge_page(h, page, page_to_nid(page));
 	/*
 	 * If we had gigantic hugepages allocated at boot time, we need
+72-105
mm/memcontrol.c
···866866 unsigned long val = 0;867867 int cpu;868868869869+ get_online_cpus();869870 for_each_online_cpu(cpu)870871 val += per_cpu(memcg->stat->events[idx], cpu);871872#ifdef CONFIG_HOTPLUG_CPU···874873 val += memcg->nocpu_base.events[idx];875874 spin_unlock(&memcg->pcp_counter_lock);876875#endif876876+ put_online_cpus();877877 return val;878878}879879···21612159 memcg_wakeup_oom(memcg);21622160}2163216121642164-/*21652165- * try to call OOM killer21662166- */21672162static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order)21682163{21692169- bool locked;21702170- int wakeups;21712171-21722164 if (!current->memcg_oom.may_oom)21732165 return;21742174-21752175- current->memcg_oom.in_memcg_oom = 1;21762176-21772166 /*21782178- * As with any blocking lock, a contender needs to start21792179- * listening for wakeups before attempting the trylock,21802180- * otherwise it can miss the wakeup from the unlock and sleep21812181- * indefinitely. This is just open-coded because our locking21822182- * is so particular to memcg hierarchies.21672167+ * We are in the middle of the charge context here, so we21682168+ * don't want to block when potentially sitting on a callstack21692169+ * that holds all kinds of filesystem and mm locks.21702170+ *21712171+ * Also, the caller may handle a failed allocation gracefully21722172+ * (like optional page cache readahead) and so an OOM killer21732173+ * invocation might not even be necessary.21742174+ *21752175+ * That's why we don't do anything here except remember the21762176+ * OOM context and then deal with it at the end of the page21772177+ * fault when the stack is unwound, the locks are released,21782178+ * and when we know whether the fault was overall successful.21832179 */21842184- wakeups = atomic_read(&memcg->oom_wakeups);21802180+ css_get(&memcg->css);21812181+ current->memcg_oom.memcg = memcg;21822182+ current->memcg_oom.gfp_mask = mask;21832183+ current->memcg_oom.order = 
order;21842184+}21852185+21862186+/**21872187+ * mem_cgroup_oom_synchronize - complete memcg OOM handling21882188+ * @handle: actually kill/wait or just clean up the OOM state21892189+ *21902190+ * This has to be called at the end of a page fault if the memcg OOM21912191+ * handler was enabled.21922192+ *21932193+ * Memcg supports userspace OOM handling where failed allocations must21942194+ * sleep on a waitqueue until the userspace task resolves the21952195+ * situation. Sleeping directly in the charge context with all kinds21962196+ * of locks held is not a good idea, instead we remember an OOM state21972197+ * in the task and mem_cgroup_oom_synchronize() has to be called at21982198+ * the end of the page fault to complete the OOM handling.21992199+ *22002200+ * Returns %true if an ongoing memcg OOM situation was detected and22012201+ * completed, %false otherwise.22022202+ */22032203+bool mem_cgroup_oom_synchronize(bool handle)22042204+{22052205+ struct mem_cgroup *memcg = current->memcg_oom.memcg;22062206+ struct oom_wait_info owait;22072207+ bool locked;22082208+22092209+ /* OOM is global, do not handle */22102210+ if (!memcg)22112211+ return false;22122212+22132213+ if (!handle)22142214+ goto cleanup;22152215+22162216+ owait.memcg = memcg;22172217+ owait.wait.flags = 0;22182218+ owait.wait.func = memcg_oom_wake_function;22192219+ owait.wait.private = current;22202220+ INIT_LIST_HEAD(&owait.wait.task_list);22212221+22222222+ prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);21852223 mem_cgroup_mark_under_oom(memcg);2186222421872225 locked = mem_cgroup_oom_trylock(memcg);···2231218922322190 if (locked && !memcg->oom_kill_disable) {22332191 mem_cgroup_unmark_under_oom(memcg);22342234- mem_cgroup_out_of_memory(memcg, mask, order);22352235- mem_cgroup_oom_unlock(memcg);22362236- /*22372237- * There is no guarantee that an OOM-lock contender22382238- * sees the wakeups triggered by the OOM kill22392239- * uncharges. 
		 * Wake any sleepers explicitely.
-		 */
-		memcg_oom_recover(memcg);
+		finish_wait(&memcg_oom_waitq, &owait.wait);
+		mem_cgroup_out_of_memory(memcg, current->memcg_oom.gfp_mask,
+					 current->memcg_oom.order);
 	} else {
-		/*
-		 * A system call can just return -ENOMEM, but if this
-		 * is a page fault and somebody else is handling the
-		 * OOM already, we need to sleep on the OOM waitqueue
-		 * for this memcg until the situation is resolved.
-		 * Which can take some time because it might be
-		 * handled by a userspace task.
-		 *
-		 * However, this is the charge context, which means
-		 * that we may sit on a large call stack and hold
-		 * various filesystem locks, the mmap_sem etc. and we
-		 * don't want the OOM handler to deadlock on them
-		 * while we sit here and wait. Store the current OOM
-		 * context in the task_struct, then return -ENOMEM.
-		 * At the end of the page fault handler, with the
-		 * stack unwound, pagefault_out_of_memory() will check
-		 * back with us by calling
-		 * mem_cgroup_oom_synchronize(), possibly putting the
-		 * task to sleep.
-		 */
-		current->memcg_oom.oom_locked = locked;
-		current->memcg_oom.wakeups = wakeups;
-		css_get(&memcg->css);
-		current->memcg_oom.wait_on_memcg = memcg;
-	}
-}
-
-/**
- * mem_cgroup_oom_synchronize - complete memcg OOM handling
- *
- * This has to be called at the end of a page fault if the the memcg
- * OOM handler was enabled and the fault is returning %VM_FAULT_OOM.
- *
- * Memcg supports userspace OOM handling, so failed allocations must
- * sleep on a waitqueue until the userspace task resolves the
- * situation.  Sleeping directly in the charge context with all kinds
- * of locks held is not a good idea, instead we remember an OOM state
- * in the task and mem_cgroup_oom_synchronize() has to be called at
- * the end of the page fault to put the task to sleep and clean up the
- * OOM state.
- *
- * Returns %true if an ongoing memcg OOM situation was detected and
- * finalized, %false otherwise.
- */
-bool mem_cgroup_oom_synchronize(void)
-{
-	struct oom_wait_info owait;
-	struct mem_cgroup *memcg;
-
-	/* OOM is global, do not handle */
-	if (!current->memcg_oom.in_memcg_oom)
-		return false;
-
-	/*
-	 * We invoked the OOM killer but there is a chance that a kill
-	 * did not free up any charges. Everybody else might already
-	 * be sleeping, so restart the fault and keep the rampage
-	 * going until some charges are released.
-	 */
-	memcg = current->memcg_oom.wait_on_memcg;
-	if (!memcg)
-		goto out;
-
-	if (test_thread_flag(TIF_MEMDIE) || fatal_signal_pending(current))
-		goto out_memcg;
-
-	owait.memcg = memcg;
-	owait.wait.flags = 0;
-	owait.wait.func = memcg_oom_wake_function;
-	owait.wait.private = current;
-	INIT_LIST_HEAD(&owait.wait.task_list);
-
-	prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
-	/* Only sleep if we didn't miss any wakeups since OOM */
-	if (atomic_read(&memcg->oom_wakeups) == current->memcg_oom.wakeups)
 		schedule();
-	finish_wait(&memcg_oom_waitq, &owait.wait);
-out_memcg:
-	mem_cgroup_unmark_under_oom(memcg);
-	if (current->memcg_oom.oom_locked) {
+		mem_cgroup_unmark_under_oom(memcg);
+		finish_wait(&memcg_oom_waitq, &owait.wait);
+	}
+
+	if (locked) {
 		mem_cgroup_oom_unlock(memcg);
 		/*
 		 * There is no guarantee that an OOM-lock contender
···
 		 */
 		memcg_oom_recover(memcg);
 	}
+cleanup:
+	current->memcg_oom.memcg = NULL;
 	css_put(&memcg->css);
-	current->memcg_oom.wait_on_memcg = NULL;
-out:
-	current->memcg_oom.in_memcg_oom = 0;
 	return true;
 }
 
···
 		     || fatal_signal_pending(current)))
 		goto bypass;
 
+	if (unlikely(task_in_memcg_oom(current)))
+		goto bypass;
+
 	/*
 	 * We always charge the cgroup the mm_struct belongs to.
 	 * The mm_struct's mem_cgroup changes on task migration if the
···
 	return 0;
 nomem:
 	*ptr = NULL;
+	if (gfp_mask & __GFP_NOFAIL)
+		return 0;
 	return -ENOMEM;
 bypass:
 	*ptr = root_mem_cgroup;
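The memcontrol.c hunks above move memcg OOM handling out of the charge path: instead of sleeping or warning deep in the call chain with locks held, the task records the OOM situation in its task_struct and mem_cgroup_oom_synchronize() settles it once the page-fault stack has unwound. A minimal sketch of that record-now, resolve-later pattern (all names and fields here are illustrative stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for the per-task OOM bookkeeping. */
struct memcg_oom_info {
	void *memcg;		/* which group hit its limit (NULL = none) */
	unsigned int gfp_mask;
	int order;
};

struct task {
	struct memcg_oom_info memcg_oom;
};

static struct task current_task;
#define current (&current_task)

/* Deep in the charge path: too risky to kill or sleep here with
 * locks held, so just record the situation and fail the charge. */
static void record_memcg_oom(void *memcg, unsigned int gfp_mask, int order)
{
	current->memcg_oom.memcg = memcg;
	current->memcg_oom.gfp_mask = gfp_mask;
	current->memcg_oom.order = order;
}

/* At the end of the page fault, stack unwound: now it is safe to
 * act on (or, when the fault recovered, discard) the OOM state. */
static bool oom_synchronize_sketch(bool handle)
{
	if (!current->memcg_oom.memcg)
		return false;		/* nothing was recorded */
	if (handle) {
		/* ... invoke the OOM killer or wait here ... */
	}
	current->memcg_oom.memcg = NULL;	/* always clean up */
	return true;
}
```

This mirrors the `cleanup:` path in the patch: the state is cleared whether or not a kill was needed, so a gracefully handled allocation failure leaves no stale OOM context behind.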
mm/memory.c (+14, -6)
···
 		 */
 		make_migration_entry_read(&entry);
 		pte = swp_entry_to_pte(entry);
+		if (pte_swp_soft_dirty(*src_pte))
+			pte = pte_swp_mksoft_dirty(pte);
 		set_pte_at(src_mm, addr, src_pte, pte);
 	}
 }
···
 	 * space. Kernel faults are handled more gracefully.
 	 */
 	if (flags & FAULT_FLAG_USER)
-		mem_cgroup_enable_oom();
+		mem_cgroup_oom_enable();
 
 	ret = __handle_mm_fault(mm, vma, address, flags);
 
-	if (flags & FAULT_FLAG_USER)
-		mem_cgroup_disable_oom();
-
-	if (WARN_ON(task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM)))
-		mem_cgroup_oom_synchronize();
+	if (flags & FAULT_FLAG_USER) {
+		mem_cgroup_oom_disable();
+		/*
+		 * The task may have entered a memcg OOM situation but
+		 * if the allocation error was handled gracefully (no
+		 * VM_FAULT_OOM), there is no need to kill anything.
+		 * Just clean up the OOM state peacefully.
+		 */
+		if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
+			mem_cgroup_oom_synchronize(false);
+	}
 
 	return ret;
 }
mm/migrate.c (+2)
···
 
 	get_page(new);
 	pte = pte_mkold(mk_pte(new, vma->vm_page_prot));
+	if (pte_swp_soft_dirty(*ptep))
+		pte = pte_mksoft_dirty(pte);
 	if (is_write_migration_entry(entry))
 		pte = pte_mkwrite(pte);
 #ifdef CONFIG_HUGETLB_PAGE
mm/mprotect.c (+5, -2)
···
 				swp_entry_t entry = pte_to_swp_entry(oldpte);
 
 				if (is_write_migration_entry(entry)) {
+					pte_t newpte;
 					/*
 					 * A protection check is difficult so
 					 * just be safe and disable write
 					 */
 					make_migration_entry_read(&entry);
-					set_pte_at(mm, addr, pte,
-						swp_entry_to_pte(entry));
+					newpte = swp_entry_to_pte(entry);
+					if (pte_swp_soft_dirty(oldpte))
+						newpte = pte_swp_mksoft_dirty(newpte);
+					set_pte_at(mm, addr, pte, newpte);
 				}
 				pages++;
 			}
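The migrate.c and mprotect.c hunks share one theme: rebuilding a pte from a migration entry produces a fresh encoding, so the soft-dirty bit carried by the old pte must be copied over explicitly or it is silently lost. A toy model of that fix (the bit layout is invented for illustration; real ptes are architecture-specific):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t pte_t;		/* toy pte: just a bag of bits */

#define PTE_WRITE	(1u << 0)
#define PTE_SOFT_DIRTY	(1u << 1)

/* Mimics the effect of swp_entry_to_pte(): re-encoding starts from
 * a clean slate, so every flag bit of the old pte is dropped. */
static pte_t reencode_read_only(pte_t oldpte)
{
	return oldpte & ~(PTE_WRITE | PTE_SOFT_DIRTY);
}

/* The fixed conversion: rebuild the entry, then carry the
 * soft-dirty bit over from the old pte by hand. */
static pte_t convert_entry(pte_t oldpte)
{
	pte_t newpte = reencode_read_only(oldpte);

	if (oldpte & PTE_SOFT_DIRTY)
		newpte |= PTE_SOFT_DIRTY;
	return newpte;
}
```

Without the explicit copy, a memory-tracking user of soft-dirty would miss writes to any page that happened to be under migration or mprotect at scan time.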
···
 {
 	struct zonelist *zonelist;
 
-	if (mem_cgroup_oom_synchronize())
+	if (mem_cgroup_oom_synchronize(true))
 		return;
 
 	zonelist = node_zonelist(first_online_node, GFP_KERNEL);
mm/page-writeback.c (+5, -5)
···
 		return 1;
 }
 
-static long bdi_max_pause(struct backing_dev_info *bdi,
-			  unsigned long bdi_dirty)
+static unsigned long bdi_max_pause(struct backing_dev_info *bdi,
+				   unsigned long bdi_dirty)
 {
-	long bw = bdi->avg_write_bandwidth;
-	long t;
+	unsigned long bw = bdi->avg_write_bandwidth;
+	unsigned long t;
 
 	/*
 	 * Limit pause time for small memory systems. If sleeping for too long
···
 	t = bdi_dirty / (1 + bw / roundup_pow_of_two(1 + HZ / 8));
 	t++;
 
-	return min_t(long, t, MAX_PAUSE);
+	return min_t(unsigned long, t, MAX_PAUSE);
 }
 
 static long bdi_min_pause(struct backing_dev_info *bdi,
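The page-writeback.c change switches bdi_max_pause() from long to unsigned long throughout. The point of the fix: if a huge dirty count wraps negative when stored in a signed long, min_t(long, t, MAX_PAUSE) happily returns the negative value instead of clamping it. A small demonstration of the comparison difference (the MAX_PAUSE value here is invented, not the kernel's):

```c
#include <assert.h>

#define MAX_PAUSE 200L	/* illustrative value only */

/* Equivalents of min_t(long, ...) and min_t(unsigned long, ...) */
static long min_l(long a, long b)
{
	return a < b ? a : b;
}

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}
```

With `t = -5` standing in for a wrapped pause value, the signed min returns the bogus negative number, while the unsigned min clamps to MAX_PAUSE as intended (conversion of a negative long to unsigned long is well-defined in C and yields a very large value).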
mm/slab_common.c (+2)
···
 			continue;
 		}
 
+#if !defined(CONFIG_SLUB) || !defined(CONFIG_SLUB_DEBUG_ON)
 		/*
 		 * For simplicity, we won't check this in the list of memcg
 		 * caches. We have control over memcg naming, and if there
···
 			s = NULL;
 			return -EINVAL;
 		}
+#endif
 	}
 
 	WARN_ON(strchr(name, ' '));	/* It confuses parsers */
mm/swapfile.c (+3, -1)
···
 	struct filename *pathname;
 	int i, type, prev;
 	int err;
+	unsigned int old_block_size;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
···
 	}
 
 	swap_file = p->swap_file;
+	old_block_size = p->old_block_size;
 	p->swap_file = NULL;
 	p->max = 0;
 	swap_map = p->swap_map;
···
 	inode = mapping->host;
 	if (S_ISBLK(inode->i_mode)) {
 		struct block_device *bdev = I_BDEV(inode);
-		set_blocksize(bdev, p->old_block_size);
+		set_blocksize(bdev, old_block_size);
 		blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL);
 	} else {
 		mutex_lock(&inode->i_mutex);
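The swapfile.c fix snapshots p->old_block_size into a local before the swap slot is torn down; once the slot is published as free again, a concurrent swapon may reuse and overwrite its fields. The pattern, reduced to a toy struct (all names here are illustrative):

```c
#include <assert.h>

struct swap_slot {
	int in_use;
	unsigned int old_block_size;
};

/* Copy out everything you still need *before* publishing the slot
 * as free; afterwards its fields may be clobbered by a reuser. */
static unsigned int release_slot(struct swap_slot *p)
{
	unsigned int old_block_size = p->old_block_size;	/* snapshot */

	p->in_use = 0;		/* slot visible for reuse from here on */
	p->old_block_size = 0;	/* ...so its fields are fair game */
	return old_block_size;	/* safe local copy for later teardown */
}
```

The later set_blocksize() call then uses the local copy rather than re-reading the (possibly reused) structure.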
···
  * @die_mem: a buffer for result DIE
  *
  * Search a non-inlined function DIE which includes @addr. Stores the
- * DIE to @die_mem and returns it if found. Returns NULl if failed.
+ * DIE to @die_mem and returns it if found. Returns NULL if failed.
  */
 Dwarf_Die *die_find_realfunc(Dwarf_Die *cu_die, Dwarf_Addr addr,
			     Dwarf_Die *die_mem)
···
 }
 
 /**
- * die_find_inlinefunc - Search an inlined function at given address
- * @cu_die: a CU DIE which including @addr
+ * die_find_top_inlinefunc - Search the top inlined function at given address
+ * @sp_die: a subprogram DIE which including @addr
  * @addr: target address
  * @die_mem: a buffer for result DIE
  *
  * Search an inlined function DIE which includes @addr. Stores the
- * DIE to @die_mem and returns it if found. Returns NULl if failed.
+ * DIE to @die_mem and returns it if found. Returns NULL if failed.
+ * Even if several inlined functions are expanded recursively, this
+ * doesn't trace it down, and returns the topmost one.
+ */
+Dwarf_Die *die_find_top_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
+				   Dwarf_Die *die_mem)
+{
+	return die_find_child(sp_die, __die_find_inline_cb, &addr, die_mem);
+}
+
+/**
+ * die_find_inlinefunc - Search an inlined function at given address
+ * @sp_die: a subprogram DIE which including @addr
+ * @addr: target address
+ * @die_mem: a buffer for result DIE
+ *
+ * Search an inlined function DIE which includes @addr. Stores the
+ * DIE to @die_mem and returns it if found. Returns NULL if failed.
  * If several inlined functions are expanded recursively, this trace
- * it and returns deepest one.
+ * it down and returns deepest one.
  */
 Dwarf_Die *die_find_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
			       Dwarf_Die *die_mem)
tools/perf/util/dwarf-aux.h (+5, -1)
···
 extern Dwarf_Die *die_find_realfunc(Dwarf_Die *cu_die, Dwarf_Addr addr,
				    Dwarf_Die *die_mem);
 
-/* Search an inlined function including given address */
+/* Search the top inlined function including given address */
+extern Dwarf_Die *die_find_top_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
+					  Dwarf_Die *die_mem);
+
+/* Search the deepest inlined function including given address */
 extern Dwarf_Die *die_find_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
				      Dwarf_Die *die_mem);
 
tools/perf/util/header.c (+12)
···
 	if (perf_file_header__read(&f_header, header, fd) < 0)
 		return -EINVAL;
 
+	/*
+	 * Sanity check that perf.data was written cleanly; data size is
+	 * initialized to 0 and updated only if the on_exit function is run.
+	 * If data size is still 0 then the file contains only partial
+	 * information. Just warn user and process it as much as it can.
+	 */
+	if (f_header.data.size == 0) {
+		pr_warning("WARNING: The %s file's data size field is 0 which is unexpected.\n"
+			   "Was the 'perf record' command properly terminated?\n",
+			   session->filename);
+	}
+
 	nr_attrs = f_header.attrs.size / f_header.attr_size;
 	lseek(fd, f_header.attrs.offset, SEEK_SET);
 
tools/perf/util/probe-finder.c (+33, -16)
···
 			      struct perf_probe_point *ppt)
 {
 	Dwarf_Die cudie, spdie, indie;
-	Dwarf_Addr _addr, baseaddr;
-	const char *fname = NULL, *func = NULL, *tmp;
+	Dwarf_Addr _addr = 0, baseaddr = 0;
+	const char *fname = NULL, *func = NULL, *basefunc = NULL, *tmp;
 	int baseline = 0, lineno = 0, ret = 0;
 
 	/* Adjust address with bias */
···
 	/* Find a corresponding function (name, baseline and baseaddr) */
 	if (die_find_realfunc(&cudie, (Dwarf_Addr)addr, &spdie)) {
 		/* Get function entry information */
-		tmp = dwarf_diename(&spdie);
-		if (!tmp ||
+		func = basefunc = dwarf_diename(&spdie);
+		if (!func ||
 		    dwarf_entrypc(&spdie, &baseaddr) != 0 ||
-		    dwarf_decl_line(&spdie, &baseline) != 0)
+		    dwarf_decl_line(&spdie, &baseline) != 0) {
+			lineno = 0;
 			goto post;
-		func = tmp;
+		}
 
-		if (addr == (unsigned long)baseaddr)
+		if (addr == (unsigned long)baseaddr) {
 			/* Function entry - Relative line number is 0 */
 			lineno = baseline;
-		else if (die_find_inlinefunc(&spdie, (Dwarf_Addr)addr,
-					     &indie)) {
+			fname = dwarf_decl_file(&spdie);
+			goto post;
+		}
+
+		/* Track down the inline functions step by step */
+		while (die_find_top_inlinefunc(&spdie, (Dwarf_Addr)addr,
+					       &indie)) {
+			/* There is an inline function */
 			if (dwarf_entrypc(&indie, &_addr) == 0 &&
-			    _addr == addr)
+			    _addr == addr) {
 				/*
 				 * addr is at an inline function entry.
 				 * In this case, lineno should be the call-site
-				 * line number.
+				 * line number. (overwrite lineinfo)
 				 */
 				lineno = die_get_call_lineno(&indie);
-			else {
+				fname = die_get_call_file(&indie);
+				break;
+			} else {
 				/*
 				 * addr is in an inline function body.
 				 * Since lineno points one of the lines
···
 				 * be the entry line of the inline function.
 				 */
 				tmp = dwarf_diename(&indie);
-				if (tmp &&
-				    dwarf_decl_line(&spdie, &baseline) == 0)
-					func = tmp;
+				if (!tmp ||
+				    dwarf_decl_line(&indie, &baseline) != 0)
+					break;
+				func = tmp;
+				spdie = indie;
 			}
 		}
+		/* Verify the lineno and baseline are in a same file */
+		tmp = dwarf_decl_file(&spdie);
+		if (!tmp || strcmp(tmp, fname) != 0)
+			lineno = 0;
 	}
 
 post:
 	/* Make a relative line number or an offset */
 	if (lineno)
 		ppt->line = lineno - baseline;
-	else if (func)
+	else if (basefunc) {
 		ppt->offset = addr - (unsigned long)baseaddr;
+		func = basefunc;
+	}
 
 	/* Duplicate strings */
 	if (func) {
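The probe-finder.c rework replaces a single die_find_inlinefunc() call, which jumps straight to the deepest inline instance, with a loop over die_find_top_inlinefunc() that descends one inline level per iteration so each intermediate scope can update func, fname, and lineno. The control flow can be modeled with nested address ranges (a toy model, not the DWARF API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of nested inline-function scopes as nested ranges. */
struct scope {
	unsigned long start, end;	/* [start, end) covered addresses */
	const struct scope *child;	/* at most one nested inline here */
};

/* die_find_top_inlinefunc() analogue: check only the immediate child. */
static const struct scope *top_inline(const struct scope *sp, unsigned long addr)
{
	const struct scope *c = sp->child;

	return (c && addr >= c->start && addr < c->end) ? c : NULL;
}

/* The new loop above: step down one inline level at a time so every
 * intermediate scope can be examined, instead of jumping straight
 * to the deepest instance. */
static const struct scope *walk_inlines(const struct scope *sp,
					unsigned long addr, int *depth)
{
	const struct scope *in;

	while ((in = top_inline(sp, addr)) != NULL) {
		(*depth)++;	/* per-level bookkeeping happens here */
		sp = in;
	}
	return sp;
}
```

Stepping level by level is what lets the patch break out early at an inline call site (to use the call-site line number) while still tracking the enclosing function name as it descends.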
tools/perf/util/session.c (+3, -1)
···
 		tool->sample = process_event_sample_stub;
 	if (tool->mmap == NULL)
 		tool->mmap = process_event_stub;
+	if (tool->mmap2 == NULL)
+		tool->mmap2 = process_event_stub;
 	if (tool->comm == NULL)
 		tool->comm = process_event_stub;
 	if (tool->fork == NULL)
···
 	file_offset = page_offset;
 	head = data_offset - page_offset;
 
-	if (data_offset + data_size < file_size)
+	if (data_size && (data_offset + data_size < file_size))
 		file_size = data_offset + data_size;
 
 	progress_next = file_size / 16;
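The session.c guard pairs with the header.c warning above: when perf.data was not terminated cleanly, the header's data size stays 0, and the old clamp `data_offset + data_size < file_size` would shrink file_size down to the header end and drop every event. Guarding on a nonzero data_size keeps the rest of the file processable. The clamp in isolation (a sketch):

```c
#include <assert.h>
#include <stdint.h>

/* Clamp file_size to the recorded data section, but only when the
 * header actually recorded one (data_size == 0 means "unknown"). */
static uint64_t clamp_file_size(uint64_t data_offset, uint64_t data_size,
				uint64_t file_size)
{
	if (data_size && (data_offset + data_size < file_size))
		file_size = data_offset + data_size;
	return file_size;
}
```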