···
 
 - clock-frequency : The frequency of the main counter, in Hz. Optional.
 
+- always-on : a boolean property. If present, the timer is powered through an
+  always-on power domain, therefore it never loses context.
+
 Example:
 
 	timer {
···
 - compatible: Should be "snps,arc-emac"
 - reg: Address and length of the register set for the device
 - interrupts: Should contain the EMAC interrupts
-- clock-frequency: CPU frequency. It is needed to calculate and set polling
-period of EMAC.
 - max-speed: see ethernet.txt file in the same directory.
 - phy: see ethernet.txt file in the same directory.
+
+Clock handling:
+The clock frequency is needed to calculate and set polling period of EMAC.
+It must be provided by one of:
+- clock-frequency: CPU frequency.
+- clocks: reference to the clock supplying the EMAC.
 
 Child nodes of the driver are the individual PHY devices connected to the
 MDIO bus. They must have a "reg" property given the PHY address on the MDIO bus.
···
 	reg = <0xc0fc2000 0x3c>;
 	interrupts = <6>;
 	mac-address = [ 00 11 22 33 44 55 ];
+
 	clock-frequency = <80000000>;
+	/* or */
+	clocks = <&emac_clock>;
+
 	max-speed = <100>;
 	phy = <&phy0>;
 
···
 - max-frame-size: See ethernet.txt file in the same directory
 - clocks: If present, the first clock should be the GMAC main clock,
   further clocks may be specified in derived bindings.
-- clocks-names: One name for each entry in the clocks property, the
+- clock-names: One name for each entry in the clocks property, the
   first one should be "stmmaceth".
 
 Examples:
···
 	"ti,tlv320aic3111" - TLV320AIC3111 (stereo speaker amp, MiniDSP)
 
 - reg - <int> - I2C slave address
+- HPVDD-supply, SPRVDD-supply, SPLVDD-supply, AVDD-supply, IOVDD-supply,
+  DVDD-supply : power supplies for the device as covered in
+  Documentation/devicetree/bindings/regulator/regulator.txt
 
 
 Optional properties:
···
 	3 or MICBIAS_AVDD - MICBIAS output is connected to AVDD
 	If this node is not mentioned or if the value is unknown, then
 	micbias is set to 2.0V.
-- HPVDD-supply, SPRVDD-supply, SPLVDD-supply, AVDD-supply, IOVDD-supply,
-  DVDD-supply : power supplies for the device as covered in
-  Documentation/devicetree/bindings/regulator/regulator.txt
 
 CODEC output pins:
   * HPL
Documentation/input/elantech.txt (+4/-1)
···
 * reg_10
 
   bit   7   6   5   4   3   2   1   0
-        0   0   0   0   0   0   0   A
+        0   0   0   0   R   F   T   A
 
   A: 1 = enable absolute tracking
+  T: 1 = enable two finger mode auto correct
+  F: 1 = disable ABS Position Filter
+  R: 1 = enable real hardware resolution
 
 6.2 Native absolute mode 6 byte packet format
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Documentation/networking/scaling.txt (+1/-1)
···
 (therbert@google.com)
 
 Accelerated RFS was introduced in 2.6.35. Original patches were
-submitted by Ben Hutchings (bhutchings@solarflare.com)
+submitted by Ben Hutchings (bwh@kernel.org)
 
 Authors:
 Tom Herbert (therbert@google.com)
MAINTAINERS (+15/-6)
···
 F:	drivers/extcon/
 F:	Documentation/extcon/
 
+EXYNOS DP DRIVER
+M:	Jingoo Han <jg1.han@samsung.com>
+L:	dri-devel@lists.freedesktop.org
+S:	Maintained
+F:	drivers/gpu/drm/exynos/exynos_dp*
+
 EXYNOS MIPI DISPLAY DRIVERS
 M:	Inki Dae <inki.dae@samsung.com>
 M:	Donghwa Lee <dh09.lee@samsung.com>
···
 F:	include/uapi/scsi/fc/
 
 FILE LOCKING (flock() and fcntl()/lockf())
-M:	Jeff Layton <jlayton@redhat.com>
+M:	Jeff Layton <jlayton@poochiereds.net>
 M:	J. Bruce Fields <bfields@fieldses.org>
 L:	linux-fsdevel@vger.kernel.org
 S:	Maintained
···
 KERNEL VIRTUAL MACHINE (KVM) FOR ARM
 M:	Christoffer Dall <christoffer.dall@linaro.org>
+M:	Marc Zyngier <marc.zyngier@arm.com>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	kvmarm@lists.cs.columbia.edu
 W:	http://systems.cs.columbia.edu/projects/kvm-arm
 S:	Supported
 F:	arch/arm/include/uapi/asm/kvm*
 F:	arch/arm/include/asm/kvm*
 F:	arch/arm/kvm/
+F:	virt/kvm/arm/
+F:	include/kvm/arm_*
 
 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)
+M:	Christoffer Dall <christoffer.dall@linaro.org>
 M:	Marc Zyngier <marc.zyngier@arm.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	kvmarm@lists.cs.columbia.edu
···
 RALINK RT2X00 WIRELESS LAN DRIVER
 P:	rt2x00 project
 M:	Ivo van Doorn <IvDoorn@gmail.com>
-M:	Gertjan van Wingerde <gwingerde@gmail.com>
 M:	Helmut Schaa <helmut.schaa@googlemail.com>
 L:	linux-wireless@vger.kernel.org
 L:	users@rt2x00.serialmonkey.com (moderated for non-subscribers)
···
 F:	drivers/block/brd.c
 
 RANDOM NUMBER DRIVER
-M:	Theodore Ts'o" <tytso@mit.edu>
+M:	"Theodore Ts'o" <tytso@mit.edu>
 S:	Maintained
 F:	drivers/char/random.c
···
 SAMSUNG SXGBE DRIVERS
 M:	Byungho An <bh74.an@samsung.com>
 M:	Girish K S <ks.giri@samsung.com>
-M:	Siva Reddy Kallam <siva.kallam@samsung.com>
 M:	Vipul Pandya <vipul.pandya@samsung.com>
 S:	Supported
 L:	netdev@vger.kernel.org
···
 F:	drivers/net/hamradio/z8530.h
 
 ZBUD COMPRESSED PAGE ALLOCATOR
-M:	Seth Jennings <sjenning@linux.vnet.ibm.com>
+M:	Seth Jennings <sjennings@variantweb.net>
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	mm/zbud.c
···
 F:	include/linux/zsmalloc.h
 
 ZSWAP COMPRESSED SWAP CACHING
-M:	Seth Jennings <sjenning@linux.vnet.ibm.com>
+M:	Seth Jennings <sjennings@variantweb.net>
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	mm/zswap.c
···
 
 resume_kernel_mode:
 
-#ifdef CONFIG_PREEMPT
-
-	; This is a must for preempt_schedule_irq()
+	; Disable Interrupts from this point on
+	; CONFIG_PREEMPT: This is a must for preempt_schedule_irq()
+	; !CONFIG_PREEMPT: To ensure restore_regs is intr safe
 	IRQ_DISABLE	r9
+
+#ifdef CONFIG_PREEMPT
 
 	; Can't preempt if preemption disabled
 	GET_CURR_THR_INFO_FROM_SP   r10
···
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select KVM_MMIO
 	select KVM_ARM_HOST
-	depends on ARM_VIRT_EXT && ARM_LPAE
+	depends on ARM_VIRT_EXT && ARM_LPAE && !CPU_BIG_ENDIAN
 	---help---
 	  Support hosting virtualized guest machines. You will also
 	  need to select one or more of the processor modules below.
···
 #include <linux/slab.h>
 #include <linux/dma-mapping.h>
 #include <linux/dma-contiguous.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
 #include <linux/vmalloc.h>
 #include <linux/swiotlb.h>
+#include <linux/amba/bus.h>
 
 #include <asm/cacheflush.h>
···
 };
 EXPORT_SYMBOL(coherent_swiotlb_dma_ops);
 
+static int dma_bus_notifier(struct notifier_block *nb,
+			    unsigned long event, void *_dev)
+{
+	struct device *dev = _dev;
+
+	if (event != BUS_NOTIFY_ADD_DEVICE)
+		return NOTIFY_DONE;
+
+	if (of_property_read_bool(dev->of_node, "dma-coherent"))
+		set_dma_ops(dev, &coherent_swiotlb_dma_ops);
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block platform_bus_nb = {
+	.notifier_call = dma_bus_notifier,
+};
+
+static struct notifier_block amba_bus_nb = {
+	.notifier_call = dma_bus_notifier,
+};
+
 extern int swiotlb_late_init_with_default_size(size_t default_size);
 
 static int __init swiotlb_late_init(void)
 {
 	size_t swiotlb_size = min(SZ_64M, MAX_ORDER_NR_PAGES << PAGE_SHIFT);
 
-	dma_ops = &coherent_swiotlb_dma_ops;
+	/*
+	 * These must be registered before of_platform_populate().
+	 */
+	bus_register_notifier(&platform_bus_type, &platform_bus_nb);
+	bus_register_notifier(&amba_bustype, &amba_bus_nb);
+
+	dma_ops = &noncoherent_swiotlb_dma_ops;
 
 	return swiotlb_late_init_with_default_size(swiotlb_size);
 }
-subsys_initcall(swiotlb_late_init);
+arch_initcall(swiotlb_late_init);
 
 #define PREALLOC_DMA_DEBUG_ENTRIES	4096
 
arch/arm64/mm/mmu.c (+3)
···
 	if (pmd_none(*pmd))
 		return 0;
 
+	if (pmd_sect(*pmd))
+		return pfn_valid(pmd_pfn(*pmd));
+
 	pte = pte_offset_kernel(pmd, addr);
 	if (pte_none(*pte))
 		return 0;
arch/hexagon/include/asm/barrier.h (-37, file deleted)
-/*
- * Memory barrier definitions for the Hexagon architecture
- *
- * Copyright (c) 2010-2011, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
- * 02110-1301, USA.
- */
-
-#ifndef _ASM_BARRIER_H
-#define _ASM_BARRIER_H
-
-#define rmb()				barrier()
-#define read_barrier_depends()		barrier()
-#define wmb()				barrier()
-#define mb()				barrier()
-#define smp_rmb()			barrier()
-#define smp_read_barrier_depends()	barrier()
-#define smp_wmb()			barrier()
-#define smp_mb()			barrier()
-
-/* Set a value and use a memory barrier.  Used by the scheduler somewhere. */
-#define set_mb(var, value) \
-	do { var = value; mb(); } while (0)
-
-#endif /* _ASM_BARRIER_H */
···
  * edit the command line passed to vmlinux (by setting /chosen/bootargs).
  * The buffer is put in it's own section so that tools may locate it easier.
  */
-static char cmdline[COMMAND_LINE_SIZE]
+static char cmdline[BOOT_COMMAND_LINE_SIZE]
 	__attribute__((__section__("__builtin_cmdline")));
 
 static void prep_cmdline(void *chosen)
 {
 	if (cmdline[0] == '\0')
-		getprop(chosen, "bootargs", cmdline, COMMAND_LINE_SIZE-1);
+		getprop(chosen, "bootargs", cmdline, BOOT_COMMAND_LINE_SIZE-1);
 
 	printf("\n\rLinux/PowerPC load: %s", cmdline);
 	/* If possible, edit the command line */
 	if (console_ops.edit_cmdline)
-		console_ops.edit_cmdline(cmdline, COMMAND_LINE_SIZE);
+		console_ops.edit_cmdline(cmdline, BOOT_COMMAND_LINE_SIZE);
 	printf("\n\r");
 
 	/* Put the command line back into the devtree for the kernel */
···
 	 * built-in command line wasn't set by an external tool */
 	if ((loader_info.cmdline_len > 0) && (cmdline[0] == '\0'))
 		memmove(cmdline, loader_info.cmdline,
-			min(loader_info.cmdline_len, COMMAND_LINE_SIZE-1));
+			min(loader_info.cmdline_len, BOOT_COMMAND_LINE_SIZE-1));
 
 	if (console_ops.open && (console_ops.open() < 0))
 		exit();
arch/powerpc/boot/ops.h (+1/-1)
···
 #include "types.h"
 #include "string.h"
 
-#define	COMMAND_LINE_SIZE	512
+#define	BOOT_COMMAND_LINE_SIZE	2048
 #define	MAX_PATH_LEN		256
 #define	MAX_PROP_LEN		256 /* What should this be? */
 
arch/powerpc/boot/ps3.c (+2/-2)
···
  * The buffer is put in it's own section so that tools may locate it easier.
  */
 
-static char cmdline[COMMAND_LINE_SIZE]
+static char cmdline[BOOT_COMMAND_LINE_SIZE]
 	__attribute__((__section__("__builtin_cmdline")));
 
 static void prep_cmdline(void *chosen)
 {
 	if (cmdline[0] == '\0')
-		getprop(chosen, "bootargs", cmdline, COMMAND_LINE_SIZE-1);
+		getprop(chosen, "bootargs", cmdline, BOOT_COMMAND_LINE_SIZE-1);
 	else
 		setprop_str(chosen, "bootargs", cmdline);
 
arch/powerpc/include/asm/opal.h (+19/-23)
···
  * size except the last one in the list to be as well.
  */
 struct opal_sg_entry {
-	void    *data;
-	long    length;
+	__be64 data;
+	__be64 length;
 };
 
-/* sg list */
+/* SG list */
 struct opal_sg_list {
-	unsigned long num_entries;
-	struct opal_sg_list *next;
+	__be64 length;
+	__be64 next;
 	struct opal_sg_entry entry[];
 };
 
···
 int64_t opal_lpc_read(uint32_t chip_id, enum OpalLPCAddressType addr_type,
 		      uint32_t addr, __be32 *data, uint32_t sz);
 
-int64_t opal_read_elog(uint64_t buffer, size_t size, uint64_t log_id);
-int64_t opal_get_elog_size(uint64_t *log_id, size_t *size, uint64_t *elog_type);
+int64_t opal_read_elog(uint64_t buffer, uint64_t size, uint64_t log_id);
+int64_t opal_get_elog_size(__be64 *log_id, __be64 *size, __be64 *elog_type);
 int64_t opal_write_elog(uint64_t buffer, uint64_t size, uint64_t offset);
 int64_t opal_send_ack_elog(uint64_t log_id);
 void opal_resend_pending_logs(void);
···
 int64_t opal_manage_flash(uint8_t op);
 int64_t opal_update_flash(uint64_t blk_list);
 int64_t opal_dump_init(uint8_t dump_type);
-int64_t opal_dump_info(uint32_t *dump_id, uint32_t *dump_size);
-int64_t opal_dump_info2(uint32_t *dump_id, uint32_t *dump_size, uint32_t *dump_type);
+int64_t opal_dump_info(__be32 *dump_id, __be32 *dump_size);
+int64_t opal_dump_info2(__be32 *dump_id, __be32 *dump_size, __be32 *dump_type);
 int64_t opal_dump_read(uint32_t dump_id, uint64_t buffer);
 int64_t opal_dump_ack(uint32_t dump_id);
 int64_t opal_dump_resend_notification(void);
 
-int64_t opal_get_msg(uint64_t buffer, size_t size);
-int64_t opal_check_completion(uint64_t buffer, size_t size, uint64_t token);
+int64_t opal_get_msg(uint64_t buffer, uint64_t size);
+int64_t opal_check_completion(uint64_t buffer, uint64_t size, uint64_t token);
 int64_t opal_sync_host_reboot(void);
 int64_t opal_get_param(uint64_t token, uint32_t param_id, uint64_t buffer,
-		size_t length);
+		uint64_t length);
 int64_t opal_set_param(uint64_t token, uint32_t param_id, uint64_t buffer,
-		size_t length);
+		uint64_t length);
 int64_t opal_sensor_read(uint32_t sensor_hndl, int token, __be32 *sensor_data);
 
 /* Internal functions */
-extern int early_init_dt_scan_opal(unsigned long node, const char *uname, int depth, void *data);
+extern int early_init_dt_scan_opal(unsigned long node, const char *uname,
+				   int depth, void *data);
 extern int early_init_dt_scan_recoverable_ranges(unsigned long node,
 				 const char *uname, int depth, void *data);
 
···
 extern int opal_put_chars(uint32_t vtermno, const char *buf, int total_len);
 
 extern void hvc_opal_init_early(void);
-
-/* Internal functions */
-extern int early_init_dt_scan_opal(unsigned long node, const char *uname,
-				   int depth, void *data);
 
 extern int opal_notifier_register(struct notifier_block *nb);
 extern int opal_notifier_unregister(struct notifier_block *nb);
···
 extern void opal_notifier_disable(void);
 extern void opal_notifier_update_evt(uint64_t evt_mask, uint64_t evt_val);
 
-extern int opal_get_chars(uint32_t vtermno, char *buf, int count);
-extern int opal_put_chars(uint32_t vtermno, const char *buf, int total_len);
-
 extern int __opal_async_get_token(void);
 extern int opal_async_get_token_interruptible(void);
 extern int __opal_async_release_token(int token);
 extern int opal_async_release_token(int token);
 extern int opal_async_wait_response(uint64_t token, struct opal_msg *msg);
 extern int opal_get_sensor_data(u32 sensor_hndl, u32 *sensor_data);
-
-extern void hvc_opal_init_early(void);
 
 struct rtc_time;
 extern int opal_set_rtc_time(struct rtc_time *tm);
···
 extern int opal_resync_timebase(void);
 
 extern void opal_lpc_init(void);
+
+struct opal_sg_list *opal_vmalloc_to_sg_list(void *vmalloc_addr,
+					     unsigned long vmalloc_size);
+void opal_free_sg_list(struct opal_sg_list *sg);
 
 #endif /* __ASSEMBLY__ */
 
···
 	if (rtas_token("ibm,update-flash-64-and-reboot") ==
 		       RTAS_UNKNOWN_SERVICE) {
 		pr_info("rtas_flash: no firmware flash support\n");
-		return 1;
+		return -EINVAL;
 	}
 
 	rtas_validate_flash_data.buf = kzalloc(VALIDATE_BUF_SIZE, GFP_KERNEL);
arch/powerpc/kvm/book3s_hv_rmhandlers.S (+17/-1)
···
  */
 	.globl	kvm_start_guest
 kvm_start_guest:
+
+	/* Set runlatch bit the minute you wake up from nap */
+	mfspr	r1, SPRN_CTRLF
+	ori	r1, r1, 1
+	mtspr	SPRN_CTRLT, r1
+
 	ld	r2,PACATOC(r13)
 
 	li	r0,KVM_HWTHREAD_IN_KVM
···
 	li	r0, KVM_HWTHREAD_IN_NAP
 	stb	r0, HSTATE_HWTHREAD_STATE(r13)
 kvm_do_nap:
+	/* Clear the runlatch bit before napping */
+	mfspr	r2, SPRN_CTRLF
+	clrrdi	r2, r2, 1
+	mtspr	SPRN_CTRLT, r2
+
 	li	r3, LPCR_PECE0
 	mfspr	r4, SPRN_LPCR
 	rlwimi	r4, r3, 0, LPCR_PECE0 | LPCR_PECE1
···
 
 	/*
 	 * Take a nap until a decrementer or external or doobell interrupt
-	 * occurs, with PECE1, PECE0 and PECEDP set in LPCR
+	 * occurs, with PECE1, PECE0 and PECEDP set in LPCR. Also clear the
+	 * runlatch bit before napping.
 	 */
+	mfspr	r2, SPRN_CTRLF
+	clrrdi	r2, r2, 1
+	mtspr	SPRN_CTRLT, r2
+
 	li	r0,1
 	stb	r0,HSTATE_HWTHREAD_REQ(r13)
 	mfspr	r5,SPRN_LPCR
arch/powerpc/mm/hash_native_64.c (+16/-22)
···
 		va &= ~((1ul << mmu_psize_defs[apsize].shift) - 1);
 		va |= penc << 12;
 		va |= ssize << 8;
-		/* Add AVAL part */
-		if (psize != apsize) {
-			/*
-			 * MPSS, 64K base page size and 16MB parge page size
-			 * We don't need all the bits, but rest of the bits
-			 * must be ignored by the processor.
-			 * vpn cover upto 65 bits of va. (0...65) and we need
-			 * 58..64 bits of va.
-			 */
-			va |= (vpn & 0xfe);
-		}
+		/*
+		 * AVAL bits:
+		 * We don't need all the bits, but rest of the bits
+		 * must be ignored by the processor.
+		 * vpn cover upto 65 bits of va. (0...65) and we need
+		 * 58..64 bits of va.
+		 */
+		va |= (vpn & 0xfe); /* AVAL */
 		va |= 1; /* L */
 		asm volatile(ASM_FTR_IFCLR("tlbie %0,1", PPC_TLBIE(%1,%0), %2)
 			     : : "r" (va), "r"(0), "i" (CPU_FTR_ARCH_206)
···
 		va &= ~((1ul << mmu_psize_defs[apsize].shift) - 1);
 		va |= penc << 12;
 		va |= ssize << 8;
-		/* Add AVAL part */
-		if (psize != apsize) {
-			/*
-			 * MPSS, 64K base page size and 16MB parge page size
-			 * We don't need all the bits, but rest of the bits
-			 * must be ignored by the processor.
-			 * vpn cover upto 65 bits of va. (0...65) and we need
-			 * 58..64 bits of va.
-			 */
-			va |= (vpn & 0xfe);
-		}
+		/*
+		 * AVAL bits:
+		 * We don't need all the bits, but rest of the bits
+		 * must be ignored by the processor.
+		 * vpn cover upto 65 bits of va. (0...65) and we need
+		 * 58..64 bits of va.
+		 */
+		va |= (vpn & 0xfe);
 		va |= 1; /* L */
 		asm volatile(".long 0x7c000224 | (%0 << 11) | (1 << 21)"
 			     : : "r"(va) : "memory");
arch/powerpc/perf/hv-24x7.c (+25/-12)
···
 	return copy_len;
 }
 
-static unsigned long h_get_24x7_catalog_page(char page[static 4096],
-					     u32 version, u32 index)
+static unsigned long h_get_24x7_catalog_page_(unsigned long phys_4096,
+					      unsigned long version,
+					      unsigned long index)
 {
-	WARN_ON(!IS_ALIGNED((unsigned long)page, 4096));
-	return plpar_hcall_norets(H_GET_24X7_CATALOG_PAGE,
-			virt_to_phys(page),
+	pr_devel("h_get_24x7_catalog_page(0x%lx, %lu, %lu)",
+			phys_4096,
 			version,
 			index);
+	WARN_ON(!IS_ALIGNED(phys_4096, 4096));
+	return plpar_hcall_norets(H_GET_24X7_CATALOG_PAGE,
+			phys_4096,
+			version,
+			index);
+}
+
+static unsigned long h_get_24x7_catalog_page(char page[],
+					     u64 version, u32 index)
+{
+	return h_get_24x7_catalog_page_(virt_to_phys(page),
+					version, index);
 }
 
 static ssize_t catalog_read(struct file *filp, struct kobject *kobj,
···
 	ssize_t ret = 0;
 	size_t catalog_len = 0, catalog_page_len = 0, page_count = 0;
 	loff_t page_offset = 0;
-	uint32_t catalog_version_num = 0;
+	uint64_t catalog_version_num = 0;
 	void *page = kmem_cache_alloc(hv_page_cache, GFP_USER);
 	struct hv_24x7_catalog_page_0 *page_0 = page;
 	if (!page)
···
 		goto e_free;
 	}
 
-	catalog_version_num = be32_to_cpu(page_0->version);
+	catalog_version_num = be64_to_cpu(page_0->version);
 	catalog_page_len = be32_to_cpu(page_0->length);
 	catalog_len = catalog_page_len * 4096;
 
···
 			page, 4096, page_offset * 4096);
 e_free:
 	if (hret)
-		pr_err("h_get_24x7_catalog_page(ver=%d, page=%lld) failed: rc=%ld\n",
-				catalog_version_num, page_offset, hret);
+		pr_err("h_get_24x7_catalog_page(ver=%lld, page=%lld) failed:"
+				" rc=%ld\n",
+				catalog_version_num, page_offset, hret);
 	kfree(page);
 
 	pr_devel("catalog_read: offset=%lld(%lld) count=%zu(%zu) catalog_len=%zu(%zu) => %zd\n",
···
 static DEVICE_ATTR_RO(_name)
 
 PAGE_0_ATTR(catalog_version, "%lld\n",
-		(unsigned long long)be32_to_cpu(page_0->version));
+		(unsigned long long)be64_to_cpu(page_0->version));
 PAGE_0_ATTR(catalog_len, "%lld\n",
 		(unsigned long long)be32_to_cpu(page_0->length) * 4096);
 static BIN_ATTR_RO(catalog, 0/* real length varies */);
···
 	struct hv_perf_caps caps;
 
 	if (!firmware_has_feature(FW_FEATURE_LPAR)) {
-		pr_info("not a virtualized system, not enabling\n");
+		pr_debug("not a virtualized system, not enabling\n");
 		return -ENODEV;
 	}
 
 	hret = hv_perf_caps_get(&caps);
 	if (hret) {
-		pr_info("could not obtain capabilities, error 0x%80lx, not enabling\n",
+		pr_debug("could not obtain capabilities, not enabling, rc=%ld\n",
 				hret);
 		return -ENODEV;
 	}
arch/powerpc/perf/hv-gpci.c (+3/-3)
···
 	return sprintf(page, "0x%x\n", COUNTER_INFO_VERSION_CURRENT);
 }
 
-DEVICE_ATTR_RO(kernel_version);
+static DEVICE_ATTR_RO(kernel_version);
 HV_CAPS_ATTR(version, "0x%x\n");
 HV_CAPS_ATTR(ga, "%d\n");
 HV_CAPS_ATTR(expanded, "%d\n");
···
 	struct hv_perf_caps caps;
 
 	if (!firmware_has_feature(FW_FEATURE_LPAR)) {
-		pr_info("not a virtualized system, not enabling\n");
+		pr_debug("not a virtualized system, not enabling\n");
 		return -ENODEV;
 	}
 
 	hret = hv_perf_caps_get(&caps);
 	if (hret) {
-		pr_info("could not obtain capabilities, error 0x%80lx, not enabling\n",
+		pr_debug("could not obtain capabilities, not enabling, rc=%ld\n",
 				hret);
 		return -ENODEV;
 	}
arch/powerpc/platforms/powernv/opal-dump.c (+11/-83)
···
 	.default_attrs = dump_default_attrs,
 };
 
-static void free_dump_sg_list(struct opal_sg_list *list)
-{
-	struct opal_sg_list *sg1;
-	while (list) {
-		sg1 = list->next;
-		kfree(list);
-		list = sg1;
-	}
-	list = NULL;
-}
-
-static struct opal_sg_list *dump_data_to_sglist(struct dump_obj *dump)
-{
-	struct opal_sg_list *sg1, *list = NULL;
-	void *addr;
-	int64_t size;
-
-	addr = dump->buffer;
-	size = dump->size;
-
-	sg1 = kzalloc(PAGE_SIZE, GFP_KERNEL);
-	if (!sg1)
-		goto nomem;
-
-	list = sg1;
-	sg1->num_entries = 0;
-	while (size > 0) {
-		/* Translate virtual address to physical address */
-		sg1->entry[sg1->num_entries].data =
-			(void *)(vmalloc_to_pfn(addr) << PAGE_SHIFT);
-
-		if (size > PAGE_SIZE)
-			sg1->entry[sg1->num_entries].length = PAGE_SIZE;
-		else
-			sg1->entry[sg1->num_entries].length = size;
-
-		sg1->num_entries++;
-		if (sg1->num_entries >= SG_ENTRIES_PER_NODE) {
-			sg1->next = kzalloc(PAGE_SIZE, GFP_KERNEL);
-			if (!sg1->next)
-				goto nomem;
-
-			sg1 = sg1->next;
-			sg1->num_entries = 0;
-		}
-		addr += PAGE_SIZE;
-		size -= PAGE_SIZE;
-	}
-	return list;
-
-nomem:
-	pr_err("%s : Failed to allocate memory\n", __func__);
-	free_dump_sg_list(list);
-	return NULL;
-}
-
-static void sglist_to_phy_addr(struct opal_sg_list *list)
-{
-	struct opal_sg_list *sg, *next;
-
-	for (sg = list; sg; sg = next) {
-		next = sg->next;
-		/* Don't translate NULL pointer for last entry */
-		if (sg->next)
-			sg->next = (struct opal_sg_list *)__pa(sg->next);
-		else
-			sg->next = NULL;
-
-		/* Convert num_entries to length */
-		sg->num_entries =
-			sg->num_entries * sizeof(struct opal_sg_entry) + 16;
-	}
-}
-
-static int64_t dump_read_info(uint32_t *id, uint32_t *size, uint32_t *type)
-{
+static int64_t dump_read_info(uint32_t *dump_id, uint32_t *dump_size, uint32_t *dump_type)
+{
+	__be32 id, size, type;
 	int rc;
-	*type = 0xffffffff;
 
-	rc = opal_dump_info2(id, size, type);
+	type = cpu_to_be32(0xffffffff);
 
+	rc = opal_dump_info2(&id, &size, &type);
 	if (rc == OPAL_PARAMETER)
-		rc = opal_dump_info(id, size);
+		rc = opal_dump_info(&id, &size);
+
+	*dump_id = be32_to_cpu(id);
+	*dump_size = be32_to_cpu(size);
+	*dump_type = be32_to_cpu(type);
 
 	if (rc)
 		pr_warn("%s: Failed to get dump info (%d)\n",
···
 	}
 
 	/* Generate SG list */
-	list = dump_data_to_sglist(dump);
+	list = opal_vmalloc_to_sg_list(dump->buffer, dump->size);
 	if (!list) {
 		rc = -ENOMEM;
 		goto out;
 	}
-
-	/* Translate sg list addr to real address */
-	sglist_to_phy_addr(list);
 
 	/* First entry address */
 	addr = __pa(list);
···
 		  __func__, dump->id);
 
 	/* Free SG list */
-	free_dump_sg_list(list);
+	opal_free_sg_list(list);
 
 out:
 	return rc;
···
 }
 
 #ifdef CONFIG_KEXEC
+static void pnv_kexec_wait_secondaries_down(void)
+{
+	int my_cpu, i, notified = -1;
+
+	my_cpu = get_cpu();
+
+	for_each_online_cpu(i) {
+		uint8_t status;
+		int64_t rc;
+
+		if (i == my_cpu)
+			continue;
+
+		for (;;) {
+			rc = opal_query_cpu_status(get_hard_smp_processor_id(i),
+						   &status);
+			if (rc != OPAL_SUCCESS || status != OPAL_THREAD_STARTED)
+				break;
+			barrier();
+			if (i != notified) {
+				printk(KERN_INFO "kexec: waiting for cpu %d "
+				       "(physical %d) to enter OPAL\n",
+				       i, paca[i].hw_cpu_id);
+				notified = i;
+			}
+		}
+	}
+}
+
 static void pnv_kexec_cpu_down(int crash_shutdown, int secondary)
 {
 	xics_kexec_teardown_cpu(secondary);
 
-	/* Return secondary CPUs to firmware on OPAL v3 */
-	if (firmware_has_feature(FW_FEATURE_OPALv3) && secondary) {
+	/* On OPAL v3, we return all CPUs to firmware */
+
+	if (!firmware_has_feature(FW_FEATURE_OPALv3))
+		return;
+
+	if (secondary) {
+		/* Return secondary CPUs to firmware on OPAL v3 */
 		mb();
 		get_paca()->kexec_state = KEXEC_STATE_REAL_MODE;
 		mb();
 
 		/* Return the CPU to OPAL */
 		opal_return_cpu();
+	} else if (crash_shutdown) {
+		/*
+		 * On crash, we don't wait for secondaries to go
+		 * down as they might be unreachable or hung, so
+		 * instead we just wait a bit and move on.
+		 */
+		mdelay(1);
+	} else {
+		/* Primary waits for the secondaries to have reached OPAL */
+		pnv_kexec_wait_secondaries_down();
 	}
 }
 #endif /* CONFIG_KEXEC */
arch/powerpc/platforms/powernv/smp.c (+3)
···
 #include <asm/cputhreads.h>
 #include <asm/xics.h>
 #include <asm/opal.h>
+#include <asm/runlatch.h>
 
 #include "powernv.h"
 
···
 	 */
 	mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1);
 	while (!generic_check_cpu_restart(cpu)) {
+		ppc64_runlatch_off();
 		power7_nap();
+		ppc64_runlatch_on();
 		if (!generic_check_cpu_restart(cpu)) {
 			DBG("CPU%d Unexpected exit while offline !\n", cpu);
 			/* We may be getting an IPI, so we re-enable
···
 
 	start_pfn = base >> PAGE_SHIFT;
 
-	if (!pfn_valid(start_pfn)) {
-		memblock_remove(base, memblock_size);
-		return 0;
-	}
+	lock_device_hotplug();
+
+	if (!pfn_valid(start_pfn))
+		goto out;
 
 	block_sz = memory_block_size_bytes();
 	sections_per_block = block_sz / MIN_MEMORY_BLOCK_SIZE;
···
 		base += MIN_MEMORY_BLOCK_SIZE;
 	}
 
+out:
 	/* Update memory regions for memory remove */
 	memblock_remove(base, memblock_size);
+	unlock_device_hotplug();
 	return 0;
 }
 
arch/powerpc/sysdev/ppc4xx_pci.c (+1/-1)
···
 	return 1;
 }
 
-static int apm821xx_pciex_init_port_hw(struct ppc4xx_pciex_port *port)
+static int __init apm821xx_pciex_init_port_hw(struct ppc4xx_pciex_port *port)
 {
 	u32 val;
 
arch/s390/net/bpf_jit_comp.c (-1)
···
 	case BPF_S_LD_W_IND:
 	case BPF_S_LD_H_IND:
 	case BPF_S_LD_B_IND:
-	case BPF_S_LDX_B_MSH:
 	case BPF_S_LD_IMM:
 	case BPF_S_LD_MEM:
 	case BPF_S_MISC_TXA:
arch/sparc/include/asm/pgtable_64.h (+46/-37)
···
 
 #include <linux/sched.h>
 
+extern unsigned long sparc64_valid_addr_bitmap[];
+
+/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
+static inline bool __kern_addr_valid(unsigned long paddr)
+{
+	if ((paddr >> MAX_PHYS_ADDRESS_BITS) != 0UL)
+		return false;
+	return test_bit(paddr >> ILOG2_4MB, sparc64_valid_addr_bitmap);
+}
+
+static inline bool kern_addr_valid(unsigned long addr)
+{
+	unsigned long paddr = __pa(addr);
+
+	return __kern_addr_valid(paddr);
+}
+
 /* Entries per page directory level. */
 #define PTRS_PER_PTE	(1UL << (PAGE_SHIFT-3))
 #define PTRS_PER_PMD	(1UL << PMD_BITS)
···
 /* Kernel has a separate 44bit address space. */
 #define FIRST_USER_ADDRESS	0
 
-#define pte_ERROR(e)	__builtin_trap()
-#define pmd_ERROR(e)	__builtin_trap()
-#define pgd_ERROR(e)	__builtin_trap()
+#define pmd_ERROR(e)							\
+	pr_err("%s:%d: bad pmd %p(%016lx) seen at (%pS)\n",		\
+	       __FILE__, __LINE__, &(e), pmd_val(e), __builtin_return_address(0))
+#define pgd_ERROR(e)							\
+	pr_err("%s:%d: bad pgd %p(%016lx) seen at (%pS)\n",		\
+	       __FILE__, __LINE__, &(e), pgd_val(e), __builtin_return_address(0))
 
 #endif /* !(__ASSEMBLY__) */
 
···
 {
 	unsigned long mask, tmp;
 
-	/* SUN4U: 0x600307ffffffecb8 (negated == 0x9ffcf80000001347)
-	 * SUN4V: 0x30ffffffffffee17 (negated == 0xcf000000000011e8)
+	/* SUN4U: 0x630107ffffffec38 (negated == 0x9cfef800000013c7)
+	 * SUN4V: 0x33ffffffffffee07 (negated == 0xcc000000000011f8)
 	 *
 	 * Even if we use negation tricks the result is still a 6
 	 * instruction sequence, so don't try to play fancy and just
···
 	"	.previous\n"
 	: "=r" (mask), "=r" (tmp)
 	: "i" (_PAGE_PADDR_4U | _PAGE_MODIFIED_4U | _PAGE_ACCESSED_4U |
-	       _PAGE_CP_4U | _PAGE_CV_4U | _PAGE_E_4U | _PAGE_PRESENT_4U |
+	       _PAGE_CP_4U | _PAGE_CV_4U | _PAGE_E_4U |
 	       _PAGE_SPECIAL | _PAGE_PMD_HUGE | _PAGE_SZALL_4U),
 	  "i" (_PAGE_PADDR_4V | _PAGE_MODIFIED_4V | _PAGE_ACCESSED_4V |
-	       _PAGE_CP_4V | _PAGE_CV_4V | _PAGE_E_4V | _PAGE_PRESENT_4V |
+	       _PAGE_CP_4V | _PAGE_CV_4V | _PAGE_E_4V |
 	       _PAGE_SPECIAL | _PAGE_PMD_HUGE | _PAGE_SZALL_4V));
 
 	return __pte((pte_val(pte) & mask) | (pgprot_val(prot) & ~mask));
···
 {
 	pte_t pte = __pte(pmd_val(pmd));
 
-	return (pte_val(pte) & _PAGE_PMD_HUGE) && pte_present(pte);
+	return pte_val(pte) & _PAGE_PMD_HUGE;
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
···
 	return __pmd(pte_val(pte));
 }
 
-static inline pmd_t pmd_mknotpresent(pmd_t pmd)
-{
-	unsigned long mask;
-
-	if (tlb_type == hypervisor)
-		mask = _PAGE_PRESENT_4V;
-	else
-		mask = _PAGE_PRESENT_4U;
-
-	pmd_val(pmd) &= ~mask;
-
-	return pmd;
-}
-
 static inline pmd_t pmd_mksplitting(pmd_t pmd)
 {
 	pte_t pte = __pte(pmd_val(pmd));
···
 }
 
 #define pmd_none(pmd)			(!pmd_val(pmd))
+
+/* pmd_bad() is only called on non-trans-huge PMDs.  Our encoding is
+ * very simple, it's just the physical address.  PTE tables are of
+ * size PAGE_SIZE so make sure the sub-PAGE_SIZE bits are clear and
+ * the top bits outside of the range of any physical address size we
+ * support are clear as well.  We also validate the physical itself.
+ */
+#define pmd_bad(pmd)			((pmd_val(pmd) & ~PAGE_MASK) || \
+					 !__kern_addr_valid(pmd_val(pmd)))
+
+#define pud_none(pud)			(!pud_val(pud))
+
+#define pud_bad(pud)			((pud_val(pud) & ~PAGE_MASK) || \
+					 !__kern_addr_valid(pud_val(pud)))
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
···
 #define pud_page_vaddr(pud)		\
 	((unsigned long) __va(pud_val(pud)))
 #define pud_page(pud)			virt_to_page((void *)pud_page_vaddr(pud))
-#define pmd_bad(pmd)			(0)
 #define pmd_clear(pmdp)			(pmd_val(*(pmdp)) = 0UL)
-#define pud_none(pud)			(!pud_val(pud))
-#define pud_bad(pud)			(0)
 #define pud_present(pud)		(pud_val(pud) != 0U)
 #define pud_clear(pudp)			(pud_val(*(pudp)) = 0UL)
···
 extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
 				 pmd_t *pmd);
 
+#define __HAVE_ARCH_PMDP_INVALIDATE
+extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+			    pmd_t *pmdp);
+
 #define __HAVE_ARCH_PGTABLE_DEPOSIT
 extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 				       pgtable_t pgtable);
···
 #define pte_to_pgoff(pte)	(pte_val(pte) >> PAGE_SHIFT)
 extern pte_t pgoff_to_pte(unsigned long);
 #define PTE_FILE_MAX_BITS	(64UL - PAGE_SHIFT - 1UL)
-
-extern unsigned long sparc64_valid_addr_bitmap[];
-
-/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
-static inline bool kern_addr_valid(unsigned long addr)
-{
-	unsigned long paddr = __pa(addr);
-
-	if ((paddr >> 41UL) != 0UL)
-		return false;
-	return test_bit(paddr >> 22, sparc64_valid_addr_bitmap);
-}
 
 extern int page_in_phys_avail(unsigned long paddr);
 
···166166unsigned long compute_effective_address(struct pt_regs *regs,167167 unsigned int insn, unsigned int rd)168168{169169+ int from_kernel = (regs->tstate & TSTATE_PRIV) != 0;169170 unsigned int rs1 = (insn >> 14) & 0x1f;170171 unsigned int rs2 = insn & 0x1f;171171- int from_kernel = (regs->tstate & TSTATE_PRIV) != 0;172172+ unsigned long addr;172173173174 if (insn & 0x2000) {174175 maybe_flush_windows(rs1, 0, rd, from_kernel);175175- return (fetch_reg(rs1, regs) + sign_extend_imm13(insn));176176+ addr = (fetch_reg(rs1, regs) + sign_extend_imm13(insn));176177 } else {177178 maybe_flush_windows(rs1, rs2, rd, from_kernel);178178- return (fetch_reg(rs1, regs) + fetch_reg(rs2, regs));179179+ addr = (fetch_reg(rs1, regs) + fetch_reg(rs2, regs));179180 }181181+182182+ if (!from_kernel && test_thread_flag(TIF_32BIT))183183+ addr &= 0xffffffff;184184+185185+ return addr;180186}181187182188/* This is just to make gcc think die_if_kernel does return... */
+53-31
arch/sparc/mm/fault_64.c
···9696 pte_t *ptep, pte;9797 unsigned long pa;9898 u32 insn = 0;9999- unsigned long pstate;10099101101- if (pgd_none(*pgdp))102102- goto outret;100100+ if (pgd_none(*pgdp) || unlikely(pgd_bad(*pgdp)))101101+ goto out;103102 pudp = pud_offset(pgdp, tpc);104104- if (pud_none(*pudp))105105- goto outret;106106- pmdp = pmd_offset(pudp, tpc);107107- if (pmd_none(*pmdp))108108- goto outret;109109-110110- /* This disables preemption for us as well. */111111- __asm__ __volatile__("rdpr %%pstate, %0" : "=r" (pstate));112112- __asm__ __volatile__("wrpr %0, %1, %%pstate"113113- : : "r" (pstate), "i" (PSTATE_IE));114114- ptep = pte_offset_map(pmdp, tpc);115115- pte = *ptep;116116- if (!pte_present(pte))103103+ if (pud_none(*pudp) || unlikely(pud_bad(*pudp)))117104 goto out;118105119119- pa = (pte_pfn(pte) << PAGE_SHIFT);120120- pa += (tpc & ~PAGE_MASK);106106+ /* This disables preemption for us as well. */107107+ local_irq_disable();121108122122- /* Use phys bypass so we don't pollute dtlb/dcache. */123123- __asm__ __volatile__("lduwa [%1] %2, %0"124124- : "=r" (insn)125125- : "r" (pa), "i" (ASI_PHYS_USE_EC));109109+ pmdp = pmd_offset(pudp, tpc);110110+ if (pmd_none(*pmdp) || unlikely(pmd_bad(*pmdp)))111111+ goto out_irq_enable;126112113113+#ifdef CONFIG_TRANSPARENT_HUGEPAGE114114+ if (pmd_trans_huge(*pmdp)) {115115+ if (pmd_trans_splitting(*pmdp))116116+ goto out_irq_enable;117117+118118+ pa = pmd_pfn(*pmdp) << PAGE_SHIFT;119119+ pa += tpc & ~HPAGE_MASK;120120+121121+ /* Use phys bypass so we don't pollute dtlb/dcache. */122122+ __asm__ __volatile__("lduwa [%1] %2, %0"123123+ : "=r" (insn)124124+ : "r" (pa), "i" (ASI_PHYS_USE_EC));125125+ } else126126+#endif127127+ {128128+ ptep = pte_offset_map(pmdp, tpc);129129+ pte = *ptep;130130+ if (pte_present(pte)) {131131+ pa = (pte_pfn(pte) << PAGE_SHIFT);132132+ pa += (tpc & ~PAGE_MASK);133133+134134+ /* Use phys bypass so we don't pollute dtlb/dcache. 
*/135135+ __asm__ __volatile__("lduwa [%1] %2, %0"136136+ : "=r" (insn)137137+ : "r" (pa), "i" (ASI_PHYS_USE_EC));138138+ }139139+ pte_unmap(ptep);140140+ }141141+out_irq_enable:142142+ local_irq_enable();127143out:128128- pte_unmap(ptep);129129- __asm__ __volatile__("wrpr %0, 0x0, %%pstate" : : "r" (pstate));130130-outret:131144 return insn;132145}133146···166153}167154168155static void do_fault_siginfo(int code, int sig, struct pt_regs *regs,169169- unsigned int insn, int fault_code)156156+ unsigned long fault_addr, unsigned int insn,157157+ int fault_code)170158{171159 unsigned long addr;172160 siginfo_t info;···175161 info.si_code = code;176162 info.si_signo = sig;177163 info.si_errno = 0;178178- if (fault_code & FAULT_CODE_ITLB)164164+ if (fault_code & FAULT_CODE_ITLB) {179165 addr = regs->tpc;180180- else181181- addr = compute_effective_address(regs, insn, 0);166166+ } else {167167+ /* If we were able to probe the faulting instruction, use it168168+ * to compute a precise fault address. Otherwise use the fault169169+ * time provided address which may only have page granularity.170170+ */171171+ if (insn)172172+ addr = compute_effective_address(regs, insn, 0);173173+ else174174+ addr = fault_addr;175175+ }182176 info.si_addr = (void __user *) addr;183177 info.si_trapno = 0;184178···261239 /* The si_code was set to make clear whether262240 * this was a SEGV_MAPERR or SEGV_ACCERR fault.263241 */264264- do_fault_siginfo(si_code, SIGSEGV, regs, insn, fault_code);242242+ do_fault_siginfo(si_code, SIGSEGV, regs, address, insn, fault_code);265243 return;266244 }267245···547525 * Send a sigbus, regardless of whether we were in kernel548526 * or user mode.549527 */550550- do_fault_siginfo(BUS_ADRERR, SIGBUS, regs, insn, fault_code);528528+ do_fault_siginfo(BUS_ADRERR, SIGBUS, regs, address, insn, fault_code);551529552530 /* Kernel mode? Handle exceptions or die */553531 if (regs->tstate & TSTATE_PRIV)
+1-1
arch/sparc/mm/gup.c
···7373 struct page *head, *page, *tail;7474 int refs;75757676- if (!pmd_large(pmd))7676+ if (!(pmd_val(pmd) & _PAGE_VALID))7777 return 0;78787979 if (write && !pmd_write(pmd))
···3131 *3232 * Wrapper around acpi_enter_sleep_state() to be called by assembly.3333 */3434-acpi_status asmlinkage x86_acpi_enter_sleep_state(u8 state)3434+acpi_status asmlinkage __visible x86_acpi_enter_sleep_state(u8 state)3535{3636 return acpi_enter_sleep_state(state);3737}
+6-1
arch/x86/kernel/apic/io_apic.c
···21892189 cfg->move_in_progress = 0;21902190}2191219121922192-asmlinkage void smp_irq_move_cleanup_interrupt(void)21922192+asmlinkage __visible void smp_irq_move_cleanup_interrupt(void)21932193{21942194 unsigned vector, me;21952195···34233423int get_nr_irqs_gsi(void)34243424{34253425 return nr_irqs_gsi;34263426+}34273427+34283428+unsigned int arch_dynirq_lower_bound(unsigned int from)34293429+{34303430+ return from < nr_irqs_gsi ? nr_irqs_gsi : from;34263431}3427343234283433int __init arch_probe_nr_irqs(void)
···8888/*8989 * HPET command line enable / disable9090 */9191-static int boot_hpet_disable;9191+int boot_hpet_disable;9292int hpet_force_user;9393static int hpet_verbose;9494
+1-1
arch/x86/kernel/process_64.c
···52525353asmlinkage extern void ret_from_fork(void);54545555-asmlinkage DEFINE_PER_CPU(unsigned long, old_rsp);5555+__visible DEFINE_PER_CPU(unsigned long, old_rsp);56565757/* Prints also some state that isn't saved in the pt_regs */5858void __show_regs(struct pt_regs *regs, int all)
···168168 * this function calls the 'stop' function on all other CPUs in the system.169169 */170170171171-asmlinkage void smp_reboot_interrupt(void)171171+asmlinkage __visible void smp_reboot_interrupt(void)172172{173173 ack_APIC_irq();174174 irq_enter();
+3-3
arch/x86/kernel/traps.c
···357357 * for scheduling or signal handling. The actual stack switch is done in358358 * entry.S359359 */360360-asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)360360+asmlinkage __visible __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)361361{362362 struct pt_regs *regs = eregs;363363 /* Did already sync */···601601#endif602602}603603604604-asmlinkage void __attribute__((weak)) smp_thermal_interrupt(void)604604+asmlinkage __visible void __attribute__((weak)) smp_thermal_interrupt(void)605605{606606}607607608608-asmlinkage void __attribute__((weak)) smp_threshold_interrupt(void)608608+asmlinkage __visible void __attribute__((weak)) smp_threshold_interrupt(void)609609{610610}611611
+13-4
arch/x86/kernel/vsmp_64.c
···26262727#define TOPOLOGY_REGISTER_OFFSET 0x1028282929+/* Flag below is initialized once during vSMP PCI initialization. */3030+static int irq_routing_comply = 1;3131+2932#if defined CONFIG_PCI && defined CONFIG_PARAVIRT3033/*3134 * Interrupt control on vSMPowered systems:···3633 * and vice versa.3734 */38353939-asmlinkage unsigned long vsmp_save_fl(void)3636+asmlinkage __visible unsigned long vsmp_save_fl(void)4037{4138 unsigned long flags = native_save_fl();4239···5653}5754PV_CALLEE_SAVE_REGS_THUNK(vsmp_restore_fl);58555959-asmlinkage void vsmp_irq_disable(void)5656+asmlinkage __visible void vsmp_irq_disable(void)6057{6158 unsigned long flags = native_save_fl();6259···6461}6562PV_CALLEE_SAVE_REGS_THUNK(vsmp_irq_disable);66636767-asmlinkage void vsmp_irq_enable(void)6464+asmlinkage __visible void vsmp_irq_enable(void)6865{6966 unsigned long flags = native_save_fl();7067···104101#ifdef CONFIG_SMP105102 if (cap & ctl & BIT(8)) {106103 ctl &= ~BIT(8);104104+105105+ /* Interrupt routing set to ignore */106106+ irq_routing_comply = 0;107107+107108#ifdef CONFIG_PROC_FS108109 /* Don't let users change irq affinity via procfs */109110 no_irq_affinity = 1;···225218{226219 /* need to update phys_pkg_id */227220 apic->phys_pkg_id = apicid_phys_pkg_id;228228- apic->vector_allocation_domain = fill_vector_allocation_domain;221221+222222+ if (!irq_routing_comply)223223+ apic->vector_allocation_domain = fill_vector_allocation_domain;229224}230225231226void __init vsmp_init(void)
···503503 [number##_HIGH] = VMCS12_OFFSET(name)+4504504505505506506-static const unsigned long shadow_read_only_fields[] = {506506+static unsigned long shadow_read_only_fields[] = {507507 /*508508 * We do NOT shadow fields that are modified when L0509509 * traps and emulates any vmx instruction (e.g. VMPTRLD,···526526 GUEST_LINEAR_ADDRESS,527527 GUEST_PHYSICAL_ADDRESS528528};529529-static const int max_shadow_read_only_fields =529529+static int max_shadow_read_only_fields =530530 ARRAY_SIZE(shadow_read_only_fields);531531532532-static const unsigned long shadow_read_write_fields[] = {532532+static unsigned long shadow_read_write_fields[] = {533533 GUEST_RIP,534534 GUEST_RSP,535535 GUEST_CR0,···558558 HOST_FS_SELECTOR,559559 HOST_GS_SELECTOR560560};561561-static const int max_shadow_read_write_fields =561561+static int max_shadow_read_write_fields =562562 ARRAY_SIZE(shadow_read_write_fields);563563564564static const unsigned short vmcs_field_to_offset_table[] = {···30093009 }30103010}3011301130123012+static void init_vmcs_shadow_fields(void)30133013+{30143014+ int i, j;30153015+30163016+ /* No checks for read only fields yet */30173017+30183018+ for (i = j = 0; i < max_shadow_read_write_fields; i++) {30193019+ switch (shadow_read_write_fields[i]) {30203020+ case GUEST_BNDCFGS:30213021+ if (!vmx_mpx_supported())30223022+ continue;30233023+ break;30243024+ default:30253025+ break;30263026+ }30273027+30283028+ if (j < i)30293029+ shadow_read_write_fields[j] =30303030+ shadow_read_write_fields[i];30313031+ j++;30323032+ }30333033+ max_shadow_read_write_fields = j;30343034+30353035+ /* shadowed fields guest access without vmexit */30363036+ for (i = 0; i < max_shadow_read_write_fields; i++) {30373037+ clear_bit(shadow_read_write_fields[i],30383038+ vmx_vmwrite_bitmap);30393039+ clear_bit(shadow_read_write_fields[i],30403040+ vmx_vmread_bitmap);30413041+ }30423042+ for (i = 0; i < max_shadow_read_only_fields; i++)30433043+ clear_bit(shadow_read_only_fields[i],30443044+ 
vmx_vmread_bitmap);30453045+}30463046+30123047static __init int alloc_kvm_area(void)30133048{30143049 int cpu;···30743039 enable_vpid = 0;30753040 if (!cpu_has_vmx_shadow_vmcs())30763041 enable_shadow_vmcs = 0;30423042+ if (enable_shadow_vmcs)30433043+ init_vmcs_shadow_fields();3077304430783045 if (!cpu_has_vmx_ept() ||30793046 !cpu_has_vmx_ept_4levels()) {···8840880388418804 memset(vmx_vmread_bitmap, 0xff, PAGE_SIZE);88428805 memset(vmx_vmwrite_bitmap, 0xff, PAGE_SIZE);88438843- /* shadowed read/write fields */88448844- for (i = 0; i < max_shadow_read_write_fields; i++) {88458845- clear_bit(shadow_read_write_fields[i], vmx_vmwrite_bitmap);88468846- clear_bit(shadow_read_write_fields[i], vmx_vmread_bitmap);88478847- }88488848- /* shadowed read only fields */88498849- for (i = 0; i < max_shadow_read_only_fields; i++)88508850- clear_bit(shadow_read_only_fields[i], vmx_vmread_bitmap);8851880688528807 /*88538808 * Allow direct access to the PC debug port (it is often used for I/O
+1-1
arch/x86/kvm/x86.c
···280280}281281EXPORT_SYMBOL_GPL(kvm_set_apic_base);282282283283-asmlinkage void kvm_spurious_fault(void)283283+asmlinkage __visible void kvm_spurious_fault(void)284284{285285 /* Fault while not rebooting. We want the trace. */286286 BUG();
+2-2
arch/x86/lguest/boot.c
···233233 * flags word contains all kind of stuff, but in practice Linux only cares234234 * about the interrupt flag. Our "save_flags()" just returns that.235235 */236236-asmlinkage unsigned long lguest_save_fl(void)236236+asmlinkage __visible unsigned long lguest_save_fl(void)237237{238238 return lguest_data.irq_enabled;239239}240240241241/* Interrupts go off... */242242-asmlinkage void lguest_irq_disable(void)242242+asmlinkage __visible void lguest_irq_disable(void)243243{244244 lguest_data.irq_enabled = 0;245245}
+1-1
arch/x86/lib/msr.c
···7676 if (m1.q == m.q)7777 return 0;78787979- err = msr_write(msr, &m);7979+ err = msr_write(msr, &m1);8080 if (err)8181 return err;8282
+8-8
arch/x86/math-emu/errors.c
···302302 0x242 in div_Xsig.S303303 */304304305305-asmlinkage void FPU_exception(int n)305305+asmlinkage __visible void FPU_exception(int n)306306{307307 int i, int_type;308308···492492493493/* Invalid arith operation on Valid registers */494494/* Returns < 0 if the exception is unmasked */495495-asmlinkage int arith_invalid(int deststnr)495495+asmlinkage __visible int arith_invalid(int deststnr)496496{497497498498 EXCEPTION(EX_Invalid);···507507}508508509509/* Divide a finite number by zero */510510-asmlinkage int FPU_divide_by_zero(int deststnr, u_char sign)510510+asmlinkage __visible int FPU_divide_by_zero(int deststnr, u_char sign)511511{512512 FPU_REG *dest = &st(deststnr);513513 int tag = TAG_Valid;···539539}540540541541/* This may be called often, so keep it lean */542542-asmlinkage void set_precision_flag_up(void)542542+asmlinkage __visible void set_precision_flag_up(void)543543{544544 if (control_word & CW_Precision)545545 partial_status |= (SW_Precision | SW_C1); /* The masked response */···548548}549549550550/* This may be called often, so keep it lean */551551-asmlinkage void set_precision_flag_down(void)551551+asmlinkage __visible void set_precision_flag_down(void)552552{553553 if (control_word & CW_Precision) { /* The masked response */554554 partial_status &= ~SW_C1;···557557 EXCEPTION(EX_Precision);558558}559559560560-asmlinkage int denormal_operand(void)560560+asmlinkage __visible int denormal_operand(void)561561{562562 if (control_word & CW_Denormal) { /* The masked response */563563 partial_status |= SW_Denorm_Op;···568568 }569569}570570571571-asmlinkage int arith_overflow(FPU_REG *dest)571571+asmlinkage __visible int arith_overflow(FPU_REG *dest)572572{573573 int tag = TAG_Valid;574574···596596597597}598598599599-asmlinkage int arith_underflow(FPU_REG *dest)599599+asmlinkage __visible int arith_underflow(FPU_REG *dest)600600{601601 int tag = TAG_Valid;602602
+64-19
arch/x86/platform/efi/early_printk.c
···14141515static const struct font_desc *font;1616static u32 efi_x, efi_y;1717+static void *efi_fb;1818+static bool early_efi_keep;17191818-static __init void early_efi_clear_scanline(unsigned int y)2020+/*2121+ * efi earlyprintk need use early_ioremap to map the framebuffer.2222+ * But early_ioremap is not usable for earlyprintk=efi,keep, ioremap should2323+ * be used instead. ioremap will be available after paging_init() which is2424+ * earlier than initcall callbacks. Thus adding this early initcall function2525+ * early_efi_map_fb to map the whole efi framebuffer.2626+ */2727+static __init int early_efi_map_fb(void)1928{2020- unsigned long base, *dst;2121- u16 len;2929+ unsigned long base, size;3030+3131+ if (!early_efi_keep)3232+ return 0;22332334 base = boot_params.screen_info.lfb_base;2424- len = boot_params.screen_info.lfb_linelength;3535+ size = boot_params.screen_info.lfb_size;3636+ efi_fb = ioremap(base, size);25372626- dst = early_ioremap(base + y*len, len);3838+ return efi_fb ? 
0 : -ENOMEM;3939+}4040+early_initcall(early_efi_map_fb);4141+4242+/*4343+ * early_efi_map maps efi framebuffer region [start, start + len -1]4444+ * In case earlyprintk=efi,keep we have the whole framebuffer mapped already4545+ * so just return the offset efi_fb + start.4646+ */4747+static __init_refok void *early_efi_map(unsigned long start, unsigned long len)4848+{4949+ unsigned long base;5050+5151+ base = boot_params.screen_info.lfb_base;5252+5353+ if (efi_fb)5454+ return (efi_fb + start);5555+ else5656+ return early_ioremap(base + start, len);5757+}5858+5959+static __init_refok void early_efi_unmap(void *addr, unsigned long len)6060+{6161+ if (!efi_fb)6262+ early_iounmap(addr, len);6363+}6464+6565+static void early_efi_clear_scanline(unsigned int y)6666+{6767+ unsigned long *dst;6868+ u16 len;6969+7070+ len = boot_params.screen_info.lfb_linelength;7171+ dst = early_efi_map(y*len, len);2772 if (!dst)2873 return;29743075 memset(dst, 0, len);3131- early_iounmap(dst, len);7676+ early_efi_unmap(dst, len);3277}33783434-static __init void early_efi_scroll_up(void)7979+static void early_efi_scroll_up(void)3580{3636- unsigned long base, *dst, *src;8181+ unsigned long *dst, *src;3782 u16 len;3883 u32 i, height;39844040- base = boot_params.screen_info.lfb_base;4185 len = boot_params.screen_info.lfb_linelength;4286 height = boot_params.screen_info.lfb_height;43874488 for (i = 0; i < height - font->height; i++) {4545- dst = early_ioremap(base + i*len, len);8989+ dst = early_efi_map(i*len, len);4690 if (!dst)4791 return;48924949- src = early_ioremap(base + (i + font->height) * len, len);9393+ src = early_efi_map((i + font->height) * len, len);5094 if (!src) {5151- early_iounmap(dst, len);9595+ early_efi_unmap(dst, len);5296 return;5397 }54985599 memmove(dst, src, len);561005757- early_iounmap(src, len);5858- early_iounmap(dst, len);101101+ early_efi_unmap(src, len);102102+ early_efi_unmap(dst, len);59103 }60104}61105···12379 }12480}12581126126-static __init void8282+static 
void12783early_efi_write(struct console *con, const char *str, unsigned int num)12884{12985 struct screen_info *si;130130- unsigned long base;13186 unsigned int len;13287 const char *s;13388 void *dst;13489135135- base = boot_params.screen_info.lfb_base;13690 si = &boot_params.screen_info;13791 len = si->lfb_linelength;13892···151109 for (h = 0; h < font->height; h++) {152110 unsigned int n, x;153111154154- dst = early_ioremap(base + (efi_y + h) * len, len);112112+ dst = early_efi_map((efi_y + h) * len, len);155113 if (!dst)156114 return;157115···165123 s++;166124 }167125168168- early_iounmap(dst, len);126126+ early_efi_unmap(dst, len);169127 }170128171129 num -= count;···221179 for (i = 0; i < (yres - efi_y) / font->height; i++)222180 early_efi_scroll_up();223181182182+ /* early_console_register will unset CON_BOOT in case ,keep */183183+ if (!(con->flags & CON_BOOT))184184+ early_efi_keep = true;224185 return 0;225186}226187
+1-1
arch/x86/platform/olpc/olpc-xo1-pm.c
···7575 return 0;7676}77777878-asmlinkage int xo1_do_sleep(u8 sleep_state)7878+asmlinkage __visible int xo1_do_sleep(u8 sleep_state)7979{8080 void *pgd_addr = __va(read_cr3());8181
+1-1
arch/x86/power/hibernate_64.c
···2323extern __visible const void __nosave_begin, __nosave_end;24242525/* Defined in hibernate_asm_64.S */2626-extern asmlinkage int restore_image(void);2626+extern asmlinkage __visible int restore_image(void);27272828/*2929 * Address to jump to in the last phase of restore in order to get to the image
+1-1
arch/x86/xen/enlighten.c
···15151515}1516151615171517/* First C function to be called on Xen boot */15181518-asmlinkage void __init xen_start_kernel(void)15181518+asmlinkage __visible void __init xen_start_kernel(void)15191519{15201520 struct physdev_set_iopl set_iopl;15211521 int rc;
+3-3
arch/x86/xen/irq.c
···2323 (void)HYPERVISOR_xen_version(0, NULL);2424}25252626-asmlinkage unsigned long xen_save_fl(void)2626+asmlinkage __visible unsigned long xen_save_fl(void)2727{2828 struct vcpu_info *vcpu;2929 unsigned long flags;···6363}6464PV_CALLEE_SAVE_REGS_THUNK(xen_restore_fl);65656666-asmlinkage void xen_irq_disable(void)6666+asmlinkage __visible void xen_irq_disable(void)6767{6868 /* There's a one instruction preempt window here. We need to6969 make sure we're don't switch CPUs between getting the vcpu···7474}7575PV_CALLEE_SAVE_REGS_THUNK(xen_irq_disable);76767777-asmlinkage void xen_irq_enable(void)7777+asmlinkage __visible void xen_irq_enable(void)7878{7979 struct vcpu_info *vcpu;8080
+19-1
arch/xtensa/Kconfig
···1414 select GENERIC_PCI_IOMAP1515 select ARCH_WANT_IPC_PARSE_VERSION1616 select ARCH_WANT_OPTIONAL_GPIOLIB1717+ select BUILDTIME_EXTABLE_SORT1718 select CLONE_BACKWARDS1819 select IRQ_DOMAIN1920 select HAVE_OPROFILE···190189191190 If in doubt, say Y.192191192192+config HIGHMEM193193+ bool "High Memory Support"194194+ help195195+ Linux can use the full amount of RAM in the system by196196+ default. However, the default MMUv2 setup only maps the197197+ lowermost 128 MB of memory linearly to the areas starting198198+ at 0xd0000000 (cached) and 0xd8000000 (uncached).199199+ When there are more than 128 MB memory in the system not200200+ all of it can be "permanently mapped" by the kernel.201201+ The physical memory that's not permanently mapped is called202202+ "high memory".203203+204204+ If you are compiling a kernel which will never run on a205205+ machine with more than 128 MB total physical RAM, answer206206+ N here.207207+208208+ If unsure, say Y.209209+193210endmenu194211195212config XTENSA_CALIBRATE_CCOUNT···243224244225config XTENSA_PLATFORM_ISS245226 bool "ISS"246246- depends on TTY247227 select XTENSA_CALIBRATE_CCOUNT248228 select SERIAL_CONSOLE249229 help
···3737 unsigned long data[0]; /* data */3838} bp_tag_t;39394040-typedef struct meminfo {4040+struct bp_meminfo {4141 unsigned long type;4242 unsigned long start;4343 unsigned long end;4444-} meminfo_t;4545-4646-#define SYSMEM_BANKS_MAX 54444+};47454846#define MEMORY_TYPE_CONVENTIONAL 0x10004947#define MEMORY_TYPE_NONE 0x20005050-5151-typedef struct sysmem_info {5252- int nr_banks;5353- meminfo_t bank[SYSMEM_BANKS_MAX];5454-} sysmem_info_t;5555-5656-extern sysmem_info_t sysmem;57485849#endif5950#endif
+58
arch/xtensa/include/asm/fixmap.h
···11+/*22+ * fixmap.h: compile-time virtual memory allocation33+ *44+ * This file is subject to the terms and conditions of the GNU General Public55+ * License. See the file "COPYING" in the main directory of this archive66+ * for more details.77+ *88+ * Copyright (C) 1998 Ingo Molnar99+ *1010+ * Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 19991111+ */1212+1313+#ifndef _ASM_FIXMAP_H1414+#define _ASM_FIXMAP_H1515+1616+#include <asm/pgtable.h>1717+#ifdef CONFIG_HIGHMEM1818+#include <linux/threads.h>1919+#include <asm/kmap_types.h>2020+#endif2121+2222+/*2323+ * Here we define all the compile-time 'special' virtual2424+ * addresses. The point is to have a constant address at2525+ * compile time, but to set the physical address only2626+ * in the boot process. We allocate these special addresses2727+ * from the end of the consistent memory region backwards.2828+ * Also this lets us do fail-safe vmalloc(), we2929+ * can guarantee that these special addresses and3030+ * vmalloc()-ed addresses never overlap.3131+ *3232+ * these 'compile-time allocated' memory buffers are3333+ * fixed-size 4k pages. (or larger if used with an increment3434+ * higher than 1) use fixmap_set(idx,phys) to associate3535+ * physical memory with fixmap indices.3636+ */3737+enum fixed_addresses {3838+#ifdef CONFIG_HIGHMEM3939+ /* reserved pte's for temporary kernel mappings */4040+ FIX_KMAP_BEGIN,4141+ FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,4242+#endif4343+ __end_of_fixed_addresses4444+};4545+4646+#define FIXADDR_TOP (VMALLOC_START - PAGE_SIZE)4747+#define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)4848+#define FIXADDR_START ((FIXADDR_TOP - FIXADDR_SIZE) & PMD_MASK)4949+5050+#include <asm-generic/fixmap.h>5151+5252+#define kmap_get_fixmap_pte(vaddr) \5353+ pte_offset_kernel( \5454+ pmd_offset(pud_offset(pgd_offset_k(vaddr), (vaddr)), (vaddr)), \5555+ (vaddr) \5656+ )5757+5858+#endif
···11+/*22+ * sysmem-related prototypes.33+ *44+ * This file is subject to the terms and conditions of the GNU General Public55+ * License. See the file "COPYING" in the main directory of this archive66+ * for more details.77+ *88+ * Copyright (C) 2014 Cadence Design Systems Inc.99+ */1010+1111+#ifndef _XTENSA_SYSMEM_H1212+#define _XTENSA_SYSMEM_H1313+1414+#define SYSMEM_BANKS_MAX 311515+1616+struct meminfo {1717+ unsigned long start;1818+ unsigned long end;1919+};2020+2121+/*2222+ * Bank array is sorted by .start.2323+ * Banks don't overlap and there's at least one page gap2424+ * between adjacent bank entries.2525+ */2626+struct sysmem_info {2727+ int nr_banks;2828+ struct meminfo bank[SYSMEM_BANKS_MAX];2929+};3030+3131+extern struct sysmem_info sysmem;3232+3333+int add_sysmem_bank(unsigned long start, unsigned long end);3434+int mem_reserve(unsigned long, unsigned long, int);3535+void bootmem_init(void);3636+void zones_init(void);3737+3838+#endif /* _XTENSA_SYSMEM_H */
+4-7
arch/xtensa/include/asm/tlbflush.h
···3636 unsigned long page);3737void local_flush_tlb_range(struct vm_area_struct *vma,3838 unsigned long start, unsigned long end);3939+void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);39404041#ifdef CONFIG_SMP4142···4544void flush_tlb_page(struct vm_area_struct *, unsigned long);4645void flush_tlb_range(struct vm_area_struct *, unsigned long,4746 unsigned long);4848-4949-static inline void flush_tlb_kernel_range(unsigned long start,5050- unsigned long end)5151-{5252- flush_tlb_all();5353-}4747+void flush_tlb_kernel_range(unsigned long start, unsigned long end);54485549#else /* !CONFIG_SMP */5650···5458#define flush_tlb_page(vma, page) local_flush_tlb_page(vma, page)5559#define flush_tlb_range(vma, vmaddr, end) local_flush_tlb_range(vma, vmaddr, \5660 end)5757-#define flush_tlb_kernel_range(start, end) local_flush_tlb_all()6161+#define flush_tlb_kernel_range(start, end) local_flush_tlb_kernel_range(start, \6262+ end)58635964#endif /* CONFIG_SMP */6065
···5959 *6060 */61616262+#if (DCACHE_WAY_SIZE > PAGE_SIZE) && defined(CONFIG_HIGHMEM)6363+#error "HIGHMEM is not supported on cores with aliasing cache."6464+#endif6565+6266#if (DCACHE_WAY_SIZE > PAGE_SIZE) && XCHAL_DCACHE_IS_WRITEBACK63676468/*···183179#else184180 if (!PageReserved(page) && !test_bit(PG_arch_1, &page->flags)185181 && (vma->vm_flags & VM_EXEC) != 0) {186186- unsigned long paddr = (unsigned long) page_address(page);182182+ unsigned long paddr = (unsigned long)kmap_atomic(page);187183 __flush_dcache_page(paddr);188184 __invalidate_icache_page(paddr);189185 set_bit(PG_arch_1, &page->flags);186186+ kunmap_atomic((void *)paddr);190187 }191188#endif192189}
+72
arch/xtensa/mm/highmem.c
···11+/*22+ * High memory support for Xtensa architecture33+ *44+ * This file is subject to the terms and conditions of the GNU General55+ * Public License. See the file "COPYING" in the main directory of66+ * this archive for more details.77+ *88+ * Copyright (C) 2014 Cadence Design Systems Inc.99+ */1010+1111+#include <linux/export.h>1212+#include <linux/highmem.h>1313+#include <asm/tlbflush.h>1414+1515+static pte_t *kmap_pte;1616+1717+void *kmap_atomic(struct page *page)1818+{1919+ enum fixed_addresses idx;2020+ unsigned long vaddr;2121+ int type;2222+2323+ pagefault_disable();2424+ if (!PageHighMem(page))2525+ return page_address(page);2626+2727+ type = kmap_atomic_idx_push();2828+ idx = type + KM_TYPE_NR * smp_processor_id();2929+ vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);3030+#ifdef CONFIG_DEBUG_HIGHMEM3131+ BUG_ON(!pte_none(*(kmap_pte - idx)));3232+#endif3333+ set_pte(kmap_pte - idx, mk_pte(page, PAGE_KERNEL_EXEC));3434+3535+ return (void *)vaddr;3636+}3737+EXPORT_SYMBOL(kmap_atomic);3838+3939+void __kunmap_atomic(void *kvaddr)4040+{4141+ int idx, type;4242+4343+ if (kvaddr >= (void *)FIXADDR_START &&4444+ kvaddr < (void *)FIXADDR_TOP) {4545+ type = kmap_atomic_idx();4646+ idx = type + KM_TYPE_NR * smp_processor_id();4747+4848+ /*4949+ * Force other mappings to Oops if they'll try to access this5050+ * pte without first remap it. 
Keeping stale mappings around5151+ * is a bad idea also, in case the page changes cacheability5252+ * attributes or becomes a protected page in a hypervisor.5353+ */5454+ pte_clear(&init_mm, kvaddr, kmap_pte - idx);5555+ local_flush_tlb_kernel_range((unsigned long)kvaddr,5656+ (unsigned long)kvaddr + PAGE_SIZE);5757+5858+ kmap_atomic_idx_pop();5959+ }6060+6161+ pagefault_enable();6262+}6363+EXPORT_SYMBOL(__kunmap_atomic);6464+6565+void __init kmap_init(void)6666+{6767+ unsigned long kmap_vstart;6868+6969+ /* cache the first kmap pte */7070+ kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);7171+ kmap_pte = kmap_get_fixmap_pte(kmap_vstart);7272+}
arch/xtensa/mm/init.c | +255 -48
···88 * for more details.99 *1010 * Copyright (C) 2001 - 2005 Tensilica Inc.1111+ * Copyright (C) 2014 Cadence Design Systems Inc.1112 *1213 * Chris Zankel <chris@zankel.net>1314 * Joe Taylor <joe@tensilica.com, joetylr@yahoo.com>···2019#include <linux/errno.h>2120#include <linux/bootmem.h>2221#include <linux/gfp.h>2222+#include <linux/highmem.h>2323#include <linux/swap.h>2424#include <linux/mman.h>2525#include <linux/nodemask.h>···2927#include <asm/bootparam.h>3028#include <asm/page.h>3129#include <asm/sections.h>3030+#include <asm/sysmem.h>3131+3232+struct sysmem_info sysmem __initdata;3333+3434+static void __init sysmem_dump(void)3535+{3636+ unsigned i;3737+3838+ pr_debug("Sysmem:\n");3939+ for (i = 0; i < sysmem.nr_banks; ++i)4040+ pr_debug(" 0x%08lx - 0x%08lx (%ldK)\n",4141+ sysmem.bank[i].start, sysmem.bank[i].end,4242+ (sysmem.bank[i].end - sysmem.bank[i].start) >> 10);4343+}4444+4545+/*4646+ * Find bank with maximal .start such that bank.start <= start4747+ */4848+static inline struct meminfo * __init find_bank(unsigned long start)4949+{5050+ unsigned i;5151+ struct meminfo *it = NULL;5252+5353+ for (i = 0; i < sysmem.nr_banks; ++i)5454+ if (sysmem.bank[i].start <= start)5555+ it = sysmem.bank + i;5656+ else5757+ break;5858+ return it;5959+}6060+6161+/*6262+ * Move all memory banks starting at 'from' to a new place at 'to',6363+ * adjust nr_banks accordingly.6464+ * Both 'from' and 'to' must be inside the sysmem.bank.6565+ *6666+ * Returns: 0 (success), -ENOMEM (not enough space in the sysmem.bank).6767+ */6868+static int __init move_banks(struct meminfo *to, struct meminfo *from)6969+{7070+ unsigned n = sysmem.nr_banks - (from - sysmem.bank);7171+7272+ if (to > from && to - from + sysmem.nr_banks > SYSMEM_BANKS_MAX)7373+ return -ENOMEM;7474+ if (to != from)7575+ memmove(to, from, n * sizeof(struct meminfo));7676+ sysmem.nr_banks += to - from;7777+ return 0;7878+}7979+8080+/*8181+ * Add new bank to sysmem. 
Resulting sysmem is the union of bytes of the8282+ * original sysmem and the new bank.8383+ *8484+ * Returns: 0 (success), < 0 (error)8585+ */8686+int __init add_sysmem_bank(unsigned long start, unsigned long end)8787+{8888+ unsigned i;8989+ struct meminfo *it = NULL;9090+ unsigned long sz;9191+ unsigned long bank_sz = 0;9292+9393+ if (start == end ||9494+ (start < end) != (PAGE_ALIGN(start) < (end & PAGE_MASK))) {9595+ pr_warn("Ignoring small memory bank 0x%08lx size: %ld bytes\n",9696+ start, end - start);9797+ return -EINVAL;9898+ }9999+100100+ start = PAGE_ALIGN(start);101101+ end &= PAGE_MASK;102102+ sz = end - start;103103+104104+ it = find_bank(start);105105+106106+ if (it)107107+ bank_sz = it->end - it->start;108108+109109+ if (it && bank_sz >= start - it->start) {110110+ if (end - it->start > bank_sz)111111+ it->end = end;112112+ else113113+ return 0;114114+ } else {115115+ if (!it)116116+ it = sysmem.bank;117117+ else118118+ ++it;119119+120120+ if (it - sysmem.bank < sysmem.nr_banks &&121121+ it->start - start <= sz) {122122+ it->start = start;123123+ if (it->end - it->start < sz)124124+ it->end = end;125125+ else126126+ return 0;127127+ } else {128128+ if (move_banks(it + 1, it) < 0) {129129+ pr_warn("Ignoring memory bank 0x%08lx size %ld bytes\n",130130+ start, end - start);131131+ return -EINVAL;132132+ }133133+ it->start = start;134134+ it->end = end;135135+ return 0;136136+ }137137+ }138138+ sz = it->end - it->start;139139+ for (i = it + 1 - sysmem.bank; i < sysmem.nr_banks; ++i)140140+ if (sysmem.bank[i].start - it->start <= sz) {141141+ if (sz < sysmem.bank[i].end - it->start)142142+ it->end = sysmem.bank[i].end;143143+ } else {144144+ break;145145+ }146146+147147+ move_banks(it + 1, sysmem.bank + i);148148+ return 0;149149+}3215033151/*34152 * mem_reserve(start, end, must_exist)35153 *36154 * Reserve some memory from the memory pool.155155+ * If must_exist is set and a part of the region being reserved does not exist156156+ * memory map is not 
altered.37157 *38158 * Parameters:39159 * start Start of region,···16339 * must_exist Must exist in memory pool.16440 *16541 * Returns:166166- * 0 (memory area couldn't be mapped)167167- * -1 (success)4242+ * 0 (success)4343+ * < 0 (error)16844 */1694517046int __init mem_reserve(unsigned long start, unsigned long end, int must_exist)17147{172172- int i;173173-174174- if (start == end)175175- return 0;4848+ struct meminfo *it;4949+ struct meminfo *rm = NULL;5050+ unsigned long sz;5151+ unsigned long bank_sz = 0;1765217753 start = start & PAGE_MASK;17854 end = PAGE_ALIGN(end);5555+ sz = end - start;5656+ if (!sz)5757+ return -EINVAL;17958180180- for (i = 0; i < sysmem.nr_banks; i++)181181- if (start < sysmem.bank[i].end182182- && end >= sysmem.bank[i].start)183183- break;5959+ it = find_bank(start);18460185185- if (i == sysmem.nr_banks) {186186- if (must_exist)187187- printk (KERN_WARNING "mem_reserve: [0x%0lx, 0x%0lx) "188188- "not in any region!\n", start, end);189189- return 0;6161+ if (it)6262+ bank_sz = it->end - it->start;6363+6464+ if ((!it || end - it->start > bank_sz) && must_exist) {6565+ pr_warn("mem_reserve: [0x%0lx, 0x%0lx) not in any region!\n",6666+ start, end);6767+ return -EINVAL;19068 }19169192192- if (start > sysmem.bank[i].start) {193193- if (end < sysmem.bank[i].end) {194194- /* split entry */195195- if (sysmem.nr_banks >= SYSMEM_BANKS_MAX)196196- panic("meminfo overflow\n");197197- sysmem.bank[sysmem.nr_banks].start = end;198198- sysmem.bank[sysmem.nr_banks].end = sysmem.bank[i].end;199199- sysmem.nr_banks++;7070+ if (it && start - it->start < bank_sz) {7171+ if (start == it->start) {7272+ if (end - it->start < bank_sz) {7373+ it->start = end;7474+ return 0;7575+ } else {7676+ rm = it;7777+ }7878+ } else {7979+ it->end = start;8080+ if (end - it->start < bank_sz)8181+ return add_sysmem_bank(end,8282+ it->start + bank_sz);8383+ ++it;20084 }201201- sysmem.bank[i].end = start;202202-203203- } else if (end < sysmem.bank[i].end) {204204- 
sysmem.bank[i].start = end;205205-206206- } else {207207- /* remove entry */208208- sysmem.nr_banks--;209209- sysmem.bank[i].start = sysmem.bank[sysmem.nr_banks].start;210210- sysmem.bank[i].end = sysmem.bank[sysmem.nr_banks].end;21185 }212212- return -1;8686+8787+ if (!it)8888+ it = sysmem.bank;8989+9090+ for (; it < sysmem.bank + sysmem.nr_banks; ++it) {9191+ if (it->end - start <= sz) {9292+ if (!rm)9393+ rm = it;9494+ } else {9595+ if (it->start - start < sz)9696+ it->start = end;9797+ break;9898+ }9999+ }100100+101101+ if (rm)102102+ move_banks(rm, it);103103+104104+ return 0;213105}214106215107···23999 unsigned long bootmap_start, bootmap_size;240100 int i;241101102102+ sysmem_dump();242103 max_low_pfn = max_pfn = 0;243104 min_low_pfn = ~0;244105···297156298157void __init zones_init(void)299158{300300- unsigned long zones_size[MAX_NR_ZONES];301301- int i;302302-303159 /* All pages are DMA-able, so we put them all in the DMA zone. */304304-305305- zones_size[ZONE_DMA] = max_low_pfn - ARCH_PFN_OFFSET;306306- for (i = 1; i < MAX_NR_ZONES; i++)307307- zones_size[i] = 0;308308-160160+ unsigned long zones_size[MAX_NR_ZONES] = {161161+ [ZONE_DMA] = max_low_pfn - ARCH_PFN_OFFSET,309162#ifdef CONFIG_HIGHMEM310310- zones_size[ZONE_HIGHMEM] = max_pfn - max_low_pfn;163163+ [ZONE_HIGHMEM] = max_pfn - max_low_pfn,311164#endif312312-165165+ };313166 free_area_init_node(0, zones_size, ARCH_PFN_OFFSET, NULL);314167}315168···313178314179void __init mem_init(void)315180{316316- max_mapnr = max_low_pfn - ARCH_PFN_OFFSET;317317- high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);318318-319181#ifdef CONFIG_HIGHMEM320320-#error HIGHGMEM not implemented in init.c182182+ unsigned long tmp;183183+184184+ reset_all_zones_managed_pages();185185+ for (tmp = max_low_pfn; tmp < max_pfn; tmp++)186186+ free_highmem_page(pfn_to_page(tmp));321187#endif188188+189189+ max_mapnr = max_pfn - ARCH_PFN_OFFSET;190190+ high_memory = (void *)__va(max_low_pfn << PAGE_SHIFT);322191323192 
free_all_bootmem();324193325194 mem_init_print_info(NULL);195195+ pr_info("virtual kernel memory layout:\n"196196+#ifdef CONFIG_HIGHMEM197197+ " pkmap : 0x%08lx - 0x%08lx (%5lu kB)\n"198198+ " fixmap : 0x%08lx - 0x%08lx (%5lu kB)\n"199199+#endif200200+ " vmalloc : 0x%08x - 0x%08x (%5u MB)\n"201201+ " lowmem : 0x%08x - 0x%08lx (%5lu MB)\n",202202+#ifdef CONFIG_HIGHMEM203203+ PKMAP_BASE, PKMAP_BASE + LAST_PKMAP * PAGE_SIZE,204204+ (LAST_PKMAP*PAGE_SIZE) >> 10,205205+ FIXADDR_START, FIXADDR_TOP,206206+ (FIXADDR_TOP - FIXADDR_START) >> 10,207207+#endif208208+ VMALLOC_START, VMALLOC_END,209209+ (VMALLOC_END - VMALLOC_START) >> 20,210210+ PAGE_OFFSET, PAGE_OFFSET +211211+ (max_low_pfn - min_low_pfn) * PAGE_SIZE,212212+ ((max_low_pfn - min_low_pfn) * PAGE_SIZE) >> 20);326213}327214328215#ifdef CONFIG_BLK_DEV_INITRD···361204{362205 free_initmem_default(-1);363206}207207+208208+static void __init parse_memmap_one(char *p)209209+{210210+ char *oldp;211211+ unsigned long start_at, mem_size;212212+213213+ if (!p)214214+ return;215215+216216+ oldp = p;217217+ mem_size = memparse(p, &p);218218+ if (p == oldp)219219+ return;220220+221221+ switch (*p) {222222+ case '@':223223+ start_at = memparse(p + 1, &p);224224+ add_sysmem_bank(start_at, start_at + mem_size);225225+ break;226226+227227+ case '$':228228+ start_at = memparse(p + 1, &p);229229+ mem_reserve(start_at, start_at + mem_size, 0);230230+ break;231231+232232+ case 0:233233+ mem_reserve(mem_size, 0, 0);234234+ break;235235+236236+ default:237237+ pr_warn("Unrecognized memmap syntax: %s\n", p);238238+ break;239239+ }240240+}241241+242242+static int __init parse_memmap_opt(char *str)243243+{244244+ while (str) {245245+ char *k = strchr(str, ',');246246+247247+ if (k)248248+ *k++ = 0;249249+250250+ parse_memmap_one(str);251251+ str = k;252252+ }253253+254254+ return 0;255255+}256256+early_param("memmap", parse_memmap_opt);
···92929393/* early initialization */94949595-extern sysmem_info_t __initdata sysmem;9696-9797-void platform_init(bp_tag_t* first)9595+void __init platform_init(bp_tag_t *first)9896{9999- /* Set default memory block if not provided by the bootloader. */100100-101101- if (sysmem.nr_banks == 0) {102102- sysmem.nr_banks = 1;103103- sysmem.bank[0].start = PLATFORM_DEFAULT_MEM_START;104104- sysmem.bank[0].end = PLATFORM_DEFAULT_MEM_START105105- + PLATFORM_DEFAULT_MEM_SIZE;106106- }10797}1089810999/* Heartbeat. Let the LED blink. */
crypto/crypto_user.c | +1 -1
···466466 type -= CRYPTO_MSG_BASE;467467 link = &crypto_dispatch[type];468468469469- if (!capable(CAP_NET_ADMIN))469469+ if (!netlink_capable(skb, CAP_NET_ADMIN))470470 return -EPERM;471471472472 if ((type == (CRYPTO_MSG_GETALG - CRYPTO_MSG_BASE) &&
drivers/acpi/acpi_processor.c | +4 -3
···170170 acpi_status status;171171 int ret;172172173173+ if (pr->apic_id == -1)174174+ return -ENODEV;175175+173176 status = acpi_evaluate_integer(pr->handle, "_STA", NULL, &sta);174177 if (ACPI_FAILURE(status) || !(sta & ACPI_STA_DEVICE_PRESENT))175178 return -ENODEV;···263260 }264261265262 apic_id = acpi_get_apicid(pr->handle, device_declaration, pr->acpi_id);266266- if (apic_id < 0) {263263+ if (apic_id < 0)267264 acpi_handle_debug(pr->handle, "failed to get CPU APIC ID.\n");268268- return -ENODEV;269269- }270265 pr->apic_id = apic_id;271266272267 cpu_index = acpi_map_cpuid(pr->apic_id, pr->acpi_id);
drivers/acpi/device_pm.c | +39 -7
···900900 */901901int acpi_subsys_prepare(struct device *dev)902902{903903- /*904904- * Devices having power.ignore_children set may still be necessary for905905- * suspending their children in the next phase of device suspend.906906- */907907- if (dev->power.ignore_children)908908- pm_runtime_resume(dev);903903+ struct acpi_device *adev = ACPI_COMPANION(dev);904904+ u32 sys_target;905905+ int ret, state;909906910910- return pm_generic_prepare(dev);907907+ ret = pm_generic_prepare(dev);908908+ if (ret < 0)909909+ return ret;910910+911911+ if (!adev || !pm_runtime_suspended(dev)912912+ || device_may_wakeup(dev) != !!adev->wakeup.prepare_count)913913+ return 0;914914+915915+ sys_target = acpi_target_system_state();916916+ if (sys_target == ACPI_STATE_S0)917917+ return 1;918918+919919+ if (adev->power.flags.dsw_present)920920+ return 0;921921+922922+ ret = acpi_dev_pm_get_state(dev, adev, sys_target, NULL, &state);923923+ return !ret && state == adev->power.state;911924}912925EXPORT_SYMBOL_GPL(acpi_subsys_prepare);926926+927927+/**928928+ * acpi_subsys_complete - Finalize device's resume during system resume.929929+ * @dev: Device to handle.930930+ */931931+void acpi_subsys_complete(struct device *dev)932932+{933933+ /*934934+ * If the device had been runtime-suspended before the system went into935935+ * the sleep state it is going out of and it has never been resumed till936936+ * now, resume it in case the firmware powered it up.937937+ */938938+ if (dev->power.direct_complete)939939+ pm_request_resume(dev);940940+}941941+EXPORT_SYMBOL_GPL(acpi_subsys_complete);913942914943/**915944 * acpi_subsys_suspend - Run the device driver's suspend callback.···952923 pm_runtime_resume(dev);953924 return pm_generic_suspend(dev);954925}926926+EXPORT_SYMBOL_GPL(acpi_subsys_suspend);955927956928/**957929 * acpi_subsys_suspend_late - Suspend device using ACPI.···998968 pm_runtime_resume(dev);999969 return 
pm_generic_freeze(dev);1000970}971971+EXPORT_SYMBOL_GPL(acpi_subsys_freeze);10019721002973#endif /* CONFIG_PM_SLEEP */1003974···1010979#endif1011980#ifdef CONFIG_PM_SLEEP1012981 .prepare = acpi_subsys_prepare,982982+ .complete = acpi_subsys_complete,1013983 .suspend = acpi_subsys_suspend,1014984 .suspend_late = acpi_subsys_suspend_late,1015985 .resume_early = acpi_subsys_resume_early,
drivers/acpi/ec.c | +12 -9
···206206 spin_unlock_irqrestore(&ec->lock, flags);207207}208208209209-static int acpi_ec_sync_query(struct acpi_ec *ec);209209+static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data);210210211211static int ec_check_sci_sync(struct acpi_ec *ec, u8 state)212212{213213 if (state & ACPI_EC_FLAG_SCI) {214214 if (!test_and_set_bit(EC_FLAGS_QUERY_PENDING, &ec->flags))215215- return acpi_ec_sync_query(ec);215215+ return acpi_ec_sync_query(ec, NULL);216216 }217217 return 0;218218}···443443444444EXPORT_SYMBOL(ec_get_handle);445445446446-static int acpi_ec_query_unlocked(struct acpi_ec *ec, u8 *data);447447-448446/*449449- * Clears stale _Q events that might have accumulated in the EC.447447+ * Process _Q events that might have accumulated in the EC.450448 * Run with locked ec mutex.451449 */452450static void acpi_ec_clear(struct acpi_ec *ec)···453455 u8 value = 0;454456455457 for (i = 0; i < ACPI_EC_CLEAR_MAX; i++) {456456- status = acpi_ec_query_unlocked(ec, &value);458458+ status = acpi_ec_sync_query(ec, &value);457459 if (status || !value)458460 break;459461 }···580582 kfree(handler);581583}582584583583-static int acpi_ec_sync_query(struct acpi_ec *ec)585585+static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data)584586{585587 u8 value = 0;586588 int status;587589 struct acpi_ec_query_handler *handler, *copy;588588- if ((status = acpi_ec_query_unlocked(ec, &value)))590590+591591+ status = acpi_ec_query_unlocked(ec, &value);592592+ if (data)593593+ *data = value;594594+ if (status)589595 return status;596596+590597 list_for_each_entry(handler, &ec->list, node) {591598 if (value == handler->query_bit) {592599 /* have custom handler for this bit */···615612 if (!ec)616613 return;617614 mutex_lock(&ec->mutex);618618- acpi_ec_sync_query(ec);615615+ acpi_ec_sync_query(ec, NULL);619616 mutex_unlock(&ec->mutex);620617}621618
drivers/acpi/scan.c | +4
···15511551 */15521552 if (acpi_has_method(device->handle, "_PSC"))15531553 device->power.flags.explicit_get = 1;15541554+15541555 if (acpi_has_method(device->handle, "_IRC"))15551556 device->power.flags.inrush_current = 1;15571557+15581558+ if (acpi_has_method(device->handle, "_DSW"))15591559+ device->power.flags.dsw_present = 1;1556156015571561 /*15581562 * Enumerate supported power management states
···5252static LIST_HEAD(deferred_probe_pending_list);5353static LIST_HEAD(deferred_probe_active_list);5454static struct workqueue_struct *deferred_wq;5555+static atomic_t deferred_trigger_count = ATOMIC_INIT(0);55565657/**5758 * deferred_probe_work_func() - Retry probing devices in the active list.···136135 * This function moves all devices from the pending list to the active137136 * list and schedules the deferred probe workqueue to process them. It138137 * should be called anytime a driver is successfully bound to a device.138138+ *139139+ * Note, there is a race condition in multi-threaded probe. In the case where140140+ * more than one device is probing at the same time, it is possible for one141141+ * probe to complete successfully while another is about to defer. If the second142142+ * depends on the first, then it will get put on the pending list after the143143+ * trigger event has already occurred and will be stuck there.144144+ *145145+ * The atomic 'deferred_trigger_count' is used to determine if a successful146146+ * trigger has occurred in the midst of probing a driver. 
If the trigger count147147+ * changes in the midst of a probe, then deferred processing should be triggered148148+ * again.139149 */140150static void driver_deferred_probe_trigger(void)141151{···159147 * into the active list so they can be retried by the workqueue160148 */161149 mutex_lock(&deferred_probe_mutex);150150+ atomic_inc(&deferred_trigger_count);162151 list_splice_tail_init(&deferred_probe_pending_list,163152 &deferred_probe_active_list);164153 mutex_unlock(&deferred_probe_mutex);···278265static int really_probe(struct device *dev, struct device_driver *drv)279266{280267 int ret = 0;268268+ int local_trigger_count = atomic_read(&deferred_trigger_count);281269282270 atomic_inc(&probe_count);283271 pr_debug("bus: '%s': %s: probing driver %s with device %s\n",···324310 /* Driver requested deferred probing */325311 dev_info(dev, "Driver %s requests probe deferral\n", drv->name);326312 driver_deferred_probe_add(dev);313313+ /* Did a trigger occur while probing? Need to re-trigger if yes */314314+ if (local_trigger_count != atomic_read(&deferred_trigger_count))315315+ driver_deferred_probe_trigger();327316 } else if (ret != -ENODEV && ret != -ENXIO) {328317 /* driver matched but the probe failed */329318 printk(KERN_WARNING
···369369 return;370370371371 /* Can only change if privileged. */372372- if (!capable(CAP_NET_ADMIN)) {372372+ if (!__netlink_ns_capable(nsp, &init_user_ns, CAP_NET_ADMIN)) {373373 err = EPERM;374374 goto out;375375 }
drivers/cpufreq/longhaul.c | +24 -12
···242242 * Sets a new clock ratio.243243 */244244245245-static void longhaul_setstate(struct cpufreq_policy *policy,245245+static int longhaul_setstate(struct cpufreq_policy *policy,246246 unsigned int table_index)247247{248248 unsigned int mults_index;···258258 /* Safety precautions */259259 mult = mults[mults_index & 0x1f];260260 if (mult == -1)261261- return;261261+ return -EINVAL;262262+262263 speed = calc_speed(mult);263264 if ((speed > highest_speed) || (speed < lowest_speed))264264- return;265265+ return -EINVAL;266266+265267 /* Voltage transition before frequency transition? */266268 if (can_scale_voltage && longhaul_index < table_index)267269 dir = 1;268270269271 freqs.old = calc_speed(longhaul_get_cpu_mult());270272 freqs.new = speed;271271-272272- cpufreq_freq_transition_begin(policy, &freqs);273273274274 pr_debug("Setting to FSB:%dMHz Mult:%d.%dx (%s)\n",275275 fsb, mult/10, mult%10, print_speed(speed/1000));···385385 goto retry_loop;386386 }387387 }388388- /* Report true CPU frequency */389389- cpufreq_freq_transition_end(policy, &freqs, 0);390388391391- if (!bm_timeout)389389+ if (!bm_timeout) {392390 printk(KERN_INFO PFX "Warning: Timeout while waiting for "393391 "idle PCI bus.\n");392392+ return -EBUSY;393393+ }394394+395395+ return 0;394396}395397396398/*···633631 unsigned int i;634632 unsigned int dir = 0;635633 u8 vid, current_vid;634634+ int retval = 0;636635637636 if (!can_scale_voltage)638638- longhaul_setstate(policy, table_index);637637+ retval = longhaul_setstate(policy, table_index);639638 else {640639 /* On test system voltage transitions exceeding single641640 * step up or down were turning motherboard off. 
Both···651648 while (i != table_index) {652649 vid = (longhaul_table[i].driver_data >> 8) & 0x1f;653650 if (vid != current_vid) {654654- longhaul_setstate(policy, i);651651+ retval = longhaul_setstate(policy, i);655652 current_vid = vid;656653 msleep(200);657654 }···660657 else661658 i--;662659 }663663- longhaul_setstate(policy, table_index);660660+ retval = longhaul_setstate(policy, table_index);664661 }662662+665663 longhaul_index = table_index;666666- return 0;664664+ return retval;667665}668666669667···972968973969 for (i = 0; i < numscales; i++) {974970 if (mults[i] == maxmult) {971971+ struct cpufreq_freqs freqs;972972+973973+ freqs.old = policy->cur;974974+ freqs.new = longhaul_table[i].frequency;975975+ freqs.flags = 0;976976+977977+ cpufreq_freq_transition_begin(policy, &freqs);975978 longhaul_setstate(policy, i);979979+ cpufreq_freq_transition_end(policy, &freqs, 0);976980 break;977981 }978982 }
drivers/cpufreq/powernow-k6.c | +13 -10
···138138static int powernow_k6_target(struct cpufreq_policy *policy,139139 unsigned int best_i)140140{141141- struct cpufreq_freqs freqs;142141143142 if (clock_ratio[best_i].driver_data > max_multiplier) {144143 printk(KERN_ERR PFX "invalid target frequency\n");145144 return -EINVAL;146145 }147146148148- freqs.old = busfreq * powernow_k6_get_cpu_multiplier();149149- freqs.new = busfreq * clock_ratio[best_i].driver_data;150150-151151- cpufreq_freq_transition_begin(policy, &freqs);152152-153147 powernow_k6_set_cpu_multiplier(best_i);154154-155155- cpufreq_freq_transition_end(policy, &freqs, 0);156148157149 return 0;158150}···219227static int powernow_k6_cpu_exit(struct cpufreq_policy *policy)220228{221229 unsigned int i;222222- for (i = 0; i < 8; i++) {223223- if (i == max_multiplier)230230+231231+ for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) {232232+ if (clock_ratio[i].driver_data == max_multiplier) {233233+ struct cpufreq_freqs freqs;234234+235235+ freqs.old = policy->cur;236236+ freqs.new = clock_ratio[i].frequency;237237+ freqs.flags = 0;238238+239239+ cpufreq_freq_transition_begin(policy, &freqs);224240 powernow_k6_target(policy, i);241241+ cpufreq_freq_transition_end(policy, &freqs, 0);242242+ break;243243+ }225244 }226245 return 0;227246}
drivers/cpufreq/powernow-k7.c | -4
···269269270270 freqs.new = powernow_table[index].frequency;271271272272- cpufreq_freq_transition_begin(policy, &freqs);273273-274272 /* Now do the magic poking into the MSRs. */275273276274 if (have_a0 == 1) /* A0 errata 5 */···287289288290 if (have_a0 == 1)289291 local_irq_enable();290290-291291- cpufreq_freq_transition_end(policy, &freqs, 0);292292293293 return 0;294294}
···34343535bool intel_enable_ppgtt(struct drm_device *dev, bool full)3636{3737- if (i915.enable_ppgtt == 0 || !HAS_ALIASING_PPGTT(dev))3737+ if (i915.enable_ppgtt == 0)3838 return false;39394040 if (i915.enable_ppgtt == 1 && full)4141 return false;42424343+ return true;4444+}4545+4646+static int sanitize_enable_ppgtt(struct drm_device *dev, int enable_ppgtt)4747+{4848+ if (enable_ppgtt == 0 || !HAS_ALIASING_PPGTT(dev))4949+ return 0;5050+5151+ if (enable_ppgtt == 1)5252+ return 1;5353+5454+ if (enable_ppgtt == 2 && HAS_PPGTT(dev))5555+ return 2;5656+4357#ifdef CONFIG_INTEL_IOMMU4458 /* Disable ppgtt on SNB if VT-d is on. */4559 if (INTEL_INFO(dev)->gen == 6 && intel_iommu_gfx_mapped) {4660 DRM_INFO("Disabling PPGTT because VT-d is on\n");4747- return false;6161+ return 0;4862 }4963#endif50645151- /* Full ppgtt disabled by default for now due to issues. */5252- if (full)5353- return false; /* HAS_PPGTT(dev) */5454- else5555- return HAS_ALIASING_PPGTT(dev);6565+ return HAS_ALIASING_PPGTT(dev) ? 1 : 0;5666}57675868#define GEN6_PPGTT_PD_ENTRIES 512···20412031 gtt->base.total >> 20);20422032 DRM_DEBUG_DRIVER("GMADR size = %ldM\n", gtt->mappable_end >> 20);20432033 DRM_DEBUG_DRIVER("GTT stolen size = %zdM\n", gtt->stolen_size >> 20);20342034+ /*20352035+ * i915.enable_ppgtt is read-only, so do an early pass to validate the20362036+ * user's requested state against the hardware/driver capabilities. We20372037+ * do this now so that we can print out any log messages once rather20382038+ * than every time we check intel_enable_ppgtt().20392039+ */20402040+ i915.enable_ppgtt = sanitize_enable_ppgtt(dev, i915.enable_ppgtt);20412041+ DRM_DEBUG_DRIVER("ppgtt mode: %i\n", i915.enable_ppgtt);2044204220452043 return 0;20462044}
drivers/gpu/drm/i915/i915_irq.c | +14 -4
···13621362 spin_lock(&dev_priv->irq_lock);13631363 for (i = 1; i < HPD_NUM_PINS; i++) {1364136413651365- WARN_ONCE(hpd[i] & hotplug_trigger &&13661366- dev_priv->hpd_stats[i].hpd_mark == HPD_DISABLED,13671367- "Received HPD interrupt (0x%08x) on pin %d (0x%08x) although disabled\n",13681368- hotplug_trigger, i, hpd[i]);13651365+ if (hpd[i] & hotplug_trigger &&13661366+ dev_priv->hpd_stats[i].hpd_mark == HPD_DISABLED) {13671367+ /*13681368+ * On GMCH platforms the interrupt mask bits only13691369+ * prevent irq generation, not the setting of the13701370+ * hotplug bits itself. So only WARN about unexpected13711371+ * interrupts on saner platforms.13721372+ */13731373+ WARN_ONCE(INTEL_INFO(dev)->gen >= 5 && !IS_VALLEYVIEW(dev),13741374+ "Received HPD interrupt (0x%08x) on pin %d (0x%08x) although disabled\n",13751375+ hotplug_trigger, i, hpd[i]);13761376+13771377+ continue;13781378+ }1369137913701380 if (!(hpd[i] & hotplug_trigger) ||13711381 dev_priv->hpd_stats[i].hpd_mark != HPD_ENABLED)
···96549654 PIPE_CONF_CHECK_I(pipe_src_w);96559655 PIPE_CONF_CHECK_I(pipe_src_h);9656965696579657- PIPE_CONF_CHECK_I(gmch_pfit.control);96589658- /* pfit ratios are autocomputed by the hw on gen4+ */96599659- if (INTEL_INFO(dev)->gen < 4)96609660- PIPE_CONF_CHECK_I(gmch_pfit.pgm_ratios);96619661- PIPE_CONF_CHECK_I(gmch_pfit.lvds_border_bits);96579657+ /*96589658+ * FIXME: BIOS likes to set up a cloned config with lvds+external96599659+ * screen. Since we don't yet re-compute the pipe config when moving96609660+ * just the lvds port away to another pipe the sw tracking won't match.96619661+ *96629662+ * Proper atomic modesets with recomputed global state will fix this.96639663+ * Until then just don't check gmch state for inherited modes.96649664+ */96659665+ if (!PIPE_CONF_QUIRK(PIPE_CONFIG_QUIRK_INHERITED_MODE)) {96669666+ PIPE_CONF_CHECK_I(gmch_pfit.control);96679667+ /* pfit ratios are autocomputed by the hw on gen4+ */96689668+ if (INTEL_INFO(dev)->gen < 4)96699669+ PIPE_CONF_CHECK_I(gmch_pfit.pgm_ratios);96709670+ PIPE_CONF_CHECK_I(gmch_pfit.lvds_border_bits);96719671+ }96729672+96629673 PIPE_CONF_CHECK_I(pch_pfit.enabled);96639674 if (current_config->pch_pfit.enabled) {96649675 PIPE_CONF_CHECK_I(pch_pfit.pos);···1139511384 }1139611385}11397113861139811398-static void1139911399-intel_connector_break_all_links(struct intel_connector *connector)1140011400-{1140111401- connector->base.dpms = DRM_MODE_DPMS_OFF;1140211402- connector->base.encoder = NULL;1140311403- connector->encoder->connectors_active = false;1140411404- connector->encoder->base.crtc = NULL;1140511405-}1140611406-1140711387static void intel_enable_pipe_a(struct drm_device *dev)1140811388{1140911389 struct intel_connector *connector;···1147611474 if (connector->encoder->base.crtc != &crtc->base)1147711475 continue;11478114761147911479- intel_connector_break_all_links(connector);1147711477+ connector->base.dpms = DRM_MODE_DPMS_OFF;1147811478+ connector->base.encoder = NULL;1148011479 }1148011480+ /* 
multiple connectors may have the same encoder:1148111481+ * handle them and break crtc link separately */1148211482+ list_for_each_entry(connector, &dev->mode_config.connector_list,1148311483+ base.head)1148411484+ if (connector->encoder->base.crtc == &crtc->base) {1148511485+ connector->encoder->base.crtc = NULL;1148611486+ connector->encoder->connectors_active = false;1148711487+ }11481114881148211489 WARN_ON(crtc->active);1148311490 crtc->base.enabled = false;···1156811557 drm_get_encoder_name(&encoder->base));1156911558 encoder->disable(encoder);1157011559 }1156011560+ encoder->base.crtc = NULL;1156111561+ encoder->connectors_active = false;11571115621157211563 /* Inconsistent output/port/pipe state happens presumably due to1157311564 * a bug in one of the get_hw_state functions. Or someplace else···1158011567 base.head) {1158111568 if (connector->encoder != encoder)1158211569 continue;1158311583-1158411584- intel_connector_break_all_links(connector);1157011570+ connector->base.dpms = DRM_MODE_DPMS_OFF;1157111571+ connector->base.encoder = NULL;1158511572 }1158611573 }1158711574 /* Enabled encoders without active connectors will be fixed in···1162811615 list_for_each_entry(crtc, &dev->mode_config.crtc_list,1162911616 base.head) {1163011617 memset(&crtc->config, 0, sizeof(crtc->config));1161811618+1161911619+ crtc->config.quirks |= PIPE_CONFIG_QUIRK_INHERITED_MODE;11631116201163211621 crtc->active = dev_priv->display.get_pipe_config(crtc,1163311622 &crtc->config);
drivers/gpu/drm/i915/intel_dp.c | +12 -2
···105105 case DP_LINK_BW_2_7:106106 break;107107 case DP_LINK_BW_5_4: /* 1.2 capable displays may advertise higher bw */108108- if ((IS_HASWELL(dev) || INTEL_INFO(dev)->gen >= 8) &&108108+ if (((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) ||109109+ INTEL_INFO(dev)->gen >= 8) &&109110 intel_dp->dpcd[DP_DPCD_REV] >= 0x12)110111 max_link_bw = DP_LINK_BW_5_4;111112 else···36203619{36213620 struct drm_connector *connector = &intel_connector->base;36223621 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);36233623- struct drm_device *dev = intel_dig_port->base.base.dev;36223622+ struct intel_encoder *intel_encoder = &intel_dig_port->base;36233623+ struct drm_device *dev = intel_encoder->base.dev;36243624 struct drm_i915_private *dev_priv = dev->dev_private;36253625 struct drm_display_mode *fixed_mode = NULL;36263626 bool has_dpcd;···3630362836313629 if (!is_edp(intel_dp))36323630 return true;36313631+36323632+ /* The VDD bit needs a power domain reference, so if the bit is already36333633+ * enabled when we boot, grab this reference. */36343634+ if (edp_have_panel_vdd(intel_dp)) {36353635+ enum intel_display_power_domain power_domain;36363636+ power_domain = intel_display_port_power_domain(intel_encoder);36373637+ intel_display_power_get(dev_priv, power_domain);36383638+ }3633363936343640 /* Cache DPCD and EDID for edp. */36353641 intel_edp_panel_vdd_on(intel_dp);
drivers/gpu/drm/i915/intel_drv.h | +2 -1
···236236 * tracked with quirk flags so that fastboot and state checker can act237237 * accordingly.238238 */239239-#define PIPE_CONFIG_QUIRK_MODE_SYNC_FLAGS (1<<0) /* unreliable sync mode.flags */239239+#define PIPE_CONFIG_QUIRK_MODE_SYNC_FLAGS (1<<0) /* unreliable sync mode.flags */240240+#define PIPE_CONFIG_QUIRK_INHERITED_MODE (1<<1) /* mode inherited from firmware */240241 unsigned long quirks;241242242243 /* User requested mode, only valid as a starting point to
drivers/gpu/drm/i915/intel_fbdev.c | +10
···132132133133 mutex_lock(&dev->struct_mutex);134134135135+ if (intel_fb &&136136+ (sizes->fb_width > intel_fb->base.width ||137137+ sizes->fb_height > intel_fb->base.height)) {138138+ DRM_DEBUG_KMS("BIOS fb too small (%dx%d), we require (%dx%d),"139139+ " releasing it\n",140140+ intel_fb->base.width, intel_fb->base.height,141141+ sizes->fb_width, sizes->fb_height);142142+ drm_framebuffer_unreference(&intel_fb->base);143143+ intel_fb = ifbdev->fb = NULL;144144+ }135145 if (!intel_fb || WARN_ON(!intel_fb->obj)) {136146 DRM_DEBUG_KMS("no BIOS fb, allocating a new one\n");137147 ret = intelfb_alloc(helper, sizes);
drivers/gpu/drm/i915/intel_hdmi.c | +5 -4
···821821 }822822}823823824824-static int hdmi_portclock_limit(struct intel_hdmi *hdmi)824824+static int hdmi_portclock_limit(struct intel_hdmi *hdmi, bool respect_dvi_limit)825825{826826 struct drm_device *dev = intel_hdmi_to_dev(hdmi);827827828828- if (!hdmi->has_hdmi_sink || IS_G4X(dev))828828+ if ((respect_dvi_limit && !hdmi->has_hdmi_sink) || IS_G4X(dev))829829 return 165000;830830 else if (IS_HASWELL(dev) || INTEL_INFO(dev)->gen >= 8)831831 return 300000;···837837intel_hdmi_mode_valid(struct drm_connector *connector,838838 struct drm_display_mode *mode)839839{840840- if (mode->clock > hdmi_portclock_limit(intel_attached_hdmi(connector)))840840+ if (mode->clock > hdmi_portclock_limit(intel_attached_hdmi(connector),841841+ true))841842 return MODE_CLOCK_HIGH;842843 if (mode->clock < 20000)843844 return MODE_CLOCK_LOW;···880879 struct drm_device *dev = encoder->base.dev;881880 struct drm_display_mode *adjusted_mode = &pipe_config->adjusted_mode;882881 int clock_12bpc = pipe_config->adjusted_mode.crtc_clock * 3 / 2;883883- int portclock_limit = hdmi_portclock_limit(intel_hdmi);882882+ int portclock_limit = hdmi_portclock_limit(intel_hdmi, false);884883 int desired_bpp;885884886885 if (intel_hdmi->color_range_auto) {
+34-20
drivers/gpu/drm/i915/intel_ringbuffer.c
···437437 I915_WRITE(HWS_PGA, addr);438438}439439440440+static bool stop_ring(struct intel_ring_buffer *ring)441441+{442442+ struct drm_i915_private *dev_priv = to_i915(ring->dev);443443+444444+ if (!IS_GEN2(ring->dev)) {445445+ I915_WRITE_MODE(ring, _MASKED_BIT_ENABLE(STOP_RING));446446+ if (wait_for_atomic((I915_READ_MODE(ring) & MODE_IDLE) != 0, 1000)) {447447+ DRM_ERROR("%s :timed out trying to stop ring\n", ring->name);448448+ return false;449449+ }450450+ }451451+452452+ I915_WRITE_CTL(ring, 0);453453+ I915_WRITE_HEAD(ring, 0);454454+ ring->write_tail(ring, 0);455455+456456+ if (!IS_GEN2(ring->dev)) {457457+ (void)I915_READ_CTL(ring);458458+ I915_WRITE_MODE(ring, _MASKED_BIT_DISABLE(STOP_RING));459459+ }460460+461461+ return (I915_READ_HEAD(ring) & HEAD_ADDR) == 0;462462+}463463+440464static int init_ring_common(struct intel_ring_buffer *ring)441465{442466 struct drm_device *dev = ring->dev;443467 struct drm_i915_private *dev_priv = dev->dev_private;444468 struct drm_i915_gem_object *obj = ring->obj;445469 int ret = 0;446446- u32 head;447470448471 gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL);449472450450- /* Stop the ring if it's running. */
451451- I915_WRITE_CTL(ring, 0);452452- I915_WRITE_HEAD(ring, 0);453453- ring->write_tail(ring, 0);454454- if (wait_for_atomic((I915_READ_MODE(ring) & MODE_IDLE) != 0, 1000))455455- DRM_ERROR("%s :timed out trying to stop ring\n", ring->name);456456-457457- if (I915_NEED_GFX_HWS(dev))458458- intel_ring_setup_status_page(ring);459459- else460460- ring_setup_phys_status_page(ring);461461-462462- head = I915_READ_HEAD(ring) & HEAD_ADDR;463463-464464- /* G45 ring initialization fails to reset head to zero */465465- if (head != 0) {473473+ if (!stop_ring(ring)) {474474+ /* G45 ring initialization often fails to reset head to zero */466475 DRM_DEBUG_KMS("%s head not reset to zero "467476 "ctl %08x head %08x tail %08x start %08x\n",468477 ring->name,···480471 I915_READ_TAIL(ring),481472 I915_READ_START(ring));482473483483- I915_WRITE_HEAD(ring, 0);484484-485485- if (I915_READ_HEAD(ring) & HEAD_ADDR) {474474+ if (!stop_ring(ring)) {486475 DRM_ERROR("failed to set %s head to zero "487476 "ctl %08x head %08x tail %08x start %08x\n",488477 ring->name,···488481 I915_READ_HEAD(ring),489482 I915_READ_TAIL(ring),490483 I915_READ_START(ring));484484+ ret = -EIO;485485+ goto out;491486 }492487 }488488+489489+ if (I915_NEED_GFX_HWS(dev))490490+ intel_ring_setup_status_page(ring);491491+ else492492+ ring_setup_phys_status_page(ring);493493494494 /* Initialize the ring. This must happen _after_ we've cleared the ring495495 * registers with the above sequence (the readback of the HEAD registers
···510510 MDP4_DMA_CURSOR_BLEND_CONFIG_CURSOR_EN);511511 } else {512512 /* disable cursor: */513513- mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_BASE(dma), 0);514514- mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_BLEND_CONFIG(dma),515515- MDP4_DMA_CURSOR_BLEND_CONFIG_FORMAT(CURSOR_ARGB));513513+ mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_BASE(dma),514514+ mdp4_kms->blank_cursor_iova);516515 }517516518517 /* and drop the iova ref + obj rev when done scanning out: */···573574574575 if (old_bo) {575576 /* drop our previous reference: */576576- msm_gem_put_iova(old_bo, mdp4_kms->id);577577- drm_gem_object_unreference_unlocked(old_bo);577577+ drm_flip_work_queue(&mdp4_crtc->unref_cursor_work, old_bo);578578 }579579580580- crtc_flush(crtc);581580 request_pending(crtc, PENDING_CURSOR);582581583582 return 0;
+2-2
drivers/gpu/drm/msm/mdp/mdp4/mdp4_irq.c
···70707171 VERB("status=%08x", status);72727373+ mdp_dispatch_irqs(mdp_kms, status);7474+7375 for (id = 0; id < priv->num_crtcs; id++)7476 if (status & mdp4_crtc_vblank(priv->crtcs[id]))7577 drm_handle_vblank(dev, id);7676-7777- mdp_dispatch_irqs(mdp_kms, status);78787979 return IRQ_HANDLED;8080}
+21
drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c
···144144static void mdp4_destroy(struct msm_kms *kms)145145{146146 struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));147147+ if (mdp4_kms->blank_cursor_iova)148148+ msm_gem_put_iova(mdp4_kms->blank_cursor_bo, mdp4_kms->id);149149+ if (mdp4_kms->blank_cursor_bo)150150+ drm_gem_object_unreference(mdp4_kms->blank_cursor_bo);147151 kfree(mdp4_kms);148152}149153···373369 ret = modeset_init(mdp4_kms);374370 if (ret) {375371 dev_err(dev->dev, "modeset_init failed: %d\n", ret);372372+ goto fail;373373+ }374374+375375+ mutex_lock(&dev->struct_mutex);376376+ mdp4_kms->blank_cursor_bo = msm_gem_new(dev, SZ_16K, MSM_BO_WC);377377+ mutex_unlock(&dev->struct_mutex);378378+ if (IS_ERR(mdp4_kms->blank_cursor_bo)) {379379+ ret = PTR_ERR(mdp4_kms->blank_cursor_bo);380380+ dev_err(dev->dev, "could not allocate blank-cursor bo: %d\n", ret);381381+ mdp4_kms->blank_cursor_bo = NULL;382382+ goto fail;383383+ }384384+385385+ ret = msm_gem_get_iova(mdp4_kms->blank_cursor_bo, mdp4_kms->id,386386+ &mdp4_kms->blank_cursor_iova);387387+ if (ret) {388388+ dev_err(dev->dev, "could not pin blank-cursor bo: %d\n", ret);376389 goto fail;377390 }378391
+4
drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.h
···4444 struct clk *lut_clk;45454646 struct mdp_irq error_handler;4747+4848+ /* empty/blank cursor bo to use when cursor is "disabled" */4949+ struct drm_gem_object *blank_cursor_bo;5050+ uint32_t blank_cursor_iova;4751};4852#define to_mdp4_kms(x) container_of(x, struct mdp4_kms, base)4953
+2-2
drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c
···71717272 VERB("status=%08x", status);73737474+ mdp_dispatch_irqs(mdp_kms, status);7575+7476 for (id = 0; id < priv->num_crtcs; id++)7577 if (status & mdp5_crtc_vblank(priv->crtcs[id]))7678 drm_handle_vblank(dev, id);7777-7878- mdp_dispatch_irqs(mdp_kms, status);7979}80808181irqreturn_t mdp5_irq(struct msm_kms *kms)
+1-4
drivers/gpu/drm/msm/msm_fbdev.c
···6262 dma_addr_t paddr;6363 int ret, size;64646565- /* only doing ARGB32 since this is what is needed to alpha-blend6666- * with video overlays:6767- */6865 sizes->surface_bpp = 32;6969- sizes->surface_depth = 32;6666+ sizes->surface_depth = 24;70677168 DBG("create fbdev: %dx%d@%d (%dx%d)", sizes->surface_width,7269 sizes->surface_height, sizes->surface_bpp,
···13001300 case CHIP_KABINI:13011301 case CHIP_KAVERI:13021302 case CHIP_HAWAII:13031303+ case CHIP_MULLINS:13031304 /* DPM requires the RLC, RV770+ dGPU requires SMC */13041305 if (!rdev->rlc_fw)13051306 rdev->pm.pm_method = PM_METHOD_PROFILE;
···365365 if (cpu_has_tjmax(c))366366 dev_warn(dev, "Unable to read TjMax from CPU %u\n", id);367367 } else {368368- val = (eax >> 16) & 0x7f;368368+ val = (eax >> 16) & 0xff;369369 /*370370 * If the TjMax is not plausible, an assumption371371 * will be used372372 */373373- if (val >= 85) {373373+ if (val) {374374 dev_dbg(dev, "TjMax is %d degrees C\n", val);375375 return val * 1000;376376 }
+2-2
drivers/iio/adc/Kconfig
···106106 Say yes here to build support for Atmel AT91 ADC.107107108108config EXYNOS_ADC109109- bool "Exynos ADC driver support"109109+ tristate "Exynos ADC driver support"110110 depends on OF111111 help112112 Core support for the ADC block found in the Samsung EXYNOS series···114114 this resource.115115116116config LP8788_ADC117117- bool "LP8788 ADC driver"117117+ tristate "LP8788 ADC driver"118118 depends on MFD_LP8788119119 help120120 Say yes here to build support for TI LP8788 ADC.
···660660{661661 struct inv_mpu6050_state *st;662662 struct iio_dev *indio_dev;663663+ struct inv_mpu6050_platform_data *pdata;663664 int result;664665665666 if (!i2c_check_functionality(client->adapter,···673672674673 st = iio_priv(indio_dev);675674 st->client = client;676676- st->plat_data = *(struct inv_mpu6050_platform_data677677- *)dev_get_platdata(&client->dev);675675+ pdata = (struct inv_mpu6050_platform_data676676+ *)dev_get_platdata(&client->dev);677677+ if (pdata)678678+ st->plat_data = *pdata;678679 /* power is turned on inside check chip type*/679680 result = inv_check_and_setup_chip(st, id);680681 if (result)
+3-3
drivers/infiniband/hw/cxgb4/Kconfig
···11config INFINIBAND_CXGB422- tristate "Chelsio T4 RDMA Driver"22+ tristate "Chelsio T4/T5 RDMA Driver"33 depends on CHELSIO_T4 && INET && (IPV6 || IPV6=n)44 select GENERIC_ALLOCATOR55 ---help---66- This is an iWARP/RDMA driver for the Chelsio T4 1GbE and77- 10GbE adapters.66+ This is an iWARP/RDMA driver for the Chelsio T4 and T577+ 1GbE and 10GbE adapters, and T5 40GbE adapters.8899 For general information about Chelsio and our products, visit1010 our website at <http://www.chelsio.com>.
+28-11
drivers/infiniband/hw/cxgb4/cm.c
···587587 opt2 |= SACK_EN(1);588588 if (wscale && enable_tcp_window_scaling)589589 opt2 |= WND_SCALE_EN(1);590590+ if (is_t5(ep->com.dev->rdev.lldi.adapter_type)) {591591+ opt2 |= T5_OPT_2_VALID;592592+ opt2 |= V_CONG_CNTRL(CONG_ALG_TAHOE);593593+ }590594 t4_set_arp_err_handler(skb, NULL, act_open_req_arp_failure);591595592596 if (is_t4(ep->com.dev->rdev.lldi.adapter_type)) {···1000996static int abort_connection(struct c4iw_ep *ep, struct sk_buff *skb, gfp_t gfp)1001997{1002998 PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);10031003- state_set(&ep->com, ABORTING);999999+ __state_set(&ep->com, ABORTING);10041000 set_bit(ABORT_CONN, &ep->com.history);10051001 return send_abort(ep, skb, gfp);10061002}···11581154 return credits;11591155}1160115611611161-static void process_mpa_reply(struct c4iw_ep *ep, struct sk_buff *skb)11571157+static int process_mpa_reply(struct c4iw_ep *ep, struct sk_buff *skb)11621158{11631159 struct mpa_message *mpa;11641160 struct mpa_v2_conn_params *mpa_v2_params;···11681164 struct c4iw_qp_attributes attrs;11691165 enum c4iw_qp_attr_mask mask;11701166 int err;11671167+ int disconnect = 0;1171116811721169 PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);11731170···11781173 * will abort the connection.11791174 */11801175 if (stop_ep_timer(ep))11811181- return;11761176+ return 0;1182117711831178 /*11841179 * If we get more than the supported amount of private data···12001195 * if we don't even have the mpa message, then bail.12011196 */12021197 if (ep->mpa_pkt_len < sizeof(*mpa))12031203- return;11981198+ return 0;12041199 mpa = (struct mpa_message *) ep->mpa_pkt;1205120012061201 /* Validate MPA header. */
···12401235 * We'll continue process when more data arrives.12411236 */12421237 if (ep->mpa_pkt_len < (sizeof(*mpa) + plen))12431243- return;12381238+ return 0;1244123912451240 if (mpa->flags & MPA_REJECT) {12461241 err = -ECONNREFUSED;···13421337 attrs.layer_etype = LAYER_MPA | DDP_LLP;13431338 attrs.ecode = MPA_NOMATCH_RTR;13441339 attrs.next_state = C4IW_QP_STATE_TERMINATE;13401340+ attrs.send_term = 1;13451341 err = c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp,13461346- C4IW_QP_ATTR_NEXT_STATE, &attrs, 0);13421342+ C4IW_QP_ATTR_NEXT_STATE, &attrs, 1);13471343 err = -ENOMEM;13441344+ disconnect = 1;13481345 goto out;13491346 }13501347···13621355 attrs.layer_etype = LAYER_MPA | DDP_LLP;13631356 attrs.ecode = MPA_INSUFF_IRD;13641357 attrs.next_state = C4IW_QP_STATE_TERMINATE;13581358+ attrs.send_term = 1;13651359 err = c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp,13661366- C4IW_QP_ATTR_NEXT_STATE, &attrs, 0);13601360+ C4IW_QP_ATTR_NEXT_STATE, &attrs, 1);13671361 err = -ENOMEM;13621362+ disconnect = 1;13681363 goto out;13691364 }13701365 goto out;···13751366 send_abort(ep, skb, GFP_KERNEL);13761367out:13771368 connect_reply_upcall(ep, err);13781378- return;13691369+ return disconnect;13791370}1380137113811372static void process_mpa_request(struct c4iw_ep *ep, struct sk_buff *skb)···15331524 unsigned int tid = GET_TID(hdr);15341525 struct tid_info *t = dev->rdev.lldi.tids;15351526 __u8 status = hdr->status;15271527+ int disconnect = 0;1536152815371529 ep = lookup_tid(t, tid);15381530 if (!ep)···15491539 switch (ep->com.state) {15501540 case MPA_REQ_SENT:15511541 ep->rcv_seq += dlen;15521552- process_mpa_reply(ep, skb);15421542+ disconnect = process_mpa_reply(ep, skb);15531543 break;15541544 case MPA_REQ_WAIT:15551545 ep->rcv_seq += dlen;···15651555 ep->com.state, ep->hwtid, status);15661556 attrs.next_state = C4IW_QP_STATE_TERMINATE;15671557 c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp,15681568- C4IW_QP_ATTR_NEXT_STATE, &attrs, 1);
15591559+ disconnect = 1;15691560 break;15701561 }15711562 default:15721563 break;15731564 }15741565 mutex_unlock(&ep->com.mutex);15661566+ if (disconnect)15671567+ c4iw_ep_disconnect(ep, 0, GFP_KERNEL);15751568 return 0;15761569}15771570···20212008 G_IP_HDR_LEN(hlen);20222009 if (tcph->ece && tcph->cwr)20232010 opt2 |= CCTRL_ECN(1);20112011+ }20122012+ if (is_t5(ep->com.dev->rdev.lldi.adapter_type)) {20132013+ opt2 |= T5_OPT_2_VALID;20142014+ opt2 |= V_CONG_CNTRL(CONG_ALG_TAHOE);20242015 }2025201620262017 rpl = cplhdr(skb);···34993482 __func__, ep, ep->hwtid, ep->com.state);35003483 abort = 0;35013484 }35023502- mutex_unlock(&ep->com.mutex);35033485 if (abort)35043486 abort_connection(ep, NULL, GFP_KERNEL);34873487+ mutex_unlock(&ep->com.mutex);35053488 c4iw_put_ep(&ep->com);35063489}35073490
···243243static void *atkbd_platform_fixup_data;244244static unsigned int (*atkbd_platform_scancode_fixup)(struct atkbd *, unsigned int);245245246246+/*247247+ * Certain keyboards do not like ATKBD_CMD_RESET_DIS and stop responding248248+ * to many commands until full reset (ATKBD_CMD_RESET_BAT) is performed.249249+ */250250+static bool atkbd_skip_deactivate;251251+246252static ssize_t atkbd_attr_show_helper(struct device *dev, char *buf,247253 ssize_t (*handler)(struct atkbd *, char *));248254static ssize_t atkbd_attr_set_helper(struct device *dev, const char *buf, size_t count,···774768 * Make sure nothing is coming from the keyboard and disturbs our775769 * internal state.776770 */777777- atkbd_deactivate(atkbd);771771+ if (!atkbd_skip_deactivate)772772+ atkbd_deactivate(atkbd);778773779774 return 0;780775}···16451638 return 1;16461639}1647164016411641+static int __init atkbd_deactivate_fixup(const struct dmi_system_id *id)16421642+{16431643+ atkbd_skip_deactivate = true;16441644+ return 1;16451645+}16461646+16481647static const struct dmi_system_id atkbd_dmi_quirk_table[] __initconst = {16491648 {16501649 .matches = {···17871774 },17881775 .callback = atkbd_setup_scancode_fixup,17891776 .driver_data = atkbd_oqo_01plus_scancode_fixup,17771777+ },17781778+ {17791779+ .matches = {17801780+ DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"),17811781+ DMI_MATCH(DMI_PRODUCT_NAME, "LW25-B7HV"),17821782+ },17831783+ .callback = atkbd_deactivate_fixup,17841784+ },17851785+ {17861786+ .matches = {17871787+ DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"),17881788+ DMI_MATCH(DMI_PRODUCT_NAME, "P1-J273B"),17891789+ },17901790+ .callback = atkbd_deactivate_fixup,17901791 },17911792 { }17921793};
+7
drivers/input/keyboard/tca8418_keypad.c
···392392 { }393393};394394MODULE_DEVICE_TABLE(of, tca8418_dt_ids);395395+396396+/*397397+ * The device tree based i2c loader looks for398398+ * "i2c:" + second_component_of(property("compatible"))399399+ * and therefore we need an alias to be found.400400+ */401401+MODULE_ALIAS("i2c:tca8418");395402#endif396403397404static struct i2c_driver tca8418_keypad_driver = {
···1111 */12121313#include <linux/delay.h>1414+#include <linux/dmi.h>1415#include <linux/slab.h>1516#include <linux/module.h>1617#include <linux/input.h>···832831 break;833832834833 case 3:835835- etd->reg_10 = 0x0b;834834+ if (etd->set_hw_resolution)835835+ etd->reg_10 = 0x0b;836836+ else837837+ etd->reg_10 = 0x03;838838+836839 if (elantech_write_reg(psmouse, 0x10, etd->reg_10))837840 rc = -1;838841···13361331}1337133213381333/*13341334+ * Some hw_version 3 models go into error state when we try to set bit 3 of r1013351335+ */13361336+static const struct dmi_system_id no_hw_res_dmi_table[] = {13371337+#if defined(CONFIG_DMI) && defined(CONFIG_X86)13381338+ {13391339+ /* Gigabyte U2442 */13401340+ .matches = {13411341+ DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),13421342+ DMI_MATCH(DMI_PRODUCT_NAME, "U2442"),13431343+ },13441344+ },13451345+#endif13461346+ { }13471347+};13481348+13491349+/*13391350 * determine hardware version and set some properties according to it.13401351 */13411352static int elantech_set_properties(struct elantech_data *etd)···14101389 * value of this hardware flag.14111390 */14121391 etd->crc_enabled = ((etd->fw_version & 0x4000) == 0x4000);13921392+13931393+ /* Enable real hardware resolution on hw_version 3 ? */13941394+ etd->set_hw_resolution = !dmi_check_system(no_hw_res_dmi_table);1413139514141396 return 0;14151397}
+1
drivers/input/mouse/elantech.h
···130130 bool jumpy_cursor;131131 bool reports_pressure;132132 bool crc_enabled;133133+ bool set_hw_resolution;133134 unsigned char hw_version;134135 unsigned int fw_version;135136 unsigned int single_finger_reports;
···1414 SPEAr1310 and SPEAr320 evaluation boards & TI (www.ti.com)1515 boards like am335x, dm814x, dm813x and dm811x.16161717+config CAN_C_CAN_STRICT_FRAME_ORDERING1818+ bool "Force a strict RX CAN frame order (may cause frame loss)"1919+ ---help---2020+ The RX split buffer prevents packet reordering but can cause packet2121+ loss. Only enable this option if you can accept losing CAN frames2222+ in favour of getting the received CAN frames in the correct order.2323+1724config CAN_C_CAN_PCI1825 tristate "Generic PCI Bus based C_CAN/D_CAN driver"1926 depends on PCI
+309-357
drivers/net/can/c_can/c_can.c
···6060#define CONTROL_IE BIT(1)6161#define CONTROL_INIT BIT(0)62626363+#define CONTROL_IRQMSK (CONTROL_EIE | CONTROL_IE | CONTROL_SIE)6464+6365/* test register */6466#define TEST_RX BIT(7)6567#define TEST_TX1 BIT(6)···110108#define IF_COMM_CONTROL BIT(4)111109#define IF_COMM_CLR_INT_PND BIT(3)112110#define IF_COMM_TXRQST BIT(2)111111+#define IF_COMM_CLR_NEWDAT IF_COMM_TXRQST113112#define IF_COMM_DATAA BIT(1)114113#define IF_COMM_DATAB BIT(0)115115-#define IF_COMM_ALL (IF_COMM_MASK | IF_COMM_ARB | \116116- IF_COMM_CONTROL | IF_COMM_TXRQST | \117117- IF_COMM_DATAA | IF_COMM_DATAB)114114+115115+/* TX buffer setup */116116+#define IF_COMM_TX (IF_COMM_ARB | IF_COMM_CONTROL | \117117+ IF_COMM_TXRQST | \118118+ IF_COMM_DATAA | IF_COMM_DATAB)118119119120/* For the low buffers we clear the interrupt bit, but keep newdat */120121#define IF_COMM_RCV_LOW (IF_COMM_MASK | IF_COMM_ARB | \···125120 IF_COMM_DATAA | IF_COMM_DATAB)126121127122/* For the high buffers we clear the interrupt bit and newdat */128128-#define IF_COMM_RCV_HIGH (IF_COMM_RCV_LOW | IF_COMM_TXRQST)123123+#define IF_COMM_RCV_HIGH (IF_COMM_RCV_LOW | IF_COMM_CLR_NEWDAT)124124+125125+126126+/* Receive setup of message objects */127127+#define IF_COMM_RCV_SETUP (IF_COMM_MASK | IF_COMM_ARB | IF_COMM_CONTROL)128128+129129+/* Invalidation of message objects */130130+#define IF_COMM_INVAL (IF_COMM_ARB | IF_COMM_CONTROL)129131130132/* IFx arbitration */131131-#define IF_ARB_MSGVAL BIT(15)132132-#define IF_ARB_MSGXTD BIT(14)133133-#define IF_ARB_TRANSMIT BIT(13)133133+#define IF_ARB_MSGVAL BIT(31)134134+#define IF_ARB_MSGXTD BIT(30)135135+#define IF_ARB_TRANSMIT BIT(29)134136135137/* IFx message control */136138#define IF_MCONT_NEWDAT BIT(15)···151139#define IF_MCONT_EOB BIT(7)152140#define IF_MCONT_DLC_MASK 0xf153141142142+#define IF_MCONT_RCV (IF_MCONT_RXIE | IF_MCONT_UMASK)143143+#define IF_MCONT_RCV_EOB (IF_MCONT_RCV | IF_MCONT_EOB)144144+145145+#define IF_MCONT_TX (IF_MCONT_TXIE | IF_MCONT_EOB)146146+154147/*155148 * Use IF1 for RX and IF2 for TX
156149 */157150#define IF_RX 0158151#define IF_TX 1159159-160160-/* status interrupt */161161-#define STATUS_INTERRUPT 0x8000162162-163163-/* global interrupt masks */164164-#define ENABLE_ALL_INTERRUPTS 1165165-#define DISABLE_ALL_INTERRUPTS 0166152167153/* minimum timeout for checking BUSY status */168154#define MIN_TIMEOUT_VALUE 6···181171 LEC_BIT0_ERROR,182172 LEC_CRC_ERROR,183173 LEC_UNUSED,174174+ LEC_MASK = LEC_UNUSED,184175};185176186177/*···237226 priv->raminit(priv, enable);238227}239228240240-static inline int get_tx_next_msg_obj(const struct c_can_priv *priv)229229+static void c_can_irq_control(struct c_can_priv *priv, bool enable)241230{242242- return (priv->tx_next & C_CAN_NEXT_MSG_OBJ_MASK) +243243- C_CAN_MSG_OBJ_TX_FIRST;244244-}245245-246246-static inline int get_tx_echo_msg_obj(int txecho)247247-{248248- return (txecho & C_CAN_NEXT_MSG_OBJ_MASK) + C_CAN_MSG_OBJ_TX_FIRST;249249-}250250-251251-static u32 c_can_read_reg32(struct c_can_priv *priv, enum reg index)252252-{253253- u32 val = priv->read_reg(priv, index);254254- val |= ((u32) priv->read_reg(priv, index + 1)) << 16;255255- return val;256256-}257257-258258-static void c_can_enable_all_interrupts(struct c_can_priv *priv,259259- int enable)260260-{261261- unsigned int cntrl_save = priv->read_reg(priv,262262- C_CAN_CTRL_REG);231231+ u32 ctrl = priv->read_reg(priv, C_CAN_CTRL_REG) & ~CONTROL_IRQMSK;263232264233 if (enable)265265- cntrl_save |= (CONTROL_SIE | CONTROL_EIE | CONTROL_IE);266266- else267267- cntrl_save &= ~(CONTROL_EIE | CONTROL_IE | CONTROL_SIE);234234+ ctrl |= CONTROL_IRQMSK;268235269269- priv->write_reg(priv, C_CAN_CTRL_REG, cntrl_save);236236+ priv->write_reg(priv, C_CAN_CTRL_REG, ctrl);270237}271238272272-static inline int c_can_msg_obj_is_busy(struct c_can_priv *priv, int iface)239239+static void c_can_obj_update(struct net_device *dev, int iface, u32 cmd, u32 obj)273240{274274- int count = MIN_TIMEOUT_VALUE;241241+ struct c_can_priv *priv = netdev_priv(dev);
242242+ int cnt, reg = C_CAN_IFACE(COMREQ_REG, iface);275243276276- while (count && priv->read_reg(priv,277277- C_CAN_IFACE(COMREQ_REG, iface)) &278278- IF_COMR_BUSY) {279279- count--;244244+ priv->write_reg(priv, reg + 1, cmd);245245+ priv->write_reg(priv, reg, obj);246246+247247+ for (cnt = MIN_TIMEOUT_VALUE; cnt; cnt--) {248248+ if (!(priv->read_reg(priv, reg) & IF_COMR_BUSY))249249+ return;280250 udelay(1);281251 }252252+ netdev_err(dev, "Updating object timed out\n");282253283283- if (!count)284284- return 1;285285-286286- return 0;287254}288255289289-static inline void c_can_object_get(struct net_device *dev,290290- int iface, int objno, int mask)256256+static inline void c_can_object_get(struct net_device *dev, int iface,257257+ u32 obj, u32 cmd)258258+{259259+ c_can_obj_update(dev, iface, cmd, obj);260260+}261261+262262+static inline void c_can_object_put(struct net_device *dev, int iface,263263+ u32 obj, u32 cmd)264264+{265265+ c_can_obj_update(dev, iface, cmd | IF_COMM_WR, obj);266266+}267267+268268+/*269269+ * Note: According to documentation clearing TXIE while MSGVAL is set270270+ * is not allowed, but works nicely on C/DCAN. And that lowers the I/O
271271+ * load significantly.272272+ */273273+static void c_can_inval_tx_object(struct net_device *dev, int iface, int obj)291274{292275 struct c_can_priv *priv = netdev_priv(dev);293276294294- /*295295- * As per specs, after writting the message object number in the296296- * IF command request register the transfer b/w interface297297- * register and message RAM must be complete in 6 CAN-CLK298298- * period.299299- */300300- priv->write_reg(priv, C_CAN_IFACE(COMMSK_REG, iface),301301- IFX_WRITE_LOW_16BIT(mask));302302- priv->write_reg(priv, C_CAN_IFACE(COMREQ_REG, iface),303303- IFX_WRITE_LOW_16BIT(objno));304304-305305- if (c_can_msg_obj_is_busy(priv, iface))306306- netdev_err(dev, "timed out in object get\n");277277+ priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), 0);278278+ c_can_object_put(dev, iface, obj, IF_COMM_INVAL);307279}308280309309-static inline void c_can_object_put(struct net_device *dev,310310- int iface, int objno, int mask)281281+static void c_can_inval_msg_object(struct net_device *dev, int iface, int obj)311282{312283 struct c_can_priv *priv = netdev_priv(dev);313284314314- /*315315- * As per specs, after writting the message object number in the316316- * IF command request register the transfer b/w interface317317- * register and message RAM must be complete in 6 CAN-CLK318318- * period.319319- */320320- priv->write_reg(priv, C_CAN_IFACE(COMMSK_REG, iface),321321- (IF_COMM_WR | IFX_WRITE_LOW_16BIT(mask)));322322- priv->write_reg(priv, C_CAN_IFACE(COMREQ_REG, iface),323323- IFX_WRITE_LOW_16BIT(objno));324324-325325- if (c_can_msg_obj_is_busy(priv, iface))326326- netdev_err(dev, "timed out in object put\n");285285+ priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface), 0);286286+ priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), 0);287287+ c_can_inval_tx_object(dev, iface, obj);327288}328289329329-static void c_can_write_msg_object(struct net_device *dev,330330- int iface, struct can_frame *frame, int objno)
290290+static void c_can_setup_tx_object(struct net_device *dev, int iface,291291+ struct can_frame *frame, int idx)331292{293293+ struct c_can_priv *priv = netdev_priv(dev);294294+ u16 ctrl = IF_MCONT_TX | frame->can_dlc;295295+ bool rtr = frame->can_id & CAN_RTR_FLAG;296296+ u32 arb = IF_ARB_MSGVAL;332297 int i;333333- u16 flags = 0;334334- unsigned int id;335335- struct c_can_priv *priv = netdev_priv(dev);336336-337337- if (!(frame->can_id & CAN_RTR_FLAG))338338- flags |= IF_ARB_TRANSMIT;339298340299 if (frame->can_id & CAN_EFF_FLAG) {341341- id = frame->can_id & CAN_EFF_MASK;342342- flags |= IF_ARB_MSGXTD;343343- } else344344- id = ((frame->can_id & CAN_SFF_MASK) << 18);300300+ arb |= frame->can_id & CAN_EFF_MASK;301301+ arb |= IF_ARB_MSGXTD;302302+ } else {303303+ arb |= (frame->can_id & CAN_SFF_MASK) << 18;304304+ }345305346346- flags |= IF_ARB_MSGVAL;306306+ if (!rtr)307307+ arb |= IF_ARB_TRANSMIT;347308348348- priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface),349349- IFX_WRITE_LOW_16BIT(id));350350- priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), flags |351351- IFX_WRITE_HIGH_16BIT(id));309309+ /*310310+ * If we change the DIR bit, we need to invalidate the buffer311311+ * first, i.e. clear the MSGVAL flag in the arbiter.
312312+ */313313+ if (rtr != (bool)test_bit(idx, &priv->tx_dir)) {314314+ u32 obj = idx + C_CAN_MSG_OBJ_TX_FIRST;315315+316316+ c_can_inval_msg_object(dev, iface, obj);317317+ change_bit(idx, &priv->tx_dir);318318+ }319319+320320+ priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface), arb);321321+ priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), arb >> 16);322322+323323+ priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), ctrl);352324353325 for (i = 0; i < frame->can_dlc; i += 2) {354326 priv->write_reg(priv, C_CAN_IFACE(DATA1_REG, iface) + i / 2,355327 frame->data[i] | (frame->data[i + 1] << 8));356328 }357357-358358- /* enable interrupt for this message object */359359- priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface),360360- IF_MCONT_TXIE | IF_MCONT_TXRQST | IF_MCONT_EOB |361361- frame->can_dlc);362362- c_can_object_put(dev, iface, objno, IF_COMM_ALL);363329}364330365331static inline void c_can_activate_all_lower_rx_msg_obj(struct net_device *dev,366366- int iface,367367- int ctrl_mask)332332+ int iface)368333{369334 int i;370370- struct c_can_priv *priv = netdev_priv(dev);371335372372- for (i = C_CAN_MSG_OBJ_RX_FIRST; i <= C_CAN_MSG_RX_LOW_LAST; i++) {373373- priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface),374374- ctrl_mask & ~IF_MCONT_NEWDAT);375375- c_can_object_put(dev, iface, i, IF_COMM_CONTROL);376376- }336336+ for (i = C_CAN_MSG_OBJ_RX_FIRST; i <= C_CAN_MSG_RX_LOW_LAST; i++)337337+ c_can_object_get(dev, iface, i, IF_COMM_CLR_NEWDAT);377338}378339379340static int c_can_handle_lost_msg_obj(struct net_device *dev,···360377 priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), ctrl);361378 c_can_object_put(dev, iface, objno, IF_COMM_CONTROL);362379380380+ stats->rx_errors++;381381+ stats->rx_over_errors++;382382+363383 /* create an error msg */364384 skb = alloc_can_err_skb(dev, &frame);365385 if (unlikely(!skb))···370384371385 frame->can_id |= CAN_ERR_CRTL;372386 frame->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
373373- stats->rx_errors++;374374- stats->rx_over_errors++;375387376388 netif_receive_skb(skb);377389 return 1;378390}379391380380-static int c_can_read_msg_object(struct net_device *dev, int iface, int ctrl)392392+static int c_can_read_msg_object(struct net_device *dev, int iface, u32 ctrl)381393{382382- u16 flags, data;383383- int i;384384- unsigned int val;385385- struct c_can_priv *priv = netdev_priv(dev);386394 struct net_device_stats *stats = &dev->stats;387387- struct sk_buff *skb;395395+ struct c_can_priv *priv = netdev_priv(dev);388396 struct can_frame *frame;397397+ struct sk_buff *skb;398398+ u32 arb, data;389399390400 skb = alloc_can_skb(dev, &frame);391401 if (!skb) {···391409392410 frame->can_dlc = get_can_dlc(ctrl & 0x0F);393411394394- flags = priv->read_reg(priv, C_CAN_IFACE(ARB2_REG, iface));395395- val = priv->read_reg(priv, C_CAN_IFACE(ARB1_REG, iface)) |396396- (flags << 16);412412+ arb = priv->read_reg(priv, C_CAN_IFACE(ARB1_REG, iface));413413+ arb |= priv->read_reg(priv, C_CAN_IFACE(ARB2_REG, iface)) << 16;397414398398- if (flags & IF_ARB_MSGXTD)399399- frame->can_id = (val & CAN_EFF_MASK) | CAN_EFF_FLAG;415415+ if (arb & IF_ARB_MSGXTD)416416+ frame->can_id = (arb & CAN_EFF_MASK) | CAN_EFF_FLAG;400417 else401401- frame->can_id = (val >> 18) & CAN_SFF_MASK;418418+ frame->can_id = (arb >> 18) & CAN_SFF_MASK;402419403403- if (flags & IF_ARB_TRANSMIT)420420+ if (arb & IF_ARB_TRANSMIT) {404421 frame->can_id |= CAN_RTR_FLAG;405405- else {406406- for (i = 0; i < frame->can_dlc; i += 2) {407407- data = priv->read_reg(priv,408408- C_CAN_IFACE(DATA1_REG, iface) + i / 2);422422+ } else {423423+ int i, dreg = C_CAN_IFACE(DATA1_REG, iface);424424+425425+ for (i = 0; i < frame->can_dlc; i += 2, dreg ++) {426426+ data = priv->read_reg(priv, dreg);409427 frame->data[i] = data;410428 frame->data[i + 1] = data >> 8;411429 }412430 }413431414414- netif_receive_skb(skb);415415-416432 stats->rx_packets++;417433 stats->rx_bytes += frame->can_dlc;
434434+435435+ netif_receive_skb(skb);418436 return 0;419437}420438421439static void c_can_setup_receive_object(struct net_device *dev, int iface,422422- int objno, unsigned int mask,423423- unsigned int id, unsigned int mcont)440440+ u32 obj, u32 mask, u32 id, u32 mcont)424441{425442 struct c_can_priv *priv = netdev_priv(dev);426443427427- priv->write_reg(priv, C_CAN_IFACE(MASK1_REG, iface),428428- IFX_WRITE_LOW_16BIT(mask));444444+ mask |= BIT(29);445445+ priv->write_reg(priv, C_CAN_IFACE(MASK1_REG, iface), mask);446446+ priv->write_reg(priv, C_CAN_IFACE(MASK2_REG, iface), mask >> 16);429447430430- /* According to C_CAN documentation, the reserved bit431431- * in IFx_MASK2 register is fixed 1432432- */433433- priv->write_reg(priv, C_CAN_IFACE(MASK2_REG, iface),434434- IFX_WRITE_HIGH_16BIT(mask) | BIT(13));435435-436436- priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface),437437- IFX_WRITE_LOW_16BIT(id));438438- priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface),439439- (IF_ARB_MSGVAL | IFX_WRITE_HIGH_16BIT(id)));448448+ id |= IF_ARB_MSGVAL;449449+ priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface), id);450450+ priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), id >> 16);440451441452 priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), mcont);442442- c_can_object_put(dev, iface, objno, IF_COMM_ALL & ~IF_COMM_TXRQST);443443-444444- netdev_dbg(dev, "obj no:%d, msgval:0x%08x\n", objno,445445- c_can_read_reg32(priv, C_CAN_MSGVAL1_REG));446446-}447447-448448-static void c_can_inval_msg_object(struct net_device *dev, int iface, int objno)449449-{450450- struct c_can_priv *priv = netdev_priv(dev);451451-452452- priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface), 0);453453- priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), 0);454454- priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), 0);455455-456456- c_can_object_put(dev, iface, objno, IF_COMM_ARB | IF_COMM_CONTROL);457457-458458- netdev_dbg(dev, "obj no:%d, msgval:0x%08x\n", objno,459459- c_can_read_reg32(priv, C_CAN_MSGVAL1_REG));
460460-}461461-462462-static inline int c_can_is_next_tx_obj_busy(struct c_can_priv *priv, int objno)463463-{464464- int val = c_can_read_reg32(priv, C_CAN_TXRQST1_REG);465465-466466- /*467467- * as transmission request register's bit n-1 corresponds to468468- * message object n, we need to handle the same properly.469469- */470470- if (val & (1 << (objno - 1)))471471- return 1;472472-473473- return 0;453453+ c_can_object_put(dev, iface, obj, IF_COMM_RCV_SETUP);474454}475455476456static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,477477- struct net_device *dev)457457+ struct net_device *dev)478458{479479- u32 msg_obj_no;480480- struct c_can_priv *priv = netdev_priv(dev);481459 struct can_frame *frame = (struct can_frame *)skb->data;460460+ struct c_can_priv *priv = netdev_priv(dev);461461+ u32 idx, obj;482462483463 if (can_dropped_invalid_skb(dev, skb))484464 return NETDEV_TX_OK;485485-486486- spin_lock_bh(&priv->xmit_lock);487487- msg_obj_no = get_tx_next_msg_obj(priv);488488-489489- /* prepare message object for transmission */490490- c_can_write_msg_object(dev, IF_TX, frame, msg_obj_no);491491- priv->dlc[msg_obj_no - C_CAN_MSG_OBJ_TX_FIRST] = frame->can_dlc;492492- can_put_echo_skb(skb, dev, msg_obj_no - C_CAN_MSG_OBJ_TX_FIRST);493493-494465 /*495495- * we have to stop the queue in case of a wrap around or496496- * if the next TX message object is still in use466466+ * This is not a FIFO. C/D_CAN sends out the buffers467467+ * prioritized. The lowest buffer number wins.
497468 */498498- priv->tx_next++;499499- if (c_can_is_next_tx_obj_busy(priv, get_tx_next_msg_obj(priv)) ||500500- (priv->tx_next & C_CAN_NEXT_MSG_OBJ_MASK) == 0)469469+ idx = fls(atomic_read(&priv->tx_active));470470+ obj = idx + C_CAN_MSG_OBJ_TX_FIRST;471471+472472+ /* If this is the last buffer, stop the xmit queue */473473+ if (idx == C_CAN_MSG_OBJ_TX_NUM - 1)501474 netif_stop_queue(dev);502502- spin_unlock_bh(&priv->xmit_lock);475475+ /*476476+ * Store the message in the interface so we can call477477+ * can_put_echo_skb(). We must do this before we enable478478+ * transmit as we might race against do_tx().479479+ */480480+ c_can_setup_tx_object(dev, IF_TX, frame, idx);481481+ priv->dlc[idx] = frame->can_dlc;482482+ can_put_echo_skb(skb, dev, idx);483483+484484+ /* Update the active bits */485485+ atomic_add((1 << idx), &priv->tx_active);486486+ /* Start transmission */487487+ c_can_object_put(dev, IF_TX, obj, IF_COMM_TX);503488504489 return NETDEV_TX_OK;505490}···543594544595 /* setup receive message objects */545596 for (i = C_CAN_MSG_OBJ_RX_FIRST; i < C_CAN_MSG_OBJ_RX_LAST; i++)546546- c_can_setup_receive_object(dev, IF_RX, i, 0, 0,547547- (IF_MCONT_RXIE | IF_MCONT_UMASK) & ~IF_MCONT_EOB);597597+ c_can_setup_receive_object(dev, IF_RX, i, 0, 0, IF_MCONT_RCV);548598549599 c_can_setup_receive_object(dev, IF_RX, C_CAN_MSG_OBJ_RX_LAST, 0, 0,550550- IF_MCONT_EOB | IF_MCONT_RXIE | IF_MCONT_UMASK);600600+ IF_MCONT_RCV_EOB);551601}552602553603/*···560612 struct c_can_priv *priv = netdev_priv(dev);561613562614 /* enable automatic retransmission */563563- priv->write_reg(priv, C_CAN_CTRL_REG,564564- CONTROL_ENABLE_AR);615615+ priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_ENABLE_AR);565616566617 if ((priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY) &&567618 (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK)) {568619 /* loopback + silent mode : useful for hot self-test */569569- priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_EIE |570570- CONTROL_SIE | 
CONTROL_IE | CONTROL_TEST);571571- priv->write_reg(priv, C_CAN_TEST_REG,572572- TEST_LBACK | TEST_SILENT);620620+ priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_TEST);621621+ priv->write_reg(priv, C_CAN_TEST_REG, TEST_LBACK | TEST_SILENT);573622 } else if (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK) {574623 /* loopback mode : useful for self-test function */575575- priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_EIE |576576- CONTROL_SIE | CONTROL_IE | CONTROL_TEST);624624+ priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_TEST);577625 priv->write_reg(priv, C_CAN_TEST_REG, TEST_LBACK);578626 } else if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY) {579627 /* silent mode : bus-monitoring mode */580580- priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_EIE |581581- CONTROL_SIE | CONTROL_IE | CONTROL_TEST);628628+ priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_TEST);582629 priv->write_reg(priv, C_CAN_TEST_REG, TEST_SILENT);583583- } else584584- /* normal mode*/585585- priv->write_reg(priv, C_CAN_CTRL_REG,586586- CONTROL_EIE | CONTROL_SIE | CONTROL_IE);630630+ }587631588632 /* configure message objects */589633 c_can_configure_msg_objects(dev);590634591635 /* set a `lec` value so that we can check for updates later */592636 priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED);637637+638638+ /* Clear all internal status */639639+ atomic_set(&priv->tx_active, 0);640640+ priv->rxmasked = 0;641641+ priv->tx_dir = 0;593642594643 /* set bittiming params */595644 return c_can_set_bittiming(dev);···602657 if (err)603658 return err;604659660660+ /* Setup the command for new messages */661661+ priv->comm_rcv_high = priv->type != BOSCH_D_CAN ?662662+ IF_COMM_RCV_LOW : IF_COMM_RCV_HIGH;663663+605664 priv->can.state = CAN_STATE_ERROR_ACTIVE;606606-607607- /* reset tx helper pointers */608608- priv->tx_next = priv->tx_echo = 0;609609-610610- /* enable status change, error and module interrupts */611611- c_can_enable_all_interrupts(priv, ENABLE_ALL_INTERRUPTS);612665613666 return 
0;614667}···615672{616673 struct c_can_priv *priv = netdev_priv(dev);617674618618- /* disable all interrupts */619619- c_can_enable_all_interrupts(priv, DISABLE_ALL_INTERRUPTS);620620-621621- /* set the state as STOPPED */675675+ c_can_irq_control(priv, false);622676 priv->can.state = CAN_STATE_STOPPED;623677}624678625679static int c_can_set_mode(struct net_device *dev, enum can_mode mode)626680{681681+ struct c_can_priv *priv = netdev_priv(dev);627682 int err;628683629684 switch (mode) {···630689 if (err)631690 return err;632691 netif_wake_queue(dev);692692+ c_can_irq_control(priv, true);633693 break;634694 default:635695 return -EOPNOTSUPP;···666724 return err;667725}668726669669-/*670670- * priv->tx_echo holds the number of the oldest can_frame put for671671- * transmission into the hardware, but not yet ACKed by the CAN tx672672- * complete IRQ.673673- *674674- * We iterate from priv->tx_echo to priv->tx_next and check if the675675- * packet has been transmitted, echo it back to the CAN framework.676676- * If we discover a not yet transmitted packet, stop looking for more.677677- */678727static void c_can_do_tx(struct net_device *dev)679728{680729 struct c_can_priv *priv = netdev_priv(dev);681730 struct net_device_stats *stats = &dev->stats;682682- u32 val, obj, pkts = 0, bytes = 0;731731+ u32 idx, obj, pkts = 0, bytes = 0, pend, clr;683732684684- spin_lock_bh(&priv->xmit_lock);733733+ clr = pend = priv->read_reg(priv, C_CAN_INTPND2_REG);685734686686- for (; (priv->tx_next - priv->tx_echo) > 0; priv->tx_echo++) {687687- obj = get_tx_echo_msg_obj(priv->tx_echo);688688- val = c_can_read_reg32(priv, C_CAN_TXRQST1_REG);689689-690690- if (val & (1 << (obj - 1)))691691- break;692692-693693- can_get_echo_skb(dev, obj - C_CAN_MSG_OBJ_TX_FIRST);694694- bytes += priv->dlc[obj - C_CAN_MSG_OBJ_TX_FIRST];735735+ while ((idx = ffs(pend))) {736736+ idx--;737737+ pend &= ~(1 << idx);738738+ obj = idx + C_CAN_MSG_OBJ_TX_FIRST;739739+ c_can_inval_tx_object(dev, IF_RX, 
obj);740740+ can_get_echo_skb(dev, idx);741741+ bytes += priv->dlc[idx];695742 pkts++;696696- c_can_inval_msg_object(dev, IF_TX, obj);697743 }698744699699- /* restart queue if wrap-up or if queue stalled on last pkt */700700- if (((priv->tx_next & C_CAN_NEXT_MSG_OBJ_MASK) != 0) ||701701- ((priv->tx_echo & C_CAN_NEXT_MSG_OBJ_MASK) == 0))702702- netif_wake_queue(dev);745745+ /* Clear the bits in the tx_active mask */746746+ atomic_sub(clr, &priv->tx_active);703747704704- spin_unlock_bh(&priv->xmit_lock);748748+ if (clr & (1 << (C_CAN_MSG_OBJ_TX_NUM - 1)))749749+ netif_wake_queue(dev);705750706751 if (pkts) {707752 stats->tx_bytes += bytes;···729800 return pend & ~((1 << lasts) - 1);730801}731802803803+static inline void c_can_rx_object_get(struct net_device *dev,804804+ struct c_can_priv *priv, u32 obj)805805+{806806+#ifdef CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING807807+ if (obj < C_CAN_MSG_RX_LOW_LAST)808808+ c_can_object_get(dev, IF_RX, obj, IF_COMM_RCV_LOW);809809+ else810810+#endif811811+ c_can_object_get(dev, IF_RX, obj, priv->comm_rcv_high);812812+}813813+814814+static inline void c_can_rx_finalize(struct net_device *dev,815815+ struct c_can_priv *priv, u32 obj)816816+{817817+#ifdef CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING818818+ if (obj < C_CAN_MSG_RX_LOW_LAST)819819+ priv->rxmasked |= BIT(obj - 1);820820+ else if (obj == C_CAN_MSG_RX_LOW_LAST) {821821+ priv->rxmasked = 0;822822+ /* activate all lower message objects */823823+ c_can_activate_all_lower_rx_msg_obj(dev, IF_RX);824824+ }825825+#endif826826+ if (priv->type != BOSCH_D_CAN)827827+ c_can_object_get(dev, IF_RX, obj, IF_COMM_CLR_NEWDAT);828828+}829829+732830static int c_can_read_objects(struct net_device *dev, struct c_can_priv *priv,733831 u32 pend, int quota)734832{735735- u32 pkts = 0, ctrl, obj, mcmd;833833+ u32 pkts = 0, ctrl, obj;736834737835 while ((obj = ffs(pend)) && quota > 0) {738836 pend &= ~BIT(obj - 1);739837740740- mcmd = obj < C_CAN_MSG_RX_LOW_LAST ?741741- IF_COMM_RCV_LOW : 
IF_COMM_RCV_HIGH;742742-743743- c_can_object_get(dev, IF_RX, obj, mcmd);838838+ c_can_rx_object_get(dev, priv, obj);744839 ctrl = priv->read_reg(priv, C_CAN_IFACE(MSGCTRL_REG, IF_RX));745840746841 if (ctrl & IF_MCONT_MSGLST) {···786833 /* read the data from the message object */787834 c_can_read_msg_object(dev, IF_RX, ctrl);788835789789- if (obj == C_CAN_MSG_RX_LOW_LAST)790790- /* activate all lower message objects */791791- c_can_activate_all_lower_rx_msg_obj(dev, IF_RX, ctrl);836836+ c_can_rx_finalize(dev, priv, obj);792837793838 pkts++;794839 quota--;795840 }796841797842 return pkts;843843+}844844+845845+static inline u32 c_can_get_pending(struct c_can_priv *priv)846846+{847847+ u32 pend = priv->read_reg(priv, C_CAN_NEWDAT1_REG);848848+849849+#ifdef CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING850850+ pend &= ~priv->rxmasked;851851+#endif852852+ return pend;798853}799854800855/*···813852 * INTPND are set for this message object indicating that a new message814853 * has arrived. To work-around this issue, we keep two groups of message815854 * objects whose partitioning is defined by C_CAN_MSG_OBJ_RX_SPLIT.855855+ *856856+ * If CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING = y816857 *817858 * To ensure in-order frame reception we use the following818859 * approach while re-activating a message object to receive further···828865 * - if the current message object number is greater than829866 * C_CAN_MSG_RX_LOW_LAST then clear the NEWDAT bit of830867 * only this message object.868868+ *869869+ * This can cause packet loss!870870+ *871871+ * If CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING = n872872+ *873873+ * We clear the newdat bit right away.874874+ *875875+ * This can result in packet reordering when the readout is slow.831876 */832877static int c_can_do_rx_poll(struct net_device *dev, int quota)833878{···851880852881 while (quota > 0) {853882 if (!pend) {854854- pend = priv->read_reg(priv, C_CAN_INTPND1_REG);883883+ pend = c_can_get_pending(priv);855884 if (!pend)856885 break;857886 
/*···876905 return pkts;877906}878907879879-static inline int c_can_has_and_handle_berr(struct c_can_priv *priv)880880-{881881- return (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING) &&882882- (priv->current_status & LEC_UNUSED);883883-}884884-885908static int c_can_handle_state_change(struct net_device *dev,886909 enum c_can_bus_error_types error_type)887910{···886921 struct can_frame *cf;887922 struct sk_buff *skb;888923 struct can_berr_counter bec;924924+925925+ switch (error_type) {926926+ case C_CAN_ERROR_WARNING:927927+ /* error warning state */928928+ priv->can.can_stats.error_warning++;929929+ priv->can.state = CAN_STATE_ERROR_WARNING;930930+ break;931931+ case C_CAN_ERROR_PASSIVE:932932+ /* error passive state */933933+ priv->can.can_stats.error_passive++;934934+ priv->can.state = CAN_STATE_ERROR_PASSIVE;935935+ break;936936+ case C_CAN_BUS_OFF:937937+ /* bus-off state */938938+ priv->can.state = CAN_STATE_BUS_OFF;939939+ can_bus_off(dev);940940+ break;941941+ default:942942+ break;943943+ }889944890945 /* propagate the error condition to the CAN stack */891946 skb = alloc_can_err_skb(dev, &cf);···920935 switch (error_type) {921936 case C_CAN_ERROR_WARNING:922937 /* error warning state */923923- priv->can.can_stats.error_warning++;924924- priv->can.state = CAN_STATE_ERROR_WARNING;925938 cf->can_id |= CAN_ERR_CRTL;926939 cf->data[1] = (bec.txerr > bec.rxerr) ?927940 CAN_ERR_CRTL_TX_WARNING :···930947 break;931948 case C_CAN_ERROR_PASSIVE:932949 /* error passive state */933933- priv->can.can_stats.error_passive++;934934- priv->can.state = CAN_STATE_ERROR_PASSIVE;935950 cf->can_id |= CAN_ERR_CRTL;936951 if (rx_err_passive)937952 cf->data[1] |= CAN_ERR_CRTL_RX_PASSIVE;···941960 break;942961 case C_CAN_BUS_OFF:943962 /* bus-off state */944944- priv->can.state = CAN_STATE_BUS_OFF;945963 cf->can_id |= CAN_ERR_BUSOFF;946946- /*947947- * disable all interrupts in bus-off mode to ensure that948948- * the CPU is not hogged down949949- */950950- 
c_can_enable_all_interrupts(priv, DISABLE_ALL_INTERRUPTS);951964 can_bus_off(dev);952965 break;953966 default:954967 break;955968 }956969957957- netif_receive_skb(skb);958970 stats->rx_packets++;959971 stats->rx_bytes += cf->can_dlc;972972+ netif_receive_skb(skb);960973961974 return 1;962975}···971996 if (lec_type == LEC_UNUSED || lec_type == LEC_NO_ERROR)972997 return 0;973998999999+ if (!(priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING))10001000+ return 0;10011001+10021002+ /* common for all type of bus errors */10031003+ priv->can.can_stats.bus_error++;10041004+ stats->rx_errors++;10051005+9741006 /* propagate the error condition to the CAN stack */9751007 skb = alloc_can_err_skb(dev, &cf);9761008 if (unlikely(!skb))···9871005 * check for 'last error code' which tells us the9881006 * type of the last error to occur on the CAN bus9891007 */990990-991991- /* common for all type of bus errors */992992- priv->can.can_stats.bus_error++;993993- stats->rx_errors++;9941008 cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;9951009 cf->data[2] |= CAN_ERR_PROT_UNSPEC;9961010···10211043 break;10221044 }1023104510241024- /* set a `lec` value so that we can check for updates later */10251025- priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED);10261026-10271027- netif_receive_skb(skb);10281046 stats->rx_packets++;10291047 stats->rx_bytes += cf->can_dlc;10301030-10481048+ netif_receive_skb(skb);10311049 return 1;10321050}1033105110341052static int c_can_poll(struct napi_struct *napi, int quota)10351053{10361036- u16 irqstatus;10371037- int lec_type = 0;10381038- int work_done = 0;10391054 struct net_device *dev = napi->dev;10401055 struct c_can_priv *priv = netdev_priv(dev);10561056+ u16 curr, last = priv->last_status;10571057+ int work_done = 0;1041105810421042- irqstatus = priv->irqstatus;10431043- if (!irqstatus)10441044- goto end;10591059+ priv->last_status = curr = priv->read_reg(priv, C_CAN_STS_REG);10601060+ /* Ack status on C_CAN. 
D_CAN is self clearing */10611061+ if (priv->type != BOSCH_D_CAN)10621062+ priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED);1045106310461046- /* status events have the highest priority */10471047- if (irqstatus == STATUS_INTERRUPT) {10481048- priv->current_status = priv->read_reg(priv,10491049- C_CAN_STS_REG);10501050-10511051- /* handle Tx/Rx events */10521052- if (priv->current_status & STATUS_TXOK)10531053- priv->write_reg(priv, C_CAN_STS_REG,10541054- priv->current_status & ~STATUS_TXOK);10551055-10561056- if (priv->current_status & STATUS_RXOK)10571057- priv->write_reg(priv, C_CAN_STS_REG,10581058- priv->current_status & ~STATUS_RXOK);10591059-10601060- /* handle state changes */10611061- if ((priv->current_status & STATUS_EWARN) &&10621062- (!(priv->last_status & STATUS_EWARN))) {10631063- netdev_dbg(dev, "entered error warning state\n");10641064- work_done += c_can_handle_state_change(dev,10651065- C_CAN_ERROR_WARNING);10661066- }10671067- if ((priv->current_status & STATUS_EPASS) &&10681068- (!(priv->last_status & STATUS_EPASS))) {10691069- netdev_dbg(dev, "entered error passive state\n");10701070- work_done += c_can_handle_state_change(dev,10711071- C_CAN_ERROR_PASSIVE);10721072- }10731073- if ((priv->current_status & STATUS_BOFF) &&10741074- (!(priv->last_status & STATUS_BOFF))) {10751075- netdev_dbg(dev, "entered bus off state\n");10761076- work_done += c_can_handle_state_change(dev,10771077- C_CAN_BUS_OFF);10781078- }10791079-10801080- /* handle bus recovery events */10811081- if ((!(priv->current_status & STATUS_BOFF)) &&10821082- (priv->last_status & STATUS_BOFF)) {10831083- netdev_dbg(dev, "left bus off state\n");10841084- priv->can.state = CAN_STATE_ERROR_ACTIVE;10851085- }10861086- if ((!(priv->current_status & STATUS_EPASS)) &&10871087- (priv->last_status & STATUS_EPASS)) {10881088- netdev_dbg(dev, "left error passive state\n");10891089- priv->can.state = CAN_STATE_ERROR_ACTIVE;10901090- }10911091-10921092- priv->last_status = 
priv->current_status;10931093-10941094- /* handle lec errors on the bus */10951095- lec_type = c_can_has_and_handle_berr(priv);10961096- if (lec_type)10971097- work_done += c_can_handle_bus_err(dev, lec_type);10981098- } else if ((irqstatus >= C_CAN_MSG_OBJ_RX_FIRST) &&10991099- (irqstatus <= C_CAN_MSG_OBJ_RX_LAST)) {11001100- /* handle events corresponding to receive message objects */11011101- work_done += c_can_do_rx_poll(dev, (quota - work_done));11021102- } else if ((irqstatus >= C_CAN_MSG_OBJ_TX_FIRST) &&11031103- (irqstatus <= C_CAN_MSG_OBJ_TX_LAST)) {11041104- /* handle events corresponding to transmit message objects */11051105- c_can_do_tx(dev);10641064+ /* handle state changes */10651065+ if ((curr & STATUS_EWARN) && (!(last & STATUS_EWARN))) {10661066+ netdev_dbg(dev, "entered error warning state\n");10671067+ work_done += c_can_handle_state_change(dev, C_CAN_ERROR_WARNING);11061068 }10691069+10701070+ if ((curr & STATUS_EPASS) && (!(last & STATUS_EPASS))) {10711071+ netdev_dbg(dev, "entered error passive state\n");10721072+ work_done += c_can_handle_state_change(dev, C_CAN_ERROR_PASSIVE);10731073+ }10741074+10751075+ if ((curr & STATUS_BOFF) && (!(last & STATUS_BOFF))) {10761076+ netdev_dbg(dev, "entered bus off state\n");10771077+ work_done += c_can_handle_state_change(dev, C_CAN_BUS_OFF);10781078+ goto end;10791079+ }10801080+10811081+ /* handle bus recovery events */10821082+ if ((!(curr & STATUS_BOFF)) && (last & STATUS_BOFF)) {10831083+ netdev_dbg(dev, "left bus off state\n");10841084+ priv->can.state = CAN_STATE_ERROR_ACTIVE;10851085+ }10861086+ if ((!(curr & STATUS_EPASS)) && (last & STATUS_EPASS)) {10871087+ netdev_dbg(dev, "left error passive state\n");10881088+ priv->can.state = CAN_STATE_ERROR_ACTIVE;10891089+ }10901090+10911091+ /* handle lec errors on the bus */10921092+ work_done += c_can_handle_bus_err(dev, curr & LEC_MASK);10931093+10941094+ /* Handle Tx/Rx events. 
We do this unconditionally */10951095+ work_done += c_can_do_rx_poll(dev, (quota - work_done));10961096+ c_can_do_tx(dev);1107109711081098end:11091099 if (work_done < quota) {11101100 napi_complete(napi);11111111- /* enable all IRQs */11121112- c_can_enable_all_interrupts(priv, ENABLE_ALL_INTERRUPTS);11011101+ /* enable all IRQs if we are not in bus off state */11021102+ if (priv->can.state != CAN_STATE_BUS_OFF)11031103+ c_can_irq_control(priv, true);11131104 }1114110511151106 return work_done;···10891142 struct net_device *dev = (struct net_device *)dev_id;10901143 struct c_can_priv *priv = netdev_priv(dev);1091114410921092- priv->irqstatus = priv->read_reg(priv, C_CAN_INT_REG);10931093- if (!priv->irqstatus)11451145+ if (!priv->read_reg(priv, C_CAN_INT_REG))10941146 return IRQ_NONE;1095114710961148 /* disable all interrupts and schedule the NAPI */10971097- c_can_enable_all_interrupts(priv, DISABLE_ALL_INTERRUPTS);11491149+ c_can_irq_control(priv, false);10981150 napi_schedule(&priv->napi);1099115111001152 return IRQ_HANDLED;···11301184 can_led_event(dev, CAN_LED_EVENT_OPEN);1131118511321186 napi_enable(&priv->napi);11871187+ /* enable status change, error and module interrupts */11881188+ c_can_irq_control(priv, true);11331189 netif_start_queue(dev);1134119011351191 return 0;···11741226 return NULL;1175122711761228 priv = netdev_priv(dev);11771177- spin_lock_init(&priv->xmit_lock);11781229 netif_napi_add(dev, &priv->napi, c_can_poll, C_CAN_NAPI_WEIGHT);1179123011801231 priv->dev = dev;···12281281 u32 val;12291282 unsigned long time_out;12301283 struct c_can_priv *priv = netdev_priv(dev);12841284+ int ret;1231128512321286 if (!(dev->flags & IFF_UP))12331287 return 0;···12551307 if (time_after(jiffies, time_out))12561308 return -ETIMEDOUT;1257130912581258- return c_can_start(dev);13101310+ ret = c_can_start(dev);13111311+ if (!ret)13121312+ c_can_irq_control(priv, true);13131313+13141314+ return ret;12591315}12601316EXPORT_SYMBOL_GPL(c_can_power_up);12611317#endif
drivers/net/can/c_can/c_can.h (+5, -18)
···
 #ifndef C_CAN_H
 #define C_CAN_H
 
-/*
- * IFx register masks:
- * allow easy operation on 16-bit registers when the
- * argument is 32-bit instead
- */
-#define IFX_WRITE_LOW_16BIT(x)	((x) & 0xFFFF)
-#define IFX_WRITE_HIGH_16BIT(x)	(((x) & 0xFFFF0000) >> 16)
-
 /* message object split */
 #define C_CAN_NO_OF_OBJECTS	32
 #define C_CAN_MSG_OBJ_RX_NUM	16
···
 
 #define C_CAN_MSG_OBJ_RX_SPLIT	9
 #define C_CAN_MSG_RX_LOW_LAST	(C_CAN_MSG_OBJ_RX_SPLIT - 1)
-
-#define C_CAN_NEXT_MSG_OBJ_MASK	(C_CAN_MSG_OBJ_TX_NUM - 1)
 #define RECEIVE_OBJECT_BITS	0x0000ffff
 
 enum reg {
···
 	struct napi_struct napi;
 	struct net_device *dev;
 	struct device *device;
-	spinlock_t xmit_lock;
-	int tx_object;
-	int current_status;
+	atomic_t tx_active;
+	unsigned long tx_dir;
 	int last_status;
 	u16 (*read_reg) (struct c_can_priv *priv, enum reg index);
 	void (*write_reg) (struct c_can_priv *priv, enum reg index, u16 val);
 	void __iomem *base;
 	const u16 *regs;
-	unsigned long irq_flags; /* for request_irq() */
-	unsigned int tx_next;
-	unsigned int tx_echo;
 	void *priv;		/* for board-specific data */
-	u16 irqstatus;
 	enum c_can_dev_id type;
 	u32 __iomem *raminit_ctrlreg;
-	unsigned int instance;
+	int instance;
 	void (*raminit) (const struct c_can_priv *priv, bool enable);
+	u32 comm_rcv_high;
+	u32 rxmasked;
 	u32 dlc[C_CAN_MSG_OBJ_TX_NUM];
 };
drivers/net/can/c_can/c_can_pci.c (+7, -2)
···
 		goto out_disable_device;
 	}
 
-	pci_set_master(pdev);
-	pci_enable_msi(pdev);
+	ret = pci_enable_msi(pdev);
+	if (!ret) {
+		dev_info(&pdev->dev, "MSI enabled\n");
+		pci_set_master(pdev);
+	}
 
 	addr = pci_iomap(pdev, 0, pci_resource_len(pdev, 0));
 	if (!addr) {
···
 		ret = -EINVAL;
 		goto out_free_c_can;
 	}
+
+	priv->type = c_can_pci_data->type;
 
 	/* Configure access to registers */
 	switch (c_can_pci_data->reg_align) {
drivers/net/can/c_can/c_can_platform.c (+1, -1)
···
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
 	priv->raminit_ctrlreg = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(priv->raminit_ctrlreg) || (int)priv->instance < 0)
+	if (IS_ERR(priv->raminit_ctrlreg) || priv->instance < 0)
 		dev_info(&pdev->dev, "control memory is not used for raminit\n");
 	else
 		priv->raminit = c_can_hw_raminit;
drivers/net/can/dev.c (+1, -1)
···
 
 	/* Check if the CAN device has bit-timing parameters */
 	if (!btc)
-		return -ENOTSUPP;
+		return -EOPNOTSUPP;
 
 	/*
 	 * Depending on the given can_bittiming parameter structure the CAN
drivers/net/can/sja1000/sja1000_isa.c (+13, -3)
···
 static unsigned char cdr[MAXDEV] = {[0 ... (MAXDEV - 1)] = 0xff};
 static unsigned char ocr[MAXDEV] = {[0 ... (MAXDEV - 1)] = 0xff};
 static int indirect[MAXDEV] = {[0 ... (MAXDEV - 1)] = -1};
+static spinlock_t indirect_lock[MAXDEV];  /* lock for indirect access mode */
 
 module_param_array(port, ulong, NULL, S_IRUGO);
 MODULE_PARM_DESC(port, "I/O port number");
···
 static u8 sja1000_isa_port_read_reg_indirect(const struct sja1000_priv *priv,
 					     int reg)
 {
-	unsigned long base = (unsigned long)priv->reg_base;
+	unsigned long flags, base = (unsigned long)priv->reg_base;
+	u8 readval;
 
+	spin_lock_irqsave(&indirect_lock[priv->dev->dev_id], flags);
 	outb(reg, base);
-	return inb(base + 1);
+	readval = inb(base + 1);
+	spin_unlock_irqrestore(&indirect_lock[priv->dev->dev_id], flags);
+
+	return readval;
 }
 
 static void sja1000_isa_port_write_reg_indirect(const struct sja1000_priv *priv,
 						int reg, u8 val)
 {
-	unsigned long base = (unsigned long)priv->reg_base;
+	unsigned long flags, base = (unsigned long)priv->reg_base;
 
+	spin_lock_irqsave(&indirect_lock[priv->dev->dev_id], flags);
 	outb(reg, base);
 	outb(val, base + 1);
+	spin_unlock_irqrestore(&indirect_lock[priv->dev->dev_id], flags);
 }
 
 static int sja1000_isa_probe(struct platform_device *pdev)
···
 	if (iosize == SJA1000_IOSIZE_INDIRECT) {
 		priv->read_reg = sja1000_isa_port_read_reg_indirect;
 		priv->write_reg = sja1000_isa_port_write_reg_indirect;
+		spin_lock_init(&indirect_lock[idx]);
 	} else {
 		priv->read_reg = sja1000_isa_port_read_reg;
 		priv->write_reg = sja1000_isa_port_write_reg;
···
 
 	platform_set_drvdata(pdev, dev);
 	SET_NETDEV_DEV(dev, &pdev->dev);
+	dev->dev_id = idx;
 
 	err = register_sja1000dev(dev);
 	if (err) {
drivers/net/can/slcan.c (+3, -3)
···
 	if (!sl || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev))
 		return;
 
-	spin_lock(&sl->lock);
+	spin_lock_bh(&sl->lock);
 	if (sl->xleft <= 0) {
 		/* Now serial buffer is almost free & we can start
 		 * transmission of another packet */
 		sl->dev->stats.tx_packets++;
 		clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
-		spin_unlock(&sl->lock);
+		spin_unlock_bh(&sl->lock);
 		netif_wake_queue(sl->dev);
 		return;
 	}
···
 	actual = tty->ops->write(tty, sl->xhead, sl->xleft);
 	sl->xleft -= actual;
 	sl->xhead += actual;
-	spin_unlock(&sl->lock);
+	spin_unlock_bh(&sl->lock);
 }
 
 /* Send a can_frame to a TTY queue. */
drivers/net/ethernet/altera/Kconfig (+1)
···
 config ALTERA_TSE
 	tristate "Altera Triple-Speed Ethernet MAC support"
+	depends on HAS_DMA
 	select PHYLIB
 	---help---
 	  This driver supports the Altera Triple-Speed (TSE) Ethernet MAC.
drivers/net/ethernet/altera/altera_msgdma.c (+6, -2)
···
 #include "altera_utils.h"
 #include "altera_tse.h"
 #include "altera_msgdmahw.h"
+#include "altera_msgdma.h"
 
 /* No initialization work to do for MSGDMA */
 int msgdma_initialize(struct altera_tse_private *priv)
···
 }
 
 void msgdma_uninitialize(struct altera_tse_private *priv)
+{
+}
+
+void msgdma_start_rxdma(struct altera_tse_private *priv)
 {
 }
 
···
 
 /* Put buffer to the mSGDMA RX FIFO
  */
-int msgdma_add_rx_desc(struct altera_tse_private *priv,
+void msgdma_add_rx_desc(struct altera_tse_private *priv,
 			struct tse_buffer *rxbuffer)
 {
 	struct msgdma_extended_desc *desc = priv->rx_dma_desc;
···
 	iowrite32(0, &desc->burst_seq_num);
 	iowrite32(0x00010001, &desc->stride);
 	iowrite32(control, &desc->control);
-	return 1;
 }
 
 /* status is returned on upper 16 bits,
drivers/net/ethernet/altera/altera_tse.h
···
 /* MAC function configuration default settings */
 #define ALTERA_TSE_TX_IPG_LENGTH	12
 
+#define ALTERA_TSE_PAUSE_QUANTA		0xffff
+
 #define GET_BIT_VALUE(v, bit)	(((v) >> (bit)) & 0x1)
 
 /* MAC Command_Config Register Bit Definitions
···
 	void (*clear_rxirq)(struct altera_tse_private *);
 	int (*tx_buffer)(struct altera_tse_private *, struct tse_buffer *);
 	u32 (*tx_completions)(struct altera_tse_private *);
-	int (*add_rx_desc)(struct altera_tse_private *, struct tse_buffer *);
+	void (*add_rx_desc)(struct altera_tse_private *, struct tse_buffer *);
 	u32 (*get_rx_status)(struct altera_tse_private *);
 	int (*init_dma)(struct altera_tse_private *);
 	void (*uninit_dma)(struct altera_tse_private *);
+	void (*start_rxdma)(struct altera_tse_private *);
 };
 
 /* This structure is private to each device.
···
 	u32 rxctrlreg;
 	dma_addr_t rxdescphys;
 	dma_addr_t txdescphys;
+	size_t sgdmadesclen;
 
 	struct list_head txlisthd;
 	struct list_head rxlisthd;
drivers/net/ethernet/altera/altera_tse_ethtool.c (+7, -1)
···
 	struct altera_tse_private *priv = netdev_priv(dev);
 	u32 rev = ioread32(&priv->mac_dev->megacore_revision);
 
-	strcpy(info->driver, "Altera TSE MAC IP Driver");
+	strcpy(info->driver, "altera_tse");
 	strcpy(info->version, "v8.0");
 	snprintf(info->fw_version, ETHTOOL_FWVERS_LEN, "v%d.%d",
 		 rev & 0xFFFF, (rev & 0xFFFF0000) >> 16);
···
 	 * how to do any special formatting of this data.
 	 * This version number will need to change if and
 	 * when this register table is changed.
+	 *
+	 * version[31:0] = 1: Dump the first 128 TSE Registers
+	 *      Upper bits are all 0 by default
+	 *
+	 * Upper 16-bits will indicate feature presence for
+	 * Ethtool register decoding in future version.
 	 */
 
 	regs->version = 1;
drivers/net/ethernet/altera/altera_tse_main.c (+46, -31)
···
 		dev_kfree_skb_any(rxbuffer->skb);
 		return -EINVAL;
 	}
+	rxbuffer->dma_addr &= (dma_addr_t)~3;
 	rxbuffer->len = len;
 	return 0;
 }
···
 		priv->dev->stats.rx_bytes += pktlength;

 		entry = next_entry;
+
+		tse_rx_refill(priv);
 	}

-	tse_rx_refill(priv);
 	return count;
 }
···
 	struct net_device *dev = dev_id;
 	struct altera_tse_private *priv;
 	unsigned long int flags;
-

 	if (unlikely(!dev)) {
 		pr_err("%s: invalid dev pointer\n", __func__);
···
 	/* Disable RX/TX shift 16 for alignment of all received frames on 16-bit
 	 * start address
 	 */
-	tse_clear_bit(&mac->rx_cmd_stat, ALTERA_TSE_RX_CMD_STAT_RX_SHIFT16);
+	tse_set_bit(&mac->rx_cmd_stat, ALTERA_TSE_RX_CMD_STAT_RX_SHIFT16);
 	tse_clear_bit(&mac->tx_cmd_stat, ALTERA_TSE_TX_CMD_STAT_TX_SHIFT16 |
 		      ALTERA_TSE_TX_CMD_STAT_OMIT_CRC);

 	/* Set the MAC options */
 	cmd = ioread32(&mac->command_config);
-	cmd |= MAC_CMDCFG_PAD_EN;	/* Padding Removal on Receive */
+	cmd &= ~MAC_CMDCFG_PAD_EN;	/* No padding Removal on Receive */
 	cmd &= ~MAC_CMDCFG_CRC_FWD;	/* CRC Removal */
 	cmd |= MAC_CMDCFG_RX_ERR_DISC;	/* Automatically discard frames
					 * with CRC errors
···
 	cmd |= MAC_CMDCFG_CNTL_FRM_ENA;
 	cmd &= ~MAC_CMDCFG_TX_ENA;
 	cmd &= ~MAC_CMDCFG_RX_ENA;
+
+	/* Default speed and duplex setting, full/100 */
+	cmd &= ~MAC_CMDCFG_HD_ENA;
+	cmd &= ~MAC_CMDCFG_ETH_SPEED;
+	cmd &= ~MAC_CMDCFG_ENA_10;
+
 	iowrite32(cmd, &mac->command_config);
+
+	iowrite32(ALTERA_TSE_PAUSE_QUANTA, &mac->pause_quanta);

 	if (netif_msg_hw(priv))
 		dev_dbg(priv->device,
···
 	spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags);

-	/* Start MAC Rx/Tx */
-	spin_lock(&priv->mac_cfg_lock);
-	tse_set_mac(priv, true);
-	spin_unlock(&priv->mac_cfg_lock);
-
 	if (priv->phydev)
 		phy_start(priv->phydev);

 	napi_enable(&priv->napi);
 	netif_start_queue(dev);
+
+	priv->dmaops->start_rxdma(priv);
+
+	/* Start MAC Rx/Tx */
+	spin_lock(&priv->mac_cfg_lock);
+	tse_set_mac(priv, true);
+	spin_unlock(&priv->mac_cfg_lock);

 	return 0;
···
 	.ndo_validate_addr	= eth_validate_addr,
 };

-
 static int request_and_map(struct platform_device *pdev, const char *name,
			   struct resource **res, void __iomem **ptr)
 {
···
 	/* Get the mapped address to the SGDMA descriptor memory */
 	ret = request_and_map(pdev, "s1", &dma_res, &descmap);
 	if (ret)
-		goto out_free;
+		goto err_free_netdev;

 	/* Start of that memory is for transmit descriptors */
 	priv->tx_dma_desc = descmap;
···
 		if (upper_32_bits(priv->rxdescmem_busaddr)) {
 			dev_dbg(priv->device,
 				"SGDMA bus addresses greater than 32-bits\n");
-			goto out_free;
+			goto err_free_netdev;
 		}
 		if (upper_32_bits(priv->txdescmem_busaddr)) {
 			dev_dbg(priv->device,
 				"SGDMA bus addresses greater than 32-bits\n");
-			goto out_free;
+			goto err_free_netdev;
 		}
 	} else if (priv->dmaops &&
		   priv->dmaops->altera_dtype == ALTERA_DTYPE_MSGDMA) {
 		ret = request_and_map(pdev, "rx_resp", &dma_res,
				      &priv->rx_dma_resp);
 		if (ret)
-			goto out_free;
+			goto err_free_netdev;

 		ret = request_and_map(pdev, "tx_desc", &dma_res,
				      &priv->tx_dma_desc);
 		if (ret)
-			goto out_free;
+			goto err_free_netdev;

 		priv->txdescmem = resource_size(dma_res);
 		priv->txdescmem_busaddr = dma_res->start;
···
 		ret = request_and_map(pdev, "rx_desc", &dma_res,
				      &priv->rx_dma_desc);
 		if (ret)
-			goto out_free;
+			goto err_free_netdev;

 		priv->rxdescmem = resource_size(dma_res);
 		priv->rxdescmem_busaddr = dma_res->start;

 	} else {
-		goto out_free;
+		goto err_free_netdev;
 	}

 	if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask)))
···
 	else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32)))
 		dma_set_coherent_mask(priv->device, DMA_BIT_MASK(32));
 	else
-		goto out_free;
+		goto err_free_netdev;

 	/* MAC address space */
 	ret = request_and_map(pdev, "control_port", &control_port,
			      (void __iomem **)&priv->mac_dev);
 	if (ret)
-		goto out_free;
+		goto err_free_netdev;

 	/* xSGDMA Rx Dispatcher address space */
 	ret = request_and_map(pdev, "rx_csr", &dma_res,
			      &priv->rx_dma_csr);
 	if (ret)
-		goto out_free;
+		goto err_free_netdev;


 	/* xSGDMA Tx Dispatcher address space */
 	ret = request_and_map(pdev, "tx_csr", &dma_res,
			      &priv->tx_dma_csr);
 	if (ret)
-		goto out_free;
+		goto err_free_netdev;


 	/* Rx IRQ */
···
 	if (priv->rx_irq == -ENXIO) {
 		dev_err(&pdev->dev, "cannot obtain Rx IRQ\n");
 		ret = -ENXIO;
-		goto out_free;
+		goto err_free_netdev;
 	}

 	/* Tx IRQ */
···
 	if (priv->tx_irq == -ENXIO) {
 		dev_err(&pdev->dev, "cannot obtain Tx IRQ\n");
 		ret = -ENXIO;
-		goto out_free;
+		goto err_free_netdev;
 	}

 	/* get FIFO depths from device tree */
···
			       &priv->rx_fifo_depth)) {
 		dev_err(&pdev->dev, "cannot obtain rx-fifo-depth\n");
 		ret = -ENXIO;
-		goto out_free;
+		goto err_free_netdev;
 	}

 	if (of_property_read_u32(pdev->dev.of_node, "tx-fifo-depth",
			       &priv->rx_fifo_depth)) {
 		dev_err(&pdev->dev, "cannot obtain tx-fifo-depth\n");
 		ret = -ENXIO;
-		goto out_free;
+		goto err_free_netdev;
 	}

 	/* get hash filter settings for this instance */
···
	      ((priv->phy_addr >= 0) && (priv->phy_addr < PHY_MAX_ADDR)))) {
 		dev_err(&pdev->dev, "invalid phy-addr specified %d\n",
			priv->phy_addr);
-		goto out_free;
+		goto err_free_netdev;
 	}

 	/* Create/attach to MDIO bus */
···
				     atomic_add_return(1, &instance_count));

 	if (ret)
-		goto out_free;
+		goto err_free_netdev;

 	/* initialize netdev */
 	ether_setup(ndev);
···
 	ret = register_netdev(ndev);
 	if (ret) {
 		dev_err(&pdev->dev, "failed to register TSE net device\n");
-		goto out_free_mdio;
+		goto err_register_netdev;
 	}

 	platform_set_drvdata(pdev, ndev);
···
 	ret = init_phy(ndev);
 	if (ret != 0) {
 		netdev_err(ndev, "Cannot attach to PHY (error: %d)\n", ret);
-		goto out_free_mdio;
+		goto err_init_phy;
 	}
 	return 0;

-out_free_mdio:
+err_init_phy:
+	unregister_netdev(ndev);
+err_register_netdev:
+	netif_napi_del(&priv->napi);
 	altera_tse_mdio_destroy(ndev);
-out_free:
+err_free_netdev:
 	free_netdev(ndev);
 	return ret;
 }
···
 	.get_rx_status = sgdma_rx_status,
 	.init_dma = sgdma_initialize,
 	.uninit_dma = sgdma_uninitialize,
+	.start_rxdma = sgdma_start_rxdma,
 };

 struct altera_dmaops altera_dtype_msgdma = {
···
 	.get_rx_status = msgdma_rx_status,
 	.init_dma = msgdma_initialize,
 	.uninit_dma = msgdma_uninitialize,
+	.start_rxdma = msgdma_start_rxdma,
 };

 static struct of_device_id altera_tse_ids[] = {
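The probe hunks above rename the error labels so that each one is named after the first cleanup step it performs, with labels running in reverse order of resource acquisition. A minimal userspace sketch of that unwinding pattern (all names here are illustrative, not the driver's real API):

```c
/* Hypothetical resources standing in for the driver's netdev/MDIO state. */
struct resources {
	int netdev_allocated;
	int mdio_created;
	int registered;
};

static int teardown_count;

static void fake_unregister(struct resources *r)   { r->registered = 0;       teardown_count++; }
static void fake_mdio_destroy(struct resources *r) { r->mdio_created = 0;     teardown_count++; }
static void fake_free_netdev(struct resources *r)  { r->netdev_allocated = 0; teardown_count++; }

/* Probe-style function: on failure, jump to the label named for the
 * first cleanup needed; control then falls through the remaining
 * labels, releasing resources in reverse order of acquisition. */
static int probe(struct resources *r, int fail_at)
{
	r->netdev_allocated = 1;
	if (fail_at == 1)
		goto err_free_netdev;

	r->mdio_created = 1;
	if (fail_at == 2)
		goto err_destroy_mdio;

	r->registered = 1;
	if (fail_at == 3)
		goto err_unregister;

	return 0;

err_unregister:
	fake_unregister(r);
err_destroy_mdio:
	fake_mdio_destroy(r);
err_free_netdev:
	fake_free_netdev(r);
	return -1;
}
```

Naming labels after the cleanup they start (rather than `out_free`, `out_free2`, ...) makes it harder to jump to the wrong unwind point when a new allocation is inserted mid-probe.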
···
 config NET_CADENCE
 	bool "Cadence devices"
-	depends on HAS_IOMEM && (ARM || AVR32 || COMPILE_TEST)
+	depends on HAS_IOMEM && (ARM || AVR32 || MICROBLAZE || COMPILE_TEST)
 	default y
 	---help---
 	  If you have a network (Ethernet) card belonging to this class, say Y.
···
 config MACB
 	tristate "Cadence MACB/GEM support"
-	depends on HAS_DMA && (PLATFORM_AT32AP || ARCH_AT91 || ARCH_PICOXCELL || ARCH_ZYNQ || COMPILE_TEST)
+	depends on HAS_DMA && (PLATFORM_AT32AP || ARCH_AT91 || ARCH_PICOXCELL || ARCH_ZYNQ || MICROBLAZE || COMPILE_TEST)
 	select PHYLIB
 	---help---
 	  The Cadence MACB ethernet interface is found on many Atmel AT32 and
drivers/net/ethernet/cadence/macb.c | +17 -18
···
 {
 	unsigned int		entry;
 	struct sk_buff		*skb;
-	struct macb_dma_desc	*desc;
 	dma_addr_t		paddr;

 	while (CIRC_SPACE(bp->rx_prepared_head, bp->rx_tail, RX_RING_SIZE) > 0) {
-		u32 addr, ctrl;
-
 		entry = macb_rx_ring_wrap(bp->rx_prepared_head);
-		desc = &bp->rx_ring[entry];

 		/* Make hw descriptor updates visible to CPU */
 		rmb();

-		addr = desc->addr;
-		ctrl = desc->ctrl;
 		bp->rx_prepared_head++;
-
-		if ((addr & MACB_BIT(RX_USED)))
-			continue;

 		if (bp->rx_skbuff[entry] == NULL) {
 			/* allocate sk_buff for this free entry in ring */
···
 		if (!(addr & MACB_BIT(RX_USED)))
 			break;

-		desc->addr &= ~MACB_BIT(RX_USED);
 		bp->rx_tail++;
 		count++;
···
 	if (work_done < budget) {
 		napi_complete(napi);

-		/*
-		 * We've done what we can to clean the buffers. Make sure we
-		 * get notified when new packets arrive.
-		 */
-		macb_writel(bp, IER, MACB_RX_INT_FLAGS);
-
 		/* Packets received while interrupts were disabled */
 		status = macb_readl(bp, RSR);
-		if (unlikely(status))
+		if (status) {
+			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+				macb_writel(bp, ISR, MACB_BIT(RCOMP));
 			napi_reschedule(napi);
+		} else {
+			macb_writel(bp, IER, MACB_RX_INT_FLAGS);
+		}
 	}

 	/* TODO: Handle errors */
···
 		if (unlikely(status & (MACB_TX_ERR_FLAGS))) {
 			macb_writel(bp, IDR, MACB_TX_INT_FLAGS);
 			schedule_work(&bp->tx_error_task);
+
+			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+				macb_writel(bp, ISR, MACB_TX_ERR_FLAGS);
+
 			break;
 		}
···
 				bp->hw_stats.gem.rx_overruns++;
 			else
 				bp->hw_stats.macb.rx_overruns++;
+
+			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+				macb_writel(bp, ISR, MACB_BIT(ISR_ROVR));
 		}

 		if (status & MACB_BIT(HRESP)) {
···
			 * (work queue?)
			 */
			netdev_err(dev, "DMA bus error: HRESP not OK\n");
+
+			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+				macb_writel(bp, ISR, MACB_BIT(HRESP));
 		}

 		status = macb_readl(bp, ISR);
···
 		desc = &bp->rx_ring[i];
 		addr = MACB_BF(RX_WADDR, MACB_BFEXT(RX_WADDR, desc->addr));
-		dma_unmap_single(&bp->pdev->dev, addr, skb->len,
+		dma_unmap_single(&bp->pdev->dev, addr, bp->rx_buffer_size,
				 DMA_FROM_DEVICE);
 		dev_kfree_skb_any(skb);
 		skb = NULL;
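The macb poll hunk above closes a race: packets can arrive between finishing the poll loop and re-enabling the receive interrupt, so the status register must be re-checked after completion, and the interrupt is re-enabled only when no work slipped in. A simplified model of that decision (names are illustrative; this is not the macb API):

```c
/* Minimal model of the poll-completion race: 'pending' stands in for
 * the RSR register, which may become non-zero at any point while the
 * interrupt is masked. */
struct fake_dev {
	int pending;		/* packets waiting (models RSR) */
	int irq_enabled;
	int rescheduled;
};

static void poll_complete(struct fake_dev *d)
{
	/* After napi_complete(): re-read status before re-enabling the
	 * interrupt, because an edge that fired while masked is lost. */
	if (d->pending) {
		/* Work slipped in while the IRQ was off: poll again
		 * instead of re-enabling and losing the event. */
		d->rescheduled = 1;
	} else {
		d->irq_enabled = 1;
	}
}
```

Re-enabling the interrupt unconditionally (as the removed lines did) can strand a packet in the ring forever if it landed after the poll loop but before the write to IER.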
drivers/net/ethernet/chelsio/Kconfig | +7 -6
···
 	  will be called cxgb3.

 config CHELSIO_T4
-	tristate "Chelsio Communications T4 Ethernet support"
+	tristate "Chelsio Communications T4/T5 Ethernet support"
 	depends on PCI
 	select FW_LOADER
 	select MDIO
 	---help---
-	  This driver supports Chelsio T4-based gigabit and 10Gb Ethernet
-	  adapters.
+	  This driver supports Chelsio T4 and T5 based gigabit, 10Gb Ethernet
+	  adapter and T5 based 40Gb Ethernet adapter.

 	  For general information about Chelsio and our products, visit
 	  our website at <http://www.chelsio.com>.
···
 	  will be called cxgb4.

 config CHELSIO_T4VF
-	tristate "Chelsio Communications T4 Virtual Function Ethernet support"
+	tristate "Chelsio Communications T4/T5 Virtual Function Ethernet support"
 	depends on PCI
 	---help---
-	  This driver supports Chelsio T4-based gigabit and 10Gb Ethernet
-	  adapters with PCI-E SR-IOV Virtual Functions.
+	  This driver supports Chelsio T4 and T5 based gigabit, 10Gb Ethernet
+	  adapters and T5 based 40Gb Ethernet adapters with PCI-E SR-IOV Virtual
+	  Functions.

 	  For general information about Chelsio and our products, visit
 	  our website at <http://www.chelsio.com>.
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | +2
···
 		spd = " 2.5 GT/s";
 	else if (adap->params.pci.speed == PCI_EXP_LNKSTA_CLS_5_0GB)
 		spd = " 5 GT/s";
+	else if (adap->params.pci.speed == PCI_EXP_LNKSTA_CLS_8_0GB)
+		spd = " 8 GT/s";

 	if (pi->link_cfg.supported & FW_PORT_CAP_SPEED_100M)
 		bufp += sprintf(bufp, "100/");
drivers/net/ethernet/freescale/gianfar.c | +113 -110
···
 static irqreturn_t gfar_transmit(int irq, void *dev_id);
 static irqreturn_t gfar_interrupt(int irq, void *dev_id);
 static void adjust_link(struct net_device *dev);
+static noinline void gfar_update_link_state(struct gfar_private *priv);
 static int init_phy(struct net_device *dev);
 static int gfar_probe(struct platform_device *ofdev);
 static int gfar_remove(struct platform_device *ofdev);
···
 	return IRQ_HANDLED;
 }

-static u32 gfar_get_flowctrl_cfg(struct gfar_private *priv)
-{
-	struct phy_device *phydev = priv->phydev;
-	u32 val = 0;
-
-	if (!phydev->duplex)
-		return val;
-
-	if (!priv->pause_aneg_en) {
-		if (priv->tx_pause_en)
-			val |= MACCFG1_TX_FLOW;
-		if (priv->rx_pause_en)
-			val |= MACCFG1_RX_FLOW;
-	} else {
-		u16 lcl_adv, rmt_adv;
-		u8 flowctrl;
-		/* get link partner capabilities */
-		rmt_adv = 0;
-		if (phydev->pause)
-			rmt_adv = LPA_PAUSE_CAP;
-		if (phydev->asym_pause)
-			rmt_adv |= LPA_PAUSE_ASYM;
-
-		lcl_adv = mii_advertise_flowctrl(phydev->advertising);
-
-		flowctrl = mii_resolve_flowctrl_fdx(lcl_adv, rmt_adv);
-		if (flowctrl & FLOW_CTRL_TX)
-			val |= MACCFG1_TX_FLOW;
-		if (flowctrl & FLOW_CTRL_RX)
-			val |= MACCFG1_RX_FLOW;
-	}
-
-	return val;
-}
-
 /* Called every time the controller might need to be made
  * aware of new link state.  The PHY code conveys this
  * information through variables in the phydev structure, and this
···
 static void adjust_link(struct net_device *dev)
 {
 	struct gfar_private *priv = netdev_priv(dev);
-	struct gfar __iomem *regs = priv->gfargrp[0].regs;
 	struct phy_device *phydev = priv->phydev;
-	int new_state = 0;

-	if (test_bit(GFAR_RESETTING, &priv->state))
-		return;
-
-	if (phydev->link) {
-		u32 tempval1 = gfar_read(&regs->maccfg1);
-		u32 tempval = gfar_read(&regs->maccfg2);
-		u32 ecntrl = gfar_read(&regs->ecntrl);
-
-		/* Now we make sure that we can be in full duplex mode.
-		 * If not, we operate in half-duplex mode.
-		 */
-		if (phydev->duplex != priv->oldduplex) {
-			new_state = 1;
-			if (!(phydev->duplex))
-				tempval &= ~(MACCFG2_FULL_DUPLEX);
-			else
-				tempval |= MACCFG2_FULL_DUPLEX;
-
-			priv->oldduplex = phydev->duplex;
-		}
-
-		if (phydev->speed != priv->oldspeed) {
-			new_state = 1;
-			switch (phydev->speed) {
-			case 1000:
-				tempval =
-				    ((tempval & ~(MACCFG2_IF)) | MACCFG2_GMII);
-
-				ecntrl &= ~(ECNTRL_R100);
-				break;
-			case 100:
-			case 10:
-				tempval =
-				    ((tempval & ~(MACCFG2_IF)) | MACCFG2_MII);
-
-				/* Reduced mode distinguishes
-				 * between 10 and 100
-				 */
-				if (phydev->speed == SPEED_100)
-					ecntrl |= ECNTRL_R100;
-				else
-					ecntrl &= ~(ECNTRL_R100);
-				break;
-			default:
-				netif_warn(priv, link, dev,
-					   "Ack! Speed (%d) is not 10/100/1000!\n",
-					   phydev->speed);
-				break;
-			}
-
-			priv->oldspeed = phydev->speed;
-		}
-
-		tempval1 &= ~(MACCFG1_TX_FLOW | MACCFG1_RX_FLOW);
-		tempval1 |= gfar_get_flowctrl_cfg(priv);
-
-		gfar_write(&regs->maccfg1, tempval1);
-		gfar_write(&regs->maccfg2, tempval);
-		gfar_write(&regs->ecntrl, ecntrl);
-
-		if (!priv->oldlink) {
-			new_state = 1;
-			priv->oldlink = 1;
-		}
-	} else if (priv->oldlink) {
-		new_state = 1;
-		priv->oldlink = 0;
-		priv->oldspeed = 0;
-		priv->oldduplex = -1;
-	}
-
-	if (new_state && netif_msg_link(priv))
-		phy_print_status(phydev);
+	if (unlikely(phydev->link != priv->oldlink ||
+		     phydev->duplex != priv->oldduplex ||
+		     phydev->speed != priv->oldspeed))
+		gfar_update_link_state(priv);
 }

 /* Update the hash table based on the current list of multicast
···
 		netif_dbg(priv, tx_err, dev, "babbling TX error\n");
 	}
 	return IRQ_HANDLED;
+}
+
+static u32 gfar_get_flowctrl_cfg(struct gfar_private *priv)
+{
+	struct phy_device *phydev = priv->phydev;
+	u32 val = 0;
+
+	if (!phydev->duplex)
+		return val;
+
+	if (!priv->pause_aneg_en) {
+		if (priv->tx_pause_en)
+			val |= MACCFG1_TX_FLOW;
+		if (priv->rx_pause_en)
+			val |= MACCFG1_RX_FLOW;
+	} else {
+		u16 lcl_adv, rmt_adv;
+		u8 flowctrl;
+		/* get link partner capabilities */
+		rmt_adv = 0;
+		if (phydev->pause)
+			rmt_adv = LPA_PAUSE_CAP;
+		if (phydev->asym_pause)
+			rmt_adv |= LPA_PAUSE_ASYM;
+
+		lcl_adv = mii_advertise_flowctrl(phydev->advertising);
+
+		flowctrl = mii_resolve_flowctrl_fdx(lcl_adv, rmt_adv);
+		if (flowctrl & FLOW_CTRL_TX)
+			val |= MACCFG1_TX_FLOW;
+		if (flowctrl & FLOW_CTRL_RX)
+			val |= MACCFG1_RX_FLOW;
+	}
+
+	return val;
+}
+
+static noinline void gfar_update_link_state(struct gfar_private *priv)
+{
+	struct gfar __iomem *regs = priv->gfargrp[0].regs;
+	struct phy_device *phydev = priv->phydev;
+
+	if (unlikely(test_bit(GFAR_RESETTING, &priv->state)))
+		return;
+
+	if (phydev->link) {
+		u32 tempval1 = gfar_read(&regs->maccfg1);
+		u32 tempval = gfar_read(&regs->maccfg2);
+		u32 ecntrl = gfar_read(&regs->ecntrl);
+
+		if (phydev->duplex != priv->oldduplex) {
+			if (!(phydev->duplex))
+				tempval &= ~(MACCFG2_FULL_DUPLEX);
+			else
+				tempval |= MACCFG2_FULL_DUPLEX;
+
+			priv->oldduplex = phydev->duplex;
+		}
+
+		if (phydev->speed != priv->oldspeed) {
+			switch (phydev->speed) {
+			case 1000:
+				tempval =
+				    ((tempval & ~(MACCFG2_IF)) | MACCFG2_GMII);
+
+				ecntrl &= ~(ECNTRL_R100);
+				break;
+			case 100:
+			case 10:
+				tempval =
+				    ((tempval & ~(MACCFG2_IF)) | MACCFG2_MII);
+
+				/* Reduced mode distinguishes
+				 * between 10 and 100
+				 */
+				if (phydev->speed == SPEED_100)
+					ecntrl |= ECNTRL_R100;
+				else
+					ecntrl &= ~(ECNTRL_R100);
+				break;
+			default:
+				netif_warn(priv, link, priv->ndev,
+					   "Ack! Speed (%d) is not 10/100/1000!\n",
+					   phydev->speed);
+				break;
+			}
+
+			priv->oldspeed = phydev->speed;
+		}
+
+		tempval1 &= ~(MACCFG1_TX_FLOW | MACCFG1_RX_FLOW);
+		tempval1 |= gfar_get_flowctrl_cfg(priv);
+
+		gfar_write(&regs->maccfg1, tempval1);
+		gfar_write(&regs->maccfg2, tempval);
+		gfar_write(&regs->ecntrl, ecntrl);
+
+		if (!priv->oldlink)
+			priv->oldlink = 1;
+
+	} else if (priv->oldlink) {
+		priv->oldlink = 0;
+		priv->oldspeed = 0;
+		priv->oldduplex = -1;
+	}
+
+	if (netif_msg_link(priv))
+		phy_print_status(phydev);
 }

 static struct of_device_id gfar_match[] =
···
 		u32 prttsyn_stat = rd32(hw, I40E_PRTTSYN_STAT_0);

 		if (prttsyn_stat & I40E_PRTTSYN_STAT_0_TXTIME_MASK) {
-			ena_mask &= ~I40E_PFINT_ICR0_ENA_TIMESYNC_MASK;
+			icr0 &= ~I40E_PFINT_ICR0_ENA_TIMESYNC_MASK;
 			i40e_ptp_tx_hwtstamp(pf);
-			prttsyn_stat &= ~I40E_PRTTSYN_STAT_0_TXTIME_MASK;
 		}
-
-		wr32(hw, I40E_PRTTSYN_STAT_0, prttsyn_stat);
 	}

 	/* If a critical error is pending we have no choice but to reset the
···
 	err = i40e_vsi_open(vsi);
 	if (err)
 		return err;
+
+	/* configure global TSO hardware offload settings */
+	wr32(&pf->hw, I40E_GLLAN_TSOMSK_F, be32_to_cpu(TCP_FLAG_PSH |
+						       TCP_FLAG_FIN) >> 16);
+	wr32(&pf->hw, I40E_GLLAN_TSOMSK_M, be32_to_cpu(TCP_FLAG_PSH |
+						       TCP_FLAG_FIN |
+						       TCP_FLAG_CWR) >> 16);
+	wr32(&pf->hw, I40E_GLLAN_TSOMSK_L, be32_to_cpu(TCP_FLAG_CWR) >> 16);

 #ifdef CONFIG_I40E_VXLAN
 	vxlan_get_rx_port(netdev);
···
			   NETIF_F_HW_VLAN_CTAG_FILTER |
			   NETIF_F_IPV6_CSUM |
			   NETIF_F_TSO |
+			   NETIF_F_TSO_ECN |
			   NETIF_F_TSO6 |
			   NETIF_F_RXCSUM |
			   NETIF_F_NTUPLE |
drivers/net/ethernet/intel/i40e/i40e_nvm.c | +1 -1
···
 		udelay(5);
 	}
 	if (ret_code == I40E_ERR_TIMEOUT)
-		hw_dbg(hw, "Done bit in GLNVM_SRCTL not set");
+		hw_dbg(hw, "Done bit in GLNVM_SRCTL not set\n");
 	return ret_code;
 }
···
 		}
 		break;
 	default:
-		dev_info(&pf->pdev->dev, "Could not specify spec type %d",
+		dev_info(&pf->pdev->dev, "Could not specify spec type %d\n",
			 input->flow_type);
 		ret = -EINVAL;
 	}
···
			pf->flags |= I40E_FLAG_FDIR_REQUIRES_REINIT;
 		}
 	} else {
-		dev_info(&pdev->dev, "FD filter programming error");
+		dev_info(&pdev->dev, "FD filter programming error\n");
 	}
 } else if (error ==
	   (0x1 << I40E_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT)) {
···
		 I40E_TX_FLAGS_VLAN_PRIO_SHIFT;
 	if (tx_flags & I40E_TX_FLAGS_SW_VLAN) {
 		struct vlan_ethhdr *vhdr;
-		if (skb_header_cloned(skb) &&
-		    pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
-			return -ENOMEM;
+		int rc;
+
+		rc = skb_cow_head(skb, 0);
+		if (rc < 0)
+			return rc;
 		vhdr = (struct vlan_ethhdr *)skb->data;
 		vhdr->h_vlan_TCI = htons(tx_flags >>
					 I40E_TX_FLAGS_VLAN_SHIFT);
···
		    u64 *cd_type_cmd_tso_mss, u32 *cd_tunneling)
 {
 	u32 cd_cmd, cd_tso_len, cd_mss;
+	struct ipv6hdr *ipv6h;
 	struct tcphdr *tcph;
 	struct iphdr *iph;
 	u32 l4len;
 	int err;
-	struct ipv6hdr *ipv6h;

 	if (!skb_is_gso(skb))
 		return 0;

-	if (skb_header_cloned(skb)) {
-		err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
-		if (err)
-			return err;
-	}
+	err = skb_cow_head(skb, 0);
+	if (err < 0)
+		return err;

 	if (protocol == htons(ETH_P_IP)) {
 		iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb);
drivers/net/ethernet/intel/igb/e1000_i210.c | +1 -1
···
 		word_address = INVM_DWORD_TO_WORD_ADDRESS(invm_dword);
 		if (word_address == address) {
 			*data = INVM_DWORD_TO_WORD_DATA(invm_dword);
-			hw_dbg("Read INVM Word 0x%02x = %x",
+			hw_dbg("Read INVM Word 0x%02x = %x\n",
			       address, *data);
 			status = E1000_SUCCESS;
 			break;
drivers/net/ethernet/intel/igb/e1000_mac.c | +6 -7
···
		 */
		if (hw->fc.requested_mode == e1000_fc_full) {
			hw->fc.current_mode = e1000_fc_full;
-			hw_dbg("Flow Control = FULL.\r\n");
+			hw_dbg("Flow Control = FULL.\n");
		} else {
			hw->fc.current_mode = e1000_fc_rx_pause;
-			hw_dbg("Flow Control = "
-			       "RX PAUSE frames only.\r\n");
+			hw_dbg("Flow Control = RX PAUSE frames only.\n");
		}
	}
	/* For receiving PAUSE frames ONLY.
···
		 (mii_nway_lp_ability_reg & NWAY_LPAR_PAUSE) &&
		 (mii_nway_lp_ability_reg & NWAY_LPAR_ASM_DIR)) {
		hw->fc.current_mode = e1000_fc_tx_pause;
-		hw_dbg("Flow Control = TX PAUSE frames only.\r\n");
+		hw_dbg("Flow Control = TX PAUSE frames only.\n");
	}
	/* For transmitting PAUSE frames ONLY.
	 *
···
		 !(mii_nway_lp_ability_reg & NWAY_LPAR_PAUSE) &&
		 (mii_nway_lp_ability_reg & NWAY_LPAR_ASM_DIR)) {
		hw->fc.current_mode = e1000_fc_rx_pause;
-		hw_dbg("Flow Control = RX PAUSE frames only.\r\n");
+		hw_dbg("Flow Control = RX PAUSE frames only.\n");
	}
	/* Per the IEEE spec, at this point flow control should be
	 * disabled. However, we want to consider that we could
···
	    (hw->fc.requested_mode == e1000_fc_tx_pause) ||
	    (hw->fc.strict_ieee)) {
		hw->fc.current_mode = e1000_fc_none;
-		hw_dbg("Flow Control = NONE.\r\n");
+		hw_dbg("Flow Control = NONE.\n");
	} else {
		hw->fc.current_mode = e1000_fc_rx_pause;
-		hw_dbg("Flow Control = RX PAUSE frames only.\r\n");
+		hw_dbg("Flow Control = RX PAUSE frames only.\n");
	}

	/* Now we need to do one last check...  If we auto-
drivers/net/ethernet/intel/igb/igb_main.c | +3 -1
···
 	rcu_read_lock();
 	for (i = 0; i < adapter->num_rx_queues; i++) {
-		u32 rqdpc = rd32(E1000_RQDPC(i));
 		struct igb_ring *ring = adapter->rx_ring[i];
+		u32 rqdpc = rd32(E1000_RQDPC(i));
+		if (hw->mac.type >= e1000_i210)
+			wr32(E1000_RQDPC(i), 0);

 		if (rqdpc) {
 			ring->rx_stats.drops += rqdpc;
···
 	ixgbe_rx_checksum(rx_ring, rx_desc, skb);

-	ixgbe_ptp_rx_hwtstamp(rx_ring, rx_desc, skb);
+	if (unlikely(ixgbe_test_staterr(rx_desc, IXGBE_RXDADV_STAT_TS)))
+		ixgbe_ptp_rx_hwtstamp(rx_ring->q_vector->adapter, skb);

 	if ((dev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
 	    ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_VP)) {
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c | +3 -3
···
 	if (time_out == max_time_out) {
 		status = IXGBE_ERR_LINK_SETUP;
-		hw_dbg(hw, "ixgbe_setup_phy_link_generic: time out");
+		hw_dbg(hw, "ixgbe_setup_phy_link_generic: time out\n");
 	}

 	return status;
···
 	if (time_out == max_time_out) {
 		status = IXGBE_ERR_LINK_SETUP;
-		hw_dbg(hw, "ixgbe_setup_phy_link_tnx: time out");
+		hw_dbg(hw, "ixgbe_setup_phy_link_tnx: time out\n");
 	}

 	return status;
···
 		status = 0;
 	} else {
 		if (hw->allow_unsupported_sfp) {
-			e_warn(drv, "WARNING: Intel (R) Network Connections are quality tested using Intel (R) Ethernet Optics. Using untested modules is not supported and may cause unstable operation or damage to the module or the adapter. Intel Corporation is not responsible for any harm caused by using untested modules.");
+			e_warn(drv, "WARNING: Intel (R) Network Connections are quality tested using Intel (R) Ethernet Optics. Using untested modules is not supported and may cause unstable operation or damage to the module or the adapter. Intel Corporation is not responsible for any harm caused by using untested modules.\n");
 			status = 0;
 		} else {
 			hw_dbg(hw,
drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c | +13 -27
···
 void ixgbe_ptp_rx_hang(struct ixgbe_adapter *adapter)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
-	struct ixgbe_ring *rx_ring;
 	u32 tsyncrxctl = IXGBE_READ_REG(hw, IXGBE_TSYNCRXCTL);
 	unsigned long rx_event;
-	int n;

 	/* if we don't have a valid timestamp in the registers, just update the
 	 * timeout counter and exit
···
 	/* determine the most recent watchdog or rx_timestamp event */
 	rx_event = adapter->last_rx_ptp_check;
-	for (n = 0; n < adapter->num_rx_queues; n++) {
-		rx_ring = adapter->rx_ring[n];
-		if (time_after(rx_ring->last_rx_timestamp, rx_event))
-			rx_event = rx_ring->last_rx_timestamp;
-	}
+	if (time_after(adapter->last_rx_timestamp, rx_event))
+		rx_event = adapter->last_rx_timestamp;

 	/* only need to read the high RXSTMP register to clear the lock */
 	if (time_is_before_jiffies(rx_event + 5*HZ)) {
 		IXGBE_READ_REG(hw, IXGBE_RXSTMPH);
 		adapter->last_rx_ptp_check = jiffies;

-		e_warn(drv, "clearing RX Timestamp hang");
+		e_warn(drv, "clearing RX Timestamp hang\n");
 	}
 }
···
 		dev_kfree_skb_any(adapter->ptp_tx_skb);
 		adapter->ptp_tx_skb = NULL;
 		clear_bit_unlock(__IXGBE_PTP_TX_IN_PROGRESS, &adapter->state);
-		e_warn(drv, "clearing Tx Timestamp hang");
+		e_warn(drv, "clearing Tx Timestamp hang\n");
 		return;
 	}
···
 }

 /**
- * __ixgbe_ptp_rx_hwtstamp - utility function which checks for RX time stamp
- * @q_vector: structure containing interrupt and ring information
+ * ixgbe_ptp_rx_hwtstamp - utility function which checks for RX time stamp
+ * @adapter: pointer to adapter struct
  * @skb: particular skb to send timestamp with
  *
  * if the timestamp is valid, we convert it into the timecounter ns
  * value, then store that result into the shhwtstamps structure which
  * is passed up the network stack
  */
-void __ixgbe_ptp_rx_hwtstamp(struct ixgbe_q_vector *q_vector,
-			     struct sk_buff *skb)
+void ixgbe_ptp_rx_hwtstamp(struct ixgbe_adapter *adapter, struct sk_buff *skb)
 {
-	struct ixgbe_adapter *adapter;
-	struct ixgbe_hw *hw;
+	struct ixgbe_hw *hw = &adapter->hw;
 	struct skb_shared_hwtstamps *shhwtstamps;
 	u64 regval = 0, ns;
 	u32 tsyncrxctl;
 	unsigned long flags;

-	/* we cannot process timestamps on a ring without a q_vector */
-	if (!q_vector || !q_vector->adapter)
-		return;
-
-	adapter = q_vector->adapter;
-	hw = &adapter->hw;
-
-	/*
-	 * Read the tsyncrxctl register afterwards in order to prevent taking an
-	 * I/O hit on every packet.
-	 */
 	tsyncrxctl = IXGBE_READ_REG(hw, IXGBE_TSYNCRXCTL);
 	if (!(tsyncrxctl & IXGBE_TSYNCRXCTL_VALID))
 		return;
···
 	regval |= (u64)IXGBE_READ_REG(hw, IXGBE_RXSTMPL);
 	regval |= (u64)IXGBE_READ_REG(hw, IXGBE_RXSTMPH) << 32;

-
 	spin_lock_irqsave(&adapter->tmreg_lock, flags);
 	ns = timecounter_cyc2time(&adapter->tc, regval);
 	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);

 	shhwtstamps = skb_hwtstamps(skb);
 	shhwtstamps->hwtstamp = ns_to_ktime(ns);
+
+	/* Update the last_rx_timestamp timer in order to enable watchdog check
+	 * for error case of latched timestamp on a dropped packet.
+	 */
+	adapter->last_rx_timestamp = jiffies;
 }

 int ixgbe_ptp_get_ts_config(struct ixgbe_adapter *adapter, struct ifreq *ifr)
drivers/net/ethernet/marvell/mvmdio.c | +4 -1
···
 	clk_prepare_enable(dev->clk);

 	dev->err_interrupt = platform_get_irq(pdev, 0);
-	if (dev->err_interrupt != -ENXIO) {
+	if (dev->err_interrupt > 0) {
 		ret = devm_request_irq(&pdev->dev, dev->err_interrupt,
				       orion_mdio_err_irq,
				       IRQF_SHARED, pdev->name, dev);
···
 		writel(MVMDIO_ERR_INT_SMI_DONE,
		       dev->regs + MVMDIO_ERR_INT_MASK);
+
+	} else if (dev->err_interrupt == -EPROBE_DEFER) {
+		return -EPROBE_DEFER;
 	}

 	mutex_init(&dev->lock);
drivers/net/ethernet/mellanox/mlx4/main.c | +4 -3
···
 		has_eth_port = true;
 	}

-	if (has_ib_port || (dev->caps.flags & MLX4_DEV_CAP_FLAG_IBOE))
-		request_module_nowait(IB_DRV_NAME);
 	if (has_eth_port)
 		request_module_nowait(EN_DRV_NAME);
+	if (has_ib_port || (dev->caps.flags & MLX4_DEV_CAP_FLAG_IBOE))
+		request_module_nowait(IB_DRV_NAME);
 }

 /*
···
 	 * No return code for this call, just warn the user in case of PCI
 	 * express device capabilities are under-satisfied by the bus.
 	 */
-	mlx4_check_pcie_caps(dev);
+	if (!mlx4_is_slave(dev))
+		mlx4_check_pcie_caps(dev);

 	/* In master functions, the communication channel must be initialized
 	 * after obtaining its address from fw */
drivers/net/ethernet/mellanox/mlx4/port.c | +20 -15
···
 	}

 	if (found_ix >= 0) {
+		/* Calculate a slave_gid which is the slave number in the gid
+		 * table and not a globally unique slave number.
+		 */
 		if (found_ix < MLX4_ROCE_PF_GIDS)
 			slave_gid = 0;
 		else if (found_ix < MLX4_ROCE_PF_GIDS + (vf_gids % num_vfs) *
···
			     ((vf_gids % num_vfs) * ((vf_gids / num_vfs + 1)))) /
			     (vf_gids / num_vfs)) + vf_gids % num_vfs + 1;

+		/* Calculate the globally unique slave id */
 		if (slave_gid) {
 			struct mlx4_active_ports exclusive_ports;
 			struct mlx4_active_ports actv_ports;
 			struct mlx4_slaves_pport slaves_pport_actv;
 			unsigned max_port_p_one;
-			int num_slaves_before = 1;
+			int num_vfs_before = 0;
+			int candidate_slave_gid;

+			/* Calculate how many VFs are on the previous port, if exists */
 			for (i = 1; i < port; i++) {
 				bitmap_zero(exclusive_ports.ports, dev->caps.num_ports);
-				set_bit(i, exclusive_ports.ports);
+				set_bit(i - 1, exclusive_ports.ports);
 				slaves_pport_actv =
 					mlx4_phys_to_slaves_pport_actv(
							dev, &exclusive_ports);
-				num_slaves_before += bitmap_weight(
+				num_vfs_before += bitmap_weight(
						slaves_pport_actv.slaves,
						dev->num_vfs + 1);
 			}

-			if (slave_gid < num_slaves_before) {
-				bitmap_zero(exclusive_ports.ports, dev->caps.num_ports);
-				set_bit(port - 1, exclusive_ports.ports);
-				slaves_pport_actv =
-					mlx4_phys_to_slaves_pport_actv(
-							dev, &exclusive_ports);
-				slave_gid += bitmap_weight(
-						slaves_pport_actv.slaves,
-						dev->num_vfs + 1) -
-						num_slaves_before;
-			}
-			actv_ports = mlx4_get_active_ports(dev, slave_gid);
+			/* candidate_slave_gid isn't necessarily the correct slave, but
+			 * it has the same number of ports and is assigned to the same
+			 * ports as the real slave we're looking for. On dual port VF,
+			 * slave_gid = [single port VFs on port <port>] +
+			 * [offset of the current slave from the first dual port VF] +
+			 * 1 (for the PF).
+			 */
+			candidate_slave_gid = slave_gid + num_vfs_before;
+
+			actv_ports = mlx4_get_active_ports(dev, candidate_slave_gid);
 			max_port_p_one = find_first_bit(
				actv_ports.ports, dev->caps.num_ports) +
				bitmap_weight(actv_ports.ports,
					      dev->caps.num_ports) + 1;

+			/* Calculate the real slave number */
 			for (i = 1; i < max_port_p_one; i++) {
 				if (i == port)
 					continue;
···
 			segs = nskb;
 		}
 	} else {
+		/* If we receive a partial checksum and the tap side
+		 * doesn't support checksum offload, compute the checksum.
+		 * Note: it doesn't matter which checksum feature to
+		 * check, we either support them all or none.
+		 */
+		if (skb->ip_summed == CHECKSUM_PARTIAL &&
+		    !(features & NETIF_F_ALL_CSUM) &&
+		    skb_checksum_help(skb))
+			goto drop;
 		skb_queue_tail(&q->sk.sk_receive_queue, skb);
 	}
···
			break;

		if (phydev->link) {
+			if (AUTONEG_ENABLE == phydev->autoneg) {
+				err = phy_aneg_done(phydev);
+				if (err < 0)
+					break;
+
+				if (!err) {
+					phydev->state = PHY_AN;
+					phydev->link_timeout = PHY_AN_TIMEOUT;
+					break;
+				}
+			}
			phydev->state = PHY_RUNNING;
			netif_carrier_on(phydev->attached_dev);
			phydev->adjust_link(phydev->attached_dev);
+3-3
drivers/net/slip/slip.c
···
 	if (!sl || sl->magic != SLIP_MAGIC || !netif_running(sl->dev))
 		return;
 
-	spin_lock(&sl->lock);
+	spin_lock_bh(&sl->lock);
 	if (sl->xleft <= 0) {
 		/* Now serial buffer is almost free & we can start
 		 * transmission of another packet */
 		sl->dev->stats.tx_packets++;
 		clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
-		spin_unlock(&sl->lock);
+		spin_unlock_bh(&sl->lock);
 		sl_unlock(sl);
 		return;
 	}
···
 	actual = tty->ops->write(tty, sl->xhead, sl->xleft);
 	sl->xleft -= actual;
 	sl->xhead += actual;
-	spin_unlock(&sl->lock);
+	spin_unlock_bh(&sl->lock);
 }
 
 static void sl_tx_timeout(struct net_device *dev)
+2
drivers/net/team/team.c
···
 	case NETDEV_UP:
 		if (netif_carrier_ok(dev))
 			team_port_change_check(port, true);
+		break;
 	case NETDEV_DOWN:
 		team_port_change_check(port, false);
+		break;
 	case NETDEV_CHANGE:
 		if (netif_running(port->dev))
 			team_port_change_check(port,
+1-1
drivers/net/usb/cdc_ncm.c
···
 	    skb_out->len > CDC_NCM_MIN_TX_PKT)
 		memset(skb_put(skb_out, ctx->tx_max - skb_out->len), 0,
 		       ctx->tx_max - skb_out->len);
-	else if ((skb_out->len % dev->maxpacket) == 0)
+	else if (skb_out->len < ctx->tx_max && (skb_out->len % dev->maxpacket) == 0)
 		*skb_put(skb_out, 1) = 0;	/* force short packet */
 
 	/* set final frame length */
···
 
 	ci = core->chip;
 
-	/* if core is already in reset, just return */
+	/* if core is already in reset, skip reset */
 	regdata = ci->ops->read32(ci->ctx, core->wrapbase + BCMA_RESET_CTL);
 	if ((regdata & BCMA_RESET_CTL_RESET) != 0)
-		return;
+		goto in_reset_configure;
 
 	/* configure reset */
 	ci->ops->write32(ci->ctx, core->wrapbase + BCMA_IOCTL,
···
 	SPINWAIT(ci->ops->read32(ci->ctx, core->wrapbase + BCMA_RESET_CTL) !=
 		 BCMA_RESET_CTL_RESET, 300);
 
+in_reset_configure:
 	/* in-reset configure */
 	ci->ops->write32(ci->ctx, core->wrapbase + BCMA_IOCTL,
 			 reset | BCMA_IOCTL_FGC | BCMA_IOCTL_CLK);
+12-10
drivers/net/wireless/rt2x00/rt2x00mac.c
···
 				  bss_conf->bssid);
 
 	/*
-	 * Update the beacon. This is only required on USB devices. PCI
-	 * devices fetch beacons periodically.
-	 */
-	if (changes & BSS_CHANGED_BEACON && rt2x00_is_usb(rt2x00dev))
-		rt2x00queue_update_beacon(rt2x00dev, vif);
-
-	/*
 	 * Start/stop beaconing.
 	 */
 	if (changes & BSS_CHANGED_BEACON_ENABLED) {
 		if (!bss_conf->enable_beacon && intf->enable_beacon) {
-			rt2x00queue_clear_beacon(rt2x00dev, vif);
 			rt2x00dev->intf_beaconing--;
 			intf->enable_beacon = false;
+			/*
+			 * Clear beacon in the H/W for this vif. This is needed
+			 * to disable beaconing on this particular interface
+			 * and keep it running on other interfaces.
+			 */
+			rt2x00queue_clear_beacon(rt2x00dev, vif);
 
 			if (rt2x00dev->intf_beaconing == 0) {
 				/*
···
 				rt2x00queue_stop_queue(rt2x00dev->bcn);
 				mutex_unlock(&intf->beacon_skb_mutex);
 			}
-
-
 		} else if (bss_conf->enable_beacon && !intf->enable_beacon) {
 			rt2x00dev->intf_beaconing++;
 			intf->enable_beacon = true;
+			/*
+			 * Upload beacon to the H/W. This is only required on
+			 * USB devices. PCI devices fetch beacons periodically.
+			 */
+			if (rt2x00_is_usb(rt2x00dev))
+				rt2x00queue_update_beacon(rt2x00dev, vif);
 
 			if (rt2x00dev->intf_beaconing == 1) {
 				/*
···
 	err = _rtl92cu_init_mac(hw);
 	if (err) {
 		RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "init mac failed!\n");
-		return err;
+		goto exit;
 	}
 	err = rtl92c_download_fw(hw);
 	if (err) {
+6
drivers/net/wireless/rtlwifi/rtl8192se/trx.c
···
 	if (ieee80211_is_nullfunc(fc))
 		return QSLT_HIGH;
 
+	/* Kernel commit 1bf4bbb4024dcdab changed EAPOL packets to use
+	 * queue V0 at priority 7; however, the RTL8192SE appears to have
+	 * that queue at priority 6
+	 */
+	if (skb->priority == 7)
+		return QSLT_VO;
 	return skb->priority;
 }
+27-1
drivers/of/irq.c
···
 
 	memset(r, 0, sizeof(*r));
 	/*
-	 * Get optional "interrupts-names" property to add a name
+	 * Get optional "interrupt-names" property to add a name
 	 * to the resource.
 	 */
 	of_property_read_string_index(dev, "interrupt-names", index,
···
 	return irq;
 }
 EXPORT_SYMBOL_GPL(of_irq_to_resource);
+
+/**
+ * of_irq_get - Decode a node's IRQ and return it as a Linux irq number
+ * @dev: pointer to device tree node
+ * @index: zero-based index of the irq
+ *
+ * Returns Linux irq number on success, or -EPROBE_DEFER if the irq domain
+ * is not yet created.
+ *
+ */
+int of_irq_get(struct device_node *dev, int index)
+{
+	int rc;
+	struct of_phandle_args oirq;
+	struct irq_domain *domain;
+
+	rc = of_irq_parse_one(dev, index, &oirq);
+	if (rc)
+		return rc;
+
+	domain = irq_find_host(oirq.np);
+	if (!domain)
+		return -EPROBE_DEFER;
+
+	return irq_create_of_mapping(&oirq);
+}
 
 /**
  * of_irq_count - Count the number of IRQs a node uses
+3-1
drivers/of/platform.c
···
 			rc = of_address_to_resource(np, i, res);
 			WARN_ON(rc);
 		}
-		WARN_ON(of_irq_to_resource_table(np, res, num_irq) != num_irq);
+		if (of_irq_to_resource_table(np, res, num_irq) != num_irq)
+			pr_debug("not all legacy IRQ resources mapped for %s\n",
+				 np->name);
 	}
 
 	dev->dev.of_node = of_node_get(np);
+32
drivers/of/selftest.c
···
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/of_irq.h>
+#include <linux/of_platform.h>
 #include <linux/list.h>
 #include <linux/mutex.h>
 #include <linux/slab.h>
···
 	}
 }
 
+static void __init of_selftest_platform_populate(void)
+{
+	int irq;
+	struct device_node *np;
+	struct platform_device *pdev;
+
+	np = of_find_node_by_path("/testcase-data");
+	of_platform_populate(np, of_default_bus_match_table, NULL, NULL);
+
+	/* Test that a missing irq domain returns -EPROBE_DEFER */
+	np = of_find_node_by_path("/testcase-data/testcase-device1");
+	pdev = of_find_device_by_node(np);
+	if (!pdev)
+		selftest(0, "device 1 creation failed\n");
+	irq = platform_get_irq(pdev, 0);
+	if (irq != -EPROBE_DEFER)
+		selftest(0, "device deferred probe failed - %d\n", irq);
+
+	/* Test that a parsing failure does not return -EPROBE_DEFER */
+	np = of_find_node_by_path("/testcase-data/testcase-device2");
+	pdev = of_find_device_by_node(np);
+	if (!pdev)
+		selftest(0, "device 2 creation failed\n");
+	irq = platform_get_irq(pdev, 0);
+	if (irq >= 0 || irq == -EPROBE_DEFER)
+		selftest(0, "device parsing error failed - %d\n", irq);
+
+	selftest(1, "passed");
+}
+
 static int __init of_selftest(void)
 {
 	struct device_node *np;
···
 	of_selftest_parse_interrupts();
 	of_selftest_parse_interrupts_extended();
 	of_selftest_match_node();
+	of_selftest_platform_populate();
 	pr_info("end of selftest - %i passed, %i failed\n",
 		selftest_results.passed, selftest_results.failed);
 	return 0;
···
 };
 
 struct as3722_gpio_pin_control {
-	bool enable_gpio_invert;
 	unsigned mode_prop;
 	int io_function;
 };
···
 		return mode;
 	}
 
-	if (as_pci->gpio_control[offset].enable_gpio_invert)
-		mode |= AS3722_GPIO_INV;
-
-	return as3722_write(as3722, AS3722_GPIOn_CONTROL_REG(offset), mode);
+	return as3722_update_bits(as3722, AS3722_GPIOn_CONTROL_REG(offset),
+				AS3722_GPIO_MODE_MASK, mode);
 }
 
 static const struct pinmux_ops as3722_pinmux_ops = {
···
 {
 	struct as3722_pctrl_info *as_pci = to_as_pci(chip);
 	struct as3722 *as3722 = as_pci->as3722;
-	int en_invert = as_pci->gpio_control[offset].enable_gpio_invert;
+	int en_invert;
 	u32 val;
 	int ret;
+
+	ret = as3722_read(as3722, AS3722_GPIOn_CONTROL_REG(offset), &val);
+	if (ret < 0) {
+		dev_err(as_pci->dev,
+			"GPIO_CONTROL%d_REG read failed: %d\n", offset, ret);
+		return;
+	}
+	en_invert = !!(val & AS3722_GPIO_INV);
 
 	if (value)
 		val = (en_invert) ? 0 : AS3722_GPIOn_SIGNAL(offset);
+13
drivers/pinctrl/pinctrl-single.c
···
 static int pcs_add_pin(struct pcs_device *pcs, unsigned offset,
 		unsigned pin_pos)
 {
+	struct pcs_soc_data *pcs_soc = &pcs->socdata;
 	struct pinctrl_pin_desc *pin;
 	struct pcs_name *pn;
 	int i;
···
 		dev_err(pcs->dev, "too many pins, max %i\n",
 			pcs->desc.npins);
 		return -ENOMEM;
+	}
+
+	if (pcs_soc->irq_enable_mask) {
+		unsigned val;
+
+		val = pcs->read(pcs->base + offset);
+		if (val & pcs_soc->irq_enable_mask) {
+			dev_dbg(pcs->dev, "irq enabled at boot for pin at %lx (%x), clearing\n",
+				(unsigned long)pcs->res->start + offset, val);
+			val &= ~pcs_soc->irq_enable_mask;
+			pcs->write(val, pcs->base + offset);
+		}
 	}
 
 	pin = &pcs->pins.pa[i];
+1-2
drivers/pinctrl/pinctrl-tb10x.c
···
 	 */
 	for (i = 0; i < state->pinfuncgrpcnt; i++) {
 		const struct tb10x_pinfuncgrp *pfg = &state->pingroups[i];
-		unsigned int port = pfg->port;
 		unsigned int mode = pfg->mode;
-		int j;
+		int j, port = pfg->port;
 
 		/*
 		 * Skip pin groups which are always mapped and don't need
···
 {
 	struct acpi_device *acpi_dev;
 	acpi_handle handle;
-	struct acpi_buffer buffer;
-	int ret;
+	int ret = 0;
 
 	pnp_dbg(&dev->dev, "set resources\n");
···
 	if (WARN_ON_ONCE(acpi_dev != dev->data))
 		dev->data = acpi_dev;
 
-	ret = pnpacpi_build_resource_template(dev, &buffer);
-	if (ret)
-		return ret;
-	ret = pnpacpi_encode_resources(dev, &buffer);
-	if (ret) {
+	if (acpi_has_method(handle, METHOD_NAME__SRS)) {
+		struct acpi_buffer buffer;
+
+		ret = pnpacpi_build_resource_template(dev, &buffer);
+		if (ret)
+			return ret;
+
+		ret = pnpacpi_encode_resources(dev, &buffer);
+		if (!ret) {
+			acpi_status status;
+
+			status = acpi_set_current_resources(handle, &buffer);
+			if (ACPI_FAILURE(status))
+				ret = -EIO;
+		}
 		kfree(buffer.pointer);
-		return ret;
 	}
-	if (ACPI_FAILURE(acpi_set_current_resources(handle, &buffer)))
-		ret = -EINVAL;
-	else if (acpi_bus_power_manageable(handle))
+	if (!ret && acpi_bus_power_manageable(handle))
 		ret = acpi_bus_set_power(handle, ACPI_STATE_D0);
-	kfree(buffer.pointer);
+
 	return ret;
 }
···
 {
 	struct acpi_device *acpi_dev;
 	acpi_handle handle;
-	int ret;
+	acpi_status status;
 
 	dev_dbg(&dev->dev, "disable resources\n");
···
 	}
 
 	/* acpi_unregister_gsi(pnp_irq(dev, 0)); */
-	ret = 0;
 	if (acpi_bus_power_manageable(handle))
 		acpi_bus_set_power(handle, ACPI_STATE_D3_COLD);
-	/* continue even if acpi_bus_set_power() fails */
-	if (ACPI_FAILURE(acpi_evaluate_object(handle, "_DIS", NULL, NULL)))
-		ret = -ENODEV;
-	return ret;
+
+	/* continue even if acpi_bus_set_power() fails */
+	status = acpi_evaluate_object(handle, "_DIS", NULL, NULL);
+	if (ACPI_FAILURE(status) && status != AE_NOT_FOUND)
+		return -ENODEV;
+
+	return 0;
 }
 
 #ifdef CONFIG_ACPI_SLEEP
···
 			goto next_msg;
 		}
 
-		if (!capable(CAP_SYS_ADMIN)) {
+		if (!netlink_capable(skb, CAP_SYS_ADMIN)) {
 			err = -EPERM;
 			goto next_msg;
 		}
+5-1
drivers/scsi/virtio_scsi.c
···
 
 		vscsi->affinity_hint_set = true;
 	} else {
-		for (i = 0; i < vscsi->num_queues; i++)
+		for (i = 0; i < vscsi->num_queues; i++) {
+			if (!vscsi->req_vqs[i].vq)
+				continue;
+
 			virtqueue_set_affinity(vscsi->req_vqs[i].vq, -1);
+		}
 
 		vscsi->affinity_hint_set = false;
 	}
+2-1
drivers/staging/iio/resolver/ad2s1200.c
···
 	int pn, ret = 0;
 	unsigned short *pins = spi->dev.platform_data;
 
-	for (pn = 0; pn < AD2S1200_PN; pn++)
+	for (pn = 0; pn < AD2S1200_PN; pn++) {
 		ret = devm_gpio_request_one(&spi->dev, pins[pn], GPIOF_DIR_OUT,
 					    DRV_NAME);
 		if (ret) {
···
 				pins[pn]);
 			return ret;
 		}
+	}
 	indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st));
 	if (!indio_dev)
 		return -ENOMEM;
+1-1
drivers/tty/hvc/hvc_console.c
···
 	return hvc_driver;
 }
 
-static int __init hvc_console_setup(struct console *co, char *options)
+static int hvc_console_setup(struct console *co, char *options)
 {
 	if (co->index < 0 || co->index >= MAX_NR_HVC_CONSOLES)
 		return -ENODEV;
+4
drivers/tty/n_tty.c
···
 		if (tty->ops->flush_chars)
 			tty->ops->flush_chars(tty);
 	} else {
+		struct n_tty_data *ldata = tty->disc_data;
+
 		while (nr > 0) {
+			mutex_lock(&ldata->output_lock);
 			c = tty->ops->write(tty, b, nr);
+			mutex_unlock(&ldata->output_lock);
 			if (c < 0) {
 				retval = c;
 				goto break_out;
···
 	if (change || left < size) {
 		/* This is the slow path - looking for new buffers to use */
 		if ((n = tty_buffer_alloc(port, size)) != NULL) {
-			unsigned long iflags;
-
 			n->flags = flags;
 			buf->tail = n;
-
-			spin_lock_irqsave(&buf->flush_lock, iflags);
 			b->commit = b->used;
+			/* paired w/ barrier in flush_to_ldisc(); ensures the
+			 * latest commit value can be read before the head is
+			 * advanced to the next buffer
+			 */
+			smp_wmb();
 			b->next = n;
-			spin_unlock_irqrestore(&buf->flush_lock, iflags);
-
 		} else if (change)
 			size = 0;
 		else
···
 	mutex_lock(&buf->lock);
 
 	while (1) {
-		unsigned long flags;
 		struct tty_buffer *head = buf->head;
+		struct tty_buffer *next;
 		int count;
 
 		/* Ldisc or user is trying to gain exclusive access */
 		if (atomic_read(&buf->priority))
 			break;
 
-		spin_lock_irqsave(&buf->flush_lock, flags);
+		next = head->next;
+		/* paired w/ barrier in __tty_buffer_request_room();
+		 * ensures commit value read is not stale if the head
+		 * is advancing to the next buffer
+		 */
+		smp_rmb();
 		count = head->commit - head->read;
 		if (!count) {
-			if (head->next == NULL) {
-				spin_unlock_irqrestore(&buf->flush_lock, flags);
+			if (next == NULL)
 				break;
-			}
-			buf->head = head->next;
-			spin_unlock_irqrestore(&buf->flush_lock, flags);
+			buf->head = next;
 			tty_buffer_free(port, head);
 			continue;
 		}
-		spin_unlock_irqrestore(&buf->flush_lock, flags);
 
 		count = receive_buf(tty, head, count);
 		if (!count)
···
 	struct tty_bufhead *buf = &port->buf;
 
 	mutex_init(&buf->lock);
-	spin_lock_init(&buf->flush_lock);
 	tty_buffer_reset(&buf->sentinel, 0);
 	buf->head = &buf->sentinel;
 	buf->tail = &buf->sentinel;
-10
drivers/usb/gadget/at91_udc.c
···
 		return -ENODEV;
 	}
 
-	if (pdev->num_resources != 2) {
-		DBG("invalid num_resources\n");
-		return -ENODEV;
-	}
-	if ((pdev->resource[0].flags != IORESOURCE_MEM)
-			|| (pdev->resource[1].flags != IORESOURCE_IRQ)) {
-		DBG("invalid resource type\n");
-		return -ENODEV;
-	}
-
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	if (!res)
 		return -ENXIO;
+2-1
drivers/usb/host/ehci-fsl.c
···
 		break;
 	}
 
-	if (pdata->have_sysif_regs && pdata->controller_ver &&
+	if (pdata->have_sysif_regs &&
+	    pdata->controller_ver > FSL_USB_VER_1_6 &&
 	    (phy_mode == FSL_USB2_PHY_ULPI)) {
 		/* check PHY_CLK_VALID to get phy clk valid */
 		if (!(spin_event_timeout(in_be32(non_ehci + FSL_SOC_USB_CTRL) &
+18
drivers/usb/host/ohci-hub.c
···
 	dl_done_list (ohci);
 	finish_unlinks (ohci, ohci_frame_no(ohci));
 
+	/*
+	 * Some controllers don't handle "global" suspend properly if
+	 * there are unsuspended ports. For these controllers, put all
+	 * the enabled ports into suspend before suspending the root hub.
+	 */
+	if (ohci->flags & OHCI_QUIRK_GLOBAL_SUSPEND) {
+		__hc32 __iomem	*portstat = ohci->regs->roothub.portstatus;
+		int		i;
+		unsigned	temp;
+
+		for (i = 0; i < ohci->num_ports; (++i, ++portstat)) {
+			temp = ohci_readl(ohci, portstat);
+			if ((temp & (RH_PS_PES | RH_PS_PSS)) ==
+					RH_PS_PES)
+				ohci_writel(ohci, RH_PS_PSS, portstat);
+		}
+	}
+
 	/* maybe resume can wake root hub */
 	if (ohci_to_hcd(ohci)->self.root_hub->do_remote_wakeup || autostop) {
 		ohci->hc_control |= OHCI_CTRL_RWE;
···
 #define	OHCI_QUIRK_HUB_POWER	0x100			/* distrust firmware power/oc setup */
 #define	OHCI_QUIRK_AMD_PLL	0x200			/* AMD PLL quirk*/
 #define	OHCI_QUIRK_AMD_PREFETCH	0x400			/* pre-fetch for ISO transfer */
+#define	OHCI_QUIRK_GLOBAL_SUSPEND	0x800		/* must suspend ports */
+
 	// there are also chip quirks/bugs in init logic
 
 	struct work_struct	nec_work;	/* Worker for NEC quirk */
+5-4
drivers/usb/phy/phy-fsm-usb.c
···
 		otg_set_state(fsm, OTG_STATE_A_WAIT_VRISE);
 		break;
 	case OTG_STATE_A_WAIT_VRISE:
-		if (fsm->id || fsm->a_bus_drop || fsm->a_vbus_vld ||
-				fsm->a_wait_vrise_tmout) {
+		if (fsm->a_vbus_vld)
 			otg_set_state(fsm, OTG_STATE_A_WAIT_BCON);
-		}
+		else if (fsm->id || fsm->a_bus_drop ||
+				fsm->a_wait_vrise_tmout)
+			otg_set_state(fsm, OTG_STATE_A_WAIT_VFALL);
 		break;
 	case OTG_STATE_A_WAIT_BCON:
 		if (!fsm->a_vbus_vld)
 			otg_set_state(fsm, OTG_STATE_A_VBUS_ERR);
 		else if (fsm->b_conn)
 			otg_set_state(fsm, OTG_STATE_A_HOST);
-		else if (fsm->id | fsm->a_bus_drop | fsm->a_wait_bcon_tmout)
+		else if (fsm->id || fsm->a_bus_drop || fsm->a_wait_bcon_tmout)
 			otg_set_state(fsm, OTG_STATE_A_WAIT_VFALL);
 		break;
 	case OTG_STATE_A_HOST:
···
 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
 		US_FL_MAX_SECTORS_64 ),
 
+/* Reported by Daniele Forsi <dforsi@gmail.com> */
+UNUSUAL_DEV(  0x0421, 0x04b9, 0x0350, 0x0350,
+		"Nokia",
+		"5300",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_MAX_SECTORS_64 ),
+
+/* Patch submitted by Victor A. Santos <victoraur.santos@gmail.com> */
+UNUSUAL_DEV(  0x0421, 0x05af, 0x0742, 0x0742,
+		"Nokia",
+		"305",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_MAX_SECTORS_64),
+
 /* Patch submitted by Mikhail Zolotaryov <lebon@lebon.org.ua> */
 UNUSUAL_DEV(  0x0421, 0x06aa, 0x1110, 0x1110,
 		"Nokia",
-2
fs/affs/super.c
···
 			   &blocksize, &sbi->s_prefix,
 			   sbi->s_volume, &mount_flags)) {
 		printk(KERN_ERR "AFFS: Error parsing options\n");
-		kfree(sbi->s_prefix);
-		kfree(sbi);
 		return -EINVAL;
 	}
 	/* N.B. after this point s_prefix must be released */
+34-8
fs/aio.c
···
 
 	struct work_struct	free_work;
 
+	/*
+	 * signals when all in-flight requests are done
+	 */
+	struct completion *requests_done;
+
 	struct {
 		/*
 		 * This counts the number of available slots in the ringbuffer,
···
 {
 	struct kioctx *ctx = container_of(ref, struct kioctx, reqs);
 
+	/* At this point we know that there are no any in-flight requests */
+	if (ctx->requests_done)
+		complete(ctx->requests_done);
+
 	INIT_WORK(&ctx->free_work, free_ioctx);
 	schedule_work(&ctx->free_work);
 }
···
  * when the processes owning a context have all exited to encourage
  * the rapid destruction of the kioctx.
  */
-static void kill_ioctx(struct mm_struct *mm, struct kioctx *ctx)
+static void kill_ioctx(struct mm_struct *mm, struct kioctx *ctx,
+		struct completion *requests_done)
 {
 	if (!atomic_xchg(&ctx->dead, 1)) {
 		struct kioctx_table *table;
···
 		if (ctx->mmap_size)
 			vm_munmap(ctx->mmap_base, ctx->mmap_size);
 
+		ctx->requests_done = requests_done;
 		percpu_ref_kill(&ctx->users);
+	} else {
+		if (requests_done)
+			complete(requests_done);
 	}
 }
···
 		 */
 		ctx->mmap_size = 0;
 
-		kill_ioctx(mm, ctx);
+		kill_ioctx(mm, ctx, NULL);
 	}
 }
···
 	if (!IS_ERR(ioctx)) {
 		ret = put_user(ioctx->user_id, ctxp);
 		if (ret)
-			kill_ioctx(current->mm, ioctx);
+			kill_ioctx(current->mm, ioctx, NULL);
 		percpu_ref_put(&ioctx->users);
 	}
···
 {
 	struct kioctx *ioctx = lookup_ioctx(ctx);
 	if (likely(NULL != ioctx)) {
-		kill_ioctx(current->mm, ioctx);
+		struct completion requests_done =
+			COMPLETION_INITIALIZER_ONSTACK(requests_done);
+
+		/* Pass requests_done to kill_ioctx() where it can be set
+		 * in a thread-safe way. If we try to set it here then we have
+		 * a race condition if two io_destroy() called simultaneously.
+		 */
+		kill_ioctx(current->mm, ioctx, &requests_done);
 		percpu_ref_put(&ioctx->users);
+
+		/* Wait until all IO for the context are done. Otherwise kernel
+		 * keep using user-space buffers even if user thinks the context
+		 * is destroyed.
+		 */
+		wait_for_completion(&requests_done);
+
 		return 0;
 	}
 	pr_debug("EINVAL: io_destroy: invalid context id\n");
···
 					&iovec, compat)
 		: aio_setup_single_vector(req, rw, buf, &nr_segs,
 					  iovec);
-	if (ret)
-		return ret;
-
-	ret = rw_verify_area(rw, file, &req->ki_pos, req->ki_nbytes);
+	if (!ret)
+		ret = rw_verify_area(rw, file, &req->ki_pos, req->ki_nbytes);
 	if (ret < 0) {
 		if (iovec != &inline_vec)
 			kfree(iovec);
+2-2
fs/autofs4/root.c
···
 		spin_lock(&active->d_lock);
 
 		/* Already gone? */
-		if (!d_count(active))
+		if ((int) d_count(active) <= 0)
 			goto next;
 
 		qstr = &active->d_name;
···
 
 		spin_lock(&expiring->d_lock);
 
-		/* Bad luck, we've already been dentry_iput */
+		/* We've already been dentry_iput or unlinked */
 		if (!expiring->d_inode)
 			goto next;
···
 		return PTR_ERR(req);
 	req->r_inode = inode;
 	ihold(inode);
+	req->r_num_caps = 1;
 
 	/* mds requires start and length rather than start and end */
 	if (LLONG_MAX == fl->fl_end)
-1
fs/ceph/super.h
···
 	struct timespec i_rctime;
 	u64 i_rbytes, i_rfiles, i_rsubdirs;
 	u64 i_files, i_subdirs;
-	u64 i_max_offset;  /* largest readdir offset, set with complete dir */
 
 	struct rb_root i_fragtree;
 	struct mutex i_fragtree_mutex;
+102-216
fs/dcache.c
···
 	kmem_cache_free(dentry_cache, dentry); 
 }
 
-/*
- * no locks, please.
- */
-static void d_free(struct dentry *dentry)
+static void dentry_free(struct dentry *dentry)
 {
-	BUG_ON((int)dentry->d_lockref.count > 0);
-	this_cpu_dec(nr_dentry);
-	if (dentry->d_op && dentry->d_op->d_release)
-		dentry->d_op->d_release(dentry);
-
 	/* if dentry was never visible to RCU, immediate free is OK */
 	if (!(dentry->d_flags & DCACHE_RCUACCESS))
 		__d_free(&dentry->d_u.d_rcu);
···
 	d_lru_add(dentry);
 }
 
-/*
- * Remove a dentry with references from the LRU.
- *
- * If we are on the shrink list, then we can get to try_prune_one_dentry() and
- * lose our last reference through the parent walk. In this case, we need to
- * remove ourselves from the shrink list, not the LRU.
- */
-static void dentry_lru_del(struct dentry *dentry)
-{
-	if (dentry->d_flags & DCACHE_LRU_LIST) {
-		if (dentry->d_flags & DCACHE_SHRINK_LIST)
-			return d_shrink_del(dentry);
-		d_lru_del(dentry);
-	}
-}
-
-/**
- * d_kill - kill dentry and return parent
- * @dentry: dentry to kill
- * @parent: parent dentry
- *
- * The dentry must already be unhashed and removed from the LRU.
- *
- * If this is the root of the dentry tree, return NULL.
- *
- * dentry->d_lock and parent->d_lock must be held by caller, and are dropped by
- * d_kill.
- */
-static struct dentry *d_kill(struct dentry *dentry, struct dentry *parent)
-	__releases(dentry->d_lock)
-	__releases(parent->d_lock)
-	__releases(dentry->d_inode->i_lock)
-{
-	list_del(&dentry->d_u.d_child);
-	/*
-	 * Inform d_walk() that we are no longer attached to the
-	 * dentry tree
-	 */
-	dentry->d_flags |= DCACHE_DENTRY_KILLED;
-	if (parent)
-		spin_unlock(&parent->d_lock);
-	dentry_iput(dentry);
-	/*
-	 * dentry_iput drops the locks, at which point nobody (except
-	 * transient RCU lookups) can reach this dentry.
-	 */
-	d_free(dentry);
-	return parent;
-}
-
 /**
  * d_drop - drop a dentry
  * @dentry: dentry to drop
···
 	__releases(dentry->d_lock)
 {
 	struct inode *inode;
-	struct dentry *parent;
+	struct dentry *parent = NULL;
+	bool can_free = true;
+
+	if (unlikely(dentry->d_flags & DCACHE_DENTRY_KILLED)) {
+		can_free = dentry->d_flags & DCACHE_MAY_FREE;
+		spin_unlock(&dentry->d_lock);
+		goto out;
+	}
 
 	inode = dentry->d_inode;
 	if (inode && !spin_trylock(&inode->i_lock)) {
···
 		}
 		return dentry; /* try again with same dentry */
 	}
-	if (IS_ROOT(dentry))
-		parent = NULL;
-	else
+	if (!IS_ROOT(dentry))
 		parent = dentry->d_parent;
 	if (parent && !spin_trylock(&parent->d_lock)) {
 		if (inode)
···
 	if ((dentry->d_flags & DCACHE_OP_PRUNE) && !d_unhashed(dentry))
 		dentry->d_op->d_prune(dentry);
 
-	dentry_lru_del(dentry);
+	if (dentry->d_flags & DCACHE_LRU_LIST) {
+		if (!(dentry->d_flags & DCACHE_SHRINK_LIST))
+			d_lru_del(dentry);
+	}
 	/* if it was on the hash then remove it */
 	__d_drop(dentry);
-	return d_kill(dentry, parent);
+	list_del(&dentry->d_u.d_child);
+	/*
+	 * Inform d_walk() that we are no longer attached to the
+	 * dentry tree
+	 */
+	dentry->d_flags |= DCACHE_DENTRY_KILLED;
+	if (parent)
+		spin_unlock(&parent->d_lock);
+	dentry_iput(dentry);
+	/*
+	 * dentry_iput drops the locks, at which point nobody (except
+	 * transient RCU lookups) can reach this dentry.
+	 */
+	BUG_ON((int)dentry->d_lockref.count > 0);
+	this_cpu_dec(nr_dentry);
+	if (dentry->d_op && dentry->d_op->d_release)
+		dentry->d_op->d_release(dentry);
+
+	spin_lock(&dentry->d_lock);
+	if (dentry->d_flags & DCACHE_SHRINK_LIST) {
+		dentry->d_flags |= DCACHE_MAY_FREE;
+		can_free = false;
+	}
+	spin_unlock(&dentry->d_lock);
+out:
+	if (likely(can_free))
+		dentry_free(dentry);
+	return parent;
 }
 
 /* 
···
 }
 EXPORT_SYMBOL(d_prune_aliases);
 
-/*
- * Try to throw away a dentry - free the inode, dput the parent.
- * Requires dentry->d_lock is held, and dentry->d_count == 0.
- * Releases dentry->d_lock.
- *
- * This may fail if locks cannot be acquired no problem, just try again.
- */
-static struct dentry * try_prune_one_dentry(struct dentry *dentry)
-	__releases(dentry->d_lock)
-{
-	struct dentry *parent;
-
-	parent = dentry_kill(dentry, 0);
-	/*
-	 * If dentry_kill returns NULL, we have nothing more to do.
-	 * if it returns the same dentry, trylocks failed. In either
-	 * case, just loop again.
-	 *
-	 * Otherwise, we need to prune ancestors too. This is necessary
-	 * to prevent quadratic behavior of shrink_dcache_parent(), but
-	 * is also expected to be beneficial in reducing dentry cache
-	 * fragmentation.
-	 */
-	if (!parent)
-		return NULL;
-	if (parent == dentry)
-		return dentry;
-
-	/* Prune ancestors. */
-	dentry = parent;
-	while (dentry) {
-		if (lockref_put_or_lock(&dentry->d_lockref))
-			return NULL;
-		dentry = dentry_kill(dentry, 1);
-	}
-	return NULL;
-}
-
 static void shrink_dentry_list(struct list_head *list)
 {
-	struct dentry *dentry;
+	struct dentry *dentry, *parent;
 
-	rcu_read_lock();
-	for (;;) {
-		dentry = list_entry_rcu(list->prev, struct dentry, d_lru);
-		if (&dentry->d_lru == list)
-			break; /* empty */
-
-		/*
-		 * Get the dentry lock, and re-verify that the dentry is
-		 * this on the shrinking list. If it is, we know that
-		 * DCACHE_SHRINK_LIST and DCACHE_LRU_LIST are set.
-		 */
+	while (!list_empty(list)) {
+		dentry = list_entry(list->prev, struct dentry, d_lru);
 		spin_lock(&dentry->d_lock);
-		if (dentry != list_entry(list->prev, struct dentry, d_lru)) {
-			spin_unlock(&dentry->d_lock);
-			continue;
-		}
-
 		/*
 		 * The dispose list is isolated and dentries are not accounted
 		 * to the LRU here, so we can simply remove it from the list
···
 		 * We found an inuse dentry which was not removed from
 		 * the LRU because of laziness during lookup. Do not free it.
 		 */
-		if (dentry->d_lockref.count) {
+		if ((int)dentry->d_lockref.count > 0) {
 			spin_unlock(&dentry->d_lock);
 			continue;
 		}
-		rcu_read_unlock();
 
+		parent = dentry_kill(dentry, 0);
 		/*
-		 * If 'try_to_prune()' returns a dentry, it will
-		 * be the same one we passed in, and d_lock will
-		 * have been held the whole time, so it will not
-		 * have been added to any other lists. We failed
-		 * to get the inode lock.
-		 *
-		 * We just add it back to the shrink list.
+		 * If dentry_kill returns NULL, we have nothing more to do.
 		 */
-		dentry = try_prune_one_dentry(dentry);
+		if (!parent)
+			continue;
 
-		rcu_read_lock();
-		if (dentry) {
+		if (unlikely(parent == dentry)) {
+			/*
+			 * trylocks have failed and d_lock has been held the
+			 * whole time, so it could not have been added to any
+			 * other lists. Just add it back to the shrink list.
+			 */
 			d_shrink_add(dentry, list);
 			spin_unlock(&dentry->d_lock);
+			continue;
 		}
+		/*
+		 * We need to prune ancestors too. This is necessary to prevent
+		 * quadratic behavior of shrink_dcache_parent(), but is also
+		 * expected to be beneficial in reducing dentry cache
+		 * fragmentation.
+		 */
+		dentry = parent;
+		while (dentry && !lockref_put_or_lock(&dentry->d_lockref))
+			dentry = dentry_kill(dentry, 1);
 	}
-	rcu_read_unlock();
 }
 
 static enum lru_status
···
 	if (data->start == dentry)
 		goto out;
 
-	/*
-	 * move only zero ref count dentries to the dispose list.
-	 *
-	 * Those which are presently on the shrink list, being processed
-	 * by shrink_dentry_list(), shouldn't be moved.  Otherwise the
-	 * loop in shrink_dcache_parent() might not make any progress
-	 * and loop forever.
-	 */
-	if (dentry->d_lockref.count) {
-		dentry_lru_del(dentry);
-	} else if (!(dentry->d_flags & DCACHE_SHRINK_LIST)) {
-		/*
-		 * We can't use d_lru_shrink_move() because we
-		 * need to get the global LRU lock and do the
-		 * LRU accounting.
-		 */
-		d_lru_del(dentry);
-		d_shrink_add(dentry, &data->dispose);
+	if (dentry->d_flags & DCACHE_SHRINK_LIST) {
 		data->found++;
-		ret = D_WALK_NORETRY;
+	} else {
+		if (dentry->d_flags & DCACHE_LRU_LIST)
+			d_lru_del(dentry);
+		if (!dentry->d_lockref.count) {
+			d_shrink_add(dentry, &data->dispose);
+			data->found++;
+		}
 	}
 	/*
 	 * We can return to the caller if we have found some (this
 	 * ensures forward progress). We'll be coming back to find
 	 * the rest.
 	 */
-	if (data->found && need_resched())
-		ret = D_WALK_QUIT;
+	if (!list_empty(&data->dispose))
+		ret = need_resched() ? D_WALK_QUIT : D_WALK_NORETRY;
out:
 	return ret;
 }
···
 }
 EXPORT_SYMBOL(shrink_dcache_parent);
 
-static enum d_walk_ret umount_collect(void *_data, struct dentry *dentry)
+static enum d_walk_ret umount_check(void *_data, struct dentry *dentry)
 {
-	struct select_data *data = _data;
-	enum d_walk_ret ret = D_WALK_CONTINUE;
+	/* it has busy descendents; complain about those instead */
+	if (!list_empty(&dentry->d_subdirs))
+		return D_WALK_CONTINUE;
 
-	if (dentry->d_lockref.count) {
-		dentry_lru_del(dentry);
-		if (likely(!list_empty(&dentry->d_subdirs)))
-			goto out;
-		if (dentry == data->start && dentry->d_lockref.count == 1)
-			goto out;
-		printk(KERN_ERR
-		       "BUG: Dentry %p{i=%lx,n=%s}"
-		       " still in use (%d)"
-		       " [unmount of %s %s]\n",
+	/* root with refcount 1 is fine */
+	if (dentry == _data && dentry->d_lockref.count == 1)
+		return D_WALK_CONTINUE;
+
+	printk(KERN_ERR "BUG: Dentry %p{i=%lx,n=%pd} "
+			" still in use (%d) [unmount of %s %s]\n",
 		       dentry,
 		       dentry->d_inode ?
 		       dentry->d_inode->i_ino : 0UL,
-		       dentry->d_name.name,
+		       dentry,
 		       dentry->d_lockref.count,
 		       dentry->d_sb->s_type->name,
 		       dentry->d_sb->s_id);
-		BUG();
-	} else if (!(dentry->d_flags & DCACHE_SHRINK_LIST)) {
-		/*
-		 * We can't use d_lru_shrink_move() because we
-		 * need to get the global LRU lock and do the
-		 * LRU accounting.
-		 */
-		if (dentry->d_flags & DCACHE_LRU_LIST)
-			d_lru_del(dentry);
-		d_shrink_add(dentry, &data->dispose);
-		data->found++;
-		ret = D_WALK_NORETRY;
-	}
-out:
-	if (data->found && need_resched())
-		ret = D_WALK_QUIT;
-	return ret;
+	WARN_ON(1);
+	return D_WALK_CONTINUE;
+}
+
+static void do_one_tree(struct dentry *dentry)
+{
+	shrink_dcache_parent(dentry);
+	d_walk(dentry, dentry, umount_check, NULL);
+	d_drop(dentry);
+	dput(dentry);
 }
 
 /*
···
 {
 	struct dentry *dentry;
 
-	if (down_read_trylock(&sb->s_umount))
-		BUG();
+	WARN(down_read_trylock(&sb->s_umount), "s_umount should've been locked");
 
 	dentry = sb->s_root;
 	sb->s_root = NULL;
-	for (;;) {
-		struct select_data data;
-
-		INIT_LIST_HEAD(&data.dispose);
-		data.start = dentry;
-		data.found = 0;
-
-		d_walk(dentry, &data, umount_collect, NULL);
-		if (!data.found)
-			break;
-
-		shrink_dentry_list(&data.dispose);
-		cond_resched();
-	}
-	d_drop(dentry);
-	dput(dentry);
+	do_one_tree(dentry);
 
 	while (!hlist_bl_empty(&sb->s_anon)) {
-		struct select_data data;
-		dentry = hlist_bl_entry(hlist_bl_first(&sb->s_anon), struct dentry, d_hash);
-
-		INIT_LIST_HEAD(&data.dispose);
-		data.start = NULL;
-		data.found = 0;
-
-		d_walk(dentry, &data, umount_collect, NULL);
-		if (data.found)
-			shrink_dentry_list(&data.dispose);
-		cond_resched();
+		dentry = dget(hlist_bl_entry(hlist_bl_first(&sb->s_anon), struct dentry, d_hash));
+		do_one_tree(dentry);
 	}
 }
···
 	unsigned add_flags = d_flags_for_inode(inode);
 
 	spin_lock(&dentry->d_lock);
-	dentry->d_flags &= ~DCACHE_ENTRY_TYPE;
-	dentry->d_flags |= add_flags;
+	__d_set_type(dentry, add_flags);
 	if (inode)
 		hlist_add_head(&dentry->d_alias, &inode->i_dentry);
 	dentry->d_inode = inode;
···
 	int error;
 	int i;

+	if (!hugepages_supported()) {
+		pr_info("hugetlbfs: disabling because there are no supported hugepage sizes\n");
+		return -ENOTSUPP;
+	}
+
 	error = bdi_init(&hugetlbfs_backing_dev_info);
 	if (error)
 		return error;
fs/namei.c | +3 -3
···
 		inode = path->dentry->d_inode;
 	}
 	err = -ENOENT;
-	if (!inode)
+	if (!inode || d_is_negative(path->dentry))
 		goto out_path_put;

 	if (should_follow_link(path->dentry, follow)) {
···
 	mutex_unlock(&dir->d_inode->i_mutex);

 done:
-	if (!dentry->d_inode) {
+	if (!dentry->d_inode || d_is_negative(dentry)) {
 		error = -ENOENT;
 		dput(dentry);
 		goto out;
···
 finish_lookup:
 	/* we _can_ be in RCU mode here */
 	error = -ENOENT;
-	if (d_is_negative(path->dentry)) {
+	if (!inode || d_is_negative(path->dentry)) {
 		path_to_nameidata(path, nd);
 		goto out;
 	}
···
 	umode_t mode = 0;
 	int not_equiv = 0;

+	/*
+	 * A null ACL can always be presented as mode bits.
+	 */
+	if (!acl)
+		return 0;
+
 	FOREACH_ACL_ENTRY(pa, acl, pe) {
 		switch (pa->e_tag) {
 		case ACL_USER_OBJ:
···
 	 * Out of line attribute, cannot double split, but
 	 * make room for the attribute value itself.
 	 */
-	uint	dblocks = XFS_B_TO_FSB(mp, valuelen);
+	uint	dblocks = xfs_attr3_rmt_blocks(mp, valuelen);
 	nblks += dblocks;
 	nblks += XFS_NEXTENTADD_SPACE_RES(mp, dblocks, XFS_ATTR_FORK);
 }
···

 	trace_xfs_attr_leaf_replace(args);

+	/* save the attribute state for later removal*/
 	args->op_flags |= XFS_DA_OP_RENAME;	/* an atomic rename */
 	args->blkno2 = args->blkno;		/* set 2nd entry info*/
 	args->index2 = args->index;
 	args->rmtblkno2 = args->rmtblkno;
 	args->rmtblkcnt2 = args->rmtblkcnt;
+	args->rmtvaluelen2 = args->rmtvaluelen;
+
+	/*
+	 * clear the remote attr state now that it is saved so that the
+	 * values reflect the state of the attribute we are about to
+	 * add, not the attribute we just found and will remove later.
+	 */
+	args->rmtblkno = 0;
+	args->rmtblkcnt = 0;
+	args->rmtvaluelen = 0;
 }

 /*
···
 	args->blkno = args->blkno2;
 	args->rmtblkno = args->rmtblkno2;
 	args->rmtblkcnt = args->rmtblkcnt2;
+	args->rmtvaluelen = args->rmtvaluelen2;
 	if (args->rmtblkno) {
 		error = xfs_attr_rmtval_remove(args);
 		if (error)
···
 	trace_xfs_attr_node_replace(args);

+	/* save the attribute state for later removal*/
 	args->op_flags |= XFS_DA_OP_RENAME;	/* atomic rename op */
 	args->blkno2 = args->blkno;		/* set 2nd entry info*/
 	args->index2 = args->index;
 	args->rmtblkno2 = args->rmtblkno;
 	args->rmtblkcnt2 = args->rmtblkcnt;
+	args->rmtvaluelen2 = args->rmtvaluelen;
+
+	/*
+	 * clear the remote attr state now that it is saved so that the
+	 * values reflect the state of the attribute we are about to
+	 * add, not the attribute we just found and will remove later.
+	 */
 	args->rmtblkno = 0;
 	args->rmtblkcnt = 0;
+	args->rmtvaluelen = 0;
 }

 retval = xfs_attr3_leaf_add(blk->bp, state->args);
···
 	args->blkno = args->blkno2;
 	args->rmtblkno = args->rmtblkno2;
 	args->rmtblkcnt = args->rmtblkcnt2;
+	args->rmtvaluelen = args->rmtvaluelen2;
 	if (args->rmtblkno) {
 		error = xfs_attr_rmtval_remove(args);
 		if (error)
···
 	struct xfs_buf	*bp;
 	xfs_dablk_t	lblkno = args->rmtblkno;
 	__uint8_t	*dst = args->value;
-	int		valuelen = args->valuelen;
+	int		valuelen;
 	int		nmap;
 	int		error;
 	int		blkcnt = args->rmtblkcnt;
···
 	trace_xfs_attr_rmtval_get(args);

 	ASSERT(!(args->flags & ATTR_KERNOVAL));
+	ASSERT(args->rmtvaluelen == args->valuelen);

+	valuelen = args->rmtvaluelen;
 	while (valuelen > 0) {
 		nmap = ATTR_RMTVALUE_MAPSIZE;
 		error = xfs_bmapi_read(args->dp, (xfs_fileoff_t)lblkno,
···
 	 * attributes have headers, we can't just do a straight byte to FSB
 	 * conversion and have to take the header space into account.
 	 */
-	blkcnt = xfs_attr3_rmt_blocks(mp, args->valuelen);
+	blkcnt = xfs_attr3_rmt_blocks(mp, args->rmtvaluelen);
 	error = xfs_bmap_first_unused(args->trans, args->dp, blkcnt, &lfileoff,
 						   XFS_ATTR_FORK);
 	if (error)
···
 	 */
 	lblkno = args->rmtblkno;
 	blkcnt = args->rmtblkcnt;
-	valuelen = args->valuelen;
+	valuelen = args->rmtvaluelen;
 	while (valuelen > 0) {
 		struct xfs_buf	*bp;
 		xfs_daddr_t	dblkno;
fs/xfs/xfs_da_btree.h | +2
···
 	int		index;		/* index of attr of interest in blk */
 	xfs_dablk_t	rmtblkno;	/* remote attr value starting blkno */
 	int		rmtblkcnt;	/* remote attr value block count */
+	int		rmtvaluelen;	/* remote attr value length in bytes */
 	xfs_dablk_t	blkno2;		/* blkno of 2nd attr leaf of interest */
 	int		index2;		/* index of 2nd attr in blk */
 	xfs_dablk_t	rmtblkno2;	/* remote attr value starting blkno */
 	int		rmtblkcnt2;	/* remote attr value block count */
+	int		rmtvaluelen2;	/* remote attr value length in bytes */
 	int		op_flags;	/* operation flags */
 	enum xfs_dacmp	cmpresult;	/* name compare result for lookups */
 } xfs_da_args_t;
···
 	 * write validation, we don't need to check feature masks.
 	 */
 	if (check_version && XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) {
-		xfs_alert(mp,
-"Version 5 superblock detected. This kernel has EXPERIMENTAL support enabled!\n"
-"Use of these features in this kernel is at your own risk!");
-
 		if (xfs_sb_has_compat_feature(sbp,
 					XFS_SB_FEAT_COMPAT_UNKNOWN)) {
 			xfs_warn(mp,
include/acpi/acpi_bus.h | +2 -1
···
 	u32 inrush_current:1;	/* Serialize Dx->D0 */
 	u32 power_removed:1;	/* Optimize Dx->D0 */
 	u32 ignore_parent:1;	/* Power is independent of parent power state */
-	u32 reserved:27;
+	u32 dsw_present:1;	/* _DSW present? */
+	u32 reserved:26;
 };

 struct acpi_device_power_state {
···
 #define DCACHE_SYMLINK_TYPE		0x00300000 /* Symlink */
 #define DCACHE_FILE_TYPE		0x00400000 /* Other file type */

+#define DCACHE_MAY_FREE			0x00800000
+
 extern seqlock_t rename_lock;

 static inline int dname_external(const struct dentry *dentry)
include/linux/ftrace.h | +2
···
 extern int ftrace_arch_read_dyn_info(char *buf, int size);

 extern int skip_trace(unsigned long ip);
+extern void ftrace_module_init(struct module *mod);

 extern void ftrace_disable_daemon(void);
 extern void ftrace_enable_daemon(void);
···
 static inline void ftrace_disable_daemon(void) { }
 static inline void ftrace_enable_daemon(void) { }
 static inline void ftrace_release_mod(struct module *mod) {}
+static inline void ftrace_module_init(struct module *mod) {}
 static inline __init int register_ftrace_command(struct ftrace_func_command *cmd)
 {
 	return -EINVAL;
include/linux/hugetlb.h | +10
···
 	return &mm->page_table_lock;
 }

+static inline bool hugepages_supported(void)
+{
+	/*
+	 * Some platform decide whether they support huge pages at boot
+	 * time. On these, such as powerpc, HPAGE_SHIFT is set to 0 when
+	 * there is no such support
+	 */
+	return HPAGE_SHIFT != 0;
+}
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 #define alloc_huge_page_node(h, nid) NULL
include/linux/interrupt.h | +2 -2
···
 /**
  * irq_set_affinity - Set the irq affinity of a given irq
  * @irq:	Interrupt to set affinity
- * @mask:	cpumask
+ * @cpumask:	cpumask
  *
  * Fails if cpumask does not contain an online CPU
  */
···
 /**
  * irq_force_affinity - Force the irq affinity of a given irq
  * @irq:	Interrupt to set affinity
- * @mask:	cpumask
+ * @cpumask:	cpumask
  *
  * Same as irq_set_affinity, but without checking the mask against
  * online cpus.
include/linux/irq.h | +2
···
 	return d ? irqd_get_trigger_type(d) : 0;
 }

+unsigned int arch_dynirq_lower_bound(unsigned int from);
+
 int __irq_alloc_descs(int irq, unsigned int from, unsigned int cnt, int node,
 		struct module *owner);
···
 extern int netlink_add_tap(struct netlink_tap *nt);
 extern int netlink_remove_tap(struct netlink_tap *nt);

+bool __netlink_ns_capable(const struct netlink_skb_parms *nsp,
+			  struct user_namespace *ns, int cap);
+bool netlink_ns_capable(const struct sk_buff *skb,
+			struct user_namespace *ns, int cap);
+bool netlink_capable(const struct sk_buff *skb, int cap);
+bool netlink_net_capable(const struct sk_buff *skb, int cap);
+
 #endif	/* __LINUX_NETLINK_H */
include/linux/of_irq.h | +5
···

 #ifdef CONFIG_OF_IRQ
 extern int of_irq_count(struct device_node *dev);
+extern int of_irq_get(struct device_node *dev, int index);
 #else
 static inline int of_irq_count(struct device_node *dev)
+{
+	return 0;
+}
+static inline int of_irq_get(struct device_node *dev, int index)
 {
 	return 0;
 }
···
 	if ((task_active_pid_ns(current) != &init_pid_ns))
 		return -EPERM;

-	if (!capable(CAP_AUDIT_CONTROL))
+	if (!netlink_capable(skb, CAP_AUDIT_CONTROL))
 		err = -EPERM;
 	break;
 case AUDIT_USER:
 case AUDIT_FIRST_USER_MSG ... AUDIT_LAST_USER_MSG:
 case AUDIT_FIRST_USER_MSG2 ... AUDIT_LAST_USER_MSG2:
-	if (!capable(CAP_AUDIT_WRITE))
+	if (!netlink_capable(skb, CAP_AUDIT_WRITE))
 		err = -EPERM;
 	break;
 default:  /* bad msg */
kernel/context_tracking.c | +1 -1
···
  * instead of preempt_schedule() to exit user context if needed before
  * calling the scheduler.
  */
-asmlinkage void __sched notrace preempt_schedule_context(void)
+asmlinkage __visible void __sched notrace preempt_schedule_context(void)
 {
 	enum ctx_state prev_ctx;
kernel/hrtimer.c | +22
···
 			goto again;
 		}
 		timer->base = new_base;
+	} else {
+		if (cpu != this_cpu && hrtimer_check_target(timer, new_base)) {
+			cpu = this_cpu;
+			goto again;
+		}
 	}
 	return new_base;
 }
···
 		return;

 	cpu_base->expires_next.tv64 = expires_next.tv64;
+
+	/*
+	 * If a hang was detected in the last timer interrupt then we
+	 * leave the hang delay active in the hardware. We want the
+	 * system to make progress. That also prevents the following
+	 * scenario:
+	 * T1 expires 50ms from now
+	 * T2 expires 5s from now
+	 *
+	 * T1 is removed, so this code is called and would reprogram
+	 * the hardware to 5s from now. Any hrtimer_start after that
+	 * will not reprogram the hardware due to hang_detected being
+	 * set. So we'd effectivly block all timers until the T2 event
+	 * fires.
+	 */
+	if (cpu_base->hang_detected)
+		return;

 	if (cpu_base->expires_next.tv64 != KTIME_MAX)
 		tick_program_event(cpu_base->expires_next, 1);
kernel/irq/irqdesc.c | +7
···
 		if (from > irq)
 			return -EINVAL;
 		from = irq;
+	} else {
+		/*
+		 * For interrupts which are freely allocated the
+		 * architecture can force a lower bound to the @from
+		 * argument. x86 uses this to exclude the GSI space.
+		 */
+		from = arch_dynirq_lower_bound(from);
 	}

 	mutex_lock(&sparse_irq_lock);
···
  *
  * See the vsnprintf() documentation for format string extensions over C99.
  */
-asmlinkage int printk(const char *fmt, ...)
+asmlinkage __visible int printk(const char *fmt, ...)
 {
 	va_list args;
 	int r;
···
 	}
 }

-asmlinkage void early_printk(const char *fmt, ...)
+asmlinkage __visible void early_printk(const char *fmt, ...)
 {
 	va_list ap;
kernel/sched/core.c | +5 -5
···
  * schedule_tail - first thing a freshly forked thread must call.
  * @prev: the thread we just switched away from.
  */
-asmlinkage void schedule_tail(struct task_struct *prev)
+asmlinkage __visible void schedule_tail(struct task_struct *prev)
 	__releases(rq->lock)
 {
 	struct rq *rq = this_rq();
···
 	blk_schedule_flush_plug(tsk);
 }

-asmlinkage void __sched schedule(void)
+asmlinkage __visible void __sched schedule(void)
 {
 	struct task_struct *tsk = current;
···
 EXPORT_SYMBOL(schedule);

 #ifdef CONFIG_CONTEXT_TRACKING
-asmlinkage void __sched schedule_user(void)
+asmlinkage __visible void __sched schedule_user(void)
 {
 	/*
 	 * If we come here after a random call to set_need_resched(),
···
  * off of preempt_enable. Kernel preemptions off return from interrupt
  * occur there and call schedule directly.
  */
-asmlinkage void __sched notrace preempt_schedule(void)
+asmlinkage __visible void __sched notrace preempt_schedule(void)
 {
 	/*
 	 * If there is a non-zero preempt_count or interrupts are disabled,
···
  * Note, that this is called and return with irqs disabled. This will
  * protect us against recursive calling from irq.
  */
-asmlinkage void __sched preempt_schedule_irq(void)
+asmlinkage __visible void __sched preempt_schedule_irq(void)
 {
 	enum ctx_state prev_state;
kernel/softirq.c | +7 -2
···
 static inline void lockdep_softirq_end(bool in_hardirq) { }
 #endif

-asmlinkage void __do_softirq(void)
+asmlinkage __visible void __do_softirq(void)
 {
 	unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
 	unsigned long old_flags = current->flags;
···
 	tsk_restore_flags(current, old_flags, PF_MEMALLOC);
 }

-asmlinkage void do_softirq(void)
+asmlinkage __visible void do_softirq(void)
 {
 	__u32 pending;
 	unsigned long flags;
···
 int __init __weak arch_early_irq_init(void)
 {
 	return 0;
+}
+
+unsigned int __weak arch_dynirq_lower_bound(unsigned int from)
+{
+	return from;
 }
···
 	ftrace_process_locs(mod, start, end);
 }

-static int ftrace_module_notify_enter(struct notifier_block *self,
-				      unsigned long val, void *data)
+void ftrace_module_init(struct module *mod)
 {
-	struct module *mod = data;
-
-	if (val == MODULE_STATE_COMING)
-		ftrace_init_module(mod, mod->ftrace_callsites,
-				   mod->ftrace_callsites +
-				   mod->num_ftrace_callsites);
-	return 0;
+	ftrace_init_module(mod, mod->ftrace_callsites,
+			   mod->ftrace_callsites +
+			   mod->num_ftrace_callsites);
 }

 static int ftrace_module_notify_exit(struct notifier_block *self,
···
 	return 0;
 }
 #else
-static int ftrace_module_notify_enter(struct notifier_block *self,
-				      unsigned long val, void *data)
-{
-	return 0;
-}
 static int ftrace_module_notify_exit(struct notifier_block *self,
 				     unsigned long val, void *data)
 {
 	return 0;
 }
 #endif /* CONFIG_MODULES */
-
-struct notifier_block ftrace_module_enter_nb = {
-	.notifier_call = ftrace_module_notify_enter,
-	.priority = INT_MAX,	/* Run before anything that can use kprobes */
-};

 struct notifier_block ftrace_module_exit_nb = {
 	.notifier_call = ftrace_module_notify_exit,
···
 	ret = ftrace_process_locs(NULL,
 				  __start_mcount_loc,
 				  __stop_mcount_loc);
-
-	ret = register_module_notifier(&ftrace_module_enter_nb);
-	if (ret)
-		pr_warning("Failed to register trace ftrace module enter notifier\n");

 	ret = register_module_notifier(&ftrace_module_exit_nb);
 	if (ret)
kernel/trace/trace_events_trigger.c | +1 -1
···
 		data->ops->func(data);
 		continue;
 	}
-	filter = rcu_dereference(data->filter);
+	filter = rcu_dereference_sched(data->filter);
 	if (filter && !filter_match_preds(filter, rec))
 		continue;
 	if (data->cmd_ops->post_trigger) {
kernel/tracepoint.c | +2 -2
···
 		WARN_ON_ONCE(1);
 		return PTR_ERR(old);
 	}
-	release_probes(old);

 	/*
 	 * rcu_assign_pointer has a smp_wmb() which makes sure that the new
···
 	rcu_assign_pointer(tp->funcs, tp_funcs);
 	if (!static_key_enabled(&tp->key))
 		static_key_slow_inc(&tp->key);
+	release_probes(old);
 	return 0;
 }
···
 		WARN_ON_ONCE(1);
 		return PTR_ERR(old);
 	}
-	release_probes(old);

 	if (!tp_funcs) {
 		/* Removed last function */
···
 		static_key_slow_dec(&tp->key);
 	}
 	rcu_assign_pointer(tp->funcs, tp_funcs);
+	release_probes(old);
 	return 0;
 }
···
 			struct compact_control *cc)
 {
 	struct page *page;
-	unsigned long high_pfn, low_pfn, pfn, z_end_pfn, end_pfn;
+	unsigned long high_pfn, low_pfn, pfn, z_end_pfn;
 	int nr_freepages = cc->nr_freepages;
 	struct list_head *freelist = &cc->freepages;

 	/*
 	 * Initialise the free scanner. The starting point is where we last
-	 * scanned from (or the end of the zone if starting). The low point
-	 * is the end of the pageblock the migration scanner is using.
+	 * successfully isolated from, zone-cached value, or the end of the
+	 * zone when isolating for the first time. We need this aligned to
+	 * the pageblock boundary, because we do pfn -= pageblock_nr_pages
+	 * in the for loop.
+	 * The low boundary is the end of the pageblock the migration scanner
+	 * is using.
 	 */
-	pfn = cc->free_pfn;
+	pfn = cc->free_pfn & ~(pageblock_nr_pages-1);
 	low_pfn = ALIGN(cc->migrate_pfn + 1, pageblock_nr_pages);

 	/*
···
 	for (; pfn >= low_pfn && cc->nr_migratepages > nr_freepages;
 					pfn -= pageblock_nr_pages) {
 		unsigned long isolated;
+		unsigned long end_pfn;

 		/*
 		 * This can iterate a massively long zone without finding any
···
 		isolated = 0;

 		/*
-		 * As pfn may not start aligned, pfn+pageblock_nr_page
-		 * may cross a MAX_ORDER_NR_PAGES boundary and miss
-		 * a pfn_valid check. Ensure isolate_freepages_block()
-		 * only scans within a pageblock
+		 * Take care when isolating in last pageblock of a zone which
+		 * ends in the middle of a pageblock.
 		 */
-		end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
-		end_pfn = min(end_pfn, z_end_pfn);
+		end_pfn = min(pfn + pageblock_nr_pages, z_end_pfn);
 		isolated = isolate_freepages_block(cc, pfn, end_pfn,
 						   freelist, false);
 		nr_freepages += isolated;
mm/filemap.c | +28 -21
···
  * Looks up the page cache slot at @mapping & @offset.  If there is a
  * page cache page, it is returned with an increased refcount.
  *
- * If the slot holds a shadow entry of a previously evicted page, it
- * is returned.
+ * If the slot holds a shadow entry of a previously evicted page, or a
+ * swap entry from shmem/tmpfs, it is returned.
  *
  * Otherwise, %NULL is returned.
  */
···
 		if (radix_tree_deref_retry(page))
 			goto repeat;
 		/*
-		 * Otherwise, shmem/tmpfs must be storing a swap entry
-		 * here as an exceptional entry: so return it without
-		 * attempting to raise page count.
+		 * A shadow entry of a recently evicted page,
+		 * or a swap entry from shmem/tmpfs.  Return
+		 * it without attempting to raise page count.
 		 */
 		goto out;
 	}
···
  * page cache page, it is returned locked and with an increased
  * refcount.
  *
- * If the slot holds a shadow entry of a previously evicted page, it
- * is returned.
+ * If the slot holds a shadow entry of a previously evicted page, or a
+ * swap entry from shmem/tmpfs, it is returned.
  *
  * Otherwise, %NULL is returned.
  *
···
  * with ascending indexes.  There may be holes in the indices due to
  * not-present pages.
  *
- * Any shadow entries of evicted pages are included in the returned
- * array.
+ * Any shadow entries of evicted pages, or swap entries from
+ * shmem/tmpfs, are included in the returned array.
  *
  * find_get_entries() returns the number of pages and shadow entries
  * which were found.
···
 		if (radix_tree_deref_retry(page))
 			goto restart;
 		/*
-		 * Otherwise, we must be storing a swap entry
-		 * here as an exceptional entry: so return it
-		 * without attempting to raise page count.
+		 * A shadow entry of a recently evicted page,
+		 * or a swap entry from shmem/tmpfs.  Return
+		 * it without attempting to raise page count.
 		 */
 		goto export;
 	}
···
 			goto restart;
 		}
 		/*
-		 * Otherwise, shmem/tmpfs must be storing a swap entry
-		 * here as an exceptional entry: so skip over it -
-		 * we only reach this from invalidate_mapping_pages().
+		 * A shadow entry of a recently evicted page,
+		 * or a swap entry from shmem/tmpfs.  Skip
+		 * over it.
 		 */
 		continue;
 	}
···
 			goto restart;
 		}
 		/*
-		 * Otherwise, shmem/tmpfs must be storing a swap entry
-		 * here as an exceptional entry: so stop looking for
-		 * contiguous pages.
+		 * A shadow entry of a recently evicted page,
+		 * or a swap entry from shmem/tmpfs.  Stop
+		 * looking for contiguous pages.
 		 */
 		break;
 	}
···
 			goto restart;
 		}
 		/*
-		 * This function is never used on a shmem/tmpfs
-		 * mapping, so a swap entry won't be found here.
+		 * A shadow entry of a recently evicted page.
+		 *
+		 * Those entries should never be tagged, but
+		 * this tree walk is lockless and the tags are
+		 * looked up in bulk, one radix tree node at a
+		 * time, so there is a sizable window for page
+		 * reclaim to evict a page we saw tagged.
+		 *
+		 * Skip over it.
 		 */
-		BUG();
+		continue;
 	}

 	if (!page_cache_get_speculative(page))
mm/hugetlb.c | +14 -5
···
 {
 	int i;

-	/* Some platform decide whether they support huge pages at boot
-	 * time. On these, such as powerpc, HPAGE_SHIFT is set to 0 when
-	 * there is no such support
-	 */
-	if (HPAGE_SHIFT == 0)
+	if (!hugepages_supported())
 		return 0;

 	if (!size_to_hstate(default_hstate_size)) {
···
 	unsigned long tmp;
 	int ret;

+	if (!hugepages_supported())
+		return -ENOTSUPP;
+
 	tmp = h->max_huge_pages;

 	if (write && h->order >= MAX_ORDER)
···
 	unsigned long tmp;
 	int ret;

+	if (!hugepages_supported())
+		return -ENOTSUPP;
+
 	tmp = h->nr_overcommit_huge_pages;

 	if (write && h->order >= MAX_ORDER)
···
 void hugetlb_report_meminfo(struct seq_file *m)
 {
 	struct hstate *h = &default_hstate;
+	if (!hugepages_supported())
+		return;
 	seq_printf(m,
 			"HugePages_Total:   %5lu\n"
 			"HugePages_Free:    %5lu\n"
···
 int hugetlb_report_node_meminfo(int nid, char *buf)
 {
 	struct hstate *h = &default_hstate;
+	if (!hugepages_supported())
+		return 0;
 	return sprintf(buf,
 		"Node %d HugePages_Total: %5u\n"
 		"Node %d HugePages_Free:  %5u\n"
···
 {
 	struct hstate *h;
 	int nid;
+
+	if (!hugepages_supported())
+		return;

 	for_each_node_state(nid, N_MEMORY)
 		for_each_hstate(h)
mm/memcontrol.c | +12 -8
···
 	pgoff = pte_to_pgoff(ptent);

 	/* page is moved even if it's not RSS of this task(page-faulted). */
-	page = find_get_page(mapping, pgoff);
-
 #ifdef CONFIG_SWAP
 	/* shmem/tmpfs may report page out on swap: account for that too. */
-	if (radix_tree_exceptional_entry(page)) {
-		swp_entry_t swap = radix_to_swp_entry(page);
-		if (do_swap_account)
-			*entry = swap;
-		page = find_get_page(swap_address_space(swap), swap.val);
-	}
+	if (shmem_mapping(mapping)) {
+		page = find_get_entry(mapping, pgoff);
+		if (radix_tree_exceptional_entry(page)) {
+			swp_entry_t swp = radix_to_swp_entry(page);
+			if (do_swap_account)
+				*entry = swp;
+			page = find_get_page(swap_address_space(swp), swp.val);
+		}
+	} else
+		page = find_get_page(mapping, pgoff);
+#else
+	page = find_get_page(mapping, pgoff);
 #endif
 	return page;
 }
mm/page-writeback.c | +3 -3
···
  * (5) the closer to setpoint, the smaller |df/dx| (and the reverse)
  *     => fast response on large errors; small oscillation near setpoint
  */
-static inline long long pos_ratio_polynom(unsigned long setpoint,
+static long long pos_ratio_polynom(unsigned long setpoint,
 					  unsigned long dirty,
 					  unsigned long limit)
 {
 	long long pos_ratio;
 	long x;

-	x = div_s64(((s64)setpoint - (s64)dirty) << RATELIMIT_CALC_SHIFT,
+	x = div64_s64(((s64)setpoint - (s64)dirty) << RATELIMIT_CALC_SHIFT,
 		    limit - setpoint + 1);
 	pos_ratio = x;
 	pos_ratio = pos_ratio * x >> RATELIMIT_CALC_SHIFT;
···
 	x_intercept = bdi_setpoint + span;

 	if (bdi_dirty < x_intercept - span / 4) {
-		pos_ratio = div_u64(pos_ratio * (x_intercept - bdi_dirty),
+		pos_ratio = div64_u64(pos_ratio * (x_intercept - bdi_dirty),
 				    x_intercept - bdi_setpoint + 1);
 	} else
 		pos_ratio /= 4;
mm/slab.c | +3 -3
···
 typedef unsigned short freelist_idx_t;
 #endif

-#define SLAB_OBJ_MAX_NUM (1 << sizeof(freelist_idx_t) * BITS_PER_BYTE)
+#define SLAB_OBJ_MAX_NUM ((1 << sizeof(freelist_idx_t) * BITS_PER_BYTE) - 1)

 /*
  * true if a page was allocated from pfmemalloc reserves for network-based
···
 	return freelist;
 }

-static inline freelist_idx_t get_free_obj(struct page *page, unsigned char idx)
+static inline freelist_idx_t get_free_obj(struct page *page, unsigned int idx)
 {
 	return ((freelist_idx_t *)page->freelist)[idx];
 }

 static inline void set_free_obj(struct page *page,
-					unsigned char idx, freelist_idx_t val)
+					unsigned int idx, freelist_idx_t val)
 {
 	((freelist_idx_t *)(page->freelist))[idx] = val;
 }
···
 #ifdef CONFIG_SYSFS
 static int sysfs_slab_add(struct kmem_cache *);
 static int sysfs_slab_alias(struct kmem_cache *, const char *);
-static void sysfs_slab_remove(struct kmem_cache *);
 static void memcg_propagate_slab_attrs(struct kmem_cache *s);
 #else
 static inline int sysfs_slab_add(struct kmem_cache *s) { return 0; }
 static inline int sysfs_slab_alias(struct kmem_cache *s, const char *p)
 							{ return 0; }
-static inline void sysfs_slab_remove(struct kmem_cache *s) { }
-
 static inline void memcg_propagate_slab_attrs(struct kmem_cache *s) { }
 #endif
···

 int __kmem_cache_shutdown(struct kmem_cache *s)
 {
-	int rc = kmem_cache_close(s);
-
-	if (!rc) {
-		/*
-		 * Since slab_attr_store may take the slab_mutex, we should
-		 * release the lock while removing the sysfs entry in order to
-		 * avoid a deadlock. Because this is pretty much the last
-		 * operation we do and the lock will be released shortly after
-		 * that in slab_common.c, we could just move sysfs_slab_remove
-		 * to a later point in common code. We should do that when we
-		 * have a common sysfs framework for all allocators.
-		 */
-		mutex_unlock(&slab_mutex);
-		sysfs_slab_remove(s);
-		mutex_lock(&slab_mutex);
-	}
-
-	return rc;
+	return kmem_cache_close(s);
 }

 /********************************************************************
···
 #ifdef CONFIG_MEMCG_KMEM
 	int i;
 	char *buffer = NULL;
+	struct kmem_cache *root_cache;

-	if (!is_root_cache(s))
+	if (is_root_cache(s))
 		return;
+
+	root_cache = s->memcg_params->root_cache;

 	/*
 	 * This mean this cache had no attribute written. Therefore, no point
 	 * in copying default values around
 	 */
-	if (!s->max_attr_size)
+	if (!root_cache->max_attr_size)
 		return;

 	for (i = 0; i < ARRAY_SIZE(slab_attrs); i++) {
···
 	 */
 	if (buffer)
 		buf = buffer;
-	else if (s->max_attr_size < ARRAY_SIZE(mbuf))
+	else if (root_cache->max_attr_size < ARRAY_SIZE(mbuf))
 		buf = mbuf;
 	else {
 		buffer = (char *) get_zeroed_page(GFP_KERNEL);
···
 		buf = buffer;
 	}

-	attr->show(s->memcg_params->root_cache, buf);
+	attr->show(root_cache, buf);
 	attr->store(s, buf, strlen(buf));
 }

 	if (buffer)
 		free_page((unsigned long)buffer);
 #endif
+}
+
+static void kmem_cache_release(struct kobject *k)
+{
+	slab_kmem_cache_release(to_slab(k));
 }

 static const struct sysfs_ops slab_sysfs_ops = {
···

 static struct kobj_type slab_ktype = {
 	.sysfs_ops = &slab_sysfs_ops,
+	.release = kmem_cache_release,
 };

 static int uevent_filter(struct kset *kset, struct kobject *kobj)
···
 	goto out;
 }

-static void sysfs_slab_remove(struct kmem_cache *s)
+void sysfs_slab_remove(struct kmem_cache *s)
 {
 	if (slab_state < FULL)
 		/*
-8
mm/truncate.c
···484484 unsigned long count = 0;485485 int i;486486487487- /*488488- * Note: this function may get called on a shmem/tmpfs mapping:489489- * pagevec_lookup() might then return 0 prematurely (because it490490- * got a gangful of swap entries); but it's hardly worth worrying491491- * about - it can rarely have anything to free from such a mapping492492- * (most pages are dirty), and already skips over any difficulties.493493- */494494-495487 pagevec_init(&pvec, 0);496488 while (index <= end && pagevec_lookup_entries(&pvec, mapping, index,497489 min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1,
···8181 for (i = 0; i < VMACACHE_SIZE; i++) {8282 struct vm_area_struct *vma = current->vmacache[i];83838484- if (vma && vma->vm_start <= addr && vma->vm_end > addr) {8585- BUG_ON(vma->vm_mm != mm);8484+ if (!vma)8585+ continue;8686+ if (WARN_ON_ONCE(vma->vm_mm != mm))8787+ break;8888+ if (vma->vm_start <= addr && vma->vm_end > addr)8689 return vma;8787- }8890 }89919092 return NULL;
+18
mm/vmscan.c
···19161916 get_lru_size(lruvec, LRU_INACTIVE_FILE);1917191719181918 /*19191919+ * Prevent the reclaimer from falling into the cache trap: as19201920+ * cache pages start out inactive, every cache fault will tip19211921+ * the scan balance towards the file LRU. And as the file LRU19221922+ * shrinks, so does the window for rotation from references.19231923+ * This means we have a runaway feedback loop where a tiny19241924+ * thrashing file LRU becomes infinitely more attractive than19251925+ * anon pages. Try to detect this based on file LRU size.19261926+ */19271927+ if (global_reclaim(sc)) {19281928+ unsigned long free = zone_page_state(zone, NR_FREE_PAGES);19291929+19301930+ if (unlikely(file + free <= high_wmark_pages(zone))) {19311931+ scan_balance = SCAN_ANON;19321932+ goto out;19331933+ }19341934+ }19351935+19361936+ /*19191937 * There is enough inactive page cache, do not reclaim19201938 * anything from the anonymous working set right now.19211939 */
+6-3
net/bluetooth/hci_conn.c
···819819 if (!test_and_set_bit(HCI_CONN_AUTH_PEND, &conn->flags)) {820820 struct hci_cp_auth_requested cp;821821822822- /* encrypt must be pending if auth is also pending */823823- set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags);824824-825822 cp.handle = cpu_to_le16(conn->handle);826823 hci_send_cmd(conn->hdev, HCI_OP_AUTH_REQUESTED,827824 sizeof(cp), &cp);825825+826826+ /* If we're already encrypted set the REAUTH_PEND flag,827827+ * otherwise set the ENCRYPT_PEND.828828+ */828829 if (conn->key_type != 0xff)829830 set_bit(HCI_CONN_REAUTH_PEND, &conn->flags);831831+ else832832+ set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags);830833 }831834832835 return 0;
+6
net/bluetooth/hci_event.c
···33303330 if (!conn)33313331 goto unlock;3332333233333333+ /* For BR/EDR the necessary steps are taken through the33343334+ * auth_complete event.33353335+ */33363336+ if (conn->type != LE_LINK)33373337+ goto unlock;33383338+33333339 if (!ev->status)33343340 conn->sec_level = conn->pending_sec_level;33353341
···804804 u8 limhops = 0;805805 int err = 0;806806807807- if (!capable(CAP_NET_ADMIN))807807+ if (!netlink_capable(skb, CAP_NET_ADMIN))808808 return -EPERM;809809810810 if (nlmsg_len(nlh) < sizeof(*r))···893893 u8 limhops = 0;894894 int err = 0;895895896896- if (!capable(CAP_NET_ADMIN))896896+ if (!netlink_capable(skb, CAP_NET_ADMIN))897897 return -EPERM;898898899899 if (nlmsg_len(nlh) < sizeof(*r))
+5-4
net/ceph/osdmap.c
···15481548 return;1549154915501550 for (i = 0; i < len; i++) {15511551- if (osds[i] != CRUSH_ITEM_NONE &&15521552- osdmap->osd_primary_affinity[i] !=15511551+ int osd = osds[i];15521552+15531553+ if (osd != CRUSH_ITEM_NONE &&15541554+ osdmap->osd_primary_affinity[osd] !=15531555 CEPH_OSD_DEFAULT_PRIMARY_AFFINITY) {15541556 break;15551557 }···15651563 * osd's pgs get rejected as primary.15661564 */15671565 for (i = 0; i < len; i++) {15681568- int osd;15661566+ int osd = osds[i];15691567 u32 aff;1570156815711571- osd = osds[i];15721569 if (osd == CRUSH_ITEM_NONE)15731570 continue;15741571
+9-7
net/core/filter.c
···122122 return 0;123123}124124125125+/* Register mappings for user programs. */126126+#define A_REG 0127127+#define X_REG 7128128+#define TMP_REG 8129129+#define ARG2_REG 2130130+#define ARG3_REG 3131131+125132/**126133 * __sk_run_filter - run a filter on a given context127134 * @ctx: buffer to run the filter on···249242250243 regs[FP_REG] = (u64) (unsigned long) &stack[ARRAY_SIZE(stack)];251244 regs[ARG1_REG] = (u64) (unsigned long) ctx;245245+ regs[A_REG] = 0;246246+ regs[X_REG] = 0;252247253248select_insn:254249 goto *jumptable[insn->code];···651642{652643 return raw_smp_processor_id();653644}654654-655655-/* Register mappings for user programs. */656656-#define A_REG 0657657-#define X_REG 7658658-#define TMP_REG 8659659-#define ARG2_REG 2660660-#define ARG3_REG 3661645662646static bool convert_bpf_extensions(struct sock_filter *fp,663647 struct sock_filter_int **insnp)
+33-20
net/core/rtnetlink.c
···774774 return 0;775775}776776777777-static size_t rtnl_port_size(const struct net_device *dev)777777+static size_t rtnl_port_size(const struct net_device *dev,778778+ u32 ext_filter_mask)778779{779780 size_t port_size = nla_total_size(4) /* PORT_VF */780781 + nla_total_size(PORT_PROFILE_MAX) /* PORT_PROFILE */···791790 size_t port_self_size = nla_total_size(sizeof(struct nlattr))792791 + port_size;793792794794- if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent)793793+ if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent ||794794+ !(ext_filter_mask & RTEXT_FILTER_VF))795795 return 0;796796 if (dev_num_vf(dev->dev.parent))797797 return port_self_size + vf_ports_size +···828826 + nla_total_size(ext_filter_mask829827 & RTEXT_FILTER_VF ? 4 : 0) /* IFLA_NUM_VF */830828 + rtnl_vfinfo_size(dev, ext_filter_mask) /* IFLA_VFINFO_LIST */831831- + rtnl_port_size(dev) /* IFLA_VF_PORTS + IFLA_PORT_SELF */829829+ + rtnl_port_size(dev, ext_filter_mask) /* IFLA_VF_PORTS + IFLA_PORT_SELF */832830 + rtnl_link_get_size(dev) /* IFLA_LINKINFO */833831 + rtnl_link_get_af_size(dev) /* IFLA_AF_SPEC */834832 + nla_total_size(MAX_PHYS_PORT_ID_LEN); /* IFLA_PHYS_PORT_ID */···890888 return 0;891889}892890893893-static int rtnl_port_fill(struct sk_buff *skb, struct net_device *dev)891891+static int rtnl_port_fill(struct sk_buff *skb, struct net_device *dev,892892+ u32 ext_filter_mask)894893{895894 int err;896895897897- if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent)896896+ if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent ||897897+ !(ext_filter_mask & RTEXT_FILTER_VF))898898 return 0;899899900900 err = rtnl_port_self_fill(skb, dev);···10831079 nla_nest_end(skb, vfinfo);10841080 }1085108110861086- if (rtnl_port_fill(skb, dev))10821082+ if (rtnl_port_fill(skb, dev, ext_filter_mask))10871083 goto nla_put_failure;1088108410891085 if (dev->rtnl_link_ops || rtnl_have_link_slave_info(dev)) {···12021198 struct hlist_head *head;12031199 struct nlattr 
*tb[IFLA_MAX+1];12041200 u32 ext_filter_mask = 0;12011201+ int err;1205120212061203 s_h = cb->args[0];12071204 s_idx = cb->args[1];···12231218 hlist_for_each_entry_rcu(dev, head, index_hlist) {12241219 if (idx < s_idx)12251220 goto cont;12261226- if (rtnl_fill_ifinfo(skb, dev, RTM_NEWLINK,12271227- NETLINK_CB(cb->skb).portid,12281228- cb->nlh->nlmsg_seq, 0,12291229- NLM_F_MULTI,12301230- ext_filter_mask) <= 0)12211221+ err = rtnl_fill_ifinfo(skb, dev, RTM_NEWLINK,12221222+ NETLINK_CB(cb->skb).portid,12231223+ cb->nlh->nlmsg_seq, 0,12241224+ NLM_F_MULTI,12251225+ ext_filter_mask);12261226+ /* If we ran out of room on the first message,12271227+ * we're in trouble12281228+ */12291229+ WARN_ON((err == -EMSGSIZE) && (skb->len == 0));12301230+12311231+ if (err <= 0)12311232 goto out;1232123312331234 nl_dump_check_consistent(cb, nlmsg_hdr(skb));···14061395 return 0;14071396}1408139714091409-static int do_setlink(struct net_device *dev, struct ifinfomsg *ifm,13981398+static int do_setlink(const struct sk_buff *skb,13991399+ struct net_device *dev, struct ifinfomsg *ifm,14101400 struct nlattr **tb, char *ifname, int modified)14111401{14121402 const struct net_device_ops *ops = dev->netdev_ops;···14191407 err = PTR_ERR(net);14201408 goto errout;14211409 }14221422- if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) {14101410+ if (!netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN)) {14231411 err = -EPERM;14241412 goto errout;14251413 }···16731661 if (err < 0)16741662 goto errout;1675166316761676- err = do_setlink(dev, ifm, tb, ifname, 0);16641664+ err = do_setlink(skb, dev, ifm, tb, ifname, 0);16771665errout:16781666 return err;16791667}···17901778}17911779EXPORT_SYMBOL(rtnl_create_link);1792178017931793-static int rtnl_group_changelink(struct net *net, int group,17811781+static int rtnl_group_changelink(const struct sk_buff *skb,17821782+ struct net *net, int group,17941783 struct ifinfomsg *ifm,17951784 struct nlattr **tb)17961785{···1800178718011788 for_each_netdev(net, dev) 
{18021789 if (dev->group == group) {18031803- err = do_setlink(dev, ifm, tb, NULL, 0);17901790+ err = do_setlink(skb, dev, ifm, tb, NULL, 0);18041791 if (err < 0)18051792 return err;18061793 }···19421929 modified = 1;19431930 }1944193119451945- return do_setlink(dev, ifm, tb, ifname, modified);19321932+ return do_setlink(skb, dev, ifm, tb, ifname, modified);19461933 }1947193419481935 if (!(nlh->nlmsg_flags & NLM_F_CREATE)) {19491936 if (ifm->ifi_index == 0 && tb[IFLA_GROUP])19501950- return rtnl_group_changelink(net,19371937+ return rtnl_group_changelink(skb, net,19511938 nla_get_u32(tb[IFLA_GROUP]),19521939 ifm, tb);19531940 return -ENODEV;···23342321 int err = -EINVAL;23352322 __u8 *addr;2336232323372337- if (!capable(CAP_NET_ADMIN))23242324+ if (!netlink_capable(skb, CAP_NET_ADMIN))23382325 return -EPERM;2339232623402327 err = nlmsg_parse(nlh, sizeof(*ndm), tb, NDA_MAX, NULL);···27862773 sz_idx = type>>2;27872774 kind = type&3;2788277527892789- if (kind != 2 && !ns_capable(net->user_ns, CAP_NET_ADMIN))27762776+ if (kind != 2 && !netlink_net_capable(skb, CAP_NET_ADMIN))27902777 return -EPERM;2791277827922779 if (kind == 2 && nlh->nlmsg_flags&NLM_F_DUMP) {
+49
net/core/sock.c
···145145static DEFINE_MUTEX(proto_list_mutex);146146static LIST_HEAD(proto_list);147147148148+/**149149+ * sk_ns_capable - General socket capability test150150+ * @sk: Socket to use a capability on or through151151+ * @user_ns: The user namespace of the capability to use152152+ * @cap: The capability to use153153+ *154154+ * Test to see if the opener of the socket had the capability @cap when155155+ * the socket was created and the current process has @cap in the user156156+ * namespace @user_ns.157157+ */158158+bool sk_ns_capable(const struct sock *sk,159159+ struct user_namespace *user_ns, int cap)160160+{161161+ return file_ns_capable(sk->sk_socket->file, user_ns, cap) &&162162+ ns_capable(user_ns, cap);163163+}164164+EXPORT_SYMBOL(sk_ns_capable);165165+166166+/**167167+ * sk_capable - Socket global capability test168168+ * @sk: Socket to use a capability on or through169169+ * @cap: The global capability to use170170+ *171171+ * Test to see if the opener of the socket had the capability @cap when172172+ * the socket was created and the current process has @cap in all user173173+ * namespaces.174174+ */175175+bool sk_capable(const struct sock *sk, int cap)176176+{177177+ return sk_ns_capable(sk, &init_user_ns, cap);178178+}179179+EXPORT_SYMBOL(sk_capable);180180+181181+/**182182+ * sk_net_capable - Network namespace socket capability test183183+ * @sk: Socket to use a capability on or through184184+ * @cap: The capability to use185185+ *186186+ * Test to see if the opener of the socket had the capability @cap when187187+ * the socket was created and the current process has @cap over the network188188+ * namespace the socket is a member of.189189+ */190190+bool sk_net_capable(const struct sock *sk, int cap)191191+{192192+ return sk_ns_capable(sk, sock_net(sk)->user_ns, cap);193193+}194194+EXPORT_SYMBOL(sk_net_capable);195195+196196+148197#ifdef CONFIG_MEMCG_KMEM149198int mem_cgroup_sockets_init(struct mem_cgroup *memcg, struct cgroup_subsys *ss)150199{
+2-2
net/core/sock_diag.c
···4949}5050EXPORT_SYMBOL_GPL(sock_diag_put_meminfo);51515252-int sock_diag_put_filterinfo(struct user_namespace *user_ns, struct sock *sk,5252+int sock_diag_put_filterinfo(bool may_report_filterinfo, struct sock *sk,5353 struct sk_buff *skb, int attrtype)5454{5555 struct sock_fprog_kern *fprog;···5858 unsigned int flen;5959 int err = 0;60606161- if (!ns_capable(user_ns, CAP_NET_ADMIN)) {6161+ if (!may_report_filterinfo) {6262 nla_reserve(skb, attrtype, 0);6363 return 0;6464 }
···574574 struct dn_ifaddr __rcu **ifap;575575 int err = -EINVAL;576576577577- if (!capable(CAP_NET_ADMIN))577577+ if (!netlink_capable(skb, CAP_NET_ADMIN))578578 return -EPERM;579579580580 if (!net_eq(net, &init_net))···618618 struct dn_ifaddr *ifa;619619 int err;620620621621- if (!capable(CAP_NET_ADMIN))621621+ if (!netlink_capable(skb, CAP_NET_ADMIN))622622 return -EPERM;623623624624 if (!net_eq(net, &init_net))
+2-2
net/decnet/dn_fib.c
···505505 struct nlattr *attrs[RTA_MAX+1];506506 int err;507507508508- if (!capable(CAP_NET_ADMIN))508508+ if (!netlink_capable(skb, CAP_NET_ADMIN))509509 return -EPERM;510510511511 if (!net_eq(net, &init_net))···530530 struct nlattr *attrs[RTA_MAX+1];531531 int err;532532533533- if (!capable(CAP_NET_ADMIN))533533+ if (!netlink_capable(skb, CAP_NET_ADMIN))534534 return -EPERM;535535536536 if (!net_eq(net, &init_net))
+1-1
net/decnet/netfilter/dn_rtmsg.c
···107107 if (nlh->nlmsg_len < sizeof(*nlh) || skb->len < nlh->nlmsg_len)108108 return;109109110110- if (!capable(CAP_NET_ADMIN))110110+ if (!netlink_capable(skb, CAP_NET_ADMIN))111111 RCV_SKB_FAIL(-EPERM);112112113113 /* Eventually we might send routing messages too */
+2
net/ipv4/ip_tunnel.c
···442442 tunnel->i_seqno = ntohl(tpi->seq) + 1;443443 }444444445445+ skb_reset_network_header(skb);446446+445447 err = IP_ECN_decapsulate(iph, skb);446448 if (unlikely(err)) {447449 if (log_ecn_error)
+1-1
net/ipv4/tcp_cubic.c
···409409 ratio -= ca->delayed_ack >> ACK_RATIO_SHIFT;410410 ratio += cnt;411411412412- ca->delayed_ack = min(ratio, ACK_RATIO_LIMIT);412412+ ca->delayed_ack = clamp(ratio, 1U, ACK_RATIO_LIMIT);413413 }414414415415 /* Some calls are for duplicates without timestamps */
+7-7
net/ipv4/tcp_output.c
···24412441 err = tcp_transmit_skb(sk, skb, 1, GFP_ATOMIC);24422442 }2443244324442444- if (likely(!err))24442444+ if (likely(!err)) {24452445 TCP_SKB_CB(skb)->sacked |= TCPCB_EVER_RETRANS;24462446+ /* Update global TCP statistics. */24472447+ TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);24482448+ if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)24492449+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);24502450+ tp->total_retrans++;24512451+ }24462452 return err;24472453}24482454···24582452 int err = __tcp_retransmit_skb(sk, skb);2459245324602454 if (err == 0) {24612461- /* Update global TCP statistics. */24622462- TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);24632463- if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)24642464- NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);24652465- tp->total_retrans++;24662466-24672455#if FASTRETRANS_DEBUG > 024682456 if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_RETRANS) {24692457 net_dbg_ratelimited("retrans_out leaked\n");
···368368static void nfnetlink_rcv(struct sk_buff *skb)369369{370370 struct nlmsghdr *nlh = nlmsg_hdr(skb);371371- struct net *net = sock_net(skb->sk);372371 int msglen;373372374373 if (nlh->nlmsg_len < NLMSG_HDRLEN ||375374 skb->len < nlh->nlmsg_len)376375 return;377376378378- if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) {377377+ if (!netlink_net_capable(skb, CAP_NET_ADMIN)) {379378 netlink_ack(skb, nlh, -EPERM);380379 return;381380 }
+70-5
net/netlink/af_netlink.c
···13601360 return err;13611361}1362136213631363-static inline int netlink_capable(const struct socket *sock, unsigned int flag)13631363+/**13641364+ * __netlink_ns_capable - General netlink message capability test13651365+ * @nsp: NETLINK_CB of the socket buffer holding a netlink command from userspace.13661366+ * @user_ns: The user namespace of the capability to use13671367+ * @cap: The capability to use13681368+ *13691369+ * Test to see if the opener of the socket we received the message13701370+ * from had the capability @cap when the netlink socket was created and13711371+ * the sender of the message has @cap in the user namespace @user_ns.13721372+ */13731373+bool __netlink_ns_capable(const struct netlink_skb_parms *nsp,13741374+ struct user_namespace *user_ns, int cap)13751375+{13761376+ return sk_ns_capable(nsp->sk, user_ns, cap);13771377+}13781378+EXPORT_SYMBOL(__netlink_ns_capable);13791379+13801380+/**13811381+ * netlink_ns_capable - General netlink message capability test13821382+ * @skb: socket buffer holding a netlink command from userspace13831383+ * @user_ns: The user namespace of the capability to use13841384+ * @cap: The capability to use13851385+ *13861386+ * Test to see if the opener of the socket we received the message13871387+ * from had the capability @cap when the netlink socket was created and13881388+ * the sender of the message has @cap in the user namespace @user_ns.13891389+ */13901390+bool netlink_ns_capable(const struct sk_buff *skb,13911391+ struct user_namespace *user_ns, int cap)13921392+{13931393+ return __netlink_ns_capable(&NETLINK_CB(skb), user_ns, cap);13941394+}13951395+EXPORT_SYMBOL(netlink_ns_capable);13961396+13971397+/**13981398+ * netlink_capable - Netlink global message capability test13991399+ * @skb: socket buffer holding a netlink command from userspace14001400+ * @cap: The capability to use14011401+ *14021402+ * Test to see if the opener of the socket we received the message14031403+ * from had the capability @cap when the netlink
 socket was created and14041404+ * the sender of the message has @cap in all user namespaces.14051405+ */14061406+bool netlink_capable(const struct sk_buff *skb, int cap)14071407+{14081408+ return netlink_ns_capable(skb, &init_user_ns, cap);14091409+}14101410+EXPORT_SYMBOL(netlink_capable);14111411+14121412+/**14131413+ * netlink_net_capable - Netlink network namespace message capability test14141414+ * @skb: socket buffer holding a netlink command from userspace14151415+ * @cap: The capability to use14161416+ *14171417+ * Test to see if the opener of the socket we received the message14181418+ * from had the capability @cap when the netlink socket was created and14191419+ * the sender of the message has @cap over the network namespace of14201420+ * the socket we received the message from.14211421+ */14221422+bool netlink_net_capable(const struct sk_buff *skb, int cap)14231423+{14241424+ return netlink_ns_capable(skb, sock_net(skb->sk)->user_ns, cap);14251425+}14261426+EXPORT_SYMBOL(netlink_net_capable);14271427+14281428+static inline int netlink_allowed(const struct socket *sock, unsigned int flag)13641429{13651430 return (nl_table[sock->sk->sk_protocol].flags & flag) ||13661431 ns_capable(sock_net(sock->sk)->user_ns, CAP_NET_ADMIN);···1493142814941429 /* Only superuser is allowed to listen multicasts */14951430 if (nladdr->nl_groups) {14961496- if (!netlink_capable(sock, NL_CFG_F_NONROOT_RECV))14311431+ if (!netlink_allowed(sock, NL_CFG_F_NONROOT_RECV))14971432 return -EPERM;14981433 err = netlink_realloc_groups(sk);14991434 if (err)···15551490 return -EINVAL;1556149115571492 if ((nladdr->nl_groups || nladdr->nl_pid) &&15581558- !netlink_capable(sock, NL_CFG_F_NONROOT_SEND))14931493+ !netlink_allowed(sock, NL_CFG_F_NONROOT_SEND))15591494 return -EPERM;1560149515611496 if (!nlk->portid)···21612096 break;21622097 case NETLINK_ADD_MEMBERSHIP:21632098 case NETLINK_DROP_MEMBERSHIP: {21642164- if (!netlink_capable(sock, NL_CFG_F_NONROOT_RECV))20992099+ 
if (!netlink_allowed(sock, NL_CFG_F_NONROOT_RECV))21652100 return -EPERM;21662101 err = netlink_realloc_groups(sk);21672102 if (err)···23122247 dst_group = ffs(addr->nl_groups);23132248 err = -EPERM;23142249 if ((dst_group || dst_portid) &&23152315- !netlink_capable(sock, NL_CFG_F_NONROOT_SEND))22502250+ !netlink_allowed(sock, NL_CFG_F_NONROOT_SEND))23162251 goto out;23172252 } else {23182253 dst_portid = nlk->dst_portid;
+1-1
net/netlink/genetlink.c
···561561 return -EOPNOTSUPP;562562563563 if ((ops->flags & GENL_ADMIN_PERM) &&564564- !capable(CAP_NET_ADMIN))564564+ !netlink_capable(skb, CAP_NET_ADMIN))565565 return -EPERM;566566567567 if ((nlh->nlmsg_flags & NLM_F_DUMP) == NLM_F_DUMP) {
···7070 int err;7171 u8 pnaddr;72727373- if (!capable(CAP_NET_ADMIN))7373+ if (!netlink_capable(skb, CAP_NET_ADMIN))7474 return -EPERM;75757676- if (!capable(CAP_SYS_ADMIN))7676+ if (!netlink_capable(skb, CAP_SYS_ADMIN))7777 return -EPERM;78787979 ASSERT_RTNL();···233233 int err;234234 u8 dst;235235236236- if (!capable(CAP_NET_ADMIN))236236+ if (!netlink_capable(skb, CAP_NET_ADMIN))237237 return -EPERM;238238239239- if (!capable(CAP_SYS_ADMIN))239239+ if (!netlink_capable(skb, CAP_SYS_ADMIN))240240 return -EPERM;241241242242 ASSERT_RTNL();
+1-1
net/sched/act_api.c
···948948 u32 portid = skb ? NETLINK_CB(skb).portid : 0;949949 int ret = 0, ovr = 0;950950951951- if ((n->nlmsg_type != RTM_GETACTION) && !capable(CAP_NET_ADMIN))951951+ if ((n->nlmsg_type != RTM_GETACTION) && !netlink_capable(skb, CAP_NET_ADMIN))952952 return -EPERM;953953954954 ret = nlmsg_parse(n, sizeof(struct tcamsg), tca, TCA_ACT_MAX, NULL);
+1-1
net/sched/cls_api.c
···134134 int err;135135 int tp_created = 0;136136137137- if ((n->nlmsg_type != RTM_GETTFILTER) && !capable(CAP_NET_ADMIN))137137+ if ((n->nlmsg_type != RTM_GETTFILTER) && !netlink_capable(skb, CAP_NET_ADMIN))138138 return -EPERM;139139140140replay:
+3-3
net/sched/sch_api.c
···10841084 struct Qdisc *p = NULL;10851085 int err;1086108610871087- if ((n->nlmsg_type != RTM_GETQDISC) && !capable(CAP_NET_ADMIN))10871087+ if ((n->nlmsg_type != RTM_GETQDISC) && !netlink_capable(skb, CAP_NET_ADMIN))10881088 return -EPERM;1089108910901090 err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL);···11511151 struct Qdisc *q, *p;11521152 int err;1153115311541154- if (!capable(CAP_NET_ADMIN))11541154+ if (!netlink_capable(skb, CAP_NET_ADMIN))11551155 return -EPERM;1156115611571157replay:···14901490 u32 qid;14911491 int err;1492149214931493- if ((n->nlmsg_type != RTM_GETTCLASS) && !capable(CAP_NET_ADMIN))14931493+ if ((n->nlmsg_type != RTM_GETTCLASS) && !netlink_capable(skb, CAP_NET_ADMIN))14941494 return -EPERM;1495149514961496 err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL);
+6-5
net/sched/sch_hhf.c
···553553 if (err < 0)554554 return err;555555556556- sch_tree_lock(sch);557557-558558- if (tb[TCA_HHF_BACKLOG_LIMIT])559559- sch->limit = nla_get_u32(tb[TCA_HHF_BACKLOG_LIMIT]);560560-561556 if (tb[TCA_HHF_QUANTUM])562557 new_quantum = nla_get_u32(tb[TCA_HHF_QUANTUM]);563558···562567 non_hh_quantum = (u64)new_quantum * new_hhf_non_hh_weight;563568 if (non_hh_quantum > INT_MAX)564569 return -EINVAL;570570+571571+ sch_tree_lock(sch);572572+573573+ if (tb[TCA_HHF_BACKLOG_LIMIT])574574+ sch->limit = nla_get_u32(tb[TCA_HHF_BACKLOG_LIMIT]);575575+565576 q->quantum = new_quantum;566577 q->hhf_non_hh_weight = new_hhf_non_hh_weight;567578
···496496497497 /* If the transport error count is greater than the pf_retrans498498 * threshold, and less than pathmaxrtx, and if the current state499499- * is not SCTP_UNCONFIRMED, then mark this transport as Partially500500- * Failed, see SCTP Quick Failover Draft, section 5.1499499+ * is SCTP_ACTIVE, then mark this transport as Partially Failed,500500+ * see SCTP Quick Failover Draft, section 5.1501501 */502502- if ((transport->state != SCTP_PF) &&503503- (transport->state != SCTP_UNCONFIRMED) &&502502+ if ((transport->state == SCTP_ACTIVE) &&504503 (asoc->pf_retrans < transport->pathmaxrxt) &&505504 (transport->error_count > asoc->pf_retrans)) {506505
+1-1
net/tipc/netlink.c
···4747 int hdr_space = nlmsg_total_size(GENL_HDRLEN + TIPC_GENL_HDRLEN);4848 u16 cmd;49495050- if ((req_userhdr->cmd & 0xC000) && (!capable(CAP_NET_ADMIN)))5050+ if ((req_userhdr->cmd & 0xC000) && (!netlink_capable(skb, CAP_NET_ADMIN)))5151 cmd = TIPC_CMD_NOT_NET_ADMIN;5252 else5353 cmd = req_userhdr->cmd;
+22-25
net/vmw_vsock/af_vsock.c
···19251925 .fops = &vsock_device_ops,19261926};1927192719281928-static int __vsock_core_init(void)19281928+int __vsock_core_init(const struct vsock_transport *t, struct module *owner)19291929{19301930- int err;19301930+ int err = mutex_lock_interruptible(&vsock_register_mutex);19311931+19321932+ if (err)19331933+ return err;19341934+19351935+ if (transport) {19361936+ err = -EBUSY;19371937+ goto err_busy;19381938+ }19391939+19401940+ /* Transport must be the owner of the protocol so that it can't19411941+ * unload while there are open sockets.19421942+ */19431943+ vsock_proto.owner = owner;19441944+ transport = t;1931194519321946 vsock_init_tables();19331947···19651951 goto err_unregister_proto;19661952 }1967195319541954+ mutex_unlock(&vsock_register_mutex);19681955 return 0;1969195619701957err_unregister_proto:19711958 proto_unregister(&vsock_proto);19721959err_misc_deregister:19731960 misc_deregister(&vsock_device);19611961+ transport = NULL;19621962+err_busy:19631963+ mutex_unlock(&vsock_register_mutex);19741964 return err;19751965}19761976-19771977-int vsock_core_init(const struct vsock_transport *t)19781978-{19791979- int retval = mutex_lock_interruptible(&vsock_register_mutex);19801980- if (retval)19811981- return retval;19821982-19831983- if (transport) {19841984- retval = -EBUSY;19851985- goto out;19861986- }19871987-19881988- transport = t;19891989- retval = __vsock_core_init();19901990- if (retval)19911991- transport = NULL;19921992-19931993-out:19941994- mutex_unlock(&vsock_register_mutex);19951995- return retval;19961996-}19971997-EXPORT_SYMBOL_GPL(vsock_core_init);19661966+EXPORT_SYMBOL_GPL(__vsock_core_init);1998196719991968void vsock_core_exit(void)20001969{···1997200019982001MODULE_AUTHOR("VMware, Inc.");19992002MODULE_DESCRIPTION("VMware Virtual Socket Family");20002000-MODULE_VERSION("1.0.0.0-k");20032003+MODULE_VERSION("1.0.1.0-k");20012004MODULE_LICENSE("GPL v2");
+1-1
net/xfrm/xfrm_user.c
···23772377 link = &xfrm_dispatch[type];2378237823792379 /* All operations require privileges, even GET */23802380- if (!ns_capable(net->user_ns, CAP_NET_ADMIN))23802380+ if (!netlink_net_capable(skb, CAP_NET_ADMIN))23812381 return -EPERM;2382238223832383 if ((type == (XFRM_MSG_GETSA - XFRM_MSG_BASE) ||
+5
scripts/sortextable.c
···3535#define EM_ARCOMPACT 933636#endif37373838+#ifndef EM_XTENSA3939+#define EM_XTENSA 944040+#endif4141+3842#ifndef EM_AARCH643943#define EM_AARCH64 1834044#endif···285281 case EM_AARCH64:286282 case EM_MICROBLAZE:287283 case EM_MIPS:284284+ case EM_XTENSA:288285 break;289286 } /* end switch */290287
···104104 }105105 return buffer;106106}107107-108108-/**109109- * kvfree - free an allocation do by kvmalloc110110- * @buffer: buffer to free (MAYBE_NULL)111111- *112112- * Free a buffer allocated by kvmalloc113113- */114114-void kvfree(void *buffer)115115-{116116- if (is_vmalloc_addr(buffer))117117- vfree(buffer);118118- else119119- kfree(buffer);120120-}
···651651 int err = -ENODEV;652652653653 down_read(&chip->shutdown_rwsem);654654- if (chip->probing)654654+ if (chip->probing && chip->in_pm)655655 err = 0;656656 else if (!chip->shutdown)657657 err = usb_autopm_get_interface(chip->pm_intf);···663663void snd_usb_autosuspend(struct snd_usb_audio *chip)664664{665665 down_read(&chip->shutdown_rwsem);666666- if (!chip->shutdown && !chip->probing)666666+ if (!chip->shutdown && !chip->probing && !chip->in_pm)667667 usb_autopm_put_interface(chip->pm_intf);668668 up_read(&chip->shutdown_rwsem);669669}···695695 chip->autosuspended = 1;696696 }697697698698- list_for_each_entry(mixer, &chip->mixer_list, list)699699- snd_usb_mixer_suspend(mixer);698698+ if (chip->num_suspended_intf == 1)699699+ list_for_each_entry(mixer, &chip->mixer_list, list)700700+ snd_usb_mixer_suspend(mixer);700701701702 return 0;702703}···712711 return 0;713712 if (--chip->num_suspended_intf)714713 return 0;714714+715715+ chip->in_pm = 1;715716 /*716717 * ALSA leaves material resumption to user space717718 * we just notify and restart the mixers···729726 chip->autosuspended = 0;730727731728err_out:729729+ chip->in_pm = 0;732730 return err;733731}734732
+1
sound/usb/card.h
···9292 unsigned int curframesize; /* current packet size in frames (for capture) */9393 unsigned int syncmaxsize; /* sync endpoint packet size */9494 unsigned int fill_max:1; /* fill max packet size always */9595+ unsigned int udh01_fb_quirk:1; /* corrupted feedback data */9596 unsigned int datainterval; /* log_2 of data packet interval */9697 unsigned int syncinterval; /* P for adaptive mode, 0 otherwise */9798 unsigned char silence_value;
+14-1
sound/usb/endpoint.c
···471471 ep->syncinterval = 3;472472473473 ep->syncmaxsize = le16_to_cpu(get_endpoint(alts, 1)->wMaxPacketSize);474474+475475+ if (chip->usb_id == USB_ID(0x0644, 0x8038) /* TEAC UD-H01 */ &&476476+ ep->syncmaxsize == 4)477477+ ep->udh01_fb_quirk = 1;474478 }475479476480 list_add_tail(&ep->list, &chip->ep_list);···11091105 if (f == 0)11101106 return;1111110711121112- if (unlikely(ep->freqshift == INT_MIN)) {11081108+ if (unlikely(sender->udh01_fb_quirk)) {11091109+ /*11101110+ * The TEAC UD-H01 firmware sometimes changes the feedback value11111111+ * by +/- 0x1.0000.11121112+ */11131113+ if (f < ep->freqn - 0x8000)11141114+ f += 0x10000;11151115+ else if (f > ep->freqn + 0x8000)11161116+ f -= 0x10000;11171117+ } else if (unlikely(ep->freqshift == INT_MIN)) {11131118 /*11141119 * The first time we see a feedback value, determine its format11151120 * by shifting it left or right until it matches the nominal
+2-3
sound/usb/pcm.c
···15011501 * The error should be lower than 2ms since the estimate relies15021502 * on two reads of a counter updated every ms.15031503 */15041504- if (printk_ratelimit() &&15051505- abs(est_delay - subs->last_delay) * 1000 > runtime->rate * 2)15061506- dev_dbg(&subs->dev->dev,15041504+ if (abs(est_delay - subs->last_delay) * 1000 > runtime->rate * 2)15051505+ dev_dbg_ratelimited(&subs->dev->dev,15071506 "delay: estimated %d, actual %d\n",15081507 est_delay, subs->last_delay);15091508
+1
sound/usb/usbaudio.h
···4040 struct rw_semaphore shutdown_rwsem;4141 unsigned int shutdown:1;4242 unsigned int probing:1;4343+ unsigned int in_pm:1;4344 unsigned int autosuspended:1; 4445 unsigned int txfr_quirk:1; /* Subframe boundaries on transfers */4546
···23232424 sp = (unsigned long) regs[PERF_REG_X86_SP];25252626- map = map_groups__find(&thread->mg, MAP__FUNCTION, (u64) sp);2626+ map = map_groups__find(&thread->mg, MAP__VARIABLE, (u64) sp);2727 if (!map) {2828 pr_debug("failed to get stack map\n");2929+ free(buf);2930 return -1;3031 }3132
+7-1
tools/perf/arch/x86/tests/regs_load.S
···11-
21#include <linux/linkage.h>
32
43#define AX 0
···8990 ret
9091ENDPROC(perf_regs_load)
9192#endif
9393+
9494+/*
9595+ * We need to provide note.GNU-stack section, saying that we want
9696+ * NOT executable stack. Otherwise the final linking will assume that
9797+ * the ELF stack should not be restricted at all and set it RWX.
9898+ */
9999+.section .note.GNU-stack,"",@progbits
+35-11
tools/perf/config/Makefile
···3434 LIBUNWIND_LIBS = -lunwind -lunwind-arm
3535endif
3636
3737+# So far there's only x86 libdw unwind support merged in perf.
3838+# Disable it on all other architectures in case libdw unwind
3939+# support is detected in system. Add supported architectures
4040+# to the check.
4141+ifneq ($(ARCH),x86)
4242+ NO_LIBDW_DWARF_UNWIND := 1
4343+endif
4444+
3745ifeq ($(LIBUNWIND_LIBS),)
3846 NO_LIBUNWIND := 1
3947else
···116108CFLAGS += -Wall
117109CFLAGS += -Wextra
118110CFLAGS += -std=gnu99
111111+
112112+# Enforce a non-executable stack, as we may regress (again) in the future by
113113+# adding assembler files missing the .GNU-stack linker note.
114114+LDFLAGS += -Wl,-z,noexecstack
119115
120116EXTLIBS = -lelf -lpthread -lrt -lm -ldl
121117
···198186 stackprotector-all \
199187 timerfd \
200188 libunwind-debug-frame \
201201- bionic
189189+ bionic \
190190+ liberty \
191191+ liberty-z \
192192+ cplus-demangle
202193
203194# Set FEATURE_CHECK_(C|LD)FLAGS-all for all CORE_FEATURE_TESTS features.
204195# If in the future we need per-feature checks/flags for features not
···519504endif
520505
521506ifeq ($(feature-libbfd), 1)
522522- EXTLIBS += -lbfd -lz -liberty
507507+ EXTLIBS += -lbfd
508508+
509509+ # call all detections now so we get correct
510510+ # status in VF output
511511+ $(call feature_check,liberty)
512512+ $(call feature_check,liberty-z)
513513+ $(call feature_check,cplus-demangle)
514514+
515515+ ifeq ($(feature-liberty), 1)
516516+ EXTLIBS += -liberty
517517+ else
518518+ ifeq ($(feature-liberty-z), 1)
519519+ EXTLIBS += -liberty -lz
520520+ endif
521521+ endif
523522endif
524523
525524ifdef NO_DEMANGLE
···544515 CFLAGS += -DHAVE_CPLUS_DEMANGLE_SUPPORT
545516 else
546517 ifneq ($(feature-libbfd), 1)
547547- $(call feature_check,liberty)
548548- ifeq ($(feature-liberty), 1)
549549- EXTLIBS += -lbfd -liberty
550550- else
551551- $(call feature_check,liberty-z)
552552- ifeq ($(feature-liberty-z), 1)
553553- EXTLIBS += -lbfd -liberty -lz
554554- else
555555- $(call feature_check,cplus-demangle)
518518+ ifneq ($(feature-liberty), 1)
519519+ ifneq ($(feature-liberty-z), 1)
520520+ # we dont have neither HAVE_CPLUS_DEMANGLE_SUPPORT
521521+ # or any of 'bfd iberty z' trinity
556522 ifeq ($(feature-cplus-demangle), 1)
557523 EXTLIBS += -liberty
558524 CFLAGS += -DHAVE_CPLUS_DEMANGLE_SUPPORT
+2
tools/perf/tests/make
···4646make_install_html := install-html
4747make_install_info := install-info
4848make_install_pdf := install-pdf
4949+make_static := LDFLAGS=-static
4950
5051# all the NO_* variable combined
5152make_minimal := NO_LIBPERL=1 NO_LIBPYTHON=1 NO_NEWT=1 NO_GTK2=1
···8887# run += make_install_info
8988# run += make_install_pdf
9089run += make_minimal
9090+run += make_static
9191
9292ifneq ($(call has,ctags),)
9393run += make_tags
+12-4
tools/perf/util/machine.c
···717717}
718718
719719static int map_groups__set_modules_path_dir(struct map_groups *mg,
720720- const char *dir_name)
720720+ const char *dir_name, int depth)
721721{
722722 struct dirent *dent;
723723 DIR *dir = opendir(dir_name);
···742742 !strcmp(dent->d_name, ".."))
743743 continue;
744744
745745- ret = map_groups__set_modules_path_dir(mg, path);
745745+ /* Do not follow top-level source and build symlinks */
746746+ if (depth == 0) {
747747+ if (!strcmp(dent->d_name, "source") ||
748748+ !strcmp(dent->d_name, "build"))
749749+ continue;
750750+ }
751751+
752752+ ret = map_groups__set_modules_path_dir(mg, path,
753753+ depth + 1);
746754 if (ret < 0)
747755 goto out;
748756 } else {
···794786 if (!version)
795787 return -1;
796788
797797- snprintf(modules_path, sizeof(modules_path), "%s/lib/modules/%s/kernel",
789789+ snprintf(modules_path, sizeof(modules_path), "%s/lib/modules/%s",
798790 machine->root_dir, version);
799791 free(version);
800792
801801- return map_groups__set_modules_path_dir(&machine->kmaps, modules_path);
793793+ return map_groups__set_modules_path_dir(&machine->kmaps, modules_path, 0);
802794}
803795
804796static int machine__create_module(void *arg, const char *name, u64 start)
+8-7
virt/kvm/arm/vgic.c
···548548 u32 val;
549549 u32 *reg;
550550
551551- offset >>= 1;
552551 reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_cfg,
553553- vcpu->vcpu_id, offset);
552552+ vcpu->vcpu_id, offset >> 1);
554553
555555- if (offset & 2)
554554+ if (offset & 4)
556555 val = *reg >> 16;
557556 else
558557 val = *reg & 0xffff;
···560561 vgic_reg_access(mmio, &val, offset,
561562 ACCESS_READ_VALUE | ACCESS_WRITE_VALUE);
562563 if (mmio->is_write) {
563563- if (offset < 4) {
564564+ if (offset < 8) {
564565 *reg = ~0U; /* Force PPIs/SGIs to 1 */
565566 return false;
566567 }
567568
568569 val = vgic_cfg_compress(val);
569569- if (offset & 2) {
570570+ if (offset & 4) {
570571 *reg &= 0xffff;
571572 *reg |= val << 16;
572573 } else {
···915916 case 0:
916917 if (!target_cpus)
917918 return;
919919+ break;
918920
919921 case 1:
920922 target_cpus = ((1 << nrcpus) - 1) & ~(1 << vcpu_id) & 0xff;
···16671667 if (addr + size < addr)
16681668 return -EINVAL;
16691669
16701670+ *ioaddr = addr;
16701671 ret = vgic_ioaddr_overlap(kvm);
16711672 if (ret)
16721672- return ret;
16731673- *ioaddr = addr;
16731673+ *ioaddr = VGIC_ADDR_UNDEF;
16741674+
16741675 return ret;
16751676}
16761677
+2-1
virt/kvm/assigned-dev.c
···395395 if (dev->entries_nr == 0)
396396 return r;
397397
398398- r = pci_enable_msix(dev->dev, dev->host_msix_entries, dev->entries_nr);
398398+ r = pci_enable_msix_exact(dev->dev,
399399+ dev->host_msix_entries, dev->entries_nr);
399400 if (r)
400401 return r;
401402