···
 
 The ARC HS can be configured with a pipeline performance monitor for counting
 CPU and cache events like cache misses and hits. Like conventional PCT there
-are 100+ hardware conditions dynamically mapped to upto 32 counters.
+are 100+ hardware conditions dynamically mapped to up to 32 counters.
 It also supports overflow interrupts.
 
 Required properties:
+1-1
Documentation/devicetree/bindings/arc/pct.txt
···
 
 The ARC700 can be configured with a pipeline performance monitor for counting
 CPU and cache events like cache misses and hits. Like conventional PCT there
-are 100+ hardware conditions dynamically mapped to upto 32 counters
+are 100+ hardware conditions dynamically mapped to up to 32 counters
 
 Note that:
  * The ARC 700 PCT does not support interrupts; although HW events may be
···
 Required properties :
 
  - reg : Offset and length of the register set for the device
- - compatible : should be "rockchip,rk3066-i2c", "rockchip,rk3188-i2c" or
-   "rockchip,rk3288-i2c".
+ - compatible : should be "rockchip,rk3066-i2c", "rockchip,rk3188-i2c",
+   "rockchip,rk3228-i2c" or "rockchip,rk3288-i2c".
 - interrupts : interrupt number
 - clocks : parent clock
 
+3-3
Documentation/devicetree/bindings/net/cpsw.txt
···
 Optional properties:
 - dual_emac_res_vlan	: Specifies VID to be used to segregate the ports
 - mac-address		: See ethernet.txt file in the same directory
-- phy_id		: Specifies slave phy id
+- phy_id		: Specifies slave phy id (deprecated, use phy-handle)
 - phy-handle		: See ethernet.txt file in the same directory
 
 Slave sub-nodes:
 - fixed-link		: See fixed-link.txt file in the same directory
-- Either the property phy_id, or the sub-node
-  fixed-link can be specified
+
+Note: Exactly one of phy_id, phy-handle, or fixed-link must be specified.
 
 Note: "ti,hwmods" field is used to fetch the base address and irq
 resources from TI, omap hwmod data base during device registration.
+3-3
Documentation/networking/altera_tse.txt
···
 using the SGDMA and MSGDMA soft DMA IP components. The driver uses the
 platform bus to obtain component resources. The designs used to test this
 driver were built for a Cyclone(R) V SOC FPGA board, a Cyclone(R) V FPGA board,
-and tested with ARM and NIOS processor hosts seperately. The anticipated use
+and tested with ARM and NIOS processor hosts separately. The anticipated use
 cases are simple communications between an embedded system and an external peer
 for status and simple configuration of the embedded system.
···
 4.1) Transmit process
 When the driver's transmit routine is called by the kernel, it sets up a
 transmit descriptor by calling the underlying DMA transmit routine (SGDMA or
-MSGDMA), and initites a transmit operation. Once the transmit is complete, an
+MSGDMA), and initiates a transmit operation. Once the transmit is complete, an
 interrupt is driven by the transmit DMA logic. The driver handles the transmit
 completion in the context of the interrupt handling chain by recycling
 resource required to send and track the requested transmit operation.
 
 4.2) Receive process
 The driver will post receive buffers to the receive DMA logic during driver
-intialization. Receive buffers may or may not be queued depending upon the
+initialization. Receive buffers may or may not be queued depending upon the
 underlying DMA logic (MSGDMA is able queue receive buffers, SGDMA is not able
 to queue receive buffers to the SGDMA receive logic). When a packet is
 received, the DMA logic generates an interrupt. The driver handles a receive
+3-3
Documentation/networking/ipvlan.txt
···
   This is conceptually very similar to the macvlan driver with one major
 exception of using L3 for mux-ing /demux-ing among slaves. This property makes
 the master device share the L2 with it's slave devices. I have developed this
-driver in conjuntion with network namespaces and not sure if there is use case
+driver in conjunction with network namespaces and not sure if there is use case
 outside of it.
 
 
···
 as well.
 
 4.2 L3 mode:
-	In this mode TX processing upto L3 happens on the stack instance attached
+	In this mode TX processing up to L3 happens on the stack instance attached
 to the slave device and packets are switched to the stack instance of the
 master device for the L2 processing and routing from that instance will be
 used before packets are queued on the outbound device. In this mode the slaves
···
 	(a) The Linux host that is connected to the external switch / router has
 policy configured that allows only one mac per port.
 	(b) No of virtual devices created on a master exceed the mac capacity and
-puts the NIC in promiscous mode and degraded performance is a concern.
+puts the NIC in promiscuous mode and degraded performance is a concern.
 	(c) If the slave device is to be put into the hostile / untrusted network
 namespace where L2 on the slave could be changed / misused.
 
+3-3
Documentation/networking/pktgen.txt
···
  * add_device DEVICE@NAME -- adds a single device
  * rem_device_all        -- remove all associated devices
 
-When adding a device to a thread, a corrosponding procfile is created
+When adding a device to a thread, a corresponding procfile is created
 which is used for configuring this device. Thus, device names need to
 be unique.
 
 To support adding the same device to multiple threads, which is useful
-with multi queue NICs, a the device naming scheme is extended with "@":
+with multi queue NICs, the device naming scheme is extended with "@":
  device@something
 
 The part after "@" can be anything, but it is custom to use the thread
···
 
 A collection of tutorial scripts and helpers for pktgen is in the
 samples/pktgen directory. The helper parameters.sh file support easy
-and consistant parameter parsing across the sample scripts.
+and consistent parameter parsing across the sample scripts.
 
 Usage example and help:
  ./pktgen_sample01_simple.sh -i eth4 -m 00:1B:21:3C:9D:F8 -d 192.168.8.2
+1-1
Documentation/networking/vrf.txt
···
 the VRF device. Similarly on egress routing rules are used to send packets
 to the VRF device driver before getting sent out the actual interface. This
 allows tcpdump on a VRF device to capture all packets into and out of the
-VRF as a whole.[1] Similiarly, netfilter [2] and tc rules can be applied
+VRF as a whole.[1] Similarly, netfilter [2] and tc rules can be applied
 using the VRF device to specify rules that apply to the VRF domain as a whole.
 
 [1] Packets in the forwarded state do not flow through the device, so those
+3-3
Documentation/networking/xfrm_sync.txt
···
 from Jamal <hadi@cyberus.ca>.
 
 The end goal for syncing is to be able to insert attributes + generate
-events so that the an SA can be safely moved from one machine to another
+events so that the SA can be safely moved from one machine to another
 for HA purposes.
 The idea is to synchronize the SA so that the takeover machine can do
 the processing of the SA as accurate as possible if it has access to it.
···
 These patches add ability to sync and have accurate lifetime byte (to
 ensure proper decay of SAs) and replay counters to avoid replay attacks
 with as minimal loss at failover time.
-This way a backup stays as closely uptodate as an active member.
+This way a backup stays as closely up-to-date as an active member.
 
 Because the above items change for every packet the SA receives,
 it is possible for a lot of the events to be generated.
···
 there is a period where the timer threshold expires with no packets
 seen, then an odd behavior is seen as follows:
 The first packet arrival after a timer expiry will trigger a timeout
-aevent; i.e we dont wait for a timeout period or a packet threshold
+event; i.e we don't wait for a timeout period or a packet threshold
 to be reached. This is done for simplicity and efficiency reasons.
 
 -JHS
+9-8
Documentation/sysctl/vm.txt
···
 "Zone Order" orders the zonelists by zone type, then by node within each
 zone.  Specify "[Zz]one" for zone order.
 
-Specify "[Dd]efault" to request automatic configuration.  Autoconfiguration
-will select "node" order in following case.
-(1) if the DMA zone does not exist or
-(2) if the DMA zone comprises greater than 50% of the available memory or
-(3) if any node's DMA zone comprises greater than 70% of its local memory and
-    the amount of local memory is big enough.
+Specify "[Dd]efault" to request automatic configuration.
 
-Otherwise, "zone" order will be selected. Default order is recommended unless
-this is causing problems for your system/application.
+On 32-bit, the Normal zone needs to be preserved for allocations accessible
+by the kernel, so "zone" order will be selected.
+
+On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
+order will be selected.
+
+Default order is recommended unless this is causing problems for your
+system/application.
 
 ==============================================================
+7-6
MAINTAINERS
···
 
 FUSE: FILESYSTEM IN USERSPACE
 M:	Miklos Szeredi <miklos@szeredi.hu>
-L:	fuse-devel@lists.sourceforge.net
+L:	linux-fsdevel@vger.kernel.org
 W:	http://fuse.sourceforge.net/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse.git
 S:	Maintained
···
 F:	include/net/gre.h
 
 GRETH 10/100/1G Ethernet MAC device driver
-M:	Kristoffer Glembo <kristoffer@gaisler.com>
+M:	Andreas Larsson <andreas@gaisler.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/ethernet/aeroflex/
···
 
 ISCSI EXTENSIONS FOR RDMA (ISER) INITIATOR
 M:	Or Gerlitz <ogerlitz@mellanox.com>
-M:	Sagi Grimberg <sagig@mellanox.com>
+M:	Sagi Grimberg <sagi@grimberg.me>
 M:	Roi Dayan <roid@mellanox.com>
 L:	linux-rdma@vger.kernel.org
 S:	Supported
···
 F:	drivers/infiniband/ulp/iser/
 
 ISCSI EXTENSIONS FOR RDMA (ISER) TARGET
-M:	Sagi Grimberg <sagig@mellanox.com>
+M:	Sagi Grimberg <sagi@grimberg.me>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master
 L:	linux-rdma@vger.kernel.org
 L:	target-devel@vger.kernel.org
···
 F:	mm/kmemleak-test.c
 
 KPROBES
-M:	Ananth N Mavinakayanahalli <ananth@in.ibm.com>
+M:	Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
 M:	Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
 M:	"David S. Miller" <davem@davemloft.net>
 M:	Masami Hiramatsu <mhiramat@kernel.org>
···
 
 SFC NETWORK DRIVER
 M:	Solarflare linux maintainers <linux-net-drivers@solarflare.com>
-M:	Shradha Shah <sshah@solarflare.com>
+M:	Edward Cree <ecree@solarflare.com>
+M:	Bert Kenward <bkenward@solarflare.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/sfc/
+2-2
Makefile
···
 VERSION = 4
 PATCHLEVEL = 6
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
-NAME = Blurry Fish Butt
+EXTRAVERSION = -rc6
+NAME = Charred Weasel
 
 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
···
 #define STATUS_AD_MASK		(1<<STATUS_AD_BIT)
 #define STATUS_IE_MASK		(1<<STATUS_IE_BIT)
 
+/* status32 Bits as encoded/expected by CLRI/SETI */
+#define CLRI_STATUS_IE_BIT	4
+
+#define CLRI_STATUS_E_MASK	0xF
+#define CLRI_STATUS_IE_MASK	(1 << CLRI_STATUS_IE_BIT)
+
 #define AUX_USER_SP		0x00D
 #define AUX_IRQ_CTRL		0x00E
 #define AUX_IRQ_ACT		0x043	/* Active Intr across all levels */
···
 	:
 	: "memory");
 
+	/* To be compatible with irq_save()/irq_restore()
+	 * encode the irq bits as expected by CLRI/SETI
+	 * (this was needed to make CONFIG_TRACE_IRQFLAGS work)
+	 */
+	temp = (1 << 5) |
+		((!!(temp & STATUS_IE_MASK)) << CLRI_STATUS_IE_BIT) |
+		(temp & CLRI_STATUS_E_MASK);
 	return temp;
 }
···
  */
 static inline int arch_irqs_disabled_flags(unsigned long flags)
 {
-	return !(flags & (STATUS_IE_MASK));
+	return !(flags & CLRI_STATUS_IE_MASK);
 }
 
 static inline int arch_irqs_disabled(void)
···
 
 #else
 
+#ifdef CONFIG_TRACE_IRQFLAGS
+
+.macro TRACE_ASM_IRQ_DISABLE
+	bl	trace_hardirqs_off
+.endm
+
+.macro TRACE_ASM_IRQ_ENABLE
+	bl	trace_hardirqs_on
+.endm
+
+#else
+
+.macro TRACE_ASM_IRQ_DISABLE
+.endm
+
+.macro TRACE_ASM_IRQ_ENABLE
+.endm
+
+#endif
 .macro IRQ_DISABLE	scratch
 	clri
+	TRACE_ASM_IRQ_DISABLE
 .endm
 
 .macro IRQ_ENABLE	scratch
+	TRACE_ASM_IRQ_ENABLE
 	seti
 .endm
+9-1
arch/arc/kernel/entry-arcv2.S
···
 
 	clri		; To make status32.IE agree with CPU internal state
 
-	lr  r0, [ICAUSE]
+#ifdef CONFIG_TRACE_IRQFLAGS
+	TRACE_ASM_IRQ_DISABLE
+#endif
 
+	lr  r0, [ICAUSE]
 	mov   blink, ret_from_exception
 
 	b.d  arch_do_IRQ
···
 ; All 2 entry points to here already disable interrupts
 
 .Lrestore_regs:
+
+	# Interrpts are actually disabled from this point on, but will get
+	# reenabled after we return from interrupt/exception.
+	# But irq tracer needs to be told now...
+	TRACE_ASM_IRQ_ENABLE
 
 	ld	r0, [sp, PT_status32]	; U/K mode at time of entry
 	lr	r10, [AUX_IRQ_ACT]
+3
arch/arc/kernel/entry-compact.S
···
 
 .Lrestore_regs:
 
+	# Interrpts are actually disabled from this point on, but will get
+	# reenabled after we return from interrupt/exception.
+	# But irq tracer needs to be told now...
 	TRACE_ASM_IRQ_ENABLE
 
 	lr	r10, [status32]
···
 CONFIG_INET_ESP=y
 CONFIG_INET_IPCOMP=y
 # CONFIG_INET_LRO is not set
-CONFIG_IPV6_PRIVACY=y
 CONFIG_INET6_AH=m
 CONFIG_INET6_ESP=m
 CONFIG_INET6_IPCOMP=m
···
 	case SUN4V_CHIP_NIAGARA5:
 	case SUN4V_CHIP_SPARC_M6:
 	case SUN4V_CHIP_SPARC_M7:
+	case SUN4V_CHIP_SPARC_SN:
 	case SUN4V_CHIP_SPARC64X:
 		rover_inc_table = niagara_iterate_method;
 		break;
···
 jiffies = jiffies_64;
 #endif
 
+#ifdef CONFIG_SPARC64
+ASSERT((swapper_tsb == 0x0000000000408000), "Error: sparc64 early assembler too large")
+#endif
+
 SECTIONS
 {
 #ifdef CONFIG_SPARC64
+1-2
arch/sparc/kernel/winfixup.S
···
 	 rd	%pc, %g7
 	call	do_sparc64_fault
 	 add	%sp, PTREGS_OFF, %o0
-	ba,pt	%xcc, rtrap
-	 nop
+	ba,a,pt	%xcc, rtrap
 
 	/* Be very careful about usage of the trap globals here.
 	 * You cannot touch %g5 as that has the fault information.
···
 
 	case 78: /* 14nm Skylake Mobile */
 	case 94: /* 14nm Skylake Desktop */
+	case 85: /* 14nm Skylake Server */
 		x86_pmu.late_ack = true;
 		memcpy(hw_cache_event_ids, skl_hw_cache_event_ids, sizeof(hw_cache_event_ids));
 		memcpy(hw_cache_extra_regs, skl_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
+4-2
arch/x86/events/intel/lbr.c
···
 
 #define LBR_PLM (LBR_KERNEL | LBR_USER)
 
-#define LBR_SEL_MASK	0x1ff	/* valid bits in LBR_SELECT */
+#define LBR_SEL_MASK	0x3ff	/* valid bits in LBR_SELECT */
 #define LBR_NOT_SUPP	-1	/* LBR filter not supported */
 #define LBR_IGN		0	/* ignored */
···
 	 * The first 9 bits (LBR_SEL_MASK) in LBR_SELECT operate
 	 * in suppress mode. So LBR_SELECT should be set to
 	 * (~mask & LBR_SEL_MASK) | (mask & ~LBR_SEL_MASK)
+	 * But the 10th bit LBR_CALL_STACK does not operate
+	 * in suppress mode.
 	 */
-	reg->config = mask ^ x86_pmu.lbr_sel_mask;
+	reg->config = mask ^ (x86_pmu.lbr_sel_mask & ~LBR_CALL_STACK);
 
 	if ((br_type & PERF_SAMPLE_BRANCH_NO_CYCLES) &&
 	    (br_type & PERF_SAMPLE_BRANCH_NO_FLAGS) &&
+64-11
arch/x86/events/intel/pt.c
···
 	struct dev_ext_attribute *de_attrs;
 	struct attribute **attrs;
 	size_t size;
+	u64 reg;
 	int ret;
 	long i;
 
+	if (boot_cpu_has(X86_FEATURE_VMX)) {
+		/*
+		 * Intel SDM, 36.5 "Tracing post-VMXON" says that
+		 * "IA32_VMX_MISC[bit 14]" being 1 means PT can trace
+		 * post-VMXON.
+		 */
+		rdmsrl(MSR_IA32_VMX_MISC, reg);
+		if (reg & BIT(14))
+			pt_pmu.vmx = true;
+	}
+
 	attrs = NULL;
···
 
 	reg |= (event->attr.config & PT_CONFIG_MASK);
 
+	event->hw.config = reg;
 	wrmsrl(MSR_IA32_RTIT_CTL, reg);
 }
 
-static void pt_config_start(bool start)
+static void pt_config_stop(struct perf_event *event)
 {
-	u64 ctl;
+	u64 ctl = READ_ONCE(event->hw.config);
+
+	/* may be already stopped by a PMI */
+	if (!(ctl & RTIT_CTL_TRACEEN))
+		return;
 
-	rdmsrl(MSR_IA32_RTIT_CTL, ctl);
-	if (start)
-		ctl |= RTIT_CTL_TRACEEN;
-	else
-		ctl &= ~RTIT_CTL_TRACEEN;
+	ctl &= ~RTIT_CTL_TRACEEN;
 	wrmsrl(MSR_IA32_RTIT_CTL, ctl);
+
+	WRITE_ONCE(event->hw.config, ctl);
 
 	/*
 	 * A wrmsr that disables trace generation serializes other PT
···
 	 * The below WMB, separating data store and aux_head store matches
 	 * the consumer's RMB that separates aux_head load and data load.
 	 */
-	if (!start)
-		wmb();
+	wmb();
 }
 
 static void pt_config_buffer(void *buf, unsigned int topa_idx,
···
 	if (!ACCESS_ONCE(pt->handle_nmi))
 		return;
 
-	pt_config_start(false);
+	/*
+	 * If VMX is on and PT does not support it, don't touch anything.
+	 */
+	if (READ_ONCE(pt->vmx_on))
+		return;
 
 	if (!event)
 		return;
+
+	pt_config_stop(event);
 
 	buf = perf_get_aux(&pt->handle);
 	if (!buf)
···
 	}
 }
 
+void intel_pt_handle_vmx(int on)
+{
+	struct pt *pt = this_cpu_ptr(&pt_ctx);
+	struct perf_event *event;
+	unsigned long flags;
+
+	/* PT plays nice with VMX, do nothing */
+	if (pt_pmu.vmx)
+		return;
+
+	/*
+	 * VMXON will clear RTIT_CTL.TraceEn; we need to make
+	 * sure to not try to set it while VMX is on. Disable
+	 * interrupts to avoid racing with pmu callbacks;
+	 * concurrent PMI should be handled fine.
+	 */
+	local_irq_save(flags);
+	WRITE_ONCE(pt->vmx_on, on);
+
+	if (on) {
+		/* prevent pt_config_stop() from writing RTIT_CTL */
+		event = pt->handle.event;
+		if (event)
+			event->hw.config = 0;
+	}
+	local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(intel_pt_handle_vmx);
+
 /*
  * PMU callbacks
  */
···
 {
 	struct pt *pt = this_cpu_ptr(&pt_ctx);
 	struct pt_buffer *buf = perf_get_aux(&pt->handle);
+
+	if (READ_ONCE(pt->vmx_on))
+		return;
 
 	if (!buf || pt_buffer_is_full(buf, pt)) {
 		event->hw.state = PERF_HES_STOPPED;
···
 	 * see comment in intel_pt_interrupt().
 	 */
 	ACCESS_ONCE(pt->handle_nmi) = 0;
-	pt_config_start(false);
+
+	pt_config_stop(event);
 
 	if (event->hw.state == PERF_HES_STOPPED)
 		return;
+3
arch/x86/events/intel/pt.h
···
 struct pt_pmu {
 	struct pmu		pmu;
 	u32			caps[PT_CPUID_REGS_NUM * PT_CPUID_LEAVES];
+	bool			vmx;
 };
 
 /**
···
  * struct pt - per-cpu pt context
  * @handle:	perf output handle
  * @handle_nmi:	do handle PT PMI on this cpu, there's an active event
+ * @vmx_on:	1 if VMX is ON on this cpu
  */
 struct pt {
 	struct perf_output_handle handle;
 	int			handle_nmi;
+	int			vmx_on;
 };
 
 #endif /* __INTEL_PT_H__ */
+1
arch/x86/events/intel/rapl.c
···
 		break;
 	case 60: /* Haswell */
 	case 69: /* Haswell-Celeron */
+	case 70: /* Haswell GT3e */
 	case 61: /* Broadwell */
 	case 71: /* Broadwell-H */
 		rapl_cntr_mask = RAPL_IDX_HSW;
···
 	struct irq_desc *desc;
 	int cpu, vector;
 
-	BUG_ON(!data->cfg.vector);
+	if (!data->cfg.vector)
+		return;
 
 	vector = data->cfg.vector;
 	for_each_cpu_and(cpu, data->domain, cpu_online_mask)
-6
arch/x86/kernel/head_32.S
···
 	/* Make changes effective */
 	wrmsr
 
-	/*
-	 * And make sure that all the mappings we set up have NX set from
-	 * the beginning.
-	 */
-	orl $(1 << (_PAGE_BIT_NX - 32)), pa(__supported_pte_mask + 4)
-
 enable_paging:
 
 /*
···
 
 void x86_configure_nx(void)
 {
-	/* If disable_nx is set, clear NX on all new mappings going forward. */
-	if (disable_nx)
+	if (boot_cpu_has(X86_FEATURE_NX) && !disable_nx)
+		__supported_pte_mask |= _PAGE_NX;
+	else
 		__supported_pte_mask &= ~_PAGE_NX;
 }
 
+6
arch/x86/xen/spinlock.c
···
 
 static void xen_qlock_kick(int cpu)
 {
+	int irq = per_cpu(lock_kicker_irq, cpu);
+
+	/* Don't kick if the target's kicker interrupt is not initialized. */
+	if (irq == -1)
+		return;
+
 	xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 }
 
+25-27
drivers/block/rbd.c
···
 		u8 *order, u64 *snap_size);
 static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
 		u64 *snap_features);
-static u64 rbd_snap_id_by_name(struct rbd_device *rbd_dev, const char *name);
 
 static int rbd_open(struct block_device *bdev, fmode_t mode)
 {
···
 	struct rbd_device *rbd_dev = (struct rbd_device *)data;
 	int ret;
 
-	if (!rbd_dev)
-		return;
-
 	dout("%s: \"%s\" notify_id %llu opcode %u\n", __func__,
 		rbd_dev->header_name, (unsigned long long)notify_id,
 		(unsigned int)opcode);
···
 
 	ceph_osdc_cancel_event(rbd_dev->watch_event);
 	rbd_dev->watch_event = NULL;
+
+	dout("%s flushing notifies\n", __func__);
+	ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc);
 }
 
 /*
···
 static void rbd_dev_update_size(struct rbd_device *rbd_dev)
 {
 	sector_t size;
-	bool removing;
 
 	/*
-	 * Don't hold the lock while doing disk operations,
-	 * or lock ordering will conflict with the bdev mutex via:
-	 * rbd_add() -> blkdev_get() -> rbd_open()
+	 * If EXISTS is not set, rbd_dev->disk may be NULL, so don't
+	 * try to update its size.  If REMOVING is set, updating size
+	 * is just useless work since the device can't be opened.
 	 */
-	spin_lock_irq(&rbd_dev->lock);
-	removing = test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags);
-	spin_unlock_irq(&rbd_dev->lock);
-	/*
-	 * If the device is being removed, rbd_dev->disk has
-	 * been destroyed, so don't try to update its size
-	 */
-	if (!removing) {
+	if (test_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags) &&
+	    !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) {
 		size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
 		dout("setting size to %llu sectors", (unsigned long long)size);
 		set_capacity(rbd_dev->disk, size);
···
 		__le64 features;
 		__le64 incompat;
 	} __attribute__ ((packed)) features_buf = { 0 };
-	u64 incompat;
+	u64 unsup;
 	int ret;
 
 	ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name,
···
 	if (ret < sizeof (features_buf))
 		return -ERANGE;
 
-	incompat = le64_to_cpu(features_buf.incompat);
-	if (incompat & ~RBD_FEATURES_SUPPORTED)
+	unsup = le64_to_cpu(features_buf.incompat) & ~RBD_FEATURES_SUPPORTED;
+	if (unsup) {
+		rbd_warn(rbd_dev, "image uses unsupported features: 0x%llx",
+			 unsup);
 		return -ENXIO;
+	}
 
 	*snap_features = le64_to_cpu(features_buf.features);
···
 	return ret;
 }
 
+/*
+ * rbd_dev->header_rwsem must be locked for write and will be unlocked
+ * upon return.
+ */
 static int rbd_dev_device_setup(struct rbd_device *rbd_dev)
 {
 	int ret;
···
 
 	ret = rbd_dev_id_get(rbd_dev);
 	if (ret)
-		return ret;
+		goto err_out_unlock;
 
 	BUILD_BUG_ON(DEV_NAME_LEN
 			< sizeof (RBD_DRV_NAME) + MAX_INT_FORMAT_WIDTH);
···
 	/* Everything's ready.  Announce the disk to the world. */
 
 	set_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags);
-	add_disk(rbd_dev->disk);
+	up_write(&rbd_dev->header_rwsem);
 
+	add_disk(rbd_dev->disk);
 	pr_info("%s: added with size 0x%llx\n", rbd_dev->disk->disk_name,
 		(unsigned long long) rbd_dev->mapping.size);
···
 	unregister_blkdev(rbd_dev->major, rbd_dev->name);
 err_out_id:
 	rbd_dev_id_put(rbd_dev);
+err_out_unlock:
+	up_write(&rbd_dev->header_rwsem);
 	return ret;
 }
···
 	spec = NULL;		/* rbd_dev now owns this */
 	rbd_opts = NULL;	/* rbd_dev now owns this */
 
+	down_write(&rbd_dev->header_rwsem);
 	rc = rbd_dev_image_probe(rbd_dev, 0);
 	if (rc < 0)
 		goto err_out_rbd_dev;
···
 	return rc;
 
 err_out_rbd_dev:
+	up_write(&rbd_dev->header_rwsem);
 	rbd_dev_destroy(rbd_dev);
 err_out_client:
 	rbd_put_client(rbdc);
···
 		return ret;
 
 	rbd_dev_header_unwatch_sync(rbd_dev);
-	/*
-	 * flush remaining watch callbacks - these must be complete
-	 * before the osd_client is shutdown
-	 */
-	dout("%s: flushing notifies", __func__);
-	ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc);
 
 	/*
 	 * Don't free anything from rbd_dev->disk until after all
···
 	if (err)
 		goto skip_tar;
 
+	/* For level 1 and 2, bits[23:16] contain the ratio */
+	if (tdp_ctrl)
+		tdp_ratio >>= 16;
+
+	tdp_ratio &= 0xff; /* ratios are only 8 bits long */
 	if (tdp_ratio - 1 == tar) {
 		max_pstate = tar;
 		pr_debug("max_pstate=TAC %x\n", max_pstate);
···
 	{ NULL_GUID, "", NULL },
 };
 
+/*
+ * Check if @var_name matches the pattern given in @match_name.
+ *
+ * @var_name: an array of @len non-NUL characters.
+ * @match_name: a NUL-terminated pattern string, optionally ending in "*". A
+ *              final "*" character matches any trailing characters @var_name,
+ *              including the case when there are none left in @var_name.
+ * @match: on output, the number of non-wildcard characters in @match_name
+ *         that @var_name matches, regardless of the return value.
+ * @return: whether @var_name fully matches @match_name.
+ */
 static bool
 variable_matches(const char *var_name, size_t len, const char *match_name,
 		 int *match)
 {
 	for (*match = 0; ; (*match)++) {
 		char c = match_name[*match];
-		char u = var_name[*match];
 
-		/* Wildcard in the matching name means we've matched */
-		if (c == '*')
+		switch (c) {
+		case '*':
+			/* Wildcard in @match_name means we've matched. */
 			return true;
 
-		/* Case sensitive match */
-		if (!c && *match == len)
-			return true;
+		case '\0':
+			/* @match_name has ended.  Has @var_name too? */
+			return (*match == len);
 
-		if (c != u)
+		default:
+			/*
+			 * We've reached a non-wildcard char in @match_name.
+			 * Continue only if there's an identical character in
+			 * @var_name.
+			 */
+			if (*match < len && c == var_name[*match])
+				continue;
 			return false;
-
-		if (!c)
-			return true;
+		}
 	}
-	return true;
 }
 
 bool
+6-59
drivers/gpio/gpio-rcar.c
···
 	return 0;
 }
 
-static void gpio_rcar_irq_bus_lock(struct irq_data *d)
-{
-	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
-	struct gpio_rcar_priv *p = gpiochip_get_data(gc);
-
-	pm_runtime_get_sync(&p->pdev->dev);
-}
-
-static void gpio_rcar_irq_bus_sync_unlock(struct irq_data *d)
-{
-	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
-	struct gpio_rcar_priv *p = gpiochip_get_data(gc);
-
-	pm_runtime_put(&p->pdev->dev);
-}
-
-
-static int gpio_rcar_irq_request_resources(struct irq_data *d)
-{
-	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
-	struct gpio_rcar_priv *p = gpiochip_get_data(gc);
-	int error;
-
-	error = pm_runtime_get_sync(&p->pdev->dev);
-	if (error < 0)
-		return error;
-
-	return 0;
-}
-
-static void gpio_rcar_irq_release_resources(struct irq_data *d)
-{
-	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
-	struct gpio_rcar_priv *p = gpiochip_get_data(gc);
-
-	pm_runtime_put(&p->pdev->dev);
-}
-
 static irqreturn_t gpio_rcar_irq_handler(int irq, void *dev_id)
 {
 	struct gpio_rcar_priv *p = dev_id;
···
 
 static int gpio_rcar_request(struct gpio_chip *chip, unsigned offset)
 {
-	struct gpio_rcar_priv *p = gpiochip_get_data(chip);
-	int error;
-
-	error = pm_runtime_get_sync(&p->pdev->dev);
-	if (error < 0)
-		return error;
-
-	error = pinctrl_request_gpio(chip->base + offset);
-	if (error)
-		pm_runtime_put(&p->pdev->dev);
-
-	return error;
+	return pinctrl_request_gpio(chip->base + offset);
 }
 
 static void gpio_rcar_free(struct gpio_chip *chip, unsigned offset)
 {
-	struct gpio_rcar_priv *p = gpiochip_get_data(chip);
-
 	pinctrl_free_gpio(chip->base + offset);
 
-	/* Set the GPIO as an input to ensure that the next GPIO request won't
+	/*
+	 * Set the GPIO as an input to ensure that the next GPIO request won't
 	 * drive the GPIO pin as an output.
 	 */
 	gpio_rcar_config_general_input_output_mode(chip, offset, false);
-
-	pm_runtime_put(&p->pdev->dev);
 }
 
 static int gpio_rcar_direction_input(struct gpio_chip *chip, unsigned offset)
···
 	}
 
 	pm_runtime_enable(dev);
+	pm_runtime_get_sync(dev);
 
 	io = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
···
 	irq_chip->irq_unmask = gpio_rcar_irq_enable;
 	irq_chip->irq_set_type = gpio_rcar_irq_set_type;
 	irq_chip->irq_set_wake = gpio_rcar_irq_set_wake;
-	irq_chip->irq_bus_lock = gpio_rcar_irq_bus_lock;
-	irq_chip->irq_bus_sync_unlock = gpio_rcar_irq_bus_sync_unlock;
-	irq_chip->irq_request_resources = gpio_rcar_irq_request_resources;
-	irq_chip->irq_release_resources = gpio_rcar_irq_release_resources;
 	irq_chip->flags	= IRQCHIP_SET_TYPE_MASKED | IRQCHIP_MASK_ON_SUSPEND;
 
 	ret = gpiochip_add_data(gpio_chip, p);
···
 err1:
 	gpiochip_remove(gpio_chip);
 err0:
+	pm_runtime_put(dev);
 	pm_runtime_disable(dev);
 	return ret;
 }
···
 
 	gpiochip_remove(&p->gpio_chip);
 
+	pm_runtime_put(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
 	return 0;
 }
···
 	return amdgpu_atpx_priv.atpx_detected;
 }
 
-bool amdgpu_has_atpx_dgpu_power_cntl(void) {
-	return amdgpu_atpx_priv.atpx.functions.power_cntl;
-}
-
 /**
  * amdgpu_atpx_call - call an ATPX method
  *
···
  */
 static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx)
 {
+	/* make sure required functions are enabled */
+	/* dGPU power control is required */
+	if (atpx->functions.power_cntl == false) {
+		printk("ATPX dGPU power cntl not present, forcing\n");
+		atpx->functions.power_cntl = true;
+	}
+
 	if (atpx->functions.px_params) {
 		union acpi_object *info;
 		struct atpx_px_params output;
+1-7
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···6262 "LAST",6363};64646565-#if defined(CONFIG_VGA_SWITCHEROO)6666-bool amdgpu_has_atpx_dgpu_power_cntl(void);6767-#else6868-static inline bool amdgpu_has_atpx_dgpu_power_cntl(void) { return false; }6969-#endif7070-7165bool amdgpu_device_is_px(struct drm_device *dev)7266{7367 struct amdgpu_device *adev = dev->dev_private;···1479148514801486 if (amdgpu_runtime_pm == 1)14811487 runtime = true;14821482- if (amdgpu_device_is_px(ddev) && amdgpu_has_atpx_dgpu_power_cntl())14881488+ if (amdgpu_device_is_px(ddev))14831489 runtime = true;14841490 vga_switcheroo_register_client(adev->pdev, &amdgpu_switcheroo_ops, runtime);14851491 if (runtime)
···17961796 req_payload.start_slot = cur_slots;17971797 if (mgr->proposed_vcpis[i]) {17981798 port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi);17991799+ port = drm_dp_get_validated_port_ref(mgr, port);18001800+ if (!port) {18011801+ mutex_unlock(&mgr->payload_lock);18021802+ return -EINVAL;18031803+ }17991804 req_payload.num_slots = mgr->proposed_vcpis[i]->num_slots;18001805 req_payload.vcpi = mgr->proposed_vcpis[i]->vcpi;18011806 } else {···18281823 mgr->payloads[i].payload_state = req_payload.payload_state;18291824 }18301825 cur_slots += req_payload.num_slots;18261826+18271827+ if (port)18281828+ drm_dp_put_port(port);18311829 }1832183018331831 for (i = 0; i < mgr->max_payloads; i++) {···2136212821372129 if (mgr->mst_primary) {21382130 int sret;21312131+ u8 guid[16];21322132+21392133 sret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd, DP_RECEIVER_CAP_SIZE);21402134 if (sret != DP_RECEIVER_CAP_SIZE) {21412135 DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n");···21522142 ret = -1;21532143 goto out_unlock;21542144 }21452145+21462146+ /* Some hubs forget their guids after they resume */21472147+ sret = drm_dp_dpcd_read(mgr->aux, DP_GUID, guid, 16);21482148+ if (sret != 16) {21492149+ DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n");21502150+ ret = -1;21512151+ goto out_unlock;21522152+ }21532153+ drm_dp_check_mstb_guid(mgr->mst_primary, guid);21542154+21552155 ret = 0;21562156 } else21572157 ret = -1;
+18-13
drivers/gpu/drm/etnaviv/etnaviv_gpu.c
···572572 goto fail;573573 }574574575575+ /*576576+ * Set the GPU linear window to be at the end of the DMA window, where577577+ * the CMA area is likely to reside. This ensures that we are able to578578+ * map the command buffers while having the linear window overlap as579579+ * much RAM as possible, so we can optimize mappings for other buffers.580580+ *581581+ * For 3D cores only do this if MC2.0 is present, as with MC1.0 it leads582582+ * to different views of the memory on the individual engines.583583+ */584584+ if (!(gpu->identity.features & chipFeatures_PIPE_3D) ||585585+ (gpu->identity.minor_features0 & chipMinorFeatures0_MC20)) {586586+ u32 dma_mask = (u32)dma_get_required_mask(gpu->dev);587587+ if (dma_mask < PHYS_OFFSET + SZ_2G)588588+ gpu->memory_base = PHYS_OFFSET;589589+ else590590+ gpu->memory_base = dma_mask - SZ_2G + 1;591591+ }592592+575593 ret = etnaviv_hw_reset(gpu);576594 if (ret)577595 goto fail;···15841566{15851567 struct device *dev = &pdev->dev;15861568 struct etnaviv_gpu *gpu;15871587- u32 dma_mask;15881569 int err = 0;1589157015901571 gpu = devm_kzalloc(dev, sizeof(*gpu), GFP_KERNEL);···1592157515931576 gpu->dev = &pdev->dev;15941577 mutex_init(&gpu->lock);15951595-15961596- /*15971597- * Set the GPU linear window to be at the end of the DMA window, where15981598- * the CMA area is likely to reside. This ensures that we are able to15991599- * map the command buffers while having the linear window overlap as16001600- * much RAM as possible, so we can optimize mappings for other buffers.16011601- */16021602- dma_mask = (u32)dma_get_required_mask(dev);16031603- if (dma_mask < PHYS_OFFSET + SZ_2G)16041604- gpu->memory_base = PHYS_OFFSET;16051605- else16061606- gpu->memory_base = dma_mask - SZ_2G + 1;1607157816081579 /* Map registers: */16091580 gpu->mmio = etnaviv_ioremap(pdev, NULL, dev_name(gpu->dev));
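The etnaviv hunk above moves the linear-window placement into the hardware-init path and gates it on the MC version, but the arithmetic is unchanged: put the 2 GiB GPU linear window at the end of the DMA window so it overlaps as much RAM (and the CMA area) as possible. A standalone userspace sketch of that calculation; the phys_offset parameter stands in for the kernel's PHYS_OFFSET, and the values used below are illustrative only:

```c
#include <assert.h>
#include <stdint.h>

#define SZ_2G 0x80000000u

/*
 * If the whole 2 GiB window fits below the DMA mask, slide it up so it
 * ends exactly at the top of the DMA window; otherwise anchor it at the
 * start of RAM.
 */
static uint32_t linear_window_base(uint32_t dma_mask, uint32_t phys_offset)
{
	if (dma_mask < phys_offset + SZ_2G)
		return phys_offset;
	return dma_mask - SZ_2G + 1;
}
```

With a 32-bit DMA mask and RAM starting at 0x40000000, the window base lands at 0x80000000, so the window ends exactly at 0xffffffff.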
+153-1
drivers/gpu/drm/radeon/evergreen.c
···26082608 WREG32(VM_CONTEXT1_CNTL, 0);26092609}2610261026112611+static const unsigned ni_dig_offsets[] =26122612+{26132613+ NI_DIG0_REGISTER_OFFSET,26142614+ NI_DIG1_REGISTER_OFFSET,26152615+ NI_DIG2_REGISTER_OFFSET,26162616+ NI_DIG3_REGISTER_OFFSET,26172617+ NI_DIG4_REGISTER_OFFSET,26182618+ NI_DIG5_REGISTER_OFFSET26192619+};26202620+26212621+static const unsigned ni_tx_offsets[] =26222622+{26232623+ NI_DCIO_UNIPHY0_UNIPHY_TX_CONTROL1,26242624+ NI_DCIO_UNIPHY1_UNIPHY_TX_CONTROL1,26252625+ NI_DCIO_UNIPHY2_UNIPHY_TX_CONTROL1,26262626+ NI_DCIO_UNIPHY3_UNIPHY_TX_CONTROL1,26272627+ NI_DCIO_UNIPHY4_UNIPHY_TX_CONTROL1,26282628+ NI_DCIO_UNIPHY5_UNIPHY_TX_CONTROL126292629+};26302630+26312631+static const unsigned evergreen_dp_offsets[] =26322632+{26332633+ EVERGREEN_DP0_REGISTER_OFFSET,26342634+ EVERGREEN_DP1_REGISTER_OFFSET,26352635+ EVERGREEN_DP2_REGISTER_OFFSET,26362636+ EVERGREEN_DP3_REGISTER_OFFSET,26372637+ EVERGREEN_DP4_REGISTER_OFFSET,26382638+ EVERGREEN_DP5_REGISTER_OFFSET26392639+};26402640+26412641+26422642+/*26432643+ * Assumes that EVERGREEN_CRTC_MASTER_EN is enabled for the requested crtc.26442644+ * We go from crtc to connector, which is not reliable since it26452645+ * should be the opposite direction. If the crtc is enabled, then26462646+ * find the dig_fe which selects this crtc and ensure that it is enabled.26472647+ * If such a dig_fe is found, then find the dig_be which selects the found26482648+ * dig_fe and ensure that it is enabled and in DP_SST mode.26492649+ * If UNIPHY_PLL_CONTROL1 is enabled then we should disconnect the timing26502650+ * from the dp symbol clocks.26512651+ */26522652+static bool evergreen_is_dp_sst_stream_enabled(struct radeon_device *rdev,26532653+ unsigned crtc_id, unsigned *ret_dig_fe)26542654+{26552655+ unsigned i;26562656+ unsigned dig_fe;26572657+ unsigned dig_be;26582658+ unsigned dig_en_be;26592659+ unsigned uniphy_pll;26602660+ unsigned digs_fe_selected;26612661+ unsigned dig_be_mode;26622662+ unsigned dig_fe_mask;26632663+ bool is_enabled = 
false;26642664+ bool found_crtc = false;26652665+26662666+ /* loop through all running dig_fe to find selected crtc */26672667+ for (i = 0; i < ARRAY_SIZE(ni_dig_offsets); i++) {26682668+ dig_fe = RREG32(NI_DIG_FE_CNTL + ni_dig_offsets[i]);26692669+ if (dig_fe & NI_DIG_FE_CNTL_SYMCLK_FE_ON &&26702670+ crtc_id == NI_DIG_FE_CNTL_SOURCE_SELECT(dig_fe)) {26712671+ /* found running pipe */26722672+ found_crtc = true;26732673+ dig_fe_mask = 1 << i;26742674+ dig_fe = i;26752675+ break;26762676+ }26772677+ }26782678+26792679+ if (found_crtc) {26802680+ /* loop through all running dig_be to find selected dig_fe */26812681+ for (i = 0; i < ARRAY_SIZE(ni_dig_offsets); i++) {26822682+ dig_be = RREG32(NI_DIG_BE_CNTL + ni_dig_offsets[i]);26832683+ /* is dig_fe selected by this dig_be? */26842684+ digs_fe_selected = NI_DIG_BE_CNTL_FE_SOURCE_SELECT(dig_be);26852685+ dig_be_mode = NI_DIG_FE_CNTL_MODE(dig_be);26862686+ if (dig_fe_mask & digs_fe_selected &&26872687+ /* is dig_be in SST mode? */26882688+ dig_be_mode == NI_DIG_BE_DPSST) {26892689+ dig_en_be = RREG32(NI_DIG_BE_EN_CNTL +26902690+ ni_dig_offsets[i]);26912691+ uniphy_pll = RREG32(NI_DCIO_UNIPHY0_PLL_CONTROL1 +26922692+ ni_tx_offsets[i]);26932693+ /* dig_be enabled and tx running? */26942694+ if (dig_en_be & NI_DIG_BE_EN_CNTL_ENABLE &&26952695+ dig_en_be & NI_DIG_BE_EN_CNTL_SYMBCLK_ON &&26962696+ uniphy_pll & NI_DCIO_UNIPHY0_PLL_CONTROL1_ENABLE) {26972697+ is_enabled = true;26982698+ *ret_dig_fe = dig_fe;26992699+ break;27002700+ }27012701+ }27022702+ }27032703+ }27042704+27052705+ return is_enabled;27062706+}27072707+27082708+/*27092709+ * Blank the dig when in dp sst mode;27102710+ * the dig ignores the crtc timing.27112711+ */27122712+static void evergreen_blank_dp_output(struct radeon_device *rdev,27132713+ unsigned dig_fe)27142714+{27152715+ unsigned stream_ctrl;27162716+ unsigned fifo_ctrl;27172717+ unsigned counter = 0;27182718+27192719+ if (dig_fe >= ARRAY_SIZE(evergreen_dp_offsets)) {27202720+ DRM_ERROR("invalid dig_fe %d\n", 
dig_fe);27212721+ return;27222722+27232723+27242724+ stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +27252725+ evergreen_dp_offsets[dig_fe]);27262726+ if (!(stream_ctrl & EVERGREEN_DP_VID_STREAM_CNTL_ENABLE)) {27272727+ DRM_ERROR("dig %d should be enabled\n", dig_fe);27282728+ return;27292729+ }27302730+27312731+ stream_ctrl &= ~EVERGREEN_DP_VID_STREAM_CNTL_ENABLE;27322732+ WREG32(EVERGREEN_DP_VID_STREAM_CNTL +27332733+ evergreen_dp_offsets[dig_fe], stream_ctrl);27342734+27352735+ stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +27362736+ evergreen_dp_offsets[dig_fe]);27372737+ while (counter < 32 && stream_ctrl & EVERGREEN_DP_VID_STREAM_STATUS) {27382738+ msleep(1);27392739+ counter++;27402740+ stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +27412741+ evergreen_dp_offsets[dig_fe]);27422742+ }27432743+ if (counter >= 32)27442744+ DRM_ERROR("counter exceeds %d\n", counter);27452745+27462746+ fifo_ctrl = RREG32(EVERGREEN_DP_STEER_FIFO + evergreen_dp_offsets[dig_fe]);27472747+ fifo_ctrl |= EVERGREEN_DP_STEER_FIFO_RESET;27482748+ WREG32(EVERGREEN_DP_STEER_FIFO + evergreen_dp_offsets[dig_fe], fifo_ctrl);27492749+27502750+}27512751+26112752void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *save)26122753{26132754 u32 crtc_enabled, tmp, frame_count, blackout;26142755 int i, j;27562756+ unsigned dig_fe;2615275726162758 if (!ASIC_IS_NODCE(rdev)) {26172759 save->vga_render_control = RREG32(VGA_RENDER_CONTROL);···27932651 break;27942652 udelay(1);27952653 }27962796-26542654+ /* we should disable the dig if it drives dp sst, */26552655+ /* but we are in radeon_device_init and the topology is unknown; */26562656+ /* it only becomes available after radeon_modeset_init */26572657+ /* radeon_atom_encoder_dpms_dig would do the job */26582658+ /* if we initialized it properly */26592659+ /* so for now we do it manually */26602660+ /**/26612661+ if (ASIC_IS_DCE5(rdev) &&26622662+ evergreen_is_dp_sst_stream_enabled(rdev, i, &dig_fe))26632663+ 
evergreen_blank_dp_output(rdev, dig_fe);26642664+ /* we could remove the 6 lines below */27972665 /* XXX this is a hack to avoid strange behavior with EFI on certain systems */27982666 WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);27992667 tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
···975975976976config I2C_XLP9XX977977 tristate "XLP9XX I2C support"978978- depends on CPU_XLP || COMPILE_TEST978978+ depends on CPU_XLP || ARCH_VULCAN || COMPILE_TEST979979 help980980 This driver enables support for the on-chip I2C interface of981981- the Broadcom XLP9xx/XLP5xx MIPS processors.981981+ the Broadcom XLP9xx/XLP5xx MIPS and Vulcan ARM64 processors.982982983983 This driver can also be built as a module. If so, the module will984984 be called i2c-xlp9xx.
···498498 * skb_shinfo(skb)->nr_frags, skb_is_gso(skb));499499 */500500501501- if (!netif_carrier_ok(netdev))502502- return NETDEV_TX_OK;503503-504501 if (netif_queue_stopped(netdev))505502 return NETDEV_TX_BUSY;506503
+5
drivers/infiniband/hw/qib/qib_file_ops.c
···4545#include <linux/export.h>4646#include <linux/uio.h>47474848+#include <rdma/ib.h>4949+4850#include "qib.h"4951#include "qib_common.h"5052#include "qib_user_sdma.h"···20682066 struct qib_cmd cmd;20692067 ssize_t ret = 0;20702068 void *dest;20692069+20702070+ if (WARN_ON_ONCE(!ib_safe_file_access(fp)))20712071+ return -EACCES;2071207220722073 if (count < sizeof(cmd.type)) {20732074 ret = -EINVAL;
+2-2
drivers/infiniband/sw/rdmavt/qp.c
···16371637 spin_unlock_irqrestore(&qp->s_hlock, flags);16381638 if (nreq) {16391639 if (call_send)16401640- rdi->driver_f.schedule_send_no_lock(qp);16411641- else16421640 rdi->driver_f.do_send(qp);16411641+ else16421642+ rdi->driver_f.schedule_send_no_lock(qp);16431643 }16441644 return err;16451645}
+2
drivers/md/md.c
···284284 * go away inside make_request285285 */286286 sectors = bio_sectors(bio);287287+ /* bio could be mergeable after passing to the underlying layer */288288+ bio->bi_rw &= ~REQ_NOMERGE;287289 mddev->pers->make_request(mddev, bio);288290289291 cpu = part_stat_lock();
+1-1
drivers/md/raid0.c
···7070 (unsigned long long)zone_size>>1);7171 zone_start = conf->strip_zone[j].zone_end;7272 }7373- printk(KERN_INFO "\n");7473}75747675static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)···8485 struct r0conf *conf = kzalloc(sizeof(*conf), GFP_KERNEL);8586 unsigned short blksize = 512;86878888+ *private_conf = ERR_PTR(-ENOMEM);8789 if (!conf)8890 return -ENOMEM;8991 rdev_for_each(rdev1, mddev) {
-2
drivers/md/raid5.c
···35023502 dev = &sh->dev[i];35033503 } else if (test_bit(R5_Discard, &dev->flags))35043504 discard_pending = 1;35053505- WARN_ON(test_bit(R5_SkipCopy, &dev->flags));35063506- WARN_ON(dev->page != dev->orig_page);35073505 }3508350635093507 r5l_stripe_write_finished(sh);
-7
drivers/media/usb/usbvision/usbvision-video.c
···14521452 printk(KERN_INFO "%s: %s found\n", __func__,14531453 usbvision_device_data[model].model_string);1454145414551455- /*14561456- * this is a security check.14571457- * an exploit using an incorrect bInterfaceNumber is known14581458- */14591459- if (ifnum >= USB_MAXINTERFACES || !dev->actconfig->interface[ifnum])14601460- return -ENODEV;14611461-14621455 if (usbvision_device_data[model].interface >= 0)14631456 interface = &dev->actconfig->interface[usbvision_device_data[model].interface]->altsetting[0];14641457 else if (ifnum < dev->actconfig->desc.bNumInterfaces)
+15-5
drivers/media/v4l2-core/videobuf2-core.c
···16451645 * Will sleep if required for nonblocking == false.16461646 */16471647static int __vb2_get_done_vb(struct vb2_queue *q, struct vb2_buffer **vb,16481648- int nonblocking)16481648+ void *pb, int nonblocking)16491649{16501650 unsigned long flags;16511651 int ret;···16661666 /*16671667 * Only remove the buffer from done_list if v4l2_buffer can handle all16681668 * the planes.16691669- * Verifying planes is NOT necessary since it already has been checked16701670- * before the buffer is queued/prepared. So it can never fail.16711669 */16721672- list_del(&(*vb)->done_entry);16701670+ ret = call_bufop(q, verify_planes_array, *vb, pb);16711671+ if (!ret)16721672+ list_del(&(*vb)->done_entry);16731673 spin_unlock_irqrestore(&q->done_lock, flags);1674167416751675 return ret;···17481748 struct vb2_buffer *vb = NULL;17491749 int ret;1750175017511751- ret = __vb2_get_done_vb(q, &vb, nonblocking);17511751+ ret = __vb2_get_done_vb(q, &vb, pb, nonblocking);17521752 if (ret < 0)17531753 return ret;17541754···22952295 * error flag is set.22962296 */22972297 if (!vb2_is_streaming(q) || q->error)22982298+ return POLLERR;22992299+23002300+ /*23012301+ * If this quirk is set and QBUF hasn't been called yet then23022302+ * return POLLERR as well. This only affects capture queues, output23032303+ * queues will always initialize waiting_for_buffers to false.23042304+ * This quirk is set by V4L2 for backwards compatibility reasons.23052305+ */23062306+ if (q->quirk_poll_must_check_waiting_for_buffers &&23072307+ q->waiting_for_buffers && (req_events & (POLLIN | POLLRDNORM)))22982308 return POLLERR;2299230923002310 /*
+1-1
drivers/media/v4l2-core/videobuf2-memops.c
···4949 vec = frame_vector_create(nr);5050 if (!vec)5151 return ERR_PTR(-ENOMEM);5252- ret = get_vaddr_frames(start, nr, write, 1, vec);5252+ ret = get_vaddr_frames(start & PAGE_MASK, nr, write, true, vec);5353 if (ret < 0)5454 goto out_destroy;5555 /* We accept only complete set of PFNs */
+12-8
drivers/media/v4l2-core/videobuf2-v4l2.c
···7474 return 0;7575}76767777+static int __verify_planes_array_core(struct vb2_buffer *vb, const void *pb)7878+{7979+ return __verify_planes_array(vb, pb);8080+}8181+7782/**7883 * __verify_length() - Verify that the bytesused value for each plane fits in7984 * the plane length and that the data offset doesn't exceed the bytesused value.···442437}443438444439static const struct vb2_buf_ops v4l2_buf_ops = {440440+ .verify_planes_array = __verify_planes_array_core,445441 .fill_user_buffer = __fill_v4l2_buffer,446442 .fill_vb2_buffer = __fill_vb2_buffer,447443 .copy_timestamp = __copy_timestamp,···771765 q->is_output = V4L2_TYPE_IS_OUTPUT(q->type);772766 q->copy_timestamp = (q->timestamp_flags & V4L2_BUF_FLAG_TIMESTAMP_MASK)773767 == V4L2_BUF_FLAG_TIMESTAMP_COPY;768768+ /*769769+ * For compatibility with vb1: if QBUF hasn't been called yet, then770770+ * return POLLERR as well. This only affects capture queues, output771771+ * queues will always initialize waiting_for_buffers to false.772772+ */773773+ q->quirk_poll_must_check_waiting_for_buffers = true;774774775775 return vb2_core_queue_init(q);776776}···829817 else if (req_events & POLLPRI)830818 poll_wait(file, &fh->wait, wait);831819 }832832-833833- /*834834- * For compatibility with vb1: if QBUF hasn't been called yet, then835835- * return POLLERR as well. This only affects capture queues, output836836- * queues will always initialize waiting_for_buffers to false.837837- */838838- if (q->waiting_for_buffers && (req_events & (POLLIN | POLLRDNORM)))839839- return POLLERR;840820841821 return res | vb2_core_poll(q, file, wait);842822}
+7
drivers/misc/cxl/context.c
···223223 cxl_ops->link_ok(ctx->afu->adapter, ctx->afu));224224 flush_work(&ctx->fault_work); /* Only needed for dedicated process */225225226226+ /*227227+ * Wait until no further interrupts are presented by the PSL228228+ * for this context.229229+ */230230+ if (cxl_ops->irq_wait)231231+ cxl_ops->irq_wait(ctx);232232+226233 /* release the reference to the group leader and mm handling pid */227234 put_pid(ctx->pid);228235 put_pid(ctx->glpid);
···1414#include <linux/mutex.h>1515#include <linux/mm.h>1616#include <linux/uaccess.h>1717+#include <linux/delay.h>1718#include <asm/synch.h>1819#include <misc/cxl-base.h>1920···798797 return fail_psl_irq(afu, &irq_info);799798}800799800800+void native_irq_wait(struct cxl_context *ctx)801801+{802802+ u64 dsisr;803803+ int timeout = 1000;804804+ int ph;805805+806806+ /*807807+ * Wait until no further interrupts are presented by the PSL808808+ * for this context.809809+ */810810+ while (timeout--) {811811+ ph = cxl_p2n_read(ctx->afu, CXL_PSL_PEHandle_An) & 0xffff;812812+ if (ph != ctx->pe)813813+ return;814814+ dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An);815815+ if ((dsisr & CXL_PSL_DSISR_PENDING) == 0)816816+ return;817817+ /*818818+ * We are waiting for the workqueue to process our819819+ * irq, so need to let that run here.820820+ */821821+ msleep(1);822822+ }823823+824824+ dev_warn(&ctx->afu->dev, "WARNING: waiting on DSI for PE %i"825825+ " DSISR %016llx!\n", ph, dsisr);826826+ return;827827+}828828+801829static irqreturn_t native_slice_irq_err(int irq, void *data)802830{803831 struct cxl_afu *afu = data;···11061076 .handle_psl_slice_error = native_handle_psl_slice_error,11071077 .psl_interrupt = NULL,11081078 .ack_irq = native_ack_irq,10791079+ .irq_wait = native_irq_wait,11091080 .attach_process = native_attach_process,11101081 .detach_process = native_detach_process,11111082 .support_attributes = native_support_attributes,
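The native_irq_wait() added above is a bounded polling loop: re-read hardware state up to 1000 times, sleeping 1 ms per retry (so the interrupt workqueue can make progress), and warn if the condition never clears. A minimal userspace sketch of the pattern; the countdown variable is an invented stand-in for the PSL registers the driver actually reads:

```c
#include <assert.h>

static int countdown;			/* stand-in for the polled hardware state */

static int condition_met(void)
{
	return --countdown <= 0;
}

/*
 * Re-check the condition up to a fixed number of attempts; return 0 once
 * it clears, -1 if the budget is exhausted (the caller then warns, as
 * native_irq_wait() does with dev_warn()).
 */
static int poll_bounded(int start, int attempts)
{
	countdown = start;
	while (attempts--) {
		if (condition_met())
			return 0;
		/* in the kernel each retry also does msleep(1) here */
	}
	return -1;
}
```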
+1
drivers/mmc/host/Kconfig
···9797config MMC_SDHCI_ACPI9898 tristate "SDHCI support for ACPI enumerated SDHCI controllers"9999 depends on MMC_SDHCI && ACPI100100+ select IOSF_MBI if X86100101 help101102 This selects support for ACPI enumerated SDHCI controllers,102103 identified by ACPI Compatibility ID PNP0D40 or specific
···11291129 MMC_CAP_1_8V_DDR |11301130 MMC_CAP_ERASE | MMC_CAP_SDIO_IRQ;1131113111321132+ /* TODO MMC DDR is not working on A80 */11331133+ if (of_device_is_compatible(pdev->dev.of_node,11341134+ "allwinner,sun9i-a80-mmc"))11351135+ mmc->caps &= ~MMC_CAP_1_8V_DDR;11361136+11321137 ret = mmc_of_parse(mmc);11331138 if (ret)11341139 goto error_free_dma;
···33543354 /* Enable per-CPU interrupts on the CPU that is33553355 * brought up.33563356 */33573357- smp_call_function_single(cpu, mvneta_percpu_enable,33583358- pp, true);33573357+ mvneta_percpu_enable(pp);3359335833603359 /* Enable per-CPU interrupt on the one CPU we care33613360 * about.···33863387 /* Disable per-CPU interrupts on the CPU that is33873388 * brought down.33883389 */33893389- smp_call_function_single(cpu, mvneta_percpu_disable,33903390- pp, true);33903390+ mvneta_percpu_disable(pp);3391339133923392 break;33933393 case CPU_DEAD:
+2
drivers/net/ethernet/marvell/pxa168_eth.c
···979979 return 0;980980981981 pep->phy = mdiobus_scan(pep->smi_bus, pep->phy_addr);982982+ if (IS_ERR(pep->phy))983983+ return PTR_ERR(pep->phy);982984 if (!pep->phy)983985 return -ENODEV;984986
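The pxa168 fix above works because mdiobus_scan() has two distinct failure modes: an error-encoded pointer (caught by IS_ERR()) and plain NULL, and only the NULL case was handled before. A userspace mock of the ERR_PTR convention, with an invented scan()/probe() pair standing in for the driver code, shows why both checks are needed:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_ERRNO	4095
#define ENODEV_ERR	19	/* numeric stand-in for -ENODEV */

/* userspace re-creations of the kernel's include/linux/err.h helpers */
static void *ERR_PTR(long error) { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }
static int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static int probe_result;		/* selects the mocked outcome */

static void *scan(void)
{
	static int dev;			/* stands in for a found device */
	if (probe_result == 0)
		return NULL;		/* nothing on the bus */
	if (probe_result == 1)
		return ERR_PTR(-ENODEV_ERR);	/* error-encoded pointer */
	return &dev;
}

static long probe(int mode)
{
	void *phy;

	probe_result = mode;
	phy = scan();
	if (IS_ERR(phy))		/* new check added by the fix */
		return PTR_ERR(phy);
	if (!phy)			/* pre-existing NULL check */
		return -ENODEV_ERR;
	return 0;
}
```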
+1
drivers/net/ethernet/mellanox/mlx5/core/Kconfig
···1414 bool "Mellanox Technologies ConnectX-4 Ethernet support"1515 depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE1616 select PTP_1588_CLOCK1717+ select VXLAN if MLX5_CORE=y1718 default n1819 ---help---1920 Ethernet support in Mellanox Technologies ConnectX-4 NIC.
···2668266826692669 del_timer_sync(&mgp->watchdog_timer);26702670 mgp->running = MYRI10GE_ETH_STOPPING;26712671- local_bh_disable(); /* myri10ge_ss_lock_napi needs bh disabled */26722671 for (i = 0; i < mgp->num_slices; i++) {26732672 napi_disable(&mgp->ss[i].napi);26732673+ local_bh_disable(); /* myri10ge_ss_lock_napi needs this */26742674 /* Lock the slice to prevent the busy_poll handler from26752675 * accessing it. Later when we bring the NIC up, myri10ge_open26762676 * resets the slice including this lock.···26792679 pr_info("Slice %d locked\n", i);26802680 mdelay(1);26812681 }26822682+ local_bh_enable();26822683 }26832683- local_bh_enable();26842684 netif_carrier_off(dev);2685268526862686 netif_tx_stop_all_queues(dev);
+13-2
drivers/net/ethernet/sfc/ef10.c
···19201920 return 0;19211921 }1922192219231923+ if (nic_data->datapath_caps &19241924+ 1 << MC_CMD_GET_CAPABILITIES_OUT_RX_RSS_LIMITED_LBN)19251925+ return -EOPNOTSUPP;19261926+19231927 MCDI_SET_DWORD(inbuf, RSS_CONTEXT_ALLOC_IN_UPSTREAM_PORT_ID,19241928 nic_data->vport_id);19251929 MCDI_SET_DWORD(inbuf, RSS_CONTEXT_ALLOC_IN_TYPE, alloc_type);···29272923 bool replacing)29282924{29292925 struct efx_ef10_nic_data *nic_data = efx->nic_data;29262926+ u32 flags = spec->flags;2930292729312928 memset(inbuf, 0, MC_CMD_FILTER_OP_IN_LEN);29292929+29302930+ /* Remove RSS flag if we don't have an RSS context. */29312931+ if (flags & EFX_FILTER_FLAG_RX_RSS &&29322932+ spec->rss_context == EFX_FILTER_RSS_CONTEXT_DEFAULT &&29332933+ nic_data->rx_rss_context == EFX_EF10_RSS_CONTEXT_INVALID)29342934+ flags &= ~EFX_FILTER_FLAG_RX_RSS;2932293529332936 if (replacing) {29342937 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_OP,···29962985 spec->dmaq_id == EFX_FILTER_RX_DMAQ_ID_DROP ?29972986 0 : spec->dmaq_id);29982987 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_RX_MODE,29992999- (spec->flags & EFX_FILTER_FLAG_RX_RSS) ?29882988+ (flags & EFX_FILTER_FLAG_RX_RSS) ?30002989 MC_CMD_FILTER_OP_IN_RX_MODE_RSS :30012990 MC_CMD_FILTER_OP_IN_RX_MODE_SIMPLE);30023002- if (spec->flags & EFX_FILTER_FLAG_RX_RSS)29912991+ if (flags & EFX_FILTER_FLAG_RX_RSS)30032992 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_RX_CONTEXT,30042993 spec->rss_context !=30052994 EFX_FILTER_RSS_CONTEXT_DEFAULT ?
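The ef10 change above snapshots spec->flags into a local variable and clears EFX_FILTER_FLAG_RX_RSS when the filter uses the default RSS context but the NIC has no valid one, so every later test on the flags sees a consistent value. Reduced to its essentials as a sketch; the constant names and values here are stand-ins, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

#define FLAG_RX_RSS		0x01u
#define RSS_CONTEXT_DEFAULT	0xffffffffu
#define RSS_CONTEXT_INVALID	0xffffffffu

/*
 * Drop the RSS flag from the local copy when no usable RSS context
 * exists; callers then program the hardware from the adjusted flags.
 */
static uint32_t effective_flags(uint32_t flags, uint32_t spec_rss_context,
				uint32_t nic_rss_context)
{
	if ((flags & FLAG_RX_RSS) &&
	    spec_rss_context == RSS_CONTEXT_DEFAULT &&
	    nic_rss_context == RSS_CONTEXT_INVALID)
		flags &= ~FLAG_RX_RSS;
	return flags;
}
```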
+37-32
drivers/net/ethernet/ti/cpsw.c
···367367 spinlock_t lock;368368 struct platform_device *pdev;369369 struct net_device *ndev;370370- struct device_node *phy_node;371370 struct napi_struct napi_rx;372371 struct napi_struct napi_tx;373372 struct device *dev;···11411142 cpsw_ale_add_mcast(priv->ale, priv->ndev->broadcast,11421143 1 << slave_port, 0, 0, ALE_MCAST_FWD_2);1143114411441144- if (priv->phy_node)11451145- slave->phy = of_phy_connect(priv->ndev, priv->phy_node,11451145+ if (slave->data->phy_node) {11461146+ slave->phy = of_phy_connect(priv->ndev, slave->data->phy_node,11461147 &cpsw_adjust_link, 0, slave->data->phy_if);11471147- else11481148+ if (!slave->phy) {11491149+ dev_err(priv->dev, "phy \"%s\" not found on slave %d\n",11501150+ slave->data->phy_node->full_name,11511151+ slave->slave_num);11521152+ return;11531153+ }11541154+ } else {11481155 slave->phy = phy_connect(priv->ndev, slave->data->phy_id,11491156 &cpsw_adjust_link, slave->data->phy_if);11501150- if (IS_ERR(slave->phy)) {11511151- dev_err(priv->dev, "phy %s not found on slave %d\n",11521152- slave->data->phy_id, slave->slave_num);11531153- slave->phy = NULL;11541154- } else {11551155- phy_attached_info(slave->phy);11561156-11571157- phy_start(slave->phy);11581158-11591159- /* Configure GMII_SEL register */11601160- cpsw_phy_sel(&priv->pdev->dev, slave->phy->interface,11611161- slave->slave_num);11571157+ if (IS_ERR(slave->phy)) {11581158+ dev_err(priv->dev,11591159+ "phy \"%s\" not found on slave %d, err %ld\n",11601160+ slave->data->phy_id, slave->slave_num,11611161+ PTR_ERR(slave->phy));11621162+ slave->phy = NULL;11631163+ return;11641164+ }11621165 }11661166+11671167+ phy_attached_info(slave->phy);11681168+11691169+ phy_start(slave->phy);11701170+11711171+ /* Configure GMII_SEL register */11721172+ cpsw_phy_sel(&priv->pdev->dev, slave->phy->interface, slave->slave_num);11631173}1164117411651175static inline void cpsw_add_default_vlan(struct cpsw_priv *priv)···19401932 slave->port_vlan = 
data->dual_emac_res_vlan;19411933}1942193419431943-static int cpsw_probe_dt(struct cpsw_priv *priv,19351935+static int cpsw_probe_dt(struct cpsw_platform_data *data,19441936 struct platform_device *pdev)19451937{19461938 struct device_node *node = pdev->dev.of_node;19471939 struct device_node *slave_node;19481948- struct cpsw_platform_data *data = &priv->data;19491940 int i = 0, ret;19501941 u32 prop;19511942···20322025 if (strcmp(slave_node->name, "slave"))20332026 continue;2034202720352035- priv->phy_node = of_parse_phandle(slave_node, "phy-handle", 0);20282028+ slave_data->phy_node = of_parse_phandle(slave_node,20292029+ "phy-handle", 0);20362030 parp = of_get_property(slave_node, "phy_id", &lenp);20372037- if (of_phy_is_fixed_link(slave_node)) {20382038- struct device_node *phy_node;20392039- struct phy_device *phy_dev;20402040-20312031+ if (slave_data->phy_node) {20322032+ dev_dbg(&pdev->dev,20332033+ "slave[%d] using phy-handle=\"%s\"\n",20342034+ i, slave_data->phy_node->full_name);20352035+ } else if (of_phy_is_fixed_link(slave_node)) {20412036 /* In the case of a fixed PHY, the DT node associated20422037 * to the PHY is the Ethernet MAC DT node.20432038 */20442039 ret = of_phy_register_fixed_link(slave_node);20452040 if (ret)20462041 return ret;20472047- phy_node = of_node_get(slave_node);20482048- phy_dev = of_phy_find_device(phy_node);20492049- if (!phy_dev)20502050- return -ENODEV;20512051- snprintf(slave_data->phy_id, sizeof(slave_data->phy_id),20522052- PHY_ID_FMT, phy_dev->mdio.bus->id,20532053- phy_dev->mdio.addr);20422042+ slave_data->phy_node = of_node_get(slave_node);20542043 } else if (parp) {20552044 u32 phyid;20562045 struct device_node *mdio_node;···20672064 snprintf(slave_data->phy_id, sizeof(slave_data->phy_id),20682065 PHY_ID_FMT, mdio->name, phyid);20692066 } else {20702070- dev_err(&pdev->dev, "No slave[%d] phy_id or fixed-link property\n", i);20672067+ dev_err(&pdev->dev,20682068+ "No slave[%d] phy_id, phy-handle, or fixed-link 
property\n",20692069+ i);20712070 goto no_phy_slave;20722071 }20732072 slave_data->phy_if = of_get_phy_mode(slave_node);···22712266 /* Select default pin state */22722267 pinctrl_pm_select_default_state(&pdev->dev);2273226822742274- if (cpsw_probe_dt(priv, pdev)) {22692269+ if (cpsw_probe_dt(&priv->data, pdev)) {22752270 dev_err(&pdev->dev, "cpsw: platform data missing\n");22762271 ret = -ENODEV;22772272 goto clean_runtime_disable_ret;
···2929#include <linux/crc32.h>3030#include <linux/usb/usbnet.h>3131#include <linux/slab.h>3232+#include <linux/of_net.h>3233#include "smsc75xx.h"33343435#define SMSC_CHIPNAME "smsc75xx"···762761763762static void smsc75xx_init_mac_address(struct usbnet *dev)764763{764764+ const u8 *mac_addr;765765+766766+ /* maybe the boot loader passed the MAC address in devicetree */767767+ mac_addr = of_get_mac_address(dev->udev->dev.of_node);768768+ if (mac_addr) {769769+ memcpy(dev->net->dev_addr, mac_addr, ETH_ALEN);770770+ return;771771+ }772772+765773 /* try reading mac address from EEPROM */766774 if (smsc75xx_read_eeprom(dev, EEPROM_MAC_OFFSET, ETH_ALEN,767775 dev->net->dev_addr) == 0) {···782772 }783773 }784774785785- /* no eeprom, or eeprom values are invalid. generate random MAC */775775+ /* no useful static MAC address found. generate a random one */786776 eth_hw_addr_random(dev->net);787777 netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr\n");788778}
+11-1
drivers/net/usb/smsc95xx.c
···2929#include <linux/crc32.h>3030#include <linux/usb/usbnet.h>3131#include <linux/slab.h>3232+#include <linux/of_net.h>3233#include "smsc95xx.h"33343435#define SMSC_CHIPNAME "smsc95xx"···766765767766static void smsc95xx_init_mac_address(struct usbnet *dev)768767{768768+ const u8 *mac_addr;769769+770770+ /* maybe the boot loader passed the MAC address in devicetree */771771+ mac_addr = of_get_mac_address(dev->udev->dev.of_node);772772+ if (mac_addr) {773773+ memcpy(dev->net->dev_addr, mac_addr, ETH_ALEN);774774+ return;775775+ }776776+769777 /* try reading mac address from EEPROM */770778 if (smsc95xx_read_eeprom(dev, EEPROM_MAC_OFFSET, ETH_ALEN,771779 dev->net->dev_addr) == 0) {···785775 }786776 }787777788788- /* no eeprom, or eeprom values are invalid. generate random MAC */778778+ /* no useful static MAC address found. generate a random one */789779 eth_hw_addr_random(dev->net);790780 netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr\n");791781}
+3-5
drivers/net/wireless/ath/ath9k/ar5008_phy.c
···274274 };275275 static const int inc[4] = { 0, 100, 0, 0 };276276277277+ memset(&mask_m, 0, sizeof(int8_t) * 123);278278+ memset(&mask_p, 0, sizeof(int8_t) * 123);279279+277280 cur_bin = -6000;278281 upper = bin + 100;279282 lower = bin - 100;···427424 int tmp, new;428425 int i;429426430430- int8_t mask_m[123];431431- int8_t mask_p[123];432427 int cur_bb_spur;433428 bool is2GHz = IS_CHAN_2GHZ(chan);434434-435435- memset(&mask_m, 0, sizeof(int8_t) * 123);436436- memset(&mask_p, 0, sizeof(int8_t) * 123);437429438430 for (i = 0; i < AR_EEPROM_MODAL_SPURS; i++) {439431 cur_bb_spur = ah->eep_ops->get_spur_channel(ah, i, is2GHz);
···172172static int vpfe_update_pipe_state(struct vpfe_video_device *video)173173{174174 struct vpfe_pipeline *pipe = &video->pipe;175175+ int ret;175176176176- if (vpfe_prepare_pipeline(video))177177- return vpfe_prepare_pipeline(video);177177+ ret = vpfe_prepare_pipeline(video);178178+ if (ret)179179+ return ret;178180179181 /*180182 * Find out if there is any input video···184182 */185183 if (pipe->input_num == 0) {186184 pipe->state = VPFE_PIPELINE_STREAM_CONTINUOUS;187187- if (vpfe_update_current_ext_subdev(video)) {185185+ ret = vpfe_update_current_ext_subdev(video);186186+ if (ret) {188187 pr_err("Invalid external subdev\n");189189- return vpfe_update_current_ext_subdev(video);188188+ return ret;190189 }191190 } else {192191 pipe->state = VPFE_PIPELINE_STREAM_SINGLESHOT;···670667 struct v4l2_subdev *subdev;671668 struct v4l2_format format;672669 struct media_pad *remote;670670+ int ret;673671674672 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_enum_fmt\n");675673···699695 sd_fmt.pad = remote->index;700696 sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;701697 /* get output format of remote subdev */702702- if (v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt)) {698698+ ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt);699699+ if (ret) {703700 v4l2_err(&vpfe_dev->v4l2_dev,704701 "invalid remote subdev for video node\n");705705- return v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt);702702+ return ret;706703 }707704 /* convert to pix format */708705 mbus.code = sd_fmt.format.code;···730725 struct vpfe_video_device *video = video_drvdata(file);731726 struct vpfe_device *vpfe_dev = video->vpfe_dev;732727 struct v4l2_format format;728728+ int ret;733729734730 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_fmt\n");735731 /* If streaming is started, return error */···739733 return -EBUSY;740734 }741735 /* get adjacent subdev's output pad format */742742- if (__vpfe_video_get_format(video, &format))743743- return __vpfe_video_get_format(video, 
&format);736736+ ret = __vpfe_video_get_format(video, &format);737737+ if (ret)738738+ return ret;744739 *fmt = format;745740 video->fmt = *fmt;746741 return 0;···764757 struct vpfe_video_device *video = video_drvdata(file);765758 struct vpfe_device *vpfe_dev = video->vpfe_dev;766759 struct v4l2_format format;760760+ int ret;767761768762 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_try_fmt\n");769763 /* get adjacent subdev's output pad format */770770- if (__vpfe_video_get_format(video, &format))771771- return __vpfe_video_get_format(video, &format);764764+ ret = __vpfe_video_get_format(video, &format);765765+ if (ret)766766+ return ret;772767773768 *fmt = format;774769 return 0;···847838848839 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_input\n");849840850850- if (mutex_lock_interruptible(&video->lock))851851- return mutex_lock_interruptible(&video->lock);841841+ ret = mutex_lock_interruptible(&video->lock);842842+ if (ret)843843+ return ret;852844 /*853845 * If streaming is started return device busy854846 * error···950940 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_std\n");951941952942 /* Call decoder driver function to set the standard */953953- if (mutex_lock_interruptible(&video->lock))954954- return mutex_lock_interruptible(&video->lock);943943+ ret = mutex_lock_interruptible(&video->lock);944944+ if (ret)945945+ return ret;955946 sdinfo = video->current_ext_subdev;956947 /* If streaming is started, return device busy error */957948 if (video->started) {···13381327 return -EINVAL;13391328 }1340132913411341- if (mutex_lock_interruptible(&video->lock))13421342- return mutex_lock_interruptible(&video->lock);13301330+ ret = mutex_lock_interruptible(&video->lock);13311331+ if (ret)13321332+ return ret;1343133313441334 if (video->io_usrs != 0) {13451335 v4l2_err(&vpfe_dev->v4l2_dev, "Only one IO user allowed\n");···13661354 q->buf_struct_size = sizeof(struct vpfe_cap_buffer);13671355 q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;1368135613691369- 
if (vb2_queue_init(q)) {13571357+ ret = vb2_queue_init(q);13581358+ if (ret) {13701359 v4l2_err(&vpfe_dev->v4l2_dev, "vb2_queue_init() failed\n");13711360 vb2_dma_contig_cleanup_ctx(vpfe_dev->pdev);13721372- return vb2_queue_init(q);13611361+ return ret;13731362 }1374136313751364 fh->io_allowed = 1;···15461533 return -EINVAL;15471534 }1548153515491549- if (mutex_lock_interruptible(&video->lock))15501550- return mutex_lock_interruptible(&video->lock);15361536+ ret = mutex_lock_interruptible(&video->lock);15371537+ if (ret)15381538+ return ret;1551153915521540 vpfe_stop_capture(video);15531541 ret = vb2_streamoff(&video->buffer_queue, buf_type);
+1-1
drivers/staging/rdma/hfi1/TODO
···33- Remove unneeded file entries in sysfs44- Remove software processing of IB protocol and place in library for use55 by qib, ipath (if still present), hfi1, and eventually soft-roce66-66+- Replace incorrect uAPI
+35-56
drivers/staging/rdma/hfi1/file_ops.c
···4949#include <linux/vmalloc.h>5050#include <linux/io.h>51515252+#include <rdma/ib.h>5353+5254#include "hfi.h"5355#include "pio.h"5456#include "device.h"···191189 __u64 user_val = 0;192190 int uctxt_required = 1;193191 int must_be_root = 0;192192+193193+ /* FIXME: This interface cannot continue out of staging */194194+ if (WARN_ON_ONCE(!ib_safe_file_access(fp)))195195+ return -EACCES;194196195197 if (count < sizeof(cmd)) {196198 ret = -EINVAL;···797791 spin_unlock_irqrestore(&dd->uctxt_lock, flags);798792799793 dd->rcd[uctxt->ctxt] = NULL;794794+795795+ hfi1_user_exp_rcv_free(fdata);796796+ hfi1_clear_ctxt_pkey(dd, uctxt->ctxt);797797+800798 uctxt->rcvwait_to = 0;801799 uctxt->piowait_to = 0;802800 uctxt->rcvnowait = 0;803801 uctxt->pionowait = 0;804802 uctxt->event_flags = 0;805805-806806- hfi1_user_exp_rcv_free(fdata);807807- hfi1_clear_ctxt_pkey(dd, uctxt->ctxt);808803809804 hfi1_stats.sps_ctxts--;810805 if (++dd->freectxts == dd->num_user_contexts)···1134112711351128static int user_init(struct file *fp)11361129{11371137- int ret;11381130 unsigned int rcvctrl_ops = 0;11391131 struct hfi1_filedata *fd = fp->private_data;11401132 struct hfi1_ctxtdata *uctxt = fd->uctxt;1141113311421134 /* make sure that the context has already been setup */11431143- if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags)) {11441144- ret = -EFAULT;11451145- goto done;11461146- }11471147-11481148- /*11491149- * Subctxts don't need to initialize anything since master11501150- * has done it.11511151- */11521152- if (fd->subctxt) {11531153- ret = wait_event_interruptible(uctxt->wait, !test_bit(11541154- HFI1_CTXT_MASTER_UNINIT,11551155- &uctxt->event_flags));11561156- goto expected;11571157- }11351135+ if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags))11361136+ return -EFAULT;1158113711591138 /* initialize poll variables... 
*/11601139 uctxt->urgent = 0;···11951202 wake_up(&uctxt->wait);11961203 }1197120411981198-expected:11991199- /*12001200- * Expected receive has to be setup for all processes (including12011201- * shared contexts). However, it has to be done after the master12021202- * context has been fully configured as it depends on the12031203- * eager/expected split of the RcvArray entries.12041204- * Setting it up here ensures that the subcontexts will be waiting12051205- * (due to the above wait_event_interruptible() until the master12061206- * is setup.12071207- */12081208- ret = hfi1_user_exp_rcv_init(fp);12091209-done:12101210- return ret;12051205+ return 0;12111206}1212120712131208static int get_ctxt_info(struct file *fp, void __user *ubase, __u32 len)···12421261 int ret = 0;1243126212441263 /*12451245- * Context should be set up only once (including allocation and12641264+ * Context should be set up only once, including allocation and12461265 * programming of eager buffers. This is done if context sharing12471266 * is not requested or by the master process.12481267 */···12631282 if (ret)12641283 goto done;12651284 }12851285+ } else {12861286+ ret = wait_event_interruptible(uctxt->wait, !test_bit(12871287+ HFI1_CTXT_MASTER_UNINIT,12881288+ &uctxt->event_flags));12891289+ if (ret)12901290+ goto done;12661291 }12921292+12671293 ret = hfi1_user_sdma_alloc_queues(uctxt, fp);12941294+ if (ret)12951295+ goto done;12961296+ /*12971297+ * Expected receive has to be setup for all processes (including12981298+ * shared contexts). 
However, it has to be done after the master12991299+ * context has been fully configured as it depends on the13001300+ * eager/expected split of the RcvArray entries.13011301+ * Setting it up here ensures that the subcontexts will be waiting13021302+ * (due to the above wait_event_interruptible() until the master13031303+ * is setup.13041304+ */13051305+ ret = hfi1_user_exp_rcv_init(fp);12681306 if (ret)12691307 goto done;12701308···15651565{15661566 struct hfi1_devdata *dd = filp->private_data;1567156715681568- switch (whence) {15691569- case SEEK_SET:15701570- break;15711571- case SEEK_CUR:15721572- offset += filp->f_pos;15731573- break;15741574- case SEEK_END:15751575- offset = ((dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE) -15761576- offset;15771577- break;15781578- default:15791579- return -EINVAL;15801580- }15811581-15821582- if (offset < 0)15831583- return -EINVAL;15841584-15851585- if (offset >= (dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE)15861586- return -EINVAL;15871587-15881588- filp->f_pos = offset;15891589-15901590- return filp->f_pos;15681568+ return fixed_size_llseek(filp, offset, whence,15691569+ (dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE);15911570}1592157115931572/* NOTE: assumes unsigned long is 8 bytes */
+25-15
drivers/staging/rdma/hfi1/mmu_rb.c
···7171 struct mm_struct *,7272 unsigned long, unsigned long);7373static void mmu_notifier_mem_invalidate(struct mmu_notifier *,7474+ struct mm_struct *,7475 unsigned long, unsigned long);7576static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *,7677 unsigned long, unsigned long);···138137 rbnode = rb_entry(node, struct mmu_rb_node, node);139138 rb_erase(node, root);140139 if (handler->ops->remove)141141- handler->ops->remove(root, rbnode, false);140140+ handler->ops->remove(root, rbnode, NULL);142141 }143142 }144143···177176 return ret;178177}179178180180-/* Caller must host handler lock */179179+/* Caller must hold handler lock */181180static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,182181 unsigned long addr,183182 unsigned long len)···201200 return node;202201}203202203203+/* Caller must *not* hold handler lock. */204204static void __mmu_rb_remove(struct mmu_rb_handler *handler,205205- struct mmu_rb_node *node, bool arg)205205+ struct mmu_rb_node *node, struct mm_struct *mm)206206{207207+ unsigned long flags;208208+207209 /* Validity of handler and node pointers has been checked by caller. 
*/208210 hfi1_cdbg(MMU, "Removing node addr 0x%llx, len %u", node->addr,209211 node->len);212212+ spin_lock_irqsave(&handler->lock, flags);210213 __mmu_int_rb_remove(node, handler->root);214214+ spin_unlock_irqrestore(&handler->lock, flags);215215+211216 if (handler->ops->remove)212212- handler->ops->remove(handler->root, node, arg);217217+ handler->ops->remove(handler->root, node, mm);213218}214219215220struct mmu_rb_node *hfi1_mmu_rb_search(struct rb_root *root, unsigned long addr,···238231void hfi1_mmu_rb_remove(struct rb_root *root, struct mmu_rb_node *node)239232{240233 struct mmu_rb_handler *handler = find_mmu_handler(root);241241- unsigned long flags;242234243235 if (!handler || !node)244236 return;245237246246- spin_lock_irqsave(&handler->lock, flags);247247- __mmu_rb_remove(handler, node, false);248248- spin_unlock_irqrestore(&handler->lock, flags);238238+ __mmu_rb_remove(handler, node, NULL);249239}250240251241static struct mmu_rb_handler *find_mmu_handler(struct rb_root *root)···264260static inline void mmu_notifier_page(struct mmu_notifier *mn,265261 struct mm_struct *mm, unsigned long addr)266262{267267- mmu_notifier_mem_invalidate(mn, addr, addr + PAGE_SIZE);263263+ mmu_notifier_mem_invalidate(mn, mm, addr, addr + PAGE_SIZE);268264}269265270266static inline void mmu_notifier_range_start(struct mmu_notifier *mn,···272268 unsigned long start,273269 unsigned long end)274270{275275- mmu_notifier_mem_invalidate(mn, start, end);271271+ mmu_notifier_mem_invalidate(mn, mm, start, end);276272}277273278274static void mmu_notifier_mem_invalidate(struct mmu_notifier *mn,275275+ struct mm_struct *mm,279276 unsigned long start, unsigned long end)280277{281278 struct mmu_rb_handler *handler =282279 container_of(mn, struct mmu_rb_handler, mn);283280 struct rb_root *root = handler->root;284284- struct mmu_rb_node *node;281281+ struct mmu_rb_node *node, *ptr = NULL;285282 unsigned long flags;286283287284 spin_lock_irqsave(&handler->lock, flags);288288- for (node = 
__mmu_int_rb_iter_first(root, start, end - 1); node;289289- node = __mmu_int_rb_iter_next(node, start, end - 1)) {285285+ for (node = __mmu_int_rb_iter_first(root, start, end - 1);286286+ node; node = ptr) {287287+ /* Guard against node removal. */288288+ ptr = __mmu_int_rb_iter_next(node, start, end - 1);290289 hfi1_cdbg(MMU, "Invalidating node addr 0x%llx, len %u",291290 node->addr, node->len);292292- if (handler->ops->invalidate(root, node))293293- __mmu_rb_remove(handler, node, true);291291+ if (handler->ops->invalidate(root, node)) {292292+ spin_unlock_irqrestore(&handler->lock, flags);293293+ __mmu_rb_remove(handler, node, mm);294294+ spin_lock_irqsave(&handler->lock, flags);295295+ }294296 }295297 spin_unlock_irqrestore(&handler->lock, flags);296298}
···519519 * do the flush work until that QP's520520 * sdma work has finished.521521 */522522+ spin_lock(&qp->s_lock);522523 if (qp->s_flags & RVT_S_WAIT_DMA) {523524 qp->s_flags &= ~RVT_S_WAIT_DMA;524525 hfi1_schedule_send(qp);525526 }527527+ spin_unlock(&qp->s_lock);526528}527529528530/**
+7-4
drivers/staging/rdma/hfi1/user_exp_rcv.c
···8787static int set_rcvarray_entry(struct file *, unsigned long, u32,8888 struct tid_group *, struct page **, unsigned);8989static int mmu_rb_insert(struct rb_root *, struct mmu_rb_node *);9090-static void mmu_rb_remove(struct rb_root *, struct mmu_rb_node *, bool);9090+static void mmu_rb_remove(struct rb_root *, struct mmu_rb_node *,9191+ struct mm_struct *);9192static int mmu_rb_invalidate(struct rb_root *, struct mmu_rb_node *);9293static int program_rcvarray(struct file *, unsigned long, struct tid_group *,9394 struct tid_pageset *, unsigned, u16, struct page **,···255254 struct hfi1_ctxtdata *uctxt = fd->uctxt;256255 struct tid_group *grp, *gptr;257256257257+ if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags))258258+ return 0;258259 /*259260 * The notifier would have been removed when the process'es mm260261 * was freed.···902899 if (!node || node->rcventry != (uctxt->expected_base + rcventry))903900 return -EBADF;904901 if (HFI1_CAP_IS_USET(TID_UNMAP))905905- mmu_rb_remove(&fd->tid_rb_root, &node->mmu, false);902902+ mmu_rb_remove(&fd->tid_rb_root, &node->mmu, NULL);906903 else907904 hfi1_mmu_rb_remove(&fd->tid_rb_root, &node->mmu);908905···968965 continue;969966 if (HFI1_CAP_IS_USET(TID_UNMAP))970967 mmu_rb_remove(&fd->tid_rb_root,971971- &node->mmu, false);968968+ &node->mmu, NULL);972969 else973970 hfi1_mmu_rb_remove(&fd->tid_rb_root,974971 &node->mmu);···10351032}1036103310371034static void mmu_rb_remove(struct rb_root *root, struct mmu_rb_node *node,10381038- bool notifier)10351035+ struct mm_struct *mm)10391036{10401037 struct hfi1_filedata *fdata =10411038 container_of(root, struct hfi1_filedata, tid_rb_root);
+22-11
drivers/staging/rdma/hfi1/user_sdma.c
···278278static void user_sdma_free_request(struct user_sdma_request *, bool);279279static int pin_vector_pages(struct user_sdma_request *,280280 struct user_sdma_iovec *);281281-static void unpin_vector_pages(struct mm_struct *, struct page **, unsigned);281281+static void unpin_vector_pages(struct mm_struct *, struct page **, unsigned,282282+ unsigned);282283static int check_header_template(struct user_sdma_request *,283284 struct hfi1_pkt_header *, u32, u32);284285static int set_txreq_header(struct user_sdma_request *,···300299static void activate_packet_queue(struct iowait *, int);301300static bool sdma_rb_filter(struct mmu_rb_node *, unsigned long, unsigned long);302301static int sdma_rb_insert(struct rb_root *, struct mmu_rb_node *);303303-static void sdma_rb_remove(struct rb_root *, struct mmu_rb_node *, bool);302302+static void sdma_rb_remove(struct rb_root *, struct mmu_rb_node *,303303+ struct mm_struct *);304304static int sdma_rb_invalidate(struct rb_root *, struct mmu_rb_node *);305305306306static struct mmu_rb_ops sdma_rb_ops = {···10651063 rb_node = hfi1_mmu_rb_search(&pq->sdma_rb_root,10661064 (unsigned long)iovec->iov.iov_base,10671065 iovec->iov.iov_len);10681068- if (rb_node)10661066+ if (rb_node && !IS_ERR(rb_node))10691067 node = container_of(rb_node, struct sdma_mmu_node, rb);10681068+ else10691069+ rb_node = NULL;1070107010711071 if (!node) {10721072 node = kzalloc(sizeof(*node), GFP_KERNEL);···11111107 goto bail;11121108 }11131109 if (pinned != npages) {11141114- unpin_vector_pages(current->mm, pages, pinned);11101110+ unpin_vector_pages(current->mm, pages, node->npages,11111111+ pinned);11151112 ret = -EFAULT;11161113 goto bail;11171114 }···11521147}1153114811541149static void unpin_vector_pages(struct mm_struct *mm, struct page **pages,11551155- unsigned npages)11501150+ unsigned start, unsigned npages)11561151{11571157- hfi1_release_user_pages(mm, pages, npages, 0);11521152+ hfi1_release_user_pages(mm, pages + start, npages, 0);11581153 
kfree(pages);11591154}11601155···15071502 &req->pq->sdma_rb_root,15081503 (unsigned long)req->iovs[i].iov.iov_base,15091504 req->iovs[i].iov.iov_len);15101510- if (!mnode)15051505+ if (!mnode || IS_ERR(mnode))15111506 continue;1512150715131508 node = container_of(mnode, struct sdma_mmu_node, rb);···15521547}1553154815541549static void sdma_rb_remove(struct rb_root *root, struct mmu_rb_node *mnode,15551555- bool notifier)15501550+ struct mm_struct *mm)15561551{15571552 struct sdma_mmu_node *node =15581553 container_of(mnode, struct sdma_mmu_node, rb);···15621557 node->pq->n_locked -= node->npages;15631558 spin_unlock(&node->pq->evict_lock);1564155915651565- unpin_vector_pages(notifier ? NULL : current->mm, node->pages,15601560+ /*15611561+ * If mm is set, we are being called by the MMU notifier and we15621562+ * should not pass a mm_struct to unpin_vector_pages(). This is to15631563+ * prevent a deadlock when hfi1_release_user_pages() attempts to15641564+ * take the mmap_sem, which the MMU notifier has already taken.15651565+ */15661566+ unpin_vector_pages(mm ? NULL : current->mm, node->pages, 0,15661567 node->npages);15671568 /*15681569 * If called by the MMU notifier, we have to adjust the pinned15691570 * page count ourselves.15701571 */15711571- if (notifier)15721572- current->mm->pinned_vm -= node->npages;15721572+ if (mm)15731573+ mm->pinned_vm -= node->npages;15731574 kfree(node);15741575}15751576
···3232#error Wordsize not 32 or 643333#endif34343535+/*3636+ * The above primes are actively bad for hashing, since they are3737+ * too sparse. The 32-bit one is mostly ok, the 64-bit one causes3838+ * real problems. Besides, the "prime" part is pointless for the3939+ * multiplicative hash.4040+ *4141+ * Although a random odd number will do, it turns out that the golden4242+ * ratio phi = (sqrt(5)-1)/2, or its negative, has particularly nice4343+ * properties.4444+ *4545+ * These are the negative, (1 - phi) = (phi^2) = (3 - sqrt(5))/2.4646+ * (See Knuth vol 3, section 6.4, exercise 9.)4747+ */4848+#define GOLDEN_RATIO_32 0x61C886474949+#define GOLDEN_RATIO_64 0x61C8864680B583EBull5050+3551static __always_inline u64 hash_64(u64 val, unsigned int bits)3652{3753 u64 hash = val;38543939-#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 644040- hash = hash * GOLDEN_RATIO_PRIME_64;5555+#if BITS_PER_LONG == 645656+ hash = hash * GOLDEN_RATIO_64;4157#else4258 /* Sigh, gcc can't optimise this alone like it does for 32 bits. */4359 u64 n = hash;
···196196 * We record lock dependency chains, so that we can cache them:197197 */198198struct lock_chain {199199- u8 irq_context;200200- u8 depth;201201- u16 base;199199+ /* see BUILD_BUG_ON()s in lookup_chain_cache() */200200+ unsigned int irq_context : 2,201201+ depth : 6,202202+ base : 24;203203+ /* 4 byte hole */202204 struct hlist_node entry;203205 u64 chain_key;204206};
+11
include/linux/mlx5/device.h
···393393 MLX5_CAP_OFF_CMDIF_CSUM = 46,394394};395395396396+enum {397397+ /*398398+ * Max wqe size for rdma read is 512 bytes, so this399399+ * limits our max_sge_rd as the wqe needs to fit:400400+ * - ctrl segment (16 bytes)401401+ * - rdma segment (16 bytes)402402+ * - scatter elements (16 bytes each)403403+ */404404+ MLX5_MAX_SGE_RD = (512 - 16 - 16) / 16405405+};406406+396407struct mlx5_inbox_hdr {397408 __be16 opcode;398409 u8 rsvd[4];
+4
include/linux/mm.h
···10311031 page = compound_head(page);10321032 if (atomic_read(compound_mapcount_ptr(page)) >= 0)10331033 return true;10341034+ if (PageHuge(page))10351035+ return false;10341036 for (i = 0; i < hpage_nr_pages(page); i++) {10351037 if (atomic_read(&page[i]._mapcount) >= 0)10361038 return true;···1140113811411139struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,11421140 pte_t pte);11411141+struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,11421142+ pmd_t pmd);1143114311441144int zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,11451145 unsigned long size);
···375375/**376376 * struct vb2_ops - driver-specific callbacks377377 *378378+ * @verify_planes_array: Verify that a given user space structure contains379379+ * enough planes for the buffer. This is called380380+ * for each dequeued buffer.378381 * @fill_user_buffer: given a vb2_buffer fill in the userspace structure.379382 * For V4L2 this is a struct v4l2_buffer.380383 * @fill_vb2_buffer: given a userspace structure, fill in the vb2_buffer.···387384 * the vb2_buffer struct.388385 */389386struct vb2_buf_ops {387387+ int (*verify_planes_array)(struct vb2_buffer *vb, const void *pb);390388 void (*fill_user_buffer)(struct vb2_buffer *vb, void *pb);391389 int (*fill_vb2_buffer)(struct vb2_buffer *vb, const void *pb,392390 struct vb2_plane *planes);···404400 * @fileio_read_once: report EOF after reading the first buffer405401 * @fileio_write_immediately: queue buffer after each write() call406402 * @allow_zero_bytesused: allow bytesused == 0 to be passed to the driver403403+ * @quirk_poll_must_check_waiting_for_buffers: Return POLLERR at poll when QBUF404404+ * has not been called. This is a vb1 idiom that has been adopted405405+ * also by vb2.407406 * @lock: pointer to a mutex that protects the vb2_queue struct. The408407 * driver can set this to a mutex to let the v4l2 core serialize409408 * the queuing ioctls. If the driver wants to handle locking···470463 unsigned fileio_read_once:1;471464 unsigned fileio_write_immediately:1;472465 unsigned allow_zero_bytesused:1;466466+ unsigned quirk_poll_must_check_waiting_for_buffers:1;473467474468 struct mutex *lock;475469 void *owner;
···3434#define _RDMA_IB_H35353636#include <linux/types.h>3737+#include <linux/sched.h>37383839struct ib_addr {3940 union {···8685 __be64 sib_sid_mask;8786 __u64 sib_scope_id;8887};8888+8989+/*9090+ * The IB interfaces that use write() as bi-directional ioctl() are9191+ * fundamentally unsafe, since there are lots of ways to trigger "write()"9292+ * calls from various contexts with elevated privileges. That includes the9393+ * traditional suid executable error message writes, but also various kernel9494+ * interfaces that can write to file descriptors.9595+ *9696+ * This function provides protection for the legacy API by restricting the9797+ * calling context.9898+ */9999+static inline bool ib_safe_file_access(struct file *filp)100100+{101101+ return filp->f_cred == current_cred() && segment_eq(get_fs(), USER_DS);102102+}8910390104#endif /* _RDMA_IB_H */
···3131{3232 switch (type) {3333 case BPF_TYPE_PROG:3434- atomic_inc(&((struct bpf_prog *)raw)->aux->refcnt);3434+ raw = bpf_prog_inc(raw);3535 break;3636 case BPF_TYPE_MAP:3737- bpf_map_inc(raw, true);3737+ raw = bpf_map_inc(raw, true);3838 break;3939 default:4040 WARN_ON_ONCE(1);···297297 goto out;298298299299 raw = bpf_any_get(inode->i_private, *type);300300- touch_atime(&path);300300+ if (!IS_ERR(raw))301301+ touch_atime(&path);301302302303 path_put(&path);303304 return raw;
+20-4
kernel/bpf/syscall.c
···218218 return f.file->private_data;219219}220220221221-void bpf_map_inc(struct bpf_map *map, bool uref)221221+/* prog's and map's refcnt limit */222222+#define BPF_MAX_REFCNT 32768223223+224224+struct bpf_map *bpf_map_inc(struct bpf_map *map, bool uref)222225{223223- atomic_inc(&map->refcnt);226226+ if (atomic_inc_return(&map->refcnt) > BPF_MAX_REFCNT) {227227+ atomic_dec(&map->refcnt);228228+ return ERR_PTR(-EBUSY);229229+ }224230 if (uref)225231 atomic_inc(&map->usercnt);232232+ return map;226233}227234228235struct bpf_map *bpf_map_get_with_uref(u32 ufd)···241234 if (IS_ERR(map))242235 return map;243236244244- bpf_map_inc(map, true);237237+ map = bpf_map_inc(map, true);245238 fdput(f);246239247240 return map;···665658 return f.file->private_data;666659}667660661661+struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog)662662+{663663+ if (atomic_inc_return(&prog->aux->refcnt) > BPF_MAX_REFCNT) {664664+ atomic_dec(&prog->aux->refcnt);665665+ return ERR_PTR(-EBUSY);666666+ }667667+ return prog;668668+}669669+668670/* called by sockets/tracing/seccomp before attaching program to an event669671 * pairs with bpf_prog_put()670672 */···686670 if (IS_ERR(prog))687671 return prog;688672689689- atomic_inc(&prog->aux->refcnt);673673+ prog = bpf_prog_inc(prog);690674 fdput(f);691675692676 return prog;
+47-29
kernel/bpf/verifier.c
···249249 [CONST_IMM] = "imm",250250};251251252252-static const struct {253253- int map_type;254254- int func_id;255255-} func_limit[] = {256256- {BPF_MAP_TYPE_PROG_ARRAY, BPF_FUNC_tail_call},257257- {BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_read},258258- {BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_output},259259- {BPF_MAP_TYPE_STACK_TRACE, BPF_FUNC_get_stackid},260260-};261261-262252static void print_verifier_state(struct verifier_env *env)263253{264254 enum bpf_reg_type t;···933943934944static int check_map_func_compatibility(struct bpf_map *map, int func_id)935945{936936- bool bool_map, bool_func;937937- int i;938938-939946 if (!map)940947 return 0;941948942942- for (i = 0; i < ARRAY_SIZE(func_limit); i++) {943943- bool_map = (map->map_type == func_limit[i].map_type);944944- bool_func = (func_id == func_limit[i].func_id);945945- /* only when map & func pair match it can continue.946946- * don't allow any other map type to be passed into947947- * the special func;948948- */949949- if (bool_func && bool_map != bool_func) {950950- verbose("cannot pass map_type %d into func %d\n",951951- map->map_type, func_id);952952- return -EINVAL;953953- }949949+ /* We need a two way check, first is from map perspective ... */950950+ switch (map->map_type) {951951+ case BPF_MAP_TYPE_PROG_ARRAY:952952+ if (func_id != BPF_FUNC_tail_call)953953+ goto error;954954+ break;955955+ case BPF_MAP_TYPE_PERF_EVENT_ARRAY:956956+ if (func_id != BPF_FUNC_perf_event_read &&957957+ func_id != BPF_FUNC_perf_event_output)958958+ goto error;959959+ break;960960+ case BPF_MAP_TYPE_STACK_TRACE:961961+ if (func_id != BPF_FUNC_get_stackid)962962+ goto error;963963+ break;964964+ default:965965+ break;966966+ }967967+968968+ /* ... and second from the function itself. 
*/969969+ switch (func_id) {970970+ case BPF_FUNC_tail_call:971971+ if (map->map_type != BPF_MAP_TYPE_PROG_ARRAY)972972+ goto error;973973+ break;974974+ case BPF_FUNC_perf_event_read:975975+ case BPF_FUNC_perf_event_output:976976+ if (map->map_type != BPF_MAP_TYPE_PERF_EVENT_ARRAY)977977+ goto error;978978+ break;979979+ case BPF_FUNC_get_stackid:980980+ if (map->map_type != BPF_MAP_TYPE_STACK_TRACE)981981+ goto error;982982+ break;983983+ default:984984+ break;954985 }955986956987 return 0;988988+error:989989+ verbose("cannot pass map_type %d into func %d\n",990990+ map->map_type, func_id);991991+ return -EINVAL;957992}958993959994static int check_raw_mode(const struct bpf_func_proto *fn)···21262111 return -E2BIG;21272112 }2128211321292129- /* remember this map */21302130- env->used_maps[env->used_map_cnt++] = map;21312131-21322114 /* hold the map. If the program is rejected by verifier,21332115 * the map will be released by release_maps() or it21342116 * will be used by the valid program until it's unloaded21352117 * and all maps are released in free_bpf_prog_info()21362118 */21372137- bpf_map_inc(map, false);21192119+ map = bpf_map_inc(map, false);21202120+ if (IS_ERR(map)) {21212121+ fdput(f);21222122+ return PTR_ERR(map);21232123+ }21242124+ env->used_maps[env->used_map_cnt++] = map;21252125+21382126 fdput(f);21392127next_insn:21402128 insn++;
+5-2
kernel/cgroup.c
···28252825 size_t nbytes, loff_t off, bool threadgroup)28262826{28272827 struct task_struct *tsk;28282828+ struct cgroup_subsys *ss;28282829 struct cgroup *cgrp;28292830 pid_t pid;28302830- int ret;28312831+ int ssid, ret;2831283228322833 if (kstrtoint(strstrip(buf), 0, &pid) || pid < 0)28332834 return -EINVAL;···28762875 rcu_read_unlock();28772876out_unlock_threadgroup:28782877 percpu_up_write(&cgroup_threadgroup_rwsem);28782878+ for_each_subsys(ss, ssid)28792879+ if (ss->post_attach)28802880+ ss->post_attach();28792881 cgroup_kn_unlock(of->kn);28802880- cpuset_post_attach_flush();28812882 return ret ?: nbytes;28822883}28832884
···412412 if (ret || !write)413413 return ret;414414415415- if (sysctl_perf_cpu_time_max_percent == 100) {415415+ if (sysctl_perf_cpu_time_max_percent == 100 ||416416+ sysctl_perf_cpu_time_max_percent == 0) {416417 printk(KERN_WARNING417418 "perf: Dynamic interrupt throttling disabled, can hang your system!\n");418419 WRITE_ONCE(perf_sample_allowed_ns, 0);···11061105 * function.11071106 *11081107 * Lock order:11081108+ * cred_guard_mutex11091109 * task_struct::perf_event_mutex11101110 * perf_event_context::mutex11111111 * perf_event::child_mutex;···34223420find_lively_task_by_vpid(pid_t vpid)34233421{34243422 struct task_struct *task;34253425- int err;3426342334273424 rcu_read_lock();34283425 if (!vpid)···34353434 if (!task)34363435 return ERR_PTR(-ESRCH);3437343634383438- /* Reuse ptrace permission checks for now. */34393439- err = -EACCES;34403440- if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS))34413441- goto errout;34423442-34433437 return task;34443444-errout:34453445- put_task_struct(task);34463446- return ERR_PTR(err);34473447-34483438}3449343934503440/*···8438844684398447 get_online_cpus();8440844884498449+ if (task) {84508450+ err = mutex_lock_interruptible(&task->signal->cred_guard_mutex);84518451+ if (err)84528452+ goto err_cpus;84538453+84548454+ /*84558455+ * Reuse ptrace permission checks for now.84568456+ *84578457+ * We must hold cred_guard_mutex across this and any potential84588458+ * perf_install_in_context() call for this new event to84598459+ * serialize against exec() altering our credentials (and the84608460+ * perf_event_exit_task() that could imply).84618461+ */84628462+ err = -EACCES;84638463+ if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS))84648464+ goto err_cred;84658465+ }84668466+84418467 if (flags & PERF_FLAG_PID_CGROUP)84428468 cgroup_fd = pid;84438469···84638453 NULL, NULL, cgroup_fd);84648454 if (IS_ERR(event)) {84658455 err = PTR_ERR(event);84668466- goto err_cpus;84568456+ goto err_cred;84678457 
}8468845884698459 if (is_sampling_event(event)) {···85208510 if ((pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE) && group_leader) {85218511 err = -EBUSY;85228512 goto err_context;85238523- }85248524-85258525- if (task) {85268526- put_task_struct(task);85278527- task = NULL;85288513 }8529851485308515 /*···8619861486208615 WARN_ON_ONCE(ctx->parent_ctx);8621861686178617+ /*86188618+ * This is the point on no return; we cannot fail hereafter. This is86198619+ * where we start modifying current state.86208620+ */86218621+86228622 if (move_group) {86238623 /*86248624 * See perf_event_ctx_lock() for comments on the details···86958685 mutex_unlock(&gctx->mutex);86968686 mutex_unlock(&ctx->mutex);8697868786888688+ if (task) {86898689+ mutex_unlock(&task->signal->cred_guard_mutex);86908690+ put_task_struct(task);86918691+ }86928692+86988693 put_online_cpus();8699869487008695 mutex_lock(¤t->perf_event_mutex);···87328717 */87338718 if (!event_file)87348719 free_event(event);87208720+err_cred:87218721+ if (task)87228722+ mutex_unlock(&task->signal->cred_guard_mutex);87358723err_cpus:87368724 put_online_cpus();87378725err_task:···9019900190209002/*90219003 * When a child task exits, feed back event values to parent events.90049004+ *90059005+ * Can be called with cred_guard_mutex held when called from90069006+ * install_exec_creds().90229007 */90239008void perf_event_exit_task(struct task_struct *child)90249009{
+2-1
kernel/kcov.c
···11#define pr_fmt(fmt) "kcov: " fmt2233+#define DISABLE_BRANCH_PROFILING34#include <linux/compiler.h>45#include <linux/types.h>56#include <linux/file.h>···4443 * Entry point from instrumented code.4544 * This is called once per basic-block/edge.4645 */4747-void __sanitizer_cov_trace_pc(void)4646+void notrace __sanitizer_cov_trace_pc(void)4847{4948 struct task_struct *t;5049 enum kcov_mode mode;
···21762176 chain->irq_context = hlock->irq_context;21772177 i = get_first_held_lock(curr, hlock);21782178 chain->depth = curr->lockdep_depth + 1 - i;21792179+21802180+ BUILD_BUG_ON((1UL << 24) <= ARRAY_SIZE(chain_hlocks));21812181+ BUILD_BUG_ON((1UL << 6) <= ARRAY_SIZE(curr->held_locks));21822182+ BUILD_BUG_ON((1UL << 8*sizeof(chain_hlocks[0])) <= ARRAY_SIZE(lock_classes));21832183+21792184 if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) {21802185 chain->base = nr_chain_hlocks;21812181- nr_chain_hlocks += chain->depth;21822186 for (j = 0; j < chain->depth - 1; j++, i++) {21832187 int lock_id = curr->held_locks[i].class_idx - 1;21842188 chain_hlocks[chain->base + j] = lock_id;21852189 }21862190 chain_hlocks[chain->base + j] = class - lock_classes;21872191 }21922192+21932193+ if (nr_chain_hlocks < MAX_LOCKDEP_CHAIN_HLOCKS)21942194+ nr_chain_hlocks += chain->depth;21952195+21962196+#ifdef CONFIG_DEBUG_LOCKDEP21972197+ /*21982198+ * Important for check_no_collision().21992199+ */22002200+ if (unlikely(nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS)) {22012201+ if (debug_locks_off_graph_unlock())22022202+ return 0;22032203+22042204+ print_lockdep_off("BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!");22052205+ dump_stack();22062206+ return 0;22072207+ }22082208+#endif22092209+21882210 hlist_add_head_rcu(&chain->entry, hash_head);21892211 debug_atomic_inc(chain_lookup_misses);21902212 inc_chains();···29542932 return 1;29552933}2956293429352935+static inline unsigned int task_irq_context(struct task_struct *task)29362936+{29372937+ return 2 * !!task->hardirq_context + !!task->softirq_context;29382938+}29392939+29572940static int separate_irq_context(struct task_struct *curr,29582941 struct held_lock *hlock)29592942{···29672940 /*29682941 * Keep track of points where we cross into an interrupt context:29692942 */29702970- hlock->irq_context = 2*(curr->hardirq_context ? 
1 : 0) +29712971- curr->softirq_context;29722943 if (depth) {29732944 struct held_lock *prev_hlock;29742945···29962971 struct held_lock *hlock)29972972{29982973 return 1;29742974+}29752975+29762976+static inline unsigned int task_irq_context(struct task_struct *task)29772977+{29782978+ return 0;29992979}3000298030012981static inline int separate_irq_context(struct task_struct *curr,···32713241 hlock->acquire_ip = ip;32723242 hlock->instance = lock;32733243 hlock->nest_lock = nest_lock;32443244+ hlock->irq_context = task_irq_context(curr);32743245 hlock->trylock = trylock;32753246 hlock->read = read;32763247 hlock->check = check;
+2
kernel/locking/lockdep_proc.c
···141141 int i;142142143143 if (v == SEQ_START_TOKEN) {144144+ if (nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS)145145+ seq_printf(m, "(buggered) ");144146 seq_printf(m, "all lock chains:\n");145147 return 0;146148 }
+29
kernel/workqueue.c
···666666 */667667 smp_wmb();668668 set_work_data(work, (unsigned long)pool_id << WORK_OFFQ_POOL_SHIFT, 0);669669+ /*670670+ * The following mb guarantees that previous clear of a PENDING bit671671+ * will not be reordered with any speculative LOADS or STORES from672672+ * work->current_func, which is executed afterwards. This possible673673+ * reordering can lead to a missed execution on attempt to queue674674+ * the same @work. E.g. consider this case:675675+ *676676+ * CPU#0 CPU#1677677+ * ---------------------------- --------------------------------678678+ *679679+ * 1 STORE event_indicated680680+ * 2 queue_work_on() {681681+ * 3 test_and_set_bit(PENDING)682682+ * 4 } set_..._and_clear_pending() {683683+ * 5 set_work_data() # clear bit684684+ * 6 smp_mb()685685+ * 7 work->current_func() {686686+ * 8 LOAD event_indicated687687+ * }688688+ *689689+ * Without an explicit full barrier, speculative LOAD on line 8 can690690+ * be executed before CPU#0 does STORE on line 1. If that happens,691691+ * CPU#0 observes the PENDING bit is still set and new execution of692692+ * a @work is not queued in the hope that CPU#1 will eventually693693+ * finish the queued @work. Meanwhile CPU#1 does not see694694+ * event_indicated is set, because speculative LOAD was executed695695+ * before actual STORE.696696+ */697697+ smp_mb();669698}670699671700static void clear_work_data(struct work_struct *work)
-4
lib/stackdepot.c
···210210 goto fast_exit;211211212212 hash = hash_stack(trace->entries, trace->nr_entries);213213- /* Bad luck, we won't store this stack. */214214- if (hash == 0)215215- goto exit;216216-217213 bucket = &stack_table[hash & STACK_HASH_MASK];218214219215 /*
+5-7
mm/huge_memory.c
···232232 return READ_ONCE(huge_zero_page);233233}234234235235-static void put_huge_zero_page(void)235235+void put_huge_zero_page(void)236236{237237 /*238238 * Counter should never go to zero here. Only shrinker can put···16841684 if (vma_is_dax(vma)) {16851685 spin_unlock(ptl);16861686 if (is_huge_zero_pmd(orig_pmd))16871687- put_huge_zero_page();16871687+ tlb_remove_page(tlb, pmd_page(orig_pmd));16881688 } else if (is_huge_zero_pmd(orig_pmd)) {16891689 pte_free(tlb->mm, pgtable_trans_huge_withdraw(tlb->mm, pmd));16901690 atomic_long_dec(&tlb->mm->nr_ptes);16911691 spin_unlock(ptl);16921692- put_huge_zero_page();16921692+ tlb_remove_page(tlb, pmd_page(orig_pmd));16931693 } else {16941694 struct page *page = pmd_page(orig_pmd);16951695 page_remove_rmap(page, true);···19601960 * page fault if needed.19611961 */19621962 return 0;19631963- if (vma->vm_ops)19631963+ if (vma->vm_ops || (vm_flags & VM_NO_THP))19641964 /* khugepaged not yet working on file or special mappings */19651965 return 0;19661966- VM_BUG_ON_VMA(vm_flags & VM_NO_THP, vma);19671966 hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;19681967 hend = vma->vm_end & HPAGE_PMD_MASK;19691968 if (hstart < hend)···23512352 return false;23522353 if (is_vma_temporary_stack(vma))23532354 return false;23542354- VM_BUG_ON_VMA(vma->vm_flags & VM_NO_THP, vma);23552355- return true;23552355+ return !(vma->vm_flags & VM_NO_THP);23562356}2357235723582358static void collapse_huge_page(struct mm_struct *mm,
+19-18
mm/memcontrol.c
···207207/* "mc" and its members are protected by cgroup_mutex */208208static struct move_charge_struct {209209 spinlock_t lock; /* for from, to */210210+ struct mm_struct *mm;210211 struct mem_cgroup *from;211212 struct mem_cgroup *to;212213 unsigned long flags;···4668466746694668static void mem_cgroup_clear_mc(void)46704669{46704670+ struct mm_struct *mm = mc.mm;46714671+46714672 /*46724673 * we must clear moving_task before waking up waiters at the end of46734674 * task migration.···46794676 spin_lock(&mc.lock);46804677 mc.from = NULL;46814678 mc.to = NULL;46794679+ mc.mm = NULL;46824680 spin_unlock(&mc.lock);46814681+46824682+ mmput(mm);46834683}4684468446854685static int mem_cgroup_can_attach(struct cgroup_taskset *tset)···47394733 VM_BUG_ON(mc.moved_swap);4740473447414735 spin_lock(&mc.lock);47364736+ mc.mm = mm;47424737 mc.from = from;47434738 mc.to = memcg;47444739 mc.flags = move_flags;···47494742 ret = mem_cgroup_precharge_mc(mm);47504743 if (ret)47514744 mem_cgroup_clear_mc();47454745+ } else {47464746+ mmput(mm);47524747 }47534753- mmput(mm);47544748 return ret;47554749}47564750···48604852 return ret;48614853}4862485448634863-static void mem_cgroup_move_charge(struct mm_struct *mm)48554855+static void mem_cgroup_move_charge(void)48644856{48654857 struct mm_walk mem_cgroup_move_charge_walk = {48664858 .pmd_entry = mem_cgroup_move_charge_pte_range,48674867- .mm = mm,48594859+ .mm = mc.mm,48684860 };4869486148704862 lru_add_drain_all();···48764868 atomic_inc(&mc.from->moving_account);48774869 synchronize_rcu();48784870retry:48794879- if (unlikely(!down_read_trylock(&mm->mmap_sem))) {48714871+ if (unlikely(!down_read_trylock(&mc.mm->mmap_sem))) {48804872 /*48814873 * Someone who are holding the mmap_sem might be waiting in48824874 * waitq. 
So we cancel all extra charges, wake up all waiters,···48934885 * additional charge, the page walk just aborts.48944886 */48954887 walk_page_range(0, ~0UL, &mem_cgroup_move_charge_walk);48964896- up_read(&mm->mmap_sem);48884888+ up_read(&mc.mm->mmap_sem);48974889 atomic_dec(&mc.from->moving_account);48984890}4899489149004900-static void mem_cgroup_move_task(struct cgroup_taskset *tset)48924892+static void mem_cgroup_move_task(void)49014893{49024902- struct cgroup_subsys_state *css;49034903- struct task_struct *p = cgroup_taskset_first(tset, &css);49044904- struct mm_struct *mm = get_task_mm(p);49054905-49064906- if (mm) {49074907- if (mc.to)49084908- mem_cgroup_move_charge(mm);49094909- mmput(mm);49104910- }49114911- if (mc.to)48944894+ if (mc.to) {48954895+ mem_cgroup_move_charge();49124896 mem_cgroup_clear_mc();48974897+ }49134898}49144899#else /* !CONFIG_MMU */49154900static int mem_cgroup_can_attach(struct cgroup_taskset *tset)···49124911static void mem_cgroup_cancel_attach(struct cgroup_taskset *tset)49134912{49144913}49154915-static void mem_cgroup_move_task(struct cgroup_taskset *tset)49144914+static void mem_cgroup_move_task(void)49164915{49174916}49184917#endif···51965195 .css_reset = mem_cgroup_css_reset,51975196 .can_attach = mem_cgroup_can_attach,51985197 .cancel_attach = mem_cgroup_cancel_attach,51995199- .attach = mem_cgroup_move_task,51985198+ .post_attach = mem_cgroup_move_task,52005199 .bind = mem_cgroup_bind,52015200 .dfl_cftypes = memory_files,52025201 .legacy_cftypes = mem_cgroup_legacy_files,
···789789 return pfn_to_page(pfn);790790}791791792792+#ifdef CONFIG_TRANSPARENT_HUGEPAGE793793+struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,794794+ pmd_t pmd)795795+{796796+ unsigned long pfn = pmd_pfn(pmd);797797+798798+ /*799799+ * There is no pmd_special() but there may be special pmds, e.g.800800+ * in a direct-access (dax) mapping, so let's just replicate the801801+ * !HAVE_PTE_SPECIAL case from vm_normal_page() here.802802+ */803803+ if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {804804+ if (vma->vm_flags & VM_MIXEDMAP) {805805+ if (!pfn_valid(pfn))806806+ return NULL;807807+ goto out;808808+ } else {809809+ unsigned long off;810810+ off = (addr - vma->vm_start) >> PAGE_SHIFT;811811+ if (pfn == vma->vm_pgoff + off)812812+ return NULL;813813+ if (!is_cow_mapping(vma->vm_flags))814814+ return NULL;815815+ }816816+ }817817+818818+ if (is_zero_pfn(pfn))819819+ return NULL;820820+ if (unlikely(pfn > highest_memmap_pfn))821821+ return NULL;822822+823823+ /*824824+ * NOTE! We still have PageReserved() pages in the page tables.825825+ * eg. VDSO mappings can cause them to exist.826826+ */827827+out:828828+ return pfn_to_page(pfn);829829+}830830+#endif831831+792832/*793833 * copy one vm_area from one task to the other. Assumes the page tables794834 * already present in the new task to be cleared in the whole range
+7-1
mm/migrate.c
···975975 dec_zone_page_state(page, NR_ISOLATED_ANON +976976 page_is_file_cache(page));977977 /* Soft-offlined page shouldn't go through lru cache list */978978- if (reason == MR_MEMORY_FAILURE) {978978+ if (reason == MR_MEMORY_FAILURE && rc == MIGRATEPAGE_SUCCESS) {979979+ /*980980+ * With this release, we free successfully migrated981981+ * page and set PG_HWPoison on just freed page982982+ * intentionally. Although it's rather weird, it's how983983+ * HWPoison flag works at the moment.984984+ */979985 put_page(page);980986 if (!test_set_page_hwpoison(page))981987 num_poisoned_pages_inc();
+5-1
mm/page_io.c
···353353354354 ret = bdev_read_page(sis->bdev, swap_page_sector(page), page);355355 if (!ret) {356356- swap_slot_free_notify(page);356356+ if (trylock_page(page)) {357357+ swap_slot_free_notify(page);358358+ unlock_page(page);359359+ }360360+357361 count_vm_event(PSWPIN);358362 return 0;359363 }
+5
mm/swap.c
···728728 zone = NULL;729729 }730730731731+ if (is_huge_zero_page(page)) {732732+ put_huge_zero_page();733733+ continue;734734+ }735735+731736 page = compound_head(page);732737 if (!put_page_testzero(page))733738 continue;
+15-15
mm/vmscan.c
···25532553 sc->gfp_mask |= __GFP_HIGHMEM;2554255425552555 for_each_zone_zonelist_nodemask(zone, z, zonelist,25562556- requested_highidx, sc->nodemask) {25562556+ gfp_zone(sc->gfp_mask), sc->nodemask) {25572557 enum zone_type classzone_idx;2558255825592559 if (!populated_zone(zone))···33183318 /* Try to sleep for a short interval */33193319 if (prepare_kswapd_sleep(pgdat, order, remaining,33203320 balanced_classzone_idx)) {33213321+ /*33223322+ * Compaction records what page blocks it recently failed to33233323+ * isolate pages from and skips them in the future scanning.33243324+ * When kswapd is going to sleep, it is reasonable to assume33253325+ * that pages and compaction may succeed so reset the cache.33263326+ */33273327+ reset_isolation_suitable(pgdat);33283328+33293329+ /*33303330+ * We have freed the memory, now we should compact it to make33313331+ * allocation of the requested order possible.33323332+ */33333333+ wakeup_kcompactd(pgdat, order, classzone_idx);33343334+33213335 remaining = schedule_timeout(HZ/10);33223336 finish_wait(&pgdat->kswapd_wait, &wait);33233337 prepare_to_wait(&pgdat->kswapd_wait, &wait, TASK_INTERRUPTIBLE);···33543340 * them before going back to sleep.33553341 */33563342 set_pgdat_percpu_threshold(pgdat, calculate_normal_threshold);33573357-33583358- /*33593359- * Compaction records what page blocks it recently failed to33603360- * isolate pages from and skips them in the future scanning.33613361- * When kswapd is going to sleep, it is reasonable to assume33623362- * that pages and compaction may succeed so reset the cache.33633363- */33643364- reset_isolation_suitable(pgdat);33653365-33663366- /*33673367- * We have freed the memory, now we should compact it to make33683368- * allocation of the requested order possible.33693369- */33703370- wakeup_kcompactd(pgdat, order, classzone_idx);3371334333723344 if (!kthread_should_stop())33733345 schedule();
+12
net/batman-adv/bat_v.c
···32323333#include "bat_v_elp.h"3434#include "bat_v_ogm.h"3535+#include "hard-interface.h"3536#include "hash.h"3637#include "originator.h"3738#include "packet.h"3939+4040+static void batadv_v_iface_activate(struct batadv_hard_iface *hard_iface)4141+{4242+ /* B.A.T.M.A.N. V does not use any queuing mechanism, therefore it can4343+ * set the interface as ACTIVE right away, without any risk of race4444+ * condition4545+ */4646+ if (hard_iface->if_status == BATADV_IF_TO_BE_ACTIVATED)4747+ hard_iface->if_status = BATADV_IF_ACTIVE;4848+}38493950static int batadv_v_iface_enable(struct batadv_hard_iface *hard_iface)4051{···285274286275static struct batadv_algo_ops batadv_batman_v __read_mostly = {287276 .name = "BATMAN_V",277277+ .bat_iface_activate = batadv_v_iface_activate,288278 .bat_iface_enable = batadv_v_iface_enable,289279 .bat_iface_disable = batadv_v_iface_disable,290280 .bat_iface_update_mac = batadv_v_iface_update_mac,
+10-7
net/batman-adv/distributed-arp-table.c
···568568 * be sent to569569 * @bat_priv: the bat priv with all the soft interface information570570 * @ip_dst: ipv4 to look up in the DHT571571+ * @vid: VLAN identifier571572 *572573 * An originator O is selected if and only if its DHT_ID value is one of three573574 * closest values (from the LEFT, with wrap around if needed) then the hash···577576 * Return: the candidate array of size BATADV_DAT_CANDIDATE_NUM.578577 */579578static struct batadv_dat_candidate *580580-batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst)579579+batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst,580580+ unsigned short vid)581581{582582 int select;583583 batadv_dat_addr_t last_max = BATADV_DAT_ADDR_MAX, ip_key;···594592 return NULL;595593596594 dat.ip = ip_dst;597597- dat.vid = 0;595595+ dat.vid = vid;598596 ip_key = (batadv_dat_addr_t)batadv_hash_dat(&dat,599597 BATADV_DAT_ADDR_MAX);600598···614612 * @bat_priv: the bat priv with all the soft interface information615613 * @skb: payload to send616614 * @ip: the DHT key615615+ * @vid: VLAN identifier617616 * @packet_subtype: unicast4addr packet subtype to use618617 *619618 * This function copies the skb with pskb_copy() and is sent as unicast packet···625622 */626623static bool batadv_dat_send_data(struct batadv_priv *bat_priv,627624 struct sk_buff *skb, __be32 ip,628628- int packet_subtype)625625+ unsigned short vid, int packet_subtype)629626{630627 int i;631628 bool ret = false;···634631 struct sk_buff *tmp_skb;635632 struct batadv_dat_candidate *cand;636633637637- cand = batadv_dat_select_candidates(bat_priv, ip);634634+ cand = batadv_dat_select_candidates(bat_priv, ip, vid);638635 if (!cand)639636 goto out;640637···10251022 ret = true;10261023 } else {10271024 /* Send the request to the DHT */10281028- ret = batadv_dat_send_data(bat_priv, skb, ip_dst,10251025+ ret = batadv_dat_send_data(bat_priv, skb, ip_dst, vid,10291026 BATADV_P_DAT_DHT_GET);10301027 }10311028out:···11531150 /* Send the 
ARP reply to the candidates for both the IP addresses that11541151 * the node obtained from the ARP reply11551152 */11561156- batadv_dat_send_data(bat_priv, skb, ip_src, BATADV_P_DAT_DHT_PUT);11571157- batadv_dat_send_data(bat_priv, skb, ip_dst, BATADV_P_DAT_DHT_PUT);11531153+ batadv_dat_send_data(bat_priv, skb, ip_src, vid, BATADV_P_DAT_DHT_PUT);11541154+ batadv_dat_send_data(bat_priv, skb, ip_dst, vid, BATADV_P_DAT_DHT_PUT);11581155}1159115611601157/**
+4-2
net/batman-adv/hard-interface.c
···407407408408 batadv_update_min_mtu(hard_iface->soft_iface);409409410410+ if (bat_priv->bat_algo_ops->bat_iface_activate)411411+ bat_priv->bat_algo_ops->bat_iface_activate(hard_iface);412412+410413out:411414 if (primary_if)412415 batadv_hardif_put(primary_if);···575572 struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);576573 struct batadv_hard_iface *primary_if = NULL;577574578578- if (hard_iface->if_status == BATADV_IF_ACTIVE)579579- batadv_hardif_deactivate_interface(hard_iface);575575+ batadv_hardif_deactivate_interface(hard_iface);580576581577 if (hard_iface->if_status != BATADV_IF_INACTIVE)582578 goto out;
···105105 neigh_node = NULL;106106107107 spin_lock_bh(&orig_node->neigh_list_lock);108108+ /* curr_router used earlier may not be the current orig_ifinfo->router109109+ * anymore because it was dereferenced outside of the neigh_list_lock110110+ * protected region. After the new best neighbor has replace the current111111+ * best neighbor the reference counter needs to decrease. Consequently,112112+ * the code needs to ensure the curr_router variable contains a pointer113113+ * to the replaced best neighbor.114114+ */115115+ curr_router = rcu_dereference_protected(orig_ifinfo->router, true);116116+108117 rcu_assign_pointer(orig_ifinfo->router, neigh_node);109118 spin_unlock_bh(&orig_node->neigh_list_lock);110119 batadv_orig_ifinfo_put(orig_ifinfo);
+6
net/batman-adv/send.c
···675675676676 if (pending) {677677 hlist_del(&forw_packet->list);678678+ if (!forw_packet->own)679679+ atomic_inc(&bat_priv->bcast_queue_left);680680+678681 batadv_forw_packet_free(forw_packet);679682 }680683 }···705702706703 if (pending) {707704 hlist_del(&forw_packet->list);705705+ if (!forw_packet->own)706706+ atomic_inc(&bat_priv->batman_queue_left);707707+708708 batadv_forw_packet_free(forw_packet);709709 }710710 }
+6-2
net/batman-adv/soft-interface.c
···408408 */409409 nf_reset(skb);410410411411+ if (unlikely(!pskb_may_pull(skb, ETH_HLEN)))412412+ goto dropped;413413+411414 vid = batadv_get_vid(skb, 0);412415 ethhdr = eth_hdr(skb);413416414417 switch (ntohs(ethhdr->h_proto)) {415418 case ETH_P_8021Q:419419+ if (!pskb_may_pull(skb, VLAN_ETH_HLEN))420420+ goto dropped;421421+416422 vhdr = (struct vlan_ethhdr *)skb->data;417423418424 if (vhdr->h_vlan_encapsulated_proto != ethertype)···430424 }431425432426 /* skb->dev & skb->pkt_type are set here */433433- if (unlikely(!pskb_may_pull(skb, ETH_HLEN)))434434- goto dropped;435427 skb->protocol = eth_type_trans(skb, soft_iface);436428437429 /* should not be necessary anymore as we use skb_pull_rcsum()
+4-38
net/batman-adv/translation-table.c
···215215 tt_local_entry = container_of(ref, struct batadv_tt_local_entry,216216 common.refcount);217217218218+ batadv_softif_vlan_put(tt_local_entry->vlan);219219+218220 kfree_rcu(tt_local_entry, common.rcu);219221}220222···675673 kref_get(&tt_local->common.refcount);676674 tt_local->last_seen = jiffies;677675 tt_local->common.added_at = tt_local->last_seen;676676+ tt_local->vlan = vlan;678677679678 /* the batman interface mac and multicast addresses should never be680679 * purged···994991 struct batadv_tt_common_entry *tt_common_entry;995992 struct batadv_tt_local_entry *tt_local;996993 struct batadv_hard_iface *primary_if;997997- struct batadv_softif_vlan *vlan;998994 struct hlist_head *head;999995 unsigned short vid;1000996 u32 i;···10291027 last_seen_msecs = last_seen_msecs % 1000;1030102810311029 no_purge = tt_common_entry->flags & np_flag;10321032-10331033- vlan = batadv_softif_vlan_get(bat_priv, vid);10341034- if (!vlan) {10351035- seq_printf(seq, "Cannot retrieve VLAN %d\n",10361036- BATADV_PRINT_VID(vid));10371037- continue;10381038- }10391039-10401030 seq_printf(seq,10411031 " * %pM %4i [%c%c%c%c%c%c] %3u.%03u (%#.8x)\n",10421032 tt_common_entry->addr,···10461052 BATADV_TT_CLIENT_ISOLA) ? 'I' : '.'),10471053 no_purge ? 0 : last_seen_secs,10481054 no_purge ? 
0 : last_seen_msecs,10491049- vlan->tt.crc);10501050-10511051- batadv_softif_vlan_put(vlan);10551055+ tt_local->vlan->tt.crc);10521056 }10531057 rcu_read_unlock();10541058 }···10911099{10921100 struct batadv_tt_local_entry *tt_local_entry;10931101 u16 flags, curr_flags = BATADV_NO_FLAGS;10941094- struct batadv_softif_vlan *vlan;10951102 void *tt_entry_exists;1096110310971104 tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr, vid);···1129113811301139 /* extra call to free the local tt entry */11311140 batadv_tt_local_entry_put(tt_local_entry);11321132-11331133- /* decrease the reference held for this vlan */11341134- vlan = batadv_softif_vlan_get(bat_priv, vid);11351135- if (!vlan)11361136- goto out;11371137-11381138- batadv_softif_vlan_put(vlan);11391139- batadv_softif_vlan_put(vlan);1140114111411142out:11421143 if (tt_local_entry)···12021219 spinlock_t *list_lock; /* protects write access to the hash lists */12031220 struct batadv_tt_common_entry *tt_common_entry;12041221 struct batadv_tt_local_entry *tt_local;12051205- struct batadv_softif_vlan *vlan;12061222 struct hlist_node *node_tmp;12071223 struct hlist_head *head;12081224 u32 i;···12221240 tt_local = container_of(tt_common_entry,12231241 struct batadv_tt_local_entry,12241242 common);12251225-12261226- /* decrease the reference held for this vlan */12271227- vlan = batadv_softif_vlan_get(bat_priv,12281228- tt_common_entry->vid);12291229- if (vlan) {12301230- batadv_softif_vlan_put(vlan);12311231- batadv_softif_vlan_put(vlan);12321232- }1233124312341244 batadv_tt_local_entry_put(tt_local);12351245 }···32833309 struct batadv_hashtable *hash = bat_priv->tt.local_hash;32843310 struct batadv_tt_common_entry *tt_common;32853311 struct batadv_tt_local_entry *tt_local;32863286- struct batadv_softif_vlan *vlan;32873312 struct hlist_node *node_tmp;32883313 struct hlist_head *head;32893314 spinlock_t *list_lock; /* protects write access to the hash lists */···33113338 tt_local = container_of(tt_common,33123339 
struct batadv_tt_local_entry,33133340 common);33143314-33153315- /* decrease the reference held for this vlan */33163316- vlan = batadv_softif_vlan_get(bat_priv, tt_common->vid);33173317- if (vlan) {33183318- batadv_softif_vlan_put(vlan);33193319- batadv_softif_vlan_put(vlan);33203320- }3321334133223342 batadv_tt_local_entry_put(tt_local);33233343 }
+7
net/batman-adv/types.h
···433433 * @ifinfo_lock: lock protecting private ifinfo members and list434434 * @if_incoming: pointer to incoming hard-interface435435 * @last_seen: when last packet via this neighbor was received436436+ * @hardif_neigh: hardif_neigh of this neighbor436437 * @refcount: number of contexts the object is used437438 * @rcu: struct used for freeing in an RCU-safe manner438439 */···445444 spinlock_t ifinfo_lock; /* protects ifinfo_list and its members */446445 struct batadv_hard_iface *if_incoming;447446 unsigned long last_seen;447447+ struct batadv_hardif_neigh_node *hardif_neigh;448448 struct kref refcount;449449 struct rcu_head rcu;450450};···10751073 * struct batadv_tt_local_entry - translation table local entry data10761074 * @common: general translation table data10771075 * @last_seen: timestamp used for purging stale tt local entries10761076+ * @vlan: soft-interface vlan of the entry10781077 */10791078struct batadv_tt_local_entry {10801079 struct batadv_tt_common_entry common;10811080 unsigned long last_seen;10811081+ struct batadv_softif_vlan *vlan;10821082};1083108310841084/**···12541250 * struct batadv_algo_ops - mesh algorithm callbacks12551251 * @list: list node for the batadv_algo_list12561252 * @name: name of the algorithm12531253+ * @bat_iface_activate: start routing mechanisms when hard-interface is brought12541254+ * up12571255 * @bat_iface_enable: init routing info when hard-interface is enabled12581256 * @bat_iface_disable: de-init routing info when hard-interface is disabled12591257 * @bat_iface_update_mac: (re-)init mac addresses of the protocol information···12831277struct batadv_algo_ops {12841278 struct hlist_node list;12851279 char *name;12801280+ void (*bat_iface_activate)(struct batadv_hard_iface *hard_iface);12861281 int (*bat_iface_enable)(struct batadv_hard_iface *hard_iface);12871282 void (*bat_iface_disable)(struct batadv_hard_iface *hard_iface);12881283 void (*bat_iface_update_mac)(struct batadv_hard_iface *hard_iface);
···127127128128/*129129 * This is the only path that sets tc->t_sock. Send and receive trust that130130- * it is set. The RDS_CONN_CONNECTED bit protects those paths from being130130+ * it is set. The RDS_CONN_UP bit protects those paths from being131131 * called while it isn't set.132132 */133133void rds_tcp_set_callbacks(struct socket *sock, struct rds_connection *conn)···216216 if (!tc)217217 return -ENOMEM;218218219219+ mutex_init(&tc->t_conn_lock);219220 tc->t_sock = NULL;220221 tc->t_tinc = NULL;221222 tc->t_tinc_hdr_rem = sizeof(struct rds_header);
···2020#include <sound/core.h>2121#include <sound/hdaudio.h>2222#include <sound/hda_i915.h>2323+#include <sound/hda_register.h>23242425static struct i915_audio_component *hdac_acomp;2526···9897}9998EXPORT_SYMBOL_GPL(snd_hdac_display_power);10099100100+#define CONTROLLER_IN_GPU(pci) (((pci)->device == 0x0a0c) || \101101+ ((pci)->device == 0x0c0c) || \102102+ ((pci)->device == 0x0d0c) || \103103+ ((pci)->device == 0x160c))104104+101105/**102102- * snd_hdac_get_display_clk - Get CDCLK in kHz106106+ * snd_hdac_i915_set_bclk - Reprogram BCLK for HSW/BDW103107 * @bus: HDA core bus104108 *105105- * This function is supposed to be used only by a HD-audio controller106106- * driver that needs the interaction with i915 graphics.109109+ * Intel HSW/BDW display HDA controller is in GPU. Both its power and link BCLK110110+ * depends on GPU. Two Extended Mode registers EM4 (M value) and EM5 (N Value)111111+ * are used to convert CDClk (Core Display Clock) to 24MHz BCLK:112112+ * BCLK = CDCLK * M / N113113+ * The values will be lost when the display power well is disabled and need to114114+ * be restored to avoid abnormal playback speed.107115 *108108- * This function queries CDCLK value in kHz from the graphics driver and109109- * returns the value. 
A negative code is returned in error.116116+ * Call this function at initializing and changing power well, as well as117117+ * at ELD notifier for the hotplug.110118 */111111-int snd_hdac_get_display_clk(struct hdac_bus *bus)119119+void snd_hdac_i915_set_bclk(struct hdac_bus *bus)112120{113121 struct i915_audio_component *acomp = bus->audio_component;122122+ struct pci_dev *pci = to_pci_dev(bus->dev);123123+ int cdclk_freq;124124+ unsigned int bclk_m, bclk_n;114125115115- if (!acomp || !acomp->ops)116116- return -ENODEV;126126+ if (!acomp || !acomp->ops || !acomp->ops->get_cdclk_freq)127127+ return; /* only for i915 binding */128128+ if (!CONTROLLER_IN_GPU(pci))129129+ return; /* only HSW/BDW */117130118118- return acomp->ops->get_cdclk_freq(acomp->dev);131131+ cdclk_freq = acomp->ops->get_cdclk_freq(acomp->dev);132132+ switch (cdclk_freq) {133133+ case 337500:134134+ bclk_m = 16;135135+ bclk_n = 225;136136+ break;137137+138138+ case 450000:139139+ default: /* default CDCLK 450MHz */140140+ bclk_m = 4;141141+ bclk_n = 75;142142+ break;143143+144144+ case 540000:145145+ bclk_m = 4;146146+ bclk_n = 90;147147+ break;148148+149149+ case 675000:150150+ bclk_m = 8;151151+ bclk_n = 225;152152+ break;153153+ }154154+155155+ snd_hdac_chip_writew(bus, HSW_EM4, bclk_m);156156+ snd_hdac_chip_writew(bus, HSW_EM5, bclk_n);119157}120120-EXPORT_SYMBOL_GPL(snd_hdac_get_display_clk);158158+EXPORT_SYMBOL_GPL(snd_hdac_i915_set_bclk);121159122160/* There is a fixed mapping between audio pin node and display port123161 * on current Intel platforms:
+4-52
sound/pci/hda/hda_intel.c
···857857#define azx_del_card_list(chip) /* NOP */858858#endif /* CONFIG_PM */859859860860-/* Intel HSW/BDW display HDA controller is in GPU. Both its power and link BCLK861861- * depends on GPU. Two Extended Mode registers EM4 (M value) and EM5 (N Value)862862- * are used to convert CDClk (Core Display Clock) to 24MHz BCLK:863863- * BCLK = CDCLK * M / N864864- * The values will be lost when the display power well is disabled and need to865865- * be restored to avoid abnormal playback speed.866866- */867867-static void haswell_set_bclk(struct hda_intel *hda)868868-{869869- struct azx *chip = &hda->chip;870870- int cdclk_freq;871871- unsigned int bclk_m, bclk_n;872872-873873- if (!hda->need_i915_power)874874- return;875875-876876- cdclk_freq = snd_hdac_get_display_clk(azx_bus(chip));877877- switch (cdclk_freq) {878878- case 337500:879879- bclk_m = 16;880880- bclk_n = 225;881881- break;882882-883883- case 450000:884884- default: /* default CDCLK 450MHz */885885- bclk_m = 4;886886- bclk_n = 75;887887- break;888888-889889- case 540000:890890- bclk_m = 4;891891- bclk_n = 90;892892- break;893893-894894- case 675000:895895- bclk_m = 8;896896- bclk_n = 225;897897- break;898898- }899899-900900- azx_writew(chip, HSW_EM4, bclk_m);901901- azx_writew(chip, HSW_EM5, bclk_n);902902-}903903-904860#if defined(CONFIG_PM_SLEEP) || defined(SUPPORT_VGA_SWITCHEROO)905861/*906862 * power management···914958 if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL915959 && hda->need_i915_power) {916960 snd_hdac_display_power(azx_bus(chip), true);917917- haswell_set_bclk(hda);961961+ snd_hdac_i915_set_bclk(azx_bus(chip));918962 }919963 if (chip->msi)920964 if (pci_enable_msi(pci) < 0)···10141058 bus = azx_bus(chip);10151059 if (hda->need_i915_power) {10161060 snd_hdac_display_power(bus, true);10171017- haswell_set_bclk(hda);10611061+ snd_hdac_i915_set_bclk(bus);10181062 } else {10191063 /* toggle codec wakeup bit for STATESTS read */10201064 snd_hdac_set_codec_wakeup(bus, true);···17521796 /* 
initialize chip */17531797 azx_init_pci(chip);1754179817551755- if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) {17561756- struct hda_intel *hda;17571757-17581758- hda = container_of(chip, struct hda_intel, chip);17591759- haswell_set_bclk(hda);17601760- }17991799+ if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL)18001800+ snd_hdac_i915_set_bclk(bus);1761180117621802 hda_intel_init_chip(chip, (probe_only[dev] & 2) == 0);17631803
···307307extern int arizona_init_gpio(struct snd_soc_codec *codec);308308extern int arizona_init_mono(struct snd_soc_codec *codec);309309310310+extern int arizona_free_spk(struct snd_soc_codec *codec);311311+310312extern int arizona_init_dai(struct arizona_priv *priv, int dai);311313312314int arizona_set_output_mode(struct snd_soc_codec *codec, int output,
+13-4
sound/soc/codecs/cs35l32.c
···274274 if (of_property_read_u32(np, "cirrus,sdout-share", &val) >= 0)275275 pdata->sdout_share = val;276276277277- of_property_read_u32(np, "cirrus,boost-manager", &val);277277+ if (of_property_read_u32(np, "cirrus,boost-manager", &val))278278+ val = -1u;279279+278280 switch (val) {279281 case CS35L32_BOOST_MGR_AUTO:280282 case CS35L32_BOOST_MGR_AUTO_AUDIO:···284282 case CS35L32_BOOST_MGR_FIXED:285283 pdata->boost_mng = val;286284 break;285285+ case -1u:287286 default:288287 dev_err(&i2c_client->dev,289288 "Wrong cirrus,boost-manager DT value %d\n", val);290289 pdata->boost_mng = CS35L32_BOOST_MGR_BYPASS;291290 }292291293293- of_property_read_u32(np, "cirrus,sdout-datacfg", &val);292292+ if (of_property_read_u32(np, "cirrus,sdout-datacfg", &val))293293+ val = -1u;294294 switch (val) {295295 case CS35L32_DATA_CFG_LR_VP:296296 case CS35L32_DATA_CFG_LR_STAT:···300296 case CS35L32_DATA_CFG_LR_VPSTAT:301297 pdata->sdout_datacfg = val;302298 break;299299+ case -1u:303300 default:304301 dev_err(&i2c_client->dev,305302 "Wrong cirrus,sdout-datacfg DT value %d\n", val);306303 pdata->sdout_datacfg = CS35L32_DATA_CFG_LR;307304 }308305309309- of_property_read_u32(np, "cirrus,battery-threshold", &val);306306+ if (of_property_read_u32(np, "cirrus,battery-threshold", &val))307307+ val = -1u;310308 switch (val) {311309 case CS35L32_BATT_THRESH_3_1V:312310 case CS35L32_BATT_THRESH_3_2V:···316310 case CS35L32_BATT_THRESH_3_4V:317311 pdata->batt_thresh = val;318312 break;313313+ case -1u:319314 default:320315 dev_err(&i2c_client->dev,321316 "Wrong cirrus,battery-threshold DT value %d\n", val);322317 pdata->batt_thresh = CS35L32_BATT_THRESH_3_3V;323318 }324319325325- of_property_read_u32(np, "cirrus,battery-recovery", &val);320320+ if (of_property_read_u32(np, "cirrus,battery-recovery", &val))321321+ val = -1u;326322 switch (val) {327323 case CS35L32_BATT_RECOV_3_1V:328324 case CS35L32_BATT_RECOV_3_2V:···334326 case CS35L32_BATT_RECOV_3_6V:335327 pdata->batt_recov = val;336328 
break;329329+ case -1u:337330 default:338331 dev_err(&i2c_client->dev,339332 "Wrong cirrus,battery-recovery DT value %d\n", val);
···
 }
 
 #ifdef CONFIG_PM
-static int hdmi_codec_resume(struct snd_soc_codec *codec)
+static int hdmi_codec_prepare(struct device *dev)
 {
-	struct hdac_ext_device *edev = snd_soc_codec_get_drvdata(codec);
+	struct hdac_ext_device *edev = to_hda_ext_device(dev);
+	struct hdac_device *hdac = &edev->hdac;
+
+	pm_runtime_get_sync(&edev->hdac.dev);
+
+	/*
+	 * Power down afg.
+	 * codec_read is preferred over codec_write to set the power state.
+	 * This way verb is send to set the power state and response
+	 * is received. So setting power state is ensured without using loop
+	 * to read the state.
+	 */
+	snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,
+							AC_PWRST_D3);
+
+	return 0;
+}
+
+static void hdmi_codec_complete(struct device *dev)
+{
+	struct hdac_ext_device *edev = to_hda_ext_device(dev);
 	struct hdac_hdmi_priv *hdmi = edev->private_data;
 	struct hdac_hdmi_pin *pin;
 	struct hdac_device *hdac = &edev->hdac;
-	struct hdac_bus *bus = hdac->bus;
-	int err;
-	unsigned long timeout;
+
+	/* Power up afg */
+	snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,
+							AC_PWRST_D0);
 
 	hdac_hdmi_skl_enable_all_pins(&edev->hdac);
 	hdac_hdmi_skl_enable_dp12(&edev->hdac);
-
-	/* Power up afg */
-	if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0)) {
-
-		snd_hdac_codec_write(hdac, hdac->afg, 0,
-			AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
-
-		/* Wait till power state is set to D0 */
-		timeout = jiffies + msecs_to_jiffies(1000);
-		while (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0)
-				&& time_before(jiffies, timeout)) {
-			msleep(50);
-		}
-	}
 
 	/*
 	 * As the ELD notify callback request is not entertained while the
···
 	list_for_each_entry(pin, &hdmi->pin_list, head)
 		hdac_hdmi_present_sense(pin, 1);
 
-	/*
-	 * Codec power is turned ON during controller resume.
-	 * Turn it OFF here
-	 */
-	err = snd_hdac_display_power(bus, false);
-	if (err < 0) {
-		dev_err(bus->dev,
-			"Cannot turn OFF display power on i915, err: %d\n",
-			err);
-		return err;
-	}
-
-	return 0;
+	pm_runtime_put_sync(&edev->hdac.dev);
 }
 #else
-#define hdmi_codec_resume NULL
+#define hdmi_codec_prepare NULL
+#define hdmi_codec_complete NULL
 #endif
 
 static struct snd_soc_codec_driver hdmi_hda_codec = {
 	.probe		= hdmi_codec_probe,
 	.remove		= hdmi_codec_remove,
-	.resume		= hdmi_codec_resume,
 	.idle_bias_off	= true,
 };
···
 	struct hdac_ext_device *edev = to_hda_ext_device(dev);
 	struct hdac_device *hdac = &edev->hdac;
 	struct hdac_bus *bus = hdac->bus;
-	unsigned long timeout;
 	int err;
 
 	dev_dbg(dev, "Enter: %s\n", __func__);
···
 	if (!bus)
 		return 0;
 
-	/* Power down afg */
-	if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D3)) {
-		snd_hdac_codec_write(hdac, hdac->afg, 0,
-			AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
-
-		/* Wait till power state is set to D3 */
-		timeout = jiffies + msecs_to_jiffies(1000);
-		while (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D3)
-				&& time_before(jiffies, timeout)) {
-
-			msleep(50);
-		}
-	}
-
+	/*
+	 * Power down afg.
+	 * codec_read is preferred over codec_write to set the power state.
+	 * This way verb is send to set the power state and response
+	 * is received. So setting power state is ensured without using loop
+	 * to read the state.
+	 */
+	snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,
+							AC_PWRST_D3);
 	err = snd_hdac_display_power(bus, false);
 	if (err < 0) {
 		dev_err(bus->dev, "Cannot turn on display power on i915\n");
···
 	hdac_hdmi_skl_enable_dp12(&edev->hdac);
 
 	/* Power up afg */
-	if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0))
-		snd_hdac_codec_write(hdac, hdac->afg, 0,
-			AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
+	snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,
+							AC_PWRST_D0);
 
 	return 0;
 }
···
 
 static const struct dev_pm_ops hdac_hdmi_pm = {
 	SET_RUNTIME_PM_OPS(hdac_hdmi_runtime_suspend, hdac_hdmi_runtime_resume, NULL)
+	.prepare = hdmi_codec_prepare,
+	.complete = hdmi_codec_complete,
 };
 
 static const struct hda_device_id hdmi_list[] = {
+71-55
sound/soc/codecs/nau8825.c
···
 	SND_SOC_DAPM_SUPPLY("ADC Power", NAU8825_REG_ANALOG_ADC_2, 6, 0, NULL,
 		0),
 
-	/* ADC for button press detection */
-	SND_SOC_DAPM_ADC("SAR", NULL, NAU8825_REG_SAR_CTRL,
-		NAU8825_SAR_ADC_EN_SFT, 0),
+	/* ADC for button press detection. A dapm supply widget is used to
+	 * prevent dapm_power_widgets keeping the codec at SND_SOC_BIAS_ON
+	 * during suspend.
+	 */
+	SND_SOC_DAPM_SUPPLY("SAR", NAU8825_REG_SAR_CTRL,
+		NAU8825_SAR_ADC_EN_SFT, 0, NULL, 0),
 
 	SND_SOC_DAPM_PGA_S("ADACL", 2, NAU8825_REG_RDAC, 12, 0, NULL, 0),
 	SND_SOC_DAPM_PGA_S("ADACR", 2, NAU8825_REG_RDAC, 13, 0, NULL, 0),
···
 
 static void nau8825_restart_jack_detection(struct regmap *regmap)
 {
+	/* Chip needs one FSCLK cycle in order to generate interrupts,
+	 * as we cannot guarantee one will be provided by the system. Turning
+	 * master mode on then off enables us to generate that FSCLK cycle
+	 * with a minimum of contention on the clock bus.
+	 */
+	regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,
+		NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_MASTER);
+	regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,
+		NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_SLAVE);
+
 	/* this will restart the entire jack detection process including MIC/GND
 	 * switching and create interrupts. We have to go from 0 to 1 and back
 	 * to 0 to restart.
···
 	struct regmap *regmap = nau8825->regmap;
 	int active_irq, clear_irq = 0, event = 0, event_mask = 0;
 
-	regmap_read(regmap, NAU8825_REG_IRQ_STATUS, &active_irq);
+	if (regmap_read(regmap, NAU8825_REG_IRQ_STATUS, &active_irq)) {
+		dev_err(nau8825->dev, "failed to read irq status\n");
+		return IRQ_NONE;
+	}
 
 	if ((active_irq & NAU8825_JACK_EJECTION_IRQ_MASK) ==
 		NAU8825_JACK_EJECTION_DETECTED) {
···
 				return ret;
 			}
 		}
-
-		ret = regcache_sync(nau8825->regmap);
-		if (ret) {
-			dev_err(codec->dev,
-				"Failed to sync cache: %d\n", ret);
-			return ret;
-		}
 		}
-
 		break;
 
 	case SND_SOC_BIAS_OFF:
 		if (nau8825->mclk_freq)
 			clk_disable_unprepare(nau8825->mclk);
-
-		regcache_mark_dirty(nau8825->regmap);
 		break;
 	}
 	return 0;
 }
+
+#ifdef CONFIG_PM
+static int nau8825_suspend(struct snd_soc_codec *codec)
+{
+	struct nau8825 *nau8825 = snd_soc_codec_get_drvdata(codec);
+
+	disable_irq(nau8825->irq);
+	regcache_cache_only(nau8825->regmap, true);
+	regcache_mark_dirty(nau8825->regmap);
+
+	return 0;
+}
+
+static int nau8825_resume(struct snd_soc_codec *codec)
+{
+	struct nau8825 *nau8825 = snd_soc_codec_get_drvdata(codec);
+
+	/* The chip may lose power and reset in S3. regcache_sync restores
+	 * register values including configurations for sysclk, irq, and
+	 * jack/button detection.
+	 */
+	regcache_cache_only(nau8825->regmap, false);
+	regcache_sync(nau8825->regmap);
+
+	/* Check the jack plug status directly. If the headset is unplugged
+	 * during S3 when the chip has no power, there will be no jack
+	 * detection irq even after the nau8825_restart_jack_detection below,
+	 * because the chip just thinks no headset has ever been plugged in.
+	 */
+	if (!nau8825_is_jack_inserted(nau8825->regmap)) {
+		nau8825_eject_jack(nau8825);
+		snd_soc_jack_report(nau8825->jack, 0, SND_JACK_HEADSET);
+	}
+
+	enable_irq(nau8825->irq);
+
+	/* Run jack detection to check the type (OMTP or CTIA) of the headset
+	 * if there is one. This handles the case where a different type of
+	 * headset is plugged in during S3. This triggers an IRQ iff a headset
+	 * is already plugged in.
+	 */
+	nau8825_restart_jack_detection(nau8825->regmap);
+
+	return 0;
+}
+#else
+#define nau8825_suspend NULL
+#define nau8825_resume NULL
+#endif
 
 static struct snd_soc_codec_driver nau8825_codec_driver = {
 	.probe = nau8825_codec_probe,
···
 	.set_pll = nau8825_set_pll,
 	.set_bias_level = nau8825_set_bias_level,
 	.suspend_bias_off = true,
+	.suspend = nau8825_suspend,
+	.resume = nau8825_resume,
 
 	.controls = nau8825_controls,
 	.num_controls = ARRAY_SIZE(nau8825_controls),
···
 	regmap_update_bits(regmap, NAU8825_REG_ENA_CTRL,
 		NAU8825_ENABLE_DACR, NAU8825_ENABLE_DACR);
 
-	/* Chip needs one FSCLK cycle in order to generate interrupts,
-	 * as we cannot guarantee one will be provided by the system. Turning
-	 * master mode on then off enables us to generate that FSCLK cycle
-	 * with a minimum of contention on the clock bus.
-	 */
-	regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,
-		NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_MASTER);
-	regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,
-		NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_SLAVE);
-
 	ret = devm_request_threaded_irq(nau8825->dev, nau8825->irq, NULL,
 		nau8825_interrupt, IRQF_TRIGGER_LOW | IRQF_ONESHOT,
 		"nau8825", nau8825);
···
 	return 0;
 }
 
-#ifdef CONFIG_PM_SLEEP
-static int nau8825_suspend(struct device *dev)
-{
-	struct i2c_client *client = to_i2c_client(dev);
-	struct nau8825 *nau8825 = dev_get_drvdata(dev);
-
-	disable_irq(client->irq);
-	regcache_cache_only(nau8825->regmap, true);
-	regcache_mark_dirty(nau8825->regmap);
-
-	return 0;
-}
-
-static int nau8825_resume(struct device *dev)
-{
-	struct i2c_client *client = to_i2c_client(dev);
-	struct nau8825 *nau8825 = dev_get_drvdata(dev);
-
-	regcache_cache_only(nau8825->regmap, false);
-	regcache_sync(nau8825->regmap);
-	enable_irq(client->irq);
-
-	return 0;
-}
-#endif
-
-static const struct dev_pm_ops nau8825_pm = {
-	SET_SYSTEM_SLEEP_PM_OPS(nau8825_suspend, nau8825_resume)
-};
-
 static const struct i2c_device_id nau8825_i2c_ids[] = {
 	{ "nau8825", 0 },
 	{ }
···
 		.name = "nau8825",
 		.of_match_table = of_match_ptr(nau8825_of_ids),
 		.acpi_match_table = ACPI_PTR(nau8825_acpi_match),
-		.pm = &nau8825_pm,
 	},
 	.probe = nau8825_i2c_probe,
 	.remove = nau8825_i2c_remove,
+1-1
sound/soc/codecs/rt5640.c
···
 
 /* Interface data select */
 static const char * const rt5640_data_select[] = {
-	"Normal", "left copy to right", "right copy to left", "Swap"};
+	"Normal", "Swap", "left copy to right", "right copy to left"};
 
 static SOC_ENUM_SINGLE_DECL(rt5640_if1_dac_enum, RT5640_DIG_INF_DATA,
 			    RT5640_IF1_DAC_SEL_SFT, rt5640_data_select);
···
 		return 0;
 
 	/* wait for pause to complete before we reset the stream */
-	while (stream->running && tries--)
+	while (stream->running && --tries)
 		msleep(1);
 	if (!tries) {
 		dev_err(hsw->dev, "error: reset stream %d still running\n",
···
 	struct hdac_ext_bus *ebus = pci_get_drvdata(pci);
 	struct skl *skl  = ebus_to_skl(ebus);
 	struct hdac_bus *bus = ebus_to_hbus(ebus);
+	int ret = 0;
 
 	/*
 	 * Do not suspend if streams which are marked ignore suspend are
···
 		enable_irq_wake(bus->irq);
 		pci_save_state(pci);
 		pci_disable_device(pci);
-		return 0;
 	} else {
-		return _skl_suspend(ebus);
+		ret = _skl_suspend(ebus);
+		if (ret < 0)
+			return ret;
 	}
+
+	if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI)) {
+		ret = snd_hdac_display_power(bus, false);
+		if (ret < 0)
+			dev_err(bus->dev,
+				"Cannot turn OFF display power on i915\n");
+	}
+
+	return ret;
 }
 
 static int skl_resume(struct device *dev)
···
 
 	if (bus->irq >= 0)
 		free_irq(bus->irq, (void *)bus);
-	if (bus->remap_addr)
-		iounmap(bus->remap_addr);
-
 	snd_hdac_bus_free_stream_pages(bus);
 	snd_hdac_stream_free_all(ebus);
 	snd_hdac_link_free_all(ebus);
+
+	if (bus->remap_addr)
+		iounmap(bus->remap_addr);
+
 	pci_release_regions(skl->pci);
 	pci_disable_device(skl->pci);
 
 	snd_hdac_ext_bus_exit(ebus);
 
+	if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI))
+		snd_hdac_i915_exit(&ebus->bus);
 	return 0;
 }
···
 	if (skl->tplg)
 		release_firmware(skl->tplg);
 
-	if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI))
-		snd_hdac_i915_exit(&ebus->bus);
-
 	if (pci_dev_run_wake(pci))
 		pm_runtime_get_noresume(&pci->dev);
-	pci_dev_put(pci);
+
+	/* codec removal, invoke bus_device_remove */
+	snd_hdac_ext_bus_device_remove(ebus);
+
 	skl_platform_unregister(&pci->dev);
 	skl_free_dsp(skl);
 	skl_machine_device_unregister(skl);
+7
sound/soc/soc-dapm.c
···
 	int count = 0;
 	char *state = "not set";
 
+	/* card won't be set for the dummy component, as a spot fix
+	 * we're checking for that case specifically here but in future
+	 * we will ensure that the dummy component looks like others.
+	 */
+	if (!cmpnt->card)
+		return 0;
+
 	list_for_each_entry(w, &cmpnt->card->widgets, list) {
 		if (w->dapm != dapm)
 			continue;