Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/ethernet/emulex/benet/be_main.c
drivers/net/ethernet/intel/igb/igb_main.c
drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c
include/net/scm.h
net/batman-adv/routing.c
net/ipv4/tcp_input.c

The e{uid,gid} --> {uid,gid} credentials fix conflicted with the
cleanup in net-next that now passes cred structs around.

The be2net driver had a bug fix in 'net' that overlapped with the VLAN
interface changes by Patrick McHardy in net-next.

An IGB conflict existed because in 'net' the build_skb() support was
reverted, and in 'net-next' there was a comment style fix within that
code.

Several batman-adv conflicts were resolved by making sure that all
calls to batadv_is_my_mac() are changed to have a new bat_priv first
argument.
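As a sketch of what that resolution looks like at a call site (illustrative only: the call site and variable names below are hypothetical, not taken from this merge; only the batadv_is_my_mac() signature change is from the commit text):

```c
/* 'net' side: single-argument form. */
if (batadv_is_my_mac(ethhdr->h_dest))
	goto out;

/* Resolved form: batadv_is_my_mac() now takes bat_priv first. */
if (batadv_is_my_mac(bat_priv, ethhdr->h_dest))
	goto out;
```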

Eric Dumazet's TS ECR fix in TCP in 'net' conflicted with the F-RTO
rewrite in 'net-next', mostly overlapping changes.

Thanks to Stephen Rothwell and Antonio Quartulli for help with several
of these merge resolutions.

Signed-off-by: David S. Miller <davem@davemloft.net>

+3680 -2384
+1 -1
Documentation/DocBook/device-drivers.tmpl
··· 227 227 <chapter id="uart16x50"> 228 228 <title>16x50 UART Driver</title> 229 229 !Edrivers/tty/serial/serial_core.c 230 - !Edrivers/tty/serial/8250/8250.c 230 + !Edrivers/tty/serial/8250/8250_core.c 231 231 </chapter> 232 232 233 233 <chapter id="fbdev">
+26 -3
Documentation/kernel-parameters.txt
··· 596 596 is selected automatically. Check 597 597 Documentation/kdump/kdump.txt for further details. 598 598 599 - crashkernel_low=size[KMG] 600 - [KNL, x86] parts under 4G. 601 - 602 599 crashkernel=range1:size1[,range2:size2,...][@offset] 603 600 [KNL] Same as above, but depends on the memory 604 601 in the running system. The syntax of range is 605 602 start-[end] where start and end are both 606 603 a memory unit (amount[KMG]). See also 607 604 Documentation/kdump/kdump.txt for an example. 605 + 606 + crashkernel=size[KMG],high 607 + [KNL, x86_64] range could be above 4G. Allow kernel 608 + to allocate physical memory region from top, so could 609 + be above 4G if system have more than 4G ram installed. 610 + Otherwise memory region will be allocated below 4G, if 611 + available. 612 + It will be ignored if crashkernel=X is specified. 613 + crashkernel=size[KMG],low 614 + [KNL, x86_64] range under 4G. When crashkernel=X,high 615 + is passed, kernel could allocate physical memory region 616 + above 4G, that cause second kernel crash on system 617 + that require some amount of low memory, e.g. swiotlb 618 + requires at least 64M+32K low memory. Kernel would 619 + try to allocate 72M below 4G automatically. 620 + This one let user to specify own low range under 4G 621 + for second kernel instead. 622 + 0: to disable low allocation. 623 + It will be ignored when crashkernel=X,high is not used 624 + or memory reserved is below 4G. 608 625 609 626 cs89x0_dma= [HW,NET] 610 627 Format: <dma> ··· 804 787 805 788 edd= [EDD] 806 789 Format: {"off" | "on" | "skip[mbr]"} 790 + 791 + efi_no_storage_paranoia [EFI; X86] 792 + Using this parameter you can use more than 50% of 793 + your efi variable storage. Use this parameter only if 794 + you are really sure that your UEFI does sane gc and 795 + fulfills the spec otherwise your board may brick. 807 796 808 797 eisa_irq_edge= [PARISC,HW] 809 798 See header of drivers/parisc/eisa.c.
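Taken together, the options documented above compose on the kernel command line; for example (sizes are illustrative, not from this patch — the doc text says the kernel would otherwise try 72M below 4G automatically, and 0 disables the low allocation):

```
# Reserve 256M for the crash kernel, possibly above 4G, plus an
# explicit 72M low region for swiotlb/DMA-limited devices:
crashkernel=256M,high crashkernel=72M,low

# Same high reservation, but disable the low reservation entirely:
crashkernel=256M,high crashkernel=0,low
```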
+1 -1
Documentation/scsi/LICENSE.qla2xxx
··· 1 - Copyright (c) 2003-2012 QLogic Corporation 1 + Copyright (c) 2003-2013 QLogic Corporation 2 2 QLogic Linux FC-FCoE Driver 3 3 4 4 This program includes a device driver for Linux 3.x.
+10 -4
MAINTAINERS
··· 4941 4941 S: Maintained 4942 4942 F: fs/logfs/ 4943 4943 4944 + LPC32XX MACHINE SUPPORT 4945 + M: Roland Stigge <stigge@antcom.de> 4946 + L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 4947 + S: Maintained 4948 + F: arch/arm/mach-lpc32xx/ 4949 + 4944 4950 LSILOGIC MPT FUSION DRIVERS (FC/SAS/SPI) 4945 4951 M: Nagalakshmi Nandigama <Nagalakshmi.Nandigama@lsi.com> 4946 4952 M: Sreekanth Reddy <Sreekanth.Reddy@lsi.com> ··· 6633 6627 F: fs/reiserfs/ 6634 6628 6635 6629 REGISTER MAP ABSTRACTION 6636 - M: Mark Brown <broonie@opensource.wolfsonmicro.com> 6630 + M: Mark Brown <broonie@kernel.org> 6637 6631 T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap.git 6638 6632 S: Supported 6639 6633 F: drivers/base/regmap/ ··· 7381 7375 7382 7376 SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEMENT (ASoC) 7383 7377 M: Liam Girdwood <lgirdwood@gmail.com> 7384 - M: Mark Brown <broonie@opensource.wolfsonmicro.com> 7378 + M: Mark Brown <broonie@kernel.org> 7385 7379 T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git 7386 7380 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 7387 7381 W: http://alsa-project.org/main/index.php/ASoC ··· 7470 7464 7471 7465 SPI SUBSYSTEM 7472 7466 M: Grant Likely <grant.likely@secretlab.ca> 7473 - M: Mark Brown <broonie@opensource.wolfsonmicro.com> 7467 + M: Mark Brown <broonie@kernel.org> 7474 7468 L: spi-devel-general@lists.sourceforge.net 7475 7469 Q: http://patchwork.kernel.org/project/spi-devel-general/list/ 7476 7470 T: git git://git.secretlab.ca/git/linux-2.6.git ··· 8715 8709 8716 8710 VOLTAGE AND CURRENT REGULATOR FRAMEWORK 8717 8711 M: Liam Girdwood <lrg@ti.com> 8718 - M: Mark Brown <broonie@opensource.wolfsonmicro.com> 8712 + M: Mark Brown <broonie@kernel.org> 8719 8713 W: http://opensource.wolfsonmicro.com/node/15 8720 8714 W: http://www.slimlogic.co.uk/?p=48 8721 8715 T: git git://git.kernel.org/pub/scm/linux/kernel/git/lrg/regulator.git
+3 -2
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 9 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 4 + EXTRAVERSION = -rc8 5 5 NAME = Unicycling Gorilla 6 6 7 7 # *DOCUMENTATION* ··· 513 513 # Carefully list dependencies so we do not try to build scripts twice 514 514 # in parallel 515 515 PHONY += scripts 516 - scripts: scripts_basic include/config/auto.conf include/config/tristate.conf 516 + scripts: scripts_basic include/config/auto.conf include/config/tristate.conf \ 517 + asm-generic 517 518 $(Q)$(MAKE) $(build)=$(@) 518 519 519 520 # Objects we will link into vmlinux / subdirs we need to visit
+1 -1
arch/alpha/Makefile
··· 12 12 13 13 LDFLAGS_vmlinux := -static -N #-relax 14 14 CHECKFLAGS += -D__alpha__ -m64 15 - cflags-y := -pipe -mno-fp-regs -ffixed-8 -msmall-data 15 + cflags-y := -pipe -mno-fp-regs -ffixed-8 16 16 cflags-y += $(call cc-option, -fno-jump-tables) 17 17 18 18 cpuflags-$(CONFIG_ALPHA_EV4) := -mcpu=ev4
+1 -1
arch/alpha/include/asm/floppy.h
··· 26 26 #define fd_disable_irq() disable_irq(FLOPPY_IRQ) 27 27 #define fd_cacheflush(addr,size) /* nothing */ 28 28 #define fd_request_irq() request_irq(FLOPPY_IRQ, floppy_interrupt,\ 29 - IRQF_DISABLED, "floppy", NULL) 29 + 0, "floppy", NULL) 30 30 #define fd_free_irq() free_irq(FLOPPY_IRQ, NULL) 31 31 32 32 #ifdef CONFIG_PCI
-7
arch/alpha/kernel/irq.c
··· 117 117 return; 118 118 } 119 119 120 - /* 121 - * From here we must proceed with IPL_MAX. Note that we do not 122 - * explicitly enable interrupts afterwards - some MILO PALcode 123 - * (namely LX164 one) seems to have severe problems with RTI 124 - * at IPL 0. 125 - */ 126 - local_irq_disable(); 127 120 irq_enter(); 128 121 generic_handle_irq_desc(irq, desc); 129 122 irq_exit();
+8 -2
arch/alpha/kernel/irq_alpha.c
··· 45 45 unsigned long la_ptr, struct pt_regs *regs) 46 46 { 47 47 struct pt_regs *old_regs; 48 + 49 + /* 50 + * Disable interrupts during IRQ handling. 51 + * Note that there is no matching local_irq_enable() due to 52 + * severe problems with RTI at IPL0 and some MILO PALcode 53 + * (namely LX164). 54 + */ 55 + local_irq_disable(); 48 56 switch (type) { 49 57 case 0: 50 58 #ifdef CONFIG_SMP ··· 70 62 { 71 63 long cpu; 72 64 73 - local_irq_disable(); 74 65 smp_percpu_timer_interrupt(regs); 75 66 cpu = smp_processor_id(); 76 67 if (cpu != boot_cpuid) { ··· 229 222 230 223 struct irqaction timer_irqaction = { 231 224 .handler = timer_interrupt, 232 - .flags = IRQF_DISABLED, 233 225 .name = "timer", 234 226 }; 235 227
+5
arch/alpha/kernel/sys_nautilus.c
··· 188 188 extern void free_reserved_mem(void *, void *); 189 189 extern void pcibios_claim_one_bus(struct pci_bus *); 190 190 191 + static struct resource irongate_io = { 192 + .name = "Irongate PCI IO", 193 + .flags = IORESOURCE_IO, 194 + }; 191 195 static struct resource irongate_mem = { 192 196 .name = "Irongate PCI MEM", 193 197 .flags = IORESOURCE_MEM, ··· 213 209 214 210 irongate = pci_get_bus_and_slot(0, 0); 215 211 bus->self = irongate; 212 + bus->resource[0] = &irongate_io; 216 213 bus->resource[1] = &irongate_mem; 217 214 218 215 pci_bus_size_bridges(bus);
+7 -7
arch/alpha/kernel/sys_titan.c
··· 280 280 * all reported to the kernel as machine checks, so the handler 281 281 * is a nop so it can be called to count the individual events. 282 282 */ 283 - titan_request_irq(63+16, titan_intr_nop, IRQF_DISABLED, 283 + titan_request_irq(63+16, titan_intr_nop, 0, 284 284 "CChip Error", NULL); 285 - titan_request_irq(62+16, titan_intr_nop, IRQF_DISABLED, 285 + titan_request_irq(62+16, titan_intr_nop, 0, 286 286 "PChip 0 H_Error", NULL); 287 - titan_request_irq(61+16, titan_intr_nop, IRQF_DISABLED, 287 + titan_request_irq(61+16, titan_intr_nop, 0, 288 288 "PChip 1 H_Error", NULL); 289 - titan_request_irq(60+16, titan_intr_nop, IRQF_DISABLED, 289 + titan_request_irq(60+16, titan_intr_nop, 0, 290 290 "PChip 0 C_Error", NULL); 291 - titan_request_irq(59+16, titan_intr_nop, IRQF_DISABLED, 291 + titan_request_irq(59+16, titan_intr_nop, 0, 292 292 "PChip 1 C_Error", NULL); 293 293 294 294 /* ··· 348 348 * Hook a couple of extra err interrupts that the 349 349 * common titan code won't. 350 350 */ 351 - titan_request_irq(53+16, titan_intr_nop, IRQF_DISABLED, 351 + titan_request_irq(53+16, titan_intr_nop, 0, 352 352 "NMI", NULL); 353 - titan_request_irq(50+16, titan_intr_nop, IRQF_DISABLED, 353 + titan_request_irq(50+16, titan_intr_nop, 0, 354 354 "Temperature Warning", NULL); 355 355 356 356 /*
+8 -4
arch/arc/include/asm/irqflags.h
··· 39 39 " flag.nz %0 \n" 40 40 : "=r"(temp), "=r"(flags) 41 41 : "n"((STATUS_E1_MASK | STATUS_E2_MASK)) 42 - : "cc"); 42 + : "memory", "cc"); 43 43 44 44 return flags; 45 45 } ··· 53 53 __asm__ __volatile__( 54 54 " flag %0 \n" 55 55 : 56 - : "r"(flags)); 56 + : "r"(flags) 57 + : "memory"); 57 58 } 58 59 59 60 /* ··· 74 73 " and %0, %0, %1 \n" 75 74 " flag %0 \n" 76 75 : "=&r"(temp) 77 - : "n"(~(STATUS_E1_MASK | STATUS_E2_MASK))); 76 + : "n"(~(STATUS_E1_MASK | STATUS_E2_MASK)) 77 + : "memory"); 78 78 } 79 79 80 80 /* ··· 87 85 88 86 __asm__ __volatile__( 89 87 " lr %0, [status32] \n" 90 - : "=&r"(temp)); 88 + : "=&r"(temp) 89 + : 90 + : "memory"); 91 91 92 92 return temp; 93 93 }
-1
arch/arm/boot/dts/imx28-m28evk.dts
··· 152 152 i2c0: i2c@80058000 { 153 153 pinctrl-names = "default"; 154 154 pinctrl-0 = <&i2c0_pins_a>; 155 - clock-frequency = <400000>; 156 155 status = "okay"; 157 156 158 157 sgtl5000: codec@0a {
-1
arch/arm/boot/dts/imx28-sps1.dts
··· 70 70 i2c0: i2c@80058000 { 71 71 pinctrl-names = "default"; 72 72 pinctrl-0 = <&i2c0_pins_a>; 73 - clock-frequency = <400000>; 74 73 status = "okay"; 75 74 76 75 rtc: rtc@51 {
+1
arch/arm/boot/dts/imx6qdl.dtsi
··· 91 91 compatible = "arm,cortex-a9-twd-timer"; 92 92 reg = <0x00a00600 0x20>; 93 93 interrupts = <1 13 0xf01>; 94 + clocks = <&clks 15>; 94 95 }; 95 96 96 97 L2: l2-cache@00a02000 {
+7 -7
arch/arm/boot/dts/kirkwood-iomega_ix2_200.dts
··· 96 96 marvell,function = "gpio"; 97 97 }; 98 98 pmx_led_rebuild_brt_ctrl_1: pmx-led-rebuild-brt-ctrl-1 { 99 - marvell,pins = "mpp44"; 99 + marvell,pins = "mpp46"; 100 100 marvell,function = "gpio"; 101 101 }; 102 102 pmx_led_rebuild_brt_ctrl_2: pmx-led-rebuild-brt-ctrl-2 { 103 - marvell,pins = "mpp45"; 103 + marvell,pins = "mpp47"; 104 104 marvell,function = "gpio"; 105 105 }; 106 106 ··· 157 157 gpios = <&gpio0 16 0>; 158 158 linux,default-trigger = "default-on"; 159 159 }; 160 - health_led1 { 160 + rebuild_led { 161 + label = "status:white:rebuild_led"; 162 + gpios = <&gpio1 4 0>; 163 + }; 164 + health_led { 161 165 label = "status:red:health_led"; 162 166 gpios = <&gpio1 5 0>; 163 - }; 164 - health_led2 { 165 - label = "status:white:health_led"; 166 - gpios = <&gpio1 4 0>; 167 167 }; 168 168 backup_led { 169 169 label = "status:blue:backup_led";
-8
arch/arm/include/asm/glue-cache.h
··· 19 19 #undef _CACHE 20 20 #undef MULTI_CACHE 21 21 22 - #if defined(CONFIG_CPU_CACHE_V3) 23 - # ifdef _CACHE 24 - # define MULTI_CACHE 1 25 - # else 26 - # define _CACHE v3 27 - # endif 28 - #endif 29 - 30 22 #if defined(CONFIG_CPU_CACHE_V4) 31 23 # ifdef _CACHE 32 24 # define MULTI_CACHE 1
+1 -1
arch/arm/include/asm/hardware/iop3xx.h
··· 37 37 * IOP3XX processor registers 38 38 */ 39 39 #define IOP3XX_PERIPHERAL_PHYS_BASE 0xffffe000 40 - #define IOP3XX_PERIPHERAL_VIRT_BASE 0xfeffe000 40 + #define IOP3XX_PERIPHERAL_VIRT_BASE 0xfedfe000 41 41 #define IOP3XX_PERIPHERAL_SIZE 0x00002000 42 42 #define IOP3XX_PERIPHERAL_UPPER_PA (IOP3XX_PERIPHERAL_PHYS_BASE +\ 43 43 IOP3XX_PERIPHERAL_SIZE - 1)
+1 -1
arch/arm/include/asm/pgtable-3level.h
··· 111 111 #define L_PTE_S2_MT_WRITETHROUGH (_AT(pteval_t, 0xa) << 2) /* MemAttr[3:0] */ 112 112 #define L_PTE_S2_MT_WRITEBACK (_AT(pteval_t, 0xf) << 2) /* MemAttr[3:0] */ 113 113 #define L_PTE_S2_RDONLY (_AT(pteval_t, 1) << 6) /* HAP[1] */ 114 - #define L_PTE_S2_RDWR (_AT(pteval_t, 2) << 6) /* HAP[2:1] */ 114 + #define L_PTE_S2_RDWR (_AT(pteval_t, 3) << 6) /* HAP[2:1] */ 115 115 116 116 /* 117 117 * Hyp-mode PL2 PTE definitions for LPAE.
+2 -9
arch/arm/include/asm/tlbflush.h
··· 14 14 15 15 #include <asm/glue.h> 16 16 17 - #define TLB_V3_PAGE (1 << 0) 18 17 #define TLB_V4_U_PAGE (1 << 1) 19 18 #define TLB_V4_D_PAGE (1 << 2) 20 19 #define TLB_V4_I_PAGE (1 << 3) ··· 21 22 #define TLB_V6_D_PAGE (1 << 5) 22 23 #define TLB_V6_I_PAGE (1 << 6) 23 24 24 - #define TLB_V3_FULL (1 << 8) 25 25 #define TLB_V4_U_FULL (1 << 9) 26 26 #define TLB_V4_D_FULL (1 << 10) 27 27 #define TLB_V4_I_FULL (1 << 11) ··· 50 52 * ============= 51 53 * 52 54 * We have the following to choose from: 53 - * v3 - ARMv3 54 55 * v4 - ARMv4 without write buffer 55 56 * v4wb - ARMv4 with write buffer without I TLB flush entry instruction 56 57 * v4wbi - ARMv4 with write buffer with I TLB flush entry instruction ··· 327 330 if (tlb_flag(TLB_WB)) 328 331 dsb(); 329 332 330 - tlb_op(TLB_V3_FULL, "c6, c0, 0", zero); 331 333 tlb_op(TLB_V4_U_FULL | TLB_V6_U_FULL, "c8, c7, 0", zero); 332 334 tlb_op(TLB_V4_D_FULL | TLB_V6_D_FULL, "c8, c6, 0", zero); 333 335 tlb_op(TLB_V4_I_FULL | TLB_V6_I_FULL, "c8, c5, 0", zero); ··· 347 351 if (tlb_flag(TLB_WB)) 348 352 dsb(); 349 353 350 - if (possible_tlb_flags & (TLB_V3_FULL|TLB_V4_U_FULL|TLB_V4_D_FULL|TLB_V4_I_FULL)) { 354 + if (possible_tlb_flags & (TLB_V4_U_FULL|TLB_V4_D_FULL|TLB_V4_I_FULL)) { 351 355 if (cpumask_test_cpu(get_cpu(), mm_cpumask(mm))) { 352 - tlb_op(TLB_V3_FULL, "c6, c0, 0", zero); 353 356 tlb_op(TLB_V4_U_FULL, "c8, c7, 0", zero); 354 357 tlb_op(TLB_V4_D_FULL, "c8, c6, 0", zero); 355 358 tlb_op(TLB_V4_I_FULL, "c8, c5, 0", zero); ··· 380 385 if (tlb_flag(TLB_WB)) 381 386 dsb(); 382 387 383 - if (possible_tlb_flags & (TLB_V3_PAGE|TLB_V4_U_PAGE|TLB_V4_D_PAGE|TLB_V4_I_PAGE|TLB_V4_I_FULL) && 388 + if (possible_tlb_flags & (TLB_V4_U_PAGE|TLB_V4_D_PAGE|TLB_V4_I_PAGE|TLB_V4_I_FULL) && 384 389 cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) { 385 - tlb_op(TLB_V3_PAGE, "c6, c0, 0", uaddr); 386 390 tlb_op(TLB_V4_U_PAGE, "c8, c7, 1", uaddr); 387 391 tlb_op(TLB_V4_D_PAGE, "c8, c6, 1", uaddr); 388 392 tlb_op(TLB_V4_I_PAGE, "c8, c5, 1", uaddr); ··· 412 418 if (tlb_flag(TLB_WB)) 413 419 dsb(); 414 420 415 - tlb_op(TLB_V3_PAGE, "c6, c0, 0", kaddr); 416 421 tlb_op(TLB_V4_U_PAGE, "c8, c7, 1", kaddr); 417 422 tlb_op(TLB_V4_D_PAGE, "c8, c6, 1", kaddr); 418 423 tlb_op(TLB_V4_I_PAGE, "c8, c5, 1", kaddr);
+1 -1
arch/arm/kernel/hw_breakpoint.c
··· 1043 1043 return NOTIFY_OK; 1044 1044 } 1045 1045 1046 - static struct notifier_block __cpuinitdata dbg_cpu_pm_nb = { 1046 + static struct notifier_block dbg_cpu_pm_nb = { 1047 1047 .notifier_call = dbg_cpu_pm_notify, 1048 1048 }; 1049 1049
+4 -1
arch/arm/kernel/perf_event.c
··· 253 253 struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 254 254 struct pmu *leader_pmu = event->group_leader->pmu; 255 255 256 - if (event->pmu != leader_pmu || event->state <= PERF_EVENT_STATE_OFF) 256 + if (event->pmu != leader_pmu || event->state < PERF_EVENT_STATE_OFF) 257 + return 1; 258 + 259 + if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec) 257 260 return 1; 258 261 259 262 return armpmu->get_event_idx(hw_events, event) >= 0;
+2 -2
arch/arm/kernel/sched_clock.c
··· 45 45 46 46 static u32 __read_mostly (*read_sched_clock)(void) = jiffy_sched_clock_read; 47 47 48 - static inline u64 cyc_to_ns(u64 cyc, u32 mult, u32 shift) 48 + static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift) 49 49 { 50 50 return (cyc * mult) >> shift; 51 51 } 52 52 53 - static unsigned long long cyc_to_sched_clock(u32 cyc, u32 mask) 53 + static unsigned long long notrace cyc_to_sched_clock(u32 cyc, u32 mask) 54 54 { 55 55 u64 epoch_ns; 56 56 u32 epoch_cyc;
-3
arch/arm/kernel/setup.c
··· 56 56 #include <asm/virt.h> 57 57 58 58 #include "atags.h" 59 - #include "tcm.h" 60 59 61 60 62 61 #if defined(CONFIG_FPE_NWFPE) || defined(CONFIG_FPE_FASTFPE) ··· 796 797 hyp_mode_check(); 797 798 798 799 reserve_crashkernel(); 799 - 800 - tcm_init(); 801 800 802 801 #ifdef CONFIG_MULTI_IRQ_HANDLER 803 802 handle_arch_irq = mdesc->handle_irq;
-1
arch/arm/kernel/tcm.c
··· 17 17 #include <asm/mach/map.h> 18 18 #include <asm/memory.h> 19 19 #include <asm/system_info.h> 20 - #include "tcm.h" 21 20 22 21 static struct gen_pool *tcm_pool; 23 22 static bool dtcm_present;
arch/arm/kernel/tcm.h arch/arm/mm/tcm.h
+1
arch/arm/kvm/arm.c
··· 201 201 break; 202 202 case KVM_CAP_ARM_SET_DEVICE_ADDR: 203 203 r = 1; 204 + break; 204 205 case KVM_CAP_NR_VCPUS: 205 206 r = num_online_cpus(); 206 207 break;
+2 -2
arch/arm/kvm/coproc.c
··· 79 79 u32 val; 80 80 int cpu; 81 81 82 - cpu = get_cpu(); 83 - 84 82 if (!p->is_write) 85 83 return read_from_write_only(vcpu, p); 84 + 85 + cpu = get_cpu(); 86 86 87 87 cpumask_setall(&vcpu->arch.require_dcache_flush); 88 88 cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);
+4 -6
arch/arm/mach-highbank/hotplug.c
··· 28 28 */ 29 29 void __ref highbank_cpu_die(unsigned int cpu) 30 30 { 31 - flush_cache_all(); 32 - 33 31 highbank_set_cpu_jump(cpu, phys_to_virt(0)); 32 + 33 + flush_cache_louis(); 34 34 highbank_set_core_pwr(); 35 35 36 - cpu_do_idle(); 37 - 38 - /* We should never return from idle */ 39 - panic("highbank: cpu %d unexpectedly exit from shutdown\n", cpu); 36 + while (1) 37 + cpu_do_idle(); 40 38 }
+2
arch/arm/mach-imx/clk-imx35.c
··· 257 257 clk_register_clkdev(clk[wdog_gate], NULL, "imx2-wdt.0"); 258 258 clk_register_clkdev(clk[nfc_div], NULL, "imx25-nand.0"); 259 259 clk_register_clkdev(clk[csi_gate], NULL, "mx3-camera.0"); 260 + clk_register_clkdev(clk[admux_gate], "audmux", NULL); 260 261 261 262 clk_prepare_enable(clk[spba_gate]); 262 263 clk_prepare_enable(clk[gpio1_gate]); ··· 266 265 clk_prepare_enable(clk[iim_gate]); 267 266 clk_prepare_enable(clk[emi_gate]); 268 267 clk_prepare_enable(clk[max_gate]); 268 + clk_prepare_enable(clk[iomuxc_gate]); 269 269 270 270 /* 271 271 * SCC is needed to boot via mmc after a watchdog reset. The clock code
+1 -2
arch/arm/mach-imx/clk-imx6q.c
··· 115 115 static const char *gpu3d_core_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll2_pfd1_594m", "pll2_pfd2_396m", }; 116 116 static const char *gpu3d_shader_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll2_pfd1_594m", "pll2_pfd9_720m", }; 117 117 static const char *ipu_sels[] = { "mmdc_ch0_axi", "pll2_pfd2_396m", "pll3_120m", "pll3_pfd1_540m", }; 118 - static const char *ldb_di_sels[] = { "pll5_video", "pll2_pfd0_352m", "pll2_pfd2_396m", "mmdc_ch1_axi", "pll3_pfd1_540m", }; 118 + static const char *ldb_di_sels[] = { "pll5_video", "pll2_pfd0_352m", "pll2_pfd2_396m", "mmdc_ch1_axi", "pll3_usb_otg", }; 119 119 static const char *ipu_di_pre_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll5_video", "pll2_pfd0_352m", "pll2_pfd2_396m", "pll3_pfd1_540m", }; 120 120 static const char *ipu1_di0_sels[] = { "ipu1_di0_pre", "dummy", "dummy", "ldb_di0", "ldb_di1", }; 121 121 static const char *ipu1_di1_sels[] = { "ipu1_di1_pre", "dummy", "dummy", "ldb_di0", "ldb_di1", }; ··· 443 443 444 444 clk_register_clkdev(clk[gpt_ipg], "ipg", "imx-gpt.0"); 445 445 clk_register_clkdev(clk[gpt_ipg_per], "per", "imx-gpt.0"); 446 - clk_register_clkdev(clk[twd], NULL, "smp_twd"); 447 446 clk_register_clkdev(clk[cko1_sel], "cko1_sel", NULL); 448 447 clk_register_clkdev(clk[ahb], "ahb", NULL); 449 448 clk_register_clkdev(clk[cko1], "cko1", NULL);
+6 -1
arch/arm/mach-kirkwood/board-iomega_ix2_200.c
··· 20 20 .duplex = DUPLEX_FULL, 21 21 }; 22 22 23 + static struct mv643xx_eth_platform_data iomega_ix2_200_ge01_data = { 24 + .phy_addr = MV643XX_ETH_PHY_ADDR(11), 25 + }; 26 + 23 27 void __init iomega_ix2_200_init(void) 24 28 { 25 29 /* 26 30 * Basic setup. Needs to be called early. 27 31 */ 28 - kirkwood_ge01_init(&iomega_ix2_200_ge00_data); 32 + kirkwood_ge00_init(&iomega_ix2_200_ge00_data); 33 + kirkwood_ge01_init(&iomega_ix2_200_ge01_data); 29 34 }
+5 -11
arch/arm/mach-mvebu/irq-armada-370-xp.c
··· 61 61 */ 62 62 static void armada_370_xp_irq_mask(struct irq_data *d) 63 63 { 64 - #ifdef CONFIG_SMP 65 64 irq_hw_number_t hwirq = irqd_to_hwirq(d); 66 65 67 66 if (hwirq != ARMADA_370_XP_TIMER0_PER_CPU_IRQ) ··· 69 70 else 70 71 writel(hwirq, per_cpu_int_base + 71 72 ARMADA_370_XP_INT_SET_MASK_OFFS); 72 - #else 73 - writel(irqd_to_hwirq(d), 74 - per_cpu_int_base + ARMADA_370_XP_INT_SET_MASK_OFFS); 75 - #endif 76 73 } 77 74 78 75 static void armada_370_xp_irq_unmask(struct irq_data *d) 79 76 { 80 - #ifdef CONFIG_SMP 81 77 irq_hw_number_t hwirq = irqd_to_hwirq(d); 82 78 83 79 if (hwirq != ARMADA_370_XP_TIMER0_PER_CPU_IRQ) ··· 81 87 else 82 88 writel(hwirq, per_cpu_int_base + 83 89 ARMADA_370_XP_INT_CLEAR_MASK_OFFS); 84 - #else 85 - writel(irqd_to_hwirq(d), 86 - per_cpu_int_base + ARMADA_370_XP_INT_CLEAR_MASK_OFFS); 87 - #endif 88 90 } 89 91 90 92 #ifdef CONFIG_SMP ··· 136 146 unsigned int virq, irq_hw_number_t hw) 137 147 { 138 148 armada_370_xp_irq_mask(irq_get_irq_data(virq)); 139 - writel(hw, main_int_base + ARMADA_370_XP_INT_SET_ENABLE_OFFS); 149 + if (hw != ARMADA_370_XP_TIMER0_PER_CPU_IRQ) 150 + writel(hw, per_cpu_int_base + 151 + ARMADA_370_XP_INT_CLEAR_MASK_OFFS); 152 + else 153 + writel(hw, main_int_base + ARMADA_370_XP_INT_SET_ENABLE_OFFS); 140 154 irq_set_status_flags(virq, IRQ_LEVEL); 141 155 142 156 if (hw == ARMADA_370_XP_TIMER0_PER_CPU_IRQ) {
+1 -3
arch/arm/mach-s3c24xx/include/mach/irqs.h
··· 188 188 189 189 #if defined(CONFIG_CPU_S3C2416) 190 190 #define NR_IRQS (IRQ_S3C2416_I2S1 + 1) 191 - #elif defined(CONFIG_CPU_S3C2443) 192 - #define NR_IRQS (IRQ_S3C2443_AC97+1) 193 191 #else 194 - #define NR_IRQS (IRQ_S3C2440_AC97+1) 192 + #define NR_IRQS (IRQ_S3C2443_AC97 + 1) 195 193 #endif 196 194 197 195 /* compatibility define. */
+1 -1
arch/arm/mach-s3c24xx/irq.c
··· 500 500 base = (void *)0xfd000000; 501 501 502 502 intc->reg_mask = base + 0xa4; 503 - intc->reg_pending = base + 0x08; 503 + intc->reg_pending = base + 0xa8; 504 504 irq_num = 20; 505 505 irq_start = S3C2410_IRQ(32); 506 506 irq_offset = 4;
+1 -4
arch/arm/mm/Kconfig
··· 43 43 depends on !MMU 44 44 select CPU_32v4T 45 45 select CPU_ABRT_LV4T 46 - select CPU_CACHE_V3 # although the core is v4t 46 + select CPU_CACHE_V4 47 47 select CPU_CP15_MPU 48 48 select CPU_PABRT_LEGACY 49 49 help ··· 469 469 bool 470 470 471 471 # The cache model 472 - config CPU_CACHE_V3 473 - bool 474 - 475 472 config CPU_CACHE_V4 476 473 bool 477 474
-1
arch/arm/mm/Makefile
··· 33 33 obj-$(CONFIG_CPU_PABRT_V6) += pabort-v6.o 34 34 obj-$(CONFIG_CPU_PABRT_V7) += pabort-v7.o 35 35 36 - obj-$(CONFIG_CPU_CACHE_V3) += cache-v3.o 37 36 obj-$(CONFIG_CPU_CACHE_V4) += cache-v4.o 38 37 obj-$(CONFIG_CPU_CACHE_V4WT) += cache-v4wt.o 39 38 obj-$(CONFIG_CPU_CACHE_V4WB) += cache-v4wb.o
+1
arch/arm/mm/cache-feroceon-l2.c
··· 343 343 outer_cache.inv_range = feroceon_l2_inv_range; 344 344 outer_cache.clean_range = feroceon_l2_clean_range; 345 345 outer_cache.flush_range = feroceon_l2_flush_range; 346 + outer_cache.inv_all = l2_inv_all; 346 347 347 348 enable_l2(); 348 349
-137
arch/arm/mm/cache-v3.S
··· 1 - /* 2 - * linux/arch/arm/mm/cache-v3.S 3 - * 4 - * Copyright (C) 1997-2002 Russell king 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License version 2 as 8 - * published by the Free Software Foundation. 9 - */ 10 - #include <linux/linkage.h> 11 - #include <linux/init.h> 12 - #include <asm/page.h> 13 - #include "proc-macros.S" 14 - 15 - /* 16 - * flush_icache_all() 17 - * 18 - * Unconditionally clean and invalidate the entire icache. 19 - */ 20 - ENTRY(v3_flush_icache_all) 21 - mov pc, lr 22 - ENDPROC(v3_flush_icache_all) 23 - 24 - /* 25 - * flush_user_cache_all() 26 - * 27 - * Invalidate all cache entries in a particular address 28 - * space. 29 - * 30 - * - mm - mm_struct describing address space 31 - */ 32 - ENTRY(v3_flush_user_cache_all) 33 - /* FALLTHROUGH */ 34 - /* 35 - * flush_kern_cache_all() 36 - * 37 - * Clean and invalidate the entire cache. 38 - */ 39 - ENTRY(v3_flush_kern_cache_all) 40 - /* FALLTHROUGH */ 41 - 42 - /* 43 - * flush_user_cache_range(start, end, flags) 44 - * 45 - * Invalidate a range of cache entries in the specified 46 - * address space. 47 - * 48 - * - start - start address (may not be aligned) 49 - * - end - end address (exclusive, may not be aligned) 50 - * - flags - vma_area_struct flags describing address space 51 - */ 52 - ENTRY(v3_flush_user_cache_range) 53 - mov ip, #0 54 - mcreq p15, 0, ip, c7, c0, 0 @ flush ID cache 55 - mov pc, lr 56 - 57 - /* 58 - * coherent_kern_range(start, end) 59 - * 60 - * Ensure coherency between the Icache and the Dcache in the 61 - * region described by start. If you have non-snooping 62 - * Harvard caches, you need to implement this function. 
63 - * 64 - * - start - virtual start address 65 - * - end - virtual end address 66 - */ 67 - ENTRY(v3_coherent_kern_range) 68 - /* FALLTHROUGH */ 69 - 70 - /* 71 - * coherent_user_range(start, end) 72 - * 73 - * Ensure coherency between the Icache and the Dcache in the 74 - * region described by start. If you have non-snooping 75 - * Harvard caches, you need to implement this function. 76 - * 77 - * - start - virtual start address 78 - * - end - virtual end address 79 - */ 80 - ENTRY(v3_coherent_user_range) 81 - mov r0, #0 82 - mov pc, lr 83 - 84 - /* 85 - * flush_kern_dcache_area(void *page, size_t size) 86 - * 87 - * Ensure no D cache aliasing occurs, either with itself or 88 - * the I cache 89 - * 90 - * - addr - kernel address 91 - * - size - region size 92 - */ 93 - ENTRY(v3_flush_kern_dcache_area) 94 - /* FALLTHROUGH */ 95 - 96 - /* 97 - * dma_flush_range(start, end) 98 - * 99 - * Clean and invalidate the specified virtual address range. 100 - * 101 - * - start - virtual start address 102 - * - end - virtual end address 103 - */ 104 - ENTRY(v3_dma_flush_range) 105 - mov r0, #0 106 - mcr p15, 0, r0, c7, c0, 0 @ flush ID cache 107 - mov pc, lr 108 - 109 - /* 110 - * dma_unmap_area(start, size, dir) 111 - * - start - kernel virtual start address 112 - * - size - size of region 113 - * - dir - DMA direction 114 - */ 115 - ENTRY(v3_dma_unmap_area) 116 - teq r2, #DMA_TO_DEVICE 117 - bne v3_dma_flush_range 118 - /* FALLTHROUGH */ 119 - 120 - /* 121 - * dma_map_area(start, size, dir) 122 - * - start - kernel virtual start address 123 - * - size - size of region 124 - * - dir - DMA direction 125 - */ 126 - ENTRY(v3_dma_map_area) 127 - mov pc, lr 128 - ENDPROC(v3_dma_unmap_area) 129 - ENDPROC(v3_dma_map_area) 130 - 131 - .globl v3_flush_kern_cache_louis 132 - .equ v3_flush_kern_cache_louis, v3_flush_kern_cache_all 133 - 134 - __INITDATA 135 - 136 - @ define struct cpu_cache_fns (see <asm/cacheflush.h> and proc-macros.S) 137 - define_cache_functions v3
+1 -1
arch/arm/mm/cache-v4.S
··· 58 58 ENTRY(v4_flush_user_cache_range) 59 59 #ifdef CONFIG_CPU_CP15 60 60 mov ip, #0 61 - mcreq p15, 0, ip, c7, c7, 0 @ flush ID cache 61 + mcr p15, 0, ip, c7, c7, 0 @ flush ID cache 62 62 mov pc, lr 63 63 #else 64 64 /* FALLTHROUGH */
+2
arch/arm/mm/mmu.c
··· 34 34 #include <asm/mach/pci.h> 35 35 36 36 #include "mm.h" 37 + #include "tcm.h" 37 38 38 39 /* 39 40 * empty_zero_page is a special page that is used for ··· 1278 1277 dma_contiguous_remap(); 1279 1278 devicemaps_init(mdesc); 1280 1279 kmap_init(); 1280 + tcm_init(); 1281 1281 1282 1282 top_pmd = pmd_off_k(0xffff0000); 1283 1283
+17 -13
arch/arm/mm/proc-arm740.S
··· 77 77 mcr p15, 0, r0, c6, c0 @ set area 0, default 78 78 79 79 ldr r0, =(CONFIG_DRAM_BASE & 0xFFFFF000) @ base[31:12] of RAM 80 - ldr r1, =(CONFIG_DRAM_SIZE >> 12) @ size of RAM (must be >= 4KB) 81 - mov r2, #10 @ 11 is the minimum (4KB) 82 - 1: add r2, r2, #1 @ area size *= 2 83 - mov r1, r1, lsr #1 80 + ldr r3, =(CONFIG_DRAM_SIZE >> 12) @ size of RAM (must be >= 4KB) 81 + mov r4, #10 @ 11 is the minimum (4KB) 82 + 1: add r4, r4, #1 @ area size *= 2 83 + movs r3, r3, lsr #1 84 84 bne 1b @ count not zero r-shift 85 - orr r0, r0, r2, lsl #1 @ the area register value 85 + orr r0, r0, r4, lsl #1 @ the area register value 86 86 orr r0, r0, #1 @ set enable bit 87 87 mcr p15, 0, r0, c6, c1 @ set area 1, RAM 88 88 89 89 ldr r0, =(CONFIG_FLASH_MEM_BASE & 0xFFFFF000) @ base[31:12] of FLASH 90 - ldr r1, =(CONFIG_FLASH_SIZE >> 12) @ size of FLASH (must be >= 4KB) 91 - mov r2, #10 @ 11 is the minimum (4KB) 92 - 1: add r2, r2, #1 @ area size *= 2 93 - mov r1, r1, lsr #1 90 + ldr r3, =(CONFIG_FLASH_SIZE >> 12) @ size of FLASH (must be >= 4KB) 91 + cmp r3, #0 92 + moveq r0, #0 93 + beq 2f 94 + mov r4, #10 @ 11 is the minimum (4KB) 95 + 1: add r4, r4, #1 @ area size *= 2 96 + movs r3, r3, lsr #1 94 97 bne 1b @ count not zero r-shift 95 98 orr r0, r0, r4, lsl #1 @ the area register value 96 99 orr r0, r0, #1 @ set enable bit 97 - mcr p15, 0, r0, c6, c2 @ set area 2, ROM/FLASH 100 + 2: mcr p15, 0, r0, c6, c2 @ set area 2, ROM/FLASH 98 101 99 102 mov r0, #0x06 100 103 mcr p15, 0, r0, c2, c0 @ Region 1&2 cacheable ··· 140 137 .long 0x41807400 141 138 .long 0xfffffff0 142 139 .long 0 140 + .long 0 143 141 b __arm740_setup 144 142 .long cpu_arch_name 145 143 .long cpu_elf_name 146 - .long HWCAP_SWP | HWCAP_HALF | HWCAP_26BIT 144 + .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_26BIT 147 145 .long cpu_arm740_name 148 146 .long arm740_processor_functions 149 147 .long 0 150 148 .long 0 151 - .long v3_cache_fns @ cache model 149 + .long v4_cache_fns @ cache model 152 150 .size __arm740_proc_info, . - __arm740_proc_info
+1 -1
arch/arm/mm/proc-arm920.S
··· 387 387 /* Suspend/resume support: taken from arch/arm/plat-s3c24xx/sleep.S */ 388 388 .globl cpu_arm920_suspend_size 389 389 .equ cpu_arm920_suspend_size, 4 * 3 390 - #ifdef CONFIG_PM_SLEEP 390 + #ifdef CONFIG_ARM_CPU_SUSPEND 391 391 ENTRY(cpu_arm920_do_suspend) 392 392 stmfd sp!, {r4 - r6, lr} 393 393 mrc p15, 0, r4, c13, c0, 0 @ PID
+1 -1
arch/arm/mm/proc-arm926.S
··· 402 402 /* Suspend/resume support: taken from arch/arm/plat-s3c24xx/sleep.S */ 403 403 .globl cpu_arm926_suspend_size 404 404 .equ cpu_arm926_suspend_size, 4 * 3 405 - #ifdef CONFIG_PM_SLEEP 405 + #ifdef CONFIG_ARM_CPU_SUSPEND 406 406 ENTRY(cpu_arm926_do_suspend) 407 407 stmfd sp!, {r4 - r6, lr} 408 408 mrc p15, 0, r4, c13, c0, 0 @ PID
+1 -1
arch/arm/mm/proc-mohawk.S
··· 350 350 351 351 .globl cpu_mohawk_suspend_size 352 352 .equ cpu_mohawk_suspend_size, 4 * 6 353 - #ifdef CONFIG_PM_SLEEP 353 + #ifdef CONFIG_ARM_CPU_SUSPEND 354 354 ENTRY(cpu_mohawk_do_suspend) 355 355 stmfd sp!, {r4 - r9, lr} 356 356 mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode
+1 -1
arch/arm/mm/proc-sa1100.S
··· 172 172 173 173 .globl cpu_sa1100_suspend_size 174 174 .equ cpu_sa1100_suspend_size, 4 * 3 175 - #ifdef CONFIG_PM_SLEEP 175 + #ifdef CONFIG_ARM_CPU_SUSPEND 176 176 ENTRY(cpu_sa1100_do_suspend) 177 177 stmfd sp!, {r4 - r6, lr} 178 178 mrc p15, 0, r4, c3, c0, 0 @ domain ID
+2
arch/arm/mm/proc-syms.c
··· 17 17 18 18 #ifndef MULTI_CPU 19 19 EXPORT_SYMBOL(cpu_dcache_clean_area); 20 + #ifdef CONFIG_MMU 20 21 EXPORT_SYMBOL(cpu_set_pte_ext); 22 + #endif 21 23 #else 22 24 EXPORT_SYMBOL(processor); 23 25 #endif
+1 -1
arch/arm/mm/proc-v6.S
··· 138 138 /* Suspend/resume support: taken from arch/arm/mach-s3c64xx/sleep.S */ 139 139 .globl cpu_v6_suspend_size 140 140 .equ cpu_v6_suspend_size, 4 * 6 141 - #ifdef CONFIG_PM_SLEEP 141 + #ifdef CONFIG_ARM_CPU_SUSPEND 142 142 ENTRY(cpu_v6_do_suspend) 143 143 stmfd sp!, {r4 - r9, lr} 144 144 mrc p15, 0, r4, c13, c0, 0 @ FCSE/PID
+1 -1
arch/arm/mm/proc-xsc3.S
··· 413 413 414 414 .globl cpu_xsc3_suspend_size 415 415 .equ cpu_xsc3_suspend_size, 4 * 6 416 - #ifdef CONFIG_PM_SLEEP 416 + #ifdef CONFIG_ARM_CPU_SUSPEND 417 417 ENTRY(cpu_xsc3_do_suspend) 418 418 stmfd sp!, {r4 - r9, lr} 419 419 mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode
+1 -1
arch/arm/mm/proc-xscale.S
··· 528 528 529 529 .globl cpu_xscale_suspend_size 530 530 .equ cpu_xscale_suspend_size, 4 * 6 531 - #ifdef CONFIG_PM_SLEEP 531 + #ifdef CONFIG_ARM_CPU_SUSPEND 532 532 ENTRY(cpu_xscale_do_suspend) 533 533 stmfd sp!, {r4 - r9, lr} 534 534 mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode
+4
arch/avr32/include/asm/io.h
··· 165 165 #define readw_be __raw_readw 166 166 #define readl_be __raw_readl 167 167 168 + #define writeb_relaxed writeb 169 + #define writew_relaxed writew 170 + #define writel_relaxed writel 171 + 168 172 #define writeb_be __raw_writeb 169 173 #define writew_be __raw_writew 170 174 #define writel_be __raw_writel
+1 -1
arch/c6x/include/asm/irqflags.h
··· 27 27 /* set interrupt enabled status */ 28 28 static inline void arch_local_irq_restore(unsigned long flags) 29 29 { 30 - asm volatile (" mvc .s2 %0,CSR\n" : : "b"(flags)); 30 + asm volatile (" mvc .s2 %0,CSR\n" : : "b"(flags) : "memory"); 31 31 } 32 32 33 33 /* unconditionally enable interrupts */
+13 -64
arch/ia64/kernel/palinfo.c
··· 849 849 850 850 #define NR_PALINFO_ENTRIES (int) ARRAY_SIZE(palinfo_entries) 851 851 852 - /* 853 - * this array is used to keep track of the proc entries we create. This is 854 - * required in the module mode when we need to remove all entries. The procfs code 855 - * does not do recursion of deletion 856 - * 857 - * Notes: 858 - * - +1 accounts for the cpuN directory entry in /proc/pal 859 - */ 860 - #define NR_PALINFO_PROC_ENTRIES (NR_CPUS*(NR_PALINFO_ENTRIES+1)) 861 - 862 - static struct proc_dir_entry *palinfo_proc_entries[NR_PALINFO_PROC_ENTRIES]; 863 852 static struct proc_dir_entry *palinfo_dir; 864 853 865 854 /* ··· 960 971 static void __cpuinit 961 972 create_palinfo_proc_entries(unsigned int cpu) 962 973 { 963 - # define CPUSTR "cpu%d" 964 - 965 974 pal_func_cpu_u_t f; 966 - struct proc_dir_entry **pdir; 967 975 struct proc_dir_entry *cpu_dir; 968 976 int j; 969 - char cpustr[sizeof(CPUSTR)]; 970 - 971 - 972 - /* 973 - * we keep track of created entries in a depth-first order for 974 - * cleanup purposes. Each entry is stored into palinfo_proc_entries 975 - */ 976 - sprintf(cpustr,CPUSTR, cpu); 977 + char cpustr[3+4+1]; /* cpu numbers are up to 4095 on itanic */ 978 + sprintf(cpustr, "cpu%d", cpu); 977 979 978 980 cpu_dir = proc_mkdir(cpustr, palinfo_dir); 981 + if (!cpu_dir) 982 + return; 979 983 980 984 f.req_cpu = cpu; 981 985 982 - /* 983 - * Compute the location to store per cpu entries 984 - * We dont store the top level entry in this list, but 985 - * remove it finally after removing all cpu entries. 
986 - */ 987 - pdir = &palinfo_proc_entries[cpu*(NR_PALINFO_ENTRIES+1)]; 988 - *pdir++ = cpu_dir; 989 986 for (j=0; j < NR_PALINFO_ENTRIES; j++) { 990 987 f.func_id = j; 991 - *pdir = create_proc_read_entry( 992 - palinfo_entries[j].name, 0, cpu_dir, 993 - palinfo_read_entry, (void *)f.value); 994 - pdir++; 988 + create_proc_read_entry( 989 + palinfo_entries[j].name, 0, cpu_dir, 990 + palinfo_read_entry, (void *)f.value); 995 991 } 996 992 } 997 993 998 994 static void 999 995 remove_palinfo_proc_entries(unsigned int hcpu) 1000 996 { 1001 - int j; 1002 - struct proc_dir_entry *cpu_dir, **pdir; 1003 - 1004 - pdir = &palinfo_proc_entries[hcpu*(NR_PALINFO_ENTRIES+1)]; 1005 - cpu_dir = *pdir; 1006 - *pdir++=NULL; 1007 - for (j=0; j < (NR_PALINFO_ENTRIES); j++) { 1008 - if ((*pdir)) { 1009 - remove_proc_entry ((*pdir)->name, cpu_dir); 1010 - *pdir ++= NULL; 1011 - } 1012 - } 1013 - 1014 - if (cpu_dir) { 1015 - remove_proc_entry(cpu_dir->name, palinfo_dir); 1016 - } 997 + char cpustr[3+4+1]; /* cpu numbers are up to 4095 on itanic */ 998 + sprintf(cpustr, "cpu%d", hcpu); 999 + remove_proc_subtree(cpustr, palinfo_dir); 1017 1000 } 1018 1001 1019 1002 static int __cpuinit palinfo_cpu_callback(struct notifier_block *nfb, ··· 1019 1058 1020 1059 printk(KERN_INFO "PAL Information Facility v%s\n", PALINFO_VERSION); 1021 1060 palinfo_dir = proc_mkdir("pal", NULL); 1061 + if (!palinfo_dir) 1062 + return -ENOMEM; 1022 1063 1023 1064 /* Create palinfo dirs in /proc for all online cpus */ 1024 1065 for_each_online_cpu(i) { ··· 1036 1073 static void __exit 1037 1074 palinfo_exit(void) 1038 1075 { 1039 - int i = 0; 1040 - 1041 - /* remove all nodes: depth first pass. Could optimize this */ 1042 - for_each_online_cpu(i) { 1043 - remove_palinfo_proc_entries(i); 1044 - } 1045 - 1046 - /* 1047 - * Remove the top level entry finally 1048 - */ 1049 - remove_proc_entry(palinfo_dir->name, NULL); 1050 - 1051 - /* 1052 - * Unregister from cpu notifier callbacks 1053 - */ 1054 1076 unregister_hotcpu_notifier(&palinfo_cpu_notifier); 1077 + remove_proc_subtree("pal", NULL); 1055 1078 } 1056 1079 1057 1080 module_init(palinfo_init);
+20
arch/m68k/include/asm/gpio.h
··· 86 86 return gpio < MCFGPIO_PIN_MAX ? 0 : __gpio_cansleep(gpio); 87 87 } 88 88 89 + static inline int gpio_request_one(unsigned gpio, unsigned long flags, const char *label) 90 + { 91 + int err; 92 + 93 + err = gpio_request(gpio, label); 94 + if (err) 95 + return err; 96 + 97 + if (flags & GPIOF_DIR_IN) 98 + err = gpio_direction_input(gpio); 99 + else 100 + err = gpio_direction_output(gpio, 101 + (flags & GPIOF_INIT_HIGH) ? 1 : 0); 102 + 103 + if (err) 104 + gpio_free(gpio); 105 + 106 + return err; 107 + } 108 + 89 109 #endif
+2 -2
arch/powerpc/kernel/entry_64.S
··· 304 304 subi r12,r12,TI_FLAGS 305 305 306 306 4: /* Anything else left to do? */ 307 - SET_DEFAULT_THREAD_PPR(r3, r9) /* Set thread.ppr = 3 */ 307 + SET_DEFAULT_THREAD_PPR(r3, r10) /* Set thread.ppr = 3 */ 308 308 andi. r0,r9,(_TIF_SYSCALL_T_OR_A|_TIF_SINGLESTEP) 309 309 beq .ret_from_except_lite 310 310 ··· 657 657 /* Clear _TIF_EMULATE_STACK_STORE flag */ 658 658 lis r11,_TIF_EMULATE_STACK_STORE@h 659 659 addi r5,r9,TI_FLAGS 660 - ldarx r4,0,r5 660 + 0: ldarx r4,0,r5 661 661 andc r4,r4,r11 662 662 stdcx. r4,0,r5 663 663 bne- 0b
+2
arch/powerpc/kernel/process.c
··· 555 555 new->thread.regs->msr |= 556 556 (MSR_FP | new->thread.fpexc_mode); 557 557 } 558 + #ifdef CONFIG_ALTIVEC 558 559 if (msr & MSR_VEC) { 559 560 do_load_up_transact_altivec(&new->thread); 560 561 new->thread.regs->msr |= MSR_VEC; 561 562 } 563 + #endif 562 564 /* We may as well turn on VSX too since all the state is restored now */ 563 565 if (msr & MSR_VSX) 564 566 new->thread.regs->msr |= MSR_VSX;
+2
arch/powerpc/kernel/signal_32.c
··· 866 866 do_load_up_transact_fpu(&current->thread); 867 867 regs->msr |= (MSR_FP | current->thread.fpexc_mode); 868 868 } 869 + #ifdef CONFIG_ALTIVEC 869 870 if (msr & MSR_VEC) { 870 871 do_load_up_transact_altivec(&current->thread); 871 872 regs->msr |= MSR_VEC; 872 873 } 874 + #endif 873 875 874 876 return 0; 875 877 }
+2
arch/powerpc/kernel/signal_64.c
··· 522 522 do_load_up_transact_fpu(&current->thread); 523 523 regs->msr |= (MSR_FP | current->thread.fpexc_mode); 524 524 } 525 + #ifdef CONFIG_ALTIVEC 525 526 if (msr & MSR_VEC) { 526 527 do_load_up_transact_altivec(&current->thread); 527 528 regs->msr |= MSR_VEC; 528 529 } 530 + #endif 529 531 530 532 return err; 531 533 }
+2
arch/powerpc/kernel/tm.S
··· 309 309 or r5, r6, r5 /* Set MSR.FP+.VSX/.VEC */ 310 310 mtmsr r5 311 311 312 + #ifdef CONFIG_ALTIVEC 312 313 /* FP and VEC registers: These are recheckpointed from thread.fpr[] 313 314 * and thread.vr[] respectively. The thread.transact_fpr[] version 314 315 * is more modern, and will be loaded subsequently by any FPUnavailable ··· 324 323 REST_32VRS(0, r5, r3) /* r5 scratch, r3 THREAD ptr */ 325 324 ld r5, THREAD_VRSAVE(r3) 326 325 mtspr SPRN_VRSAVE, r5 326 + #endif 327 327 328 328 dont_restore_vec: 329 329 andi. r0, r4, MSR_FP
+8 -16
arch/powerpc/kvm/e500.h
··· 26 26 #define E500_PID_NUM 3 27 27 #define E500_TLB_NUM 2 28 28 29 - #define E500_TLB_VALID 1 30 - #define E500_TLB_BITMAP 2 29 + /* entry is mapped somewhere in host TLB */ 30 + #define E500_TLB_VALID (1 << 0) 31 + /* TLB1 entry is mapped by host TLB1, tracked by bitmaps */ 32 + #define E500_TLB_BITMAP (1 << 1) 33 + /* TLB1 entry is mapped by host TLB0 */ 31 34 #define E500_TLB_TLB0 (1 << 2) 32 35 33 36 struct tlbe_ref { 34 - pfn_t pfn; 35 - unsigned int flags; /* E500_TLB_* */ 37 + pfn_t pfn; /* valid only for TLB0, except briefly */ 38 + unsigned int flags; /* E500_TLB_* */ 36 39 }; 37 40 38 41 struct tlbe_priv { 39 - struct tlbe_ref ref; /* TLB0 only -- TLB1 uses tlb_refs */ 42 + struct tlbe_ref ref; 40 43 }; 41 44 42 45 #ifdef CONFIG_KVM_E500V2 ··· 66 63 67 64 unsigned int gtlb_nv[E500_TLB_NUM]; 68 65 69 - /* 70 - * information associated with each host TLB entry -- 71 - * TLB1 only for now. If/when guest TLB1 entries can be 72 - * mapped with host TLB0, this will be used for that too. 73 - * 74 - * We don't want to use this for guest TLB0 because then we'd 75 - * have the overhead of doing the translation again even if 76 - * the entry is still in the guest TLB (e.g. we swapped out 77 - * and back, and our host TLB entries got evicted). 78 - */ 79 - struct tlbe_ref *tlb_refs[E500_TLB_NUM]; 80 66 unsigned int host_tlb1_nv; 81 67 82 68 u32 svr;
+27 -57
arch/powerpc/kvm/e500_mmu_host.c
··· 193 193 struct tlbe_ref *ref = &vcpu_e500->gtlb_priv[tlbsel][esel].ref; 194 194 195 195 /* Don't bother with unmapped entries */ 196 - if (!(ref->flags & E500_TLB_VALID)) 197 - return; 196 + if (!(ref->flags & E500_TLB_VALID)) { 197 + WARN(ref->flags & (E500_TLB_BITMAP | E500_TLB_TLB0), 198 + "%s: flags %x\n", __func__, ref->flags); 199 + WARN_ON(tlbsel == 1 && vcpu_e500->g2h_tlb1_map[esel]); 200 + } 198 201 199 202 if (tlbsel == 1 && ref->flags & E500_TLB_BITMAP) { 200 203 u64 tmp = vcpu_e500->g2h_tlb1_map[esel]; ··· 251 248 pfn_t pfn) 252 249 { 253 250 ref->pfn = pfn; 254 - ref->flags = E500_TLB_VALID; 251 + ref->flags |= E500_TLB_VALID; 255 252 256 253 if (tlbe_is_writable(gtlbe)) 257 254 kvm_set_pfn_dirty(pfn); ··· 260 257 static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref) 261 258 { 262 259 if (ref->flags & E500_TLB_VALID) { 260 + /* FIXME: don't log bogus pfn for TLB1 */ 263 261 trace_kvm_booke206_ref_release(ref->pfn, ref->flags); 264 262 ref->flags = 0; 265 263 } ··· 278 274 279 275 static void clear_tlb_privs(struct kvmppc_vcpu_e500 *vcpu_e500) 280 276 { 281 - int tlbsel = 0; 277 + int tlbsel; 282 278 int i; 283 279 284 - for (i = 0; i < vcpu_e500->gtlb_params[tlbsel].entries; i++) { 285 - struct tlbe_ref *ref = 286 - &vcpu_e500->gtlb_priv[tlbsel][i].ref; 287 - kvmppc_e500_ref_release(ref); 280 + for (tlbsel = 0; tlbsel <= 1; tlbsel++) { 281 + for (i = 0; i < vcpu_e500->gtlb_params[tlbsel].entries; i++) { 282 + struct tlbe_ref *ref = 283 + &vcpu_e500->gtlb_priv[tlbsel][i].ref; 284 + kvmppc_e500_ref_release(ref); 285 + } 288 286 } 289 - } 290 - 291 - static void clear_tlb_refs(struct kvmppc_vcpu_e500 *vcpu_e500) 292 - { 293 - int stlbsel = 1; 294 - int i; 295 - 296 - kvmppc_e500_tlbil_all(vcpu_e500); 297 - 298 - for (i = 0; i < host_tlb_params[stlbsel].entries; i++) { 299 - struct tlbe_ref *ref = 300 - &vcpu_e500->tlb_refs[stlbsel][i]; 301 - kvmppc_e500_ref_release(ref); 302 - } 303 - 304 - clear_tlb_privs(vcpu_e500); 305 287 } 306 289 void kvmppc_core_flush_tlb(struct kvm_vcpu *vcpu) 308 290 { 309 291 struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu); 310 - clear_tlb_refs(vcpu_e500); 292 + kvmppc_e500_tlbil_all(vcpu_e500); 293 + clear_tlb_privs(vcpu_e500); 311 294 clear_tlb1_bitmap(vcpu_e500); 312 295 } 313 296 ··· 449 458 gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1); 450 459 } 451 460 452 - /* Drop old ref and setup new one. */ 453 - kvmppc_e500_ref_release(ref); 454 461 kvmppc_e500_ref_setup(ref, gtlbe, pfn); 455 462 456 463 kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, ··· 496 507 if (unlikely(vcpu_e500->host_tlb1_nv >= tlb1_max_shadow_size())) 497 508 vcpu_e500->host_tlb1_nv = 0; 498 509 499 - vcpu_e500->tlb_refs[1][sesel] = *ref; 500 - vcpu_e500->g2h_tlb1_map[esel] |= (u64)1 << sesel; 501 - vcpu_e500->gtlb_priv[1][esel].ref.flags |= E500_TLB_BITMAP; 502 510 if (vcpu_e500->h2g_tlb1_rmap[sesel]) { 503 - unsigned int idx = vcpu_e500->h2g_tlb1_rmap[sesel]; 511 + unsigned int idx = vcpu_e500->h2g_tlb1_rmap[sesel] - 1; 504 512 vcpu_e500->g2h_tlb1_map[idx] &= ~(1ULL << sesel); 505 513 } 506 - vcpu_e500->h2g_tlb1_rmap[sesel] = esel; 514 + 515 + vcpu_e500->gtlb_priv[1][esel].ref.flags |= E500_TLB_BITMAP; 516 + vcpu_e500->g2h_tlb1_map[esel] |= (u64)1 << sesel; 517 + vcpu_e500->h2g_tlb1_rmap[sesel] = esel + 1; 518 + WARN_ON(!(ref->flags & E500_TLB_VALID)); 507 519 508 520 return sesel; 509 521 } ··· 516 526 u64 gvaddr, gfn_t gfn, struct kvm_book3e_206_tlb_entry *gtlbe, 517 527 struct kvm_book3e_206_tlb_entry *stlbe, int esel) 518 528 { 519 - struct tlbe_ref ref; 529 + struct tlbe_ref *ref = &vcpu_e500->gtlb_priv[1][esel].ref; 520 530 int sesel; 521 531 int r; 522 532 523 - ref.flags = 0; 524 533 r = kvmppc_e500_shadow_map(vcpu_e500, gvaddr, gfn, gtlbe, 1, stlbe, 525 - &ref); 534 + ref); 526 535 if (r) 527 536 return r; 528 537 ··· 533 544 } 534 545 535 546 /* Otherwise map into TLB1 */ 536 - sesel = kvmppc_e500_tlb1_map_tlb1(vcpu_e500, &ref, esel); 547 + sesel = kvmppc_e500_tlb1_map_tlb1(vcpu_e500, ref, esel); 537 548 write_stlbe(vcpu_e500, gtlbe, stlbe, 1, sesel); 538 549 539 550 return 0; ··· 554 565 case 0: 555 566 priv = &vcpu_e500->gtlb_priv[tlbsel][esel]; 556 567 557 - /* Triggers after clear_tlb_refs or on initial mapping */ 568 + /* Triggers after clear_tlb_privs or on initial mapping */ 558 569 if (!(priv->ref.flags & E500_TLB_VALID)) { 559 570 kvmppc_e500_tlb0_map(vcpu_e500, esel, &stlbe); 560 571 } else { ··· 654 665 host_tlb_params[0].entries / host_tlb_params[0].ways; 655 666 host_tlb_params[1].sets = 1; 656 667 657 - vcpu_e500->tlb_refs[0] = 658 - kzalloc(sizeof(struct tlbe_ref) * host_tlb_params[0].entries, 659 - GFP_KERNEL); 660 - if (!vcpu_e500->tlb_refs[0]) 661 - goto err; 662 - 663 - vcpu_e500->tlb_refs[1] = 664 - kzalloc(sizeof(struct tlbe_ref) * host_tlb_params[1].entries, 665 - GFP_KERNEL); 666 - if (!vcpu_e500->tlb_refs[1]) 667 - goto err; 668 - 669 668 vcpu_e500->h2g_tlb1_rmap = kzalloc(sizeof(unsigned int) * 670 669 host_tlb_params[1].entries, 671 670 GFP_KERNEL); 672 671 if (!vcpu_e500->h2g_tlb1_rmap) 673 - goto err; 672 + return -EINVAL; 674 673 675 674 return 0; 676 - 677 - err: 678 - kfree(vcpu_e500->tlb_refs[0]); 679 - kfree(vcpu_e500->tlb_refs[1]); 680 - return -EINVAL; 681 675 } 682 676 683 677 void e500_mmu_host_uninit(struct kvmppc_vcpu_e500 *vcpu_e500) 684 678 { 685 679 kfree(vcpu_e500->h2g_tlb1_rmap); 686 - kfree(vcpu_e500->tlb_refs[0]); 687 - kfree(vcpu_e500->tlb_refs[1]); 688 680 }
+6 -1
arch/powerpc/kvm/e500mc.c
··· 108 108 { 109 109 } 110 110 111 + static DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu_on_cpu); 112 + 111 113 void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu) 112 114 { 113 115 struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu); ··· 138 136 mtspr(SPRN_GDEAR, vcpu->arch.shared->dar); 139 137 mtspr(SPRN_GESR, vcpu->arch.shared->esr); 140 138 141 - if (vcpu->arch.oldpir != mfspr(SPRN_PIR)) 139 + if (vcpu->arch.oldpir != mfspr(SPRN_PIR) || 140 + __get_cpu_var(last_vcpu_on_cpu) != vcpu) { 142 141 kvmppc_e500_tlbil_all(vcpu_e500); 142 + __get_cpu_var(last_vcpu_on_cpu) = vcpu; 143 + } 143 144 144 145 kvmppc_load_guest_fp(vcpu); 145 146 }
+7 -1
arch/powerpc/platforms/pseries/lpar.c
··· 186 186 (0x1UL << 4), &dummy1, &dummy2); 187 187 if (lpar_rc == H_SUCCESS) 188 188 return i; 189 - BUG_ON(lpar_rc != H_NOT_FOUND); 189 + 190 + /* 191 + * The test for adjunct partition is performed before the 192 + * ANDCOND test. H_RESOURCE may be returned, so we need to 193 + * check for that as well. 194 + */ 195 + BUG_ON(lpar_rc != H_NOT_FOUND && lpar_rc != H_RESOURCE); 190 196 191 197 slot_offset++; 192 198 slot_offset &= 0x7;
-4
arch/s390/include/asm/io.h
··· 50 50 #define ioremap_nocache(addr, size) ioremap(addr, size) 51 51 #define ioremap_wc ioremap_nocache 52 52 53 - /* TODO: s390 cannot support io_remap_pfn_range... */ 54 - #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \ 55 - remap_pfn_range(vma, vaddr, pfn, size, prot) 56 - 57 53 static inline void __iomem *ioremap(unsigned long offset, unsigned long size) 58 54 { 59 55 return (void __iomem *) offset;
+4
arch/s390/include/asm/pgtable.h
··· 57 57 (((unsigned long)(vaddr)) &zero_page_mask)))) 58 58 #define __HAVE_COLOR_ZERO_PAGE 59 59 60 + /* TODO: s390 cannot support io_remap_pfn_range... */ 61 + #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \ 62 + remap_pfn_range(vma, vaddr, pfn, size, prot) 63 + 60 64 #endif /* !__ASSEMBLY__ */ 61 65 62 66 /*
+5
arch/sparc/include/asm/Kbuild
··· 2 2 3 3 4 4 generic-y += clkdev.h 5 + generic-y += cputime.h 5 6 generic-y += div64.h 7 + generic-y += emergency-restart.h 6 8 generic-y += exec.h 7 9 generic-y += local64.h 10 + generic-y += mutex.h 8 11 generic-y += irq_regs.h 9 12 generic-y += local.h 10 13 generic-y += module.h 14 + generic-y += serial.h 11 15 generic-y += trace_clock.h 16 + generic-y += types.h 12 17 generic-y += word-at-a-time.h
-6
arch/sparc/include/asm/cputime.h
··· 1 - #ifndef __SPARC_CPUTIME_H 2 - #define __SPARC_CPUTIME_H 3 - 4 - #include <asm-generic/cputime.h> 5 - 6 - #endif /* __SPARC_CPUTIME_H */
-6
arch/sparc/include/asm/emergency-restart.h
··· 1 - #ifndef _ASM_EMERGENCY_RESTART_H 2 - #define _ASM_EMERGENCY_RESTART_H 3 - 4 - #include <asm-generic/emergency-restart.h> 5 - 6 - #endif /* _ASM_EMERGENCY_RESTART_H */
-9
arch/sparc/include/asm/mutex.h
··· 1 - /* 2 - * Pull in the generic implementation for the mutex fastpath. 3 - * 4 - * TODO: implement optimized primitives instead, or leave the generic 5 - * implementation in place, or pick the atomic_xchg() based generic 6 - * implementation. (see asm-generic/mutex-xchg.h for details) 7 - */ 8 - 9 - #include <asm-generic/mutex-dec.h>
+1
arch/sparc/include/asm/pgtable_64.h
··· 915 915 return remap_pfn_range(vma, from, phys_base >> PAGE_SHIFT, size, prot); 916 916 } 917 917 918 + #include <asm/tlbflush.h> 918 919 #include <asm-generic/pgtable.h> 919 920 920 921 /* We provide our own get_unmapped_area to cope with VA holes and
-6
arch/sparc/include/asm/serial.h
··· 1 - #ifndef __SPARC_SERIAL_H 2 - #define __SPARC_SERIAL_H 3 - 4 - #define BASE_BAUD ( 1843200 / 16 ) 5 - 6 - #endif /* __SPARC_SERIAL_H */
-5
arch/sparc/include/asm/smp_32.h
··· 36 36 unsigned long, unsigned long); 37 37 38 38 void cpu_panic(void); 39 - extern void smp4m_irq_rotate(int cpu); 40 39 41 40 /* 42 41 * General functions that each host system must provide. ··· 45 46 void sun4d_init_smp(void); 46 47 47 48 void smp_callin(void); 48 - void smp_boot_cpus(void); 49 49 void smp_store_cpu_info(int); 50 50 51 51 void smp_resched_interrupt(void); ··· 104 106 extern int hard_smp_processor_id(void); 105 107 106 108 #define raw_smp_processor_id() (current_thread_info()->cpu) 107 - 108 - #define prof_multiplier(__cpu) cpu_data(__cpu).multiplier 109 - #define prof_counter(__cpu) cpu_data(__cpu).counter 110 109 111 110 void smp_setup_cpu_possible_map(void); 112 111
+1 -2
arch/sparc/include/asm/switch_to_64.h
··· 18 18 * and 2 stores in this critical code path. -DaveM 19 19 */ 20 20 #define switch_to(prev, next, last) \ 21 - do { flush_tlb_pending(); \ 22 - save_and_clear_fpu(); \ 21 + do { save_and_clear_fpu(); \ 23 22 /* If you are tempted to conditionalize the following */ \ 24 23 /* so that ASI is only written if it changes, think again. */ \ 25 24 __asm__ __volatile__("wr %%g0, %0, %%asi" \
+31 -6
arch/sparc/include/asm/tlbflush_64.h
··· 11 11 struct tlb_batch { 12 12 struct mm_struct *mm; 13 13 unsigned long tlb_nr; 14 + unsigned long active; 14 15 unsigned long vaddrs[TLB_BATCH_NR]; 15 16 }; 16 17 17 18 extern void flush_tsb_kernel_range(unsigned long start, unsigned long end); 18 19 extern void flush_tsb_user(struct tlb_batch *tb); 20 + extern void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr); 19 21 20 22 /* TLB flush operations. */ 21 23 22 - extern void flush_tlb_pending(void); 24 + static inline void flush_tlb_mm(struct mm_struct *mm) 25 + { 26 + } 23 27 24 - #define flush_tlb_range(vma,start,end) \ 25 - do { (void)(start); flush_tlb_pending(); } while (0) 26 - #define flush_tlb_page(vma,addr) flush_tlb_pending() 27 - #define flush_tlb_mm(mm) flush_tlb_pending() 28 + static inline void flush_tlb_page(struct vm_area_struct *vma, 29 + unsigned long vmaddr) 30 + { 31 + } 32 + 33 + static inline void flush_tlb_range(struct vm_area_struct *vma, 34 + unsigned long start, unsigned long end) 35 + { 36 + } 37 + 38 + #define __HAVE_ARCH_ENTER_LAZY_MMU_MODE 39 + 40 + extern void flush_tlb_pending(void); 41 + extern void arch_enter_lazy_mmu_mode(void); 42 + extern void arch_leave_lazy_mmu_mode(void); 43 + #define arch_flush_lazy_mmu_mode() do {} while (0) 28 44 29 45 /* Local cpu only. */ 30 46 extern void __flush_tlb_all(void); 31 - 47 + extern void __flush_tlb_page(unsigned long context, unsigned long vaddr); 32 48 extern void __flush_tlb_kernel_range(unsigned long start, unsigned long end); 33 49 34 50 #ifndef CONFIG_SMP ··· 54 38 __flush_tlb_kernel_range(start,end); \ 55 39 } while (0) 56 40 41 + static inline void global_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr) 42 + { 43 + __flush_tlb_page(CTX_HWBITS(mm->context), vaddr); 44 + } 45 + 57 46 #else /* CONFIG_SMP */ 58 47 59 48 extern void smp_flush_tlb_kernel_range(unsigned long start, unsigned long end); 49 + extern void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr); 60 50 61 51 #define flush_tlb_kernel_range(start, end) \ 62 52 do { flush_tsb_kernel_range(start,end); \ 63 53 smp_flush_tlb_kernel_range(start, end); \ 64 54 } while (0) 55 + 56 + #define global_flush_tlb_page(mm, vaddr) \ 57 + smp_flush_tlb_page(mm, vaddr) 65 58 66 59 #endif /* ! CONFIG_SMP */ 67 60
-1
arch/sparc/include/uapi/asm/Kbuild
··· 44 44 header-y += termbits.h 45 45 header-y += termios.h 46 46 header-y += traps.h 47 - header-y += types.h 48 47 header-y += uctx.h 49 48 header-y += unistd.h 50 49 header-y += utrap.h
-17
arch/sparc/include/uapi/asm/types.h
··· 1 - #ifndef _SPARC_TYPES_H 2 - #define _SPARC_TYPES_H 3 - /* 4 - * This file is never included by application software unless 5 - * explicitly requested (e.g., via linux/types.h) in which case the 6 - * application is Linux specific so (user-) name space pollution is 7 - * not a major issue. However, for interoperability, libraries still 8 - * need to be careful to avoid a name clashes. 9 - */ 10 - 11 - #if defined(__sparc__) 12 - 13 - #include <asm-generic/int-ll64.h> 14 - 15 - #endif /* defined(__sparc__) */ 16 - 17 - #endif /* defined(_SPARC_TYPES_H) */
+38 -5
arch/sparc/kernel/smp_64.c
··· 849 849 } 850 850 851 851 extern unsigned long xcall_flush_tlb_mm; 852 - extern unsigned long xcall_flush_tlb_pending; 852 + extern unsigned long xcall_flush_tlb_page; 853 853 extern unsigned long xcall_flush_tlb_kernel_range; 854 854 extern unsigned long xcall_fetch_glob_regs; 855 855 extern unsigned long xcall_fetch_glob_pmu; ··· 1074 1074 put_cpu(); 1075 1075 } 1076 1076 1077 + struct tlb_pending_info { 1078 + unsigned long ctx; 1079 + unsigned long nr; 1080 + unsigned long *vaddrs; 1081 + }; 1082 + 1083 + static void tlb_pending_func(void *info) 1084 + { 1085 + struct tlb_pending_info *t = info; 1086 + 1087 + __flush_tlb_pending(t->ctx, t->nr, t->vaddrs); 1088 + } 1089 + 1077 1090 void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long *vaddrs) 1078 1091 { 1079 1092 u32 ctx = CTX_HWBITS(mm->context); 1093 + struct tlb_pending_info info; 1094 + int cpu = get_cpu(); 1095 + 1096 + info.ctx = ctx; 1097 + info.nr = nr; 1098 + info.vaddrs = vaddrs; 1099 + 1100 + if (mm == current->mm && atomic_read(&mm->mm_users) == 1) 1101 + cpumask_copy(mm_cpumask(mm), cpumask_of(cpu)); 1102 + else 1103 + smp_call_function_many(mm_cpumask(mm), tlb_pending_func, 1104 + &info, 1); 1105 + 1106 + __flush_tlb_pending(ctx, nr, vaddrs); 1107 + 1108 + put_cpu(); 1109 + } 1110 + 1111 + void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr) 1112 + { 1113 + unsigned long context = CTX_HWBITS(mm->context); 1080 1114 int cpu = get_cpu(); 1081 1115 1082 1116 if (mm == current->mm && atomic_read(&mm->mm_users) == 1) 1083 1117 cpumask_copy(mm_cpumask(mm), cpumask_of(cpu)); 1084 1118 else 1085 - smp_cross_call_masked(&xcall_flush_tlb_pending, 1086 - ctx, nr, (unsigned long) vaddrs, 1119 + smp_cross_call_masked(&xcall_flush_tlb_page, 1120 + context, vaddr, 0, 1087 1121 mm_cpumask(mm)); 1088 - 1089 - __flush_tlb_pending(ctx, nr, vaddrs); 1122 + __flush_tlb_page(context, vaddr); 1090 1123 1091 1124 put_cpu(); 1092 1125 }
+1 -5
arch/sparc/lib/bitext.c
··· 119 119 120 120 void bit_map_init(struct bit_map *t, unsigned long *map, int size) 121 121 { 122 - 123 - if ((size & 07) != 0) 124 - BUG(); 125 - memset(map, 0, size>>3); 126 - 122 + bitmap_zero(map, size); 127 123 memset(t, 0, sizeof *t); 128 124 spin_lock_init(&t->lock); 129 125 t->map = map;
+1 -1
arch/sparc/mm/iommu.c
··· 34 34 #define IOMMU_RNGE IOMMU_RNGE_256MB 35 35 #define IOMMU_START 0xF0000000 36 36 #define IOMMU_WINSIZE (256*1024*1024U) 37 - #define IOMMU_NPTES (IOMMU_WINSIZE/PAGE_SIZE) /* 64K PTEs, 265KB */ 37 + #define IOMMU_NPTES (IOMMU_WINSIZE/PAGE_SIZE) /* 64K PTEs, 256KB */ 38 38 #define IOMMU_ORDER 6 /* 4096 * (1<<6) */ 39 39 40 40 /* srmmu.c */
+3 -1
arch/sparc/mm/srmmu.c
··· 280 280 SRMMU_NOCACHE_ALIGN_MAX, 0UL); 281 281 memset(srmmu_nocache_pool, 0, srmmu_nocache_size); 282 282 283 - srmmu_nocache_bitmap = __alloc_bootmem(bitmap_bits >> 3, SMP_CACHE_BYTES, 0UL); 283 + srmmu_nocache_bitmap = 284 + __alloc_bootmem(BITS_TO_LONGS(bitmap_bits) * sizeof(long), 285 + SMP_CACHE_BYTES, 0UL); 284 286 bit_map_init(&srmmu_nocache_map, srmmu_nocache_bitmap, bitmap_bits); 285 287 286 288 srmmu_swapper_pg_dir = __srmmu_get_nocache(SRMMU_PGD_TABLE_SIZE, SRMMU_PGD_TABLE_SIZE);
+34 -4
arch/sparc/mm/tlb.c
··· 24 24 void flush_tlb_pending(void) 25 25 { 26 26 struct tlb_batch *tb = &get_cpu_var(tlb_batch); 27 + struct mm_struct *mm = tb->mm; 27 28 28 - if (tb->tlb_nr) { 29 - flush_tsb_user(tb); 29 + if (!tb->tlb_nr) 30 + goto out; 30 31 31 - if (CTX_VALID(tb->mm->context)) { 32 + flush_tsb_user(tb); 33 + 34 + if (CTX_VALID(mm->context)) { 35 + if (tb->tlb_nr == 1) { 36 + global_flush_tlb_page(mm, tb->vaddrs[0]); 37 + } else { 32 38 #ifdef CONFIG_SMP 33 39 smp_flush_tlb_pending(tb->mm, tb->tlb_nr, 34 40 &tb->vaddrs[0]); ··· 43 37 tb->tlb_nr, &tb->vaddrs[0]); 44 38 #endif 45 39 } 46 - tb->tlb_nr = 0; 47 40 } 48 41 42 + tb->tlb_nr = 0; 43 + 44 + out: 49 45 put_cpu_var(tlb_batch); 46 + } 47 + 48 + void arch_enter_lazy_mmu_mode(void) 49 + { 50 + struct tlb_batch *tb = &__get_cpu_var(tlb_batch); 51 + 52 + tb->active = 1; 53 + } 54 + 55 + void arch_leave_lazy_mmu_mode(void) 56 + { 57 + struct tlb_batch *tb = &__get_cpu_var(tlb_batch); 58 + 59 + if (tb->tlb_nr) 60 + flush_tlb_pending(); 61 + tb->active = 0; 50 62 } 51 63 52 64 static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr, ··· 82 58 if (unlikely(nr != 0 && mm != tb->mm)) { 83 59 flush_tlb_pending(); 84 60 nr = 0; 61 + } 62 + 63 + if (!tb->active) { 64 + global_flush_tlb_page(mm, vaddr); 65 + flush_tsb_user_page(mm, vaddr); 66 + return; 85 67 } 86 68 87 69 if (nr == 0)
+42 -15
arch/sparc/mm/tsb.c
··· 7 7 #include <linux/preempt.h> 8 8 #include <linux/slab.h> 9 9 #include <asm/page.h> 10 - #include <asm/tlbflush.h> 11 - #include <asm/tlb.h> 12 - #include <asm/mmu_context.h> 13 10 #include <asm/pgtable.h> 11 + #include <asm/mmu_context.h> 14 12 #include <asm/tsb.h> 13 + #include <asm/tlb.h> 15 14 #include <asm/oplib.h> 16 15 17 16 extern struct tsb swapper_tsb[KERNEL_TSB_NENTRIES]; ··· 45 46 } 46 47 } 47 48 49 + static void __flush_tsb_one_entry(unsigned long tsb, unsigned long v, 50 + unsigned long hash_shift, 51 + unsigned long nentries) 52 + { 53 + unsigned long tag, ent, hash; 54 + 55 + v &= ~0x1UL; 56 + hash = tsb_hash(v, hash_shift, nentries); 57 + ent = tsb + (hash * sizeof(struct tsb)); 58 + tag = (v >> 22UL); 59 + 60 + tsb_flush(ent, tag); 61 + } 62 + 48 63 static void __flush_tsb_one(struct tlb_batch *tb, unsigned long hash_shift, 49 64 unsigned long tsb, unsigned long nentries) 50 65 { 51 66 unsigned long i; 52 67 53 - for (i = 0; i < tb->tlb_nr; i++) { 54 - unsigned long v = tb->vaddrs[i]; 55 - unsigned long tag, ent, hash; 56 - 57 - v &= ~0x1UL; 58 - 59 - hash = tsb_hash(v, hash_shift, nentries); 60 - ent = tsb + (hash * sizeof(struct tsb)); 61 - tag = (v >> 22UL); 62 - 63 - tsb_flush(ent, tag); 64 - } 68 + for (i = 0; i < tb->tlb_nr; i++) 69 + __flush_tsb_one_entry(tsb, tb->vaddrs[i], hash_shift, nentries); 65 70 } 66 71 67 72 void flush_tsb_user(struct tlb_batch *tb) ··· 88 85 if (tlb_type == cheetah_plus || tlb_type == hypervisor) 89 86 base = __pa(base); 90 87 __flush_tsb_one(tb, HPAGE_SHIFT, base, nentries); 88 + } 89 + #endif 90 + spin_unlock_irqrestore(&mm->context.lock, flags); 91 + } 92 + 93 + void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr) 94 + { 95 + unsigned long nentries, base, flags; 96 + 97 + spin_lock_irqsave(&mm->context.lock, flags); 98 + 99 + base = (unsigned long) mm->context.tsb_block[MM_TSB_BASE].tsb; 100 + nentries = mm->context.tsb_block[MM_TSB_BASE].tsb_nentries; 101 + if (tlb_type == cheetah_plus || tlb_type == hypervisor) 102 + base = __pa(base); 103 + __flush_tsb_one_entry(base, vaddr, PAGE_SHIFT, nentries); 104 + 105 + #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE) 106 + if (mm->context.tsb_block[MM_TSB_HUGE].tsb) { 107 + base = (unsigned long) mm->context.tsb_block[MM_TSB_HUGE].tsb; 108 + nentries = mm->context.tsb_block[MM_TSB_HUGE].tsb_nentries; 109 + if (tlb_type == cheetah_plus || tlb_type == hypervisor) 110 + base = __pa(base); 111 + __flush_tsb_one_entry(base, vaddr, HPAGE_SHIFT, nentries); 91 112 } 92 113 #endif 93 114 spin_unlock_irqrestore(&mm->context.lock, flags);
+95 -24
arch/sparc/mm/ultra.S
··· 53 53 nop 54 54 55 55 .align 32 56 + .globl __flush_tlb_page 57 + __flush_tlb_page: /* 22 insns */ 58 + /* %o0 = context, %o1 = vaddr */ 59 + rdpr %pstate, %g7 60 + andn %g7, PSTATE_IE, %g2 61 + wrpr %g2, %pstate 62 + mov SECONDARY_CONTEXT, %o4 63 + ldxa [%o4] ASI_DMMU, %g2 64 + stxa %o0, [%o4] ASI_DMMU 65 + andcc %o1, 1, %g0 66 + andn %o1, 1, %o3 67 + be,pn %icc, 1f 68 + or %o3, 0x10, %o3 69 + stxa %g0, [%o3] ASI_IMMU_DEMAP 70 + 1: stxa %g0, [%o3] ASI_DMMU_DEMAP 71 + membar #Sync 72 + stxa %g2, [%o4] ASI_DMMU 73 + sethi %hi(KERNBASE), %o4 74 + flush %o4 75 + retl 76 + wrpr %g7, 0x0, %pstate 77 + nop 78 + nop 79 + nop 80 + nop 81 + 82 + .align 32 56 83 .globl __flush_tlb_pending 57 84 __flush_tlb_pending: /* 26 insns */ 58 85 /* %o0 = context, %o1 = nr, %o2 = vaddrs[] */ ··· 230 203 retl 231 204 wrpr %g7, 0x0, %pstate 232 205 206 + __cheetah_flush_tlb_page: /* 22 insns */ 207 + /* %o0 = context, %o1 = vaddr */ 208 + rdpr %pstate, %g7 209 + andn %g7, PSTATE_IE, %g2 210 + wrpr %g2, 0x0, %pstate 211 + wrpr %g0, 1, %tl 212 + mov PRIMARY_CONTEXT, %o4 213 + ldxa [%o4] ASI_DMMU, %g2 214 + srlx %g2, CTX_PGSZ1_NUC_SHIFT, %o3 215 + sllx %o3, CTX_PGSZ1_NUC_SHIFT, %o3 216 + or %o0, %o3, %o0 /* Preserve nucleus page size fields */ 217 + stxa %o0, [%o4] ASI_DMMU 218 + andcc %o1, 1, %g0 219 + be,pn %icc, 1f 220 + andn %o1, 1, %o3 221 + stxa %g0, [%o3] ASI_IMMU_DEMAP 222 + 1: stxa %g0, [%o3] ASI_DMMU_DEMAP 223 + membar #Sync 224 + stxa %g2, [%o4] ASI_DMMU 225 + sethi %hi(KERNBASE), %o4 226 + flush %o4 227 + wrpr %g0, 0, %tl 228 + retl 229 + wrpr %g7, 0x0, %pstate 230 + 233 231 __cheetah_flush_tlb_pending: /* 27 insns */ 234 232 /* %o0 = context, %o1 = nr, %o2 = vaddrs[] */ 235 233 rdpr %pstate, %g7 ··· 318 266 ta HV_FAST_TRAP 319 267 brnz,pn %o0, __hypervisor_tlb_tl0_error 320 268 mov HV_FAST_MMU_DEMAP_CTX, %o1 269 + retl 270 + nop 271 + 272 + __hypervisor_flush_tlb_page: /* 11 insns */ 273 + /* %o0 = context, %o1 = vaddr */ 274 + mov %o0, %g2 275 + mov %o1, %o0 /* ARG0: vaddr + IMMU-bit */ 276 + mov %g2, %o1 /* ARG1: mmu context */ 277 + mov HV_MMU_ALL, %o2 /* ARG2: flags */ 278 + srlx %o0, PAGE_SHIFT, %o0 279 + sllx %o0, PAGE_SHIFT, %o0 280 + ta HV_MMU_UNMAP_ADDR_TRAP 281 + brnz,pn %o0, __hypervisor_tlb_tl0_error 282 + mov HV_MMU_UNMAP_ADDR_TRAP, %o1 321 283 retl 322 284 nop 323 285 ··· 405 339 call tlb_patch_one 406 340 mov 19, %o2 407 341 342 + sethi %hi(__flush_tlb_page), %o0 343 + or %o0, %lo(__flush_tlb_page), %o0 344 + sethi %hi(__cheetah_flush_tlb_page), %o1 345 + or %o1, %lo(__cheetah_flush_tlb_page), %o1 346 + call tlb_patch_one 347 + mov 22, %o2 348 + 408 349 sethi %hi(__flush_tlb_pending), %o0 409 350 or %o0, %lo(__flush_tlb_pending), %o0 410 351 sethi %hi(__cheetah_flush_tlb_pending), %o1 ··· 470 397 nop 471 398 nop 472 399 473 - .globl xcall_flush_tlb_pending 474 - xcall_flush_tlb_pending: /* 21 insns */ 475 - /* %g5=context, %g1=nr, %g7=vaddrs[] */ 476 - sllx %g1, 3, %g1 400 + .globl xcall_flush_tlb_page 401 + xcall_flush_tlb_page: /* 17 insns */ 402 + /* %g5=context, %g1=vaddr */ 477 403 mov PRIMARY_CONTEXT, %g4 478 404 ldxa [%g4] ASI_DMMU, %g2 479 405 srlx %g2, CTX_PGSZ1_NUC_SHIFT, %g4 ··· 480 408 or %g5, %g4, %g5 481 409 mov PRIMARY_CONTEXT, %g4 482 410 stxa %g5, [%g4] ASI_DMMU 483 - 1: sub %g1, (1 << 3), %g1 484 - ldx [%g7 + %g1], %g5 485 - andcc %g5, 0x1, %g0 411 + andcc %g1, 0x1, %g0 486 412 be,pn %icc, 2f 487 - 488 - andn %g5, 0x1, %g5 413 + andn %g1, 0x1, %g5 489 414 stxa %g0, [%g5] ASI_IMMU_DEMAP 490 415 2: stxa %g0, [%g5] ASI_DMMU_DEMAP 491 416 membar #Sync 492 - brnz,pt %g1, 1b 493 - nop 494 417 stxa %g2, [%g4] ASI_DMMU 495 418 retry 419 + nop 496 420 nop 497 421 498 422 .globl xcall_flush_tlb_kernel_range ··· 724 656 membar #Sync 725 657 retry 726 658 727 - .globl __hypervisor_xcall_flush_tlb_pending 728 - __hypervisor_xcall_flush_tlb_pending: /* 21 insns */ 729 - /* %g5=ctx, %g1=nr, %g7=vaddrs[], %g2,%g3,%g4,g6=scratch */ 730 - sllx %g1, 3, %g1 659 + .globl __hypervisor_xcall_flush_tlb_page 660 + __hypervisor_xcall_flush_tlb_page: /* 17 insns */ 661 + /* %g5=ctx, %g1=vaddr */ 731 662 mov %o0, %g2 732 663 mov %o1, %g3 733 664 mov %o2, %g4 734 - 1: sub %g1, (1 << 3), %g1 735 - ldx [%g7 + %g1], %o0 /* ARG0: virtual address */ 665 + mov %g1, %o0 /* ARG0: virtual address */ 736 666 mov %g5, %o1 /* ARG1: mmu context */ 737 667 mov HV_MMU_ALL, %o2 /* ARG2: flags */ 738 668 srlx %o0, PAGE_SHIFT, %o0 ··· 739 673 mov HV_MMU_UNMAP_ADDR_TRAP, %g6 740 674 brnz,a,pn %o0, __hypervisor_tlb_xcall_error 741 675 mov %o0, %g5 742 - brnz,pt %g1, 1b 743 - nop 744 676 mov %g2, %o0 745 677 mov %g3, %o1 746 678 mov %g4, %o2 ··· 821 757 call tlb_patch_one 822 758 mov 10, %o2 823 759 760 + sethi %hi(__flush_tlb_page), %o0 761 + or %o0, %lo(__flush_tlb_page), %o0 762 + sethi %hi(__hypervisor_flush_tlb_page), %o1 763 + or %o1, %lo(__hypervisor_flush_tlb_page), %o1 764 + call tlb_patch_one 765 + mov 11, %o2 766 + 824 767 sethi %hi(__flush_tlb_pending), %o0 825 768 or %o0, %lo(__flush_tlb_pending), %o0 826 769 sethi %hi(__hypervisor_flush_tlb_pending), %o1 ··· 859 788 call tlb_patch_one 860 789 mov 21, %o2 861 790 862 - sethi %hi(xcall_flush_tlb_pending), %o0 863 - or %o0, %lo(xcall_flush_tlb_pending), %o0 864 - sethi %hi(__hypervisor_xcall_flush_tlb_pending), %o1 865 - or %o1, %lo(__hypervisor_xcall_flush_tlb_pending), %o1 791 + sethi %hi(xcall_flush_tlb_page), %o0 792 + or %o0, %lo(xcall_flush_tlb_page), %o0 793 + sethi %hi(__hypervisor_xcall_flush_tlb_page), %o1 794 + or %o1, %lo(__hypervisor_xcall_flush_tlb_page), %o1 866 795 call tlb_patch_one 867 - mov 21, %o2 796 + mov 17, %o2 868 797 869 798 sethi %hi(xcall_flush_tlb_kernel_range), %o0 870 799 or %o0, %lo(xcall_flush_tlb_kernel_range), %o0
+9 -1
arch/tile/include/asm/irqflags.h
··· 40 40 #include <asm/percpu.h> 41 41 #include <arch/spr_def.h> 42 42 43 - /* Set and clear kernel interrupt masks. */ 43 + /* 44 + * Set and clear kernel interrupt masks. 45 + * 46 + * NOTE: __insn_mtspr() is a compiler builtin marked as a memory 47 + * clobber. We rely on it being equivalent to a compiler barrier in 48 + * this code since arch_local_irq_save() and friends must act as 49 + * compiler barriers. This compiler semantic is baked into enough 50 + * places that the compiler will maintain it going forward. 51 + */ 44 52 #if CHIP_HAS_SPLIT_INTR_MASK() 45 53 #if INT_PERF_COUNT < 32 || INT_AUX_PERF_COUNT < 32 || INT_MEM_ERROR >= 32 46 54 # error Fix assumptions about which word various interrupts are in
+1
arch/x86/Kconfig
··· 1549 1549 config EFI 1550 1550 bool "EFI runtime service support" 1551 1551 depends on ACPI 1552 + select UCS2_STRING 1552 1553 ---help--- 1553 1554 This enables the kernel to use EFI runtime services that are 1554 1555 available (such as the EFI variable services).
+2 -3
arch/x86/boot/compressed/Makefile
··· 4 4 # create a compressed vmlinux image from the original vmlinux 5 5 # 6 6 7 - targets := vmlinux.lds vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma vmlinux.bin.xz vmlinux.bin.lzo head_$(BITS).o misc.o string.o cmdline.o early_serial_console.o piggy.o 7 + targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma vmlinux.bin.xz vmlinux.bin.lzo 8 8 9 9 KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2 10 10 KBUILD_CFLAGS += -fno-strict-aliasing -fPIC ··· 29 29 $(obj)/piggy.o 30 30 31 31 $(obj)/eboot.o: KBUILD_CFLAGS += -fshort-wchar -mno-red-zone 32 - $(obj)/efi_stub_$(BITS).o: KBUILD_CLFAGS += -fshort-wchar -mno-red-zone 33 32 34 33 ifeq ($(CONFIG_EFI_STUB), y) 35 34 VMLINUX_OBJS += $(obj)/eboot.o $(obj)/efi_stub_$(BITS).o ··· 42 43 $(obj)/vmlinux.bin: vmlinux FORCE 43 44 $(call if_changed,objcopy) 44 45 45 - targets += vmlinux.bin.all vmlinux.relocs 46 + targets += $(patsubst $(obj)/%,%,$(VMLINUX_OBJS)) vmlinux.bin.all vmlinux.relocs 46 47 47 48 CMD_RELOCS = arch/x86/tools/relocs 48 49 quiet_cmd_relocs = RELOCS $@
+47
arch/x86/boot/compressed/eboot.c
··· 251 251 *size = len; 252 252 } 253 253 254 + static efi_status_t setup_efi_vars(struct boot_params *params) 255 + { 256 + struct setup_data *data; 257 + struct efi_var_bootdata *efidata; 258 + u64 store_size, remaining_size, var_size; 259 + efi_status_t status; 260 + 261 + if (!sys_table->runtime->query_variable_info) 262 + return EFI_UNSUPPORTED; 263 + 264 + data = (struct setup_data *)(unsigned long)params->hdr.setup_data; 265 + 266 + while (data && data->next) 267 + data = (struct setup_data *)(unsigned long)data->next; 268 + 269 + status = efi_call_phys4(sys_table->runtime->query_variable_info, 270 + EFI_VARIABLE_NON_VOLATILE | 271 + EFI_VARIABLE_BOOTSERVICE_ACCESS | 272 + EFI_VARIABLE_RUNTIME_ACCESS, &store_size, 273 + &remaining_size, &var_size); 274 + 275 + if (status != EFI_SUCCESS) 276 + return status; 277 + 278 + status = efi_call_phys3(sys_table->boottime->allocate_pool, 279 + EFI_LOADER_DATA, sizeof(*efidata), &efidata); 280 + 281 + if (status != EFI_SUCCESS) 282 + return status; 283 + 284 + efidata->data.type = SETUP_EFI_VARS; 285 + efidata->data.len = sizeof(struct efi_var_bootdata) - 286 + sizeof(struct setup_data); 287 + efidata->data.next = 0; 288 + efidata->store_size = store_size; 289 + efidata->remaining_size = remaining_size; 290 + efidata->max_var_size = var_size; 291 + 292 + if (data) 293 + data->next = (unsigned long)efidata; 294 + else 295 + params->hdr.setup_data = (unsigned long)efidata; 296 + 297 + } 298 + 254 299 static efi_status_t setup_efi_pci(struct boot_params *params) 255 300 { 256 301 efi_pci_io_protocol *pci; ··· 1201 1156 goto fail; 1202 1157 1203 1158 setup_graphics(boot_params); 1159 + 1160 + setup_efi_vars(boot_params); 1204 1161 1205 1162 setup_efi_pci(boot_params); 1206 1163
+7
arch/x86/include/asm/efi.h
··· 102 102 extern void efi_unmap_memmap(void); 103 103 extern void efi_memory_uc(u64 addr, unsigned long size); 104 104 105 + struct efi_var_bootdata { 106 + struct setup_data data; 107 + u64 store_size; 108 + u64 remaining_size; 109 + u64 max_var_size; 110 + }; 111 + 105 112 #ifdef CONFIG_EFI 106 113 107 114 static inline bool efi_is_native(void)
+4 -1
arch/x86/include/asm/paravirt.h
··· 703 703 PVOP_VCALL0(pv_mmu_ops.lazy_mode.leave); 704 704 } 705 705 706 - void arch_flush_lazy_mmu_mode(void); 706 + static inline void arch_flush_lazy_mmu_mode(void) 707 + { 708 + PVOP_VCALL0(pv_mmu_ops.lazy_mode.flush); 709 + } 707 710 708 711 static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx, 709 712 phys_addr_t phys, pgprot_t flags)
+2
arch/x86/include/asm/paravirt_types.h
··· 91 91 /* Set deferred update mode, used for batching operations. */ 92 92 void (*enter)(void); 93 93 void (*leave)(void); 94 + void (*flush)(void); 94 95 }; 95 96 96 97 struct pv_time_ops { ··· 680 679 681 680 void paravirt_enter_lazy_mmu(void); 682 681 void paravirt_leave_lazy_mmu(void); 682 + void paravirt_flush_lazy_mmu(void); 683 683 684 684 void _paravirt_nop(void); 685 685 u32 _paravirt_ident_32(u32);
+2 -2
arch/x86/include/asm/syscall.h
··· 29 29 */ 30 30 static inline int syscall_get_nr(struct task_struct *task, struct pt_regs *regs) 31 31 { 32 - return regs->orig_ax & __SYSCALL_MASK; 32 + return regs->orig_ax; 33 33 } 34 34 35 35 static inline void syscall_rollback(struct task_struct *task, 36 36 struct pt_regs *regs) 37 37 { 38 - regs->ax = regs->orig_ax & __SYSCALL_MASK; 38 + regs->ax = regs->orig_ax; 39 39 } 40 40 41 41 static inline long syscall_get_error(struct task_struct *task,
+1 -1
arch/x86/include/asm/tlb.h
··· 7 7 8 8 #define tlb_flush(tlb) \ 9 9 { \ 10 - if (tlb->fullmm == 0) \ 10 + if (!tlb->fullmm && !tlb->need_flush_all) \ 11 11 flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end, 0UL); \ 12 12 else \ 13 13 flush_tlb_mm_range(tlb->mm, 0UL, TLB_FLUSH_ALL, 0UL); \
+1
arch/x86/include/uapi/asm/bootparam.h
··· 6 6 #define SETUP_E820_EXT 1 7 7 #define SETUP_DTB 2 8 8 #define SETUP_PCI 3 9 + #define SETUP_EFI_VARS 4 9 10 10 11 /* ram_size flags */ 11 12 #define RAMDISK_IMAGE_START_MASK 0x07FF
+5 -13
arch/x86/kernel/cpu/mshyperv.c
··· 35 35 if (!boot_cpu_has(X86_FEATURE_HYPERVISOR)) 36 36 return false; 37 37 38 - /* 39 - * Xen emulates Hyper-V to support enlightened Windows. 40 - * Check to see first if we are on a Xen Hypervisor. 41 - */ 42 - if (xen_cpuid_base()) 43 - return false; 44 - 45 38 cpuid(HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS, 46 39 &eax, &hyp_signature[0], &hyp_signature[1], &hyp_signature[2]); 47 40 ··· 75 82 76 83 if (ms_hyperv.features & HV_X64_MSR_TIME_REF_COUNT_AVAILABLE) 77 84 clocksource_register_hz(&hyperv_cs, NSEC_PER_SEC/100); 78 - #if IS_ENABLED(CONFIG_HYPERV) 79 - /* 80 - * Setup the IDT for hypervisor callback. 81 - */ 82 - alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, hyperv_callback_vector); 83 - #endif 84 85 } 85 86 86 87 const __refconst struct hypervisor_x86 x86_hyper_ms_hyperv = { ··· 90 103 91 104 void hv_register_vmbus_handler(int irq, irq_handler_t handler) 92 105 { 106 + /* 107 + * Setup the IDT for hypervisor callback. 108 + */ 109 + alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, hyperv_callback_vector); 110 + 93 111 vmbus_irq = irq; 94 112 vmbus_isr = handler; 95 113 }
+16 -4
arch/x86/kernel/cpu/perf_event_intel.c
··· 153 153 }; 154 154 155 155 static struct extra_reg intel_snb_extra_regs[] __read_mostly = { 156 - INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0x3fffffffffull, RSP_0), 157 - INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0x3fffffffffull, RSP_1), 156 + INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0x3f807f8fffull, RSP_0), 157 + INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0x3f807f8fffull, RSP_1), 158 + EVENT_EXTRA_END 159 + }; 160 + 161 + static struct extra_reg intel_snbep_extra_regs[] __read_mostly = { 162 + INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0x3fffff8fffull, RSP_0), 163 + INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0x3fffff8fffull, RSP_1), 158 164 EVENT_EXTRA_END 159 165 }; 160 166 ··· 2103 2097 x86_pmu.event_constraints = intel_snb_event_constraints; 2104 2098 x86_pmu.pebs_constraints = intel_snb_pebs_event_constraints; 2105 2099 x86_pmu.pebs_aliases = intel_pebs_aliases_snb; 2106 - x86_pmu.extra_regs = intel_snb_extra_regs; 2100 + if (boot_cpu_data.x86_model == 45) 2101 + x86_pmu.extra_regs = intel_snbep_extra_regs; 2102 + else 2103 + x86_pmu.extra_regs = intel_snb_extra_regs; 2107 2104 /* all extra regs are per-cpu when HT is on */ 2108 2105 x86_pmu.er_flags |= ERF_HAS_RSP_1; 2109 2106 x86_pmu.er_flags |= ERF_NO_HT_SHARING; ··· 2132 2123 x86_pmu.event_constraints = intel_ivb_event_constraints; 2133 2124 x86_pmu.pebs_constraints = intel_ivb_pebs_event_constraints; 2134 2125 x86_pmu.pebs_aliases = intel_pebs_aliases_snb; 2135 - x86_pmu.extra_regs = intel_snb_extra_regs; 2126 + if (boot_cpu_data.x86_model == 62) 2127 + x86_pmu.extra_regs = intel_snbep_extra_regs; 2128 + else 2129 + x86_pmu.extra_regs = intel_snb_extra_regs; 2136 2130 /* all extra regs are per-cpu when HT is on */ 2137 2131 x86_pmu.er_flags |= ERF_HAS_RSP_1; 2138 2132 x86_pmu.er_flags |= ERF_NO_HT_SHARING;
+2 -1
arch/x86/kernel/cpu/perf_event_intel_ds.c
··· 314 314 if (top <= at) 315 315 return 0; 316 316 317 + memset(&regs, 0, sizeof(regs)); 318 + 317 319 ds->bts_index = ds->bts_buffer_base; 318 320 319 321 perf_sample_data_init(&data, 0, event->hw.last_period); 320 - regs.ip = 0; 321 322 322 323 /* 323 324 * Prepare a generic sample, i.e. fill in the invariant fields.
+31 -7
arch/x86/kernel/microcode_core_early.c
··· 45 45 u32 eax = 0x00000000; 46 46 u32 ebx, ecx = 0, edx; 47 47 48 - if (!have_cpuid_p()) 49 - return X86_VENDOR_UNKNOWN; 50 - 51 48 native_cpuid(&eax, &ebx, &ecx, &edx); 52 49 53 50 if (CPUID_IS(CPUID_INTEL1, CPUID_INTEL2, CPUID_INTEL3, ebx, ecx, edx)) ··· 56 59 return X86_VENDOR_UNKNOWN; 57 60 } 58 61 62 + static int __cpuinit x86_family(void) 63 + { 64 + u32 eax = 0x00000001; 65 + u32 ebx, ecx = 0, edx; 66 + int x86; 67 + 68 + native_cpuid(&eax, &ebx, &ecx, &edx); 69 + 70 + x86 = (eax >> 8) & 0xf; 71 + if (x86 == 15) 72 + x86 += (eax >> 20) & 0xff; 73 + 74 + return x86; 75 + } 76 + 59 77 void __init load_ucode_bsp(void) 60 78 { 61 - int vendor = x86_vendor(); 79 + int vendor, x86; 62 80 63 - if (vendor == X86_VENDOR_INTEL) 81 + if (!have_cpuid_p()) 82 + return; 83 + 84 + vendor = x86_vendor(); 85 + x86 = x86_family(); 86 + 87 + if (vendor == X86_VENDOR_INTEL && x86 >= 6) 64 88 load_ucode_intel_bsp(); 65 89 } 66 90 67 91 void __cpuinit load_ucode_ap(void) 68 92 { 69 - int vendor = x86_vendor(); 93 + int vendor, x86; 70 94 71 - if (vendor == X86_VENDOR_INTEL) 95 + if (!have_cpuid_p()) 96 + return; 97 + 98 + vendor = x86_vendor(); 99 + x86 = x86_family(); 100 + 101 + if (vendor == X86_VENDOR_INTEL && x86 >= 6) 72 102 load_ucode_intel_ap(); 73 103 }
+13 -12
arch/x86/kernel/paravirt.c
··· 263 263 leave_lazy(PARAVIRT_LAZY_MMU); 264 264 } 265 265 266 + void paravirt_flush_lazy_mmu(void) 267 + { 268 + preempt_disable(); 269 + 270 + if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) { 271 + arch_leave_lazy_mmu_mode(); 272 + arch_enter_lazy_mmu_mode(); 273 + } 274 + 275 + preempt_enable(); 276 + } 277 + 266 278 void paravirt_start_context_switch(struct task_struct *prev) 267 279 { 268 280 BUG_ON(preemptible()); ··· 302 290 return PARAVIRT_LAZY_NONE; 303 291 304 292 return this_cpu_read(paravirt_lazy_mode); 305 - } 306 - 307 - void arch_flush_lazy_mmu_mode(void) 308 - { 309 - preempt_disable(); 310 - 311 - if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) { 312 - arch_leave_lazy_mmu_mode(); 313 - arch_enter_lazy_mmu_mode(); 314 - } 315 - 316 - preempt_enable(); 317 293 } 318 294 319 295 struct pv_info pv_info = { ··· 475 475 .lazy_mode = { 476 476 .enter = paravirt_nop, 477 477 .leave = paravirt_nop, 478 + .flush = paravirt_nop, 478 479 }, 479 480 480 481 .set_fixmap = native_set_fixmap,
+37 -8
arch/x86/kernel/setup.c
··· 507 507 /*
508 508 * Keep the crash kernel below this limit. On 32 bits earlier kernels
509 509 * would limit the kernel to the low 512 MiB due to mapping restrictions.
510 + * On 64bit, old kexec-tools need to under 896MiB.
510 511 */
511 512 #ifdef CONFIG_X86_32
512 - # define CRASH_KERNEL_ADDR_MAX (512 << 20)
513 + # define CRASH_KERNEL_ADDR_LOW_MAX (512 << 20)
514 + # define CRASH_KERNEL_ADDR_HIGH_MAX (512 << 20)
513 515 #else
514 - # define CRASH_KERNEL_ADDR_MAX MAXMEM
516 + # define CRASH_KERNEL_ADDR_LOW_MAX (896UL<<20)
517 + # define CRASH_KERNEL_ADDR_HIGH_MAX MAXMEM
515 518 #endif
516 519
517 520 static void __init reserve_crashkernel_low(void)
··· 524 521 unsigned long long low_base = 0, low_size = 0;
525 522 unsigned long total_low_mem;
526 523 unsigned long long base;
524 + bool auto_set = false;
527 525 int ret;
528 526
529 527 total_low_mem = memblock_mem_size(1UL<<(32-PAGE_SHIFT));
528 + /* crashkernel=Y,low */
530 529 ret = parse_crashkernel_low(boot_command_line, total_low_mem,
531 530 &low_size, &base);
532 - if (ret != 0 || low_size <= 0)
533 - return;
531 + if (ret != 0) {
532 + /*
533 + * two parts from lib/swiotlb.c:
534 + * swiotlb size: user specified with swiotlb= or default.
535 + * swiotlb overflow buffer: now is hardcoded to 32k.
536 + * We round it to 8M for other buffers that
537 + * may need to stay low too.
538 + */
539 + low_size = swiotlb_size_or_default() + (8UL<<20);
540 + auto_set = true;
541 + } else {
542 + /* passed with crashkernel=0,low ? */
543 + if (!low_size)
544 + return;
545 + }
534 546
535 547 low_base = memblock_find_in_range(low_size, (1ULL<<32),
536 548 low_size, alignment);
537 549
538 550 if (!low_base) {
539 - pr_info("crashkernel low reservation failed - No suitable area found.\n");
551 + if (!auto_set)
552 + pr_info("crashkernel low reservation failed - No suitable area found.\n");
540 553
541 554 return;
542 555 }
··· 573 554 const unsigned long long alignment = 16<<20; /* 16M */
574 555 unsigned long long total_mem;
575 556 unsigned long long crash_size, crash_base;
557 + bool high = false;
576 558 int ret;
577 559
578 560 total_mem = memblock_phys_mem_size();
579 561
562 + /* crashkernel=XM */
580 563 ret = parse_crashkernel(boot_command_line, total_mem,
581 564 &crash_size, &crash_base);
582 - if (ret != 0 || crash_size <= 0)
583 - return;
565 + if (ret != 0 || crash_size <= 0) {
566 + /* crashkernel=X,high */
567 + ret = parse_crashkernel_high(boot_command_line, total_mem,
568 + &crash_size, &crash_base);
569 + if (ret != 0 || crash_size <= 0)
570 + return;
571 + high = true;
572 + }
584 573
585 574 /* 0 means: find the address automatically */
586 575 if (crash_base <= 0) {
··· 596 569 * kexec want bzImage is below CRASH_KERNEL_ADDR_MAX
597 570 */
598 571 crash_base = memblock_find_in_range(alignment,
599 - CRASH_KERNEL_ADDR_MAX, crash_size, alignment);
572 + high ? CRASH_KERNEL_ADDR_HIGH_MAX :
573 + CRASH_KERNEL_ADDR_LOW_MAX,
574 + crash_size, alignment);
600 575
601 576 if (!crash_base) {
602 577 pr_info("crashkernel reservation failed - No suitable area found.\n");
+1 -1
arch/x86/kvm/lapic.c
··· 1857 1857 if (!pv_eoi_enabled(vcpu)) 1858 1858 return 0; 1859 1859 return kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.pv_eoi.data, 1860 - addr); 1860 + addr, sizeof(u8)); 1861 1861 } 1862 1862 1863 1863 void kvm_lapic_init(void)
+6 -7
arch/x86/kvm/x86.c
··· 1823 1823 return 0; 1824 1824 } 1825 1825 1826 - if (kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.apf.data, gpa)) 1826 + if (kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.apf.data, gpa, 1827 + sizeof(u32))) 1827 1828 return 1; 1828 1829 1829 1830 vcpu->arch.apf.send_user_only = !(data & KVM_ASYNC_PF_SEND_ALWAYS); ··· 1953 1952 1954 1953 gpa_offset = data & ~(PAGE_MASK | 1); 1955 1954 1956 - /* Check that the address is 32-byte aligned. */ 1957 - if (gpa_offset & (sizeof(struct pvclock_vcpu_time_info) - 1)) 1958 - break; 1959 - 1960 1955 if (kvm_gfn_to_hva_cache_init(vcpu->kvm, 1961 - &vcpu->arch.pv_time, data & ~1ULL)) 1956 + &vcpu->arch.pv_time, data & ~1ULL, 1957 + sizeof(struct pvclock_vcpu_time_info))) 1962 1958 vcpu->arch.pv_time_enabled = false; 1963 1959 else 1964 1960 vcpu->arch.pv_time_enabled = true; ··· 1975 1977 return 1; 1976 1978 1977 1979 if (kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.st.stime, 1978 - data & KVM_STEAL_VALID_BITS)) 1980 + data & KVM_STEAL_VALID_BITS, 1981 + sizeof(struct kvm_steal_time))) 1979 1982 return 1; 1980 1983 1981 1984 vcpu->arch.st.msr_val = data;
+1
arch/x86/lguest/boot.c
··· 1334 1334 pv_mmu_ops.read_cr3 = lguest_read_cr3; 1335 1335 pv_mmu_ops.lazy_mode.enter = paravirt_enter_lazy_mmu; 1336 1336 pv_mmu_ops.lazy_mode.leave = lguest_leave_lazy_mmu_mode; 1337 + pv_mmu_ops.lazy_mode.flush = paravirt_flush_lazy_mmu; 1337 1338 pv_mmu_ops.pte_update = lguest_pte_update; 1338 1339 pv_mmu_ops.pte_update_defer = lguest_pte_update; 1339 1340
+4 -2
arch/x86/mm/fault.c
··· 378 378 if (pgd_none(*pgd_ref)) 379 379 return -1; 380 380 381 - if (pgd_none(*pgd)) 381 + if (pgd_none(*pgd)) { 382 382 set_pgd(pgd, *pgd_ref); 383 - else 383 + arch_flush_lazy_mmu_mode(); 384 + } else { 384 385 BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref)); 386 + } 385 387 386 388 /* 387 389 * Below here mismatches are bugs because these lower tables
+1 -1
arch/x86/mm/pageattr-test.c
··· 68 68 s->gpg++; 69 69 i += GPS/PAGE_SIZE; 70 70 } else if (level == PG_LEVEL_2M) { 71 - if (!(pte_val(*pte) & _PAGE_PSE)) { 71 + if ((pte_val(*pte) & _PAGE_PRESENT) && !(pte_val(*pte) & _PAGE_PSE)) { 72 72 printk(KERN_ERR 73 73 "%lx level %d but not PSE %Lx\n", 74 74 addr, level, (u64)pte_val(*pte));
+7 -5
arch/x86/mm/pageattr.c
··· 467 467 * We are safe now. Check whether the new pgprot is the same: 468 468 */ 469 469 old_pte = *kpte; 470 - old_prot = new_prot = req_prot = pte_pgprot(old_pte); 470 + old_prot = req_prot = pte_pgprot(old_pte); 471 471 472 472 pgprot_val(req_prot) &= ~pgprot_val(cpa->mask_clr); 473 473 pgprot_val(req_prot) |= pgprot_val(cpa->mask_set); ··· 478 478 * a non present pmd. The canon_pgprot will clear _PAGE_GLOBAL 479 479 * for the ancient hardware that doesn't support it. 480 480 */ 481 - if (pgprot_val(new_prot) & _PAGE_PRESENT) 482 - pgprot_val(new_prot) |= _PAGE_PSE | _PAGE_GLOBAL; 481 + if (pgprot_val(req_prot) & _PAGE_PRESENT) 482 + pgprot_val(req_prot) |= _PAGE_PSE | _PAGE_GLOBAL; 483 483 else 484 - pgprot_val(new_prot) &= ~(_PAGE_PSE | _PAGE_GLOBAL); 484 + pgprot_val(req_prot) &= ~(_PAGE_PSE | _PAGE_GLOBAL); 485 485 486 - new_prot = canon_pgprot(new_prot); 486 + req_prot = canon_pgprot(req_prot); 487 487 488 488 /* 489 489 * old_pte points to the large page base address. So we need ··· 1413 1413 * but that can deadlock->flush only current cpu: 1414 1414 */ 1415 1415 __flush_tlb_all(); 1416 + 1417 + arch_flush_lazy_mmu_mode(); 1416 1418 } 1417 1419 1418 1420 #ifdef CONFIG_HIBERNATION
+7
arch/x86/mm/pgtable.c
··· 58 58 void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) 59 59 { 60 60 paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT); 61 + /* 62 + * NOTE! For PAE, any changes to the top page-directory-pointer-table 63 + * entries need a full cr3 reload to flush. 64 + */ 65 + #ifdef CONFIG_X86_PAE 66 + tlb->need_flush_all = 1; 67 + #endif 61 68 tlb_remove_page(tlb, virt_to_page(pmd)); 62 69 } 63 70
+163 -5
arch/x86/platform/efi/efi.c
··· 41 41 #include <linux/io.h> 42 42 #include <linux/reboot.h> 43 43 #include <linux/bcd.h> 44 + #include <linux/ucs2_string.h> 44 45 45 46 #include <asm/setup.h> 46 47 #include <asm/efi.h> ··· 51 50 #include <asm/x86_init.h> 52 51 53 52 #define EFI_DEBUG 1 53 + 54 + /* 55 + * There's some additional metadata associated with each 56 + * variable. Intel's reference implementation is 60 bytes - bump that 57 + * to account for potential alignment constraints 58 + */ 59 + #define VAR_METADATA_SIZE 64 54 60 55 61 struct efi __read_mostly efi = { 56 62 .mps = EFI_INVALID_TABLE_ADDR, ··· 76 68 77 69 static struct efi efi_phys __initdata; 78 70 static efi_system_table_t efi_systab __initdata; 71 + 72 + static u64 efi_var_store_size; 73 + static u64 efi_var_remaining_size; 74 + static u64 efi_var_max_var_size; 75 + static u64 boot_used_size; 76 + static u64 boot_var_size; 77 + static u64 active_size; 79 78 80 79 unsigned long x86_efi_facility; 81 80 ··· 112 97 return 0; 113 98 } 114 99 early_param("add_efi_memmap", setup_add_efi_memmap); 100 + 101 + static bool efi_no_storage_paranoia; 102 + 103 + static int __init setup_storage_paranoia(char *arg) 104 + { 105 + efi_no_storage_paranoia = true; 106 + return 0; 107 + } 108 + early_param("efi_no_storage_paranoia", setup_storage_paranoia); 115 109 116 110 117 111 static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc) ··· 186 162 efi_char16_t *name, 187 163 efi_guid_t *vendor) 188 164 { 189 - return efi_call_virt3(get_next_variable, 190 - name_size, name, vendor); 165 + efi_status_t status; 166 + static bool finished = false; 167 + static u64 var_size; 168 + 169 + status = efi_call_virt3(get_next_variable, 170 + name_size, name, vendor); 171 + 172 + if (status == EFI_NOT_FOUND) { 173 + finished = true; 174 + if (var_size < boot_used_size) { 175 + boot_var_size = boot_used_size - var_size; 176 + active_size += boot_var_size; 177 + } else { 178 + printk(KERN_WARNING FW_BUG "efi: Inconsistent initial sizes\n"); 
179 + }
180 + }
181 +
182 + if (boot_used_size && !finished) {
183 + unsigned long size;
184 + u32 attr;
185 + efi_status_t s;
186 + void *tmp;
187 +
188 + s = virt_efi_get_variable(name, vendor, &attr, &size, NULL);
189 +
190 + if (s != EFI_BUFFER_TOO_SMALL || !size)
191 + return status;
192 +
193 + tmp = kmalloc(size, GFP_ATOMIC);
194 +
195 + if (!tmp)
196 + return status;
197 +
198 + s = virt_efi_get_variable(name, vendor, &attr, &size, tmp);
199 +
200 + if (s == EFI_SUCCESS && (attr & EFI_VARIABLE_NON_VOLATILE)) {
201 + var_size += size;
202 + var_size += ucs2_strsize(name, 1024);
203 + active_size += size;
204 + active_size += VAR_METADATA_SIZE;
205 + active_size += ucs2_strsize(name, 1024);
206 + }
207 +
208 + kfree(tmp);
209 + }
210 +
211 + return status;
191 212 }
192 213
193 214 static efi_status_t virt_efi_set_variable(efi_char16_t *name,
··· 241 172 unsigned long data_size,
242 173 void *data)
243 174 {
244 - return efi_call_virt5(set_variable,
245 - name, vendor, attr,
246 - data_size, data);
175 + efi_status_t status;
176 + u32 orig_attr = 0;
177 + unsigned long orig_size = 0;
178 +
179 + status = virt_efi_get_variable(name, vendor, &orig_attr, &orig_size,
180 + NULL);
181 +
182 + if (status != EFI_BUFFER_TOO_SMALL)
183 + orig_size = 0;
184 +
185 + status = efi_call_virt5(set_variable,
186 + name, vendor, attr,
187 + data_size, data);
188 +
189 + if (status == EFI_SUCCESS) {
190 + if (orig_size) {
191 + active_size -= orig_size;
192 + active_size -= ucs2_strsize(name, 1024);
193 + active_size -= VAR_METADATA_SIZE;
194 + }
195 + if (data_size) {
196 + active_size += data_size;
197 + active_size += ucs2_strsize(name, 1024);
198 + active_size += VAR_METADATA_SIZE;
199 + }
200 + }
201 +
202 + return status;
247 203 }
248 204
249 205 static efi_status_t virt_efi_query_variable_info(u32 attr,
··· 776 682 char vendor[100] = "unknown";
777 683 int i = 0;
778 684 void *tmp;
685 + struct setup_data *data;
686 + struct efi_var_bootdata *efi_var_data;
687 + u64 pa_data;
779 688
780 689 #ifdef CONFIG_X86_32
781 690 if (boot_params.efi_info.efi_systab_hi ||
··· 795 698
796 699 if (efi_systab_init(efi_phys.systab))
797 700 return;
701 +
702 + pa_data = boot_params.hdr.setup_data;
703 + while (pa_data) {
704 + data = early_ioremap(pa_data, sizeof(*efi_var_data));
705 + if (data->type == SETUP_EFI_VARS) {
706 + efi_var_data = (struct efi_var_bootdata *)data;
707 +
708 + efi_var_store_size = efi_var_data->store_size;
709 + efi_var_remaining_size = efi_var_data->remaining_size;
710 + efi_var_max_var_size = efi_var_data->max_var_size;
711 + }
712 + pa_data = data->next;
713 + early_iounmap(data, sizeof(*efi_var_data));
714 + }
715 +
716 + boot_used_size = efi_var_store_size - efi_var_remaining_size;
798 717
799 718 set_bit(EFI_SYSTEM_TABLES, &x86_efi_facility);
··· 1112 999 }
1113 1000 return 0;
1114 1001 }
1002 +
1003 + /*
1004 + * Some firmware has serious problems when using more than 50% of the EFI
1005 + * variable store, i.e. it triggers bugs that can brick machines. Ensure that
1006 + * we never use more than this safe limit.
1007 + *
1008 + * Return EFI_SUCCESS if it is safe to write 'size' bytes to the variable
1009 + * store.
1010 + */
1011 + efi_status_t efi_query_variable_store(u32 attributes, unsigned long size)
1012 + {
1013 + efi_status_t status;
1014 + u64 storage_size, remaining_size, max_size;
1015 +
1016 + status = efi.query_variable_info(attributes, &storage_size,
1017 + &remaining_size, &max_size);
1018 + if (status != EFI_SUCCESS)
1019 + return status;
1020 +
1021 + if (!max_size && remaining_size > size)
1022 + printk_once(KERN_ERR FW_BUG "Broken EFI implementation"
1023 + " is returning MaxVariableSize=0\n");
1024 + /*
1025 + * Some firmware implementations refuse to boot if there's insufficient
1026 + * space in the variable store. We account for that by refusing the
1027 + * write if permitting it would reduce the available space to under
1028 + * 50%. However, some firmware won't reclaim variable space until
1029 + * after the used (not merely the actively used) space drops below
1030 + * a threshold. We can approximate that case with the value calculated
1031 + * above. If both the firmware and our calculations indicate that the
1032 + * available space would drop below 50%, refuse the write.
1033 + */
1034 +
1035 + if (!storage_size || size > remaining_size ||
1036 + (max_size && size > max_size))
1037 + return EFI_OUT_OF_RESOURCES;
1038 +
1039 + if (!efi_no_storage_paranoia &&
1040 + ((active_size + size + VAR_METADATA_SIZE > storage_size / 2) &&
1041 + (remaining_size - size < storage_size / 2)))
1042 + return EFI_OUT_OF_RESOURCES;
1043 +
1044 + return EFI_SUCCESS;
1045 + }
1046 + EXPORT_SYMBOL_GPL(efi_query_variable_store);
+9 -4
arch/x86/xen/mmu.c
··· 1748 1748 } 1749 1749 1750 1750 /* Set the page permissions on an identity-mapped pages */ 1751 - static void set_page_prot(void *addr, pgprot_t prot) 1751 + static void set_page_prot_flags(void *addr, pgprot_t prot, unsigned long flags) 1752 1752 { 1753 1753 unsigned long pfn = __pa(addr) >> PAGE_SHIFT; 1754 1754 pte_t pte = pfn_pte(pfn, prot); 1755 1755 1756 - if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0)) 1756 + if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, flags)) 1757 1757 BUG(); 1758 + } 1759 + static void set_page_prot(void *addr, pgprot_t prot) 1760 + { 1761 + return set_page_prot_flags(addr, prot, UVMF_NONE); 1758 1762 } 1759 1763 #ifdef CONFIG_X86_32 1760 1764 static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn) ··· 1843 1839 unsigned long addr) 1844 1840 { 1845 1841 if (*pt_base == PFN_DOWN(__pa(addr))) { 1846 - set_page_prot((void *)addr, PAGE_KERNEL); 1842 + set_page_prot_flags((void *)addr, PAGE_KERNEL, UVMF_INVLPG); 1847 1843 clear_page((void *)addr); 1848 1844 (*pt_base)++; 1849 1845 } 1850 1846 if (*pt_end == PFN_DOWN(__pa(addr))) { 1851 - set_page_prot((void *)addr, PAGE_KERNEL); 1847 + set_page_prot_flags((void *)addr, PAGE_KERNEL, UVMF_INVLPG); 1852 1848 clear_page((void *)addr); 1853 1849 (*pt_end)--; 1854 1850 } ··· 2200 2196 .lazy_mode = { 2201 2197 .enter = paravirt_enter_lazy_mmu, 2202 2198 .leave = xen_leave_lazy_mmu, 2199 + .flush = paravirt_flush_lazy_mmu, 2203 2200 }, 2204 2201 2205 2202 .set_fixmap = xen_set_fixmap,
+1
block/blk-core.c
··· 39 39 40 40 EXPORT_TRACEPOINT_SYMBOL_GPL(block_bio_remap); 41 41 EXPORT_TRACEPOINT_SYMBOL_GPL(block_rq_remap); 42 + EXPORT_TRACEPOINT_SYMBOL_GPL(block_bio_complete); 42 43 EXPORT_TRACEPOINT_SYMBOL_GPL(block_unplug); 43 44 44 45 DEFINE_IDA(blk_queue_ida);
+2
block/blk-sysfs.c
··· 229 229 unsigned long val; \ 230 230 ssize_t ret; \ 231 231 ret = queue_var_store(&val, page, count); \ 232 + if (ret < 0) \ 233 + return ret; \ 232 234 if (neg) \ 233 235 val = !val; \ 234 236 \
-1
block/partition-generic.c
··· 257 257 258 258 hd_struct_put(part); 259 259 } 260 - EXPORT_SYMBOL(delete_partition); 261 260 262 261 static ssize_t whole_disk_show(struct device *dev, 263 262 struct device_attribute *attr, char *buf)
+14 -3
crypto/gcm.c
··· 44 44 45 45 struct crypto_rfc4543_req_ctx { 46 46 u8 auth_tag[16]; 47 + u8 assocbuf[32]; 47 48 struct scatterlist cipher[1]; 48 49 struct scatterlist payload[2]; 49 50 struct scatterlist assoc[2]; ··· 1134 1133 scatterwalk_crypto_chain(payload, dst, vdst == req->iv + 8, 2); 1135 1134 assoclen += 8 + req->cryptlen - (enc ? 0 : authsize); 1136 1135 1137 - sg_init_table(assoc, 2); 1138 - sg_set_page(assoc, sg_page(req->assoc), req->assoc->length, 1139 - req->assoc->offset); 1136 + if (req->assoc->length == req->assoclen) { 1137 + sg_init_table(assoc, 2); 1138 + sg_set_page(assoc, sg_page(req->assoc), req->assoc->length, 1139 + req->assoc->offset); 1140 + } else { 1141 + BUG_ON(req->assoclen > sizeof(rctx->assocbuf)); 1142 + 1143 + scatterwalk_map_and_copy(rctx->assocbuf, req->assoc, 0, 1144 + req->assoclen, 0); 1145 + 1146 + sg_init_table(assoc, 2); 1147 + sg_set_buf(assoc, rctx->assocbuf, req->assoclen); 1148 + } 1140 1149 scatterwalk_crypto_chain(assoc, payload, 0, 2); 1141 1150 1142 1151 aead_request_set_tfm(subreq, ctx->child);
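The gcm.c change handles associated data that is not contiguous in the first scatterlist entry by copying it into a small bounce buffer (`assocbuf`) first. A hedged userspace sketch of that "use in place when contiguous, otherwise linearize" pattern, with hypothetical `chunk`/`linearize` names in place of scatterlists:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A multi-chunk input, loosely modelled on a scatterlist. */
struct chunk { const unsigned char *buf; size_t len; };

/* Return a pointer covering 'want' contiguous bytes: the first chunk
 * itself when it already holds everything (the fast path), otherwise
 * the data copied into 'bounce', which the caller guarantees is at
 * least 'want' bytes long. */
static const unsigned char *linearize(const struct chunk *c, size_t nchunks,
				      size_t want, unsigned char *bounce)
{
	size_t copied = 0, i;

	if (nchunks && c[0].len >= want)
		return c[0].buf;	/* already contiguous */

	for (i = 0; i < nchunks && copied < want; i++) {
		size_t n = c[i].len;

		if (n > want - copied)
			n = want - copied;
		memcpy(bounce + copied, c[i].buf, n);
		copied += n;
	}
	return bounce;
}
```

As in the patch, the bounce buffer is fixed-size, so the caller must bound the input length up front (the kernel code does this with `BUG_ON(req->assoclen > sizeof(rctx->assocbuf))`).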
+37 -39
drivers/acpi/pci_root.c
··· 415 415 struct acpi_pci_root *root; 416 416 struct acpi_pci_driver *driver; 417 417 u32 flags, base_flags; 418 - bool is_osc_granted = false; 419 418 420 419 root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL); 421 420 if (!root) ··· 475 476 flags = base_flags = OSC_PCI_SEGMENT_GROUPS_SUPPORT; 476 477 acpi_pci_osc_support(root, flags); 477 478 479 + /* 480 + * TBD: Need PCI interface for enumeration/configuration of roots. 481 + */ 482 + 483 + mutex_lock(&acpi_pci_root_lock); 484 + list_add_tail(&root->node, &acpi_pci_roots); 485 + mutex_unlock(&acpi_pci_root_lock); 486 + 487 + /* 488 + * Scan the Root Bridge 489 + * -------------------- 490 + * Must do this prior to any attempt to bind the root device, as the 491 + * PCI namespace does not get created until this call is made (and 492 + * thus the root bridge's pci_dev does not exist). 493 + */ 494 + root->bus = pci_acpi_scan_root(root); 495 + if (!root->bus) { 496 + printk(KERN_ERR PREFIX 497 + "Bus %04x:%02x not present in PCI namespace\n", 498 + root->segment, (unsigned int)root->secondary.start); 499 + result = -ENODEV; 500 + goto out_del_root; 501 + } 502 + 478 503 /* Indicate support for various _OSC capabilities. */ 479 504 if (pci_ext_cfg_avail()) 480 505 flags |= OSC_EXT_PCI_CONFIG_SUPPORT; ··· 517 494 flags = base_flags; 518 495 } 519 496 } 497 + 520 498 if (!pcie_ports_disabled 521 499 && (flags & ACPI_PCIE_REQ_SUPPORT) == ACPI_PCIE_REQ_SUPPORT) { 522 500 flags = OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL ··· 538 514 status = acpi_pci_osc_control_set(device->handle, &flags, 539 515 OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL); 540 516 if (ACPI_SUCCESS(status)) { 541 - is_osc_granted = true; 542 517 dev_info(&device->dev, 543 518 "ACPI _OSC control (0x%02x) granted\n", flags); 519 + if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM) { 520 + /* 521 + * We have ASPM control, but the FADT indicates 522 + * that it's unsupported. Clear it. 
523 + */ 524 + pcie_clear_aspm(root->bus); 525 + } 544 526 } else { 545 - is_osc_granted = false; 546 527 dev_info(&device->dev, 547 528 "ACPI _OSC request failed (%s), " 548 529 "returned control mask: 0x%02x\n", 549 530 acpi_format_exception(status), flags); 531 + pr_info("ACPI _OSC control for PCIe not granted, " 532 + "disabling ASPM\n"); 533 + pcie_no_aspm(); 550 534 } 551 535 } else { 552 536 dev_info(&device->dev, 553 - "Unable to request _OSC control " 554 - "(_OSC support mask: 0x%02x)\n", flags); 555 - } 556 - 557 - /* 558 - * TBD: Need PCI interface for enumeration/configuration of roots. 559 - */ 560 - 561 - mutex_lock(&acpi_pci_root_lock); 562 - list_add_tail(&root->node, &acpi_pci_roots); 563 - mutex_unlock(&acpi_pci_root_lock); 564 - 565 - /* 566 - * Scan the Root Bridge 567 - * -------------------- 568 - * Must do this prior to any attempt to bind the root device, as the 569 - * PCI namespace does not get created until this call is made (and 570 - * thus the root bridge's pci_dev does not exist). 571 - */ 572 - root->bus = pci_acpi_scan_root(root); 573 - if (!root->bus) { 574 - printk(KERN_ERR PREFIX 575 - "Bus %04x:%02x not present in PCI namespace\n", 576 - root->segment, (unsigned int)root->secondary.start); 577 - result = -ENODEV; 578 - goto out_del_root; 579 - } 580 - 581 - /* ASPM setting */ 582 - if (is_osc_granted) { 583 - if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM) 584 - pcie_clear_aspm(root->bus); 585 - } else { 586 - pr_info("ACPI _OSC control for PCIe not granted, " 587 - "disabling ASPM\n"); 588 - pcie_no_aspm(); 537 + "Unable to request _OSC control " 538 + "(_OSC support mask: 0x%02x)\n", flags); 589 539 } 590 540 591 541 pci_acpi_add_bus_pm_notifier(device, root->bus);
+13 -1
drivers/ata/ata_piix.c
··· 150 150 tolapai_sata, 151 151 piix_pata_vmw, /* PIIX4 for VMware, spurious DMA_ERR */ 152 152 ich8_sata_snb, 153 + ich8_2port_sata_snb, 153 154 }; 154 155 155 156 struct piix_map_db { ··· 305 304 /* SATA Controller IDE (Lynx Point) */ 306 305 { 0x8086, 0x8c01, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb }, 307 306 /* SATA Controller IDE (Lynx Point) */ 308 - { 0x8086, 0x8c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 307 + { 0x8086, 0x8c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_snb }, 309 308 /* SATA Controller IDE (Lynx Point) */ 310 309 { 0x8086, 0x8c09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 311 310 /* SATA Controller IDE (Lynx Point-LP) */ ··· 440 439 [ich8m_apple_sata] = &ich8m_apple_map_db, 441 440 [tolapai_sata] = &tolapai_map_db, 442 441 [ich8_sata_snb] = &ich8_map_db, 442 + [ich8_2port_sata_snb] = &ich8_2port_map_db, 443 443 }; 444 444 445 445 static struct pci_bits piix_enable_bits[] = { ··· 1239 1237 [ich8_sata_snb] = 1240 1238 { 1241 1239 .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SIDPR | PIIX_FLAG_PIO16, 1240 + .pio_mask = ATA_PIO4, 1241 + .mwdma_mask = ATA_MWDMA2, 1242 + .udma_mask = ATA_UDMA6, 1243 + .port_ops = &piix_sata_ops, 1244 + }, 1245 + 1246 + [ich8_2port_sata_snb] = 1247 + { 1248 + .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SIDPR 1249 + | PIIX_FLAG_PIO16, 1242 1250 .pio_mask = ATA_PIO4, 1243 1251 .mwdma_mask = ATA_MWDMA2, 1244 1252 .udma_mask = ATA_UDMA6,
+5 -1
drivers/ata/libata-core.c
··· 2329 2329 * from SATA Settings page of Identify Device Data Log. 2330 2330 */ 2331 2331 if (ata_id_has_devslp(dev->id)) { 2332 - u8 sata_setting[ATA_SECT_SIZE]; 2332 + u8 *sata_setting = ap->sector_buf; 2333 2333 int i, j; 2334 2334 2335 2335 dev->flags |= ATA_DFLAG_DEVSLP; ··· 2438 2438 if (dev->horkage & ATA_HORKAGE_MAX_SEC_128) 2439 2439 dev->max_sectors = min_t(unsigned int, ATA_MAX_SECTORS_128, 2440 2440 dev->max_sectors); 2441 + 2442 + if (dev->horkage & ATA_HORKAGE_MAX_SEC_LBA48) 2443 + dev->max_sectors = ATA_MAX_SECTORS_LBA48; 2441 2444 2442 2445 if (ap->ops->dev_config) 2443 2446 ap->ops->dev_config(dev); ··· 4103 4100 /* Weird ATAPI devices */ 4104 4101 { "TORiSAN DVD-ROM DRD-N216", NULL, ATA_HORKAGE_MAX_SEC_128 }, 4105 4102 { "QUANTUM DAT DAT72-000", NULL, ATA_HORKAGE_ATAPI_MOD16_DMA }, 4103 + { "Slimtype DVD A DS8A8SH", NULL, ATA_HORKAGE_MAX_SEC_LBA48 }, 4106 4104 4107 4105 /* Devices we expect to fail diagnostics */ 4108 4106
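Two things happen in the libata-core hunk: a 512-byte on-stack buffer is replaced with the preallocated `ap->sector_buf` (large stack frames are dangerous in kernel context), and a new `ATA_HORKAGE_MAX_SEC_LBA48` quirk forces `max_sectors` for a misbehaving drive. A small sketch of the quirk-clamping logic, with made-up flag and limit names mirroring the kernel ones:

```c
#include <assert.h>

/* Hypothetical quirk flags and limits, mirroring the max_sectors
 * handling in ata_dev_config(). */
#define QUIRK_MAX_SEC_128	(1u << 0)
#define QUIRK_MAX_SEC_LBA48	(1u << 1)

#define MAX_SECTORS_128		128u
#define MAX_SECTORS_LBA48	65535u

static unsigned int apply_quirks(unsigned int max_sectors, unsigned int quirks)
{
	/* The 128-sector quirk only ever lowers the limit... */
	if ((quirks & QUIRK_MAX_SEC_128) && max_sectors > MAX_SECTORS_128)
		max_sectors = MAX_SECTORS_128;
	/* ...while the LBA48 quirk sets it outright, as in the hunk. */
	if (quirks & QUIRK_MAX_SEC_LBA48)
		max_sectors = MAX_SECTORS_LBA48;
	return max_sectors;
}
```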
+4 -4
drivers/ata/libata-scsi.c
··· 532 532 struct scsi_sense_hdr sshdr; 533 533 scsi_normalize_sense(sensebuf, SCSI_SENSE_BUFFERSIZE, 534 534 &sshdr); 535 - if (sshdr.sense_key == 0 && 536 - sshdr.asc == 0 && sshdr.ascq == 0) 535 + if (sshdr.sense_key == RECOVERED_ERROR && 536 + sshdr.asc == 0 && sshdr.ascq == 0x1d) 537 537 cmd_result &= ~SAM_STAT_CHECK_CONDITION; 538 538 } 539 539 ··· 618 618 struct scsi_sense_hdr sshdr; 619 619 scsi_normalize_sense(sensebuf, SCSI_SENSE_BUFFERSIZE, 620 620 &sshdr); 621 - if (sshdr.sense_key == 0 && 622 - sshdr.asc == 0 && sshdr.ascq == 0) 621 + if (sshdr.sense_key == RECOVERED_ERROR && 622 + sshdr.asc == 0 && sshdr.ascq == 0x1d) 623 623 cmd_result &= ~SAM_STAT_CHECK_CONDITION; 624 624 } 625 625
+2 -1
drivers/base/regmap/regmap.c
··· 943 943 unsigned int ival; 944 944 int val_bytes = map->format.val_bytes; 945 945 for (i = 0; i < val_len / val_bytes; i++) { 946 - ival = map->format.parse_val(val + (i * val_bytes)); 946 + memcpy(map->work_buf, val + (i * val_bytes), val_bytes); 947 + ival = map->format.parse_val(map->work_buf); 947 948 ret = regcache_write(map, reg + (i * map->reg_stride), 948 949 ival); 949 950 if (ret) {
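The regmap fix exists because `map->format.parse_val()` byte-swaps its buffer in place, so running it directly on the caller's `val` corrupts caller-owned data; copying each value into `map->work_buf` first keeps the input intact. A self-contained sketch of the hazard and the fix (the function names here are illustrative, not the regmap API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* In-place big-endian parse, modelled on regmap's parse_val: it
 * rewrites the buffer it is handed, so it must never run directly
 * on caller-owned data. */
static uint32_t parse_be32_inplace(void *buf)
{
	uint8_t *b = buf;
	uint32_t v = ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
		     ((uint32_t)b[2] << 8) | b[3];

	memcpy(buf, &v, sizeof(v));	/* destructive, like the real helper */
	return v;
}

/* The fix: copy each value into a scratch buffer first, so the
 * caller's (possibly const) input survives the parse. */
static uint32_t parse_be32_safe(const void *val, void *scratch)
{
	memcpy(scratch, val, sizeof(uint32_t));
	return parse_be32_inplace(scratch);
}
```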
+2 -19
drivers/block/loop.c
··· 1051 1051 lo->lo_state = Lo_unbound; 1052 1052 /* This is safe: open() is still holding a reference. */ 1053 1053 module_put(THIS_MODULE); 1054 + if (lo->lo_flags & LO_FLAGS_PARTSCAN && bdev) 1055 + ioctl_by_bdev(bdev, BLKRRPART, 0); 1054 1056 lo->lo_flags = 0; 1055 1057 if (!part_shift) 1056 1058 lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN; 1057 1059 mutex_unlock(&lo->lo_ctl_mutex); 1058 - 1059 - /* 1060 - * Remove all partitions, since BLKRRPART won't remove user 1061 - * added partitions when max_part=0 1062 - */ 1063 - if (bdev) { 1064 - struct disk_part_iter piter; 1065 - struct hd_struct *part; 1066 - 1067 - mutex_lock_nested(&bdev->bd_mutex, 1); 1068 - invalidate_partition(bdev->bd_disk, 0); 1069 - disk_part_iter_init(&piter, bdev->bd_disk, 1070 - DISK_PITER_INCL_EMPTY); 1071 - while ((part = disk_part_iter_next(&piter))) 1072 - delete_partition(bdev->bd_disk, part->partno); 1073 - disk_part_iter_exit(&piter); 1074 - mutex_unlock(&bdev->bd_mutex); 1075 - } 1076 - 1077 1060 /* 1078 1061 * Need not hold lo_ctl_mutex to fput backing file. 1079 1062 * Calling fput holding lo_ctl_mutex triggers a circular
+232 -95
drivers/block/mtip32xx/mtip32xx.c
··· 81 81 /* Device instance number, incremented each time a device is probed. */ 82 82 static int instance; 83 83 84 + struct list_head online_list; 85 + struct list_head removing_list; 86 + spinlock_t dev_lock; 87 + 84 88 /* 85 89 * Global variable used to hold the major block device number 86 90 * allocated in mtip_init(). 87 91 */ 88 92 static int mtip_major; 89 93 static struct dentry *dfs_parent; 94 + static struct dentry *dfs_device_status; 90 95 91 96 static u32 cpu_use[NR_CPUS]; 92 97 ··· 248 243 /* 249 244 * Reset the HBA (without sleeping) 250 245 * 251 - * Just like hba_reset, except does not call sleep, so can be 252 - * run from interrupt/tasklet context. 253 - * 254 246 * @dd Pointer to the driver data structure. 255 247 * 256 248 * return value 257 249 * 0 The reset was successful. 258 250 * -1 The HBA Reset bit did not clear. 259 251 */ 260 - static int hba_reset_nosleep(struct driver_data *dd) 252 + static int mtip_hba_reset(struct driver_data *dd) 261 253 { 262 254 unsigned long timeout; 263 - 264 - /* Chip quirk: quiesce any chip function */ 265 - mdelay(10); 266 255 267 256 /* Set the reset bit */ 268 257 writel(HOST_RESET, dd->mmio + HOST_CTL); ··· 264 265 /* Flush */ 265 266 readl(dd->mmio + HOST_CTL); 266 267 267 - /* 268 - * Wait 10ms then spin for up to 1 second 269 - * waiting for reset acknowledgement 270 - */ 271 - timeout = jiffies + msecs_to_jiffies(1000); 272 - mdelay(10); 273 - while ((readl(dd->mmio + HOST_CTL) & HOST_RESET) 274 - && time_before(jiffies, timeout)) 275 - mdelay(1); 268 + /* Spin for up to 2 seconds, waiting for reset acknowledgement */ 269 + timeout = jiffies + msecs_to_jiffies(2000); 270 + do { 271 + mdelay(10); 272 + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag)) 273 + return -1; 276 274 277 - if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag)) 278 - return -1; 275 + } while ((readl(dd->mmio + HOST_CTL) & HOST_RESET) 276 + && time_before(jiffies, timeout)); 279 277 280 278 if (readl(dd->mmio + 
HOST_CTL) & HOST_RESET) 281 279 return -1; ··· 477 481 dev_warn(&port->dd->pdev->dev, 478 482 "PxCMD.CR not clear, escalating reset\n"); 479 483 480 - if (hba_reset_nosleep(port->dd)) 484 + if (mtip_hba_reset(port->dd)) 481 485 dev_err(&port->dd->pdev->dev, 482 486 "HBA reset escalation failed.\n"); 483 487 ··· 521 525 mtip_init_port(port); 522 526 mtip_start_port(port); 523 527 528 + } 529 + 530 + static int mtip_device_reset(struct driver_data *dd) 531 + { 532 + int rv = 0; 533 + 534 + if (mtip_check_surprise_removal(dd->pdev)) 535 + return 0; 536 + 537 + if (mtip_hba_reset(dd) < 0) 538 + rv = -EFAULT; 539 + 540 + mdelay(1); 541 + mtip_init_port(dd->port); 542 + mtip_start_port(dd->port); 543 + 544 + /* Enable interrupts on the HBA. */ 545 + writel(readl(dd->mmio + HOST_CTL) | HOST_IRQ_EN, 546 + dd->mmio + HOST_CTL); 547 + return rv; 524 548 } 525 549 526 550 /* ··· 648 632 if (cmdto_cnt) { 649 633 print_tags(port->dd, "timed out", tagaccum, cmdto_cnt); 650 634 if (!test_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags)) { 651 - mtip_restart_port(port); 635 + mtip_device_reset(port->dd); 652 636 wake_up_interruptible(&port->svc_wait); 653 637 } 654 638 clear_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags); ··· 1299 1283 int rv = 0, ready2go = 1; 1300 1284 struct mtip_cmd *int_cmd = &port->commands[MTIP_TAG_INTERNAL]; 1301 1285 unsigned long to; 1286 + struct driver_data *dd = port->dd; 1302 1287 1303 1288 /* Make sure the buffer is 8 byte aligned. This is asic specific. */ 1304 1289 if (buffer & 0x00000007) { 1305 - dev_err(&port->dd->pdev->dev, 1306 - "SG buffer is not 8 byte aligned\n"); 1290 + dev_err(&dd->pdev->dev, "SG buffer is not 8 byte aligned\n"); 1307 1291 return -EFAULT; 1308 1292 } 1309 1293 ··· 1316 1300 mdelay(100); 1317 1301 } while (time_before(jiffies, to)); 1318 1302 if (!ready2go) { 1319 - dev_warn(&port->dd->pdev->dev, 1303 + dev_warn(&dd->pdev->dev, 1320 1304 "Internal cmd active. 
new cmd [%02X]\n", fis->command); 1321 1305 return -EBUSY; 1322 1306 } 1323 1307 set_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags); 1324 1308 port->ic_pause_timer = 0; 1325 1309 1326 - if (fis->command == ATA_CMD_SEC_ERASE_UNIT) 1327 - clear_bit(MTIP_PF_SE_ACTIVE_BIT, &port->flags); 1328 - else if (fis->command == ATA_CMD_DOWNLOAD_MICRO) 1329 - clear_bit(MTIP_PF_DM_ACTIVE_BIT, &port->flags); 1310 + clear_bit(MTIP_PF_SE_ACTIVE_BIT, &port->flags); 1311 + clear_bit(MTIP_PF_DM_ACTIVE_BIT, &port->flags); 1330 1312 1331 1313 if (atomic == GFP_KERNEL) { 1332 1314 if (fis->command != ATA_CMD_STANDBYNOW1) { 1333 1315 /* wait for io to complete if non atomic */ 1334 1316 if (mtip_quiesce_io(port, 5000) < 0) { 1335 - dev_warn(&port->dd->pdev->dev, 1317 + dev_warn(&dd->pdev->dev, 1336 1318 "Failed to quiesce IO\n"); 1337 1319 release_slot(port, MTIP_TAG_INTERNAL); 1338 1320 clear_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags); ··· 1375 1361 /* Issue the command to the hardware */ 1376 1362 mtip_issue_non_ncq_command(port, MTIP_TAG_INTERNAL); 1377 1363 1378 - /* Poll if atomic, wait_for_completion otherwise */ 1379 1364 if (atomic == GFP_KERNEL) { 1380 1365 /* Wait for the command to complete or timeout. 
*/ 1381 - if (wait_for_completion_timeout( 1366 + if (wait_for_completion_interruptible_timeout( 1382 1367 &wait, 1383 - msecs_to_jiffies(timeout)) == 0) { 1384 - dev_err(&port->dd->pdev->dev, 1385 - "Internal command did not complete [%d] " 1386 - "within timeout of %lu ms\n", 1387 - atomic, timeout); 1388 - if (mtip_check_surprise_removal(port->dd->pdev) || 1368 + msecs_to_jiffies(timeout)) <= 0) { 1369 + if (rv == -ERESTARTSYS) { /* interrupted */ 1370 + dev_err(&dd->pdev->dev, 1371 + "Internal command [%02X] was interrupted after %lu ms\n", 1372 + fis->command, timeout); 1373 + rv = -EINTR; 1374 + goto exec_ic_exit; 1375 + } else if (rv == 0) /* timeout */ 1376 + dev_err(&dd->pdev->dev, 1377 + "Internal command did not complete [%02X] within timeout of %lu ms\n", 1378 + fis->command, timeout); 1379 + else 1380 + dev_err(&dd->pdev->dev, 1381 + "Internal command [%02X] wait returned code [%d] after %lu ms - unhandled\n", 1382 + fis->command, rv, timeout); 1383 + 1384 + if (mtip_check_surprise_removal(dd->pdev) || 1389 1385 test_bit(MTIP_DDF_REMOVE_PENDING_BIT, 1390 - &port->dd->dd_flag)) { 1386 + &dd->dd_flag)) { 1387 + dev_err(&dd->pdev->dev, 1388 + "Internal command [%02X] wait returned due to SR\n", 1389 + fis->command); 1391 1390 rv = -ENXIO; 1392 1391 goto exec_ic_exit; 1393 1392 } 1393 + mtip_device_reset(dd); /* recover from timeout issue */ 1394 1394 rv = -EAGAIN; 1395 + goto exec_ic_exit; 1395 1396 } 1396 1397 } else { 1398 + u32 hba_stat, port_stat; 1399 + 1397 1400 /* Spin for <timeout> checking if command still outstanding */ 1398 1401 timeout = jiffies + msecs_to_jiffies(timeout); 1399 1402 while ((readl(port->cmd_issue[MTIP_TAG_INTERNAL]) 1400 1403 & (1 << MTIP_TAG_INTERNAL)) 1401 1404 && time_before(jiffies, timeout)) { 1402 - if (mtip_check_surprise_removal(port->dd->pdev)) { 1405 + if (mtip_check_surprise_removal(dd->pdev)) { 1403 1406 rv = -ENXIO; 1404 1407 goto exec_ic_exit; 1405 1408 } 1406 1409 if ((fis->command != ATA_CMD_STANDBYNOW1) && 
1407 1410 test_bit(MTIP_DDF_REMOVE_PENDING_BIT, 1408 - &port->dd->dd_flag)) { 1411 + &dd->dd_flag)) { 1409 1412 rv = -ENXIO; 1410 1413 goto exec_ic_exit; 1411 1414 } 1412 - if (readl(port->mmio + PORT_IRQ_STAT) & PORT_IRQ_ERR) { 1413 - atomic_inc(&int_cmd->active); /* error */ 1414 - break; 1415 + port_stat = readl(port->mmio + PORT_IRQ_STAT); 1416 + if (!port_stat) 1417 + continue; 1418 + 1419 + if (port_stat & PORT_IRQ_ERR) { 1420 + dev_err(&dd->pdev->dev, 1421 + "Internal command [%02X] failed\n", 1422 + fis->command); 1423 + mtip_device_reset(dd); 1424 + rv = -EIO; 1425 + goto exec_ic_exit; 1426 + } else { 1427 + writel(port_stat, port->mmio + PORT_IRQ_STAT); 1428 + hba_stat = readl(dd->mmio + HOST_IRQ_STAT); 1429 + if (hba_stat) 1430 + writel(hba_stat, 1431 + dd->mmio + HOST_IRQ_STAT); 1415 1432 } 1433 + break; 1416 1434 } 1417 1435 } 1418 1436 1419 - if (atomic_read(&int_cmd->active) > 1) { 1420 - dev_err(&port->dd->pdev->dev, 1421 - "Internal command [%02X] failed\n", fis->command); 1422 - rv = -EIO; 1423 - } 1424 1437 if (readl(port->cmd_issue[MTIP_TAG_INTERNAL]) 1425 1438 & (1 << MTIP_TAG_INTERNAL)) { 1426 1439 rv = -ENXIO; 1427 - if (!test_bit(MTIP_DDF_REMOVE_PENDING_BIT, 1428 - &port->dd->dd_flag)) { 1429 - mtip_restart_port(port); 1440 + if (!test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag)) { 1441 + mtip_device_reset(dd); 1430 1442 rv = -EAGAIN; 1431 1443 } 1432 1444 } ··· 1764 1724 * -EINVAL Invalid parameters passed in, trim not supported 1765 1725 * -EIO Error submitting trim request to hw 1766 1726 */ 1767 - static int mtip_send_trim(struct driver_data *dd, unsigned int lba, unsigned int len) 1727 + static int mtip_send_trim(struct driver_data *dd, unsigned int lba, 1728 + unsigned int len) 1768 1729 { 1769 1730 int i, rv = 0; 1770 1731 u64 tlba, tlen, sect_left; ··· 1849 1808 total = raw0 | raw1<<16 | raw2<<32 | raw3<<48; 1850 1809 *sectors = total; 1851 1810 return (bool) !!port->identify_valid; 1852 - } 1853 - 1854 - /* 1855 - * Reset the 
HBA. 1856 - * 1857 - * Resets the HBA by setting the HBA Reset bit in the Global 1858 - * HBA Control register. After setting the HBA Reset bit the 1859 - * function waits for 1 second before reading the HBA Reset 1860 - * bit to make sure it has cleared. If HBA Reset is not clear 1861 - * an error is returned. Cannot be used in non-blockable 1862 - * context. 1863 - * 1864 - * @dd Pointer to the driver data structure. 1865 - * 1866 - * return value 1867 - * 0 The reset was successful. 1868 - * -1 The HBA Reset bit did not clear. 1869 - */ 1870 - static int mtip_hba_reset(struct driver_data *dd) 1871 - { 1872 - mtip_deinit_port(dd->port); 1873 - 1874 - /* Set the reset bit */ 1875 - writel(HOST_RESET, dd->mmio + HOST_CTL); 1876 - 1877 - /* Flush */ 1878 - readl(dd->mmio + HOST_CTL); 1879 - 1880 - /* Wait for reset to clear */ 1881 - ssleep(1); 1882 - 1883 - /* Check the bit has cleared */ 1884 - if (readl(dd->mmio + HOST_CTL) & HOST_RESET) { 1885 - dev_err(&dd->pdev->dev, 1886 - "Reset bit did not clear.\n"); 1887 - return -1; 1888 - } 1889 - 1890 - return 0; 1891 1811 } 1892 1812 1893 1813 /* ··· 2712 2710 2713 2711 static DEVICE_ATTR(status, S_IRUGO, mtip_hw_show_status, NULL); 2714 2712 2713 + /* debugsfs entries */ 2714 + 2715 + static ssize_t show_device_status(struct device_driver *drv, char *buf) 2716 + { 2717 + int size = 0; 2718 + struct driver_data *dd, *tmp; 2719 + unsigned long flags; 2720 + char id_buf[42]; 2721 + u16 status = 0; 2722 + 2723 + spin_lock_irqsave(&dev_lock, flags); 2724 + size += sprintf(&buf[size], "Devices Present:\n"); 2725 + list_for_each_entry_safe(dd, tmp, &online_list, online_list) { 2726 + if (dd->pdev) { 2727 + if (dd->port && 2728 + dd->port->identify && 2729 + dd->port->identify_valid) { 2730 + strlcpy(id_buf, 2731 + (char *) (dd->port->identify + 10), 21); 2732 + status = *(dd->port->identify + 141); 2733 + } else { 2734 + memset(id_buf, 0, 42); 2735 + status = 0; 2736 + } 2737 + 2738 + if (dd->port && 2739 + 
test_bit(MTIP_PF_REBUILD_BIT, &dd->port->flags)) { 2740 + size += sprintf(&buf[size], 2741 + " device %s %s (ftl rebuild %d %%)\n", 2742 + dev_name(&dd->pdev->dev), 2743 + id_buf, 2744 + status); 2745 + } else { 2746 + size += sprintf(&buf[size], 2747 + " device %s %s\n", 2748 + dev_name(&dd->pdev->dev), 2749 + id_buf); 2750 + } 2751 + } 2752 + } 2753 + 2754 + size += sprintf(&buf[size], "Devices Being Removed:\n"); 2755 + list_for_each_entry_safe(dd, tmp, &removing_list, remove_list) { 2756 + if (dd->pdev) { 2757 + if (dd->port && 2758 + dd->port->identify && 2759 + dd->port->identify_valid) { 2760 + strlcpy(id_buf, 2761 + (char *) (dd->port->identify+10), 21); 2762 + status = *(dd->port->identify + 141); 2763 + } else { 2764 + memset(id_buf, 0, 42); 2765 + status = 0; 2766 + } 2767 + 2768 + if (dd->port && 2769 + test_bit(MTIP_PF_REBUILD_BIT, &dd->port->flags)) { 2770 + size += sprintf(&buf[size], 2771 + " device %s %s (ftl rebuild %d %%)\n", 2772 + dev_name(&dd->pdev->dev), 2773 + id_buf, 2774 + status); 2775 + } else { 2776 + size += sprintf(&buf[size], 2777 + " device %s %s\n", 2778 + dev_name(&dd->pdev->dev), 2779 + id_buf); 2780 + } 2781 + } 2782 + } 2783 + spin_unlock_irqrestore(&dev_lock, flags); 2784 + 2785 + return size; 2786 + } 2787 + 2788 + static ssize_t mtip_hw_read_device_status(struct file *f, char __user *ubuf, 2789 + size_t len, loff_t *offset) 2790 + { 2791 + int size = *offset; 2792 + char buf[MTIP_DFS_MAX_BUF_SIZE]; 2793 + 2794 + if (!len || *offset) 2795 + return 0; 2796 + 2797 + size += show_device_status(NULL, buf); 2798 + 2799 + *offset = size <= len ? 
size : len; 2800 + size = copy_to_user(ubuf, buf, *offset); 2801 + if (size) 2802 + return -EFAULT; 2803 + 2804 + return *offset; 2805 + } 2806 + 2715 2807 static ssize_t mtip_hw_read_registers(struct file *f, char __user *ubuf, 2716 2808 size_t len, loff_t *offset) 2717 2809 { ··· 2899 2803 2900 2804 return *offset; 2901 2805 } 2806 + 2807 + static const struct file_operations mtip_device_status_fops = { 2808 + .owner = THIS_MODULE, 2809 + .open = simple_open, 2810 + .read = mtip_hw_read_device_status, 2811 + .llseek = no_llseek, 2812 + }; 2902 2813 2903 2814 static const struct file_operations mtip_regs_fops = { 2904 2815 .owner = THIS_MODULE, ··· 4264 4161 const struct cpumask *node_mask; 4265 4162 int cpu, i = 0, j = 0; 4266 4163 int my_node = NUMA_NO_NODE; 4164 + unsigned long flags; 4267 4165 4268 4166 /* Allocate memory for this devices private data. */ 4269 4167 my_node = pcibus_to_node(pdev->bus); ··· 4321 4217 dd->instance = instance; 4322 4218 dd->pdev = pdev; 4323 4219 dd->numa_node = my_node; 4220 + 4221 + INIT_LIST_HEAD(&dd->online_list); 4222 + INIT_LIST_HEAD(&dd->remove_list); 4324 4223 4325 4224 memset(dd->workq_name, 0, 32); 4326 4225 snprintf(dd->workq_name, 31, "mtipq%d", dd->instance); ··· 4412 4305 instance++; 4413 4306 if (rv != MTIP_FTL_REBUILD_MAGIC) 4414 4307 set_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag); 4308 + else 4309 + rv = 0; /* device in rebuild state, return 0 from probe */ 4310 + 4311 + /* Add to online list even if in ftl rebuild */ 4312 + spin_lock_irqsave(&dev_lock, flags); 4313 + list_add(&dd->online_list, &online_list); 4314 + spin_unlock_irqrestore(&dev_lock, flags); 4315 + 4415 4316 goto done; 4416 4317 4417 4318 block_initialize_err: ··· 4453 4338 { 4454 4339 struct driver_data *dd = pci_get_drvdata(pdev); 4455 4340 int counter = 0; 4341 + unsigned long flags; 4456 4342 4457 4343 set_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag); 4344 + 4345 + spin_lock_irqsave(&dev_lock, flags); 4346 + list_del_init(&dd->online_list); 
4347 + list_add(&dd->remove_list, &removing_list); 4348 + spin_unlock_irqrestore(&dev_lock, flags); 4458 4349 4459 4350 if (mtip_check_surprise_removal(pdev)) { 4460 4351 while (!test_bit(MTIP_DDF_CLEANUP_BIT, &dd->dd_flag)) { ··· 4486 4365 } 4487 4366 4488 4367 pci_disable_msi(pdev); 4368 + 4369 + spin_lock_irqsave(&dev_lock, flags); 4370 + list_del_init(&dd->remove_list); 4371 + spin_unlock_irqrestore(&dev_lock, flags); 4489 4372 4490 4373 kfree(dd); 4491 4374 pcim_iounmap_regions(pdev, 1 << MTIP_ABAR); ··· 4638 4513 4639 4514 pr_info(MTIP_DRV_NAME " Version " MTIP_DRV_VERSION "\n"); 4640 4515 4516 + spin_lock_init(&dev_lock); 4517 + 4518 + INIT_LIST_HEAD(&online_list); 4519 + INIT_LIST_HEAD(&removing_list); 4520 + 4641 4521 /* Allocate a major block device number to use with this driver. */ 4642 4522 error = register_blkdev(0, MTIP_DRV_NAME); 4643 4523 if (error <= 0) { ··· 4652 4522 } 4653 4523 mtip_major = error; 4654 4524 4655 - if (!dfs_parent) { 4656 - dfs_parent = debugfs_create_dir("rssd", NULL); 4657 - if (IS_ERR_OR_NULL(dfs_parent)) { 4658 - pr_warn("Error creating debugfs parent\n"); 4659 - dfs_parent = NULL; 4525 + dfs_parent = debugfs_create_dir("rssd", NULL); 4526 + if (IS_ERR_OR_NULL(dfs_parent)) { 4527 + pr_warn("Error creating debugfs parent\n"); 4528 + dfs_parent = NULL; 4529 + } 4530 + if (dfs_parent) { 4531 + dfs_device_status = debugfs_create_file("device_status", 4532 + S_IRUGO, dfs_parent, NULL, 4533 + &mtip_device_status_fops); 4534 + if (IS_ERR_OR_NULL(dfs_device_status)) { 4535 + pr_err("Error creating device_status node\n"); 4536 + dfs_device_status = NULL; 4660 4537 } 4661 4538 } 4662 4539
+11 -7
drivers/block/mtip32xx/mtip32xx.h
··· 129 129 MTIP_PF_EH_ACTIVE_BIT = 1, /* error handling */ 130 130 MTIP_PF_SE_ACTIVE_BIT = 2, /* secure erase */ 131 131 MTIP_PF_DM_ACTIVE_BIT = 3, /* download microcde */ 132 - MTIP_PF_PAUSE_IO = ((1 << MTIP_PF_IC_ACTIVE_BIT) | \ 133 - (1 << MTIP_PF_EH_ACTIVE_BIT) | \ 134 - (1 << MTIP_PF_SE_ACTIVE_BIT) | \ 132 + MTIP_PF_PAUSE_IO = ((1 << MTIP_PF_IC_ACTIVE_BIT) | 133 + (1 << MTIP_PF_EH_ACTIVE_BIT) | 134 + (1 << MTIP_PF_SE_ACTIVE_BIT) | 135 135 (1 << MTIP_PF_DM_ACTIVE_BIT)), 136 136 137 137 MTIP_PF_SVC_THD_ACTIVE_BIT = 4, ··· 144 144 MTIP_DDF_REMOVE_PENDING_BIT = 1, 145 145 MTIP_DDF_OVER_TEMP_BIT = 2, 146 146 MTIP_DDF_WRITE_PROTECT_BIT = 3, 147 - MTIP_DDF_STOP_IO = ((1 << MTIP_DDF_REMOVE_PENDING_BIT) | \ 148 - (1 << MTIP_DDF_SEC_LOCK_BIT) | \ 149 - (1 << MTIP_DDF_OVER_TEMP_BIT) | \ 147 + MTIP_DDF_STOP_IO = ((1 << MTIP_DDF_REMOVE_PENDING_BIT) | 148 + (1 << MTIP_DDF_SEC_LOCK_BIT) | 149 + (1 << MTIP_DDF_OVER_TEMP_BIT) | 150 150 (1 << MTIP_DDF_WRITE_PROTECT_BIT)), 151 151 152 152 MTIP_DDF_CLEANUP_BIT = 5, ··· 180 180 181 181 #define MTIP_TRIM_TIMEOUT_MS 240000 182 182 #define MTIP_MAX_TRIM_ENTRIES 8 183 - #define MTIP_MAX_TRIM_ENTRY_LEN 0xfff8 183 + #define MTIP_MAX_TRIM_ENTRY_LEN 0xfff8 184 184 185 185 struct mtip_trim_entry { 186 186 u32 lba; /* starting lba of region */ ··· 501 501 atomic_t irq_workers_active; 502 502 503 503 int isr_binding; 504 + 505 + struct list_head online_list; /* linkage for online list */ 506 + 507 + struct list_head remove_list; /* linkage for removing list */ 504 508 }; 505 509 506 510 #endif
+2 -1
drivers/block/rbd.c
··· 1742 1742 struct rbd_device *rbd_dev = img_request->rbd_dev; 1743 1743 struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc; 1744 1744 struct rbd_obj_request *obj_request; 1745 + struct rbd_obj_request *next_obj_request; 1745 1746 1746 1747 dout("%s: img %p\n", __func__, img_request); 1747 - for_each_obj_request(img_request, obj_request) { 1748 + for_each_obj_request_safe(img_request, obj_request, next_obj_request) { 1748 1749 int ret; 1749 1750 1750 1751 obj_request->callback = rbd_img_obj_callback;
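The rbd change swaps `for_each_obj_request` for the `_safe` variant because the loop body's callback can remove the current request from the list; the safe form caches the next pointer before the body runs. The same pattern on a plain singly linked list, as a minimal sketch:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal list to illustrate the for_each_..._safe pattern: fetch the
 * next pointer *before* the loop body, so the body may free or unlink
 * the current node without derailing the iteration. */
struct node { int v; struct node *next; };

static struct node *push(struct node *head, int v)
{
	struct node *n = malloc(sizeof(*n));

	n->v = v;
	n->next = head;
	return n;
}

/* Visit every node and free it; returns how many were visited.
 * Reading cur->next after free(cur) would be a use-after-free. */
static int drain(struct node *head)
{
	struct node *cur, *next;
	int count = 0;

	for (cur = head; cur; cur = next) {
		next = cur->next;	/* saved before cur is invalidated */
		free(cur);
		count++;
	}
	return count;
}
```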
+1 -13
drivers/char/hpet.c
··· 373 373 struct hpet_dev *devp; 374 374 unsigned long addr; 375 375 376 - if (((vma->vm_end - vma->vm_start) != PAGE_SIZE) || vma->vm_pgoff) 377 - return -EINVAL; 378 - 379 376 devp = file->private_data; 380 377 addr = devp->hd_hpets->hp_hpet_phys; 381 378 382 379 if (addr & (PAGE_SIZE - 1)) 383 380 return -ENOSYS; 384 381 385 - vma->vm_flags |= VM_IO; 386 382 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 387 - 388 - if (io_remap_pfn_range(vma, vma->vm_start, addr >> PAGE_SHIFT, 389 - PAGE_SIZE, vma->vm_page_prot)) { 390 - printk(KERN_ERR "%s: io_remap_pfn_range failed\n", 391 - __func__); 392 - return -EAGAIN; 393 - } 394 - 395 - return 0; 383 + return vm_iomap_memory(vma, addr, PAGE_SIZE); 396 384 #else 397 385 return -ENOSYS; 398 386 #endif
-1
drivers/cpufreq/intel_pstate.c
··· 502 502 503 503 sample_time = cpu->pstate_policy->sample_rate_ms; 504 504 delay = msecs_to_jiffies(sample_time); 505 - delay -= jiffies % delay; 506 505 mod_timer_pinned(&cpu->timer, jiffies + delay); 507 506 } 508 507
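The dropped intel_pstate line, `delay -= jiffies % delay`, rounded the next timer expiry down onto a multiple of the sample period; removing it leaves a plain fixed delay. The arithmetic difference, sketched with `now` standing in for `jiffies`:

```c
#include <assert.h>

/* Next-expiry computation without and with the removed alignment step. */
static unsigned long next_plain(unsigned long now, unsigned long delay)
{
	return now + delay;
}

static unsigned long next_aligned(unsigned long now, unsigned long delay)
{
	/* Lands on a multiple of 'delay', shortening the first interval. */
	return now + delay - (now % delay);
}
```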
+1 -1
drivers/crypto/ux500/cryp/cryp_core.c
··· 1750 1750 .shutdown = ux500_cryp_shutdown, 1751 1751 .driver = { 1752 1752 .owner = THIS_MODULE, 1753 - .name = "cryp1" 1753 + .name = "cryp1", 1754 1754 .pm = &ux500_cryp_pm, 1755 1755 } 1756 1756 };
+4 -5
drivers/dma/at_hdmac.c
··· 310 310 311 311 dev_vdbg(chan2dev(&atchan->chan_common), "complete all\n"); 312 312 313 - BUG_ON(atc_chan_is_enabled(atchan)); 314 - 315 313 /* 316 314 * Submit queued descriptors ASAP, i.e. before we go through 317 315 * the completed ones. ··· 365 367 static void atc_advance_work(struct at_dma_chan *atchan) 366 368 { 367 369 dev_vdbg(chan2dev(&atchan->chan_common), "advance_work\n"); 370 + 371 + if (atc_chan_is_enabled(atchan)) 372 + return; 368 373 369 374 if (list_empty(&atchan->active_list) || 370 375 list_is_singular(&atchan->active_list)) { ··· 1079 1078 return; 1080 1079 1081 1080 spin_lock_irqsave(&atchan->lock, flags); 1082 - if (!atc_chan_is_enabled(atchan)) { 1083 - atc_advance_work(atchan); 1084 - } 1081 + atc_advance_work(atchan); 1085 1082 spin_unlock_irqrestore(&atchan->lock, flags); 1086 1083 } 1087 1084
+14 -6
drivers/dma/omap-dma.c
··· 276 276 277 277 spin_lock_irqsave(&c->vc.lock, flags); 278 278 if (vchan_issue_pending(&c->vc) && !c->desc) { 279 - struct omap_dmadev *d = to_omap_dma_dev(chan->device); 280 - spin_lock(&d->lock); 281 - if (list_empty(&c->node)) 282 - list_add_tail(&c->node, &d->pending); 283 - spin_unlock(&d->lock); 284 - tasklet_schedule(&d->task); 279 + /* 280 + * c->cyclic is used only by audio and in this case the DMA need 281 + * to be started without delay. 282 + */ 283 + if (!c->cyclic) { 284 + struct omap_dmadev *d = to_omap_dma_dev(chan->device); 285 + spin_lock(&d->lock); 286 + if (list_empty(&c->node)) 287 + list_add_tail(&c->node, &d->pending); 288 + spin_unlock(&d->lock); 289 + tasklet_schedule(&d->task); 290 + } else { 291 + omap_dma_start_desc(c); 292 + } 285 293 } 286 294 spin_unlock_irqrestore(&c->vc.lock, flags); 287 295 }
+27 -11
drivers/dma/pl330.c
··· 2882 2882 { 2883 2883 struct dma_pl330_platdata *pdat; 2884 2884 struct dma_pl330_dmac *pdmac; 2885 - struct dma_pl330_chan *pch; 2885 + struct dma_pl330_chan *pch, *_p; 2886 2886 struct pl330_info *pi; 2887 2887 struct dma_device *pd; 2888 2888 struct resource *res; ··· 2984 2984 ret = dma_async_device_register(pd); 2985 2985 if (ret) { 2986 2986 dev_err(&adev->dev, "unable to register DMAC\n"); 2987 - goto probe_err2; 2987 + goto probe_err3; 2988 + } 2989 + 2990 + if (adev->dev.of_node) { 2991 + ret = of_dma_controller_register(adev->dev.of_node, 2992 + of_dma_pl330_xlate, pdmac); 2993 + if (ret) { 2994 + dev_err(&adev->dev, 2995 + "unable to register DMA to the generic DT DMA helpers\n"); 2996 + } 2988 2997 } 2989 2998 2990 2999 dev_info(&adev->dev, ··· 3004 2995 pi->pcfg.data_bus_width / 8, pi->pcfg.num_chan, 3005 2996 pi->pcfg.num_peri, pi->pcfg.num_events); 3006 2997 3007 - ret = of_dma_controller_register(adev->dev.of_node, 3008 - of_dma_pl330_xlate, pdmac); 3009 - if (ret) { 3010 - dev_err(&adev->dev, 3011 - "unable to register DMA to the generic DT DMA helpers\n"); 3012 - goto probe_err2; 3013 - } 3014 - 3015 2998 return 0; 2999 + probe_err3: 3000 + amba_set_drvdata(adev, NULL); 3016 3001 3002 + /* Idle the DMAC */ 3003 + list_for_each_entry_safe(pch, _p, &pdmac->ddma.channels, 3004 + chan.device_node) { 3005 + 3006 + /* Remove the channel */ 3007 + list_del(&pch->chan.device_node); 3008 + 3009 + /* Flush the channel */ 3010 + pl330_control(&pch->chan, DMA_TERMINATE_ALL, 0); 3011 + pl330_free_chan_resources(&pch->chan); 3012 + } 3017 3013 probe_err2: 3018 3014 pl330_del(pi); 3019 3015 probe_err1: ··· 3037 3023 if (!pdmac) 3038 3024 return 0; 3039 3025 3040 - of_dma_controller_free(adev->dev.of_node); 3026 + if (adev->dev.of_node) 3027 + of_dma_controller_free(adev->dev.of_node); 3041 3028 3029 + dma_async_device_unregister(&pdmac->ddma); 3042 3030 amba_set_drvdata(adev, NULL); 3043 3031 3044 3032 /* Idle the DMAC */
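The pl330 hunk adds a `probe_err3` label so that when `dma_async_device_register()` fails, the already-initialized channels are flushed and freed before the earlier error path runs, i.e. unwinding in reverse acquisition order. A hedged sketch of that goto-based unwind, with counters standing in for the resources:

```c
#include <assert.h>

/* Counters standing in for resource state (hypothetical names). */
static int held_a, held_b;

static int get_a(void) { held_a++; return 0; }
static void put_a(void) { held_a--; }
static int get_b(int fail) { if (fail) return -1; held_b++; return 0; }
static void put_b(void) { held_b--; }

/* Probe-style function: a late failure unwinds via goto labels in
 * reverse acquisition order, as the pl330 fix does for its channels
 * after registration fails. */
static int probe(int fail_b)
{
	int ret;

	ret = get_a();
	if (ret)
		return ret;
	ret = get_b(fail_b);
	if (ret)
		goto err_put_a;
	return 0;

err_put_a:
	put_a();
	return ret;
}
```

Each later acquisition gets a label that falls through to the cleanups of everything acquired before it, so a new step only needs one new label rather than duplicated cleanup code.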
+47 -22
drivers/eisa/pci_eisa.c
··· 19 19 /* There is only *one* pci_eisa device per machine, right ? */ 20 20 static struct eisa_root_device pci_eisa_root; 21 21 22 - static int __init pci_eisa_init(struct pci_dev *pdev, 23 - const struct pci_device_id *ent) 22 + static int __init pci_eisa_init(struct pci_dev *pdev) 24 23 { 25 - int rc; 24 + int rc, i; 25 + struct resource *res, *bus_res = NULL; 26 26 27 27 if ((rc = pci_enable_device (pdev))) { 28 28 printk (KERN_ERR "pci_eisa : Could not enable device %s\n", ··· 30 30 return rc; 31 31 } 32 32 33 + /* 34 + * The Intel 82375 PCI-EISA bridge is a subtractive-decode PCI 35 + * device, so the resources available on EISA are the same as those 36 + * available on the 82375 bus. This works the same as a PCI-PCI 37 + * bridge in subtractive-decode mode (see pci_read_bridge_bases()). 38 + * We assume other PCI-EISA bridges are similar. 39 + * 40 + * eisa_root_register() can only deal with a single io port resource, 41 + * so we use the first valid io port resource. 42 + */ 43 + pci_bus_for_each_resource(pdev->bus, res, i) 44 + if (res && (res->flags & IORESOURCE_IO)) { 45 + bus_res = res; 46 + break; 47 + } 48 + 49 + if (!bus_res) { 50 + dev_err(&pdev->dev, "No resources available\n"); 51 + return -1; 52 + } 53 + 33 54 pci_eisa_root.dev = &pdev->dev; 34 - pci_eisa_root.res = pdev->bus->resource[0]; 35 - pci_eisa_root.bus_base_addr = pdev->bus->resource[0]->start; 55 + pci_eisa_root.res = bus_res; 56 + pci_eisa_root.bus_base_addr = bus_res->start; 36 57 pci_eisa_root.slots = EISA_MAX_SLOTS; 37 58 pci_eisa_root.dma_mask = pdev->dma_mask; 38 59 dev_set_drvdata(pci_eisa_root.dev, &pci_eisa_root); ··· 66 45 return 0; 67 46 } 68 47 69 - static struct pci_device_id pci_eisa_pci_tbl[] = { 70 - { PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, 71 - PCI_CLASS_BRIDGE_EISA << 8, 0xffff00, 0 }, 72 - { 0, } 73 - }; 74 - 75 - static struct pci_driver __refdata pci_eisa_driver = { 76 - .name = "pci_eisa", 77 - .id_table = pci_eisa_pci_tbl, 78 - .probe = pci_eisa_init, 
79 - }; 80 - 81 - static int __init pci_eisa_init_module (void) 48 + /* 49 + * We have to call pci_eisa_init_early() before pnpacpi_init()/isapnp_init(). 50 + * Otherwise pnp resource will get enabled early and could prevent eisa 51 + * to be initialized. 52 + * Also need to make sure pci_eisa_init_early() is called after 53 + * x86/pci_subsys_init(). 54 + * So need to use subsys_initcall_sync with it. 55 + */ 56 + static int __init pci_eisa_init_early(void) 82 57 { 83 - return pci_register_driver (&pci_eisa_driver); 84 - } 58 + struct pci_dev *dev = NULL; 59 + int ret; 85 60 86 - device_initcall(pci_eisa_init_module); 87 - MODULE_DEVICE_TABLE(pci, pci_eisa_pci_tbl); 61 + for_each_pci_dev(dev) 62 + if ((dev->class >> 8) == PCI_CLASS_BRIDGE_EISA) { 63 + ret = pci_eisa_init(dev); 64 + if (ret) 65 + return ret; 66 + } 67 + 68 + return 0; 69 + } 70 + subsys_initcall_sync(pci_eisa_init_early);
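The pci_eisa.c hunk above replaces the blind use of `pdev->bus->resource[0]` with a scan for the first I/O port resource on the bridge's bus. A minimal userspace sketch of that scan, with simplified stand-in flag values and struct layout (not the kernel's real definitions):

```c
#include <stddef.h>

/* Hypothetical stand-ins for the kernel's resource flags. */
#define IORESOURCE_IO  0x00000100
#define IORESOURCE_MEM 0x00000200

struct resource {
	unsigned long start;
	unsigned long flags;
};

/* Return the first I/O port resource in the array, or NULL if there is
 * none -- the case the patched pci_eisa_init() now reports as an error. */
static struct resource *first_io_resource(struct resource *res, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (res[i].flags & IORESOURCE_IO)
			return &res[i];
	return NULL;
}
```

The point of the scan: a subtractive-decode bridge like the Intel 82375 may expose memory resources before its I/O window, so index 0 is not guaranteed to be an I/O port resource.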
+1
drivers/firmware/Kconfig
··· 39 39 config EFI_VARS 40 40 tristate "EFI Variable Support via sysfs" 41 41 depends on EFI 42 + select UCS2_STRING 42 43 default n 43 44 help 44 45 If you say Y here, you are able to get EFI (Extensible Firmware
+21 -77
drivers/firmware/efivars.c
··· 80 80 #include <linux/slab.h> 81 81 #include <linux/pstore.h> 82 82 #include <linux/ctype.h> 83 + #include <linux/ucs2_string.h> 83 84 84 85 #include <linux/fs.h> 85 86 #include <linux/ramfs.h> ··· 173 172 static DECLARE_WORK(efivar_work, efivar_update_sysfs_entries); 174 173 static bool efivar_wq_enabled = true; 175 174 176 - /* Return the number of unicode characters in data */ 177 - static unsigned long 178 - utf16_strnlen(efi_char16_t *s, size_t maxlength) 179 - { 180 - unsigned long length = 0; 181 - 182 - while (*s++ != 0 && length < maxlength) 183 - length++; 184 - return length; 185 - } 186 - 187 - static inline unsigned long 188 - utf16_strlen(efi_char16_t *s) 189 - { 190 - return utf16_strnlen(s, ~0UL); 191 - } 192 - 193 - /* 194 - * Return the number of bytes is the length of this string 195 - * Note: this is NOT the same as the number of unicode characters 196 - */ 197 - static inline unsigned long 198 - utf16_strsize(efi_char16_t *data, unsigned long maxlength) 199 - { 200 - return utf16_strnlen(data, maxlength/sizeof(efi_char16_t)) * sizeof(efi_char16_t); 201 - } 202 - 203 - static inline int 204 - utf16_strncmp(const efi_char16_t *a, const efi_char16_t *b, size_t len) 205 - { 206 - while (1) { 207 - if (len == 0) 208 - return 0; 209 - if (*a < *b) 210 - return -1; 211 - if (*a > *b) 212 - return 1; 213 - if (*a == 0) /* implies *b == 0 */ 214 - return 0; 215 - a++; 216 - b++; 217 - len--; 218 - } 219 - } 220 - 221 175 static bool 222 176 validate_device_path(struct efi_variable *var, int match, u8 *buffer, 223 177 unsigned long len) ··· 224 268 u16 filepathlength; 225 269 int i, desclength = 0, namelen; 226 270 227 - namelen = utf16_strnlen(var->VariableName, sizeof(var->VariableName)); 271 + namelen = ucs2_strnlen(var->VariableName, sizeof(var->VariableName)); 228 272 229 273 /* Either "Boot" or "Driver" followed by four digits of hex */ 230 274 for (i = match; i < match+4; i++) { ··· 247 291 * There's no stored length for the description, so it has to be
248 292 * found by hand 249 293 */ 250 - desclength = utf16_strsize((efi_char16_t *)(buffer + 6), len - 6) + 2; 294 + desclength = ucs2_strsize((efi_char16_t *)(buffer + 6), len - 6) + 2; 251 295 252 296 /* Each boot entry must have a descriptor */ 253 297 if (!desclength) ··· 392 436 check_var_size_locked(struct efivars *efivars, u32 attributes, 393 437 unsigned long size) 394 438 { 395 - u64 storage_size, remaining_size, max_size; 396 - efi_status_t status; 397 439 const struct efivar_operations *fops = efivars->ops; 398 440 399 - if (!efivars->ops->query_variable_info) 441 + if (!efivars->ops->query_variable_store) 400 442 return EFI_UNSUPPORTED; 401 443 402 - status = fops->query_variable_info(attributes, &storage_size, 403 - &remaining_size, &max_size); 404 - 405 - if (status != EFI_SUCCESS) 406 - return status; 407 - 408 - if (!storage_size || size > remaining_size || size > max_size || 409 - (remaining_size - size) < (storage_size / 2)) 410 - return EFI_OUT_OF_RESOURCES; 411 - 412 - return status; 444 + return fops->query_variable_store(attributes, size); 413 445 } 414 446 415 447 ··· 537 593 spin_lock_irq(&efivars->lock); 538 594 539 595 status = check_var_size_locked(efivars, new_var->Attributes, 540 - new_var->DataSize + utf16_strsize(new_var->VariableName, 1024)); 596 + new_var->DataSize + ucs2_strsize(new_var->VariableName, 1024)); 541 597 542 598 if (status == EFI_SUCCESS || status == EFI_UNSUPPORTED) 543 599 status = efivars->ops->set_variable(new_var->VariableName, ··· 715 771 * QueryVariableInfo() isn't supported by the firmware.
716 772 */ 717 773 718 - varsize = datasize + utf16_strsize(var->var.VariableName, 1024); 774 + varsize = datasize + ucs2_strsize(var->var.VariableName, 1024); 719 775 status = check_var_size(efivars, attributes, varsize); 720 776 721 777 if (status != EFI_SUCCESS) { ··· 1167 1223 1168 1224 inode = NULL; 1169 1225 1170 - len = utf16_strlen(entry->var.VariableName); 1226 + len = ucs2_strlen(entry->var.VariableName); 1171 1227 1172 1228 /* name, plus '-', plus GUID, plus NUL*/ 1173 1229 name = kmalloc(len + 1 + GUID_LEN + 1, GFP_ATOMIC); ··· 1425 1481 1426 1482 if (efi_guidcmp(entry->var.VendorGuid, vendor)) 1427 1483 continue; 1428 - if (utf16_strncmp(entry->var.VariableName, efi_name, 1429 - utf16_strlen(efi_name))) { 1484 + if (ucs2_strncmp(entry->var.VariableName, efi_name, 1485 + ucs2_strlen(efi_name))) { 1430 1486 /* 1431 1487 * Check if an old format, 1432 1488 * which doesn't support holding ··· 1438 1494 for (i = 0; i < DUMP_NAME_LEN; i++) 1439 1495 efi_name_old[i] = name_old[i]; 1440 1496 1441 - if (utf16_strncmp(entry->var.VariableName, efi_name_old, 1442 - utf16_strlen(efi_name_old))) 1497 + if (ucs2_strncmp(entry->var.VariableName, efi_name_old, 1498 + ucs2_strlen(efi_name_old))) 1443 1499 continue; 1444 1500 } 1445 1501 ··· 1517 1573 * Does this variable already exist? */
1519 1575 list_for_each_entry_safe(search_efivar, n, &efivars->list, list) { 1520 - strsize1 = utf16_strsize(search_efivar->var.VariableName, 1024); 1521 - strsize2 = utf16_strsize(new_var->VariableName, 1024); 1576 + strsize1 = ucs2_strsize(search_efivar->var.VariableName, 1024); 1577 + strsize2 = ucs2_strsize(new_var->VariableName, 1024); 1522 1578 if (strsize1 == strsize2 && 1523 1579 !memcmp(&(search_efivar->var.VariableName), 1524 1580 new_var->VariableName, strsize1) && ··· 1534 1590 } 1535 1591 1536 1592 status = check_var_size_locked(efivars, new_var->Attributes, 1537 - new_var->DataSize + utf16_strsize(new_var->VariableName, 1024)); 1593 + new_var->DataSize + ucs2_strsize(new_var->VariableName, 1024)); 1538 1594 1539 1595 if (status && status != EFI_UNSUPPORTED) { 1540 1596 spin_unlock_irq(&efivars->lock); ··· 1558 1614 1559 1615 /* Create the entry in sysfs. Locking is not required here */ 1560 1616 status = efivar_create_sysfs_entry(efivars, 1561 - utf16_strsize(new_var->VariableName, 1617 + ucs2_strsize(new_var->VariableName, 1562 1618 1024), 1563 1619 new_var->VariableName, 1564 1620 &new_var->VendorGuid); ··· 1588 1644 * Does this variable already exist? */
1589 1645 1590 1646 list_for_each_entry_safe(search_efivar, n, &efivars->list, list) { 1591 - strsize1 = utf16_strsize(search_efivar->var.VariableName, 1024); 1592 - strsize2 = utf16_strsize(del_var->VariableName, 1024); 1647 + strsize1 = ucs2_strsize(search_efivar->var.VariableName, 1024); 1648 + strsize2 = ucs2_strsize(del_var->VariableName, 1024); 1593 1649 if (strsize1 == strsize2 && 1594 1650 !memcmp(&(search_efivar->var.VariableName), 1595 1651 del_var->VariableName, strsize1) && ··· 1635 1691 unsigned long strsize1, strsize2; 1636 1692 bool found = false; 1637 1693 1638 - strsize1 = utf16_strsize(variable_name, 1024); 1694 + strsize1 = ucs2_strsize(variable_name, 1024); 1639 1695 list_for_each_entry_safe(entry, n, &efivars->list, list) { 1640 - strsize2 = utf16_strsize(entry->var.VariableName, 1024); 1696 + strsize2 = ucs2_strsize(entry->var.VariableName, 1024); 1641 1697 if (strsize1 == strsize2 && 1642 1698 !memcmp(variable_name, &(entry->var.VariableName), 1643 1699 strsize2) && ··· 2075 2131 ops.get_variable = efi.get_variable; 2076 2132 ops.set_variable = efi.set_variable; 2077 2133 ops.get_next_variable = efi.get_next_variable; 2078 - ops.query_variable_info = efi.query_variable_info; 2134 + ops.query_variable_store = efi_query_variable_store; 2079 2135 2080 2136 error = register_efivars(&__efivars, &ops, efi_kobj); 2081 2137 if (error)
+1 -1
drivers/gpio/gpio-pca953x.c
··· 575 575 chip->gpio_chip.ngpio, 576 576 irq_base, 577 577 &pca953x_irq_simple_ops, 578 - NULL); 578 + chip); 579 579 if (!chip->domain) 580 580 return -ENODEV; 581 581
+5 -3
drivers/gpu/drm/drm_fb_helper.c
··· 1544 1544 if (!fb_helper->fb) 1545 1545 return 0; 1546 1546 1547 - drm_modeset_lock_all(dev); 1547 + mutex_lock(&fb_helper->dev->mode_config.mutex); 1548 1548 if (!drm_fb_helper_is_bound(fb_helper)) { 1549 1549 fb_helper->delayed_hotplug = true; 1550 - drm_modeset_unlock_all(dev); 1550 + mutex_unlock(&fb_helper->dev->mode_config.mutex); 1551 1551 return 0; 1552 1552 } 1553 1553 DRM_DEBUG_KMS("\n"); ··· 1558 1558 1559 1559 count = drm_fb_helper_probe_connector_modes(fb_helper, max_width, 1560 1560 max_height); 1561 + mutex_unlock(&fb_helper->dev->mode_config.mutex); 1562 + 1563 + drm_modeset_lock_all(dev); 1561 1564 drm_setup_crtcs(fb_helper); 1562 1565 drm_modeset_unlock_all(dev); 1563 - 1564 1566 drm_fb_helper_set_par(fb_helper->fbdev); 1565 1567 1566 1568 return 0;
+3 -10
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 751 751 int i; 752 752 unsigned char misc = 0; 753 753 unsigned char ext_vga[6]; 754 - unsigned char ext_vga_index24; 755 - unsigned char dac_index90 = 0; 756 754 u8 bppshift; 757 755 758 756 static unsigned char dacvalue[] = { ··· 801 803 option2 = 0x0000b000; 802 804 break; 803 805 case G200_ER: 804 - dac_index90 = 0; 805 806 break; 806 807 } 807 808 ··· 849 852 WREG_DAC(i, dacvalue[i]); 850 853 } 851 854 852 - if (mdev->type == G200_ER) { 853 - WREG_DAC(0x90, dac_index90); 854 - } 855 - 855 + if (mdev->type == G200_ER) 856 + WREG_DAC(0x90, 0); 856 857 857 858 if (option) 858 859 pci_write_config_dword(dev->pdev, PCI_MGA_OPTION, option); ··· 947 952 if (mdev->type == G200_WB) 948 953 ext_vga[1] |= 0x88; 949 954 950 - ext_vga_index24 = 0x05; 951 - 952 955 /* Set pixel clocks */ 953 956 misc = 0x2d; 954 957 WREG8(MGA_MISC_OUT, misc); ··· 958 965 } 959 966 960 967 if (mdev->type == G200_ER) 961 - WREG_ECRT(24, ext_vga_index24); 968 + WREG_ECRT(0x24, 0x5); 962 969 963 970 if (mdev->type == G200_EV) { 964 971 WREG_ECRT(6, 0);
+17
drivers/gpu/drm/nouveau/core/subdev/bios/base.c
··· 248 248 } 249 249 } 250 250 251 + static void 252 + nouveau_bios_shadow_platform(struct nouveau_bios *bios) 253 + { 254 + struct pci_dev *pdev = nv_device(bios)->pdev; 255 + size_t size; 256 + 257 + void __iomem *rom = pci_platform_rom(pdev, &size); 258 + if (rom && size) { 259 + bios->data = kmalloc(size, GFP_KERNEL); 260 + if (bios->data) { 261 + memcpy_fromio(bios->data, rom, size); 262 + bios->size = size; 263 + } 264 + } 265 + } 266 + 251 267 static int 252 268 nouveau_bios_score(struct nouveau_bios *bios, const bool writeable) 253 269 { ··· 304 288 { "PROM", nouveau_bios_shadow_prom, false, 0, 0, NULL }, 305 289 { "ACPI", nouveau_bios_shadow_acpi, true, 0, 0, NULL }, 306 290 { "PCIROM", nouveau_bios_shadow_pci, true, 0, 0, NULL }, 291 + { "PLATFORM", nouveau_bios_shadow_platform, true, 0, 0, NULL }, 307 292 {} 308 293 }; 309 294 struct methods *mthd, *best;
+1 -1
drivers/gpu/drm/nouveau/nv50_display.c
··· 479 479 { 480 480 struct nv50_display_flip *flip = data; 481 481 if (nouveau_bo_rd32(flip->disp->sync, flip->chan->addr / 4) == 482 - flip->chan->data); 482 + flip->chan->data) 483 483 return true; 484 484 usleep_range(1, 2); 485 485 return false;
+26
drivers/gpu/drm/radeon/radeon_bios.c
··· 99 99 return true; 100 100 } 101 101 102 + static bool radeon_read_platform_bios(struct radeon_device *rdev) 103 + { 104 + uint8_t __iomem *bios; 105 + size_t size; 106 + 107 + rdev->bios = NULL; 108 + 109 + bios = pci_platform_rom(rdev->pdev, &size); 110 + if (!bios) { 111 + return false; 112 + } 113 + 114 + if (size == 0 || bios[0] != 0x55 || bios[1] != 0xaa) { 115 + return false; 116 + } 117 + rdev->bios = kmemdup(bios, size, GFP_KERNEL); 118 + if (rdev->bios == NULL) { 119 + return false; 120 + } 121 + 122 + return true; 123 + } 124 + 102 125 #ifdef CONFIG_ACPI 103 126 /* ATRM is used to get the BIOS on the discrete cards in 104 127 * dual-gpu systems. ··· 642 619 r = radeon_read_bios(rdev); 643 620 if (r == false) { 644 621 r = radeon_read_disabled_bios(rdev); 622 + } 623 + if (r == false) { 624 + r = radeon_read_platform_bios(rdev); 645 625 } 646 626 if (r == false || rdev->bios == NULL) { 647 627 DRM_ERROR("Unable to locate a BIOS ROM\n");
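The new `radeon_read_platform_bios()` above accepts a candidate ROM image only if it is non-empty and begins with the 0x55 0xAA expansion-ROM signature. That validity test can be sketched on its own (the bounds handling here is a simplification, not the driver's exact code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Accept a ROM image only when it is present, long enough to carry a
 * signature, and starts with the 0x55 0xAA expansion-ROM magic. */
static bool rom_image_valid(const uint8_t *bios, size_t size)
{
	return bios && size >= 2 && bios[0] == 0x55 && bios[1] == 0xaa;
}
```

The driver itself checks `size == 0` plus the two signature bytes; the `size >= 2` here just keeps the sketch's reads in bounds.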
+4
drivers/gpu/drm/udl/udl_connector.c
··· 61 61 int ret; 62 62 63 63 edid = (struct edid *)udl_get_edid(udl); 64 + if (!edid) { 65 + drm_mode_connector_update_edid_property(connector, NULL); 66 + return 0; 67 + } 64 68 65 69 /* 66 70 * We only read the main block, but if the monitor reports extension
+2
drivers/hwspinlock/hwspinlock_core.c
··· 416 416 ret = pm_runtime_get_sync(dev); 417 417 if (ret < 0) { 418 418 dev_err(dev, "%s: can't power on device\n", __func__); 419 + pm_runtime_put_noidle(dev); 420 + module_put(dev->driver->owner); 419 421 return ret; 420 422 } 421 423
+1
drivers/idle/intel_idle.c
··· 465 465 ICPU(0x3c, idle_cpu_hsw), 466 466 ICPU(0x3f, idle_cpu_hsw), 467 467 ICPU(0x45, idle_cpu_hsw), 468 + ICPU(0x46, idle_cpu_hsw), 468 469 {} 469 470 }; 470 471 MODULE_DEVICE_TABLE(x86cpu, intel_idle_ids);
+4 -4
drivers/input/tablet/wacom_wac.c
··· 359 359 case 0x802: /* Intuos4 General Pen */ 360 360 case 0x804: /* Intuos4 Marker Pen */ 361 361 case 0x40802: /* Intuos4 Classic Pen */ 362 - case 0x18803: /* DTH2242 Grip Pen */ 362 + case 0x18802: /* DTH2242 Grip Pen */ 363 363 case 0x022: 364 364 wacom->tool[idx] = BTN_TOOL_PEN; 365 365 break; ··· 1912 1912 { "Wacom Intuos4 12x19", WACOM_PKGLEN_INTUOS, 97536, 60960, 2047, 1913 1913 63, INTUOS4L, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES }; 1914 1914 static const struct wacom_features wacom_features_0xBC = 1915 - { "Wacom Intuos4 WL", WACOM_PKGLEN_INTUOS, 40840, 25400, 2047, 1915 + { "Wacom Intuos4 WL", WACOM_PKGLEN_INTUOS, 40640, 25400, 2047, 1916 1916 63, INTUOS4, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES }; 1917 1917 static const struct wacom_features wacom_features_0x26 = 1918 1918 { "Wacom Intuos5 touch S", WACOM_PKGLEN_INTUOS, 31496, 19685, 2047, ··· 2144 2144 { USB_DEVICE_WACOM(0x44) }, 2145 2145 { USB_DEVICE_WACOM(0x45) }, 2146 2146 { USB_DEVICE_WACOM(0x59) }, 2147 - { USB_DEVICE_WACOM(0x5D) }, 2147 + { USB_DEVICE_DETAILED(0x5D, USB_CLASS_HID, 0, 0) }, 2148 2148 { USB_DEVICE_WACOM(0xB0) }, 2149 2149 { USB_DEVICE_WACOM(0xB1) }, 2150 2150 { USB_DEVICE_WACOM(0xB2) }, ··· 2209 2209 { USB_DEVICE_WACOM(0x47) }, 2210 2210 { USB_DEVICE_WACOM(0xF4) }, 2211 2211 { USB_DEVICE_WACOM(0xF8) }, 2212 - { USB_DEVICE_WACOM(0xF6) }, 2212 + { USB_DEVICE_DETAILED(0xF6, USB_CLASS_HID, 0, 0) }, 2213 2213 { USB_DEVICE_WACOM(0xFA) }, 2214 2214 { USB_DEVICE_LENOVO(0x6004) }, 2215 2215 { }
+2 -1
drivers/irqchip/irq-gic.c
··· 236 236 if (gic_arch_extn.irq_retrigger) 237 237 return gic_arch_extn.irq_retrigger(d); 238 238 239 - return -ENXIO; 239 + /* the genirq layer expects 0 if we can't retrigger in hardware */ 240 + return 0; 240 241 } 241 242 242 243 #ifdef CONFIG_SMP
+38 -13
drivers/md/dm-cache-target.c
··· 6 6 7 7 #include "dm.h" 8 8 #include "dm-bio-prison.h" 9 + #include "dm-bio-record.h" 9 10 #include "dm-cache-metadata.h" 10 11 11 12 #include <linux/dm-io.h> ··· 202 201 unsigned req_nr:2; 203 202 struct dm_deferred_entry *all_io_entry; 204 203 205 - /* writethrough fields */ 204 + /* 205 + * writethrough fields. These MUST remain at the end of this 206 + * structure and the 'cache' member must be the first as it 207 + * is used to determine the offsetof the writethrough fields. 208 + */ 206 209 struct cache *cache; 207 210 dm_cblock_t cblock; 208 211 bio_end_io_t *saved_bi_end_io; 212 + struct dm_bio_details bio_details; 209 213 }; 210 214 211 215 struct dm_cache_migration { ··· 519 513 /*---------------------------------------------------------------- 520 514 * Per bio data 521 515 *--------------------------------------------------------------*/ 522 - static struct per_bio_data *get_per_bio_data(struct bio *bio) 516 + 517 + /* 518 + * If using writeback, leave out struct per_bio_data's writethrough fields. 519 + */ 520 + #define PB_DATA_SIZE_WB (offsetof(struct per_bio_data, cache)) 521 + #define PB_DATA_SIZE_WT (sizeof(struct per_bio_data)) 522 + 523 + static size_t get_per_bio_data_size(struct cache *cache) 523 524 { 524 - struct per_bio_data *pb = dm_per_bio_data(bio, sizeof(struct per_bio_data)); 525 + return cache->features.write_through ? PB_DATA_SIZE_WT : PB_DATA_SIZE_WB;
526 + } 527 + 528 + static struct per_bio_data *get_per_bio_data(struct bio *bio, size_t data_size) 529 + { 530 + struct per_bio_data *pb = dm_per_bio_data(bio, data_size); 525 531 BUG_ON(!pb); 526 532 return pb; 527 533 } 528 534 529 - static struct per_bio_data *init_per_bio_data(struct bio *bio) 535 + static struct per_bio_data *init_per_bio_data(struct bio *bio, size_t data_size) 530 536 { 531 - struct per_bio_data *pb = get_per_bio_data(bio); 537 + struct per_bio_data *pb = get_per_bio_data(bio, data_size); 532 538 533 539 pb->tick = false; 534 540 pb->req_nr = dm_bio_get_target_bio_nr(bio); ··· 574 556 static void check_if_tick_bio_needed(struct cache *cache, struct bio *bio) 575 557 { 576 558 unsigned long flags; 577 - struct per_bio_data *pb = get_per_bio_data(bio); 559 + size_t pb_data_size = get_per_bio_data_size(cache); 560 + struct per_bio_data *pb = get_per_bio_data(bio, pb_data_size); 578 561 579 562 spin_lock_irqsave(&cache->lock, flags); 580 563 if (cache->need_tick_bio && ··· 654 635 655 636 static void writethrough_endio(struct bio *bio, int err) 656 637 { 657 - struct per_bio_data *pb = get_per_bio_data(bio); 638 + struct per_bio_data *pb = get_per_bio_data(bio, PB_DATA_SIZE_WT); 658 639 bio->bi_end_io = pb->saved_bi_end_io; 659 640 660 641 if (err) { ··· 662 643 return; 663 644 } 664 645 646 + dm_bio_restore(&pb->bio_details, bio); 665 647 remap_to_cache(pb->cache, bio, pb->cblock); 666 648 667 649 /* ··· 682 662 static void remap_to_origin_then_cache(struct cache *cache, struct bio *bio, 683 663 dm_oblock_t oblock, dm_cblock_t cblock) 684 664 { 685 - struct per_bio_data *pb = get_per_bio_data(bio); 665 + struct per_bio_data *pb = get_per_bio_data(bio, PB_DATA_SIZE_WT); 686 666 687 667 pb->cache = cache; 688 668 pb->cblock = cblock; 689 669 pb->saved_bi_end_io = bio->bi_end_io; 670 + dm_bio_record(&pb->bio_details, bio); 690 671 bio->bi_end_io = writethrough_endio; 691 672 692 673 remap_to_origin_clear_discard(pb->cache, bio, oblock);
··· 1056 1035 1057 1036 static void process_flush_bio(struct cache *cache, struct bio *bio) 1058 1037 { 1059 - struct per_bio_data *pb = get_per_bio_data(bio); 1038 + size_t pb_data_size = get_per_bio_data_size(cache); 1039 + struct per_bio_data *pb = get_per_bio_data(bio, pb_data_size); 1060 1040 1061 1041 BUG_ON(bio->bi_size); 1062 1042 if (!pb->req_nr) ··· 1129 1107 dm_oblock_t block = get_bio_block(cache, bio); 1130 1108 struct dm_bio_prison_cell *cell_prealloc, *old_ocell, *new_ocell; 1131 1109 struct policy_result lookup_result; 1132 - struct per_bio_data *pb = get_per_bio_data(bio); 1110 + size_t pb_data_size = get_per_bio_data_size(cache); 1111 + struct per_bio_data *pb = get_per_bio_data(bio, pb_data_size); 1133 1112 bool discarded_block = is_discarded_oblock(cache, block); 1134 1113 bool can_migrate = discarded_block || spare_migration_bandwidth(cache); 1135 1114 ··· 1904 1881 1905 1882 cache->ti = ca->ti; 1906 1883 ti->private = cache; 1907 - ti->per_bio_data_size = sizeof(struct per_bio_data); 1908 1884 ti->num_flush_bios = 2; 1909 1885 ti->flush_supported = true; 1910 1886 ··· 1912 1890 ti->discard_zeroes_data_unsupported = true; 1913 1891 1914 1892 memcpy(&cache->features, &ca->features, sizeof(cache->features)); 1893 + ti->per_bio_data_size = get_per_bio_data_size(cache); 1915 1894 1916 1895 cache->callbacks.congested_fn = cache_is_congested; 1917 1896 dm_table_add_target_callbacks(ti->table, &cache->callbacks); ··· 2115 2092 2116 2093 int r; 2117 2094 dm_oblock_t block = get_bio_block(cache, bio); 2095 + size_t pb_data_size = get_per_bio_data_size(cache); 2118 2096 bool can_migrate = false; 2119 2097 bool discarded_block; 2120 2098 struct dm_bio_prison_cell *cell; ··· 2132 2108 return DM_MAPIO_REMAPPED; 2133 2109 } 2134 2110 2135 - pb = init_per_bio_data(bio); 2111 + pb = init_per_bio_data(bio, pb_data_size); 2136 2112 2137 2113 if (bio->bi_rw & (REQ_FLUSH | REQ_FUA | REQ_DISCARD)) { 2138 2114 defer_bio(cache, bio);
··· 2217 2193 { 2218 2194 struct cache *cache = ti->private; 2219 2195 unsigned long flags; 2220 - struct per_bio_data *pb = get_per_bio_data(bio); 2196 + size_t pb_data_size = get_per_bio_data_size(cache); 2197 + struct per_bio_data *pb = get_per_bio_data(bio, pb_data_size); 2221 2198 2222 2199 if (pb->tick) { 2223 2200 policy_tick(cache->policy);
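The dm-cache hunks above size per-bio data differently for writeback and writethrough by keeping the writethrough-only members at the end of the struct and using `offsetof()` of the first such member (`cache`) as the smaller size. A standalone sketch of that trick, with simplified stand-in field types:

```c
#include <stddef.h>

/* Stand-in for struct per_bio_data: writethrough-only fields MUST stay
 * at the end, with 'cache' first, so offsetof() marks where they begin. */
struct per_bio_data_sketch {
	unsigned tick:1;
	unsigned req_nr:2;
	void *all_io_entry;

	/* writethrough-only fields */
	void *cache;
	unsigned cblock;
	void *saved_bi_end_io;
};

#define PB_DATA_SIZE_WB (offsetof(struct per_bio_data_sketch, cache))
#define PB_DATA_SIZE_WT (sizeof(struct per_bio_data_sketch))

/* Mirror of get_per_bio_data_size(): writethrough needs the full struct,
 * writeback only the prefix before 'cache'. */
static size_t per_bio_data_size(int write_through)
{
	return write_through ? PB_DATA_SIZE_WT : PB_DATA_SIZE_WB;
}
```

The payoff is that writeback-mode targets allocate less per-bio memory while both modes share one struct definition.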
+1
drivers/md/dm.c
··· 611 611 queue_io(md, bio); 612 612 } else { 613 613 /* done with normal IO or empty flush */ 614 + trace_block_bio_complete(md->queue, bio, io_error); 614 615 bio_endio(bio, io_error); 615 616 } 616 617 }
+10 -1
drivers/md/raid5.c
··· 184 184 return_bi = bi->bi_next; 185 185 bi->bi_next = NULL; 186 186 bi->bi_size = 0; 187 + trace_block_bio_complete(bdev_get_queue(bi->bi_bdev), 188 + bi, 0); 187 189 bio_endio(bi, 0); 188 190 bi = return_bi; 189 191 } ··· 3916 3914 rdev_dec_pending(rdev, conf->mddev); 3917 3915 3918 3916 if (!error && uptodate) { 3917 + trace_block_bio_complete(bdev_get_queue(raid_bi->bi_bdev), 3918 + raid_bi, 0); 3919 3919 bio_endio(raid_bi, 0); 3920 3920 if (atomic_dec_and_test(&conf->active_aligned_reads)) 3921 3921 wake_up(&conf->wait_for_stripe); ··· 4386 4382 if ( rw == WRITE ) 4387 4383 md_write_end(mddev); 4388 4384 4385 + trace_block_bio_complete(bdev_get_queue(bi->bi_bdev), 4386 + bi, 0); 4389 4387 bio_endio(bi, 0); 4390 4388 } 4391 4389 } ··· 4764 4758 handled++; 4765 4759 } 4766 4760 remaining = raid5_dec_bi_active_stripes(raid_bio); 4767 - if (remaining == 0) 4761 + if (remaining == 0) { 4762 + trace_block_bio_complete(bdev_get_queue(raid_bio->bi_bdev), 4763 + raid_bio, 0); 4768 4764 bio_endio(raid_bio, 0); 4765 + } 4769 4766 if (atomic_dec_and_test(&conf->active_aligned_reads)) 4770 4767 wake_up(&conf->wait_for_stripe); 4771 4768 return handled;
+1 -1
drivers/misc/vmw_vmci/Kconfig
··· 4 4 5 5 config VMWARE_VMCI 6 6 tristate "VMware VMCI Driver" 7 - depends on X86 && PCI 7 + depends on X86 && PCI && NET 8 8 help 9 9 This is VMware's Virtual Machine Communication Interface. It enables 10 10 high-speed communication between host and guest in a virtual
+2 -57
drivers/mtd/mtdchar.c
··· 1123 1123 } 1124 1124 #endif 1125 1125 1126 - static inline unsigned long get_vm_size(struct vm_area_struct *vma) 1127 - { 1128 - return vma->vm_end - vma->vm_start; 1129 - } 1130 - 1131 - static inline resource_size_t get_vm_offset(struct vm_area_struct *vma) 1132 - { 1133 - return (resource_size_t) vma->vm_pgoff << PAGE_SHIFT; 1134 - } 1135 - 1136 - /* 1137 - * Set a new vm offset. 1138 - * 1139 - * Verify that the incoming offset really works as a page offset, 1140 - * and that the offset and size fit in a resource_size_t. 1141 - */ 1142 - static inline int set_vm_offset(struct vm_area_struct *vma, resource_size_t off) 1143 - { 1144 - pgoff_t pgoff = off >> PAGE_SHIFT; 1145 - if (off != (resource_size_t) pgoff << PAGE_SHIFT) 1146 - return -EINVAL; 1147 - if (off + get_vm_size(vma) - 1 < off) 1148 - return -EINVAL; 1149 - vma->vm_pgoff = pgoff; 1150 - return 0; 1151 - } 1152 - 1153 1126 /* 1154 1127 * set up a mapping for shared memory segments 1155 1128 */ ··· 1132 1159 struct mtd_file_info *mfi = file->private_data; 1133 1160 struct mtd_info *mtd = mfi->mtd; 1134 1161 struct map_info *map = mtd->priv; 1135 - resource_size_t start, off; 1136 - unsigned long len, vma_len; 1137 1162 1138 1163 /* This is broken because it assumes the MTD device is map-based 1139 1164 and that mtd->priv is a valid struct map_info. It should be 1140 1165 replaced with something that uses the mtd_get_unmapped_area() 1141 1166 operation properly. */ 1142 1167 if (0 /*mtd->type == MTD_RAM || mtd->type == MTD_ROM*/) { 1143 - off = get_vm_offset(vma); 1144 - start = map->phys; 1145 - len = PAGE_ALIGN((start & ~PAGE_MASK) + map->size); 1146 - start &= PAGE_MASK; 1147 - vma_len = get_vm_size(vma); 1148 - 1149 - /* Overflow in off+len? */ 1150 - if (vma_len + off < off) 1151 - return -EINVAL; 1152 - /* Does it fit in the mapping? */ 1153 - if (vma_len + off > len) 1154 - return -EINVAL; 1155 - 1156 - off += start; 1157 - /* Did that overflow? */
1158 - if (off < start) 1159 - return -EINVAL; 1160 - if (set_vm_offset(vma, off) < 0) 1161 - return -EINVAL; 1162 - vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP; 1163 - 1164 1168 #ifdef pgprot_noncached 1165 - if (file->f_flags & O_DSYNC || off >= __pa(high_memory)) 1169 + if (file->f_flags & O_DSYNC || map->phys >= __pa(high_memory)) 1166 1170 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 1167 1171 #endif 1168 - if (io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT, 1169 - vma->vm_end - vma->vm_start, 1170 - vma->vm_page_prot)) 1171 - return -EAGAIN; 1172 - 1173 - return 0; 1172 + return vm_iomap_memory(vma, map->phys, map->size); 1174 1173 } 1175 1174 return -ENOSYS; 1176 1175 #else
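The mtdchar hunk above deletes the open-coded page-offset and overflow checks in favour of `vm_iomap_memory()`, which centralizes them. The two checks the removed `set_vm_offset()` performed can be sketched in userspace as follows, assuming 4 KiB pages rather than the kernel's per-arch page size:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT_SKETCH 12	/* assumed 4 KiB pages for this sketch */

/* Mirror of the deleted set_vm_offset() checks: the offset must be an
 * exact page multiple, and off + len must not wrap around. */
static bool vm_offset_ok(uint64_t off, uint64_t vma_len)
{
	uint64_t pgoff = off >> PAGE_SHIFT_SKETCH;

	if (off != (pgoff << PAGE_SHIFT_SKETCH))
		return false;	/* not representable as a page offset */
	if (off + vma_len - 1 < off)
		return false;	/* off + len wraps */
	return true;
}
```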
+73 -28
drivers/net/bonding/bond_main.c
··· 848 848 if (bond->dev->flags & IFF_ALLMULTI) 849 849 dev_set_allmulti(old_active->dev, -1); 850 850 851 + netif_addr_lock_bh(bond->dev); 851 852 netdev_for_each_mc_addr(ha, bond->dev) 852 853 dev_mc_del(old_active->dev, ha->addr); 854 + netif_addr_unlock_bh(bond->dev); 853 855 } 854 856 855 857 if (new_active) { ··· 862 860 if (bond->dev->flags & IFF_ALLMULTI) 863 861 dev_set_allmulti(new_active->dev, 1); 864 862 863 + netif_addr_lock_bh(bond->dev); 865 864 netdev_for_each_mc_addr(ha, bond->dev) 866 865 dev_mc_add(new_active->dev, ha->addr); 866 + netif_addr_unlock_bh(bond->dev); 867 867 } 868 868 } 869 869 ··· 1907 1903 bond_destroy_slave_symlinks(bond_dev, slave_dev); 1908 1904 1909 1905 err_detach: 1906 + if (!USES_PRIMARY(bond->params.mode)) { 1907 + netif_addr_lock_bh(bond_dev); 1908 + bond_mc_list_flush(bond_dev, slave_dev); 1909 + netif_addr_unlock_bh(bond_dev); 1910 + } 1911 + bond_del_vlans_from_slave(bond, slave_dev); 1910 1912 write_lock_bh(&bond->lock); 1911 1913 bond_detach_slave(bond, new_slave); 1914 + if (bond->primary_slave == new_slave) 1915 + bond->primary_slave = NULL; 1912 1916 write_unlock_bh(&bond->lock); 1917 + if (bond->curr_active_slave == new_slave) { 1918 + read_lock(&bond->lock); 1919 + write_lock_bh(&bond->curr_slave_lock); 1920 + bond_change_active_slave(bond, NULL); 1921 + bond_select_active_slave(bond); 1922 + write_unlock_bh(&bond->curr_slave_lock); 1923 + read_unlock(&bond->lock); 1924 + } 1925 + slave_disable_netpoll(new_slave); 1913 1926 1914 1927 err_close: 1928 + slave_dev->priv_flags &= ~IFF_BONDING; 1915 1929 dev_close(slave_dev); 1916 1930 1917 1931 err_unset_master: ··· 3194 3172 struct net_device *slave_dev) 3195 3173 { 3196 3174 struct slave *slave = bond_slave_get_rtnl(slave_dev); 3197 - struct bonding *bond = slave->bond; 3198 - struct net_device *bond_dev = slave->bond->dev; 3175 + struct bonding *bond; 3176 + struct net_device *bond_dev; 3199 3177 u32 old_speed; 3200 3178 u8 old_duplex; 3179 + 3180 + /* A netdev 
event can be generated while enslaving a device 3181 + * before netdev_rx_handler_register is called in which case 3182 + * slave will be NULL 3183 + */ 3184 + if (!slave) 3185 + return NOTIFY_DONE; 3186 + bond_dev = slave->bond->dev; 3187 + bond = slave->bond; 3201 3188 3202 3189 switch (event) { 3203 3190 case NETDEV_UNREGISTER: ··· 3321 3290 */ 3322 3291 static int bond_xmit_hash_policy_l23(struct sk_buff *skb, int count) 3323 3292 { 3324 - struct ethhdr *data = (struct ethhdr *)skb->data; 3325 - struct iphdr *iph; 3326 - struct ipv6hdr *ipv6h; 3293 + const struct ethhdr *data; 3294 + const struct iphdr *iph; 3295 + const struct ipv6hdr *ipv6h; 3327 3296 u32 v6hash; 3328 - __be32 *s, *d; 3297 + const __be32 *s, *d; 3329 3298 3330 3299 if (skb->protocol == htons(ETH_P_IP) && 3331 - skb_network_header_len(skb) >= sizeof(*iph)) { 3300 + pskb_network_may_pull(skb, sizeof(*iph))) { 3332 3301 iph = ip_hdr(skb); 3302 + data = (struct ethhdr *)skb->data; 3333 3303 return ((ntohl(iph->saddr ^ iph->daddr) & 0xffff) ^ 3334 3304 (data->h_dest[5] ^ data->h_source[5])) % count; 3335 3305 } else if (skb->protocol == htons(ETH_P_IPV6) && 3336 - skb_network_header_len(skb) >= sizeof(*ipv6h)) { 3306 + pskb_network_may_pull(skb, sizeof(*ipv6h))) { 3337 3307 ipv6h = ipv6_hdr(skb); 3308 + data = (struct ethhdr *)skb->data; 3338 3309 s = &ipv6h->saddr.s6_addr32[0]; 3339 3310 d = &ipv6h->daddr.s6_addr32[0]; 3340 3311 v6hash = (s[1] ^ d[1]) ^ (s[2] ^ d[2]) ^ (s[3] ^ d[3]); ··· 3355 3322 static int bond_xmit_hash_policy_l34(struct sk_buff *skb, int count) 3356 3323 { 3357 3324 u32 layer4_xor = 0; 3358 - struct iphdr *iph; 3359 - struct ipv6hdr *ipv6h; 3360 - __be32 *s, *d; 3361 - __be16 *layer4hdr; 3325 + const struct iphdr *iph; 3326 + const struct ipv6hdr *ipv6h; 3327 + const __be32 *s, *d; 3328 + const __be16 *l4 = NULL; 3329 + __be16 _l4[2]; 3330 + int noff = skb_network_offset(skb); 3331 + int poff; 3362 3332 3363 3333 if (skb->protocol == htons(ETH_P_IP) && 3364 - 
skb_network_header_len(skb) >= sizeof(*iph)) { 3334 + pskb_may_pull(skb, noff + sizeof(*iph))) { 3365 3335 iph = ip_hdr(skb); 3366 - if (!ip_is_fragment(iph) && 3367 - (iph->protocol == IPPROTO_TCP || 3368 - iph->protocol == IPPROTO_UDP) && 3369 - (skb_headlen(skb) - skb_network_offset(skb) >= 3370 - iph->ihl * sizeof(u32) + sizeof(*layer4hdr) * 2)) { 3371 - layer4hdr = (__be16 *)((u32 *)iph + iph->ihl); 3372 - layer4_xor = ntohs(*layer4hdr ^ *(layer4hdr + 1)); 3336 + poff = proto_ports_offset(iph->protocol); 3337 + 3338 + if (!ip_is_fragment(iph) && poff >= 0) { 3339 + l4 = skb_header_pointer(skb, noff + (iph->ihl << 2) + poff, 3340 + sizeof(_l4), &_l4); 3341 + if (l4) 3342 + layer4_xor = ntohs(l4[0] ^ l4[1]); 3373 3343 } 3374 3344 return (layer4_xor ^ 3375 3345 ((ntohl(iph->saddr ^ iph->daddr)) & 0xffff)) % count; 3376 3346 } else if (skb->protocol == htons(ETH_P_IPV6) && 3377 - skb_network_header_len(skb) >= sizeof(*ipv6h)) { 3347 + pskb_may_pull(skb, noff + sizeof(*ipv6h))) { 3378 3348 ipv6h = ipv6_hdr(skb); 3379 - if ((ipv6h->nexthdr == IPPROTO_TCP || 3380 - ipv6h->nexthdr == IPPROTO_UDP) && 3381 - (skb_headlen(skb) - skb_network_offset(skb) >= 3382 - sizeof(*ipv6h) + sizeof(*layer4hdr) * 2)) { 3383 - layer4hdr = (__be16 *)(ipv6h + 1); 3384 - layer4_xor = ntohs(*layer4hdr ^ *(layer4hdr + 1)); 3349 + poff = proto_ports_offset(ipv6h->nexthdr); 3350 + if (poff >= 0) { 3351 + l4 = skb_header_pointer(skb, noff + sizeof(*ipv6h) + poff, 3352 + sizeof(_l4), &_l4); 3353 + if (l4) 3354 + layer4_xor = ntohs(l4[0] ^ l4[1]); 3385 3355 } 3386 3356 s = &ipv6h->saddr.s6_addr32[0]; 3387 3357 d = &ipv6h->daddr.s6_addr32[0]; ··· 4918 4882 static void __net_exit bond_net_exit(struct net *net) 4919 4883 { 4920 4884 struct bond_net *bn = net_generic(net, bond_net_id); 4885 + struct bonding *bond, *tmp_bond; 4886 + LIST_HEAD(list); 4921 4887 4922 4888 bond_destroy_sysfs(bn); 4923 4889 bond_destroy_proc_dir(bn); 4890 + 4891 + /* Kill off any bonds created after unregistering bond 
rtnl ops */ 4892 + rtnl_lock(); 4893 + list_for_each_entry_safe(bond, tmp_bond, &bn->dev_list, bond_list) 4894 + unregister_netdevice_queue(bond->dev, &list); 4895 + unregister_netdevice_many(&list); 4896 + rtnl_unlock(); 4924 4897 } 4925 4898 4926 4899 static struct pernet_operations bond_net_ops = { ··· 4983 4938 4984 4939 bond_destroy_debugfs(); 4985 4940 4986 - unregister_pernet_subsys(&bond_net_ops); 4987 4941 rtnl_link_unregister(&bond_link_ops); 4942 + unregister_pernet_subsys(&bond_net_ops); 4988 4943 4989 4944 #ifdef CONFIG_NET_POLL_CONTROLLER 4990 4945 /*
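The bonding hunks above replace raw header arithmetic with `skb_header_pointer()`-based access before hashing; the hash arithmetic itself is unchanged. A userspace sketch of that layer3+4 hash (host byte order, made-up function name, ports passed in directly instead of being pulled from the skb — none of this is the kernel API):

```c
#include <stdint.h>

/* Sketch of the bonding layer3+4 hash after the fix: the two port
 * numbers are XORed together (l4[0] ^ l4[1] in the patch), folded
 * with the low 16 bits of saddr ^ daddr, and reduced modulo the
 * slave count. */
static uint32_t bond_hash_l34(uint32_t saddr, uint32_t daddr,
                              uint16_t sport, uint16_t dport,
                              int count)
{
    uint32_t layer4_xor = (uint32_t)(sport ^ dport);

    return (layer4_xor ^ ((saddr ^ daddr) & 0xffff)) % (uint32_t)count;
}
```

The point of the patch is not the arithmetic but that `skb_header_pointer()` copies the ports out even when the headers are not linear in the skb, where the old open-coded pointer math silently read garbage.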
+8 -2
drivers/net/can/mcp251x.c
··· 929 929 struct mcp251x_priv *priv = netdev_priv(net);
930 930 struct spi_device *spi = priv->spi;
931 931 struct mcp251x_platform_data *pdata = spi->dev.platform_data;
932 + unsigned long flags;
932 933 int ret;
933 934
934 935 ret = open_candev(net);
··· 946 945 priv->tx_skb = NULL;
947 946 priv->tx_len = 0;
948 947
948 + flags = IRQF_ONESHOT;
949 + if (pdata->irq_flags)
950 + flags |= pdata->irq_flags;
951 + else
952 + flags |= IRQF_TRIGGER_FALLING;
953 +
949 954 ret = request_threaded_irq(spi->irq, NULL, mcp251x_can_ist,
950 - pdata->irq_flags ? pdata->irq_flags : IRQF_TRIGGER_FALLING,
951 - DEVICE_NAME, priv);
955 + flags, DEVICE_NAME, priv);
952 956 if (ret) {
953 957 dev_err(&spi->dev, "failed to acquire irq %d\n", spi->irq);
954 958 if (pdata->transceiver_enable)
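The mcp251x hunk exists because `request_threaded_irq()` with a NULL primary handler must pass `IRQF_ONESHOT`. A minimal sketch of the flag composition only, with illustrative stand-in values for the `IRQF_*` constants (the real ones live in `<linux/interrupt.h>`):

```c
/* Illustrative values, not the real kernel constants. */
#define IRQF_TRIGGER_FALLING 0x02ul
#define IRQF_ONESHOT         0x2000ul

/* The fix: IRQF_ONESHOT is always set, on top of whatever trigger
 * flags platform data supplies (or the falling-edge default). */
static unsigned long mcp251x_irq_flags(unsigned long pdata_irq_flags)
{
    unsigned long flags = IRQF_ONESHOT;

    if (pdata_irq_flags)
        flags |= pdata_irq_flags;
    else
        flags |= IRQF_TRIGGER_FALLING;
    return flags;
}
```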
+15 -16
drivers/net/can/sja1000/sja1000_of_platform.c
··· 96 96 struct net_device *dev;
97 97 struct sja1000_priv *priv;
98 98 struct resource res;
99 - const u32 *prop;
100 - int err, irq, res_size, prop_size;
99 + u32 prop;
100 + int err, irq, res_size;
101 101 void __iomem *base;
102 102
103 103 err = of_address_to_resource(np, 0, &res);
··· 138 138 priv->read_reg = sja1000_ofp_read_reg;
139 139 priv->write_reg = sja1000_ofp_write_reg;
140 140
141 - prop = of_get_property(np, "nxp,external-clock-frequency", &prop_size);
142 - if (prop && (prop_size == sizeof(u32)))
143 - priv->can.clock.freq = *prop / 2;
141 + err = of_property_read_u32(np, "nxp,external-clock-frequency", &prop);
142 + if (!err)
143 + priv->can.clock.freq = prop / 2;
144 144 else
145 145 priv->can.clock.freq = SJA1000_OFP_CAN_CLOCK; /* default */
146 146
147 - prop = of_get_property(np, "nxp,tx-output-mode", &prop_size);
148 - if (prop && (prop_size == sizeof(u32)))
149 - priv->ocr |= *prop & OCR_MODE_MASK;
147 + err = of_property_read_u32(np, "nxp,tx-output-mode", &prop);
148 + if (!err)
149 + priv->ocr |= prop & OCR_MODE_MASK;
150 150 else
151 151 priv->ocr |= OCR_MODE_NORMAL; /* default */
152 152
153 - prop = of_get_property(np, "nxp,tx-output-config", &prop_size);
154 - if (prop && (prop_size == sizeof(u32)))
155 - priv->ocr |= (*prop << OCR_TX_SHIFT) & OCR_TX_MASK;
153 + err = of_property_read_u32(np, "nxp,tx-output-config", &prop);
154 + if (!err)
155 + priv->ocr |= (prop << OCR_TX_SHIFT) & OCR_TX_MASK;
156 156 else
157 157 priv->ocr |= OCR_TX0_PULLDOWN; /* default */
158 158
159 - prop = of_get_property(np, "nxp,clock-out-frequency", &prop_size);
160 - if (prop && (prop_size == sizeof(u32)) && *prop) {
161 - u32 divider = priv->can.clock.freq * 2 / *prop;
159 + err = of_property_read_u32(np, "nxp,clock-out-frequency", &prop);
160 + if (!err && prop) {
161 + u32 divider = priv->can.clock.freq * 2 / prop;
162 162
163 163 if (divider > 1)
164 164 priv->cdr |= divider / 2 - 1;
··· 168 168 priv->cdr |= CDR_CLK_OFF; /* default */
169 169 }
170 170
171 - prop = of_get_property(np, "nxp,no-comparator-bypass", NULL);
172 - if (!prop)
171 + if (!of_property_read_bool(np, "nxp,no-comparator-bypass"))
173 172 priv->cdr |= CDR_CBP; /* default */
174 173
175 174 priv->irq_flags = IRQF_SHARED;
+1 -1
drivers/net/ethernet/8390/ax88796.c
··· 828 828 struct ei_device *ei_local;
829 829 struct ax_device *ax;
830 830 struct resource *irq, *mem, *mem2;
831 - resource_size_t mem_size, mem2_size = 0;
831 + unsigned long mem_size, mem2_size = 0;
832 832 int ret = 0;
833 833
834 834 dev = ax__alloc_ei_netdev(sizeof(struct ax_device));
+5 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 2615 2615 }
2616 2616 }
2617 2617
2618 + /* initialize FW coalescing state machines in RAM */
2619 + bnx2x_update_coalesce(bp);
2620 +
2618 2621 /* setup the leading queue */
2619 2622 rc = bnx2x_setup_leading(bp);
2620 2623 if (rc) {
··· 4740 4737 u32 enable_flag = disable ? 0 : (1 << HC_INDEX_DATA_HC_ENABLED_SHIFT);
4741 4738 u32 addr = BAR_CSTRORM_INTMEM +
4742 4739 CSTORM_STATUS_BLOCK_DATA_FLAGS_OFFSET(fw_sb_id, sb_index);
4743 - u16 flags = REG_RD16(bp, addr);
4740 + u8 flags = REG_RD8(bp, addr);
4744 4741 /* clear and set */
4745 4742 flags &= ~HC_INDEX_DATA_HC_ENABLED;
4746 4743 flags |= enable_flag;
4747 - REG_WR16(bp, addr, flags);
4744 + REG_WR8(bp, addr, flags);
4748 4745 DP(NETIF_MSG_IFUP,
4749 4746 "port %x fw_sb_id %d sb_index %d disable %d\n",
4750 4747 port, fw_sb_id, sb_index, disable);
+6 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 4959 4959 q);
4960 4960 }
4961 4961
4962 - if (!NO_FCOE(bp)) {
4962 + if (!NO_FCOE(bp) && CNIC_ENABLED(bp)) {
4963 4963 fp = &bp->fp[FCOE_IDX(bp)];
4964 4964 queue_params.q_obj = &bnx2x_sp_obj(bp, fp).q_obj;
4965 4965
··· 9946 9946 REG_RD(bp, NIG_REG_NIG_INT_STS_CLR_0);
9947 9947 }
9948 9948 }
9949 + if (!CHIP_IS_E1x(bp))
9950 + /* block FW from writing to host */
9951 + REG_WR(bp, PGLUE_B_REG_INTERNAL_PFID_ENABLE_MASTER, 0);
9952 +
9949 9953 /* wait until BRB is empty */
9950 9954 tmp_reg = REG_RD(bp, BRB1_REG_NUM_OF_FULL_BLOCKS);
9951 9955 while (timer_count) {
··· 13454 13450 RCU_INIT_POINTER(bp->cnic_ops, NULL);
13455 13451 mutex_unlock(&bp->cnic_mutex);
13456 13452 synchronize_rcu();
13453 + bp->cnic_enabled = false;
13457 13454 kfree(bp->cnic_kwq);
13458 13455 bp->cnic_kwq = NULL;
13459 13456
+3 -2
drivers/net/ethernet/emulex/benet/be_main.c
··· 772 772
773 773 if (vlan_tx_tag_present(skb)) {
774 774 vlan_tag = be_get_tx_vlan_tag(adapter, skb);
775 - __vlan_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
776 - skb->vlan_tci = 0;
775 + skb = __vlan_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
776 + if (skb)
777 + skb->vlan_tci = 0;
777 778 }
778 779
779 780 return skb;
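The be2net fix restores the contract of `__vlan_put_tag()`: it consumes the skb and returns a possibly different (or NULL) pointer, so the caller must reassign and re-check instead of keeping the old pointer. The same contract in plain C, using a toy `realloc()`-based stand-in (everything here is illustrative, not the kernel helper):

```c
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for __vlan_put_tag(): takes ownership of buf, may
 * reallocate it, and returns the (possibly new, possibly NULL)
 * pointer.  The caller must use the return value. */
static char *grow_and_tag(char *buf, size_t len)
{
    char *nbuf = realloc(buf, len + 4);

    if (!nbuf) {
        free(buf);               /* consumed on failure, like the skb */
        return NULL;
    }
    memset(nbuf + len, 0xA, 4);  /* pretend to insert a 4-byte tag */
    return nbuf;
}

/* Correct caller: reassign, then NULL-check -- the shape of the fix. */
static int demo(void)
{
    char *buf = calloc(8, 1);

    if (!buf)
        return -1;
    buf = grow_and_tag(buf, 8);  /* old pointer may now be dangling */
    if (!buf)
        return -1;
    {
        int ok = (buf[8] == 0xA);
        free(buf);
        return ok;
    }
}
```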
+1
drivers/net/ethernet/freescale/fec_main.c
··· 997 997 } else {
998 998 if (fep->link) {
999 999 fec_stop(ndev);
1000 + fep->link = phy_dev->link;
1000 1001 status_change = 1;
1001 1002 }
1002 1003 }
+25 -11
drivers/net/ethernet/intel/e100.c
··· 870 870 } 871 871 872 872 static int e100_exec_cb(struct nic *nic, struct sk_buff *skb, 873 - void (*cb_prepare)(struct nic *, struct cb *, struct sk_buff *)) 873 + int (*cb_prepare)(struct nic *, struct cb *, struct sk_buff *)) 874 874 { 875 875 struct cb *cb; 876 876 unsigned long flags; ··· 888 888 nic->cbs_avail--; 889 889 cb->skb = skb; 890 890 891 + err = cb_prepare(nic, cb, skb); 892 + if (err) 893 + goto err_unlock; 894 + 891 895 if (unlikely(!nic->cbs_avail)) 892 896 err = -ENOSPC; 893 897 894 - cb_prepare(nic, cb, skb); 895 898 896 899 /* Order is important otherwise we'll be in a race with h/w: 897 900 * set S-bit in current first, then clear S-bit in previous. */ ··· 1094 1091 nic->mii.mdio_write = mdio_write; 1095 1092 } 1096 1093 1097 - static void e100_configure(struct nic *nic, struct cb *cb, struct sk_buff *skb) 1094 + static int e100_configure(struct nic *nic, struct cb *cb, struct sk_buff *skb) 1098 1095 { 1099 1096 struct config *config = &cb->u.config; 1100 1097 u8 *c = (u8 *)config; ··· 1184 1181 netif_printk(nic, hw, KERN_DEBUG, nic->netdev, 1185 1182 "[16-23]=%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n", 1186 1183 c[16], c[17], c[18], c[19], c[20], c[21], c[22], c[23]); 1184 + return 0; 1187 1185 } 1188 1186 1189 1187 /************************************************************************* ··· 1335 1331 return fw; 1336 1332 } 1337 1333 1338 - static void e100_setup_ucode(struct nic *nic, struct cb *cb, 1334 + static int e100_setup_ucode(struct nic *nic, struct cb *cb, 1339 1335 struct sk_buff *skb) 1340 1336 { 1341 1337 const struct firmware *fw = (void *)skb; ··· 1362 1358 cb->u.ucode[min_size] |= cpu_to_le32((BUNDLESMALL) ? 
0xFFFF : 0xFF80); 1363 1359 1364 1360 cb->command = cpu_to_le16(cb_ucode | cb_el); 1361 + return 0; 1365 1362 } 1366 1363 1367 1364 static inline int e100_load_ucode_wait(struct nic *nic) ··· 1405 1400 return err; 1406 1401 } 1407 1402 1408 - static void e100_setup_iaaddr(struct nic *nic, struct cb *cb, 1403 + static int e100_setup_iaaddr(struct nic *nic, struct cb *cb, 1409 1404 struct sk_buff *skb) 1410 1405 { 1411 1406 cb->command = cpu_to_le16(cb_iaaddr); 1412 1407 memcpy(cb->u.iaaddr, nic->netdev->dev_addr, ETH_ALEN); 1408 + return 0; 1413 1409 } 1414 1410 1415 - static void e100_dump(struct nic *nic, struct cb *cb, struct sk_buff *skb) 1411 + static int e100_dump(struct nic *nic, struct cb *cb, struct sk_buff *skb) 1416 1412 { 1417 1413 cb->command = cpu_to_le16(cb_dump); 1418 1414 cb->u.dump_buffer_addr = cpu_to_le32(nic->dma_addr + 1419 1415 offsetof(struct mem, dump_buf)); 1416 + return 0; 1420 1417 } 1421 1418 1422 1419 static int e100_phy_check_without_mii(struct nic *nic) ··· 1588 1581 return 0; 1589 1582 } 1590 1583 1591 - static void e100_multi(struct nic *nic, struct cb *cb, struct sk_buff *skb) 1584 + static int e100_multi(struct nic *nic, struct cb *cb, struct sk_buff *skb) 1592 1585 { 1593 1586 struct net_device *netdev = nic->netdev; 1594 1587 struct netdev_hw_addr *ha; ··· 1603 1596 memcpy(&cb->u.multi.addr[i++ * ETH_ALEN], &ha->addr, 1604 1597 ETH_ALEN); 1605 1598 } 1599 + return 0; 1606 1600 } 1607 1601 1608 1602 static void e100_set_multicast_list(struct net_device *netdev) ··· 1764 1756 round_jiffies(jiffies + E100_WATCHDOG_PERIOD)); 1765 1757 } 1766 1758 1767 - static void e100_xmit_prepare(struct nic *nic, struct cb *cb, 1759 + static int e100_xmit_prepare(struct nic *nic, struct cb *cb, 1768 1760 struct sk_buff *skb) 1769 1761 { 1762 + dma_addr_t dma_addr; 1770 1763 cb->command = nic->tx_command; 1764 + 1765 + dma_addr = pci_map_single(nic->pdev, 1766 + skb->data, skb->len, PCI_DMA_TODEVICE); 1767 + /* If we can't map the skb, have the 
upper layer try later */ 1768 + if (pci_dma_mapping_error(nic->pdev, dma_addr)) 1769 + return -ENOMEM; 1771 1770 1772 1771 /* 1773 1772 * Use the last 4 bytes of the SKB payload packet as the CRC, used for ··· 1792 1777 cb->u.tcb.tcb_byte_count = 0; 1793 1778 cb->u.tcb.threshold = nic->tx_threshold; 1794 1779 cb->u.tcb.tbd_count = 1; 1795 - cb->u.tcb.tbd.buf_addr = cpu_to_le32(pci_map_single(nic->pdev, 1796 - skb->data, skb->len, PCI_DMA_TODEVICE)); 1797 - /* check for mapping failure? */ 1780 + cb->u.tcb.tbd.buf_addr = cpu_to_le32(dma_addr); 1798 1781 cb->u.tcb.tbd.size = cpu_to_le16(skb->len); 1799 1782 skb_tx_timestamp(skb); 1783 + return 0; 1800 1784 } 1801 1785 1802 1786 static netdev_tx_t e100_xmit_frame(struct sk_buff *skb,
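The e100 change above converts the `cb_prepare` callbacks from `void` to `int` so that a failed DMA mapping aborts the command before hardware ever sees it. A stripped-down sketch of that callback contract (names simplified, `-12` standing in for `-ENOMEM`; `exec_cb` is a stand-in for `e100_exec_cb()`):

```c
/* Minimal command block. */
struct cb { int command; };

/* Global block for the assertions below; purely illustrative. */
struct cb test_cb = { 0 };

/* The prepare step now returns an error code; on failure the
 * command is abandoned instead of being handed to hardware with a
 * bogus (unmapped) buffer address, which is what the old void
 * callbacks silently did. */
int exec_cb(struct cb *cb, int (*cb_prepare)(struct cb *))
{
    int err = cb_prepare(cb);

    if (err)
        return err;   /* previously this error was lost */
    /* ...only on success would hardware be pointed at cb... */
    return 0;
}

int prepare_fails(struct cb *cb) { (void)cb; return -12; }
int prepare_ok(struct cb *cb)    { cb->command = 4; return 0; }
```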
-8
drivers/net/ethernet/intel/igb/igb.h
··· 293 293 enum e1000_ring_flags_t {
294 294 IGB_RING_FLAG_RX_SCTP_CSUM,
295 295 IGB_RING_FLAG_RX_LB_VLAN_BSWAP,
296 - IGB_RING_FLAG_RX_BUILD_SKB_ENABLED,
297 296 IGB_RING_FLAG_TX_CTX_IDX,
298 297 IGB_RING_FLAG_TX_DETECT_HANG
299 298 };
300 -
301 - #define ring_uses_build_skb(ring) \
302 - test_bit(IGB_RING_FLAG_RX_BUILD_SKB_ENABLED, &(ring)->flags)
303 - #define set_ring_build_skb_enabled(ring) \
304 - set_bit(IGB_RING_FLAG_RX_BUILD_SKB_ENABLED, &(ring)->flags)
305 - #define clear_ring_build_skb_enabled(ring) \
306 - clear_bit(IGB_RING_FLAG_RX_BUILD_SKB_ENABLED, &(ring)->flags)
307 299
308 300 #define IGB_TXD_DCMD (E1000_ADVTXD_DCMD_EOP | E1000_ADVTXD_DCMD_RS)
309 301
+4 -106
drivers/net/ethernet/intel/igb/igb_main.c
··· 3387 3387 wr32(E1000_RXDCTL(reg_idx), rxdctl); 3388 3388 } 3389 3389 3390 - static void igb_set_rx_buffer_len(struct igb_adapter *adapter, 3391 - struct igb_ring *rx_ring) 3392 - { 3393 - #define IGB_MAX_BUILD_SKB_SIZE \ 3394 - (SKB_WITH_OVERHEAD(IGB_RX_BUFSZ) - \ 3395 - (NET_SKB_PAD + NET_IP_ALIGN + IGB_TS_HDR_LEN)) 3396 - 3397 - /* set build_skb flag */ 3398 - if (adapter->max_frame_size <= IGB_MAX_BUILD_SKB_SIZE) 3399 - set_ring_build_skb_enabled(rx_ring); 3400 - else 3401 - clear_ring_build_skb_enabled(rx_ring); 3402 - } 3403 - 3404 3390 /** 3405 3391 * igb_configure_rx - Configure receive Unit after Reset 3406 3392 * @adapter: board private structure ··· 3407 3421 /* Setup the HW Rx Head and Tail Descriptor Pointers and 3408 3422 * the Base and Length of the Rx Descriptor Ring 3409 3423 */ 3410 - for (i = 0; i < adapter->num_rx_queues; i++) { 3411 - struct igb_ring *rx_ring = adapter->rx_ring[i]; 3412 - igb_set_rx_buffer_len(adapter, rx_ring); 3413 - igb_configure_rx_ring(adapter, rx_ring); 3414 - } 3424 + for (i = 0; i < adapter->num_rx_queues; i++) 3425 + igb_configure_rx_ring(adapter, adapter->rx_ring[i]); 3415 3426 } 3416 3427 3417 3428 /** ··· 6221 6238 return igb_can_reuse_rx_page(rx_buffer, page, truesize); 6222 6239 } 6223 6240 6224 - static struct sk_buff *igb_build_rx_buffer(struct igb_ring *rx_ring, 6225 - union e1000_adv_rx_desc *rx_desc) 6226 - { 6227 - struct igb_rx_buffer *rx_buffer; 6228 - struct sk_buff *skb; 6229 - struct page *page; 6230 - void *page_addr; 6231 - unsigned int size = le16_to_cpu(rx_desc->wb.upper.length); 6232 - #if (PAGE_SIZE < 8192) 6233 - unsigned int truesize = IGB_RX_BUFSZ; 6234 - #else 6235 - unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) + 6236 - SKB_DATA_ALIGN(NET_SKB_PAD + 6237 - NET_IP_ALIGN + 6238 - size); 6239 - #endif 6240 - 6241 - /* If we spanned a buffer we have a huge mess so test for it */ 6242 - BUG_ON(unlikely(!igb_test_staterr(rx_desc, E1000_RXD_STAT_EOP))); 6243 - 6244 - 
rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean]; 6245 - page = rx_buffer->page; 6246 - prefetchw(page); 6247 - 6248 - page_addr = page_address(page) + rx_buffer->page_offset; 6249 - 6250 - /* prefetch first cache line of first page */ 6251 - prefetch(page_addr + NET_SKB_PAD + NET_IP_ALIGN); 6252 - #if L1_CACHE_BYTES < 128 6253 - prefetch(page_addr + L1_CACHE_BYTES + NET_SKB_PAD + NET_IP_ALIGN); 6254 - #endif 6255 - 6256 - /* build an skb to around the page buffer */ 6257 - skb = build_skb(page_addr, truesize); 6258 - if (unlikely(!skb)) { 6259 - rx_ring->rx_stats.alloc_failed++; 6260 - return NULL; 6261 - } 6262 - 6263 - /* we are reusing so sync this buffer for CPU use */ 6264 - dma_sync_single_range_for_cpu(rx_ring->dev, 6265 - rx_buffer->dma, 6266 - rx_buffer->page_offset, 6267 - IGB_RX_BUFSZ, 6268 - DMA_FROM_DEVICE); 6269 - 6270 - /* update pointers within the skb to store the data */ 6271 - skb_reserve(skb, NET_IP_ALIGN + NET_SKB_PAD); 6272 - __skb_put(skb, size); 6273 - 6274 - /* pull timestamp out of packet data */ 6275 - if (igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP)) { 6276 - igb_ptp_rx_pktstamp(rx_ring->q_vector, skb->data, skb); 6277 - __skb_pull(skb, IGB_TS_HDR_LEN); 6278 - } 6279 - 6280 - if (igb_can_reuse_rx_page(rx_buffer, page, truesize)) { 6281 - /* hand second half of page back to the ring */ 6282 - igb_reuse_rx_page(rx_ring, rx_buffer); 6283 - } else { 6284 - /* we are not reusing the buffer so unmap it */ 6285 - dma_unmap_page(rx_ring->dev, rx_buffer->dma, 6286 - PAGE_SIZE, DMA_FROM_DEVICE); 6287 - } 6288 - 6289 - /* clear contents of buffer_info */ 6290 - rx_buffer->dma = 0; 6291 - rx_buffer->page = NULL; 6292 - 6293 - return skb; 6294 - } 6295 - 6296 6241 static struct sk_buff *igb_fetch_rx_buffer(struct igb_ring *rx_ring, 6297 6242 union e1000_adv_rx_desc *rx_desc, 6298 6243 struct sk_buff *skb) ··· 6630 6719 rmb(); 6631 6720 6632 6721 /* retrieve a buffer from the ring */ 6633 - if (ring_uses_build_skb(rx_ring)) 6634 - 
skb = igb_build_rx_buffer(rx_ring, rx_desc); 6635 - else 6636 - skb = igb_fetch_rx_buffer(rx_ring, rx_desc, skb); 6722 + skb = igb_fetch_rx_buffer(rx_ring, rx_desc, skb); 6637 6723 6638 6724 /* exit if we failed to retrieve a buffer */ 6639 6725 if (!skb) ··· 6716 6808 return true; 6717 6809 } 6718 6810 6719 - static inline unsigned int igb_rx_offset(struct igb_ring *rx_ring) 6720 - { 6721 - if (ring_uses_build_skb(rx_ring)) 6722 - return NET_SKB_PAD + NET_IP_ALIGN; 6723 - else 6724 - return 0; 6725 - } 6726 - 6727 6811 /** 6728 6812 * igb_alloc_rx_buffers - Replace used receive buffers; packet split 6729 6813 * @adapter: address of board private structure ··· 6741 6841 /* Refresh the desc even if buffer_addrs didn't change 6742 6842 * because each write-back erases this info. 6743 6843 */ 6744 - rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + 6745 - bi->page_offset + 6746 - igb_rx_offset(rx_ring)); 6844 + rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset); 6747 6845 6748 6846 rx_desc++; 6749 6847 bi++;
+6
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
··· 1052 1052 if ((vf >= adapter->num_vfs) || (vlan > 4095) || (qos > 7))
1053 1053 return -EINVAL;
1054 1054 if (vlan || qos) {
1055 + if (adapter->vfinfo[vf].pf_vlan)
1056 + err = ixgbe_set_vf_vlan(adapter, false,
1057 + adapter->vfinfo[vf].pf_vlan,
1058 + vf);
1059 + if (err)
1060 + goto out;
1055 1061 err = ixgbe_set_vf_vlan(adapter, true, vlan, vf);
1056 1062 if (err)
1057 1063 goto out;
+1 -1
drivers/net/ethernet/marvell/Kconfig
··· 33 33
34 34 config MVMDIO
35 35 tristate "Marvell MDIO interface support"
36 + select PHYLIB
36 37 ---help---
37 38 This driver supports the MDIO interface found in the network
38 39 interface units of the Marvell EBU SoCs (Kirkwood, Orion5x,
··· 44 43 config MVNETA
45 44 tristate "Marvell Armada 370/XP network interface support"
46 45 depends on MACH_ARMADA_370_XP
47 - select PHYLIB
48 46 select MVMDIO
49 47 ---help---
50 48 This driver supports the network interface units in the
+9 -9
drivers/net/ethernet/marvell/mvneta.c
··· 374 374 static int txq_number = 8; 375 375 376 376 static int rxq_def; 377 - static int txq_def; 378 377 379 378 #define MVNETA_DRIVER_NAME "mvneta" 380 379 #define MVNETA_DRIVER_VERSION "1.0" ··· 1474 1475 static int mvneta_tx(struct sk_buff *skb, struct net_device *dev) 1475 1476 { 1476 1477 struct mvneta_port *pp = netdev_priv(dev); 1477 - struct mvneta_tx_queue *txq = &pp->txqs[txq_def]; 1478 + u16 txq_id = skb_get_queue_mapping(skb); 1479 + struct mvneta_tx_queue *txq = &pp->txqs[txq_id]; 1478 1480 struct mvneta_tx_desc *tx_desc; 1479 1481 struct netdev_queue *nq; 1480 1482 int frags = 0; ··· 1485 1485 goto out; 1486 1486 1487 1487 frags = skb_shinfo(skb)->nr_frags + 1; 1488 - nq = netdev_get_tx_queue(dev, txq_def); 1488 + nq = netdev_get_tx_queue(dev, txq_id); 1489 1489 1490 1490 /* Get a descriptor for the first part of the packet */ 1491 1491 tx_desc = mvneta_txq_next_desc_get(txq); ··· 2679 2679 return -EINVAL; 2680 2680 } 2681 2681 2682 - dev = alloc_etherdev_mq(sizeof(struct mvneta_port), 8); 2682 + dev = alloc_etherdev_mqs(sizeof(struct mvneta_port), txq_number, rxq_number); 2683 2683 if (!dev) 2684 2684 return -ENOMEM; 2685 2685 ··· 2761 2761 2762 2762 netif_napi_add(dev, &pp->napi, mvneta_poll, pp->weight); 2763 2763 2764 + dev->features = NETIF_F_SG | NETIF_F_IP_CSUM; 2765 + dev->hw_features |= NETIF_F_SG | NETIF_F_IP_CSUM; 2766 + dev->vlan_features |= NETIF_F_SG | NETIF_F_IP_CSUM; 2767 + dev->priv_flags |= IFF_UNICAST_FLT; 2768 + 2764 2769 err = register_netdev(dev); 2765 2770 if (err < 0) { 2766 2771 dev_err(&pdev->dev, "failed to register\n"); 2767 2772 goto err_deinit; 2768 2773 } 2769 - 2770 - dev->features = NETIF_F_SG | NETIF_F_IP_CSUM; 2771 - dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM; 2772 - dev->priv_flags |= IFF_UNICAST_FLT; 2773 2774 2774 2775 netdev_info(dev, "mac: %pM\n", dev->dev_addr); 2775 2776 ··· 2834 2833 module_param(txq_number, int, S_IRUGO); 2835 2834 2836 2835 module_param(rxq_def, int, S_IRUGO); 2837 - 
module_param(txq_def, int, S_IRUGO);
+10 -5
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
··· 1619 1619 }
1620 1620 } while ((adapter->ahw->linkup && ahw->has_link_events) != 1);
1621 1621
1622 + /* Make sure carrier is off and queue is stopped during loopback */
1623 + if (netif_running(netdev)) {
1624 + netif_carrier_off(netdev);
1625 + netif_stop_queue(netdev);
1626 + }
1627 +
1622 1628 ret = qlcnic_do_lb_test(adapter, mode);
1623 1629
1624 1630 qlcnic_83xx_clear_lb_mode(adapter, mode);
··· 2950 2944 void qlcnic_83xx_get_stats(struct qlcnic_adapter *adapter, u64 *data)
2951 2945 {
2952 2946 struct qlcnic_cmd_args cmd;
2947 + struct net_device *netdev = adapter->netdev;
2953 2948 int ret = 0;
2954 2949
2955 2950 qlcnic_alloc_mbx_args(&cmd, adapter, QLCNIC_CMD_GET_STATISTICS);
··· 2960 2953 data = qlcnic_83xx_fill_stats(adapter, &cmd, data,
2961 2954 QLC_83XX_STAT_TX, &ret);
2962 2955 if (ret) {
2963 - dev_info(&adapter->pdev->dev, "Error getting MAC stats\n");
2956 + netdev_err(netdev, "Error getting Tx stats\n");
2964 2957 goto out;
2965 2958 }
2966 2959 /* Get MAC stats */
··· 2970 2963 data = qlcnic_83xx_fill_stats(adapter, &cmd, data,
2971 2964 QLC_83XX_STAT_MAC, &ret);
2972 2965 if (ret) {
2973 - dev_info(&adapter->pdev->dev,
2974 - "Error getting Rx stats\n");
2966 + netdev_err(netdev, "Error getting MAC stats\n");
2975 2967 goto out;
2976 2968 }
2977 2969 /* Get Rx stats */
··· 2980 2974 data = qlcnic_83xx_fill_stats(adapter, &cmd, data,
2981 2975 QLC_83XX_STAT_RX, &ret);
2982 2976 if (ret)
2983 - dev_info(&adapter->pdev->dev,
2984 - "Error getting Tx stats\n");
2977 + netdev_err(netdev, "Error getting Rx stats\n");
2985 2978 out:
2986 2979 qlcnic_free_mbx_args(&cmd);
2987 2980 }
+1 -2
drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
··· 362 362 memcpy(&first_desc->eth_addr, skb->data, ETH_ALEN);
363 363 }
364 364 opcode = TX_ETHER_PKT;
365 - if ((adapter->netdev->features & (NETIF_F_TSO | NETIF_F_TSO6)) &&
366 - skb_shinfo(skb)->gso_size > 0) {
365 + if (skb_is_gso(skb)) {
367 366 hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
368 367 first_desc->mss = cpu_to_le16(skb_shinfo(skb)->gso_size);
369 368 first_desc->total_hdr_length = hdr_len;
+2 -2
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sysfs.c
··· 198 198 }
199 199
200 200 err = qlcnic_config_led(adapter, b_state, b_rate);
201 - if (!err)
201 + if (!err) {
202 202 err = len;
203 - else
204 203 ahw->beacon_state = b_state;
204 + }
205 205
206 206 if (test_and_clear_bit(__QLCNIC_DIAG_RES_ALLOC, &adapter->state))
207 207 qlcnic_diag_free_res(adapter->netdev, max_sds_rings);
+1 -1
drivers/net/ethernet/qlogic/qlge/qlge.h
··· 18 18 */
19 19 #define DRV_NAME "qlge"
20 20 #define DRV_STRING "QLogic 10 Gigabit PCI-E Ethernet Driver "
21 - #define DRV_VERSION "v1.00.00.31"
21 + #define DRV_VERSION "v1.00.00.32"
22 22
23 23 #define WQ_ADDR_ALIGN 0x3 /* 4 byte alignment */
24 24
+1 -1
drivers/net/ethernet/qlogic/qlge/qlge_ethtool.c
··· 379 379
380 380 ecmd->supported = SUPPORTED_10000baseT_Full;
381 381 ecmd->advertising = ADVERTISED_10000baseT_Full;
382 - ecmd->autoneg = AUTONEG_ENABLE;
383 382 ecmd->transceiver = XCVR_EXTERNAL;
384 383 if ((qdev->link_status & STS_LINK_TYPE_MASK) ==
385 384 STS_LINK_TYPE_10GBASET) {
386 385 ecmd->supported |= (SUPPORTED_TP | SUPPORTED_Autoneg);
387 386 ecmd->advertising |= (ADVERTISED_TP | ADVERTISED_Autoneg);
388 387 ecmd->port = PORT_TP;
388 + ecmd->autoneg = AUTONEG_ENABLE;
389 389 } else {
390 390 ecmd->supported |= SUPPORTED_FIBRE;
391 391 ecmd->advertising |= ADVERTISED_FIBRE;
+29 -7
drivers/net/ethernet/qlogic/qlge/qlge_main.c
··· 1432 1432 } 1433 1433 1434 1434 /* Categorizing receive firmware frame errors */ 1435 - static void ql_categorize_rx_err(struct ql_adapter *qdev, u8 rx_err) 1435 + static void ql_categorize_rx_err(struct ql_adapter *qdev, u8 rx_err, 1436 + struct rx_ring *rx_ring) 1436 1437 { 1437 1438 struct nic_stats *stats = &qdev->nic_stats; 1438 1439 1439 1440 stats->rx_err_count++; 1441 + rx_ring->rx_errors++; 1440 1442 1441 1443 switch (rx_err & IB_MAC_IOCB_RSP_ERR_MASK) { 1442 1444 case IB_MAC_IOCB_RSP_ERR_CODE_ERR: ··· 1474 1472 struct bq_desc *lbq_desc = ql_get_curr_lchunk(qdev, rx_ring); 1475 1473 struct napi_struct *napi = &rx_ring->napi; 1476 1474 1475 + /* Frame error, so drop the packet. */ 1476 + if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) { 1477 + ql_categorize_rx_err(qdev, ib_mac_rsp->flags2, rx_ring); 1478 + put_page(lbq_desc->p.pg_chunk.page); 1479 + return; 1480 + } 1477 1481 napi->dev = qdev->ndev; 1478 1482 1479 1483 skb = napi_get_frags(napi); ··· 1532 1524 1533 1525 addr = lbq_desc->p.pg_chunk.va; 1534 1526 prefetch(addr); 1527 + 1528 + /* Frame error, so drop the packet. */ 1529 + if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) { 1530 + ql_categorize_rx_err(qdev, ib_mac_rsp->flags2, rx_ring); 1531 + goto err_out; 1532 + } 1535 1533 1536 1534 /* The max framesize filter on this chip is set higher than 1537 1535 * MTU since FCoE uses 2k frames. ··· 1621 1607 skb_reserve(new_skb, NET_IP_ALIGN); 1622 1608 memcpy(skb_put(new_skb, length), skb->data, length); 1623 1609 skb = new_skb; 1610 + 1611 + /* Frame error, so drop the packet. */ 1612 + if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) { 1613 + ql_categorize_rx_err(qdev, ib_mac_rsp->flags2, rx_ring); 1614 + dev_kfree_skb_any(skb); 1615 + return; 1616 + } 1624 1617 1625 1618 /* loopback self test for ethtool */ 1626 1619 if (test_bit(QL_SELFTEST, &qdev->flags)) { ··· 1934 1913 return; 1935 1914 } 1936 1915 1916 + /* Frame error, so drop the packet. 
*/ 1917 + if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) { 1918 + ql_categorize_rx_err(qdev, ib_mac_rsp->flags2, rx_ring); 1919 + dev_kfree_skb_any(skb); 1920 + return; 1921 + } 1922 + 1937 1923 /* The max framesize filter on this chip is set higher than 1938 1924 * MTU since FCoE uses 2k frames. 1939 1925 */ ··· 2021 1993 IB_MAC_IOCB_RSP_VLAN_MASK)) : 0xffff; 2022 1994 2023 1995 QL_DUMP_IB_MAC_RSP(ib_mac_rsp); 2024 - 2025 - /* Frame error, so drop the packet. */ 2026 - if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) { 2027 - ql_categorize_rx_err(qdev, ib_mac_rsp->flags2); 2028 - return (unsigned long)length; 2029 - } 2030 1996 2031 1997 if (ib_mac_rsp->flags4 & IB_MAC_IOCB_RSP_HV) { 2032 1998 /* The data and headers are split into
+1
drivers/net/ethernet/stmicro/stmmac/mmc_core.c
··· 149 149 {
150 150 writel(MMC_DEFAULT_MASK, ioaddr + MMC_RX_INTR_MASK);
151 151 writel(MMC_DEFAULT_MASK, ioaddr + MMC_TX_INTR_MASK);
152 + writel(MMC_DEFAULT_MASK, ioaddr + MMC_RX_IPC_INTR_MASK);
152 153 }
153 154
154 155 /* This reads the MAC core counters (if actaully supported).
+1 -1
drivers/net/ethernet/ti/cpsw.c
··· 1520 1520 memcpy(slave_data->mac_addr, mac_addr, ETH_ALEN);
1521 1521
1522 1522 if (data->dual_emac) {
1523 - if (of_property_read_u32(node, "dual_emac_res_vlan",
1523 + if (of_property_read_u32(slave_node, "dual_emac_res_vlan",
1524 1524 &prop)) {
1525 1525 pr_err("Missing dual_emac_res_vlan in DT.\n");
1526 1526 slave_data->dual_emac_res_vlan = i+1;
+12 -5
drivers/net/hyperv/netvsc.c
··· 470 470 packet->trans_id; 471 471 472 472 /* Notify the layer above us */ 473 - nvsc_packet->completion.send.send_completion( 474 - nvsc_packet->completion.send.send_completion_ctx); 473 + if (nvsc_packet) 474 + nvsc_packet->completion.send.send_completion( 475 + nvsc_packet->completion.send. 476 + send_completion_ctx); 475 477 476 478 num_outstanding_sends = 477 479 atomic_dec_return(&net_device->num_outstanding_sends); ··· 500 498 int ret = 0; 501 499 struct nvsp_message sendMessage; 502 500 struct net_device *ndev; 501 + u64 req_id; 503 502 504 503 net_device = get_outbound_net_device(device); 505 504 if (!net_device) ··· 521 518 0xFFFFFFFF; 522 519 sendMessage.msg.v1_msg.send_rndis_pkt.send_buf_section_size = 0; 523 520 521 + if (packet->completion.send.send_completion) 522 + req_id = (u64)packet; 523 + else 524 + req_id = 0; 525 + 524 526 if (packet->page_buf_cnt) { 525 527 ret = vmbus_sendpacket_pagebuffer(device->channel, 526 528 packet->page_buf, 527 529 packet->page_buf_cnt, 528 530 &sendMessage, 529 531 sizeof(struct nvsp_message), 530 - (unsigned long)packet); 532 + req_id); 531 533 } else { 532 534 ret = vmbus_sendpacket(device->channel, &sendMessage, 533 535 sizeof(struct nvsp_message), 534 - (unsigned long)packet, 536 + req_id, 535 537 VM_PKT_DATA_INBAND, 536 538 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 537 - 538 539 } 539 540 540 541 if (ret == 0) {
-2
drivers/net/hyperv/netvsc_drv.c
··· 241 241
242 242 if (status == 1) {
243 243 netif_carrier_on(net);
244 - netif_wake_queue(net);
245 244 ndev_ctx = netdev_priv(net);
246 245 schedule_delayed_work(&ndev_ctx->dwork, 0);
247 246 schedule_delayed_work(&ndev_ctx->dwork, msecs_to_jiffies(20));
248 247 } else {
249 248 netif_carrier_off(net);
250 - netif_tx_disable(net);
251 249 }
252 250 }
253 251
+1 -13
drivers/net/hyperv/rndis_filter.c
··· 61 61
62 62 static void rndis_filter_send_completion(void *ctx);
63 63
64 - static void rndis_filter_send_request_completion(void *ctx);
65 -
66 -
67 64
68 65 static struct rndis_device *get_rndis_device(void)
69 66 {
··· 238 241 packet->page_buf[0].len;
239 242 }
240 243
241 - packet->completion.send.send_completion_ctx = req;/* packet; */
242 - packet->completion.send.send_completion =
243 - rndis_filter_send_request_completion;
244 - packet->completion.send.send_completion_tid = (unsigned long)dev;
244 + packet->completion.send.send_completion = NULL;
245 245
246 246 ret = netvsc_send(dev->net_dev->dev, packet);
247 247 return ret;
··· 992 998
993 999 /* Pass it back to the original handler */
994 1000 filter_pkt->completion(filter_pkt->completion_ctx);
995 - }
996 -
997 -
998 - static void rndis_filter_send_request_completion(void *ctx)
999 - {
1000 - /* Noop */
1001 1001 }
+1 -1
drivers/net/tun.c
··· 1594 1594
1595 1595 if (tun->flags & TUN_TAP_MQ &&
1596 1596 (tun->numqueues + tun->numdisabled > 1))
1597 - return err;
1597 + return -EBUSY;
1598 1598 }
1599 1599 else {
1600 1600 char *name;
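The tun fix is a classic stale-error bug: `err` had last been assigned 0 by an earlier check, so the multiqueue-busy path returned success. The same shape in miniature (the constant is illustrative, standing in for `-EBUSY` from `<errno.h>`):

```c
#define DEMO_EBUSY 16   /* illustrative; real code uses -EBUSY */

/* Before the fix the busy branch was `return err;` -- but err was
 * still 0 from the earlier successful checks, so the caller saw
 * success and proceeded with an attach that never happened. */
static int attach(int multiqueue, int queues)
{
    int err = 0;   /* earlier permission checks succeeded */

    if (multiqueue && queues > 1)
        return -DEMO_EBUSY;   /* was: return err; */
    return err;
}
```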
+1 -1
drivers/net/usb/cdc_mbim.c
··· 134 134 goto error;
135 135
136 136 if (skb) {
137 - if (skb->len <= sizeof(ETH_HLEN))
137 + if (skb->len <= ETH_HLEN)
138 138 goto error;
139 139
140 140 /* mapping VLANs to MBIM sessions:
+104
drivers/net/usb/qmi_wwan.c
··· 13 13 #include <linux/module.h> 14 14 #include <linux/netdevice.h> 15 15 #include <linux/ethtool.h> 16 + #include <linux/etherdevice.h> 16 17 #include <linux/mii.h> 17 18 #include <linux/usb.h> 18 19 #include <linux/usb/cdc.h> ··· 51 50 unsigned long unused; 52 51 struct usb_interface *control; 53 52 struct usb_interface *data; 53 + }; 54 + 55 + /* default ethernet address used by the modem */ 56 + static const u8 default_modem_addr[ETH_ALEN] = {0x02, 0x50, 0xf3}; 57 + 58 + /* Make up an ethernet header if the packet doesn't have one. 59 + * 60 + * A firmware bug common among several devices cause them to send raw 61 + * IP packets under some circumstances. There is no way for the 62 + * driver/host to know when this will happen. And even when the bug 63 + * hits, some packets will still arrive with an intact header. 64 + * 65 + * The supported devices are only capably of sending IPv4, IPv6 and 66 + * ARP packets on a point-to-point link. Any packet with an ethernet 67 + * header will have either our address or a broadcast/multicast 68 + * address as destination. ARP packets will always have a header. 69 + * 70 + * This means that this function will reliably add the appropriate 71 + * header iff necessary, provided our hardware address does not start 72 + * with 4 or 6. 73 + * 74 + * Another common firmware bug results in all packets being addressed 75 + * to 00:a0:c6:00:00:00 despite the host address being different. 76 + * This function will also fixup such packets. 
77 + */
78 + static int qmi_wwan_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
79 + {
80 + __be16 proto;
81 +
82 + /* usbnet rx_complete guarantees that skb->len is at least
83 + * hard_header_len, so we can inspect the dest address without
84 + * checking skb->len
85 + */
86 + switch (skb->data[0] & 0xf0) {
87 + case 0x40:
88 + proto = htons(ETH_P_IP);
89 + break;
90 + case 0x60:
91 + proto = htons(ETH_P_IPV6);
92 + break;
93 + case 0x00:
94 + if (is_multicast_ether_addr(skb->data))
95 + return 1;
96 + /* possibly bogus destination - rewrite just in case */
97 + skb_reset_mac_header(skb);
98 + goto fix_dest;
99 + default:
100 + /* pass along other packets without modifications */
101 + return 1;
102 + }
103 + if (skb_headroom(skb) < ETH_HLEN)
104 + return 0;
105 + skb_push(skb, ETH_HLEN);
106 + skb_reset_mac_header(skb);
107 + eth_hdr(skb)->h_proto = proto;
108 + memset(eth_hdr(skb)->h_source, 0, ETH_ALEN);
109 + fix_dest:
110 + memcpy(eth_hdr(skb)->h_dest, dev->net->dev_addr, ETH_ALEN);
111 + return 1;
112 + }
113 +
114 + /* very simplistic detection of IPv4 or IPv6 headers */
115 + static bool possibly_iphdr(const char *data)
116 + {
117 + return (data[0] & 0xd0) == 0x40;
118 + }
119 +
120 + /* disallow addresses which may be confused with IP headers */
121 + static int qmi_wwan_mac_addr(struct net_device *dev, void *p)
122 + {
123 + int ret;
124 + struct sockaddr *addr = p;
125 +
126 + ret = eth_prepare_mac_addr_change(dev, p);
127 + if (ret < 0)
128 + return ret;
129 + if (possibly_iphdr(addr->sa_data))
130 + return -EADDRNOTAVAIL;
131 + eth_commit_mac_addr_change(dev, p);
132 + return 0;
133 + }
134 +
135 + static const struct net_device_ops qmi_wwan_netdev_ops = {
136 + .ndo_open = usbnet_open,
137 + .ndo_stop = usbnet_stop,
138 + .ndo_start_xmit = usbnet_start_xmit,
139 + .ndo_tx_timeout = usbnet_tx_timeout,
140 + .ndo_change_mtu = usbnet_change_mtu,
141 + .ndo_set_mac_address = qmi_wwan_mac_addr,
142 + .ndo_validate_addr = eth_validate_addr,
54 143 };
55 144
56 145 /* using a counter to merge subdriver requests with our own into a combined state */
··· 320 229 usb_driver_release_interface(driver, info->data);
321 230 }
322 231
232 + /* Never use the same address on both ends of the link, even
233 + * if the buggy firmware told us to.
234 + */
235 + if (!compare_ether_addr(dev->net->dev_addr, default_modem_addr))
236 + eth_hw_addr_random(dev->net);
237 +
238 + /* make MAC addr easily distinguishable from an IP header */
239 + if (possibly_iphdr(dev->net->dev_addr)) {
240 + dev->net->dev_addr[0] |= 0x02; /* set local assignment bit */
241 + dev->net->dev_addr[0] &= 0xbf; /* clear "IP" bit */
242 + }
243 + dev->net->netdev_ops = &qmi_wwan_netdev_ops;
323 244 err:
324 245 return status;
325 246 }
··· 410 307 .bind = qmi_wwan_bind,
411 308 .unbind = qmi_wwan_unbind,
412 309 .manage_power = qmi_wwan_manage_power,
310 + .rx_fixup = qmi_wwan_rx_fixup,
413 311 };
414 312
415 313 #define HUAWEI_VENDOR_ID 0x12D1
+1 -1
drivers/net/wireless/ath/ath9k/ar9580_1p0_initvals.h
··· 519 519 {0x00008258, 0x00000000},
520 520 {0x0000825c, 0x40000000},
521 521 {0x00008260, 0x00080922},
522 - {0x00008264, 0x9bc00010},
522 + {0x00008264, 0x9d400010},
523 523 {0x00008268, 0xffffffff},
524 524 {0x0000826c, 0x0000ffff},
525 525 {0x00008270, 0x00000000},
+2 -2
drivers/net/wireless/ath/ath9k/dfs_pattern_detector.c
··· 143 143 u32 sz, i;
144 144 struct channel_detector *cd;
145 145
146 - cd = kmalloc(sizeof(*cd), GFP_KERNEL);
146 + cd = kmalloc(sizeof(*cd), GFP_ATOMIC);
147 147 if (cd == NULL)
148 148 goto fail;
149 149
150 150 INIT_LIST_HEAD(&cd->head);
151 151 cd->freq = freq;
152 152 sz = sizeof(cd->detectors) * dpd->num_radar_types;
153 - cd->detectors = kzalloc(sz, GFP_KERNEL);
153 + cd->detectors = kzalloc(sz, GFP_ATOMIC);
154 154 if (cd->detectors == NULL)
155 155 goto fail;
156 156
+2 -2
drivers/net/wireless/ath/ath9k/dfs_pri_detector.c
··· 218 218 {
219 219 struct pulse_elem *p = pool_get_pulse_elem();
220 220 if (p == NULL) {
221 - p = kmalloc(sizeof(*p), GFP_KERNEL);
221 + p = kmalloc(sizeof(*p), GFP_ATOMIC);
222 222 if (p == NULL) {
223 223 DFS_POOL_STAT_INC(pulse_alloc_error);
224 224 return false;
··· 299 299 ps.deadline_ts = ps.first_ts + ps.dur;
300 300 new_ps = pool_get_pseq_elem();
301 301 if (new_ps == NULL) {
302 - new_ps = kmalloc(sizeof(*new_ps), GFP_KERNEL);
302 + new_ps = kmalloc(sizeof(*new_ps), GFP_ATOMIC);
303 303 if (new_ps == NULL) {
304 304 DFS_POOL_STAT_INC(pseq_alloc_error);
305 305 return false;
+1 -1
drivers/net/wireless/ath/ath9k/htc_drv_init.c
··· 796 796 * required version.
797 797 */
798 798 if (priv->fw_version_major != MAJOR_VERSION_REQ ||
799 - priv->fw_version_minor != MINOR_VERSION_REQ) {
799 + priv->fw_version_minor < MINOR_VERSION_REQ) {
800 800 dev_err(priv->dev, "ath9k_htc: Please upgrade to FW version %d.%d\n",
801 801 MAJOR_VERSION_REQ, MINOR_VERSION_REQ);
802 802 return -EINVAL;
+2 -1
drivers/net/wireless/b43/phy_n.c
··· 5161 5161 #endif
5162 5162 #ifdef CONFIG_B43_SSB
5163 5163 case B43_BUS_SSB:
5164 - /* FIXME */
5164 + ssb_pmu_spuravoid_pllupdate(&dev->dev->sdev->bus->chipco,
5165 + avoid);
5165 5166 break;
5166 5167 #endif
5167 5168 }
+1 -6
drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
··· 4078 4078 },
4079 4079 {
4080 4080 .max = 1,
4081 - .types = BIT(NL80211_IFTYPE_P2P_DEVICE)
4082 - },
4083 - {
4084 - .max = 1,
4085 4081 .types = BIT(NL80211_IFTYPE_P2P_CLIENT) |
4086 4082 BIT(NL80211_IFTYPE_P2P_GO)
4087 4083 },
··· 4138 4142 BIT(NL80211_IFTYPE_ADHOC) |
4139 4143 BIT(NL80211_IFTYPE_AP) |
4140 4144 BIT(NL80211_IFTYPE_P2P_CLIENT) |
4141 - BIT(NL80211_IFTYPE_P2P_GO) |
4142 - BIT(NL80211_IFTYPE_P2P_DEVICE);
4145 + BIT(NL80211_IFTYPE_P2P_GO);
4143 4146 wiphy->iface_combinations = brcmf_iface_combos;
4144 4147 wiphy->n_iface_combinations = ARRAY_SIZE(brcmf_iface_combos);
4145 4148 wiphy->bands[IEEE80211_BAND_2GHZ] = &__wl_band_2ghz;
+132 -133
drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c
··· 276 276 }
277 277 }
278 278
279 + /**
280 + * This function frees the WL per-device resources.
281 + *
282 + * This function frees resources owned by the WL device pointed to
283 + * by the wl parameter.
284 + *
285 + * precondition: can both be called locked and unlocked
286 + *
287 + */
288 + static void brcms_free(struct brcms_info *wl)
289 + {
290 + struct brcms_timer *t, *next;
291 +
292 + /* free ucode data */
293 + if (wl->fw.fw_cnt)
294 + brcms_ucode_data_free(&wl->ucode);
295 + if (wl->irq)
296 + free_irq(wl->irq, wl);
297 +
298 + /* kill dpc */
299 + tasklet_kill(&wl->tasklet);
300 +
301 + if (wl->pub) {
302 + brcms_debugfs_detach(wl->pub);
303 + brcms_c_module_unregister(wl->pub, "linux", wl);
304 + }
305 +
306 + /* free common resources */
307 + if (wl->wlc) {
308 + brcms_c_detach(wl->wlc);
309 + wl->wlc = NULL;
310 + wl->pub = NULL;
311 + }
312 +
313 + /* virtual interface deletion is deferred so we cannot spinwait */
314 +
315 + /* wait for all pending callbacks to complete */
316 + while (atomic_read(&wl->callbacks) > 0)
317 + schedule();
318 +
319 + /* free timers */
320 + for (t = wl->timers; t; t = next) {
321 + next = t->next;
322 + #ifdef DEBUG
323 + kfree(t->name);
324 + #endif
325 + kfree(t);
326 + }
327 + }
328 +
329 + /*
330 + * called from both kernel as from this kernel module (error flow on attach)
331 + * precondition: perimeter lock is not acquired.
332 + */
333 + static void brcms_remove(struct bcma_device *pdev)
334 + {
335 + struct ieee80211_hw *hw = bcma_get_drvdata(pdev);
336 + struct brcms_info *wl = hw->priv;
337 +
338 + if (wl->wlc) {
339 + wiphy_rfkill_set_hw_state(wl->pub->ieee_hw->wiphy, false);
340 + wiphy_rfkill_stop_polling(wl->pub->ieee_hw->wiphy);
341 + ieee80211_unregister_hw(hw);
342 + }
343 +
344 + brcms_free(wl);
345 +
346 + bcma_set_drvdata(pdev, NULL);
347 + ieee80211_free_hw(hw);
348 + }
349 +
350 + /*
351 + * Precondition: Since this function is called in brcms_pci_probe() context,
352 + * no locking is required.
353 + */
354 + static void brcms_release_fw(struct brcms_info *wl)
355 + {
356 + int i;
357 + for (i = 0; i < MAX_FW_IMAGES; i++) {
358 + release_firmware(wl->fw.fw_bin[i]);
359 + release_firmware(wl->fw.fw_hdr[i]);
360 + }
361 + }
362 +
363 + /*
364 + * Precondition: Since this function is called in brcms_pci_probe() context,
365 + * no locking is required.
366 + */
367 + static int brcms_request_fw(struct brcms_info *wl, struct bcma_device *pdev)
368 + {
369 + int status;
370 + struct device *device = &pdev->dev;
371 + char fw_name[100];
372 + int i;
373 +
374 + memset(&wl->fw, 0, sizeof(struct brcms_firmware));
375 + for (i = 0; i < MAX_FW_IMAGES; i++) {
376 + if (brcms_firmwares[i] == NULL)
377 + break;
378 + sprintf(fw_name, "%s-%d.fw", brcms_firmwares[i],
379 + UCODE_LOADER_API_VER);
380 + status = request_firmware(&wl->fw.fw_bin[i], fw_name, device);
381 + if (status) {
382 + wiphy_err(wl->wiphy, "%s: fail to load firmware %s\n",
383 + KBUILD_MODNAME, fw_name);
384 + return status;
385 + }
386 + sprintf(fw_name, "%s_hdr-%d.fw", brcms_firmwares[i],
387 + UCODE_LOADER_API_VER);
388 + status = request_firmware(&wl->fw.fw_hdr[i], fw_name, device);
389 + if (status) {
390 + wiphy_err(wl->wiphy, "%s: fail to load firmware %s\n",
391 + KBUILD_MODNAME, fw_name);
392 + return status;
393 + }
394 + wl->fw.hdr_num_entries[i] =
395 + wl->fw.fw_hdr[i]->size / (sizeof(struct firmware_hdr));
396 + }
397 + wl->fw.fw_cnt = i;
398 + status = brcms_ucode_data_init(wl, &wl->ucode);
399 + brcms_release_fw(wl);
400 + return status;
401 + }
402 +
279 403 static void brcms_ops_tx(struct ieee80211_hw *hw,
280 404 struct ieee80211_tx_control *control,
281 405 struct sk_buff *skb)
··· 431 307 spin_unlock_bh(&wl->lock);
432 308 if (!blocked)
433 309 wiphy_rfkill_stop_polling(wl->pub->ieee_hw->wiphy);
310 +
311 + if (!wl->ucode.bcm43xx_bomminor) {
312 + err = brcms_request_fw(wl, wl->wlc->hw->d11core);
313 + if (err) {
314 + brcms_remove(wl->wlc->hw->d11core);
315 + return -ENOENT;
316 + }
317 + }
434 318
435 319 spin_lock_bh(&wl->lock);
436 320 /* avoid acknowledging frames before a non-monitor device is added */
··· 988 856 wake_up(&wl->tx_flush_wq);
989 857 }
990 858
991 - /*
992 - * Precondition: Since this function is called in brcms_pci_probe() context,
993 - * no locking is required.
994 - */
995 - static int brcms_request_fw(struct brcms_info *wl, struct bcma_device *pdev)
996 - {
997 - int status;
998 - struct device *device = &pdev->dev;
999 - char fw_name[100];
1000 - int i;
1001 -
1002 - memset(&wl->fw, 0, sizeof(struct brcms_firmware));
1003 - for (i = 0; i < MAX_FW_IMAGES; i++) {
1004 - if (brcms_firmwares[i] == NULL)
1005 - break;
1006 - sprintf(fw_name, "%s-%d.fw", brcms_firmwares[i],
1007 - UCODE_LOADER_API_VER);
1008 - status = request_firmware(&wl->fw.fw_bin[i], fw_name, device);
1009 - if (status) {
1010 - wiphy_err(wl->wiphy, "%s: fail to load firmware %s\n",
1011 - KBUILD_MODNAME, fw_name);
1012 - return status;
1013 - }
1014 - sprintf(fw_name, "%s_hdr-%d.fw", brcms_firmwares[i],
1015 - UCODE_LOADER_API_VER);
1016 - status = request_firmware(&wl->fw.fw_hdr[i], fw_name, device);
1017 - if (status) {
1018 - wiphy_err(wl->wiphy, "%s: fail to load firmware %s\n",
1019 - KBUILD_MODNAME, fw_name);
1020 - return status;
1021 - }
1022 - wl->fw.hdr_num_entries[i] =
1023 - wl->fw.fw_hdr[i]->size / (sizeof(struct firmware_hdr));
1024 - }
1025 - wl->fw.fw_cnt = i;
1026 - return brcms_ucode_data_init(wl, &wl->ucode);
1027 - }
1028 -
1029 - /*
1030 - * Precondition: Since this function is called in brcms_pci_probe() context,
1031 - * no locking is required.
1032 - */
1033 - static void brcms_release_fw(struct brcms_info *wl)
1034 - {
1035 - int i;
1036 - for (i = 0; i < MAX_FW_IMAGES; i++) {
1037 - release_firmware(wl->fw.fw_bin[i]);
1038 - release_firmware(wl->fw.fw_hdr[i]);
1039 - }
1040 - }
1041 -
1042 - /**
1043 - * This function frees the WL per-device resources.
1044 - *
1045 - * This function frees resources owned by the WL device pointed to
1046 - * by the wl parameter.
1047 - *
1048 - * precondition: can both be called locked and unlocked
1049 - *
1050 - */
1051 - static void brcms_free(struct brcms_info *wl)
1052 - {
1053 - struct brcms_timer *t, *next;
1054 -
1055 - /* free ucode data */
1056 - if (wl->fw.fw_cnt)
1057 - brcms_ucode_data_free(&wl->ucode);
1058 - if (wl->irq)
1059 - free_irq(wl->irq, wl);
1060 -
1061 - /* kill dpc */
1062 - tasklet_kill(&wl->tasklet);
1063 -
1064 - if (wl->pub) {
1065 - brcms_debugfs_detach(wl->pub);
1066 - brcms_c_module_unregister(wl->pub, "linux", wl);
1067 - }
1068 -
1069 - /* free common resources */
1070 - if (wl->wlc) {
1071 - brcms_c_detach(wl->wlc);
1072 - wl->wlc = NULL;
1073 - wl->pub = NULL;
1074 - }
1075 -
1076 - /* virtual interface deletion is deferred so we cannot spinwait */
1077 -
1078 - /* wait for all pending callbacks to complete */
1079 - while (atomic_read(&wl->callbacks) > 0)
1080 - schedule();
1081 -
1082 - /* free timers */
1083 - for (t = wl->timers; t; t = next) {
1084 - next = t->next;
1085 - #ifdef DEBUG
1086 - kfree(t->name);
1087 - #endif
1088 - kfree(t);
1089 - }
1090 - }
1091 -
1092 - /*
1093 - * called from both kernel as from this kernel module (error flow on attach)
1094 - * precondition: perimeter lock is not acquired.
1095 - */
1096 - static void brcms_remove(struct bcma_device *pdev)
1097 - {
1098 - struct ieee80211_hw *hw = bcma_get_drvdata(pdev);
1099 - struct brcms_info *wl = hw->priv;
1100 -
1101 - if (wl->wlc) {
1102 - brcms_led_unregister(wl);
1103 - wiphy_rfkill_set_hw_state(wl->pub->ieee_hw->wiphy, false);
1104 - wiphy_rfkill_stop_polling(wl->pub->ieee_hw->wiphy);
1105 - ieee80211_unregister_hw(hw);
1106 - }
1107 -
1108 - brcms_free(wl);
1109 -
1110 - bcma_set_drvdata(pdev, NULL);
1111 - ieee80211_free_hw(hw);
1112 - }
1113 -
1114 859 static irqreturn_t brcms_isr(int irq, void *dev_id)
1115 860 {
1116 861 struct brcms_info *wl;
··· 1129 1120 spin_lock_init(&wl->lock);
1130 1121 spin_lock_init(&wl->isr_lock);
1131 1122
1132 - /* prepare ucode */
1133 - if (brcms_request_fw(wl, pdev) < 0) {
1134 - wiphy_err(wl->wiphy, "%s: Failed to find firmware usually in "
1135 - "%s\n", KBUILD_MODNAME, "/lib/firmware/brcm");
1136 - brcms_release_fw(wl);
1137 - brcms_remove(pdev);
1138 - return NULL;
1139 - }
1140 -
1141 1123 /* common load-time initialization */
1142 1124 wl->wlc = brcms_c_attach((void *)wl, pdev, unit, false, &err);
1143 - brcms_release_fw(wl);
1144 1125 if (!wl->wlc) {
1145 1126 wiphy_err(wl->wiphy, "%s: attach() failed with code %d\n",
1146 1127 KBUILD_MODNAME, err);
+8 -7
drivers/pci/pci-acpi.c
··· 53 53 return;
54 54 }
55 55
56 - if (!pci_dev->pm_cap || !pci_dev->pme_support
57 - || pci_check_pme_status(pci_dev)) {
58 - if (pci_dev->pme_poll)
59 - pci_dev->pme_poll = false;
56 + /* Clear PME Status if set. */
57 + if (pci_dev->pme_support)
58 + pci_check_pme_status(pci_dev);
60 59
61 - pci_wakeup_event(pci_dev);
62 - pm_runtime_resume(&pci_dev->dev);
63 - }
60 + if (pci_dev->pme_poll)
61 + pci_dev->pme_poll = false;
62 +
63 + pci_wakeup_event(pci_dev);
64 + pm_runtime_resume(&pci_dev->dev);
64 65
65 66 if (pci_dev->subordinate)
66 67 pci_pme_wakeup_bus(pci_dev->subordinate);
+3 -2
drivers/pci/pci-driver.c
··· 390 390
391 391 /*
392 392 * Turn off Bus Master bit on the device to tell it to not
393 - * continue to do DMA
393 + * continue to do DMA. Don't touch devices in D3cold or unknown states.
394 394 */
395 - pci_clear_master(pci_dev);
395 + if (pci_dev->current_state <= PCI_D3hot)
396 + pci_clear_master(pci_dev);
396 397 }
397 398
398 399 #ifdef CONFIG_PM
-13
drivers/pci/pcie/portdrv_pci.c
··· 185 185 #endif /* !PM */
186 186
187 187 /*
188 - * PCIe port runtime suspend is broken for some chipsets, so use a
189 - * black list to disable runtime PM for these chipsets.
190 - */
191 - static const struct pci_device_id port_runtime_pm_black_list[] = {
192 - { /* end: all zeroes */ }
193 - };
194 -
195 - /*
196 188 * pcie_portdrv_probe - Probe PCI-Express port devices
197 189 * @dev: PCI-Express port device being probed
198 190 *
··· 217 225 * it by default.
218 226 */
219 227 dev->d3cold_allowed = false;
220 - if (!pci_match_id(port_runtime_pm_black_list, dev))
221 - pm_runtime_put_noidle(&dev->dev);
222 -
223 228 return 0;
224 229 }
225 230
226 231 static void pcie_portdrv_remove(struct pci_dev *dev)
227 232 {
228 - if (!pci_match_id(port_runtime_pm_black_list, dev))
229 - pm_runtime_get_noresume(&dev->dev);
230 233 pcie_port_device_remove(dev);
231 234 pci_disable_device(dev);
232 235 }
+31 -36
drivers/pci/rom.c
··· 100 100 return min((size_t)(image - rom), size);
101 101 }
102 102
103 - static loff_t pci_find_rom(struct pci_dev *pdev, size_t *size)
104 - {
105 - struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
106 - loff_t start;
107 -
108 - /* assign the ROM an address if it doesn't have one */
109 - if (res->parent == NULL && pci_assign_resource(pdev, PCI_ROM_RESOURCE))
110 - return 0;
111 - start = pci_resource_start(pdev, PCI_ROM_RESOURCE);
112 - *size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
113 -
114 - if (*size == 0)
115 - return 0;
116 -
117 - /* Enable ROM space decodes */
118 - if (pci_enable_rom(pdev))
119 - return 0;
120 -
121 - return start;
122 - }
123 -
124 103 /**
125 104 * pci_map_rom - map a PCI ROM to kernel space
126 105 * @pdev: pointer to pci device struct
··· 114 135 void __iomem *pci_map_rom(struct pci_dev *pdev, size_t *size)
115 136 {
116 137 struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
117 - loff_t start = 0;
138 + loff_t start;
118 139 void __iomem *rom;
119 140
120 141 /*
··· 133 154 return (void __iomem *)(unsigned long)
134 155 pci_resource_start(pdev, PCI_ROM_RESOURCE);
135 156 } else {
136 - start = pci_find_rom(pdev, size);
157 + /* assign the ROM an address if it doesn't have one */
158 + if (res->parent == NULL &&
159 + pci_assign_resource(pdev,PCI_ROM_RESOURCE))
160 + return NULL;
161 + start = pci_resource_start(pdev, PCI_ROM_RESOURCE);
162 + *size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
163 + if (*size == 0)
164 + return NULL;
165 +
166 + /* Enable ROM space decodes */
167 + if (pci_enable_rom(pdev))
168 + return NULL;
137 169 }
138 170 }
139 -
140 - /*
141 - * Some devices may provide ROMs via a source other than the BAR
142 - */
143 - if (!start && pdev->rom && pdev->romlen) {
144 - *size = pdev->romlen;
145 - return phys_to_virt(pdev->rom);
146 - }
147 -
148 - if (!start)
149 - return NULL;
150 171
151 172 rom = ioremap(start, *size);
152 173 if (!rom) {
··· 181 202 if (res->flags & (IORESOURCE_ROM_COPY | IORESOURCE_ROM_BIOS_COPY))
182 203 return;
183 204
184 - if (!pdev->rom || !pdev->romlen)
185 - iounmap(rom);
205 + iounmap(rom);
186 206
187 207 /* Disable again before continuing, leave enabled if pci=rom */
188 208 if (!(res->flags & (IORESOURCE_ROM_ENABLE | IORESOURCE_ROM_SHADOW)))
··· 205 227 }
206 228 }
207 229
230 + /**
231 + * pci_platform_rom - provides a pointer to any ROM image provided by the
232 + * platform
233 + * @pdev: pointer to pci device struct
234 + * @size: pointer to receive size of pci window over ROM
235 + */
236 + void __iomem *pci_platform_rom(struct pci_dev *pdev, size_t *size)
237 + {
238 + if (pdev->rom && pdev->romlen) {
239 + *size = pdev->romlen;
240 + return phys_to_virt((phys_addr_t)pdev->rom);
241 + }
242 +
243 + return NULL;
244 + }
245 +
208 246 EXPORT_SYMBOL(pci_map_rom);
209 247 EXPORT_SYMBOL(pci_unmap_rom);
210 248 EXPORT_SYMBOL_GPL(pci_enable_rom);
211 249 EXPORT_SYMBOL_GPL(pci_disable_rom);
250 + EXPORT_SYMBOL(pci_platform_rom);
-4
drivers/platform/x86/hp-wmi.c
··· 134 134 { KE_KEY, 0x2142, { KEY_MEDIA } },
135 135 { KE_KEY, 0x213b, { KEY_INFO } },
136 136 { KE_KEY, 0x2169, { KEY_DIRECTION } },
137 - { KE_KEY, 0x216a, { KEY_SETUP } },
138 137 { KE_KEY, 0x231b, { KEY_HELP } },
139 138 { KE_END, 0 }
140 139 };
··· 924 925 err = hp_wmi_input_setup();
925 926 if (err)
926 927 return err;
927 -
928 - //Enable magic for hotkeys that run on the SMBus
929 - ec_write(0xe6,0x6e);
930 928 }
931 929
932 930 if (bios_capable) {
-10
drivers/platform/x86/thinkpad_acpi.c
··· 1964 1964 /* kthread for the hotkey poller */
1965 1965 static struct task_struct *tpacpi_hotkey_task;
1966 1966
1967 - /* Acquired while the poller kthread is running, use to sync start/stop */
1968 - static struct mutex hotkey_thread_mutex;
1969 -
1970 1967 /*
1971 1968 * Acquire mutex to write poller control variables as an
1972 1969 * atomic block.
··· 2459 2462 unsigned int poll_freq;
2460 2463 bool was_frozen;
2461 2464
2462 - mutex_lock(&hotkey_thread_mutex);
2463 -
2464 2465 if (tpacpi_lifecycle == TPACPI_LIFE_EXITING)
2465 2466 goto exit;
2466 2467
··· 2518 2523 }
2519 2524
2520 2525 exit:
2521 - mutex_unlock(&hotkey_thread_mutex);
2522 2526 return 0;
2523 2527 }
2524 2528
··· 2527 2533 if (tpacpi_hotkey_task) {
2528 2534 kthread_stop(tpacpi_hotkey_task);
2529 2535 tpacpi_hotkey_task = NULL;
2530 - mutex_lock(&hotkey_thread_mutex);
2531 - /* at this point, the thread did exit */
2532 - mutex_unlock(&hotkey_thread_mutex);
2533 2536 }
2534 2537 }
2535 2538
··· 3225 3234 mutex_init(&hotkey_mutex);
3226 3235
3227 3236 #ifdef CONFIG_THINKPAD_ACPI_HOTKEY_POLL
3228 - mutex_init(&hotkey_thread_mutex);
3229 3237 mutex_init(&hotkey_thread_data_mutex);
3230 3238 #endif
3231 3239
+1 -1
drivers/remoteproc/Kconfig
··· 4 4 config REMOTEPROC
5 5 tristate
6 6 depends on HAS_DMA
7 - select FW_CONFIG
7 + select FW_LOADER
8 8 select VIRTIO
9 9
10 10 config OMAP_REMOTEPROC
+4 -2
drivers/remoteproc/remoteproc_core.c
··· 217 217 * TODO: support predefined notifyids (via resource table)
218 218 */
219 219 ret = idr_alloc(&rproc->notifyids, rvring, 0, 0, GFP_KERNEL);
220 - if (ret) {
220 + if (ret < 0) {
221 221 dev_err(dev, "idr_alloc failed: %d\n", ret);
222 222 dma_free_coherent(dev->parent, size, va, dma);
223 223 return ret;
··· 366 366 /* it is now safe to add the virtio device */
367 367 ret = rproc_add_virtio_dev(rvdev, rsc->id);
368 368 if (ret)
369 - goto free_rvdev;
369 + goto remove_rvdev;
370 370
371 371 return 0;
372 372
373 + remove_rvdev:
374 + list_del(&rvdev->node);
373 375 free_rvdev:
374 376 kfree(rvdev);
375 377 return ret;
+6 -1
drivers/remoteproc/ste_modem_rproc.c
··· 240 240
241 241 /* Unregister as remoteproc device */
242 242 rproc_del(sproc->rproc);
243 + dma_free_coherent(sproc->rproc->dev.parent, SPROC_FW_SIZE,
244 + sproc->fw_addr, sproc->fw_dma_addr);
243 245 rproc_put(sproc->rproc);
244 246
245 247 mdev->drv_data = NULL;
··· 299 297 /* Register as a remoteproc device */
300 298 err = rproc_add(rproc);
301 299 if (err)
302 - goto free_rproc;
300 + goto free_mem;
303 301
304 302 return 0;
305 303
304 + free_mem:
305 + dma_free_coherent(rproc->dev.parent, SPROC_FW_SIZE,
306 + sproc->fw_addr, sproc->fw_dma_addr);
306 307 free_rproc:
307 308 /* Reset device data upon error */
308 309 mdev->drv_data = NULL;
+3
drivers/s390/net/qeth_core.h
··· 769 769 unsigned long thread_start_mask;
770 770 unsigned long thread_allowed_mask;
771 771 unsigned long thread_running_mask;
772 + struct task_struct *recovery_task;
772 773 spinlock_t ip_lock;
773 774 struct list_head ip_list;
774 775 struct list_head *ip_tbd_list;
··· 863 862 extern struct kmem_cache *qeth_core_header_cache;
864 863 extern struct qeth_dbf_info qeth_dbf[QETH_DBF_INFOS];
865 864
865 + void qeth_set_recovery_task(struct qeth_card *);
866 + void qeth_clear_recovery_task(struct qeth_card *);
866 867 void qeth_set_allowed_threads(struct qeth_card *, unsigned long , int);
867 868 int qeth_threads_running(struct qeth_card *, unsigned long);
868 869 int qeth_wait_for_threads(struct qeth_card *, unsigned long);
+19
drivers/s390/net/qeth_core_main.c
··· 177 177 return "n/a";
178 178 }
179 179
180 + void qeth_set_recovery_task(struct qeth_card *card)
181 + {
182 + card->recovery_task = current;
183 + }
184 + EXPORT_SYMBOL_GPL(qeth_set_recovery_task);
185 +
186 + void qeth_clear_recovery_task(struct qeth_card *card)
187 + {
188 + card->recovery_task = NULL;
189 + }
190 + EXPORT_SYMBOL_GPL(qeth_clear_recovery_task);
191 +
192 + static bool qeth_is_recovery_task(const struct qeth_card *card)
193 + {
194 + return card->recovery_task == current;
195 + }
196 +
180 197 void qeth_set_allowed_threads(struct qeth_card *card, unsigned long threads,
181 198 int clear_start_mask)
182 199 {
··· 222 205
223 206 int qeth_wait_for_threads(struct qeth_card *card, unsigned long threads)
224 207 {
208 + if (qeth_is_recovery_task(card))
209 + return 0;
225 210 return wait_event_interruptible(card->wait_q,
226 211 qeth_threads_running(card, threads) == 0);
227 212 }
+2
drivers/s390/net/qeth_l2_main.c
··· 1144 1144 QETH_CARD_TEXT(card, 2, "recover2");
1145 1145 dev_warn(&card->gdev->dev,
1146 1146 "A recovery process has been started for the device\n");
1147 + qeth_set_recovery_task(card);
1147 1148 __qeth_l2_set_offline(card->gdev, 1);
1148 1149 rc = __qeth_l2_set_online(card->gdev, 1);
1149 1150 if (!rc)
··· 1155 1154 dev_warn(&card->gdev->dev, "The qeth device driver "
1156 1155 "failed to recover an error on the device\n");
1157 1156 }
1157 + qeth_clear_recovery_task(card);
1158 1158 qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
1159 1159 qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
1160 1160 return 0;
+2
drivers/s390/net/qeth_l3_main.c
··· 3520 3520 QETH_CARD_TEXT(card, 2, "recover2");
3521 3521 dev_warn(&card->gdev->dev,
3522 3522 "A recovery process has been started for the device\n");
3523 + qeth_set_recovery_task(card);
3523 3524 __qeth_l3_set_offline(card->gdev, 1);
3524 3525 rc = __qeth_l3_set_online(card->gdev, 1);
3525 3526 if (!rc)
··· 3531 3530 dev_warn(&card->gdev->dev, "The qeth device driver "
3532 3531 "failed to recover an error on the device\n");
3533 3532 }
3533 + qeth_clear_recovery_task(card);
3534 3534 qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
3535 3535 qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
3536 3536 return 0;
+2 -2
drivers/sbus/char/bbc_i2c.c
··· 282 282 return IRQ_HANDLED;
283 283 }
284 284
285 - static void __init reset_one_i2c(struct bbc_i2c_bus *bp)
285 + static void reset_one_i2c(struct bbc_i2c_bus *bp)
286 286 {
287 287 writeb(I2C_PCF_PIN, bp->i2c_control_regs + 0x0);
288 288 writeb(bp->own, bp->i2c_control_regs + 0x1);
··· 291 291 writeb(I2C_PCF_IDLE, bp->i2c_control_regs + 0x0);
292 292 }
293 293
294 - static struct bbc_i2c_bus * __init attach_one_i2c(struct platform_device *op, int index)
294 + static struct bbc_i2c_bus * attach_one_i2c(struct platform_device *op, int index)
295 295 {
296 296 struct bbc_i2c_bus *bp;
297 297 struct device_node *dp;
+1 -1
drivers/scsi/ibmvscsi/ibmvscsi.c
··· 1899 1899 sdev->allow_restart = 1;
1900 1900 blk_queue_rq_timeout(sdev->request_queue, 120 * HZ);
1901 1901 }
1902 - scsi_adjust_queue_depth(sdev, 0, shost->cmd_per_lun);
1903 1902 spin_unlock_irqrestore(shost->host_lock, lock_flags);
1903 + scsi_adjust_queue_depth(sdev, 0, shost->cmd_per_lun);
1904 1904 return 0;
1905 1905 }
1906 1906
+10 -3
drivers/scsi/ipr.c
··· 5148 5148 ipr_trace;
5149 5149 }
5150 5150
5151 - list_add_tail(&ipr_cmd->queue, &hrrq->hrrq_free_q);
5151 + list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
5152 5152 if (!ipr_is_naca_model(res))
5153 5153 res->needs_sync_complete = 1;
5154 5154
··· 9349 9349 int_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
9350 9350 spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
9351 9351
9352 - rc = request_irq(pdev->irq, ipr_test_intr, 0, IPR_NAME, ioa_cfg);
9352 + if (ioa_cfg->intr_flag == IPR_USE_MSIX)
9353 + rc = request_irq(ioa_cfg->vectors_info[0].vec, ipr_test_intr, 0, IPR_NAME, ioa_cfg);
9354 + else
9355 + rc = request_irq(pdev->irq, ipr_test_intr, 0, IPR_NAME, ioa_cfg);
9353 9356 if (rc) {
9354 9357 dev_err(&pdev->dev, "Can not assign irq %d\n", pdev->irq);
9355 9358 return rc;
··· 9374 9371
9375 9372 spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
9376 9373
9377 - free_irq(pdev->irq, ioa_cfg);
9374 + if (ioa_cfg->intr_flag == IPR_USE_MSIX)
9375 + free_irq(ioa_cfg->vectors_info[0].vec, ioa_cfg);
9376 + else
9377 + free_irq(pdev->irq, ioa_cfg);
9378 9378
9379 9379 LEAVE;
9380 9380
··· 9728 9722 spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags);
9729 9723 wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
9730 9724 flush_work(&ioa_cfg->work_q);
9725 + INIT_LIST_HEAD(&ioa_cfg->used_res_q);
9731 9726 spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
9732 9727
9733 9728 spin_lock(&ipr_driver_lock);
+13 -1
drivers/scsi/libsas/sas_expander.c
··· 235 235 linkrate = phy->linkrate;
236 236 memcpy(sas_addr, phy->attached_sas_addr, SAS_ADDR_SIZE);
237 237
238 + /* Handle vacant phy - rest of dr data is not valid so skip it */
239 + if (phy->phy_state == PHY_VACANT) {
240 + memset(phy->attached_sas_addr, 0, SAS_ADDR_SIZE);
241 + phy->attached_dev_type = NO_DEVICE;
242 + if (!test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state)) {
243 + phy->phy_id = phy_id;
244 + goto skip;
245 + } else
246 + goto out;
247 + }
248 +
238 249 phy->attached_dev_type = to_dev_type(dr);
239 250 if (test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state))
240 251 goto out;
··· 283 272 phy->phy->maximum_linkrate = dr->pmax_linkrate;
284 273 phy->phy->negotiated_linkrate = phy->linkrate;
285 274
275 + skip:
286 276 if (new_phy)
287 277 if (sas_phy_add(phy->phy)) {
288 278 sas_phy_free(phy->phy);
··· 400 388 if (!disc_req)
401 389 return -ENOMEM;
402 390
403 - disc_resp = alloc_smp_req(DISCOVER_RESP_SIZE);
391 + disc_resp = alloc_smp_resp(DISCOVER_RESP_SIZE);
404 392 if (!disc_resp) {
405 393 kfree(disc_req);
406 394 return -ENOMEM;
+2 -1
drivers/scsi/lpfc/lpfc_sli.c
··· 438 438 struct lpfc_rqe *temp_hrqe;
439 439 struct lpfc_rqe *temp_drqe;
440 440 struct lpfc_register doorbell;
441 - int put_index = hq->host_index;
441 + int put_index;
442 442
443 443 /* sanity check on queue memory */
444 444 if (unlikely(!hq) || unlikely(!dq))
445 445 return -ENOMEM;
446 + put_index = hq->host_index;
446 447 temp_hrqe = hq->qe[hq->host_index].rqe;
447 448 temp_drqe = dq->qe[dq->host_index].rqe;
448 449
-5
drivers/scsi/qla2xxx/qla_attr.c
··· 1938 1938 "Timer for the VP[%d] has stopped\n", vha->vp_idx);
1939 1939 }
1940 1940
1941 - /* No pending activities shall be there on the vha now */
1942 - if (ql2xextended_error_logging & ql_dbg_user)
1943 - msleep(random32()%10); /* Just to see if something falls on
1944 - * the net we have placed below */
1945 -
1946 1941 BUG_ON(atomic_read(&vha->vref_count));
1947 1942
1948 1943 qla2x00_free_fcports(vha);
+2 -1
drivers/scsi/qla2xxx/qla_dbg.c
··· 15 15 * | Mailbox commands | 0x115b | 0x111a-0x111b |
16 16 * | | | 0x112c-0x112e |
17 17 * | | | 0x113a |
18 + * | | | 0x1155-0x1158 |
18 19 * | Device Discovery | 0x2087 | 0x2020-0x2022, |
19 20 * | | | 0x2016 |
20 21 * | Queue Command and IO tracing | 0x3031 | 0x3006-0x300b |
··· 402 401 void *ring;
403 402 } aq, *aqp;
404 403
405 - if (!ha->tgt.atio_q_length)
404 + if (!ha->tgt.atio_ring)
406 405 return ptr;
407 406
408 407 num_queues = 1;
-1
drivers/scsi/qla2xxx/qla_def.h
··· 863 863 #define MBX_1 BIT_1
864 864 #define MBX_0 BIT_0
865 865
866 - #define RNID_TYPE_SET_VERSION 0x9
867 866 #define RNID_TYPE_ASIC_TEMP 0xC
868 867
869 868 /*
-3
drivers/scsi/qla2xxx/qla_gbl.h
··· 358 358 qla2x00_disable_fce_trace(scsi_qla_host_t *, uint64_t *, uint64_t *);
359 359
360 360 extern int
361 - qla2x00_set_driver_version(scsi_qla_host_t *, char *);
362 -
363 - extern int
364 361 qla2x00_read_sfp(scsi_qla_host_t *, dma_addr_t, uint8_t *,
365 362 uint16_t, uint16_t, uint16_t, uint16_t);
366 363
+1 -3
drivers/scsi/qla2xxx/qla_init.c
··· 619 619 if (IS_QLA24XX_TYPE(ha) || IS_QLA25XX(ha))
620 620 qla24xx_read_fcp_prio_cfg(vha);
621 621
622 - qla2x00_set_driver_version(vha, QLA2XXX_VERSION);
623 -
624 622 return (rval);
625 623 }
626 624
··· 1397 1399 mq_size += ha->max_rsp_queues *
1398 1400 (rsp->length * sizeof(response_t));
1399 1401 }
1400 - if (ha->tgt.atio_q_length)
1402 + if (ha->tgt.atio_ring)
1401 1403 mq_size += ha->tgt.atio_q_length * sizeof(request_t);
1402 1404 /* Allocate memory for Fibre Channel Event Buffer. */
1403 1405 if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha))
-58
drivers/scsi/qla2xxx/qla_mbx.c
··· 3866 3866 return rval;
3867 3867 }
3868 3868
3869 - int
3870 - qla2x00_set_driver_version(scsi_qla_host_t *vha, char *version)
3871 - {
3872 - int rval;
3873 - mbx_cmd_t mc;
3874 - mbx_cmd_t *mcp = &mc;
3875 - int len;
3876 - uint16_t dwlen;
3877 - uint8_t *str;
3878 - dma_addr_t str_dma;
3879 - struct qla_hw_data *ha = vha->hw;
3880 -
3881 - if (!IS_FWI2_CAPABLE(ha) || IS_QLA82XX(ha))
3882 - return QLA_FUNCTION_FAILED;
3883 -
3884 - ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1155,
3885 - "Entered %s.\n", __func__);
3886 -
3887 - str = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &str_dma);
3888 - if (!str) {
3889 - ql_log(ql_log_warn, vha, 0x1156,
3890 - "Failed to allocate driver version param.\n");
3891 - return QLA_MEMORY_ALLOC_FAILED;
3892 - }
3893 -
3894 - memcpy(str, "\x7\x3\x11\x0", 4);
3895 - dwlen = str[0];
3896 - len = dwlen * sizeof(uint32_t) - 4;
3897 - memset(str + 4, 0, len);
3898 - if (len > strlen(version))
3899 - len = strlen(version);
3900 - memcpy(str + 4, version, len);
3901 -
3902 - mcp->mb[0] = MBC_SET_RNID_PARAMS;
3903 - mcp->mb[1] = RNID_TYPE_SET_VERSION << 8 | dwlen;
3904 - mcp->mb[2] = MSW(LSD(str_dma));
3905 - mcp->mb[3] = LSW(LSD(str_dma));
3906 - mcp->mb[6] = MSW(MSD(str_dma));
3907 - mcp->mb[7] = LSW(MSD(str_dma));
3908 - mcp->out_mb = MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
3909 - mcp->in_mb = MBX_0;
3910 - mcp->tov = MBX_TOV_SECONDS;
3911 - mcp->flags = 0;
3912 - rval = qla2x00_mailbox_command(vha, mcp);
3913 -
3914 - if (rval != QLA_SUCCESS) {
3915 - ql_dbg(ql_dbg_mbx, vha, 0x1157,
3916 - "Failed=%x mb[0]=%x.\n", rval, mcp->mb[0]);
3917 - } else {
3918 - ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1158,
3919 - "Done %s.\n", __func__);
3920 - }
3921 -
3922 - dma_pool_free(ha->s_dma_pool, str, str_dma);
3923 -
3924 - return rval;
3925 - }
3926 -
3927 3869 static int
3928 3870 qla2x00_read_asic_temperature(scsi_qla_host_t *vha, uint16_t *temp)
3929 3871 {
+1 -1
drivers/scsi/qla2xxx/qla_version.h
··· 7 7 /* 8 8 * Driver version 9 9 */ 10 - #define QLA2XXX_VERSION "8.04.00.08-k" 10 + #define QLA2XXX_VERSION "8.04.00.13-k" 11 11 12 12 #define QLA_DRIVER_MAJOR_VER 8 13 13 #define QLA_DRIVER_MINOR_VER 4
+7 -1
drivers/scsi/st.c
··· 4112 4112 tpnt->disk = disk; 4113 4113 disk->private_data = &tpnt->driver; 4114 4114 disk->queue = SDp->request_queue; 4115 + /* SCSI tape doesn't register this gendisk via add_disk(). Manually 4116 + * take queue reference that release_disk() expects. */ 4117 + if (!blk_get_queue(disk->queue)) 4118 + goto out_put_disk; 4115 4119 tpnt->driver = &st_template; 4116 4120 4117 4121 tpnt->device = SDp; ··· 4189 4185 idr_preload_end(); 4190 4186 if (error < 0) { 4191 4187 pr_warn("st: idr allocation failed: %d\n", error); 4192 - goto out_put_disk; 4188 + goto out_put_queue; 4193 4189 } 4194 4190 tpnt->index = error; 4195 4191 sprintf(disk->disk_name, "st%d", tpnt->index); ··· 4215 4211 spin_lock(&st_index_lock); 4216 4212 idr_remove(&st_index_idr, tpnt->index); 4217 4213 spin_unlock(&st_index_lock); 4214 + out_put_queue: 4215 + blk_put_queue(disk->queue); 4218 4216 out_put_disk: 4219 4217 put_disk(disk); 4220 4218 kfree(tpnt);
+29
drivers/ssb/driver_chipcommon_pmu.c
··· 670 670 return 0; 671 671 } 672 672 } 673 + 674 + void ssb_pmu_spuravoid_pllupdate(struct ssb_chipcommon *cc, int spuravoid) 675 + { 676 + u32 pmu_ctl = 0; 677 + 678 + switch (cc->dev->bus->chip_id) { 679 + case 0x4322: 680 + ssb_chipco_pll_write(cc, SSB_PMU1_PLLCTL0, 0x11100070); 681 + ssb_chipco_pll_write(cc, SSB_PMU1_PLLCTL1, 0x1014140a); 682 + ssb_chipco_pll_write(cc, SSB_PMU1_PLLCTL5, 0x88888854); 683 + if (spuravoid == 1) 684 + ssb_chipco_pll_write(cc, SSB_PMU1_PLLCTL2, 0x05201828); 685 + else 686 + ssb_chipco_pll_write(cc, SSB_PMU1_PLLCTL2, 0x05001828); 687 + pmu_ctl = SSB_CHIPCO_PMU_CTL_PLL_UPD; 688 + break; 689 + case 43222: 690 + /* TODO: BCM43222 requires updating PLLs too */ 691 + return; 692 + default: 693 + ssb_printk(KERN_ERR PFX 694 + "Unknown spuravoidance settings for chip 0x%04X, not changing PLL\n", 695 + cc->dev->bus->chip_id); 696 + return; 697 + } 698 + 699 + chipco_set32(cc, SSB_CHIPCO_PMU_CTL, pmu_ctl); 700 + } 701 + EXPORT_SYMBOL_GPL(ssb_pmu_spuravoid_pllupdate);
+3
drivers/target/target_core_alua.c
··· 409 409 case REPORT_LUNS: 410 410 case RECEIVE_DIAGNOSTIC: 411 411 case SEND_DIAGNOSTIC: 412 + return 0; 412 413 case MAINTENANCE_IN: 413 414 switch (cdb[1] & 0x1f) { 414 415 case MI_REPORT_TARGET_PGS: ··· 452 451 switch (cdb[0]) { 453 452 case INQUIRY: 454 453 case REPORT_LUNS: 454 + return 0; 455 455 case MAINTENANCE_IN: 456 456 switch (cdb[1] & 0x1f) { 457 457 case MI_REPORT_TARGET_PGS: ··· 493 491 switch (cdb[0]) { 494 492 case INQUIRY: 495 493 case REPORT_LUNS: 494 + return 0; 496 495 case MAINTENANCE_IN: 497 496 switch (cdb[1] & 0x1f) { 498 497 case MI_REPORT_TARGET_PGS:
+4 -4
drivers/tty/mxser.c
··· 2643 2643 mxvar_sdriver, brd->idx + i, &pdev->dev); 2644 2644 if (IS_ERR(tty_dev)) { 2645 2645 retval = PTR_ERR(tty_dev); 2646 - for (i--; i >= 0; i--) 2646 + for (; i > 0; i--) 2647 2647 tty_unregister_device(mxvar_sdriver, 2648 - brd->idx + i); 2648 + brd->idx + i - 1); 2649 2649 goto err_relbrd; 2650 2650 } 2651 2651 } ··· 2751 2751 tty_dev = tty_port_register_device(&brd->ports[i].port, 2752 2752 mxvar_sdriver, brd->idx + i, NULL); 2753 2753 if (IS_ERR(tty_dev)) { 2754 - for (i--; i >= 0; i--) 2754 + for (; i > 0; i--) 2755 2755 tty_unregister_device(mxvar_sdriver, 2756 - brd->idx + i); 2756 + brd->idx + i - 1); 2757 2757 for (i = 0; i < brd->info->nports; i++) 2758 2758 tty_port_destroy(&brd->ports[i].port); 2759 2759 free_irq(brd->irq, brd);
+5 -7
drivers/tty/serial/8250/8250_pnp.c
··· 429 429 { 430 430 struct uart_8250_port uart; 431 431 int ret, line, flags = dev_id->driver_data; 432 - struct resource *res = NULL; 433 432 434 433 if (flags & UNKNOWN_DEV) { 435 434 ret = serial_pnp_guess_board(dev); ··· 439 440 memset(&uart, 0, sizeof(uart)); 440 441 if (pnp_irq_valid(dev, 0)) 441 442 uart.port.irq = pnp_irq(dev, 0); 442 - if ((flags & CIR_PORT) && pnp_port_valid(dev, 2)) 443 - res = pnp_get_resource(dev, IORESOURCE_IO, 2); 444 - else if (pnp_port_valid(dev, 0)) 445 - res = pnp_get_resource(dev, IORESOURCE_IO, 0); 446 - if (pnp_resource_enabled(res)) { 447 - uart.port.iobase = res->start; 443 + if ((flags & CIR_PORT) && pnp_port_valid(dev, 2)) { 444 + uart.port.iobase = pnp_port_start(dev, 2); 445 + uart.port.iotype = UPIO_PORT; 446 + } else if (pnp_port_valid(dev, 0)) { 447 + uart.port.iobase = pnp_port_start(dev, 0); 448 448 uart.port.iotype = UPIO_PORT; 449 449 } else if (pnp_mem_valid(dev, 0)) { 450 450 uart.port.mapbase = pnp_mem_start(dev, 0);
+11
drivers/tty/serial/omap-serial.c
··· 886 886 serial_out(up, UART_MCR, up->mcr | UART_MCR_TCRTLR); 887 887 /* FIFO ENABLE, DMA MODE */ 888 888 889 + up->scr |= OMAP_UART_SCR_RX_TRIG_GRANU1_MASK; 890 + /* 891 + * NOTE: Setting OMAP_UART_SCR_RX_TRIG_GRANU1_MASK 892 + * sets Enables the granularity of 1 for TRIGGER RX 893 + * level. Along with setting RX FIFO trigger level 894 + * to 1 (as noted below, 16 characters) and TLR[3:0] 895 + * to zero this will result RX FIFO threshold level 896 + * to 1 character, instead of 16 as noted in comment 897 + * below. 898 + */ 899 + 889 900 /* Set receive FIFO threshold to 16 characters and 890 901 * transmit FIFO threshold to 16 spaces 891 902 */
+2 -1
drivers/vfio/pci/vfio_pci.c
··· 346 346 347 347 if (!(hdr.flags & VFIO_IRQ_SET_DATA_NONE)) { 348 348 size_t size; 349 + int max = vfio_pci_get_irq_count(vdev, hdr.index); 349 350 350 351 if (hdr.flags & VFIO_IRQ_SET_DATA_BOOL) 351 352 size = sizeof(uint8_t); ··· 356 355 return -EINVAL; 357 356 358 357 if (hdr.argsz - minsz < hdr.count * size || 359 - hdr.count > vfio_pci_get_irq_count(vdev, hdr.index)) 358 + hdr.start >= max || hdr.start + hdr.count > max) 360 359 return -EINVAL; 361 360 362 361 data = memdup_user((void __user *)(arg + minsz),
+130 -68
drivers/vhost/tcm_vhost.c
··· 74 74 75 75 struct vhost_scsi { 76 76 /* Protected by vhost_scsi->dev.mutex */ 77 - struct tcm_vhost_tpg *vs_tpg[VHOST_SCSI_MAX_TARGET]; 77 + struct tcm_vhost_tpg **vs_tpg; 78 78 char vs_vhost_wwpn[TRANSPORT_IQN_LEN]; 79 - bool vs_endpoint; 80 79 81 80 struct vhost_dev dev; 82 81 struct vhost_virtqueue vqs[VHOST_SCSI_MAX_VQ]; ··· 578 579 } 579 580 } 580 581 582 + static void vhost_scsi_send_bad_target(struct vhost_scsi *vs, 583 + struct vhost_virtqueue *vq, int head, unsigned out) 584 + { 585 + struct virtio_scsi_cmd_resp __user *resp; 586 + struct virtio_scsi_cmd_resp rsp; 587 + int ret; 588 + 589 + memset(&rsp, 0, sizeof(rsp)); 590 + rsp.response = VIRTIO_SCSI_S_BAD_TARGET; 591 + resp = vq->iov[out].iov_base; 592 + ret = __copy_to_user(resp, &rsp, sizeof(rsp)); 593 + if (!ret) 594 + vhost_add_used_and_signal(&vs->dev, vq, head, 0); 595 + else 596 + pr_err("Faulted on virtio_scsi_cmd_resp\n"); 597 + } 598 + 581 599 static void vhost_scsi_handle_vq(struct vhost_scsi *vs, 582 600 struct vhost_virtqueue *vq) 583 601 { 602 + struct tcm_vhost_tpg **vs_tpg; 584 603 struct virtio_scsi_cmd_req v_req; 585 604 struct tcm_vhost_tpg *tv_tpg; 586 605 struct tcm_vhost_cmd *tv_cmd; ··· 607 590 int head, ret; 608 591 u8 target; 609 592 610 - /* Must use ioctl VHOST_SCSI_SET_ENDPOINT */ 611 - if (unlikely(!vs->vs_endpoint)) 593 + /* 594 + * We can handle the vq only after the endpoint is setup by calling the 595 + * VHOST_SCSI_SET_ENDPOINT ioctl. 596 + * 597 + * TODO: Check that we are running from vhost_worker which acts 598 + * as read-side critical section for vhost kind of RCU. 
599 + * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h 600 + */ 601 + vs_tpg = rcu_dereference_check(vq->private_data, 1); 602 + if (!vs_tpg) 612 603 return; 613 604 614 605 mutex_lock(&vq->mutex); ··· 686 661 687 662 /* Extract the tpgt */ 688 663 target = v_req.lun[1]; 689 - tv_tpg = vs->vs_tpg[target]; 664 + tv_tpg = ACCESS_ONCE(vs_tpg[target]); 690 665 691 666 /* Target does not exist, fail the request */ 692 667 if (unlikely(!tv_tpg)) { 693 - struct virtio_scsi_cmd_resp __user *resp; 694 - struct virtio_scsi_cmd_resp rsp; 695 - 696 - memset(&rsp, 0, sizeof(rsp)); 697 - rsp.response = VIRTIO_SCSI_S_BAD_TARGET; 698 - resp = vq->iov[out].iov_base; 699 - ret = __copy_to_user(resp, &rsp, sizeof(rsp)); 700 - if (!ret) 701 - vhost_add_used_and_signal(&vs->dev, 702 - vq, head, 0); 703 - else 704 - pr_err("Faulted on virtio_scsi_cmd_resp\n"); 705 - 668 + vhost_scsi_send_bad_target(vs, vq, head, out); 706 669 continue; 707 670 } 708 671 ··· 703 690 if (IS_ERR(tv_cmd)) { 704 691 vq_err(vq, "vhost_scsi_allocate_cmd failed %ld\n", 705 692 PTR_ERR(tv_cmd)); 706 - break; 693 + goto err_cmd; 707 694 } 708 695 pr_debug("Allocated tv_cmd: %p exp_data_len: %d, data_direction" 709 696 ": %d\n", tv_cmd, exp_data_len, data_direction); 710 697 711 698 tv_cmd->tvc_vhost = vs; 712 699 tv_cmd->tvc_vq = vq; 713 - 714 - if (unlikely(vq->iov[out].iov_len != 715 - sizeof(struct virtio_scsi_cmd_resp))) { 716 - vq_err(vq, "Expecting virtio_scsi_cmd_resp, got %zu" 717 - " bytes, out: %d, in: %d\n", 718 - vq->iov[out].iov_len, out, in); 719 - break; 720 - } 721 - 722 700 tv_cmd->tvc_resp = vq->iov[out].iov_base; 723 701 724 702 /* ··· 729 725 " exceeds SCSI_MAX_VARLEN_CDB_SIZE: %d\n", 730 726 scsi_command_size(tv_cmd->tvc_cdb), 731 727 TCM_VHOST_MAX_CDB_SIZE); 732 - break; /* TODO */ 728 + goto err_free; 733 729 } 734 730 tv_cmd->tvc_lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF; 735 731 ··· 742 738 data_direction == DMA_TO_DEVICE); 743 739 if (unlikely(ret)) { 744 
740 vq_err(vq, "Failed to map iov to sgl\n"); 745 - break; /* TODO */ 741 + goto err_free; 746 742 } 747 743 } 748 744 ··· 762 758 queue_work(tcm_vhost_workqueue, &tv_cmd->work); 763 759 } 764 760 761 + mutex_unlock(&vq->mutex); 762 + return; 763 + 764 + err_free: 765 + vhost_scsi_free_cmd(tv_cmd); 766 + err_cmd: 767 + vhost_scsi_send_bad_target(vs, vq, head, out); 765 768 mutex_unlock(&vq->mutex); 766 769 } 767 770 ··· 791 780 vhost_scsi_handle_vq(vs, vq); 792 781 } 793 782 783 + static void vhost_scsi_flush_vq(struct vhost_scsi *vs, int index) 784 + { 785 + vhost_poll_flush(&vs->dev.vqs[index].poll); 786 + } 787 + 788 + static void vhost_scsi_flush(struct vhost_scsi *vs) 789 + { 790 + int i; 791 + 792 + for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) 793 + vhost_scsi_flush_vq(vs, i); 794 + vhost_work_flush(&vs->dev, &vs->vs_completion_work); 795 + } 796 + 794 797 /* 795 798 * Called from vhost_scsi_ioctl() context to walk the list of available 796 799 * tcm_vhost_tpg with an active struct tcm_vhost_nexus ··· 815 790 { 816 791 struct tcm_vhost_tport *tv_tport; 817 792 struct tcm_vhost_tpg *tv_tpg; 793 + struct tcm_vhost_tpg **vs_tpg; 794 + struct vhost_virtqueue *vq; 795 + int index, ret, i, len; 818 796 bool match = false; 819 - int index, ret; 820 797 821 798 mutex_lock(&vs->dev.mutex); 822 799 /* Verify that ring has been setup correctly. 
*/ ··· 829 802 return -EFAULT; 830 803 } 831 804 } 805 + 806 + len = sizeof(vs_tpg[0]) * VHOST_SCSI_MAX_TARGET; 807 + vs_tpg = kzalloc(len, GFP_KERNEL); 808 + if (!vs_tpg) { 809 + mutex_unlock(&vs->dev.mutex); 810 + return -ENOMEM; 811 + } 812 + if (vs->vs_tpg) 813 + memcpy(vs_tpg, vs->vs_tpg, len); 832 814 833 815 mutex_lock(&tcm_vhost_mutex); 834 816 list_for_each_entry(tv_tpg, &tcm_vhost_list, tv_tpg_list) { ··· 853 817 tv_tport = tv_tpg->tport; 854 818 855 819 if (!strcmp(tv_tport->tport_name, t->vhost_wwpn)) { 856 - if (vs->vs_tpg[tv_tpg->tport_tpgt]) { 820 + if (vs->vs_tpg && vs->vs_tpg[tv_tpg->tport_tpgt]) { 857 821 mutex_unlock(&tv_tpg->tv_tpg_mutex); 858 822 mutex_unlock(&tcm_vhost_mutex); 859 823 mutex_unlock(&vs->dev.mutex); 824 + kfree(vs_tpg); 860 825 return -EEXIST; 861 826 } 862 827 tv_tpg->tv_tpg_vhost_count++; 863 - vs->vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; 828 + vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; 864 829 smp_mb__after_atomic_inc(); 865 830 match = true; 866 831 } ··· 872 835 if (match) { 873 836 memcpy(vs->vs_vhost_wwpn, t->vhost_wwpn, 874 837 sizeof(vs->vs_vhost_wwpn)); 875 - vs->vs_endpoint = true; 838 + for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) { 839 + vq = &vs->vqs[i]; 840 + /* Flushing the vhost_work acts as synchronize_rcu */ 841 + mutex_lock(&vq->mutex); 842 + rcu_assign_pointer(vq->private_data, vs_tpg); 843 + vhost_init_used(vq); 844 + mutex_unlock(&vq->mutex); 845 + } 876 846 ret = 0; 877 847 } else { 878 848 ret = -EEXIST; 879 849 } 850 + 851 + /* 852 + * Act as synchronize_rcu to make sure access to 853 + * old vs->vs_tpg is finished. 
854 + */ 855 + vhost_scsi_flush(vs); 856 + kfree(vs->vs_tpg); 857 + vs->vs_tpg = vs_tpg; 880 858 881 859 mutex_unlock(&vs->dev.mutex); 882 860 return ret; ··· 903 851 { 904 852 struct tcm_vhost_tport *tv_tport; 905 853 struct tcm_vhost_tpg *tv_tpg; 854 + struct vhost_virtqueue *vq; 855 + bool match = false; 906 856 int index, ret, i; 907 857 u8 target; 908 858 ··· 916 862 goto err_dev; 917 863 } 918 864 } 865 + 866 + if (!vs->vs_tpg) { 867 + mutex_unlock(&vs->dev.mutex); 868 + return 0; 869 + } 870 + 919 871 for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) { 920 872 target = i; 921 - 922 873 tv_tpg = vs->vs_tpg[target]; 923 874 if (!tv_tpg) 924 875 continue; ··· 945 886 } 946 887 tv_tpg->tv_tpg_vhost_count--; 947 888 vs->vs_tpg[target] = NULL; 948 - vs->vs_endpoint = false; 889 + match = true; 949 890 mutex_unlock(&tv_tpg->tv_tpg_mutex); 950 891 } 892 + if (match) { 893 + for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) { 894 + vq = &vs->vqs[i]; 895 + /* Flushing the vhost_work acts as synchronize_rcu */ 896 + mutex_lock(&vq->mutex); 897 + rcu_assign_pointer(vq->private_data, NULL); 898 + mutex_unlock(&vq->mutex); 899 + } 900 + } 901 + /* 902 + * Act as synchronize_rcu to make sure access to 903 + * old vs->vs_tpg is finished. 
904 + */ 905 + vhost_scsi_flush(vs); 906 + kfree(vs->vs_tpg); 907 + vs->vs_tpg = NULL; 951 908 mutex_unlock(&vs->dev.mutex); 909 + 952 910 return 0; 953 911 954 912 err_tpg: ··· 973 897 err_dev: 974 898 mutex_unlock(&vs->dev.mutex); 975 899 return ret; 900 + } 901 + 902 + static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features) 903 + { 904 + if (features & ~VHOST_SCSI_FEATURES) 905 + return -EOPNOTSUPP; 906 + 907 + mutex_lock(&vs->dev.mutex); 908 + if ((features & (1 << VHOST_F_LOG_ALL)) && 909 + !vhost_log_access_ok(&vs->dev)) { 910 + mutex_unlock(&vs->dev.mutex); 911 + return -EFAULT; 912 + } 913 + vs->dev.acked_features = features; 914 + smp_wmb(); 915 + vhost_scsi_flush(vs); 916 + mutex_unlock(&vs->dev.mutex); 917 + return 0; 976 918 } 977 919 978 920 static int vhost_scsi_open(struct inode *inode, struct file *f) ··· 1030 936 vhost_dev_stop(&s->dev); 1031 937 vhost_dev_cleanup(&s->dev, false); 1032 938 kfree(s); 1033 - return 0; 1034 - } 1035 - 1036 - static void vhost_scsi_flush_vq(struct vhost_scsi *vs, int index) 1037 - { 1038 - vhost_poll_flush(&vs->dev.vqs[index].poll); 1039 - } 1040 - 1041 - static void vhost_scsi_flush(struct vhost_scsi *vs) 1042 - { 1043 - int i; 1044 - 1045 - for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) 1046 - vhost_scsi_flush_vq(vs, i); 1047 - vhost_work_flush(&vs->dev, &vs->vs_completion_work); 1048 - } 1049 - 1050 - static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features) 1051 - { 1052 - if (features & ~VHOST_SCSI_FEATURES) 1053 - return -EOPNOTSUPP; 1054 - 1055 - mutex_lock(&vs->dev.mutex); 1056 - if ((features & (1 << VHOST_F_LOG_ALL)) && 1057 - !vhost_log_access_ok(&vs->dev)) { 1058 - mutex_unlock(&vs->dev.mutex); 1059 - return -EFAULT; 1060 - } 1061 - vs->dev.acked_features = features; 1062 - smp_wmb(); 1063 - vhost_scsi_flush(vs); 1064 - mutex_unlock(&vs->dev.mutex); 1065 939 return 0; 1066 940 } 1067 941
+14 -25
drivers/video/fbmem.c
··· 1373 1373 { 1374 1374 struct fb_info *info = file_fb_info(file); 1375 1375 struct fb_ops *fb; 1376 - unsigned long off; 1376 + unsigned long mmio_pgoff; 1377 1377 unsigned long start; 1378 1378 u32 len; 1379 1379 1380 1380 if (!info) 1381 1381 return -ENODEV; 1382 - if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT)) 1383 - return -EINVAL; 1384 - off = vma->vm_pgoff << PAGE_SHIFT; 1385 1382 fb = info->fbops; 1386 1383 if (!fb) 1387 1384 return -ENODEV; ··· 1390 1393 return res; 1391 1394 } 1392 1395 1393 - /* frame buffer memory */ 1396 + /* 1397 + * Ugh. This can be either the frame buffer mapping, or 1398 + * if pgoff points past it, the mmio mapping. 1399 + */ 1394 1400 start = info->fix.smem_start; 1395 - len = PAGE_ALIGN((start & ~PAGE_MASK) + info->fix.smem_len); 1396 - if (off >= len) { 1397 - /* memory mapped io */ 1398 - off -= len; 1399 - if (info->var.accel_flags) { 1400 - mutex_unlock(&info->mm_lock); 1401 - return -EINVAL; 1402 - } 1401 + len = info->fix.smem_len; 1402 + mmio_pgoff = PAGE_ALIGN((start & ~PAGE_MASK) + len) >> PAGE_SHIFT; 1403 + if (vma->vm_pgoff >= mmio_pgoff) { 1404 + vma->vm_pgoff -= mmio_pgoff; 1403 1405 start = info->fix.mmio_start; 1404 - len = PAGE_ALIGN((start & ~PAGE_MASK) + info->fix.mmio_len); 1406 + len = info->fix.mmio_len; 1405 1407 } 1406 1408 mutex_unlock(&info->mm_lock); 1407 - start &= PAGE_MASK; 1408 - if ((vma->vm_end - vma->vm_start + off) > len) 1409 - return -EINVAL; 1410 - off += start; 1411 - vma->vm_pgoff = off >> PAGE_SHIFT; 1412 - /* VM_IO | VM_DONTEXPAND | VM_DONTDUMP are set by io_remap_pfn_range()*/ 1409 + 1413 1410 vma->vm_page_prot = vm_get_page_prot(vma->vm_flags); 1414 - fb_pgprotect(file, vma, off); 1415 - if (io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT, 1416 - vma->vm_end - vma->vm_start, vma->vm_page_prot)) 1417 - return -EAGAIN; 1418 - return 0; 1411 + fb_pgprotect(file, vma, start); 1412 + 1413 + return vm_iomap_memory(vma, start, len); 1419 1414 } 1420 1415 1421 1416 static int
-2
drivers/video/mmp/core.c
··· 252 252 253 253 kfree(path); 254 254 mutex_unlock(&disp_lock); 255 - 256 - dev_info(path->dev, "de-register %s\n", path->name); 257 255 } 258 256 EXPORT_SYMBOL_GPL(mmp_unregister_path);
+1 -1
drivers/watchdog/Kconfig
··· 117 117 118 118 config AT91RM9200_WATCHDOG 119 119 tristate "AT91RM9200 watchdog" 120 - depends on ARCH_AT91 120 + depends on ARCH_AT91RM9200 121 121 help 122 122 Watchdog timer embedded into AT91RM9200 chips. This will reboot your 123 123 system when the timeout is reached.
+15 -4
drivers/xen/events.c
··· 1316 1316 { 1317 1317 int start_word_idx, start_bit_idx; 1318 1318 int word_idx, bit_idx; 1319 - int i; 1319 + int i, irq; 1320 1320 int cpu = get_cpu(); 1321 1321 struct shared_info *s = HYPERVISOR_shared_info; 1322 1322 struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu); ··· 1324 1324 1325 1325 do { 1326 1326 xen_ulong_t pending_words; 1327 + xen_ulong_t pending_bits; 1328 + struct irq_desc *desc; 1327 1329 1328 1330 vcpu_info->evtchn_upcall_pending = 0; 1329 1331 ··· 1337 1335 * selector flag. xchg_xen_ulong must contain an 1338 1336 * appropriate barrier. 1339 1337 */ 1338 + if ((irq = per_cpu(virq_to_irq, cpu)[VIRQ_TIMER]) != -1) { 1339 + int evtchn = evtchn_from_irq(irq); 1340 + word_idx = evtchn / BITS_PER_LONG; 1341 + pending_bits = evtchn % BITS_PER_LONG; 1342 + if (active_evtchns(cpu, s, word_idx) & (1ULL << pending_bits)) { 1343 + desc = irq_to_desc(irq); 1344 + if (desc) 1345 + generic_handle_irq_desc(irq, desc); 1346 + } 1347 + } 1348 + 1340 1349 pending_words = xchg_xen_ulong(&vcpu_info->evtchn_pending_sel, 0); 1341 1350 1342 1351 start_word_idx = __this_cpu_read(current_word_idx); ··· 1356 1343 word_idx = start_word_idx; 1357 1344 1358 1345 for (i = 0; pending_words != 0; i++) { 1359 - xen_ulong_t pending_bits; 1360 1346 xen_ulong_t words; 1361 1347 1362 1348 words = MASK_LSBS(pending_words, word_idx); ··· 1384 1372 1385 1373 do { 1386 1374 xen_ulong_t bits; 1387 - int port, irq; 1388 - struct irq_desc *desc; 1375 + int port; 1389 1376 1390 1377 bits = MASK_LSBS(pending_bits, bit_idx); 1391 1378
+1
fs/binfmt_elf.c
··· 1137 1137 goto whole; 1138 1138 if (!(vma->vm_flags & VM_SHARED) && FILTER(HUGETLB_PRIVATE)) 1139 1139 goto whole; 1140 + return 0; 1140 1141 } 1141 1142 1142 1143 /* Do not dump I/O mapped devices or special mappings */
-2
fs/bio.c
··· 1428 1428 else if (!test_bit(BIO_UPTODATE, &bio->bi_flags)) 1429 1429 error = -EIO; 1430 1430 1431 - trace_block_bio_complete(bio, error); 1432 - 1433 1431 if (bio->bi_end_io) 1434 1432 bio->bi_end_io(bio, error); 1435 1433 }
+42 -6
fs/btrfs/tree-log.c
··· 317 317 unsigned long src_ptr; 318 318 unsigned long dst_ptr; 319 319 int overwrite_root = 0; 320 + bool inode_item = key->type == BTRFS_INODE_ITEM_KEY; 320 321 321 322 if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) 322 323 overwrite_root = 1; ··· 327 326 328 327 /* look for the key in the destination tree */ 329 328 ret = btrfs_search_slot(NULL, root, key, path, 0, 0); 329 + if (ret < 0) 330 + return ret; 331 + 330 332 if (ret == 0) { 331 333 char *src_copy; 332 334 char *dst_copy; ··· 371 367 return 0; 372 368 } 373 369 370 + /* 371 + * We need to load the old nbytes into the inode so when we 372 + * replay the extents we've logged we get the right nbytes. 373 + */ 374 + if (inode_item) { 375 + struct btrfs_inode_item *item; 376 + u64 nbytes; 377 + 378 + item = btrfs_item_ptr(path->nodes[0], path->slots[0], 379 + struct btrfs_inode_item); 380 + nbytes = btrfs_inode_nbytes(path->nodes[0], item); 381 + item = btrfs_item_ptr(eb, slot, 382 + struct btrfs_inode_item); 383 + btrfs_set_inode_nbytes(eb, item, nbytes); 384 + } 385 + } else if (inode_item) { 386 + struct btrfs_inode_item *item; 387 + 388 + /* 389 + * New inode, set nbytes to 0 so that the nbytes comes out 390 + * properly when we replay the extents. 
391 + */ 392 + item = btrfs_item_ptr(eb, slot, struct btrfs_inode_item); 393 + btrfs_set_inode_nbytes(eb, item, 0); 374 394 } 375 395 insert: 376 396 btrfs_release_path(path); ··· 514 486 int found_type; 515 487 u64 extent_end; 516 488 u64 start = key->offset; 517 - u64 saved_nbytes; 489 + u64 nbytes = 0; 518 490 struct btrfs_file_extent_item *item; 519 491 struct inode *inode = NULL; 520 492 unsigned long size; ··· 524 496 found_type = btrfs_file_extent_type(eb, item); 525 497 526 498 if (found_type == BTRFS_FILE_EXTENT_REG || 527 - found_type == BTRFS_FILE_EXTENT_PREALLOC) 528 - extent_end = start + btrfs_file_extent_num_bytes(eb, item); 529 - else if (found_type == BTRFS_FILE_EXTENT_INLINE) { 499 + found_type == BTRFS_FILE_EXTENT_PREALLOC) { 500 + nbytes = btrfs_file_extent_num_bytes(eb, item); 501 + extent_end = start + nbytes; 502 + 503 + /* 504 + * We don't add to the inodes nbytes if we are prealloc or a 505 + * hole. 506 + */ 507 + if (btrfs_file_extent_disk_bytenr(eb, item) == 0) 508 + nbytes = 0; 509 + } else if (found_type == BTRFS_FILE_EXTENT_INLINE) { 530 510 size = btrfs_file_extent_inline_len(eb, item); 511 + nbytes = btrfs_file_extent_ram_bytes(eb, item); 531 512 extent_end = ALIGN(start + size, root->sectorsize); 532 513 } else { 533 514 ret = 0; ··· 585 548 } 586 549 btrfs_release_path(path); 587 550 588 - saved_nbytes = inode_get_bytes(inode); 589 551 /* drop any overlapping extents */ 590 552 ret = btrfs_drop_extents(trans, root, inode, start, extent_end, 1); 591 553 BUG_ON(ret); ··· 671 635 BUG_ON(ret); 672 636 } 673 637 674 - inode_set_bytes(inode, saved_nbytes); 638 + inode_add_bytes(inode, nbytes); 675 639 ret = btrfs_update_inode(trans, root, inode); 676 640 out: 677 641 if (inode)
+13 -3
fs/cifs/connect.c
··· 1575 1575 } 1576 1576 break; 1577 1577 case Opt_blank_pass: 1578 - vol->password = NULL; 1579 - break; 1580 - case Opt_pass: 1581 1578 /* passwords have to be handled differently 1582 1579 * to allow the character used for deliminator 1583 1580 * to be passed within them 1584 1581 */ 1585 1582 1583 + /* 1584 + * Check if this is a case where the password 1585 + * starts with a delimiter 1586 + */ 1587 + tmp_end = strchr(data, '='); 1588 + tmp_end++; 1589 + if (!(tmp_end < end && tmp_end[1] == delim)) { 1590 + /* No it is not. Set the password to NULL */ 1591 + vol->password = NULL; 1592 + break; 1593 + } 1594 + /* Yes it is. Drop down to Opt_pass below.*/ 1595 + case Opt_pass: 1586 1596 /* Obtain the value string */ 1587 1597 value = strchr(data, '='); 1588 1598 value++;
+2 -12
fs/ecryptfs/miscdev.c
··· 80 80 int rc; 81 81 82 82 mutex_lock(&ecryptfs_daemon_hash_mux); 83 - rc = try_module_get(THIS_MODULE); 84 - if (rc == 0) { 85 - rc = -EIO; 86 - printk(KERN_ERR "%s: Error attempting to increment module use " 87 - "count; rc = [%d]\n", __func__, rc); 88 - goto out_unlock_daemon_list; 89 - } 90 83 rc = ecryptfs_find_daemon_by_euid(&daemon); 91 84 if (!rc) { 92 85 rc = -EINVAL; ··· 89 96 if (rc) { 90 97 printk(KERN_ERR "%s: Error attempting to spawn daemon; " 91 98 "rc = [%d]\n", __func__, rc); 92 - goto out_module_put_unlock_daemon_list; 99 + goto out_unlock_daemon_list; 93 100 } 94 101 mutex_lock(&daemon->mux); 95 102 if (daemon->flags & ECRYPTFS_DAEMON_MISCDEV_OPEN) { ··· 101 108 atomic_inc(&ecryptfs_num_miscdev_opens); 102 109 out_unlock_daemon: 103 110 mutex_unlock(&daemon->mux); 104 - out_module_put_unlock_daemon_list: 105 - if (rc) 106 - module_put(THIS_MODULE); 107 111 out_unlock_daemon_list: 108 112 mutex_unlock(&ecryptfs_daemon_hash_mux); 109 113 return rc; ··· 137 147 "bug.\n", __func__, rc); 138 148 BUG(); 139 149 } 140 - module_put(THIS_MODULE); 141 150 return rc; 142 151 } 143 152 ··· 460 471 461 472 462 473 static const struct file_operations ecryptfs_miscdev_fops = { 474 + .owner = THIS_MODULE, 463 475 .open = ecryptfs_miscdev_open, 464 476 .poll = ecryptfs_miscdev_poll, 465 477 .read = ecryptfs_miscdev_read,
+1 -1
fs/hfsplus/extents.c
··· 533 533 struct address_space *mapping = inode->i_mapping; 534 534 struct page *page; 535 535 void *fsdata; 536 - u32 size = inode->i_size; 536 + loff_t size = inode->i_size; 537 537 538 538 res = pagecache_write_begin(NULL, mapping, size, 0, 539 539 AOP_FLAG_UNINTERRUPTIBLE,
+1 -1
fs/hugetlbfs/inode.c
··· 110 110 * way when do_mmap_pgoff unwinds (may be important on powerpc 111 111 * and ia64). 112 112 */ 113 - vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND | VM_DONTDUMP; 113 + vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND; 114 114 vma->vm_ops = &hugetlb_vm_ops; 115 115 116 116 if (vma->vm_pgoff & (~huge_page_mask(h) >> PAGE_SHIFT))
+1 -1
fs/inode.c
··· 725 725 * inode to the back of the list so we don't spin on it. 726 726 */ 727 727 if (!spin_trylock(&inode->i_lock)) { 728 - list_move_tail(&inode->i_lru, &sb->s_inode_lru); 728 + list_move(&inode->i_lru, &sb->s_inode_lru); 729 729 continue; 730 730 } 731 731
+1 -1
fs/namespace.c
··· 1690 1690 1691 1691 if (IS_ERR(mnt)) { 1692 1692 err = PTR_ERR(mnt); 1693 - goto out; 1693 + goto out2; 1694 1694 } 1695 1695 1696 1696 err = graft_tree(mnt, path);
+29 -16
fs/nfs/nfs4client.c
··· 300 300 struct rpc_cred *cred) 301 301 { 302 302 struct nfs_net *nn = net_generic(new->cl_net, nfs_net_id); 303 - struct nfs_client *pos, *n, *prev = NULL; 303 + struct nfs_client *pos, *prev = NULL; 304 304 struct nfs4_setclientid_res clid = { 305 305 .clientid = new->cl_clientid, 306 306 .confirm = new->cl_confirm, ··· 308 308 int status = -NFS4ERR_STALE_CLIENTID; 309 309 310 310 spin_lock(&nn->nfs_client_lock); 311 - list_for_each_entry_safe(pos, n, &nn->nfs_client_list, cl_share_link) { 311 + list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) { 312 312 /* If "pos" isn't marked ready, we can't trust the 313 313 * remaining fields in "pos" */ 314 - if (pos->cl_cons_state < NFS_CS_READY) 314 + if (pos->cl_cons_state > NFS_CS_READY) { 315 + atomic_inc(&pos->cl_count); 316 + spin_unlock(&nn->nfs_client_lock); 317 + 318 + if (prev) 319 + nfs_put_client(prev); 320 + prev = pos; 321 + 322 + status = nfs_wait_client_init_complete(pos); 323 + spin_lock(&nn->nfs_client_lock); 324 + if (status < 0) 325 + continue; 326 + } 327 + if (pos->cl_cons_state != NFS_CS_READY) 315 328 continue; 316 329 317 330 if (pos->rpc_ops != new->rpc_ops) ··· 436 423 struct rpc_cred *cred) 437 424 { 438 425 struct nfs_net *nn = net_generic(new->cl_net, nfs_net_id); 439 - struct nfs_client *pos, *n, *prev = NULL; 426 + struct nfs_client *pos, *prev = NULL; 440 427 int status = -NFS4ERR_STALE_CLIENTID; 441 428 442 429 spin_lock(&nn->nfs_client_lock); 443 - list_for_each_entry_safe(pos, n, &nn->nfs_client_list, cl_share_link) { 430 + list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) { 444 431 /* If "pos" isn't marked ready, we can't trust the 445 432 * remaining fields in "pos", especially the client 446 433 * ID and serverowner fields. Wait for CREATE_SESSION 447 434 * to finish. 
*/ 448 - if (pos->cl_cons_state < NFS_CS_READY) { 435 + if (pos->cl_cons_state > NFS_CS_READY) { 449 436 atomic_inc(&pos->cl_count); 450 437 spin_unlock(&nn->nfs_client_lock); 451 438 ··· 453 440 nfs_put_client(prev); 454 441 prev = pos; 455 442 456 - nfs4_schedule_lease_recovery(pos); 457 443 status = nfs_wait_client_init_complete(pos); 458 - if (status < 0) { 459 - nfs_put_client(pos); 460 - spin_lock(&nn->nfs_client_lock); 461 - continue; 444 + if (status == 0) { 445 + nfs4_schedule_lease_recovery(pos); 446 + status = nfs4_wait_clnt_recover(pos); 462 447 } 463 - status = pos->cl_cons_state; 464 448 spin_lock(&nn->nfs_client_lock); 465 449 if (status < 0) 466 450 continue; 467 451 } 452 + if (pos->cl_cons_state != NFS_CS_READY) 453 + continue; 468 454 469 455 if (pos->rpc_ops != new->rpc_ops) 470 456 continue; ··· 481 469 continue; 482 470 483 471 atomic_inc(&pos->cl_count); 484 - spin_unlock(&nn->nfs_client_lock); 472 + *result = pos; 473 + status = 0; 485 474 dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n", 486 475 __func__, pos, atomic_read(&pos->cl_count)); 487 - 488 - *result = pos; 489 - return 0; 476 + break; 490 477 } 491 478 492 479 /* No matching nfs_client found. */ 493 480 spin_unlock(&nn->nfs_client_lock); 494 481 dprintk("NFS: <-- %s status = %d\n", __func__, status); 482 + if (prev) 483 + nfs_put_client(prev); 495 484 return status; 496 485 } 497 486 #endif /* CONFIG_NFS_V4_1 */
+1
fs/nfs/nfs4proc.c
··· 1046 1046 /* Save the delegation */ 1047 1047 nfs4_stateid_copy(&stateid, &delegation->stateid); 1048 1048 rcu_read_unlock(); 1049 + nfs_release_seqid(opendata->o_arg.seqid); 1049 1050 ret = nfs_may_open(state->inode, state->owner->so_cred, open_mode); 1050 1051 if (ret != 0) 1051 1052 goto out;
+7 -1
fs/nfs/nfs4state.c
··· 1886 1886 status = PTR_ERR(clnt); 1887 1887 break; 1888 1888 } 1889 - clp->cl_rpcclient = clnt; 1889 + /* Note: this is safe because we haven't yet marked the 1890 + * client as ready, so we are the only user of 1891 + * clp->cl_rpcclient 1892 + */ 1893 + clnt = xchg(&clp->cl_rpcclient, clnt); 1894 + rpc_shutdown_client(clnt); 1895 + clnt = clp->cl_rpcclient; 1890 1896 goto again; 1891 1897 1892 1898 case -NFS4ERR_MINOR_VERS_MISMATCH:
+1
fs/proc/array.c
··· 143 143 "x (dead)", /* 64 */ 144 144 "K (wakekill)", /* 128 */ 145 145 "W (waking)", /* 256 */ 146 + "P (parked)", /* 512 */ 146 147 }; 147 148 148 149 static inline const char *get_task_state(struct task_struct *tsk)
+89 -30
fs/proc/generic.c
··· 755 755 free_proc_entry(pde); 756 756 } 757 757 758 - /* 759 - * Remove a /proc entry and free it if it's not currently in use. 760 - */ 761 - void remove_proc_entry(const char *name, struct proc_dir_entry *parent) 758 + static void entry_rundown(struct proc_dir_entry *de) 762 759 { 763 - struct proc_dir_entry **p; 764 - struct proc_dir_entry *de = NULL; 765 - const char *fn = name; 766 - unsigned int len; 767 - 768 - spin_lock(&proc_subdir_lock); 769 - if (__xlate_proc_name(name, &parent, &fn) != 0) { 770 - spin_unlock(&proc_subdir_lock); 771 - return; 772 - } 773 - len = strlen(fn); 774 - 775 - for (p = &parent->subdir; *p; p=&(*p)->next ) { 776 - if (proc_match(len, fn, *p)) { 777 - de = *p; 778 - *p = de->next; 779 - de->next = NULL; 780 - break; 781 - } 782 - } 783 - spin_unlock(&proc_subdir_lock); 784 - if (!de) { 785 - WARN(1, "name '%s'\n", name); 786 - return; 787 - } 788 - 789 760 spin_lock(&de->pde_unload_lock); 790 761 /* 791 762 * Stop accepting new callers into module. If you're ··· 788 817 spin_lock(&de->pde_unload_lock); 789 818 } 790 819 spin_unlock(&de->pde_unload_lock); 820 + } 821 + 822 + /* 823 + * Remove a /proc entry and free it if it's not currently in use. 
824 + */ 825 + void remove_proc_entry(const char *name, struct proc_dir_entry *parent) 826 + { 827 + struct proc_dir_entry **p; 828 + struct proc_dir_entry *de = NULL; 829 + const char *fn = name; 830 + unsigned int len; 831 + 832 + spin_lock(&proc_subdir_lock); 833 + if (__xlate_proc_name(name, &parent, &fn) != 0) { 834 + spin_unlock(&proc_subdir_lock); 835 + return; 836 + } 837 + len = strlen(fn); 838 + 839 + for (p = &parent->subdir; *p; p=&(*p)->next ) { 840 + if (proc_match(len, fn, *p)) { 841 + de = *p; 842 + *p = de->next; 843 + de->next = NULL; 844 + break; 845 + } 846 + } 847 + spin_unlock(&proc_subdir_lock); 848 + if (!de) { 849 + WARN(1, "name '%s'\n", name); 850 + return; 851 + } 852 + 853 + entry_rundown(de); 791 854 792 855 if (S_ISDIR(de->mode)) 793 856 parent->nlink--; ··· 832 827 pde_put(de); 833 828 } 834 829 EXPORT_SYMBOL(remove_proc_entry); 830 + 831 + int remove_proc_subtree(const char *name, struct proc_dir_entry *parent) 832 + { 833 + struct proc_dir_entry **p; 834 + struct proc_dir_entry *root = NULL, *de, *next; 835 + const char *fn = name; 836 + unsigned int len; 837 + 838 + spin_lock(&proc_subdir_lock); 839 + if (__xlate_proc_name(name, &parent, &fn) != 0) { 840 + spin_unlock(&proc_subdir_lock); 841 + return -ENOENT; 842 + } 843 + len = strlen(fn); 844 + 845 + for (p = &parent->subdir; *p; p=&(*p)->next ) { 846 + if (proc_match(len, fn, *p)) { 847 + root = *p; 848 + *p = root->next; 849 + root->next = NULL; 850 + break; 851 + } 852 + } 853 + if (!root) { 854 + spin_unlock(&proc_subdir_lock); 855 + return -ENOENT; 856 + } 857 + de = root; 858 + while (1) { 859 + next = de->subdir; 860 + if (next) { 861 + de->subdir = next->next; 862 + next->next = NULL; 863 + de = next; 864 + continue; 865 + } 866 + spin_unlock(&proc_subdir_lock); 867 + 868 + entry_rundown(de); 869 + next = de->parent; 870 + if (S_ISDIR(de->mode)) 871 + next->nlink--; 872 + de->nlink = 0; 873 + if (de == root) 874 + break; 875 + pde_put(de); 876 + 877 + 
spin_lock(&proc_subdir_lock); 878 + de = next; 879 + } 880 + pde_put(root); 881 + return 0; 882 + } 883 + EXPORT_SYMBOL(remove_proc_subtree);
+6 -1
include/asm-generic/tlb.h
··· 99 99 unsigned int need_flush : 1, /* Did free PTEs */ 100 100 fast_mode : 1; /* No batching */ 101 101 102 - unsigned int fullmm; 102 + /* we are in the middle of an operation to clear 103 + * a full mm and can make some optimizations */ 104 + unsigned int fullmm : 1, 105 + /* we have performed an operation which 106 + * requires a complete flush of the tlb */ 107 + need_flush_all : 1; 103 108 104 109 struct mmu_gather_batch *active; 105 110 struct mmu_gather_batch local;
+1 -1
include/linux/ata.h
··· 954 954 } 955 955 } 956 956 957 - static inline bool atapi_command_packet_set(const u16 *dev_id) 957 + static inline int atapi_command_packet_set(const u16 *dev_id) 958 958 { 959 959 return (dev_id[ATA_ID_CONFIG] >> 8) & 0x1f; 960 960 }
-1
include/linux/blktrace_api.h
··· 12 12 13 13 struct blk_trace { 14 14 int trace_state; 15 - bool rq_based; 16 15 struct rchan *rchan; 17 16 unsigned long __percpu *sequence; 18 17 unsigned char __percpu *msg_data;
+2
include/linux/capability.h
··· 35 35 #define _KERNEL_CAP_T_SIZE (sizeof(kernel_cap_t)) 36 36 37 37 38 + struct file; 38 39 struct inode; 39 40 struct dentry; 40 41 struct user_namespace; ··· 212 211 extern bool ns_capable(struct user_namespace *ns, int cap); 213 212 extern bool nsown_capable(int cap); 214 213 extern bool inode_capable(const struct inode *inode, int cap); 214 + extern bool file_ns_capable(const struct file *file, struct user_namespace *ns, int cap); 215 215 216 216 /* audit system wants to get cap info from files as well */ 217 217 extern int get_vfs_caps_from_disk(const struct dentry *dentry, struct cpu_vfs_cap_data *cpu_caps);
+8 -1
include/linux/efi.h
··· 333 333 unsigned long count, 334 334 u64 *max_size, 335 335 int *reset_type); 336 + typedef efi_status_t efi_query_variable_store_t(u32 attributes, unsigned long size); 336 337 337 338 /* 338 339 * EFI Configuration Table and GUID definitions ··· 576 575 #ifdef CONFIG_X86 577 576 extern void efi_late_init(void); 578 577 extern void efi_free_boot_services(void); 578 + extern efi_status_t efi_query_variable_store(u32 attributes, unsigned long size); 579 579 #else 580 580 static inline void efi_late_init(void) {} 581 581 static inline void efi_free_boot_services(void) {} 582 + 583 + static inline efi_status_t efi_query_variable_store(u32 attributes, unsigned long size) 584 + { 585 + return EFI_SUCCESS; 586 + } 582 587 #endif 583 588 extern void __iomem *efi_lookup_mapped_addr(u64 phys_addr); 584 589 extern u64 efi_get_iobase (void); ··· 738 731 efi_get_variable_t *get_variable; 739 732 efi_get_next_variable_t *get_next_variable; 740 733 efi_set_variable_t *set_variable; 741 - efi_query_variable_info_t *query_variable_info; 734 + efi_query_variable_store_t *query_variable_store; 742 735 }; 743 736 744 737 struct efivars {
+4 -1
include/linux/ftrace.h
··· 89 89 * that the call back has its own recursion protection. If it does 90 90 * not set this, then the ftrace infrastructure will add recursion 91 91 * protection for the caller. 92 + * STUB - The ftrace_ops is just a place holder. 92 93 */ 93 94 enum { 94 95 FTRACE_OPS_FL_ENABLED = 1 << 0, ··· 99 98 FTRACE_OPS_FL_SAVE_REGS = 1 << 4, 100 99 FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED = 1 << 5, 101 100 FTRACE_OPS_FL_RECURSION_SAFE = 1 << 6, 101 + FTRACE_OPS_FL_STUB = 1 << 7, 102 102 }; 103 103 104 104 struct ftrace_ops { ··· 396 394 size_t cnt, loff_t *ppos); 397 395 ssize_t ftrace_notrace_write(struct file *file, const char __user *ubuf, 398 396 size_t cnt, loff_t *ppos); 399 - loff_t ftrace_regex_lseek(struct file *file, loff_t offset, int whence); 400 397 int ftrace_regex_release(struct inode *inode, struct file *file); 401 398 402 399 void __init ··· 567 566 static inline int 568 567 ftrace_regex_release(struct inode *inode, struct file *file) { return -ENODEV; } 569 568 #endif /* CONFIG_DYNAMIC_FTRACE */ 569 + 570 + loff_t ftrace_filter_lseek(struct file *file, loff_t offset, int whence); 570 571 571 572 /* totally disable ftrace - can not re-enable after this */ 572 573 void ftrace_kill(void);
+2
include/linux/kexec.h
··· 200 200 201 201 int __init parse_crashkernel(char *cmdline, unsigned long long system_ram, 202 202 unsigned long long *crash_size, unsigned long long *crash_base); 203 + int parse_crashkernel_high(char *cmdline, unsigned long long system_ram, 204 + unsigned long long *crash_size, unsigned long long *crash_base); 203 205 int parse_crashkernel_low(char *cmdline, unsigned long long system_ram, 204 206 unsigned long long *crash_size, unsigned long long *crash_base); 205 207 int crash_shrink_memory(unsigned long new_size);
+1 -1
include/linux/kvm_host.h
··· 518 518 int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc, 519 519 void *data, unsigned long len); 520 520 int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc, 521 - gpa_t gpa); 521 + gpa_t gpa, unsigned long len); 522 522 int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len); 523 523 int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len); 524 524 struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
+1
include/linux/kvm_types.h
··· 71 71 u64 generation; 72 72 gpa_t gpa; 73 73 unsigned long hva; 74 + unsigned long len; 74 75 struct kvm_memory_slot *memslot; 75 76 }; 76 77
+1
include/linux/libata.h
··· 398 398 ATA_HORKAGE_NOSETXFER = (1 << 14), /* skip SETXFER, SATA only */ 399 399 ATA_HORKAGE_BROKEN_FPDMA_AA = (1 << 15), /* skip AA */ 400 400 ATA_HORKAGE_DUMP_ID = (1 << 16), /* dump IDENTIFY data */ 401 + ATA_HORKAGE_MAX_SEC_LBA48 = (1 << 17), /* Set max sects to 65535 */ 401 402 402 403 /* DMA mask for user DMA control: User visible values; DO NOT 403 404 renumber */
+2
include/linux/mm.h
··· 1611 1611 unsigned long pfn); 1612 1612 int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr, 1613 1613 unsigned long pfn); 1614 + int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len); 1615 + 1614 1616 1615 1617 struct page *follow_page_mask(struct vm_area_struct *vma, 1616 1618 unsigned long address, unsigned int foll_flags,
+23 -7
include/linux/netfilter/ipset/ip_set_ahash.h
··· 291 291 #define type_pf_data_tlist TOKEN(TYPE, PF, _data_tlist) 292 292 #define type_pf_data_next TOKEN(TYPE, PF, _data_next) 293 293 #define type_pf_data_flags TOKEN(TYPE, PF, _data_flags) 294 + #define type_pf_data_reset_flags TOKEN(TYPE, PF, _data_reset_flags) 294 295 #ifdef IP_SET_HASH_WITH_NETS 295 296 #define type_pf_data_match TOKEN(TYPE, PF, _data_match) 296 297 #else ··· 386 385 struct ip_set_hash *h = set->data; 387 386 struct htable *t, *orig = h->table; 388 387 u8 htable_bits = orig->htable_bits; 389 - const struct type_pf_elem *data; 388 + struct type_pf_elem *data; 390 389 struct hbucket *n, *m; 391 - u32 i, j; 390 + u32 i, j, flags = 0; 392 391 int ret; 393 392 394 393 retry: ··· 413 412 n = hbucket(orig, i); 414 413 for (j = 0; j < n->pos; j++) { 415 414 data = ahash_data(n, j); 415 + #ifdef IP_SET_HASH_WITH_NETS 416 + flags = 0; 417 + type_pf_data_reset_flags(data, &flags); 418 + #endif 416 419 m = hbucket(t, HKEY(data, h->initval, htable_bits)); 417 - ret = type_pf_elem_add(m, data, AHASH_MAX(h), 0); 420 + ret = type_pf_elem_add(m, data, AHASH_MAX(h), flags); 418 421 if (ret < 0) { 422 + #ifdef IP_SET_HASH_WITH_NETS 423 + type_pf_data_flags(data, flags); 424 + #endif 419 425 read_unlock_bh(&set->lock); 420 426 ahash_destroy(t); 421 427 if (ret == -EAGAIN) ··· 844 836 struct ip_set_hash *h = set->data; 845 837 struct htable *t, *orig = h->table; 846 838 u8 htable_bits = orig->htable_bits; 847 - const struct type_pf_elem *data; 839 + struct type_pf_elem *data; 848 840 struct hbucket *n, *m; 849 - u32 i, j; 841 + u32 i, j, flags = 0; 850 842 int ret; 851 843 852 844 /* Try to cleanup once */ ··· 881 873 n = hbucket(orig, i); 882 874 for (j = 0; j < n->pos; j++) { 883 875 data = ahash_tdata(n, j); 876 + #ifdef IP_SET_HASH_WITH_NETS 877 + flags = 0; 878 + type_pf_data_reset_flags(data, &flags); 879 + #endif 884 880 m = hbucket(t, HKEY(data, h->initval, htable_bits)); 885 - ret = type_pf_elem_tadd(m, data, AHASH_MAX(h), 0, 886 - 
ip_set_timeout_get(type_pf_data_timeout(data))); 881 + ret = type_pf_elem_tadd(m, data, AHASH_MAX(h), flags, 882 + ip_set_timeout_get(type_pf_data_timeout(data))); 887 883 if (ret < 0) { 884 + #ifdef IP_SET_HASH_WITH_NETS 885 + type_pf_data_flags(data, flags); 886 + #endif 888 887 read_unlock_bh(&set->lock); 889 888 ahash_destroy(t); 890 889 if (ret == -EAGAIN) ··· 1202 1187 #undef type_pf_data_tlist 1203 1188 #undef type_pf_data_next 1204 1189 #undef type_pf_data_flags 1190 + #undef type_pf_data_reset_flags 1205 1191 #undef type_pf_data_match 1206 1192 1207 1193 #undef type_pf_elem
+1
include/linux/pci.h
··· 916 916 void __iomem __must_check *pci_map_rom(struct pci_dev *pdev, size_t *size); 917 917 void pci_unmap_rom(struct pci_dev *pdev, void __iomem *rom); 918 918 size_t pci_get_rom_size(struct pci_dev *pdev, void __iomem *rom, size_t size); 919 + void __iomem __must_check *pci_platform_rom(struct pci_dev *pdev, size_t *size); 919 920 920 921 /* Power management related routines */ 921 922 int pci_save_state(struct pci_dev *dev);
+13 -7
include/linux/preempt.h
··· 93 93 94 94 #else /* !CONFIG_PREEMPT_COUNT */ 95 95 96 - #define preempt_disable() do { } while (0) 97 - #define sched_preempt_enable_no_resched() do { } while (0) 98 - #define preempt_enable_no_resched() do { } while (0) 99 - #define preempt_enable() do { } while (0) 96 + /* 97 + * Even if we don't have any preemption, we need preempt disable/enable 98 + * to be barriers, so that we don't have things like get_user/put_user 99 + * that can cause faults and scheduling migrate into our preempt-protected 100 + * region. 101 + */ 102 + #define preempt_disable() barrier() 103 + #define sched_preempt_enable_no_resched() barrier() 104 + #define preempt_enable_no_resched() barrier() 105 + #define preempt_enable() barrier() 100 106 101 - #define preempt_disable_notrace() do { } while (0) 102 - #define preempt_enable_no_resched_notrace() do { } while (0) 103 - #define preempt_enable_notrace() do { } while (0) 107 + #define preempt_disable_notrace() barrier() 108 + #define preempt_enable_no_resched_notrace() barrier() 109 + #define preempt_enable_notrace() barrier() 104 110 105 111 #endif /* CONFIG_PREEMPT_COUNT */ 106 112
+2
include/linux/proc_fs.h
··· 117 117 const struct file_operations *proc_fops, 118 118 void *data); 119 119 extern void remove_proc_entry(const char *name, struct proc_dir_entry *parent); 120 + extern int remove_proc_subtree(const char *name, struct proc_dir_entry *parent); 120 121 121 122 struct pid_namespace; 122 123 ··· 203 202 return NULL; 204 203 } 205 204 #define remove_proc_entry(name, parent) do {} while (0) 205 + #define remove_proc_subtree(name, parent) do {} while (0) 206 206 207 207 static inline struct proc_dir_entry *proc_symlink(const char *name, 208 208 struct proc_dir_entry *parent,const char *dest) {return NULL;}
+3 -2
include/linux/sched.h
··· 163 163 #define TASK_DEAD 64 164 164 #define TASK_WAKEKILL 128 165 165 #define TASK_WAKING 256 166 - #define TASK_STATE_MAX 512 166 + #define TASK_PARKED 512 167 + #define TASK_STATE_MAX 1024 167 168 168 - #define TASK_STATE_TO_CHAR_STR "RSDTtZXxKW" 169 + #define TASK_STATE_TO_CHAR_STR "RSDTtZXxKWP" 169 170 170 171 extern char ___assert_task_state[1 - 2*!!( 171 172 sizeof(TASK_STATE_TO_CHAR_STR)-1 != ilog2(TASK_STATE_MAX)+1)];
+12
include/linux/security.h
··· 1012 1012 * This hook can be used by the module to update any security state 1013 1013 * associated with the TUN device's security structure. 1014 1014 * @security pointer to the TUN devices's security structure. 1015 + * @skb_owned_by: 1016 + * This hook sets the packet's owning sock. 1017 + * @skb is the packet. 1018 + * @sk the sock which owns the packet. 1015 1019 * 1016 1020 * Security hooks for XFRM operations. 1017 1021 * ··· 1642 1638 int (*tun_dev_attach_queue) (void *security); 1643 1639 int (*tun_dev_attach) (struct sock *sk, void *security); 1644 1640 int (*tun_dev_open) (void *security); 1641 + void (*skb_owned_by) (struct sk_buff *skb, struct sock *sk); 1645 1642 #endif /* CONFIG_SECURITY_NETWORK */ 1646 1643 1647 1644 #ifdef CONFIG_SECURITY_NETWORK_XFRM ··· 2593 2588 int security_tun_dev_attach(struct sock *sk, void *security); 2594 2589 int security_tun_dev_open(void *security); 2595 2590 2591 + void security_skb_owned_by(struct sk_buff *skb, struct sock *sk); 2592 + 2596 2593 #else /* CONFIG_SECURITY_NETWORK */ 2597 2594 static inline int security_unix_stream_connect(struct sock *sock, 2598 2595 struct sock *other, ··· 2786 2779 { 2787 2780 return 0; 2788 2781 } 2782 + 2783 + static inline void security_skb_owned_by(struct sk_buff *skb, struct sock *sk) 2784 + { 2785 + } 2786 + 2789 2787 #endif /* CONFIG_SECURITY_NETWORK */ 2790 2788 2791 2789 #ifdef CONFIG_SECURITY_NETWORK_XFRM
+18 -11
include/linux/spinlock_up.h
··· 16 16 * In the debug case, 1 means unlocked, 0 means locked. (the values 17 17 * are inverted, to catch initialization bugs) 18 18 * 19 - * No atomicity anywhere, we are on UP. 19 + * No atomicity anywhere, we are on UP. However, we still need 20 + * the compiler barriers, because we do not want the compiler to 21 + * move potentially faulting instructions (notably user accesses) 22 + * into the locked sequence, resulting in non-atomic execution. 20 23 */ 21 24 22 25 #ifdef CONFIG_DEBUG_SPINLOCK ··· 28 25 static inline void arch_spin_lock(arch_spinlock_t *lock) 29 26 { 30 27 lock->slock = 0; 28 + barrier(); 31 29 } 32 30 33 31 static inline void ··· 36 32 { 37 33 local_irq_save(flags); 38 34 lock->slock = 0; 35 + barrier(); 39 36 } 40 37 41 38 static inline int arch_spin_trylock(arch_spinlock_t *lock) ··· 44 39 char oldval = lock->slock; 45 40 46 41 lock->slock = 0; 42 + barrier(); 47 43 48 44 return oldval > 0; 49 45 } 50 46 51 47 static inline void arch_spin_unlock(arch_spinlock_t *lock) 52 48 { 49 + barrier(); 53 50 lock->slock = 1; 54 51 } 55 52 56 53 /* 57 54 * Read-write spinlocks. No debug version. 
58 55 */ 59 - #define arch_read_lock(lock) do { (void)(lock); } while (0) 60 - #define arch_write_lock(lock) do { (void)(lock); } while (0) 61 - #define arch_read_trylock(lock) ({ (void)(lock); 1; }) 62 - #define arch_write_trylock(lock) ({ (void)(lock); 1; }) 63 - #define arch_read_unlock(lock) do { (void)(lock); } while (0) 64 - #define arch_write_unlock(lock) do { (void)(lock); } while (0) 56 + #define arch_read_lock(lock) do { barrier(); (void)(lock); } while (0) 57 + #define arch_write_lock(lock) do { barrier(); (void)(lock); } while (0) 58 + #define arch_read_trylock(lock) ({ barrier(); (void)(lock); 1; }) 59 + #define arch_write_trylock(lock) ({ barrier(); (void)(lock); 1; }) 60 + #define arch_read_unlock(lock) do { barrier(); (void)(lock); } while (0) 61 + #define arch_write_unlock(lock) do { barrier(); (void)(lock); } while (0) 65 62 66 63 #else /* DEBUG_SPINLOCK */ 67 64 #define arch_spin_is_locked(lock) ((void)(lock), 0) 68 65 /* for sched.c and kernel_lock.c: */ 69 - # define arch_spin_lock(lock) do { (void)(lock); } while (0) 70 - # define arch_spin_lock_flags(lock, flags) do { (void)(lock); } while (0) 71 - # define arch_spin_unlock(lock) do { (void)(lock); } while (0) 72 - # define arch_spin_trylock(lock) ({ (void)(lock); 1; }) 66 + # define arch_spin_lock(lock) do { barrier(); (void)(lock); } while (0) 67 + # define arch_spin_lock_flags(lock, flags) do { barrier(); (void)(lock); } while (0) 68 + # define arch_spin_unlock(lock) do { barrier(); (void)(lock); } while (0) 69 + # define arch_spin_trylock(lock) ({ barrier(); (void)(lock); 1; }) 73 70 #endif /* DEBUG_SPINLOCK */ 74 71 75 72 #define arch_spin_is_contended(lock) (((void)(lock), 0))
+2
include/linux/ssb/ssb_driver_chipcommon.h
··· 219 219 #define SSB_CHIPCO_PMU_CTL 0x0600 /* PMU control */ 220 220 #define SSB_CHIPCO_PMU_CTL_ILP_DIV 0xFFFF0000 /* ILP div mask */ 221 221 #define SSB_CHIPCO_PMU_CTL_ILP_DIV_SHIFT 16 222 + #define SSB_CHIPCO_PMU_CTL_PLL_UPD 0x00000400 222 223 #define SSB_CHIPCO_PMU_CTL_NOILPONW 0x00000200 /* No ILP on wait */ 223 224 #define SSB_CHIPCO_PMU_CTL_HTREQEN 0x00000100 /* HT req enable */ 224 225 #define SSB_CHIPCO_PMU_CTL_ALPREQEN 0x00000080 /* ALP req enable */ ··· 668 667 void ssb_pmu_set_ldo_voltage(struct ssb_chipcommon *cc, 669 668 enum ssb_pmu_ldo_volt_id id, u32 voltage); 670 669 void ssb_pmu_set_ldo_paref(struct ssb_chipcommon *cc, bool on); 670 + void ssb_pmu_spuravoid_pllupdate(struct ssb_chipcommon *cc, int spuravoid); 671 671 672 672 #endif /* LINUX_SSB_CHIPCO_H_ */
+1
include/linux/swiotlb.h
··· 25 25 extern void swiotlb_init(int verbose); 26 26 int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose); 27 27 extern unsigned long swiotlb_nr_tbl(void); 28 + unsigned long swiotlb_size_or_default(void); 28 29 extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs); 29 30 30 31 /*
+14
include/linux/ucs2_string.h
··· 1 + #ifndef _LINUX_UCS2_STRING_H_ 2 + #define _LINUX_UCS2_STRING_H_ 3 + 4 + #include <linux/types.h> /* for size_t */ 5 + #include <linux/stddef.h> /* for NULL */ 6 + 7 + typedef u16 ucs2_char_t; 8 + 9 + unsigned long ucs2_strnlen(const ucs2_char_t *s, size_t maxlength); 10 + unsigned long ucs2_strlen(const ucs2_char_t *s); 11 + unsigned long ucs2_strsize(const ucs2_char_t *data, unsigned long maxlength); 12 + int ucs2_strncmp(const ucs2_char_t *a, const ucs2_char_t *b, size_t len); 13 + 14 + #endif /* _LINUX_UCS2_STRING_H_ */
+1
include/net/addrconf.h
··· 199 199 /* Device notifier */ 200 200 extern int register_inet6addr_notifier(struct notifier_block *nb); 201 201 extern int unregister_inet6addr_notifier(struct notifier_block *nb); 202 + extern int inet6addr_notifier_call_chain(unsigned long val, void *v); 202 203 203 204 extern void inet6_netconf_notify_devconf(struct net *net, int type, int ifindex, 204 205 struct ipv6_devconf *devconf);
+2 -1
include/net/irda/irlmp.h
··· 256 256 return (self && self->lap) ? self->lap->daddr : 0; 257 257 } 258 258 259 - extern const char *irlmp_reasons[]; 259 + const char *irlmp_reason_str(LM_REASON reason); 260 + 260 261 extern int sysctl_discovery_timeout; 261 262 extern int sysctl_discovery_slots; 262 263 extern int sysctl_discovery;
+8
include/net/iucv/af_iucv.h
··· 130 130 enum iucv_tx_notify n); 131 131 }; 132 132 133 + struct iucv_skb_cb { 134 + u32 class; /* target class of message */ 135 + u32 tag; /* tag associated with message */ 136 + u32 offset; /* offset for skb receival */ 137 + }; 138 + 139 + #define IUCV_SKB_CB(__skb) ((struct iucv_skb_cb *)&((__skb)->cb[0])) 140 + 133 141 /* iucv socket options (SOL_IUCV) */ 134 142 #define SO_IPRMDATA_MSG 0x0080 /* send/recv IPRM_DATA msgs */ 135 143 #define SO_MSGLIMIT 0x1000 /* get/set IUCV MSGLIMIT */
+1 -1
include/net/scm.h
··· 78 78 scm->creds.uid = INVALID_UID; 79 79 scm->creds.gid = INVALID_GID; 80 80 if (forcecreds) 81 - scm_set_cred(scm, task_tgid(current), current_euid(), current_egid()); 81 + scm_set_cred(scm, task_tgid(current), current_uid(), current_gid()); 82 82 unix_get_peersec_dgram(sock, scm); 83 83 if (msg->msg_controllen <= 0) 84 84 return 0;
+4 -4
include/trace/events/block.h
··· 257 257 258 258 /** 259 259 * block_bio_complete - completed all work on the block operation 260 + * @q: queue holding the block operation 260 261 * @bio: block operation completed 261 262 * @error: io error value 262 263 * ··· 266 265 */ 267 266 TRACE_EVENT(block_bio_complete, 268 267 269 - TP_PROTO(struct bio *bio, int error), 268 + TP_PROTO(struct request_queue *q, struct bio *bio, int error), 270 269 271 - TP_ARGS(bio, error), 270 + TP_ARGS(q, bio, error), 272 271 273 272 TP_STRUCT__entry( 274 273 __field( dev_t, dev ) ··· 279 278 ), 280 279 281 280 TP_fast_assign( 282 - __entry->dev = bio->bi_bdev ? 283 - bio->bi_bdev->bd_dev : 0; 281 + __entry->dev = bio->bi_bdev->bd_dev; 284 282 __entry->sector = bio->bi_sector; 285 283 __entry->nr_sector = bio->bi_size >> 9; 286 284 __entry->error = error;
+1 -1
include/trace/events/sched.h
··· 147 147 __print_flags(__entry->prev_state & (TASK_STATE_MAX-1), "|", 148 148 { 1, "S"} , { 2, "D" }, { 4, "T" }, { 8, "t" }, 149 149 { 16, "Z" }, { 32, "X" }, { 64, "x" }, 150 - { 128, "W" }) : "R", 150 + { 128, "K" }, { 256, "W" }, { 512, "P" }) : "R", 151 151 __entry->prev_state & TASK_STATE_MAX ? "+" : "", 152 152 __entry->next_comm, __entry->next_pid, __entry->next_prio) 153 153 );
+216 -220
include/uapi/linux/fuse.h
··· 95 95 #ifndef _LINUX_FUSE_H 96 96 #define _LINUX_FUSE_H 97 97 98 - #ifdef __linux__ 98 + #ifdef __KERNEL__ 99 99 #include <linux/types.h> 100 100 #else 101 101 #include <stdint.h> 102 - #define __u64 uint64_t 103 - #define __s64 int64_t 104 - #define __u32 uint32_t 105 - #define __s32 int32_t 106 - #define __u16 uint16_t 107 102 #endif 108 103 109 104 /* ··· 134 139 userspace works under 64bit kernels */ 135 140 136 141 struct fuse_attr { 137 - __u64 ino; 138 - __u64 size; 139 - __u64 blocks; 140 - __u64 atime; 141 - __u64 mtime; 142 - __u64 ctime; 143 - __u32 atimensec; 144 - __u32 mtimensec; 145 - __u32 ctimensec; 146 - __u32 mode; 147 - __u32 nlink; 148 - __u32 uid; 149 - __u32 gid; 150 - __u32 rdev; 151 - __u32 blksize; 152 - __u32 padding; 142 + uint64_t ino; 143 + uint64_t size; 144 + uint64_t blocks; 145 + uint64_t atime; 146 + uint64_t mtime; 147 + uint64_t ctime; 148 + uint32_t atimensec; 149 + uint32_t mtimensec; 150 + uint32_t ctimensec; 151 + uint32_t mode; 152 + uint32_t nlink; 153 + uint32_t uid; 154 + uint32_t gid; 155 + uint32_t rdev; 156 + uint32_t blksize; 157 + uint32_t padding; 153 158 }; 154 159 155 160 struct fuse_kstatfs { 156 - __u64 blocks; 157 - __u64 bfree; 158 - __u64 bavail; 159 - __u64 files; 160 - __u64 ffree; 161 - __u32 bsize; 162 - __u32 namelen; 163 - __u32 frsize; 164 - __u32 padding; 165 - __u32 spare[6]; 161 + uint64_t blocks; 162 + uint64_t bfree; 163 + uint64_t bavail; 164 + uint64_t files; 165 + uint64_t ffree; 166 + uint32_t bsize; 167 + uint32_t namelen; 168 + uint32_t frsize; 169 + uint32_t padding; 170 + uint32_t spare[6]; 166 171 }; 167 172 168 173 struct fuse_file_lock { 169 - __u64 start; 170 - __u64 end; 171 - __u32 type; 172 - __u32 pid; /* tgid */ 174 + uint64_t start; 175 + uint64_t end; 176 + uint32_t type; 177 + uint32_t pid; /* tgid */ 173 178 }; 174 179 175 180 /** ··· 359 364 #define FUSE_COMPAT_ENTRY_OUT_SIZE 120 360 365 361 366 struct fuse_entry_out { 362 - __u64 nodeid; /* Inode ID */ 363 - __u64 
generation; /* Inode generation: nodeid:gen must 364 - be unique for the fs's lifetime */ 365 - __u64 entry_valid; /* Cache timeout for the name */ 366 - __u64 attr_valid; /* Cache timeout for the attributes */ 367 - __u32 entry_valid_nsec; 368 - __u32 attr_valid_nsec; 367 + uint64_t nodeid; /* Inode ID */ 368 + uint64_t generation; /* Inode generation: nodeid:gen must 369 + be unique for the fs's lifetime */ 370 + uint64_t entry_valid; /* Cache timeout for the name */ 371 + uint64_t attr_valid; /* Cache timeout for the attributes */ 372 + uint32_t entry_valid_nsec; 373 + uint32_t attr_valid_nsec; 369 374 struct fuse_attr attr; 370 375 }; 371 376 372 377 struct fuse_forget_in { 373 - __u64 nlookup; 378 + uint64_t nlookup; 374 379 }; 375 380 376 381 struct fuse_forget_one { 377 - __u64 nodeid; 378 - __u64 nlookup; 382 + uint64_t nodeid; 383 + uint64_t nlookup; 379 384 }; 380 385 381 386 struct fuse_batch_forget_in { 382 - __u32 count; 383 - __u32 dummy; 387 + uint32_t count; 388 + uint32_t dummy; 384 389 }; 385 390 386 391 struct fuse_getattr_in { 387 - __u32 getattr_flags; 388 - __u32 dummy; 389 - __u64 fh; 392 + uint32_t getattr_flags; 393 + uint32_t dummy; 394 + uint64_t fh; 390 395 }; 391 396 392 397 #define FUSE_COMPAT_ATTR_OUT_SIZE 96 393 398 394 399 struct fuse_attr_out { 395 - __u64 attr_valid; /* Cache timeout for the attributes */ 396 - __u32 attr_valid_nsec; 397 - __u32 dummy; 400 + uint64_t attr_valid; /* Cache timeout for the attributes */ 401 + uint32_t attr_valid_nsec; 402 + uint32_t dummy; 398 403 struct fuse_attr attr; 399 404 }; 400 405 401 406 #define FUSE_COMPAT_MKNOD_IN_SIZE 8 402 407 403 408 struct fuse_mknod_in { 404 - __u32 mode; 405 - __u32 rdev; 406 - __u32 umask; 407 - __u32 padding; 409 + uint32_t mode; 410 + uint32_t rdev; 411 + uint32_t umask; 412 + uint32_t padding; 408 413 }; 409 414 410 415 struct fuse_mkdir_in { 411 - __u32 mode; 412 - __u32 umask; 416 + uint32_t mode; 417 + uint32_t umask; 413 418 }; 414 419 415 420 struct 
fuse_rename_in { 416 - __u64 newdir; 421 + uint64_t newdir; 417 422 }; 418 423 419 424 struct fuse_link_in { 420 - __u64 oldnodeid; 425 + uint64_t oldnodeid; 421 426 }; 422 427 423 428 struct fuse_setattr_in { 424 - __u32 valid; 425 - __u32 padding; 426 - __u64 fh; 427 - __u64 size; 428 - __u64 lock_owner; 429 - __u64 atime; 430 - __u64 mtime; 431 - __u64 unused2; 432 - __u32 atimensec; 433 - __u32 mtimensec; 434 - __u32 unused3; 435 - __u32 mode; 436 - __u32 unused4; 437 - __u32 uid; 438 - __u32 gid; 439 - __u32 unused5; 429 + uint32_t valid; 430 + uint32_t padding; 431 + uint64_t fh; 432 + uint64_t size; 433 + uint64_t lock_owner; 434 + uint64_t atime; 435 + uint64_t mtime; 436 + uint64_t unused2; 437 + uint32_t atimensec; 438 + uint32_t mtimensec; 439 + uint32_t unused3; 440 + uint32_t mode; 441 + uint32_t unused4; 442 + uint32_t uid; 443 + uint32_t gid; 444 + uint32_t unused5; 440 445 }; 441 446 442 447 struct fuse_open_in { 443 - __u32 flags; 444 - __u32 unused; 448 + uint32_t flags; 449 + uint32_t unused; 445 450 }; 446 451 447 452 struct fuse_create_in { 448 - __u32 flags; 449 - __u32 mode; 450 - __u32 umask; 451 - __u32 padding; 453 + uint32_t flags; 454 + uint32_t mode; 455 + uint32_t umask; 456 + uint32_t padding; 452 457 }; 453 458 454 459 struct fuse_open_out { 455 - __u64 fh; 456 - __u32 open_flags; 457 - __u32 padding; 460 + uint64_t fh; 461 + uint32_t open_flags; 462 + uint32_t padding; 458 463 }; 459 464 460 465 struct fuse_release_in { 461 - __u64 fh; 462 - __u32 flags; 463 - __u32 release_flags; 464 - __u64 lock_owner; 466 + uint64_t fh; 467 + uint32_t flags; 468 + uint32_t release_flags; 469 + uint64_t lock_owner; 465 470 }; 466 471 467 472 struct fuse_flush_in { 468 - __u64 fh; 469 - __u32 unused; 470 - __u32 padding; 471 - __u64 lock_owner; 473 + uint64_t fh; 474 + uint32_t unused; 475 + uint32_t padding; 476 + uint64_t lock_owner; 472 477 }; 473 478 474 479 struct fuse_read_in { 475 - __u64 fh; 476 - __u64 offset; 477 - __u32 size; 478 - __u32 
read_flags; 479 - __u64 lock_owner; 480 - __u32 flags; 481 - __u32 padding; 480 + uint64_t fh; 481 + uint64_t offset; 482 + uint32_t size; 483 + uint32_t read_flags; 484 + uint64_t lock_owner; 485 + uint32_t flags; 486 + uint32_t padding; 482 487 }; 483 488 484 489 #define FUSE_COMPAT_WRITE_IN_SIZE 24 485 490 486 491 struct fuse_write_in { 487 - __u64 fh; 488 - __u64 offset; 489 - __u32 size; 490 - __u32 write_flags; 491 - __u64 lock_owner; 492 - __u32 flags; 493 - __u32 padding; 492 + uint64_t fh; 493 + uint64_t offset; 494 + uint32_t size; 495 + uint32_t write_flags; 496 + uint64_t lock_owner; 497 + uint32_t flags; 498 + uint32_t padding; 494 499 }; 495 500 496 501 struct fuse_write_out { 497 - __u32 size; 498 - __u32 padding; 502 + uint32_t size; 503 + uint32_t padding; 499 504 }; 500 505 501 506 #define FUSE_COMPAT_STATFS_SIZE 48 ··· 505 510 }; 506 511 507 512 struct fuse_fsync_in { 508 - __u64 fh; 509 - __u32 fsync_flags; 510 - __u32 padding; 513 + uint64_t fh; 514 + uint32_t fsync_flags; 515 + uint32_t padding; 511 516 }; 512 517 513 518 struct fuse_setxattr_in { 514 - __u32 size; 515 - __u32 flags; 519 + uint32_t size; 520 + uint32_t flags; 516 521 }; 517 522 518 523 struct fuse_getxattr_in { 519 - __u32 size; 520 - __u32 padding; 524 + uint32_t size; 525 + uint32_t padding; 521 526 }; 522 527 523 528 struct fuse_getxattr_out { 524 - __u32 size; 525 - __u32 padding; 529 + uint32_t size; 530 + uint32_t padding; 526 531 }; 527 532 528 533 struct fuse_lk_in { 529 - __u64 fh; 530 - __u64 owner; 534 + uint64_t fh; 535 + uint64_t owner; 531 536 struct fuse_file_lock lk; 532 - __u32 lk_flags; 533 - __u32 padding; 537 + uint32_t lk_flags; 538 + uint32_t padding; 534 539 }; 535 540 536 541 struct fuse_lk_out { ··· 538 543 }; 539 544 540 545 struct fuse_access_in { 541 - __u32 mask; 542 - __u32 padding; 546 + uint32_t mask; 547 + uint32_t padding; 543 548 }; 544 549 545 550 struct fuse_init_in { 546 - __u32 major; 547 - __u32 minor; 548 - __u32 max_readahead; 549 - 
__u32 flags;
551 + uint32_t major;
552 + uint32_t minor;
553 + uint32_t max_readahead;
554 + uint32_t flags;
550 555 };
551 556 
552 557 struct fuse_init_out {
553 - __u32 major;
554 - __u32 minor;
555 - __u32 max_readahead;
556 - __u32 flags;
557 - __u16 max_background;
558 - __u16 congestion_threshold;
559 - __u32 max_write;
558 + uint32_t major;
559 + uint32_t minor;
560 + uint32_t max_readahead;
561 + uint32_t flags;
562 + uint16_t max_background;
563 + uint16_t congestion_threshold;
564 + uint32_t max_write;
560 565 };
561 566 
562 567 #define CUSE_INIT_INFO_MAX 4096
563 568 
564 569 struct cuse_init_in {
565 - __u32 major;
566 - __u32 minor;
567 - __u32 unused;
568 - __u32 flags;
570 + uint32_t major;
571 + uint32_t minor;
572 + uint32_t unused;
573 + uint32_t flags;
569 574 };
570 575 
571 576 struct cuse_init_out {
572 - __u32 major;
573 - __u32 minor;
574 - __u32 unused;
575 - __u32 flags;
576 - __u32 max_read;
577 - __u32 max_write;
578 - __u32 dev_major;	/* chardev major */
579 - __u32 dev_minor;	/* chardev minor */
580 - __u32 spare[10];
577 + uint32_t major;
578 + uint32_t minor;
579 + uint32_t unused;
580 + uint32_t flags;
581 + uint32_t max_read;
582 + uint32_t max_write;
583 + uint32_t dev_major;	/* chardev major */
584 + uint32_t dev_minor;	/* chardev minor */
585 + uint32_t spare[10];
581 586 };
582 587 
583 588 struct fuse_interrupt_in {
584 - __u64 unique;
589 + uint64_t unique;
585 590 };
586 591 
587 592 struct fuse_bmap_in {
588 - __u64 block;
589 - __u32 blocksize;
590 - __u32 padding;
593 + uint64_t block;
594 + uint32_t blocksize;
595 + uint32_t padding;
591 596 };
592 597 
593 598 struct fuse_bmap_out {
594 - __u64 block;
599 + uint64_t block;
595 600 };
596 601 
597 602 struct fuse_ioctl_in {
598 - __u64 fh;
599 - __u32 flags;
600 - __u32 cmd;
601 - __u64 arg;
602 - __u32 in_size;
603 - __u32 out_size;
603 + uint64_t fh;
604 + uint32_t flags;
605 + uint32_t cmd;
606 + uint64_t arg;
607 + uint32_t in_size;
608 + uint32_t out_size;
604 609 };
605 610 
606 611 struct fuse_ioctl_iovec {
607 - __u64 base;
608 - __u64 len;
612 + uint64_t base;
613 + uint64_t len;
609 614 };
610 615 
611 616 struct fuse_ioctl_out {
612 - __s32 result;
613 - __u32 flags;
614 - __u32 in_iovs;
615 - __u32 out_iovs;
617 + int32_t result;
618 + uint32_t flags;
619 + uint32_t in_iovs;
620 + uint32_t out_iovs;
616 621 };
617 622 
618 623 struct fuse_poll_in {
619 - __u64 fh;
620 - __u64 kh;
621 - __u32 flags;
622 - __u32 events;
624 + uint64_t fh;
625 + uint64_t kh;
626 + uint32_t flags;
627 + uint32_t events;
623 628 };
624 629 
625 630 struct fuse_poll_out {
626 - __u32 revents;
627 - __u32 padding;
631 + uint32_t revents;
632 + uint32_t padding;
628 633 };
629 634 
630 635 struct fuse_notify_poll_wakeup_out {
631 - __u64 kh;
636 + uint64_t kh;
632 637 };
633 638 
634 639 struct fuse_fallocate_in {
635 - __u64 fh;
636 - __u64 offset;
637 - __u64 length;
638 - __u32 mode;
639 - __u32 padding;
640 + uint64_t fh;
641 + uint64_t offset;
642 + uint64_t length;
643 + uint32_t mode;
644 + uint32_t padding;
640 645 };
641 646 
642 647 struct fuse_in_header {
643 - __u32 len;
644 - __u32 opcode;
645 - __u64 unique;
646 - __u64 nodeid;
647 - __u32 uid;
648 - __u32 gid;
649 - __u32 pid;
650 - __u32 padding;
648 + uint32_t len;
649 + uint32_t opcode;
650 + uint64_t unique;
651 + uint64_t nodeid;
652 + uint32_t uid;
653 + uint32_t gid;
654 + uint32_t pid;
655 + uint32_t padding;
651 656 };
652 657 
653 658 struct fuse_out_header {
654 - __u32 len;
655 - __s32 error;
656 - __u64 unique;
659 + uint32_t len;
660 + int32_t error;
661 + uint64_t unique;
657 662 };
658 663 
659 664 struct fuse_dirent {
660 - __u64 ino;
661 - __u64 off;
662 - __u32 namelen;
663 - __u32 type;
665 + uint64_t ino;
666 + uint64_t off;
667 + uint32_t namelen;
668 + uint32_t type;
664 669 char name[];
665 670 };
666 671 
667 672 #define FUSE_NAME_OFFSET offsetof(struct fuse_dirent, name)
668 - #define FUSE_DIRENT_ALIGN(x) (((x) + sizeof(__u64) - 1) & ~(sizeof(__u64) - 1))
673 + #define FUSE_DIRENT_ALIGN(x) \
674 + (((x) + sizeof(uint64_t) - 1) & ~(sizeof(uint64_t) - 1))
669 675 #define FUSE_DIRENT_SIZE(d) \
670 676 FUSE_DIRENT_ALIGN(FUSE_NAME_OFFSET + (d)->namelen)
···
681 685 FUSE_DIRENT_ALIGN(FUSE_NAME_OFFSET_DIRENTPLUS + (d)->dirent.namelen)
682 686 
683 687 struct fuse_notify_inval_inode_out {
684 - __u64 ino;
685 - __s64 off;
686 - __s64 len;
688 + uint64_t ino;
689 + int64_t off;
690 + int64_t len;
687 691 };
688 692 
689 693 struct fuse_notify_inval_entry_out {
690 - __u64 parent;
691 - __u32 namelen;
692 - __u32 padding;
694 + uint64_t parent;
695 + uint32_t namelen;
696 + uint32_t padding;
693 697 };
694 698 
695 699 struct fuse_notify_delete_out {
696 - __u64 parent;
697 - __u64 child;
698 - __u32 namelen;
699 - __u32 padding;
700 + uint64_t parent;
701 + uint64_t child;
702 + uint32_t namelen;
703 + uint32_t padding;
700 704 };
701 705 
702 706 struct fuse_notify_store_out {
703 - __u64 nodeid;
704 - __u64 offset;
705 - __u32 size;
706 - __u32 padding;
707 + uint64_t nodeid;
708 + uint64_t offset;
709 + uint32_t size;
710 + uint32_t padding;
707 711 };
708 712 
709 713 struct fuse_notify_retrieve_out {
710 - __u64 notify_unique;
711 - __u64 nodeid;
712 - __u64 offset;
713 - __u32 size;
714 - __u32 padding;
714 + uint64_t notify_unique;
715 + uint64_t nodeid;
716 + uint64_t offset;
717 + uint32_t size;
718 + uint32_t padding;
715 719 };
716 720 
717 721 /* Matches the size of fuse_write_in */
718 722 struct fuse_notify_retrieve_in {
719 - __u64 dummy1;
720 - __u64 offset;
721 - __u32 size;
722 - __u32 dummy2;
723 - __u64 dummy3;
724 - __u64 dummy4;
723 + uint64_t dummy1;
724 + uint64_t offset;
725 + uint32_t size;
726 + uint32_t dummy2;
727 + uint64_t dummy3;
728 + uint64_t dummy4;
725 729 };
726 730 
727 731 #endif /* _LINUX_FUSE_H */
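The reworked `FUSE_DIRENT_ALIGN` macro rounds a record size up to the next multiple of `sizeof(uint64_t)` so each dirent starts 8-byte aligned. A minimal userspace sketch of the same idiom (`DIRENT_ALIGN`/`dirent_align` are illustrative names, not kernel API):

```c
#include <stdint.h>
#include <stddef.h>

/* Round x up to the next multiple of 8 (sizeof(uint64_t)), as
 * FUSE_DIRENT_ALIGN does for dirent records. Works because 8 is a
 * power of two: add (8 - 1), then clear the low three bits. */
#define DIRENT_ALIGN(x) \
	(((x) + sizeof(uint64_t) - 1) & ~(sizeof(uint64_t) - 1))

static size_t dirent_align(size_t x)
{
	return DIRENT_ALIGN(x);
}
```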
+24
kernel/capability.c
···
393 393 EXPORT_SYMBOL(ns_capable);
394 394 
395 395 /**
396 + * file_ns_capable - Determine if the file's opener had a capability in effect
397 + * @file: The file we want to check
398 + * @ns: The usernamespace we want the capability in
399 + * @cap: The capability to be tested for
400 + *
401 + * Return true if task that opened the file had a capability in effect
402 + * when the file was opened.
403 + *
404 + * This does not set PF_SUPERPRIV because the caller may not
405 + * actually be privileged.
406 + */
407 + bool file_ns_capable(const struct file *file, struct user_namespace *ns, int cap)
408 + {
409 + if (WARN_ON_ONCE(!cap_valid(cap)))
410 + return false;
411 + 
412 + if (security_capable(file->f_cred, ns, cap) == 0)
413 + return true;
414 + 
415 + return false;
416 + }
417 + EXPORT_SYMBOL(file_ns_capable);
418 + 
419 + /**
396 420 * capable - Determine if the current task has a superior capability in effect
397 421 * @cap: The capability to be tested for
398 422 *
+4 -2
kernel/events/core.c
···
4737 4737 } else {
4738 4738 if (arch_vma_name(mmap_event->vma)) {
4739 4739 name = strncpy(tmp, arch_vma_name(mmap_event->vma),
4740 - sizeof(tmp));
4740 + sizeof(tmp) - 1);
4741 + tmp[sizeof(tmp) - 1] = '\0';
4741 4742 goto got_name;
4742 4743 }
4743 4744 
···
5331 5330 
5332 5331 static int perf_swevent_init(struct perf_event *event)
5333 5332 {
5334 - int event_id = event->attr.config;
5333 + u64 event_id = event->attr.config;
5335 5334 
5336 5335 if (event->attr.type != PERF_TYPE_SOFTWARE)
5337 5336 return -ENOENT;
···
5987 5986 if (pmu->pmu_cpu_context)
5988 5987 goto got_cpu_context;
5989 5988 
5989 + ret = -ENOMEM;
5990 5990 pmu->pmu_cpu_context = alloc_percpu(struct perf_cpu_context);
5991 5991 if (!pmu->pmu_cpu_context)
5992 5992 goto free_dev;
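The first hunk above works around a well-known `strncpy()` pitfall: when the source is at least as long as the count, `strncpy()` fills the buffer without a terminating NUL. The fix copies at most `sizeof(tmp) - 1` bytes and terminates explicitly. A small sketch of the same pattern (`copy_name`/`demo` are illustrative helpers, not kernel API):

```c
#include <string.h>

/* Copy like the fixed perf code: cap the copy at dstsz-1 bytes and
 * always NUL-terminate, since strncpy() leaves the buffer
 * unterminated whenever the source fills it. */
static void copy_name(char *dst, size_t dstsz, const char *src)
{
	strncpy(dst, src, dstsz - 1);
	dst[dstsz - 1] = '\0';
}

/* An 8-byte buffer fed an overlong name ends up with 7 characters
 * plus the terminator, instead of 8 unterminated bytes. */
static size_t demo(void)
{
	char buf[8];

	copy_name(buf, sizeof(buf), "very_long_vma_name");
	return strlen(buf);
}
```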
+1 -1
kernel/events/internal.h
···
16 16 int page_order; /* allocation order */
17 17 #endif
18 18 int nr_pages; /* nr of data pages */
19 - int writable; /* are we writable */
19 + int overwrite; /* can overwrite itself */
20 20 
21 21 atomic_t poll; /* POLL_ for wakeups */
22 22 
+18 -4
kernel/events/ring_buffer.c
···
18 18 static bool perf_output_space(struct ring_buffer *rb, unsigned long tail,
19 19 unsigned long offset, unsigned long head)
20 20 {
21 - unsigned long mask;
21 + unsigned long sz = perf_data_size(rb);
22 + unsigned long mask = sz - 1;
22 23 
23 - if (!rb->writable)
24 + /*
25 + * check if user-writable
26 + * overwrite : over-write its own tail
27 + * !overwrite: buffer possibly drops events.
28 + */
29 + if (rb->overwrite)
24 30 return true;
25 31 
26 - mask = perf_data_size(rb) - 1;
32 + /*
33 + * verify that payload is not bigger than buffer
34 + * otherwise masking logic may fail to detect
35 + * the "not enough space" condition
36 + */
37 + if ((head - offset) > sz)
38 + return false;
27 39 
28 40 offset = (offset - tail) & mask;
29 41 head = (head - tail) & mask;
···
224 212 rb->watermark = max_size / 2;
225 213 
226 214 if (flags & RING_BUFFER_WRITABLE)
227 - rb->writable = 1;
215 + rb->overwrite = 0;
216 + else
217 + rb->overwrite = 1;
228 218 
229 219 atomic_set(&rb->refcount, 1);
230 220 
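The guard added above matters because the buffer size is a power of two and distances are reduced with `& mask`: a payload larger than the whole buffer would wrap around under the mask and could look like it fits. A hypothetical userspace model of the check (not the kernel's `struct ring_buffer`; the wrap condition after masking is simplified here):

```c
#include <stdbool.h>

/* Model of the fixed space check for a power-of-two ring buffer.
 * tail is the reader position; the writer wants to commit the
 * bytes in [offset, head). */
static bool output_space(unsigned long size, unsigned long tail,
			 unsigned long offset, unsigned long head)
{
	unsigned long mask = size - 1;	/* size must be a power of two */

	/* the added guard: a payload bigger than the buffer never fits */
	if ((head - offset) > size)
		return false;

	/* distances from the reader, modulo the buffer size */
	offset = (offset - tail) & mask;
	head = (head - tail) & mask;

	/* the write [offset, head) must not wrap past the reader */
	return head >= offset;
}
```

Without the guard, e.g. `head - offset == size + 4` would mask down to 4 and pass the final comparison.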
+1 -2
kernel/hrtimer.c
···
63 63 DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) =
64 64 {
65 65 
66 + .lock = __RAW_SPIN_LOCK_UNLOCKED(hrtimer_bases.lock),
66 67 .clock_base =
67 68 {
68 69 {
···
1642 1641 {
1643 1642 struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);
1644 1643 int i;
1645 - 
1646 - raw_spin_lock_init(&cpu_base->lock);
1647 1644 
1648 1645 for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
1649 1646 cpu_base->clock_base[i].cpu_base = cpu_base;
+105 -13
kernel/kexec.c
···
55 55 .flags = IORESOURCE_BUSY | IORESOURCE_MEM
56 56 };
57 57 struct resource crashk_low_res = {
58 - .name = "Crash kernel low",
58 + .name = "Crash kernel",
59 59 .start = 0,
60 60 .end = 0,
61 61 .flags = IORESOURCE_BUSY | IORESOURCE_MEM
···
1368 1368 return 0;
1369 1369 }
1370 1370 
1371 + #define SUFFIX_HIGH 0
1372 + #define SUFFIX_LOW 1
1373 + #define SUFFIX_NULL 2
1374 + static __initdata char *suffix_tbl[] = {
1375 + [SUFFIX_HIGH] = ",high",
1376 + [SUFFIX_LOW] = ",low",
1377 + [SUFFIX_NULL] = NULL,
1378 + };
1379 + 
1371 1380 /*
1372 - * That function is the entry point for command line parsing and should be
1373 - * called from the arch-specific code.
1381 + * That function parses "suffix" crashkernel command lines like
1382 + *
1383 + * crashkernel=size,[high|low]
1384 + *
1385 + * It returns 0 on success and -EINVAL on failure.
1374 1386 */
1387 + static int __init parse_crashkernel_suffix(char *cmdline,
1388 + unsigned long long *crash_size,
1389 + unsigned long long *crash_base,
1390 + const char *suffix)
1391 + {
1392 + char *cur = cmdline;
1393 + 
1394 + *crash_size = memparse(cmdline, &cur);
1395 + if (cmdline == cur) {
1396 + pr_warn("crashkernel: memory value expected\n");
1397 + return -EINVAL;
1398 + }
1399 + 
1400 + /* check with suffix */
1401 + if (strncmp(cur, suffix, strlen(suffix))) {
1402 + pr_warn("crashkernel: unrecognized char\n");
1403 + return -EINVAL;
1404 + }
1405 + cur += strlen(suffix);
1406 + if (*cur != ' ' && *cur != '\0') {
1407 + pr_warn("crashkernel: unrecognized char\n");
1408 + return -EINVAL;
1409 + }
1410 + 
1411 + return 0;
1412 + }
1413 + 
1414 + static __init char *get_last_crashkernel(char *cmdline,
1415 + const char *name,
1416 + const char *suffix)
1417 + {
1418 + char *p = cmdline, *ck_cmdline = NULL;
1419 + 
1420 + /* find crashkernel and use the last one if there are more */
1421 + p = strstr(p, name);
1422 + while (p) {
1423 + char *end_p = strchr(p, ' ');
1424 + char *q;
1425 + 
1426 + if (!end_p)
1427 + end_p = p + strlen(p);
1428 + 
1429 + if (!suffix) {
1430 + int i;
1431 + 
1432 + /* skip the one with any known suffix */
1433 + for (i = 0; suffix_tbl[i]; i++) {
1434 + q = end_p - strlen(suffix_tbl[i]);
1435 + if (!strncmp(q, suffix_tbl[i],
1436 + strlen(suffix_tbl[i])))
1437 + goto next;
1438 + }
1439 + ck_cmdline = p;
1440 + } else {
1441 + q = end_p - strlen(suffix);
1442 + if (!strncmp(q, suffix, strlen(suffix)))
1443 + ck_cmdline = p;
1444 + }
1445 + next:
1446 + p = strstr(p+1, name);
1447 + }
1448 + 
1449 + if (!ck_cmdline)
1450 + return NULL;
1451 + 
1452 + return ck_cmdline;
1453 + }
1454 + 
1375 1455 static int __init __parse_crashkernel(char *cmdline,
1376 1456 unsigned long long system_ram,
1377 1457 unsigned long long *crash_size,
1378 1458 unsigned long long *crash_base,
1379 - const char *name)
1459 + const char *name,
1460 + const char *suffix)
1380 1461 {
1381 - char *p = cmdline, *ck_cmdline = NULL;
1382 1462 char *first_colon, *first_space;
1463 + char *ck_cmdline;
1383 1464 
1384 1465 BUG_ON(!crash_size || !crash_base);
1385 1466 *crash_size = 0;
1386 1467 *crash_base = 0;
1387 1468 
1388 - /* find crashkernel and use the last one if there are more */
1389 - p = strstr(p, name);
1390 - while (p) {
1391 - ck_cmdline = p;
1392 - p = strstr(p+1, name);
1393 - }
1469 + ck_cmdline = get_last_crashkernel(cmdline, name, suffix);
1394 1470 
1395 1471 if (!ck_cmdline)
1396 1472 return -EINVAL;
1397 1473 
1398 1474 ck_cmdline += strlen(name);
1399 1475 
1476 + if (suffix)
1477 + return parse_crashkernel_suffix(ck_cmdline, crash_size,
1478 + crash_base, suffix);
1400 1479 /*
1401 1480 * if the commandline contains a ':', then that's the extended
1402 1481 * syntax -- if not, it must be the classic syntax
···
1492 1413 return 0;
1493 1414 }
1494 1415 
1416 + /*
1417 + * That function is the entry point for command line parsing and should be
1418 + * called from the arch-specific code.
1419 + */
1495 1420 int __init parse_crashkernel(char *cmdline,
1496 1421 unsigned long long system_ram,
1497 1422 unsigned long long *crash_size,
1498 1423 unsigned long long *crash_base)
1499 1424 {
1500 1425 return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base,
1501 - "crashkernel=");
1426 + "crashkernel=", NULL);
1427 + }
1428 + 
1429 + int __init parse_crashkernel_high(char *cmdline,
1430 + unsigned long long system_ram,
1431 + unsigned long long *crash_size,
1432 + unsigned long long *crash_base)
1433 + {
1434 + return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base,
1435 + "crashkernel=", suffix_tbl[SUFFIX_HIGH]);
1502 1436 }
1503 1437 
1504 1438 int __init parse_crashkernel_low(char *cmdline,
···
1520 1428 unsigned long long *crash_base)
1521 1429 {
1522 1430 return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base,
1523 - "crashkernel_low=");
1431 + "crashkernel=", suffix_tbl[SUFFIX_LOW]);
1524 1432 }
1525 1433 
1526 1434 static void update_vmcoreinfo_note(void)
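The new `parse_crashkernel_suffix()` accepts strings of the form `size,high` or `size,low`: parse the size, require the exact suffix, and allow only a space or end-of-string after it. A userspace sketch of that flow, where `parse_size()` is a minimal stand-in for the kernel's `memparse()` handling only an optional `K`/`M`/`G` suffix (both helper names here are illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for memparse(): number with optional K/M/G multiplier. */
static unsigned long long parse_size(const char *s, char **end)
{
	unsigned long long v = strtoull(s, end, 0);

	switch (**end) {
	case 'G': v <<= 10; /* fall through */
	case 'M': v <<= 10; /* fall through */
	case 'K': v <<= 10; (*end)++; break;
	}
	return v;
}

/* Returns the size on success, 0 when the string is malformed or the
 * suffix does not match (the kernel returns 0/-EINVAL instead). */
static unsigned long long parse_crashkernel_suffix(const char *cmdline,
						   const char *suffix)
{
	char *cur;
	unsigned long long size = parse_size(cmdline, &cur);

	if (cur == cmdline)
		return 0;		/* memory value expected */
	if (strncmp(cur, suffix, strlen(suffix)))
		return 0;		/* wrong or missing suffix */
	cur += strlen(suffix);
	if (*cur != ' ' && *cur != '\0')
		return 0;		/* trailing junk */
	return size;
}
```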
+13 -6
kernel/kprobes.c
···
794 794 }
795 795 
796 796 #ifdef CONFIG_SYSCTL
797 - /* This should be called with kprobe_mutex locked */
798 797 static void __kprobes optimize_all_kprobes(void)
799 798 {
800 799 struct hlist_head *head;
801 800 struct kprobe *p;
802 801 unsigned int i;
803 802 
803 + mutex_lock(&kprobe_mutex);
804 804 /* If optimization is already allowed, just return */
805 805 if (kprobes_allow_optimization)
806 - return;
806 + goto out;
807 807 
808 808 kprobes_allow_optimization = true;
809 809 for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
···
813 813 optimize_kprobe(p);
814 814 }
815 815 printk(KERN_INFO "Kprobes globally optimized\n");
816 + out:
817 + mutex_unlock(&kprobe_mutex);
816 818 }
817 819 
818 - /* This should be called with kprobe_mutex locked */
819 820 static void __kprobes unoptimize_all_kprobes(void)
820 821 {
821 822 struct hlist_head *head;
822 823 struct kprobe *p;
823 824 unsigned int i;
824 825 
826 + mutex_lock(&kprobe_mutex);
825 827 /* If optimization is already prohibited, just return */
826 - if (!kprobes_allow_optimization)
828 + if (!kprobes_allow_optimization) {
829 + mutex_unlock(&kprobe_mutex);
827 830 return;
831 + }
828 832 
829 833 kprobes_allow_optimization = false;
830 834 for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
···
838 834 unoptimize_kprobe(p, false);
839 835 }
840 836 }
837 + mutex_unlock(&kprobe_mutex);
838 + 
841 839 /* Wait for unoptimizing completion */
842 840 wait_for_kprobe_optimizer();
843 841 printk(KERN_INFO "Kprobes globally unoptimized\n");
844 842 }
845 843 
844 + static DEFINE_MUTEX(kprobe_sysctl_mutex);
846 845 int sysctl_kprobes_optimization;
847 846 int proc_kprobes_optimization_handler(struct ctl_table *table, int write,
848 847 void __user *buffer, size_t *length,
···
853 846 {
854 847 int ret;
855 848 
856 - mutex_lock(&kprobe_mutex);
849 + mutex_lock(&kprobe_sysctl_mutex);
857 850 sysctl_kprobes_optimization = kprobes_allow_optimization ? 1 : 0;
858 851 ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
859 852 
···
861 854 optimize_all_kprobes();
862 855 else
863 856 unoptimize_all_kprobes();
864 - mutex_unlock(&kprobe_mutex);
857 + mutex_unlock(&kprobe_sysctl_mutex);
865 858 
866 859 return ret;
867 860 }
+28 -24
kernel/kthread.c
···
124 124 
125 125 static void __kthread_parkme(struct kthread *self)
126 126 {
127 - __set_current_state(TASK_INTERRUPTIBLE);
127 + __set_current_state(TASK_PARKED);
128 128 while (test_bit(KTHREAD_SHOULD_PARK, &self->flags)) {
129 129 if (!test_and_set_bit(KTHREAD_IS_PARKED, &self->flags))
130 130 complete(&self->parked);
131 131 schedule();
132 - __set_current_state(TASK_INTERRUPTIBLE);
132 + __set_current_state(TASK_PARKED);
133 133 }
134 134 clear_bit(KTHREAD_IS_PARKED, &self->flags);
135 135 __set_current_state(TASK_RUNNING);
···
256 256 }
257 257 EXPORT_SYMBOL(kthread_create_on_node);
258 258 
259 - static void __kthread_bind(struct task_struct *p, unsigned int cpu)
259 + static void __kthread_bind(struct task_struct *p, unsigned int cpu, long state)
260 260 {
261 + /* Must have done schedule() in kthread() before we set_task_cpu */
262 + if (!wait_task_inactive(p, state)) {
263 + WARN_ON(1);
264 + return;
265 + }
261 266 /* It's safe because the task is inactive. */
262 267 do_set_cpus_allowed(p, cpumask_of(cpu));
263 268 p->flags |= PF_THREAD_BOUND;
···
279 274 */
280 275 void kthread_bind(struct task_struct *p, unsigned int cpu)
281 276 {
282 - /* Must have done schedule() in kthread() before we set_task_cpu */
283 - if (!wait_task_inactive(p, TASK_UNINTERRUPTIBLE)) {
284 - WARN_ON(1);
285 - return;
286 - }
287 - __kthread_bind(p, cpu);
277 + __kthread_bind(p, cpu, TASK_UNINTERRUPTIBLE);
288 278 }
289 279 EXPORT_SYMBOL(kthread_bind);
290 280 
···
324 324 return NULL;
325 325 }
326 326 
327 + static void __kthread_unpark(struct task_struct *k, struct kthread *kthread)
328 + {
329 + clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
330 + /*
331 + * We clear the IS_PARKED bit here as we don't wait
332 + * until the task has left the park code. So if we'd
333 + * park before that happens we'd see the IS_PARKED bit
334 + * which might be about to be cleared.
335 + */
336 + if (test_and_clear_bit(KTHREAD_IS_PARKED, &kthread->flags)) {
337 + if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags))
338 + __kthread_bind(k, kthread->cpu, TASK_PARKED);
339 + wake_up_state(k, TASK_PARKED);
340 + }
341 + }
342 + 
327 343 /**
328 344 * kthread_unpark - unpark a thread created by kthread_create().
329 345 * @k: thread created by kthread_create().
···
352 336 {
353 337 struct kthread *kthread = task_get_live_kthread(k);
354 338 
355 - if (kthread) {
356 - clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
357 - /*
358 - * We clear the IS_PARKED bit here as we don't wait
359 - * until the task has left the park code. So if we'd
360 - * park before that happens we'd see the IS_PARKED bit
361 - * which might be about to be cleared.
362 - */
363 - if (test_and_clear_bit(KTHREAD_IS_PARKED, &kthread->flags)) {
364 - if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags))
365 - __kthread_bind(k, kthread->cpu);
366 - wake_up_process(k);
367 - }
368 - }
339 + if (kthread)
340 + __kthread_unpark(k, kthread);
369 341 put_task_struct(k);
370 342 }
371 343 
···
411 407 trace_sched_kthread_stop(k);
412 408 if (kthread) {
413 409 set_bit(KTHREAD_SHOULD_STOP, &kthread->flags);
414 - clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
410 + __kthread_unpark(k, kthread);
415 411 wake_up_process(k);
416 412 wait_for_completion(&kthread->exited);
417 413 }
+26
kernel/sched/clock.c
···
176 176 u64 this_clock, remote_clock;
177 177 u64 *ptr, old_val, val;
178 178 
179 + #if BITS_PER_LONG != 64
180 + again:
181 + /*
182 + * Careful here: The local and the remote clock values need to
183 + * be read out atomic as we need to compare the values and
184 + * then update either the local or the remote side. So the
185 + * cmpxchg64 below only protects one readout.
186 + *
187 + * We must reread via sched_clock_local() in the retry case on
188 + * 32bit as an NMI could use sched_clock_local() via the
189 + * tracer and hit between the readout of
190 + * the low32bit and the high 32bit portion.
191 + */
192 + this_clock = sched_clock_local(my_scd);
193 + /*
194 + * We must enforce atomic readout on 32bit, otherwise the
195 + * update on the remote cpu can hit inbetween the readout of
196 + * the low32bit and the high 32bit portion.
197 + */
198 + remote_clock = cmpxchg64(&scd->clock, 0, 0);
199 + #else
200 + /*
201 + * On 64bit the read of [my]scd->clock is atomic versus the
202 + * update, so we can avoid the above 32bit dance.
203 + */
179 204 sched_clock_local(my_scd);
180 205 again:
181 206 this_clock = my_scd->clock;
182 207 remote_clock = scd->clock;
208 + #endif
183 209 
184 210 /*
185 211 * Use the opportunity that we have both locks
+5 -3
kernel/sched/core.c
···
1498 1498 {
1499 1499 struct rq *rq = task_rq(p);
1500 1500 
1501 - BUG_ON(rq != this_rq());
1502 - BUG_ON(p == current);
1501 + if (WARN_ON_ONCE(rq != this_rq()) ||
1502 + WARN_ON_ONCE(p == current))
1503 + return;
1504 + 
1503 1505 lockdep_assert_held(&rq->lock);
1504 1506 
1505 1507 if (!raw_spin_trylock(&p->pi_lock)) {
···
5001 4999 }
5002 5000 
5003 5001 static int min_load_idx = 0;
5004 - static int max_load_idx = CPU_LOAD_IDX_MAX;
5002 + static int max_load_idx = CPU_LOAD_IDX_MAX-1;
5005 5003 
5006 5004 static void
5007 5005 set_table_entry(struct ctl_table *entry,
+1 -1
kernel/sched/cputime.c
···
310 310 
311 311 t = tsk;
312 312 do {
313 - task_cputime(tsk, &utime, &stime);
313 + task_cputime(t, &utime, &stime);
314 314 times->utime += utime;
315 315 times->stime += stime;
316 316 times->sum_exec_runtime += task_sched_runtime(t);
+1 -1
kernel/signal.c
···
2950 2950 
2951 2951 static int do_tkill(pid_t tgid, pid_t pid, int sig)
2952 2952 {
2953 - struct siginfo info;
2953 + struct siginfo info = {};
2954 2954 
2955 2955 info.si_signo = sig;
2956 2956 info.si_errno = 0;
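The `= {}` initializer above zeroes every member of the on-stack `siginfo`, including fields and padding this path never assigns, so copying the struct to userspace cannot leak stale stack bytes. A sketch with a stand-in struct (`struct info`/`unset_members_are_zero` are illustrative; `{0}` is the strictly portable spelling of the empty initializer):

```c
/* Stand-in for siginfo: only some members get assigned explicitly,
 * the rest must read back as zero rather than stack junk. */
struct info {
	int si_signo;
	int si_errno;
	int pad[4];	/* members this code path never sets */
};

static int unset_members_are_zero(int sig)
{
	struct info info = {0};	/* like "= {}": whole struct zeroed */
	int i;

	info.si_signo = sig;	/* partial explicit assignment */
	info.si_errno = 0;

	for (i = 0; i < 4; i++)
		if (info.pad[i] != 0)
			return 0;
	return 1;
}
```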
+12 -2
kernel/smpboot.c
···
185 185 }
186 186 get_task_struct(tsk);
187 187 *per_cpu_ptr(ht->store, cpu) = tsk;
188 - if (ht->create)
189 - ht->create(cpu);
188 + if (ht->create) {
189 + /*
190 + * Make sure that the task has actually scheduled out
191 + * into park position, before calling the create
192 + * callback. At least the migration thread callback
193 + * requires that the task is off the runqueue.
194 + */
195 + if (!wait_task_inactive(tsk, TASK_PARKED))
196 + WARN_ON(1);
197 + else
198 + ht->create(cpu);
199 + }
190 200 return 0;
191 201 }
192 202 
+2 -1
kernel/sys.c
···
324 324 system_state = SYSTEM_RESTART;
325 325 usermodehelper_disable();
326 326 device_shutdown();
327 - syscore_shutdown();
328 327 }
329 328 
330 329 /**
···
369 370 {
370 371 kernel_restart_prepare(cmd);
371 372 disable_nonboot_cpus();
373 + syscore_shutdown();
372 374 if (!cmd)
373 375 printk(KERN_EMERG "Restarting system.\n");
374 376 else
···
395 395 void kernel_halt(void)
396 396 {
397 397 kernel_shutdown_prepare(SYSTEM_HALT);
398 + disable_nonboot_cpus();
398 399 syscore_shutdown();
399 400 printk(KERN_EMERG "System halted.\n");
400 401 kmsg_dump(KMSG_DUMP_HALT);
+3 -23
kernel/trace/blktrace.c
···
739 739 struct request_queue *q,
740 740 struct request *rq)
741 741 {
742 - struct blk_trace *bt = q->blk_trace;
743 - 
744 - /* if control ever passes through here, it's a request based driver */
745 - if (unlikely(bt && !bt->rq_based))
746 - bt->rq_based = true;
747 - 
748 742 blk_add_trace_rq(q, rq, BLK_TA_COMPLETE);
749 743 }
750 744 
···
774 780 blk_add_trace_bio(q, bio, BLK_TA_BOUNCE, 0);
775 781 }
776 782 
777 - static void blk_add_trace_bio_complete(void *ignore, struct bio *bio, int error)
783 + static void blk_add_trace_bio_complete(void *ignore,
784 + struct request_queue *q, struct bio *bio,
785 + int error)
778 786 {
779 - struct request_queue *q;
780 - struct blk_trace *bt;
781 - 
782 - if (!bio->bi_bdev)
783 - return;
784 - 
785 - q = bdev_get_queue(bio->bi_bdev);
786 - bt = q->blk_trace;
787 - 
788 - /*
789 - * Request based drivers will generate both rq and bio completions.
790 - * Ignore bio ones.
791 - */
792 - if (likely(!bt) || bt->rq_based)
793 - return;
794 - 
795 787 blk_add_trace_bio(q, bio, BLK_TA_COMPLETE, error);
796 788 }
797 789 
+25 -29
kernel/trace/ftrace.c
···
66 66 
67 67 static struct ftrace_ops ftrace_list_end __read_mostly = {
68 68 .func = ftrace_stub,
69 - .flags = FTRACE_OPS_FL_RECURSION_SAFE,
69 + .flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_STUB,
70 70 };
71 71 
72 72 /* ftrace_enabled is a method to turn ftrace on or off */
···
694 694 free_page(tmp);
695 695 }
696 696 
697 - free_page((unsigned long)stat->pages);
698 697 stat->pages = NULL;
699 698 stat->start = NULL;
700 699 
···
1051 1052 #endif /* CONFIG_FUNCTION_PROFILER */
1052 1053 
1053 1054 static struct pid * const ftrace_swapper_pid = &init_struct_pid;
1055 + 
1056 + loff_t
1057 + ftrace_filter_lseek(struct file *file, loff_t offset, int whence)
1058 + {
1059 + loff_t ret;
1060 + 
1061 + if (file->f_mode & FMODE_READ)
1062 + ret = seq_lseek(file, offset, whence);
1063 + else
1064 + file->f_pos = ret = 1;
1065 + 
1066 + return ret;
1067 + }
1054 1068 
1055 1069 #ifdef CONFIG_DYNAMIC_FTRACE
1056 1070 
···
2625 2613 * routine, you can use ftrace_filter_write() for the write
2626 2614 * routine if @flag has FTRACE_ITER_FILTER set, or
2627 2615 * ftrace_notrace_write() if @flag has FTRACE_ITER_NOTRACE set.
2628 - * ftrace_regex_lseek() should be used as the lseek routine, and
2616 + * ftrace_filter_lseek() should be used as the lseek routine, and
2629 2617 * release must call ftrace_regex_release().
2630 2618 */
2631 2619 int
···
2707 2695 {
2708 2696 return ftrace_regex_open(&global_ops, FTRACE_ITER_NOTRACE,
2709 2697 inode, file);
2710 - }
2711 - 
2712 - loff_t
2713 - ftrace_regex_lseek(struct file *file, loff_t offset, int whence)
2714 - {
2715 - loff_t ret;
2716 - 
2717 - if (file->f_mode & FMODE_READ)
2718 - ret = seq_lseek(file, offset, whence);
2719 - else
2720 - file->f_pos = ret = 1;
2721 - 
2722 - return ret;
2723 2698 }
2724 2699 
2725 2700 static int ftrace_match(char *str, char *regex, int len, int type)
···
3440 3441 
3441 3442 static int __init set_ftrace_notrace(char *str)
3442 3443 {
3443 - strncpy(ftrace_notrace_buf, str, FTRACE_FILTER_SIZE);
3444 + strlcpy(ftrace_notrace_buf, str, FTRACE_FILTER_SIZE);
3444 3445 return 1;
3445 3446 }
3446 3447 __setup("ftrace_notrace=", set_ftrace_notrace);
3447 3448 
3448 3449 static int __init set_ftrace_filter(char *str)
3449 3450 {
3450 - strncpy(ftrace_filter_buf, str, FTRACE_FILTER_SIZE);
3451 + strlcpy(ftrace_filter_buf, str, FTRACE_FILTER_SIZE);
3451 3452 return 1;
3452 3453 }
3453 3454 __setup("ftrace_filter=", set_ftrace_filter);
···
3570 3571 .open = ftrace_filter_open,
3571 3572 .read = seq_read,
3572 3573 .write = ftrace_filter_write,
3573 - .llseek = ftrace_regex_lseek,
3574 + .llseek = ftrace_filter_lseek,
3574 3575 .release = ftrace_regex_release,
3575 3576 };
3576 3577 
···
3578 3579 .open = ftrace_notrace_open,
3579 3580 .read = seq_read,
3580 3581 .write = ftrace_notrace_write,
3581 - .llseek = ftrace_regex_lseek,
3582 + .llseek = ftrace_filter_lseek,
3582 3583 .release = ftrace_regex_release,
3583 3584 };
3584 3585 
···
3783 3784 .open = ftrace_graph_open,
3784 3785 .read = seq_read,
3785 3786 .write = ftrace_graph_write,
3787 + .llseek = ftrace_filter_lseek,
3786 3788 .release = ftrace_graph_release,
3787 - .llseek = seq_lseek,
3788 3789 };
3789 3790 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
3790 3791 
···
4130 4131 preempt_disable_notrace();
4131 4132 trace_recursion_set(TRACE_CONTROL_BIT);
4132 4133 do_for_each_ftrace_op(op, ftrace_control_list) {
4133 - if (!ftrace_function_local_disabled(op) &&
4134 + if (!(op->flags & FTRACE_OPS_FL_STUB) &&
4135 + !ftrace_function_local_disabled(op) &&
4134 4136 ftrace_ops_test(op, ip))
4135 4137 op->func(ip, parent_ip, op, regs);
4136 4138 } while_for_each_ftrace_op(op);
···
4439 4439 .open = ftrace_pid_open,
4440 4440 .write = ftrace_pid_write,
4441 4441 .read = seq_read,
4442 - .llseek = seq_lseek,
4442 + .llseek = ftrace_filter_lseek,
4443 4443 .release = ftrace_pid_release,
4444 4444 };
4445 4445 
···
4555 4555 ftrace_startup_sysctl();
4556 4556 
4557 4557 /* we are starting ftrace again */
4558 - if (ftrace_ops_list != &ftrace_list_end) {
4559 - if (ftrace_ops_list->next == &ftrace_list_end)
4560 - ftrace_trace_function = ftrace_ops_list->func;
4561 - else
4562 - ftrace_trace_function = ftrace_ops_list_func;
4563 - }
4558 + if (ftrace_ops_list != &ftrace_list_end)
4559 + update_ftrace_function();
4564 4560 
4565 4561 } else {
4566 4562 /* stopping ftrace calls (just send to ftrace_stub) */
+6 -3
kernel/trace/trace.c
···
132 132 
133 133 static int __init set_cmdline_ftrace(char *str)
134 134 {
135 - strncpy(bootup_tracer_buf, str, MAX_TRACER_SIZE);
135 + strlcpy(bootup_tracer_buf, str, MAX_TRACER_SIZE);
136 136 default_bootup_tracer = bootup_tracer_buf;
137 137 /* We are using ftrace early, expand it */
138 138 ring_buffer_expanded = 1;
···
162 162 
163 163 static int __init set_trace_boot_options(char *str)
164 164 {
165 - strncpy(trace_boot_options_buf, str, MAX_TRACER_SIZE);
165 + strlcpy(trace_boot_options_buf, str, MAX_TRACER_SIZE);
166 166 trace_boot_options = trace_boot_options_buf;
167 167 return 0;
168 168 }
···
744 744 return;
745 745 
746 746 WARN_ON_ONCE(!irqs_disabled());
747 - if (WARN_ON_ONCE(!current_trace->allocated_snapshot))
747 + if (!current_trace->allocated_snapshot) {
748 + /* Only the nop tracer should hit this when disabling */
749 + WARN_ON_ONCE(current_trace != &nop_trace);
748 750 return;
751 + }
749 752 
750 753 arch_spin_lock(&ftrace_max_lock);
751 754 
+1 -1
kernel/trace/trace_stack.c
···
322 322 .open = stack_trace_filter_open,
323 323 .read = seq_read,
324 324 .write = ftrace_filter_write,
325 - .llseek = ftrace_regex_lseek,
325 + .llseek = ftrace_filter_lseek,
326 326 .release = ftrace_regex_release,
327 327 };
328 328 
+13 -9
kernel/user_namespace.c
···
25 25 
26 26 static struct kmem_cache *user_ns_cachep __read_mostly;
27 27 
28 - static bool new_idmap_permitted(struct user_namespace *ns, int cap_setid,
28 + static bool new_idmap_permitted(const struct file *file,
29 + struct user_namespace *ns, int cap_setid,
29 30 struct uid_gid_map *map);
30 31 
31 32 static void set_cred_user_ns(struct cred *cred, struct user_namespace *user_ns)
···
613 612 if (map->nr_extents != 0)
614 613 goto out;
615 614 
616 - /* Require the appropriate privilege CAP_SETUID or CAP_SETGID
617 - * over the user namespace in order to set the id mapping.
615 + /*
616 + * Adjusting namespace settings requires capabilities on the target.
618 617 */
619 - if (cap_valid(cap_setid) && !ns_capable(ns, cap_setid))
618 + if (cap_valid(cap_setid) && !file_ns_capable(file, ns, CAP_SYS_ADMIN))
620 619 goto out;
621 620 
622 621 /* Get a buffer */
···
701 700 
702 701 ret = -EPERM;
703 702 /* Validate the user is allowed to use user id's mapped to. */
704 - if (!new_idmap_permitted(ns, cap_setid, &new_map))
703 + if (!new_idmap_permitted(file, ns, cap_setid, &new_map))
705 704 goto out;
706 705 
707 706 /* Map the lower ids from the parent user namespace to the
···
788 787 &ns->projid_map, &ns->parent->projid_map);
789 788 }
790 789 
791 - static bool new_idmap_permitted(struct user_namespace *ns, int cap_setid,
790 + static bool new_idmap_permitted(const struct file *file,
791 + struct user_namespace *ns, int cap_setid,
792 792 struct uid_gid_map *new_map)
793 793 {
794 794 /* Allow mapping to your own filesystem ids */
···
797 795 u32 id = new_map->extent[0].lower_first;
798 796 if (cap_setid == CAP_SETUID) {
799 797 kuid_t uid = make_kuid(ns->parent, id);
800 - if (uid_eq(uid, current_fsuid()))
798 + if (uid_eq(uid, file->f_cred->fsuid))
801 799 return true;
802 800 }
803 801 else if (cap_setid == CAP_SETGID) {
804 802 kgid_t gid = make_kgid(ns->parent, id);
805 - if (gid_eq(gid, current_fsgid()))
803 + if (gid_eq(gid, file->f_cred->fsgid))
806 804 return true;
807 805 }
808 806 }
···
813 811 
814 812 /* Allow the specified ids if we have the appropriate capability
815 813 * (CAP_SETUID or CAP_SETGID) over the parent user namespace.
814 + * And the opener of the id file also had the approprpiate capability.
816 815 */
817 - if (ns_capable(ns->parent, cap_setid))
816 + if (ns_capable(ns->parent, cap_setid) &&
817 + file_ns_capable(file, ns->parent, cap_setid))
818 818 return true;
819 819 
820 820 return false;
+3
lib/Kconfig
···
404 404 help
405 405 Enable fast lookup object identifier registry.
406 406 
407 + config UCS2_STRING
408 + tristate
409 + 
407 410 endmenu
+2
lib/Makefile
···
174 174 cmd_build_OID_registry = perl $(srctree)/$(src)/build_OID_registry $< $@
175 175 
176 176 clean-files += oid_registry_data.c
177 + 
178 + obj-$(CONFIG_UCS2_STRING) += ucs2_string.o
+8 -1
lib/kobject.c
···
529 529 return kobj;
530 530 }
531 531 
532 + static struct kobject *kobject_get_unless_zero(struct kobject *kobj)
533 + {
534 + if (!kref_get_unless_zero(&kobj->kref))
535 + kobj = NULL;
536 + return kobj;
537 + }
538 + 
532 539 /*
533 540 * kobject_cleanup - free kobject resources.
534 541 * @kobj: object to cleanup
···
758 751 
759 752 list_for_each_entry(k, &kset->list, entry) {
760 753 if (kobject_name(k) && !strcmp(kobject_name(k), name)) {
761 - ret = kobject_get(k);
754 + ret = kobject_get_unless_zero(k);
762 755 break;
763 756 }
764 757 }
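The new `kobject_get_unless_zero()` wraps the kernel's `kref_get_unless_zero()`: when searching a kset list, an object whose refcount has already dropped to zero may be mid-teardown, so taking a plain reference would resurrect it. A userspace model of the get-unless-zero pattern using C11 atomics (`ref_get_unless_zero` and the demo helpers are illustrative names):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Take a reference only while the count is still nonzero, using a
 * compare-and-swap loop so a dying object is never resurrected. */
static bool ref_get_unless_zero(atomic_int *refcount)
{
	int old = atomic_load(refcount);

	while (old != 0) {
		/* on failure, old is reloaded with the current value */
		if (atomic_compare_exchange_weak(refcount, &old, old + 1))
			return true;
	}
	return false;
}

/* A live object (count 2) gains a reference... */
static bool demo_live(void)
{
	atomic_int r = 2;

	return ref_get_unless_zero(&r) && atomic_load(&r) == 3;
}

/* ...while a dead one (count 0) is left untouched. */
static bool demo_dead(void)
{
	atomic_int r = 0;

	return !ref_get_unless_zero(&r) && atomic_load(&r) == 0;
}
```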
+15 -4
lib/swiotlb.c
···
105 105 if (!strcmp(str, "force"))
106 106 swiotlb_force = 1;
107 107 
108 - return 1;
108 + return 0;
109 109 }
110 - __setup("swiotlb=", setup_io_tlb_npages);
110 + early_param("swiotlb", setup_io_tlb_npages);
111 111 /* make io_tlb_overflow tunable too? */
112 112 
113 113 unsigned long swiotlb_nr_tbl(void)
···
115 115 return io_tlb_nslabs;
116 116 }
117 117 EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
118 + 
119 + /* default to 64MB */
120 + #define IO_TLB_DEFAULT_SIZE (64UL<<20)
121 + unsigned long swiotlb_size_or_default(void)
122 + {
123 + unsigned long size;
124 + 
125 + size = io_tlb_nslabs << IO_TLB_SHIFT;
126 + 
127 + return size ? size : (IO_TLB_DEFAULT_SIZE);
128 + }
129 + 
118 130 /* Note that this doesn't work with highmem page */
119 131 static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
120 132 volatile void *address)
···
200 188 void __init
201 189 swiotlb_init(int verbose)
202 190 {
203 - /* default to 64MB */
204 - size_t default_size = 64UL<<20;
191 + size_t default_size = IO_TLB_DEFAULT_SIZE;
205 192 unsigned char *vstart;
206 193 unsigned long bytes;
207 194 
+51
lib/ucs2_string.c
··· 1 + #include <linux/ucs2_string.h> 2 + #include <linux/module.h> 3 + 4 + /* Return the number of unicode characters in data */ 5 + unsigned long 6 + ucs2_strnlen(const ucs2_char_t *s, size_t maxlength) 7 + { 8 + unsigned long length = 0; 9 + 10 + while (*s++ != 0 && length < maxlength) 11 + length++; 12 + return length; 13 + } 14 + EXPORT_SYMBOL(ucs2_strnlen); 15 + 16 + unsigned long 17 + ucs2_strlen(const ucs2_char_t *s) 18 + { 19 + return ucs2_strnlen(s, ~0UL); 20 + } 21 + EXPORT_SYMBOL(ucs2_strlen); 22 + 23 + /* 24 + * Return the number of bytes is the length of this string 25 + * Note: this is NOT the same as the number of unicode characters 26 + */ 27 + unsigned long 28 + ucs2_strsize(const ucs2_char_t *data, unsigned long maxlength) 29 + { 30 + return ucs2_strnlen(data, maxlength/sizeof(ucs2_char_t)) * sizeof(ucs2_char_t); 31 + } 32 + EXPORT_SYMBOL(ucs2_strsize); 33 + 34 + int 35 + ucs2_strncmp(const ucs2_char_t *a, const ucs2_char_t *b, size_t len) 36 + { 37 + while (1) { 38 + if (len == 0) 39 + return 0; 40 + if (*a < *b) 41 + return -1; 42 + if (*a > *b) 43 + return 1; 44 + if (*a == 0) /* implies *b == 0 */ 45 + return 0; 46 + a++; 47 + b++; 48 + len--; 49 + } 50 + } 51 + EXPORT_SYMBOL(ucs2_strncmp);
+11 -1
mm/hugetlb.c
··· 2961 2961 break; 2962 2962 } 2963 2963 2964 - if (absent || 2964 + /* 2965 + * We need call hugetlb_fault for both hugepages under migration 2966 + * (in which case hugetlb_fault waits for the migration,) and 2967 + * hwpoisoned hugepages (in which case we need to prevent the 2968 + * caller from accessing to them.) In order to do this, we use 2969 + * here is_swap_pte instead of is_hugetlb_entry_migration and 2970 + * is_hugetlb_entry_hwpoisoned. This is because it simply covers 2971 + * both cases, and because we can't follow correct pages 2972 + * directly from any kind of swap entries. 2973 + */ 2974 + if (absent || is_swap_pte(huge_ptep_get(pte)) || 2965 2975 ((flags & FOLL_WRITE) && !pte_write(huge_ptep_get(pte)))) { 2966 2976 int ret; 2967 2977
+48
mm/memory.c
··· 216 216 tlb->mm = mm; 217 217 218 218 tlb->fullmm = fullmm; 219 + tlb->need_flush_all = 0; 219 220 tlb->start = -1UL; 220 221 tlb->end = 0; 221 222 tlb->need_flush = 0; ··· 2392 2391 return err; 2393 2392 } 2394 2393 EXPORT_SYMBOL(remap_pfn_range); 2394 + 2395 + /** 2396 + * vm_iomap_memory - remap memory to userspace 2397 + * @vma: user vma to map to 2398 + * @start: start of area 2399 + * @len: size of area 2400 + * 2401 + * This is a simplified io_remap_pfn_range() for common driver use. The 2402 + * driver just needs to give us the physical memory range to be mapped, 2403 + * we'll figure out the rest from the vma information. 2404 + * 2405 + * NOTE! Some drivers might want to tweak vma->vm_page_prot first to get 2406 + * whatever write-combining details or similar. 2407 + */ 2408 + int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len) 2409 + { 2410 + unsigned long vm_len, pfn, pages; 2411 + 2412 + /* Check that the physical memory area passed in looks valid */ 2413 + if (start + len < start) 2414 + return -EINVAL; 2415 + /* 2416 + * You *really* shouldn't map things that aren't page-aligned, 2417 + * but we've historically allowed it because IO memory might 2418 + * just have smaller alignment. 2419 + */ 2420 + len += start & ~PAGE_MASK; 2421 + pfn = start >> PAGE_SHIFT; 2422 + pages = (len + ~PAGE_MASK) >> PAGE_SHIFT; 2423 + if (pfn + pages < pfn) 2424 + return -EINVAL; 2425 + 2426 + /* We start the mapping 'vm_pgoff' pages into the area */ 2427 + if (vma->vm_pgoff > pages) 2428 + return -EINVAL; 2429 + pfn += vma->vm_pgoff; 2430 + pages -= vma->vm_pgoff; 2431 + 2432 + /* Can we fit all of the mapping? */ 2433 + vm_len = vma->vm_end - vma->vm_start; 2434 + if (vm_len >> PAGE_SHIFT > pages) 2435 + return -EINVAL; 2436 + 2437 + /* Ok, let it rip */ 2438 + return io_remap_pfn_range(vma, vma->vm_start, pfn, vm_len, vma->vm_page_prot); 2439 + } 2440 + EXPORT_SYMBOL(vm_iomap_memory); 2395 2441 2396 2442 static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd, 2397 2443 unsigned long addr, unsigned long end,
+1 -1
mm/vmscan.c
··· 3188 3188 if (IS_ERR(pgdat->kswapd)) { 3189 3189 /* failure at boot is fatal */ 3190 3190 BUG_ON(system_state == SYSTEM_BOOTING); 3191 - pgdat->kswapd = NULL; 3192 3191 pr_err("Failed to start kswapd on node %d\n", nid); 3193 3192 ret = PTR_ERR(pgdat->kswapd); 3193 + pgdat->kswapd = NULL; 3194 3194 } 3195 3195 return ret; 3196 3196 }
+4
net/802/mrp.c
··· 870 870 * all pending messages before the applicant is gone. 871 871 */ 872 872 del_timer_sync(&app->join_timer); 873 + 874 + spin_lock(&app->lock); 873 875 mrp_mad_event(app, MRP_EVENT_TX); 874 876 mrp_pdu_queue(app); 877 + spin_unlock(&app->lock); 878 + 875 879 mrp_queue_xmit(app); 876 880 877 881 dev_mc_del(dev, appl->group_address);
+10 -1
net/batman-adv/main.c
··· 177 177 atomic_set(&bat_priv->mesh_state, BATADV_MESH_INACTIVE); 178 178 } 179 179 180 - int batadv_is_my_mac(const uint8_t *addr) 180 + /** 181 + * batadv_is_my_mac - check if the given mac address belongs to any of the real 182 + * interfaces in the current mesh 183 + * @bat_priv: the bat priv with all the soft interface information 184 + * @addr: the address to check 185 + */ 186 + int batadv_is_my_mac(struct batadv_priv *bat_priv, const uint8_t *addr) 181 187 { 182 188 const struct batadv_hard_iface *hard_iface; 183 189 184 190 rcu_read_lock(); 185 191 list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) { 186 192 if (hard_iface->if_status != BATADV_IF_ACTIVE) 193 + continue; 194 + 195 + if (hard_iface->soft_iface != bat_priv->soft_iface) 187 196 continue; 188 197 189 198 if (batadv_compare_eth(hard_iface->net_dev->dev_addr, addr)) {
+1 -1
net/batman-adv/main.h
··· 165 165 166 166 int batadv_mesh_init(struct net_device *soft_iface); 167 167 void batadv_mesh_free(struct net_device *soft_iface); 168 - int batadv_is_my_mac(const uint8_t *addr); 168 + int batadv_is_my_mac(struct batadv_priv *bat_priv, const uint8_t *addr); 169 169 struct batadv_hard_iface * 170 170 batadv_seq_print_text_primary_if_get(struct seq_file *seq); 171 171 int batadv_batman_skb_recv(struct sk_buff *skb, struct net_device *dev,
+9 -8
net/batman-adv/network-coding.c
··· 1484 1484 { 1485 1485 struct ethhdr *ethhdr = (struct ethhdr *)skb_mac_header(skb); 1486 1486 1487 - if (batadv_is_my_mac(ethhdr->h_dest)) 1487 + if (batadv_is_my_mac(bat_priv, ethhdr->h_dest)) 1488 1488 return; 1489 1489 1490 1490 /* Set data pointer to MAC header to mimic packets from our tx path */ ··· 1496 1496 /** 1497 1497 * batadv_nc_skb_decode_packet - decode given skb using the decode data stored 1498 1498 * in nc_packet 1499 + * @bat_priv: the bat priv with all the soft interface information 1499 1500 * @skb: unicast skb to decode 1500 1501 * @nc_packet: decode data needed to decode the skb 1501 1502 * ··· 1504 1503 * in case of an error. 1505 1504 */ 1506 1505 static struct batadv_unicast_packet * 1507 - batadv_nc_skb_decode_packet(struct sk_buff *skb, 1506 + batadv_nc_skb_decode_packet(struct batadv_priv *bat_priv, struct sk_buff *skb, 1508 1507 struct batadv_nc_packet *nc_packet) 1509 1508 { 1510 1509 const int h_size = sizeof(struct batadv_unicast_packet); ··· 1538 1537 /* Select the correct unicast header information based on the location 1539 1538 * of our mac address in the coded_packet header 1540 1539 */ 1541 - if (batadv_is_my_mac(coded_packet_tmp.second_dest)) { 1540 + if (batadv_is_my_mac(bat_priv, coded_packet_tmp.second_dest)) { 1542 1541 /* If we are the second destination the packet was overheard, 1543 1542 * so the Ethernet address must be copied to h_dest and 1544 1543 * pkt_type changed from PACKET_OTHERHOST to PACKET_HOST ··· 1609 1608 1610 1609 /* Select the correct packet id based on the location of our mac-addr */ 1611 1610 dest = ethhdr->h_source; 1612 - if (!batadv_is_my_mac(coded->second_dest)) { 1611 + if (!batadv_is_my_mac(bat_priv, coded->second_dest)) { 1613 1612 source = coded->second_source; 1614 1613 packet_id = coded->second_crc; 1615 1614 } else { ··· 1676 1675 ethhdr = (struct ethhdr *)skb_mac_header(skb); 1677 1676 1678 1677 /* Verify frame is destined for us */ 1679 - if (!batadv_is_my_mac(ethhdr->h_dest) && 1680 - !batadv_is_my_mac(coded_packet->second_dest)) 1678 + if (!batadv_is_my_mac(bat_priv, ethhdr->h_dest) && 1679 + !batadv_is_my_mac(bat_priv, coded_packet->second_dest)) 1681 1680 return NET_RX_DROP; 1682 1681 1683 1682 /* Update stat counter */ 1684 - if (batadv_is_my_mac(coded_packet->second_dest)) 1683 + if (batadv_is_my_mac(bat_priv, coded_packet->second_dest)) 1685 1684 batadv_inc_counter(bat_priv, BATADV_CNT_NC_SNIFFED); 1686 1685 1687 1686 nc_packet = batadv_nc_find_decoding_packet(bat_priv, ethhdr, ··· 1699 1698 goto free_nc_packet; 1700 1699 1701 1700 /* Decode the packet */ 1702 - unicast_packet = batadv_nc_skb_decode_packet(skb, nc_packet); 1701 + unicast_packet = batadv_nc_skb_decode_packet(bat_priv, skb, nc_packet); 1703 1702 if (!unicast_packet) { 1704 1703 batadv_inc_counter(bat_priv, BATADV_CNT_NC_DECODE_FAILED); 1705 1704 goto free_nc_packet;
+21 -18
net/batman-adv/routing.c
··· 403 403 goto out; 404 404 405 405 /* not for me */ 406 - if (!batadv_is_my_mac(ethhdr->h_dest)) 406 + if (!batadv_is_my_mac(bat_priv, ethhdr->h_dest)) 407 407 goto out; 408 408 409 409 icmp_packet = (struct batadv_icmp_packet_rr *)skb->data; ··· 417 417 } 418 418 419 419 /* packet for me */ 420 - if (batadv_is_my_mac(icmp_packet->dst)) 420 + if (batadv_is_my_mac(bat_priv, icmp_packet->dst)) 421 421 return batadv_recv_my_icmp_packet(bat_priv, skb, hdr_size); 422 422 423 423 /* TTL exceeded */ ··· 551 551 552 552 /** 553 553 * batadv_check_unicast_packet - Check for malformed unicast packets 554 + * @bat_priv: the bat priv with all the soft interface information 554 555 * @skb: packet to check 555 556 * @hdr_size: size of header to pull 556 557 * ··· 560 559 * reason: -ENODATA for bad header, -EBADR for broadcast destination or source, 561 560 * and -EREMOTE for non-local (other host) destination. 562 561 */ 563 - static int batadv_check_unicast_packet(struct sk_buff *skb, int hdr_size) 562 + static int batadv_check_unicast_packet(struct batadv_priv *bat_priv, 563 + struct sk_buff *skb, int hdr_size) 564 564 { 565 565 struct ethhdr *ethhdr; 566 566 ··· 580 578 return -EBADR; 581 579 582 580 /* not for me */ 583 - if (!batadv_is_my_mac(ethhdr->h_dest)) 581 + if (!batadv_is_my_mac(bat_priv, ethhdr->h_dest)) 584 582 return -EREMOTE; 585 583 586 584 return 0; ··· 595 593 char tt_flag; 596 594 size_t packet_size; 597 595 598 - if (batadv_check_unicast_packet(skb, hdr_size) < 0) 596 + if (batadv_check_unicast_packet(bat_priv, skb, hdr_size) < 0) 599 597 return NET_RX_DROP; 600 598 601 599 /* I could need to modify it */ ··· 627 625 case BATADV_TT_RESPONSE: 628 626 batadv_inc_counter(bat_priv, BATADV_CNT_TT_RESPONSE_RX); 629 627 630 - if (batadv_is_my_mac(tt_query->dst)) { 628 + if (batadv_is_my_mac(bat_priv, tt_query->dst)) { 631 629 /* packet needs to be linearized to access the TT 632 630 * changes 633 631 */ ··· 670 668 struct batadv_roam_adv_packet *roam_adv_packet; 671 669 struct batadv_orig_node *orig_node; 672 670 673 671 - if (batadv_check_unicast_packet(skb, sizeof(*roam_adv_packet)) < 0) 671 + if (batadv_check_unicast_packet(bat_priv, skb, 672 + sizeof(*roam_adv_packet)) < 0) 674 673 goto out; 675 674 676 675 batadv_inc_counter(bat_priv, BATADV_CNT_TT_ROAM_ADV_RX); 677 676 678 677 roam_adv_packet = (struct batadv_roam_adv_packet *)skb->data; 679 678 680 679 - if (!batadv_is_my_mac(roam_adv_packet->dst)) 679 + if (!batadv_is_my_mac(bat_priv, roam_adv_packet->dst)) 681 680 return batadv_route_unicast_packet(skb, recv_if); 682 681 683 682 /* check if it is a backbone gateway. we don't accept ··· 984 981 * last time) the packet had an updated information or not 985 982 */ 986 983 curr_ttvn = (uint8_t)atomic_read(&bat_priv->tt.vn); 987 - if (!batadv_is_my_mac(unicast_packet->dest)) { 984 + if (!batadv_is_my_mac(bat_priv, unicast_packet->dest)) { 988 985 orig_node = batadv_orig_hash_find(bat_priv, 989 986 unicast_packet->dest); 990 987 /* if it is not possible to find the orig_node representing the ··· 1062 1059 hdr_size = sizeof(*unicast_4addr_packet); 1063 1060 1064 1061 /* function returns -EREMOTE for promiscuous packets */ 1065 - check = batadv_check_unicast_packet(skb, hdr_size); 1062 + check = batadv_check_unicast_packet(bat_priv, skb, hdr_size); 1066 1063 1067 1064 /* Even though the packet is not for us, we might save it to use for 1068 1065 * decoding a later received coded packet ··· 1077 1074 return NET_RX_DROP; 1078 1075 1079 1076 /* packet for me */ 1080 - if (batadv_is_my_mac(unicast_packet->dest)) { 1077 + if (batadv_is_my_mac(bat_priv, unicast_packet->dest)) { 1081 1078 if (is4addr) { 1082 1079 batadv_dat_inc_counter(bat_priv, 1083 1080 unicast_4addr_packet->subtype); ··· 1114 1111 struct sk_buff *new_skb = NULL; 1115 1112 int ret; 1116 1113 1117 - if (batadv_check_unicast_packet(skb, hdr_size) < 0) 1114 + if (batadv_check_unicast_packet(bat_priv, skb, hdr_size) < 0) 1118 1115 return NET_RX_DROP; 1119 1116 1120 1117 if (!batadv_check_unicast_ttvn(bat_priv, skb)) ··· 1123 1120 unicast_packet = (struct batadv_unicast_frag_packet *)skb->data; 1124 1121 1125 1122 /* packet for me */ 1126 - if (batadv_is_my_mac(unicast_packet->dest)) { 1123 + if (batadv_is_my_mac(bat_priv, unicast_packet->dest)) { 1127 1124 ret = batadv_frag_reassemble_skb(skb, bat_priv, &new_skb); 1128 1125 1129 1126 if (ret == NET_RX_DROP) ··· 1177 1174 goto out; 1178 1175 1179 1176 /* ignore broadcasts sent by myself */ 1180 - if (batadv_is_my_mac(ethhdr->h_source)) 1177 + if (batadv_is_my_mac(bat_priv, ethhdr->h_source)) 1181 1178 goto out; 1182 1179 1183 1180 bcast_packet = (struct batadv_bcast_packet *)skb->data; 1184 1181 1185 1182 /* ignore broadcasts originated by myself */ 1186 - if (batadv_is_my_mac(bcast_packet->orig)) 1183 + if (batadv_is_my_mac(bat_priv, bcast_packet->orig)) 1187 1184 goto out; 1188 1185 1189 1186 if (bcast_packet->header.ttl < 2) ··· 1269 1266 ethhdr = (struct ethhdr *)skb_mac_header(skb); 1270 1267 1271 1268 /* not for me */ 1272 - if (!batadv_is_my_mac(ethhdr->h_dest)) 1269 + if (!batadv_is_my_mac(bat_priv, ethhdr->h_dest)) 1273 1270 return NET_RX_DROP; 1274 1271 1275 1272 /* ignore own packets */ 1276 - if (batadv_is_my_mac(vis_packet->vis_orig)) 1273 + if (batadv_is_my_mac(bat_priv, vis_packet->vis_orig)) 1277 1274 return NET_RX_DROP; 1278 1275 1279 - if (batadv_is_my_mac(vis_packet->sender_orig)) 1276 + if (batadv_is_my_mac(bat_priv, vis_packet->sender_orig)) 1280 1277 return NET_RX_DROP; 1281 1278 1282 1279 switch (vis_packet->vis_type) {
+1 -1
net/batman-adv/translation-table.c
··· 1940 1940 bool batadv_send_tt_response(struct batadv_priv *bat_priv, 1941 1941 struct batadv_tt_query_packet *tt_request) 1942 1942 { 1943 - if (batadv_is_my_mac(tt_request->dst)) { 1943 + if (batadv_is_my_mac(bat_priv, tt_request->dst)) { 1944 1944 /* don't answer backbone gws! */ 1945 1945 if (batadv_bla_is_backbone_gw_orig(bat_priv, tt_request->src)) 1946 1946 return true;
+2 -2
net/batman-adv/vis.c
··· 477 477 478 478 /* Are we the target for this VIS packet? */ 479 479 if (vis_server == BATADV_VIS_TYPE_SERVER_SYNC && 480 - batadv_is_my_mac(vis_packet->target_orig)) 480 + batadv_is_my_mac(bat_priv, vis_packet->target_orig)) 481 481 are_target = 1; 482 482 483 483 spin_lock_bh(&bat_priv->vis.hash_lock); ··· 496 496 batadv_send_list_add(bat_priv, info); 497 497 498 498 /* ... we're not the recipient (and thus need to forward). */ 499 - } else if (!batadv_is_my_mac(packet->target_orig)) { 499 + } else if (!batadv_is_my_mac(bat_priv, packet->target_orig)) { 500 500 batadv_send_list_add(bat_priv, info); 501 501 } 502 502
+2 -1
net/bridge/br_if.c
··· 67 67 struct net_device *dev = p->dev; 68 68 struct net_bridge *br = p->br; 69 69 70 - if (netif_running(dev) && netif_oper_up(dev)) 70 + if (!(p->flags & BR_ADMIN_COST) && 71 + netif_running(dev) && netif_oper_up(dev)) 71 72 p->path_cost = port_cost(dev); 72 73 73 74 if (!netif_running(br->dev))
+1
net/bridge/br_private.h
··· 156 156 #define BR_BPDU_GUARD 0x00000002 157 157 #define BR_ROOT_BLOCK 0x00000004 158 158 #define BR_MULTICAST_FAST_LEAVE 0x00000008 159 + #define BR_ADMIN_COST 0x00000010 159 160 160 161 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING 161 162 u32 multicast_startup_queries_sent;
+1
net/bridge/br_stp_if.c
··· 288 288 path_cost > BR_MAX_PATH_COST) 289 289 return -ERANGE; 290 290 291 + p->flags |= BR_ADMIN_COST; 291 292 p->path_cost = path_cost; 292 293 br_configuration_update(p->br); 293 294 br_port_state_selection(p->br);
+3 -3
net/can/gw.c
··· 466 466 if (gwj->src.dev == dev || gwj->dst.dev == dev) { 467 467 hlist_del(&gwj->list); 468 468 cgw_unregister_filter(gwj); 469 - kfree(gwj); 469 + kmem_cache_free(cgw_cache, gwj); 470 470 } 471 471 } 472 472 } ··· 863 863 hlist_for_each_entry_safe(gwj, nx, &cgw_list, list) { 864 864 hlist_del(&gwj->list); 865 865 cgw_unregister_filter(gwj); 866 - kfree(gwj); 866 + kmem_cache_free(cgw_cache, gwj); 867 867 } 868 868 } 869 869 ··· 919 919 920 920 hlist_del(&gwj->list); 921 921 cgw_unregister_filter(gwj); 922 - kfree(gwj); 922 + kmem_cache_free(cgw_cache, gwj); 923 923 err = 0; 924 924 break; 925 925 }
+3
net/core/dev.c
··· 2146 2146 struct net_device *dev = skb->dev; 2147 2147 const char *driver = ""; 2148 2148 2149 + if (!net_ratelimit()) 2150 + return; 2151 + 2149 2152 if (dev && dev->dev.parent) 2150 2153 driver = dev_driver_string(dev->dev.parent); 2151 2154
+2 -2
net/core/rtnetlink.c
··· 1046 1046 rcu_read_lock(); 1047 1047 cb->seq = net->dev_base_seq; 1048 1048 1049 - if (nlmsg_parse(cb->nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX, 1049 + if (nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb, IFLA_MAX, 1050 1050 ifla_policy) >= 0) { 1051 1051 1052 1052 if (tb[IFLA_EXT_MASK]) ··· 1896 1896 u32 ext_filter_mask = 0; 1897 1897 u16 min_ifinfo_dump_size = 0; 1898 1898 1899 - if (nlmsg_parse(nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX, 1899 + if (nlmsg_parse(nlh, sizeof(struct ifinfomsg), tb, IFLA_MAX, 1900 1900 ifla_policy) >= 0) { 1901 1901 if (tb[IFLA_EXT_MASK]) 1902 1902 ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]);
+44 -16
net/ipv4/devinet.c
··· 587 587 { 588 588 unsigned long now, next, next_sec, next_sched; 589 589 struct in_ifaddr *ifa; 590 + struct hlist_node *n; 590 591 int i; 591 592 592 593 now = jiffies; 593 594 next = round_jiffies_up(now + ADDR_CHECK_FREQUENCY); 594 595 595 - rcu_read_lock(); 596 596 for (i = 0; i < IN4_ADDR_HSIZE; i++) { 597 + bool change_needed = false; 598 + 599 + rcu_read_lock(); 597 600 hlist_for_each_entry_rcu(ifa, &inet_addr_lst[i], hash) { 598 601 unsigned long age; 599 602 ··· 609 606 610 607 if (ifa->ifa_valid_lft != INFINITY_LIFE_TIME && 611 608 age >= ifa->ifa_valid_lft) { 612 - struct in_ifaddr **ifap ; 613 - 614 - rtnl_lock(); 615 - for (ifap = &ifa->ifa_dev->ifa_list; 616 - *ifap != NULL; ifap = &ifa->ifa_next) { 617 - if (*ifap == ifa) 618 - inet_del_ifa(ifa->ifa_dev, 619 - ifap, 1); 620 - } 621 - rtnl_unlock(); 609 + change_needed = true; 622 610 } else if (ifa->ifa_preferred_lft == 623 611 INFINITY_LIFE_TIME) { 624 612 continue; ··· 619 625 next = ifa->ifa_tstamp + 620 626 ifa->ifa_valid_lft * HZ; 621 627 622 - if (!(ifa->ifa_flags & IFA_F_DEPRECATED)) { 623 - ifa->ifa_flags |= IFA_F_DEPRECATED; 624 - rtmsg_ifa(RTM_NEWADDR, ifa, NULL, 0); 625 - } 628 + if (!(ifa->ifa_flags & IFA_F_DEPRECATED)) 629 + change_needed = true; 626 630 } else if (time_before(ifa->ifa_tstamp + 627 631 ifa->ifa_preferred_lft * HZ, 628 632 next)) { ··· 628 636 ifa->ifa_preferred_lft * HZ; 629 637 } 630 638 } 639 + rcu_read_unlock(); 640 + if (!change_needed) 641 + continue; 642 + rtnl_lock(); 643 + hlist_for_each_entry_safe(ifa, n, &inet_addr_lst[i], hash) { 644 + unsigned long age; 645 + 646 + if (ifa->ifa_flags & IFA_F_PERMANENT) 647 + continue; 648 + 649 + /* We try to batch several events at once. */ 650 + age = (now - ifa->ifa_tstamp + 651 + ADDRCONF_TIMER_FUZZ_MINUS) / HZ; 652 + 653 + if (ifa->ifa_valid_lft != INFINITY_LIFE_TIME && 654 + age >= ifa->ifa_valid_lft) { 655 + struct in_ifaddr **ifap; 656 + 657 + for (ifap = &ifa->ifa_dev->ifa_list; 658 + *ifap != NULL; ifap = &(*ifap)->ifa_next) { 659 + if (*ifap == ifa) { 660 + inet_del_ifa(ifa->ifa_dev, 661 + ifap, 1); 662 + break; 663 + } 664 + } 665 + } else if (ifa->ifa_preferred_lft != 666 + INFINITY_LIFE_TIME && 667 + age >= ifa->ifa_preferred_lft && 668 + !(ifa->ifa_flags & IFA_F_DEPRECATED)) { 669 + ifa->ifa_flags |= IFA_F_DEPRECATED; 670 + rtmsg_ifa(RTM_NEWADDR, ifa, NULL, 0); 671 + } 672 + } 673 + rtnl_unlock(); 631 674 } 632 - rcu_read_unlock(); 633 675 634 676 next_sec = round_jiffies_up(next); 635 677 next_sched = next; ··· 830 804 return -EEXIST; 831 805 ifa = ifa_existing; 832 806 set_ifa_lifetime(ifa, valid_lft, prefered_lft); 807 + cancel_delayed_work(&check_lifetime_work); 808 + schedule_delayed_work(&check_lifetime_work, 0); 833 809 rtmsg_ifa(RTM_NEWADDR, ifa, nlh, NETLINK_CB(skb).portid); 834 810 blocking_notifier_call_chain(&inetaddr_chain, NETDEV_UP, ifa); 835 811 }
+3 -3
net/ipv4/esp4.c
··· 139 139 140 140 /* skb is pure payload to encrypt */ 141 141 142 - err = -ENOMEM; 143 - 144 142 esp = x->data; 145 143 aead = esp->aead; 146 144 alen = crypto_aead_authsize(aead); ··· 174 176 } 175 177 176 178 tmp = esp_alloc_tmp(aead, nfrags + sglists, seqhilen); 177 - if (!tmp) 179 + if (!tmp) { 180 + err = -ENOMEM; 178 181 goto error; 182 + } 179 183 180 184 seqhi = esp_tmp_seqhi(tmp); 181 185 iv = esp_tmp_iv(aead, tmp, seqhilen);
+10 -4
net/ipv4/ip_fragment.c
··· 219 219 if (!head->dev) 220 220 goto out_rcu_unlock; 221 221 222 - /* skb dst is stale, drop it, and perform route lookup again */ 223 - skb_dst_drop(head); 222 + /* skb has no dst, perform route lookup again */ 224 223 iph = ip_hdr(head); 225 224 err = ip_route_input_noref(head, iph->daddr, iph->saddr, 226 225 iph->tos, head->dev); ··· 493 494 qp->q.max_size = skb->len + ihl; 494 495 495 496 if (qp->q.last_in == (INET_FRAG_FIRST_IN | INET_FRAG_LAST_IN) && 496 - qp->q.meat == qp->q.len) 497 - return ip_frag_reasm(qp, prev, dev); 497 + qp->q.meat == qp->q.len) { 498 + unsigned long orefdst = skb->_skb_refdst; 498 499 500 + skb->_skb_refdst = 0UL; 501 + err = ip_frag_reasm(qp, prev, dev); 502 + skb->_skb_refdst = orefdst; 503 + return err; 504 + } 505 + 506 + skb_dst_drop(skb); 499 507 inet_frag_lru_move(&qp->q); 500 508 return -EINPROGRESS; 501 509
+7 -1
net/ipv4/netfilter/ipt_rpfilter.c
··· 66 66 return dev_match; 67 67 } 68 68 69 + static bool rpfilter_is_local(const struct sk_buff *skb) 70 + { 71 + const struct rtable *rt = skb_rtable(skb); 72 + return rt && (rt->rt_flags & RTCF_LOCAL); 73 + } 74 + 69 75 static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par) 70 76 { 71 77 const struct xt_rpfilter_info *info; ··· 82 76 info = par->matchinfo; 83 77 invert = info->flags & XT_RPFILTER_INVERT; 84 78 85 - if (par->in->flags & IFF_LOOPBACK) 79 + if (rpfilter_is_local(skb)) 86 80 return true ^ invert; 87 81 88 82 iph = ip_hdr(skb);
+2 -2
net/ipv4/syncookies.c
··· 348 348 * hasn't changed since we received the original syn, but I see 349 349 * no easy way to do this. 350 350 */ 351 - flowi4_init_output(&fl4, 0, sk->sk_mark, RT_CONN_FLAGS(sk), 352 - RT_SCOPE_UNIVERSE, IPPROTO_TCP, 351 + flowi4_init_output(&fl4, sk->sk_bound_dev_if, sk->sk_mark, 352 + RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE, IPPROTO_TCP, 353 353 inet_sk_flowi_flags(sk), 354 354 (opt && opt->srr) ? opt->faddr : ireq->rmt_addr, 355 355 ireq->loc_addr, th->source, th->dest);
+31 -33
net/ipv4/tcp_input.c
··· 111 111 #define FLAG_SND_UNA_ADVANCED 0x400 /* Snd_una was changed (!= FLAG_DATA_ACKED) */ 112 112 #define FLAG_DSACKING_ACK 0x800 /* SACK blocks contained D-SACK info */ 113 113 #define FLAG_SACK_RENEGING 0x2000 /* snd_una advanced to a sacked seq */ 114 + #define FLAG_UPDATE_TS_RECENT 0x4000 /* tcp_replace_ts_recent() */ 114 115 115 116 #define FLAG_ACKED (FLAG_DATA_ACKED|FLAG_SYN_ACKED) 116 117 #define FLAG_NOT_DUP (FLAG_DATA|FLAG_WIN_UPDATE|FLAG_ACKED) ··· 3266 3265 } 3267 3266 } 3268 3267 3268 + static void tcp_store_ts_recent(struct tcp_sock *tp) 3269 + { 3270 + tp->rx_opt.ts_recent = tp->rx_opt.rcv_tsval; 3271 + tp->rx_opt.ts_recent_stamp = get_seconds(); 3272 + } 3273 + 3274 + static void tcp_replace_ts_recent(struct tcp_sock *tp, u32 seq) 3275 + { 3276 + if (tp->rx_opt.saw_tstamp && !after(seq, tp->rcv_wup)) { 3277 + /* PAWS bug workaround wrt. ACK frames, the PAWS discard 3278 + * extra check below makes sure this can only happen 3279 + * for pure ACK frames. -DaveM 3280 + * 3281 + * Not only, also it occurs for expired timestamps. 3282 + */ 3283 + 3284 + if (tcp_paws_check(&tp->rx_opt, 0)) 3285 + tcp_store_ts_recent(tp); 3286 + } 3287 + } 3288 + 3269 3289 /* This routine deals with acks during a TLP episode. 3270 3290 * Ref: loss detection algorithm in draft-dukkipati-tcpm-tcp-loss-probe. 3271 3291 */ ··· 3361 3339 3362 3340 prior_fackets = tp->fackets_out; 3363 3341 prior_in_flight = tcp_packets_in_flight(tp); 3342 + 3343 + /* ts_recent update must be made after we are sure that the packet 3344 + * is in window. 3345 + */ 3346 + if (flag & FLAG_UPDATE_TS_RECENT) 3347 + tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq); 3364 3348 3365 3349 if (!(flag & FLAG_SLOWPATH) && after(ack, prior_snd_una)) { 3366 3350 /* Window is constant, pure forward advance. ··· 3663 3635 } 3664 3636 EXPORT_SYMBOL(tcp_parse_md5sig_option); 3665 3637 #endif 3666 - 3667 - static inline void tcp_store_ts_recent(struct tcp_sock *tp) 3668 - { 3669 - tp->rx_opt.ts_recent = tp->rx_opt.rcv_tsval; 3670 - tp->rx_opt.ts_recent_stamp = get_seconds(); 3671 - } 3672 - 3673 - static inline void tcp_replace_ts_recent(struct tcp_sock *tp, u32 seq) 3674 - { 3675 - if (tp->rx_opt.saw_tstamp && !after(seq, tp->rcv_wup)) { 3676 - /* PAWS bug workaround wrt. ACK frames, the PAWS discard 3677 - * extra check below makes sure this can only happen 3678 - * for pure ACK frames. -DaveM 3679 - * 3680 - * Not only, also it occurs for expired timestamps. 3681 - */ 3682 - 3683 - if (tcp_paws_check(&tp->rx_opt, 0)) 3684 - tcp_store_ts_recent(tp); 3685 - } 3686 - } 3687 3638 3688 3639 /* Sorry, PAWS as specified is broken wrt. pure-ACKs -DaveM 3689 3640 * ··· 5257 5250 return 0; 5258 5251 5259 5252 step5: 5260 - if (tcp_ack(sk, skb, FLAG_SLOWPATH) < 0) 5253 + if (tcp_ack(sk, skb, FLAG_SLOWPATH | FLAG_UPDATE_TS_RECENT) < 0) 5261 5254 goto discard; 5262 - 5263 - /* ts_recent update must be made after we are sure that the packet 5264 - * is in window. 5265 - */ 5266 - tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq); 5267 5255 5268 5256 tcp_rcv_rtt_measure_ts(sk, skb); 5269 5257 ··· 5668 5666 5669 5667 /* step 5: check the ACK field */ 5670 5668 if (true) { 5671 - int acceptable = tcp_ack(sk, skb, FLAG_SLOWPATH) > 0; 5669 + int acceptable = tcp_ack(sk, skb, FLAG_SLOWPATH | 5670 + FLAG_UPDATE_TS_RECENT) > 0; 5672 5671 5673 5672 switch (sk->sk_state) { 5674 5673 case TCP_SYN_RECV: ··· 5819 5816 break; 5820 5817 } 5821 5818 } 5822 - 5823 - /* ts_recent update must be made after we are sure that the packet 5824 - * is in window. 5825 - */ 5826 - tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq); 5827 5819 5828 5820 /* step 6: check the URG bit */ 5829 5821 tcp_urg(sk, skb, th);
+7 -2
net/ipv4/tcp_output.c
··· 2353 2353 */ 2354 2354 TCP_SKB_CB(skb)->when = tcp_time_stamp; 2355 2355 2356 - /* make sure skb->data is aligned on arches that require it */ 2357 - if (unlikely(NET_IP_ALIGN && ((unsigned long)skb->data & 3))) { 2356 + /* make sure skb->data is aligned on arches that require it 2357 + * and check if ack-trimming & collapsing extended the headroom 2358 + * beyond what csum_start can cover. 2359 + */ 2360 + if (unlikely((NET_IP_ALIGN && ((unsigned long)skb->data & 3)) || 2361 + skb_headroom(skb) >= 0xFFFF)) { 2358 2362 struct sk_buff *nskb = __pskb_copy(skb, MAX_TCP_HEADER, 2359 2363 GFP_ATOMIC); 2360 2364 return nskb ? tcp_transmit_skb(sk, nskb, 0, GFP_ATOMIC) : ··· 2670 2666 skb_reserve(skb, MAX_TCP_HEADER); 2671 2667 2672 2668 skb_dst_set(skb, dst); 2669 + security_skb_owned_by(skb, sk); 2673 2670 2674 2671 mss = dst_metric_advmss(dst); 2675 2672 if (tp->rx_opt.user_mss && tp->rx_opt.user_mss < mss)
+3 -21
net/ipv6/addrconf.c
··· 169 169 static bool ipv6_chk_same_addr(struct net *net, const struct in6_addr *addr, 170 170 struct net_device *dev); 171 171 172 - static ATOMIC_NOTIFIER_HEAD(inet6addr_chain); 173 - 174 172 static struct ipv6_devconf ipv6_devconf __read_mostly = { 175 173 .forwarding = 0, 176 174 .hop_limit = IPV6_DEFAULT_HOPLIMIT, ··· 908 910 rcu_read_unlock_bh(); 909 911 910 912 if (likely(err == 0)) 911 - atomic_notifier_call_chain(&inet6addr_chain, NETDEV_UP, ifa); 913 + inet6addr_notifier_call_chain(NETDEV_UP, ifa); 912 914 else { 913 915 kfree(ifa); 914 916 ifa = ERR_PTR(err); ··· 998 1000 999 1001 ipv6_ifa_notify(RTM_DELADDR, ifp); 1000 1002 1001 - atomic_notifier_call_chain(&inet6addr_chain, NETDEV_DOWN, ifp); 1003 + inet6addr_notifier_call_chain(NETDEV_DOWN, ifp); 1002 1004 1003 1005 /* 1004 1006 * Purge or update corresponding prefix ··· 3085 3087 3086 3088 if (state != INET6_IFADDR_STATE_DEAD) { 3087 3089 __ipv6_ifa_notify(RTM_DELADDR, ifa); 3088 - atomic_notifier_call_chain(&inet6addr_chain, NETDEV_DOWN, ifa); 3090 + inet6addr_notifier_call_chain(NETDEV_DOWN, ifa); 3089 3091 } 3090 3092 in6_ifa_put(ifa); 3091 3093 ··· 5051 5053 .init = addrconf_init_net, 5052 5054 .exit = addrconf_exit_net, 5053 5055 }; 5054 - 5055 - /* 5056 - * Device notifier 5057 - */ 5058 - 5059 - int register_inet6addr_notifier(struct notifier_block *nb) 5060 - { 5061 - return atomic_notifier_chain_register(&inet6addr_chain, nb); 5062 - } 5063 - EXPORT_SYMBOL(register_inet6addr_notifier); 5064 - 5065 - int unregister_inet6addr_notifier(struct notifier_block *nb) 5066 - { 5067 - return atomic_notifier_chain_unregister(&inet6addr_chain, nb); 5068 - } 5069 - EXPORT_SYMBOL(unregister_inet6addr_notifier); 5070 5056 5071 5057 static struct rtnl_af_ops inet6_ops = { 5072 5058 .family = AF_INET6,
+19
net/ipv6/addrconf_core.c
··· 78 78 } 79 79 EXPORT_SYMBOL(__ipv6_addr_type); 80 80 81 + static ATOMIC_NOTIFIER_HEAD(inet6addr_chain); 82 + 83 + int register_inet6addr_notifier(struct notifier_block *nb) 84 + { 85 + return atomic_notifier_chain_register(&inet6addr_chain, nb); 86 + } 87 + EXPORT_SYMBOL(register_inet6addr_notifier); 88 + 89 + int unregister_inet6addr_notifier(struct notifier_block *nb) 90 + { 91 + return atomic_notifier_chain_unregister(&inet6addr_chain, nb); 92 + } 93 + EXPORT_SYMBOL(unregister_inet6addr_notifier); 94 + 95 + int inet6addr_notifier_call_chain(unsigned long val, void *v) 96 + { 97 + return atomic_notifier_call_chain(&inet6addr_chain, val, v); 98 + } 99 + EXPORT_SYMBOL(inet6addr_notifier_call_chain);
+7 -1
net/ipv6/netfilter/ip6t_rpfilter.c
··· 71 71 return ret; 72 72 } 73 73 74 + static bool rpfilter_is_local(const struct sk_buff *skb) 75 + { 76 + const struct rt6_info *rt = (const void *) skb_dst(skb); 77 + return rt && (rt->rt6i_flags & RTF_LOCAL); 78 + } 79 + 74 80 static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par) 75 81 { 76 82 const struct xt_rpfilter_info *info = par->matchinfo; ··· 84 78 struct ipv6hdr *iph; 85 79 bool invert = info->flags & XT_RPFILTER_INVERT; 86 80 87 - if (par->in->flags & IFF_LOOPBACK) 81 + if (rpfilter_is_local(skb)) 88 82 return true ^ invert; 89 83 90 84 iph = ipv6_hdr(skb);
+10 -2
net/ipv6/reassembly.c
··· 342 342 } 343 343 344 344 if (fq->q.last_in == (INET_FRAG_FIRST_IN | INET_FRAG_LAST_IN) && 345 - fq->q.meat == fq->q.len) 346 - return ip6_frag_reasm(fq, prev, dev); 345 + fq->q.meat == fq->q.len) { 346 + int res; 347 + unsigned long orefdst = skb->_skb_refdst; 347 348 349 + skb->_skb_refdst = 0UL; 350 + res = ip6_frag_reasm(fq, prev, dev); 351 + skb->_skb_refdst = orefdst; 352 + return res; 353 + } 354 + 355 + skb_dst_drop(skb); 348 356 inet_frag_lru_move(&fq->q); 349 357 return -1; 350 358
+2 -1
net/irda/iriap.c
··· 303 303 { 304 304 struct iriap_cb *self; 305 305 306 - IRDA_DEBUG(4, "%s(), reason=%s\n", __func__, irlmp_reasons[reason]); 306 + IRDA_DEBUG(4, "%s(), reason=%s [%d]\n", __func__, 307 + irlmp_reason_str(reason), reason); 307 308 308 309 self = instance; 309 310
+9 -1
net/irda/irlmp.c
··· 66 66 "LM_LAP_RESET", 67 67 "LM_INIT_DISCONNECT", 68 68 "ERROR, NOT USED", 69 + "UNKNOWN", 69 70 }; 71 + 72 + const char *irlmp_reason_str(LM_REASON reason) 73 + { 74 + reason = min_t(size_t, reason, ARRAY_SIZE(irlmp_reasons) - 1); 75 + return irlmp_reasons[reason]; 76 + } 70 77 71 78 /* 72 79 * Function irlmp_init (void) ··· 754 747 { 755 748 struct lsap_cb *lsap; 756 749 757 - IRDA_DEBUG(1, "%s(), reason=%s\n", __func__, irlmp_reasons[reason]); 750 + IRDA_DEBUG(1, "%s(), reason=%s [%d]\n", __func__, 751 + irlmp_reason_str(reason), reason); 758 752 IRDA_ASSERT(self != NULL, return;); 759 753 IRDA_ASSERT(self->magic == LMP_LSAP_MAGIC, return;); 760 754
+16 -18
net/iucv/af_iucv.c
··· 49 49 50 50 #define TRGCLS_SIZE (sizeof(((struct iucv_message *)0)->class)) 51 51 52 - /* macros to set/get socket control buffer at correct offset */ 53 - #define CB_TAG(skb) ((skb)->cb) /* iucv message tag */ 54 - #define CB_TAG_LEN (sizeof(((struct iucv_message *) 0)->tag)) 55 - #define CB_TRGCLS(skb) ((skb)->cb + CB_TAG_LEN) /* iucv msg target class */ 56 - #define CB_TRGCLS_LEN (TRGCLS_SIZE) 57 - 58 52 #define __iucv_sock_wait(sk, condition, timeo, ret) \ 59 53 do { \ 60 54 DEFINE_WAIT(__wait); \ ··· 1135 1141 1136 1142 /* increment and save iucv message tag for msg_completion cbk */ 1137 1143 txmsg.tag = iucv->send_tag++; 1138 - memcpy(CB_TAG(skb), &txmsg.tag, CB_TAG_LEN); 1144 + IUCV_SKB_CB(skb)->tag = txmsg.tag; 1139 1145 1140 1146 if (iucv->transport == AF_IUCV_TRANS_HIPER) { 1141 1147 atomic_inc(&iucv->msg_sent); ··· 1218 1224 return -ENOMEM; 1219 1225 1220 1226 /* copy target class to control buffer of new skb */ 1221 - memcpy(CB_TRGCLS(nskb), CB_TRGCLS(skb), CB_TRGCLS_LEN); 1227 + IUCV_SKB_CB(nskb)->class = IUCV_SKB_CB(skb)->class; 1222 1228 1223 1229 /* copy data fragment */ 1224 1230 memcpy(nskb->data, skb->data + copied, size); ··· 1250 1256 1251 1257 /* store msg target class in the second 4 bytes of skb ctrl buffer */ 1252 1258 /* Note: the first 4 bytes are reserved for msg tag */ 1253 - memcpy(CB_TRGCLS(skb), &msg->class, CB_TRGCLS_LEN); 1259 + IUCV_SKB_CB(skb)->class = msg->class; 1254 1260 1255 1261 /* check for special IPRM messages (e.g. iucv_sock_shutdown) */
1256 1262 if ((msg->flags & IUCV_IPRMDATA) && len > 7) { ··· 1286 1292 } 1287 1293 } 1288 1294 1295 + IUCV_SKB_CB(skb)->offset = 0; 1289 1296 if (sock_queue_rcv_skb(sk, skb)) 1290 1297 skb_queue_head(&iucv_sk(sk)->backlog_skb_q, skb); 1291 1298 } ··· 1322 1327 unsigned int copied, rlen; 1323 1328 struct sk_buff *skb, *rskb, *cskb; 1324 1329 int err = 0; 1330 + u32 offset; 1325 1331 1326 1332 msg->msg_namelen = 0; 1327 1333 ··· 1344 1348 return err; 1345 1349 } 1346 1350 1347 - rlen = skb->len; /* real length of skb */ 1351 + offset = IUCV_SKB_CB(skb)->offset; 1352 + rlen = skb->len - offset; /* real length of skb */ 1348 1353 copied = min_t(unsigned int, rlen, len); 1349 1354 if (!rlen) 1350 1355 sk->sk_shutdown = sk->sk_shutdown | RCV_SHUTDOWN; 1351 1356 1352 1357 cskb = skb; 1353 - if (skb_copy_datagram_iovec(cskb, 0, msg->msg_iov, copied)) { 1358 + if (skb_copy_datagram_iovec(cskb, offset, msg->msg_iov, copied)) { 1354 1359 if (!(flags & MSG_PEEK)) 1355 1360 skb_queue_head(&sk->sk_receive_queue, skb); 1356 1361 return -EFAULT; ··· 1369 1372 * get the trgcls from the control buffer of the skb due to 1370 1373 * fragmentation of original iucv message. */
1371 1374 err = put_cmsg(msg, SOL_IUCV, SCM_IUCV_TRGCLS, 1372 - CB_TRGCLS_LEN, CB_TRGCLS(skb)); 1375 + sizeof(IUCV_SKB_CB(skb)->class), 1376 + (void *)&IUCV_SKB_CB(skb)->class); 1373 1377 if (err) { 1374 1378 if (!(flags & MSG_PEEK)) 1375 1379 skb_queue_head(&sk->sk_receive_queue, skb); ··· 1382 1384 1383 1385 /* SOCK_STREAM: re-queue skb if it contains unreceived data */ 1384 1386 if (sk->sk_type == SOCK_STREAM) { 1385 - skb_pull(skb, copied); 1386 - if (skb->len) { 1387 - skb_queue_head(&sk->sk_receive_queue, skb); 1387 + if (copied < rlen) { 1388 + IUCV_SKB_CB(skb)->offset = offset + copied; 1388 1389 goto done; 1389 1390 } 1390 1391 } ··· 1402 1405 spin_lock_bh(&iucv->message_q.lock); 1403 1406 rskb = skb_dequeue(&iucv->backlog_skb_q); 1404 1407 while (rskb) { 1408 + IUCV_SKB_CB(rskb)->offset = 0; 1405 1409 if (sock_queue_rcv_skb(sk, rskb)) { 1406 1410 skb_queue_head(&iucv->backlog_skb_q, 1407 1411 rskb); ··· 1831 1833 spin_lock_irqsave(&list->lock, flags); 1832 1834 1833 1835 while (list_skb != (struct sk_buff *)list) { 1834 - if (!memcmp(&msg->tag, CB_TAG(list_skb), CB_TAG_LEN)) { 1836 + if (msg->tag == IUCV_SKB_CB(list_skb)->tag) { 1835 1837 this = list_skb; 1836 1838 break; 1837 1839 } ··· 2092 2094 skb_pull(skb, sizeof(struct af_iucv_trans_hdr)); 2093 2095 skb_reset_transport_header(skb); 2094 2096 skb_reset_network_header(skb); 2097 + IUCV_SKB_CB(skb)->offset = 0; 2095 2098 spin_lock(&iucv->message_q.lock); 2096 2099 if (skb_queue_empty(&iucv->backlog_skb_q)) { 2097 2100 if (sock_queue_rcv_skb(sk, skb)) { ··· 2197 2198 /* fall through and receive zero length data */ 2198 2199 case 0: 2199 2200 /* plain data frame */ 2200 - memcpy(CB_TRGCLS(skb), &trans_hdr->iucv_hdr.class, 2201 - CB_TRGCLS_LEN); 2201 + IUCV_SKB_CB(skb)->class = trans_hdr->iucv_hdr.class; 2202 2202 err = afiucv_hs_callback_rx(sk, skb); 2203 2203 break; 2204 2204 default:
+19 -8
net/mac80211/iface.c
··· 78 78 ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_TXPOWER); 79 79 } 80 80 81 - u32 ieee80211_idle_off(struct ieee80211_local *local) 81 + static u32 __ieee80211_idle_off(struct ieee80211_local *local) 82 82 { 83 83 if (!(local->hw.conf.flags & IEEE80211_CONF_IDLE)) 84 84 return 0; ··· 87 87 return IEEE80211_CONF_CHANGE_IDLE; 88 88 } 89 89 90 - static u32 ieee80211_idle_on(struct ieee80211_local *local) 90 + static u32 __ieee80211_idle_on(struct ieee80211_local *local) 91 91 { 92 92 if (local->hw.conf.flags & IEEE80211_CONF_IDLE) 93 93 return 0; ··· 98 98 return IEEE80211_CONF_CHANGE_IDLE; 99 99 } 100 100 101 - void ieee80211_recalc_idle(struct ieee80211_local *local) 101 + static u32 __ieee80211_recalc_idle(struct ieee80211_local *local, 102 + bool force_active) 102 103 { 103 104 bool working = false, scanning, active; 104 105 unsigned int led_trig_start = 0, led_trig_stop = 0; 105 106 struct ieee80211_roc_work *roc; 106 - u32 change; 107 107 108 108 lockdep_assert_held(&local->mtx); 109 109 110 - active = !list_empty(&local->chanctx_list) || local->monitors; 110 + active = force_active || 111 + !list_empty(&local->chanctx_list) || 112 + local->monitors; 111 113 112 114 if (!local->ops->remain_on_channel) { 113 115 list_for_each_entry(roc, &local->roc_list, list) { ··· 134 132 ieee80211_mod_tpt_led_trig(local, led_trig_start, led_trig_stop); 135 133 136 134 if (working || scanning || active) 137 - change = ieee80211_idle_off(local); 138 - else 139 - change = ieee80211_idle_on(local); 135 + return __ieee80211_idle_off(local); 136 + return __ieee80211_idle_on(local); 137 + } 138 + 139 + u32 ieee80211_idle_off(struct ieee80211_local *local) 140 + { 141 + return __ieee80211_recalc_idle(local, true); 142 + } 143 + 144 + void ieee80211_recalc_idle(struct ieee80211_local *local) 145 + { 146 + u32 change = __ieee80211_recalc_idle(local, false); 140 147 if (change) 141 148 ieee80211_hw_config(local, change); 142 149 }
+20 -4
net/mac80211/mlme.c
··· 3887 3887 /* prep auth_data so we don't go into idle on disassoc */ 3888 3888 ifmgd->auth_data = auth_data; 3889 3889 3890 - if (ifmgd->associated) 3891 - ieee80211_set_disassoc(sdata, 0, 0, false, NULL); 3890 + if (ifmgd->associated) { 3891 + u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN]; 3892 + 3893 + ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH, 3894 + WLAN_REASON_UNSPECIFIED, 3895 + false, frame_buf); 3896 + 3897 + __cfg80211_send_deauth(sdata->dev, frame_buf, 3898 + sizeof(frame_buf)); 3899 + } 3892 3900 3893 3901 sdata_info(sdata, "authenticate with %pM\n", req->bss->bssid); 3894 3902 ··· 3956 3948 3957 3949 mutex_lock(&ifmgd->mtx); 3958 3950 3959 - if (ifmgd->associated) 3960 - ieee80211_set_disassoc(sdata, 0, 0, false, NULL); 3951 + if (ifmgd->associated) { 3952 + u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN]; 3953 + 3954 + ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH, 3955 + WLAN_REASON_UNSPECIFIED, 3956 + false, frame_buf); 3957 + 3958 + __cfg80211_send_deauth(sdata->dev, frame_buf, 3959 + sizeof(frame_buf)); 3960 + } 3961 3961 3962 3962 if (ifmgd->auth_data && !ifmgd->auth_data->done) { 3963 3963 err = -EBUSY;
+5 -1
net/netfilter/ipset/ip_set_bitmap_ipmac.c
··· 339 339 nla_put_failure: 340 340 nla_nest_cancel(skb, nested); 341 341 ipset_nest_end(skb, atd); 342 - return -EMSGSIZE; 342 + if (unlikely(id == first)) { 343 + cb->args[2] = 0; 344 + return -EMSGSIZE; 345 + } 346 + return 0; 343 347 } 344 348 345 349 static int
+18
net/netfilter/ipset/ip_set_hash_ipportnet.c
··· 104 104 dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH); 105 105 } 106 106 107 + static inline void 108 + hash_ipportnet4_data_reset_flags(struct hash_ipportnet4_elem *dst, u32 *flags) 109 + { 110 + if (dst->nomatch) { 111 + *flags = IPSET_FLAG_NOMATCH; 112 + dst->nomatch = 0; 113 + } 114 + } 115 + 107 116 static inline int 108 117 hash_ipportnet4_data_match(const struct hash_ipportnet4_elem *elem) 109 118 { ··· 421 412 hash_ipportnet6_data_flags(struct hash_ipportnet6_elem *dst, u32 flags) 422 413 { 423 414 dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH); 415 + } 416 + 417 + static inline void 418 + hash_ipportnet6_data_reset_flags(struct hash_ipportnet6_elem *dst, u32 *flags) 419 + { 420 + if (dst->nomatch) { 421 + *flags = IPSET_FLAG_NOMATCH; 422 + dst->nomatch = 0; 423 + } 424 424 } 425 425 426 426 static inline int
+20 -2
net/netfilter/ipset/ip_set_hash_net.c
··· 87 87 static inline void 88 88 hash_net4_data_flags(struct hash_net4_elem *dst, u32 flags) 89 89 { 90 - dst->nomatch = flags & IPSET_FLAG_NOMATCH; 90 + dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH); 91 + } 92 + 93 + static inline void 94 + hash_net4_data_reset_flags(struct hash_net4_elem *dst, u32 *flags) 95 + { 96 + if (dst->nomatch) { 97 + *flags = IPSET_FLAG_NOMATCH; 98 + dst->nomatch = 0; 99 + } 91 100 } 92 101 93 102 static inline int ··· 317 308 static inline void 318 309 hash_net6_data_flags(struct hash_net6_elem *dst, u32 flags) 319 310 { 320 - dst->nomatch = flags & IPSET_FLAG_NOMATCH; 311 + dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH); 312 + } 313 + 314 + static inline void 315 + hash_net6_data_reset_flags(struct hash_net6_elem *dst, u32 *flags) 316 + { 317 + if (dst->nomatch) { 318 + *flags = IPSET_FLAG_NOMATCH; 319 + dst->nomatch = 0; 320 + } 321 321 } 322 322 323 323 static inline int
+20 -2
net/netfilter/ipset/ip_set_hash_netiface.c
··· 198 198 static inline void 199 199 hash_netiface4_data_flags(struct hash_netiface4_elem *dst, u32 flags) 200 200 { 201 - dst->nomatch = flags & IPSET_FLAG_NOMATCH; 201 + dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH); 202 + } 203 + 204 + static inline void 205 + hash_netiface4_data_reset_flags(struct hash_netiface4_elem *dst, u32 *flags) 206 + { 207 + if (dst->nomatch) { 208 + *flags = IPSET_FLAG_NOMATCH; 209 + dst->nomatch = 0; 210 + } 202 211 } 203 212 204 213 static inline int ··· 503 494 static inline void 504 495 hash_netiface6_data_flags(struct hash_netiface6_elem *dst, u32 flags) 505 496 { 506 - dst->nomatch = flags & IPSET_FLAG_NOMATCH; 497 + dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH); 507 498 } 508 499 509 500 static inline int 510 501 hash_netiface6_data_match(const struct hash_netiface6_elem *elem) 511 502 { 512 503 return elem->nomatch ? -ENOTEMPTY : 1; 504 + } 505 + 506 + static inline void 507 + hash_netiface6_data_reset_flags(struct hash_netiface6_elem *dst, u32 *flags) 508 + { 509 + if (dst->nomatch) { 510 + *flags = IPSET_FLAG_NOMATCH; 511 + dst->nomatch = 0; 512 + } 513 513 } 514 514 515 515 static inline void
+18
net/netfilter/ipset/ip_set_hash_netport.c
··· 104 104 dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH); 105 105 } 106 106 107 + static inline void 108 + hash_netport4_data_reset_flags(struct hash_netport4_elem *dst, u32 *flags) 109 + { 110 + if (dst->nomatch) { 111 + *flags = IPSET_FLAG_NOMATCH; 112 + dst->nomatch = 0; 113 + } 114 + } 115 + 107 116 static inline int 108 117 hash_netport4_data_match(const struct hash_netport4_elem *elem) 109 118 { ··· 382 373 hash_netport6_data_flags(struct hash_netport6_elem *dst, u32 flags) 383 374 { 384 375 dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH); 376 + } 377 + 378 + static inline void 379 + hash_netport6_data_reset_flags(struct hash_netport6_elem *dst, u32 *flags) 380 + { 381 + if (dst->nomatch) { 382 + *flags = IPSET_FLAG_NOMATCH; 383 + dst->nomatch = 0; 384 + } 385 385 } 386 386 387 387 static inline int
+7 -3
net/netfilter/ipset/ip_set_list_set.c
··· 174 174 { 175 175 const struct set_elem *e = list_set_elem(map, i); 176 176 177 - if (i == map->size - 1 && e->id != IPSET_INVALID_ID) 178 - /* Last element replaced: e.g. add new,before,last */ 179 - ip_set_put_byindex(e->id); 177 + if (e->id != IPSET_INVALID_ID) { 178 + const struct set_elem *x = list_set_elem(map, map->size - 1); 179 + 180 + /* Last element replaced or pushed off */ 181 + if (x->id != IPSET_INVALID_ID) 182 + ip_set_put_byindex(x->id); 183 + } 180 184 if (with_timeout(map->timeout)) 181 185 list_elem_tadd(map, i, id, ip_set_timeout_set(timeout)); 182 186 else
+2 -4
net/netfilter/nf_conntrack_sip.c
··· 1593 1593 end += strlen("\r\n\r\n") + clen; 1594 1594 1595 1595 msglen = origlen = end - dptr; 1596 - if (msglen > datalen) { 1597 - nf_ct_helper_log(skb, ct, "incomplete/bad SIP message"); 1598 - return NF_DROP; 1599 - } 1596 + if (msglen > datalen) 1597 + return NF_ACCEPT; 1600 1598 1601 1599 ret = process_sip_msg(skb, ct, protoff, dataoff, 1602 1600 &dptr, &msglen);
+7 -33
net/netfilter/nf_nat_core.c
··· 468 468 struct nf_nat_proto_clean { 469 469 u8 l3proto; 470 470 u8 l4proto; 471 - bool hash; 472 471 }; 473 472 474 - /* Clear NAT section of all conntracks, in case we're loaded again. */ 475 - static int nf_nat_proto_clean(struct nf_conn *i, void *data) 473 + /* kill conntracks with affected NAT section */ 474 + static int nf_nat_proto_remove(struct nf_conn *i, void *data) 476 475 { 477 476 const struct nf_nat_proto_clean *clean = data; 478 477 struct nf_conn_nat *nat = nfct_nat(i); 479 478 480 479 if (!nat) 481 480 return 0; 482 - if (!(i->status & IPS_SRC_NAT_DONE)) 483 - return 0; 481 + 484 482 if ((clean->l3proto && nf_ct_l3num(i) != clean->l3proto) || 485 483 (clean->l4proto && nf_ct_protonum(i) != clean->l4proto)) 486 484 return 0; 487 485 488 - if (clean->hash) { 489 - spin_lock_bh(&nf_nat_lock); 490 - hlist_del_rcu(&nat->bysource); 491 - spin_unlock_bh(&nf_nat_lock); 492 - } else { 493 - memset(nat, 0, sizeof(*nat)); 494 - i->status &= ~(IPS_NAT_MASK | IPS_NAT_DONE_MASK | 495 - IPS_SEQ_ADJUST); 496 - } 497 - return 0; 486 + return i->status & IPS_NAT_MASK ? 1 : 0;
498 487 } 499 488 500 489 static void nf_nat_l4proto_clean(u8 l3proto, u8 l4proto) ··· 495 506 struct net *net; 496 507 497 508 rtnl_lock(); 498 - /* Step 1 - remove from bysource hash */ 499 - clean.hash = true; 500 509 for_each_net(net) 501 - nf_ct_iterate_cleanup(net, nf_nat_proto_clean, &clean); 502 - synchronize_rcu(); 503 - 504 - /* Step 2 - clean NAT section */ 505 - clean.hash = false; 506 - for_each_net(net) 507 - nf_ct_iterate_cleanup(net, nf_nat_proto_clean, &clean); 510 + nf_ct_iterate_cleanup(net, nf_nat_proto_remove, &clean); 508 511 rtnl_unlock(); 509 512 } ··· 508 527 struct net *net; 509 528 510 529 rtnl_lock(); 511 - /* Step 1 - remove from bysource hash */ 512 - clean.hash = true; 513 - for_each_net(net) 514 - nf_ct_iterate_cleanup(net, nf_nat_proto_clean, &clean); 515 - synchronize_rcu(); 516 530 517 - /* Step 2 - clean NAT section */ 518 - clean.hash = false; 519 531 for_each_net(net) 520 - nf_ct_iterate_cleanup(net, nf_nat_proto_clean, &clean); 532 + nf_ct_iterate_cleanup(net, nf_nat_proto_remove, &clean); 521 533 rtnl_unlock(); 522 534 } ··· 748 774 { 749 775 struct nf_nat_proto_clean clean = {}; 750 776 751 - nf_ct_iterate_cleanup(net, &nf_nat_proto_clean, &clean); 777 + nf_ct_iterate_cleanup(net, &nf_nat_proto_remove, &clean); 752 778 synchronize_rcu(); 753 779 nf_ct_free_hashtable(net->ct.nat_bysource, net->ct.nat_htable_size); 754 780 }
+1 -1
net/netrom/af_netrom.c
··· 1173 1173 } 1174 1174 1175 1175 if (sax != NULL) { 1176 - memset(sax, 0, sizeof(sax)); 1176 + memset(sax, 0, sizeof(*sax)); 1177 1177 sax->sax25_family = AF_NETROM; 1178 1178 skb_copy_from_linear_data_offset(skb, 7, sax->sax25_call.ax25_call, 1179 1179 AX25_ADDR_LEN);
+18 -12
net/openvswitch/datapath.c
··· 1681 1681 return ERR_PTR(-ENOMEM); 1682 1682 1683 1683 retval = ovs_vport_cmd_fill_info(vport, skb, portid, seq, 0, cmd); 1684 - if (retval < 0) { 1685 - kfree_skb(skb); 1686 - return ERR_PTR(retval); 1687 - } 1684 + BUG_ON(retval < 0); 1685 + 1688 1686 return skb; 1689 1687 } 1690 1688 ··· 1812 1814 nla_get_u32(a[OVS_VPORT_ATTR_TYPE]) != vport->ops->type) 1813 1815 err = -EINVAL; 1814 1816 1817 + reply = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 1818 + if (!reply) { 1819 + err = -ENOMEM; 1820 + goto exit_unlock; 1821 + } 1822 + 1815 1823 if (!err && a[OVS_VPORT_ATTR_OPTIONS]) 1816 1824 err = ovs_vport_set_options(vport, a[OVS_VPORT_ATTR_OPTIONS]); 1817 1825 if (err) 1818 - goto exit_unlock; 1826 + goto exit_free; 1827 + 1819 1828 if (a[OVS_VPORT_ATTR_UPCALL_PID]) 1820 1829 vport->upcall_portid = nla_get_u32(a[OVS_VPORT_ATTR_UPCALL_PID]); 1821 1830 1822 - reply = ovs_vport_cmd_build_info(vport, info->snd_portid, info->snd_seq, 1823 - OVS_VPORT_CMD_NEW); 1824 - if (IS_ERR(reply)) { 1825 - netlink_set_err(sock_net(skb->sk)->genl_sock, 0, 1826 - ovs_dp_vport_multicast_group.id, PTR_ERR(reply)); 1827 - goto exit_unlock; 1828 - } 1831 + err = ovs_vport_cmd_fill_info(vport, reply, info->snd_portid, 1832 + info->snd_seq, 0, OVS_VPORT_CMD_NEW); 1833 + BUG_ON(err < 0); 1829 1834 1830 1835 ovs_unlock(); 1831 1836 ovs_notify(reply, info, &ovs_dp_vport_multicast_group); 1832 1837 return 0; 1833 1838 1839 + rtnl_unlock(); 1840 + return 0; 1841 + 1842 + exit_free: 1843 + kfree_skb(reply); 1834 1844 exit_unlock: 1835 1845 ovs_unlock(); 1836 1846 return err;
+1 -1
net/openvswitch/flow.c
··· 795 795 796 796 void ovs_flow_tbl_remove(struct flow_table *table, struct sw_flow *flow) 797 797 { 798 + BUG_ON(table->count == 0); 798 799 hlist_del_rcu(&flow->hash_node[table->node_ver]); 799 800 table->count--; 800 - BUG_ON(table->count < 0); 801 801 } 802 802 803 803 /* The size of the argument for each %OVS_KEY_ATTR_* Netlink attribute. */
+1 -1
net/sched/cls_fw.c
··· 204 204 if (err < 0) 205 205 return err; 206 206 207 - err = -EINVAL; 208 207 if (tb[TCA_FW_CLASSID]) { 209 208 f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]); 210 209 tcf_bind_filter(tp, &f->res, base); ··· 217 218 } 218 219 #endif /* CONFIG_NET_CLS_IND */ 219 220 221 + err = -EINVAL; 220 222 if (tb[TCA_FW_MASK]) { 221 223 mask = nla_get_u32(tb[TCA_FW_MASK]); 222 224 if (mask != head->mask)
+3 -8
net/sunrpc/clnt.c
··· 304 304 err = rpciod_up(); 305 305 if (err) 306 306 goto out_no_rpciod; 307 - err = -EINVAL; 308 - if (!xprt) 309 - goto out_no_xprt; 310 307 308 + err = -EINVAL; 311 309 if (args->version >= program->nrvers) 312 310 goto out_err; 313 311 version = program->version[args->version]; ··· 380 382 out_no_stats: 381 383 kfree(clnt); 382 384 out_err: 383 - xprt_put(xprt); 384 - out_no_xprt: 385 385 rpciod_down(); 386 386 out_no_rpciod: 387 + xprt_put(xprt); 387 388 return ERR_PTR(err); 388 389 } 389 390 ··· 509 512 new = rpc_new_client(args, xprt); 510 513 if (IS_ERR(new)) { 511 514 err = PTR_ERR(new); 512 - goto out_put; 515 + goto out_err; 513 516 } 514 517 515 518 atomic_inc(&clnt->cl_count); ··· 522 525 new->cl_chatty = clnt->cl_chatty; 523 526 return new; 524 527 525 - out_put: 526 - xprt_put(xprt); 527 528 out_err: 528 529 dprintk("RPC: %s: returned error %d\n", __func__, err); 529 530 return ERR_PTR(err);
+1 -1
net/unix/af_unix.c
··· 1414 1414 !other->sk_socket || 1415 1415 test_bit(SOCK_PASSCRED, &other->sk_socket->flags)) { 1416 1416 UNIXCB(skb).pid = get_pid(task_tgid(current)); 1417 - current_euid_egid(&UNIXCB(skb).uid, &UNIXCB(skb).gid); 1417 + current_uid_gid(&UNIXCB(skb).uid, &UNIXCB(skb).gid); 1418 1418 } 1419 1419 } 1420 1420
+1
scripts/checkpatch.pl
··· 3016 3016 $dstat !~ /^'X'$/ && # character constants 3017 3017 $dstat !~ /$exceptions/ && 3018 3018 $dstat !~ /^\.$Ident\s*=/ && # .foo = 3019 + $dstat !~ /^(?:\#\s*$Ident|\#\s*$Constant)\s*$/ && # stringification #foo 3019 3020 $dstat !~ /^do\s*$Constant\s*while\s*$Constant;?$/ && # do {...} while (...); // do {...} while (...) 3020 3021 $dstat !~ /^for\s*$Constant$/ && # for (...) 3021 3022 $dstat !~ /^for\s*$Constant\s+(?:$Ident|-?$Constant)$/ && # for (...) bar()
+6
security/capability.c
··· 737 737 { 738 738 return 0; 739 739 } 740 + 741 + static void cap_skb_owned_by(struct sk_buff *skb, struct sock *sk) 742 + { 743 + } 744 + 740 745 #endif /* CONFIG_SECURITY_NETWORK */ 741 746 742 747 #ifdef CONFIG_SECURITY_NETWORK_XFRM ··· 1076 1071 set_to_cap_if_null(ops, tun_dev_open); 1077 1072 set_to_cap_if_null(ops, tun_dev_attach_queue); 1078 1073 set_to_cap_if_null(ops, tun_dev_attach); 1074 + set_to_cap_if_null(ops, skb_owned_by); 1079 1075 #endif /* CONFIG_SECURITY_NETWORK */ 1080 1076 #ifdef CONFIG_SECURITY_NETWORK_XFRM 1081 1077 set_to_cap_if_null(ops, xfrm_policy_alloc_security);
+5
security/security.c
··· 1290 1290 } 1291 1291 EXPORT_SYMBOL(security_tun_dev_open); 1292 1292 1293 + void security_skb_owned_by(struct sk_buff *skb, struct sock *sk) 1294 + { 1295 + security_ops->skb_owned_by(skb, sk); 1296 + } 1297 + 1293 1298 #endif /* CONFIG_SECURITY_NETWORK */ 1294 1299 1295 1300 #ifdef CONFIG_SECURITY_NETWORK_XFRM
+7
security/selinux/hooks.c
··· 51 51 #include <linux/tty.h> 52 52 #include <net/icmp.h> 53 53 #include <net/ip.h> /* for local_port_range[] */ 54 + #include <net/sock.h> 54 55 #include <net/tcp.h> /* struct or_callable used in sock_rcv_skb */ 55 56 #include <net/net_namespace.h> 56 57 #include <net/netlabel.h> ··· 4364 4363 selinux_skb_peerlbl_sid(skb, family, &sksec->peer_sid); 4365 4364 } 4366 4365 4366 + static void selinux_skb_owned_by(struct sk_buff *skb, struct sock *sk) 4367 + { 4368 + skb_set_owner_w(skb, sk); 4369 + } 4370 + 4367 4371 static int selinux_secmark_relabel_packet(u32 sid) 4368 4372 { 4369 4373 const struct task_security_struct *__tsec; ··· 5670 5664 .tun_dev_attach_queue = selinux_tun_dev_attach_queue, 5671 5665 .tun_dev_attach = selinux_tun_dev_attach, 5672 5666 .tun_dev_open = selinux_tun_dev_open, 5667 + .skb_owned_by = selinux_skb_owned_by, 5673 5668 5674 5669 #ifdef CONFIG_SECURITY_NETWORK_XFRM 5675 5670 .xfrm_policy_alloc_security = selinux_xfrm_policy_alloc,
+2 -10
sound/core/pcm_native.c
··· 3222 3222 int snd_pcm_lib_mmap_iomem(struct snd_pcm_substream *substream, 3223 3223 struct vm_area_struct *area) 3224 3224 { 3225 - long size; 3226 - unsigned long offset; 3225 + struct snd_pcm_runtime *runtime = substream->runtime;; 3227 3226 3228 3227 area->vm_page_prot = pgprot_noncached(area->vm_page_prot); 3229 - area->vm_flags |= VM_IO; 3230 - size = area->vm_end - area->vm_start; 3231 - offset = area->vm_pgoff << PAGE_SHIFT; 3232 - if (io_remap_pfn_range(area, area->vm_start, 3233 - (substream->runtime->dma_addr + offset) >> PAGE_SHIFT, 3234 - size, area->vm_page_prot)) 3235 - return -EAGAIN; 3236 - return 0; 3228 + return vm_iomap_memory(area, runtime->dma_addr, runtime->dma_bytes); 3237 3229 } 3238 3230 3239 3231 EXPORT_SYMBOL(snd_pcm_lib_mmap_iomem);
+1 -1
sound/soc/codecs/wm5102.c
··· 584 584 struct snd_kcontrol *kcontrol, int event) 585 585 { 586 586 struct snd_soc_codec *codec = w->codec; 587 - struct arizona *arizona = dev_get_drvdata(codec->dev); 587 + struct arizona *arizona = dev_get_drvdata(codec->dev->parent); 588 588 struct regmap *regmap = codec->control_data; 589 589 const struct reg_default *patch = NULL; 590 590 int i, patch_size;
+2
sound/soc/codecs/wm8903.c
··· 1083 1083 { "ROP", NULL, "Right Speaker PGA" }, 1084 1084 { "RON", NULL, "Right Speaker PGA" }, 1085 1085 1086 + { "Charge Pump", NULL, "CLK_DSP" }, 1087 + 1086 1088 { "Left Headphone Output PGA", NULL, "Charge Pump" }, 1087 1089 { "Right Headphone Output PGA", NULL, "Charge Pump" }, 1088 1090 { "Left Line Output PGA", NULL, "Charge Pump" },
+12 -5
sound/soc/samsung/i2s.c
··· 972 972 static struct i2s_dai *i2s_alloc_dai(struct platform_device *pdev, bool sec) 973 973 { 974 974 struct i2s_dai *i2s; 975 + int ret; 975 976 976 977 i2s = devm_kzalloc(&pdev->dev, sizeof(struct i2s_dai), GFP_KERNEL); 977 978 if (i2s == NULL) ··· 997 996 i2s->i2s_dai_drv.capture.channels_max = 2; 998 997 i2s->i2s_dai_drv.capture.rates = SAMSUNG_I2S_RATES; 999 998 i2s->i2s_dai_drv.capture.formats = SAMSUNG_I2S_FMTS; 999 + dev_set_drvdata(&i2s->pdev->dev, i2s); 1000 1000 } else { /* Create a new platform_device for Secondary */ 1001 - i2s->pdev = platform_device_register_resndata(NULL, 1002 - "samsung-i2s-sec", -1, NULL, 0, NULL, 0); 1001 + i2s->pdev = platform_device_alloc("samsung-i2s-sec", -1); 1003 1002 if (IS_ERR(i2s->pdev)) 1004 1003 return NULL; 1005 - } 1006 1004 1007 - /* Pre-assign snd_soc_dai_set_drvdata */ 1008 - dev_set_drvdata(&i2s->pdev->dev, i2s); 1005 + platform_set_drvdata(i2s->pdev, i2s); 1006 + ret = platform_device_add(i2s->pdev); 1007 + if (ret < 0) 1008 + return NULL; 1009 + } 1009 1010 1010 1011 return i2s; 1011 1012 } ··· 1110 1107 1111 1108 if (samsung_dai_type == TYPE_SEC) { 1112 1109 sec_dai = dev_get_drvdata(&pdev->dev); 1110 + if (!sec_dai) { 1111 + dev_err(&pdev->dev, "Unable to get drvdata\n"); 1112 + return -EFAULT; 1113 + } 1113 1114 snd_soc_register_dai(&sec_dai->pdev->dev, 1114 1115 &sec_dai->i2s_dai_drv); 1115 1116 asoc_dma_platform_register(&pdev->dev);
+11 -3
sound/soc/soc-compress.c
··· 211 211 if (platform->driver->compr_ops && platform->driver->compr_ops->set_params) { 212 212 ret = platform->driver->compr_ops->set_params(cstream, params); 213 213 if (ret < 0) 214 - goto out; 214 + goto err; 215 215 } 216 216 217 217 if (rtd->dai_link->compr_ops && rtd->dai_link->compr_ops->set_params) { 218 218 ret = rtd->dai_link->compr_ops->set_params(cstream); 219 219 if (ret < 0) 220 - goto out; 220 + goto err; 221 221 } 222 222 223 223 snd_soc_dapm_stream_event(rtd, SNDRV_PCM_STREAM_PLAYBACK, 224 224 SND_SOC_DAPM_STREAM_START); 225 225 226 - out: 226 + /* cancel any delayed stream shutdown that is pending */ 227 + rtd->pop_wait = 0; 228 + mutex_unlock(&rtd->pcm_mutex); 229 + 230 + cancel_delayed_work_sync(&rtd->delayed_work); 231 + 232 + return ret; 233 + 234 + err: 227 235 mutex_unlock(&rtd->pcm_mutex); 228 236 return ret; 229 237 }
+1 -1
sound/soc/soc-core.c
··· 2963 2963 val = val << shift; 2964 2964 2965 2965 ret = snd_soc_update_bits_locked(codec, reg, val_mask, val); 2966 - if (ret != 0) 2966 + if (ret < 0) 2967 2967 return ret; 2968 2968 2969 2969 if (snd_soc_volsw_is_stereo(mc)) {
+1 -23
sound/soc/tegra/tegra_pcm.c
··· 43 43 static const struct snd_pcm_hardware tegra_pcm_hardware = { 44 44 .info = SNDRV_PCM_INFO_MMAP | 45 45 SNDRV_PCM_INFO_MMAP_VALID | 46 - SNDRV_PCM_INFO_PAUSE | 47 - SNDRV_PCM_INFO_RESUME | 48 46 SNDRV_PCM_INFO_INTERLEAVED, 49 47 .formats = SNDRV_PCM_FMTBIT_S16_LE, 50 48 .channels_min = 2, ··· 125 127 return 0; 126 128 } 127 129 128 - static int tegra_pcm_trigger(struct snd_pcm_substream *substream, int cmd) 129 - { 130 - switch (cmd) { 131 - case SNDRV_PCM_TRIGGER_START: 132 - case SNDRV_PCM_TRIGGER_RESUME: 133 - case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: 134 - return snd_dmaengine_pcm_trigger(substream, 135 - SNDRV_PCM_TRIGGER_START); 136 - 137 - case SNDRV_PCM_TRIGGER_STOP: 138 - case SNDRV_PCM_TRIGGER_SUSPEND: 139 - case SNDRV_PCM_TRIGGER_PAUSE_PUSH: 140 - return snd_dmaengine_pcm_trigger(substream, 141 - SNDRV_PCM_TRIGGER_STOP); 142 - default: 143 - return -EINVAL; 144 - } 145 - return 0; 146 - } 147 - 148 130 static int tegra_pcm_mmap(struct snd_pcm_substream *substream, 149 131 struct vm_area_struct *vma) 150 132 { ··· 142 164 .ioctl = snd_pcm_lib_ioctl, 143 165 .hw_params = tegra_pcm_hw_params, 144 166 .hw_free = tegra_pcm_hw_free, 145 - .trigger = tegra_pcm_trigger, 167 + .trigger = snd_dmaengine_pcm_trigger, 146 168 .pointer = snd_dmaengine_pcm_pointer, 147 169 .mmap = tegra_pcm_mmap, 148 170 };
+2 -2
sound/usb/mixer_quirks.c
··· 509 509 else 510 510 ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), bRequest, 511 511 USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN, 512 - 0, cpu_to_le16(wIndex), 512 + 0, wIndex, 513 513 &tmp, sizeof(tmp), 1000); 514 514 up_read(&mixer->chip->shutdown_rwsem); 515 515 ··· 540 540 else 541 541 ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0), bRequest, 542 542 USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT, 543 - cpu_to_le16(wValue), cpu_to_le16(wIndex), 543 + wValue, wIndex, 544 544 NULL, 0, 1000); 545 545 up_read(&mixer->chip->shutdown_rwsem); 546 546
+1 -1
sound/usb/quirks.c
··· 486 486 { 487 487 int ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0), 488 488 0xaf, USB_TYPE_VENDOR | USB_RECIP_DEVICE, 489 - cpu_to_le16(1), 0, NULL, 0, 1000); 489 + 1, 0, NULL, 0, 1000); 490 490 491 491 if (ret < 0) 492 492 return ret;
+4 -1
tools/power/x86/turbostat/turbostat.c
··· 1421 1421 case 0x3C: /* HSW */ 1422 1422 case 0x3F: /* HSW */ 1423 1423 case 0x45: /* HSW */ 1424 + case 0x46: /* HSW */ 1424 1425 return 1; 1425 1426 case 0x2E: /* Nehalem-EX Xeon - Beckton */ 1426 1427 case 0x2F: /* Westmere-EX Xeon - Eagleton */ ··· 1516 1515 case 0x3C: /* HSW */ 1517 1516 case 0x3F: /* HSW */ 1518 1517 case 0x45: /* HSW */ 1518 + case 0x46: /* HSW */ 1519 1519 do_rapl = RAPL_PKG | RAPL_CORES | RAPL_GFX; 1520 1520 break; 1521 1521 case 0x2D: ··· 1756 1754 case 0x3C: /* HSW */ 1757 1755 case 0x3F: /* HSW */ 1758 1756 case 0x45: /* HSW */ 1757 + case 0x46: /* HSW */ 1759 1758 return 1; 1760 1759 } 1761 1760 return 0; ··· 2279 2276 cmdline(argc, argv); 2280 2277 2281 2278 if (verbose) 2282 - fprintf(stderr, "turbostat v3.2 February 11, 2013" 2279 + fprintf(stderr, "turbostat v3.3 March 15, 2013" 2283 2280 " - Len Brown <lenb@kernel.org>\n"); 2284 2281 2285 2282 turbostat_init();
+37 -10
virt/kvm/kvm_main.c
··· 1541 1541 } 1542 1542 1543 1543 int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc, 1544 - gpa_t gpa) 1544 + gpa_t gpa, unsigned long len) 1545 1545 { 1546 1546 struct kvm_memslots *slots = kvm_memslots(kvm); 1547 1547 int offset = offset_in_page(gpa); 1548 - gfn_t gfn = gpa >> PAGE_SHIFT; 1548 + gfn_t start_gfn = gpa >> PAGE_SHIFT; 1549 + gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT; 1550 + gfn_t nr_pages_needed = end_gfn - start_gfn + 1; 1551 + gfn_t nr_pages_avail; 1549 1552 1550 1553 ghc->gpa = gpa; 1551 1554 ghc->generation = slots->generation; 1552 - ghc->memslot = gfn_to_memslot(kvm, gfn); 1553 - ghc->hva = gfn_to_hva_many(ghc->memslot, gfn, NULL); 1554 - if (!kvm_is_error_hva(ghc->hva)) 1555 + ghc->len = len; 1556 + ghc->memslot = gfn_to_memslot(kvm, start_gfn); 1557 + ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, &nr_pages_avail); 1558 + if (!kvm_is_error_hva(ghc->hva) && nr_pages_avail >= nr_pages_needed) { 1555 1559 ghc->hva += offset; 1556 - else 1557 - return -EFAULT; 1558 - 1560 + } else { 1561 + /* 1562 + * If the requested region crosses two memslots, we still 1563 + * verify that the entire region is valid here. 1564 + */ 1565 + while (start_gfn <= end_gfn) { 1566 + ghc->memslot = gfn_to_memslot(kvm, start_gfn); 1567 + ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, 1568 + &nr_pages_avail); 1569 + if (kvm_is_error_hva(ghc->hva)) 1570 + return -EFAULT; 1571 + start_gfn += nr_pages_avail; 1572 + } 1573 + /* Use the slow path for cross page reads and writes. */
1574 + ghc->memslot = NULL; 1575 + } 1559 1576 return 0; 1560 1577 } 1561 1578 EXPORT_SYMBOL_GPL(kvm_gfn_to_hva_cache_init); ··· 1583 1566 struct kvm_memslots *slots = kvm_memslots(kvm); 1584 1567 int r; 1585 1568 1569 + BUG_ON(len > ghc->len); 1570 + 1586 1571 if (slots->generation != ghc->generation) 1587 - kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa); 1572 + kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa, ghc->len); 1573 + 1574 + if (unlikely(!ghc->memslot)) 1575 + return kvm_write_guest(kvm, ghc->gpa, data, len); 1588 1576 1589 1577 if (kvm_is_error_hva(ghc->hva)) 1590 1578 return -EFAULT; ··· 1609 1587 struct kvm_memslots *slots = kvm_memslots(kvm); 1610 1588 int r; 1611 1589 1590 + BUG_ON(len > ghc->len); 1591 + 1612 1592 if (slots->generation != ghc->generation) 1613 - kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa); 1593 + kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa, ghc->len); 1594 + 1595 + if (unlikely(!ghc->memslot)) 1596 + return kvm_read_guest(kvm, ghc->gpa, data, len); 1614 1597 1615 1598 if (kvm_is_error_hva(ghc->hva)) 1616 1599 return -EFAULT;