···
-What:		/sys/block/rssd*/registers
-Date:		March 2012
-KernelVersion:	3.3
-Contact:	Asai Thambi S P <asamymuthupa@micron.com>
-Description:	This is a read-only file. Dumps below driver information and
-		hardware registers.
-		    - S ACTive
-		    - Command Issue
-		    - Completed
-		    - PORT IRQ STAT
-		    - HOST IRQ STAT
-		    - Allocated
-		    - Commands in Q
-
 What:		/sys/block/rssd*/status
 Date:		April 2012
 KernelVersion:	3.4
 Contact:	Asai Thambi S P <asamymuthupa@micron.com>
 Description:	This is a read-only file. Indicates the status of the device.
-
-What:		/sys/block/rssd*/flags
-Date:		May 2012
-KernelVersion:	3.5
-Contact:	Asai Thambi S P <asamymuthupa@micron.com>
-Description:	This is a read-only file. Dumps the flags in port and driver
-		data structure
+45-84
Documentation/device-mapper/verity.txt
···

 Construction Parameters
 =======================
-    <version> <dev> <hash_dev> <hash_start>
+    <version> <dev> <hash_dev>
     <data_block_size> <hash_block_size>
     <num_data_blocks> <hash_start_block>
     <algorithm> <digest> <salt>

 <version>
-    This is the version number of the on-disk format.
+    This is the type of the on-disk hash format.

     0 is the original format used in the Chromium OS.
-      The salt is appended when hashing, digests are stored continuously and
-      the rest of the block is padded with zeros.
+      The salt is appended when hashing, digests are stored continuously and
+      the rest of the block is padded with zeros.

     1 is the current format that should be used for new devices.
-      The salt is prepended when hashing and each digest is
-      padded with zeros to the power of two.
+      The salt is prepended when hashing and each digest is
+      padded with zeros to the power of two.

 <dev>
-    This is the device containing the data the integrity of which needs to be
+    This is the device containing data, the integrity of which needs to be
     checked.  It may be specified as a path, like /dev/sdaX, or a device number,
     <major>:<minor>.

 <hash_dev>
-    This is the device that that supplies the hash tree data.  It may be
+    This is the device that supplies the hash tree data.  It may be
     specified similarly to the device path and may be the same device.  If the
-    same device is used, the hash_start should be outside of the dm-verity
-    configured device size.
+    same device is used, the hash_start should be outside the configured
+    dm-verity device.

 <data_block_size>
-    The block size on a data device.  Each block corresponds to one digest on
-    the hash device.
+    The block size on a data device in bytes.
+    Each block corresponds to one digest on the hash device.

 <hash_block_size>
-    The size of a hash block.
+    The size of a hash block in bytes.

 <num_data_blocks>
     The number of data blocks on the data device.  Additional blocks are
···
 Theory of operation
 ===================

-dm-verity is meant to be setup as part of a verified boot path.  This
+dm-verity is meant to be set up as part of a verified boot path.  This
 may be anything ranging from a boot using tboot or trustedgrub to just
 booting from a known-good device (like a USB drive or CD).
···
 has been authenticated in some way (cryptographic signatures, etc).
 After instantiation, all hashes will be verified on-demand during
 disk access.  If they cannot be verified up to the root node of the
-tree, the root hash, then the I/O will fail.  This should identify
+tree, the root hash, then the I/O will fail.  This should detect
 tampering with any data on the device and the hash data.

 Cryptographic hashes are used to assert the integrity of the device on a
-per-block basis.  This allows for a lightweight hash computation on first read
-into the page cache.  Block hashes are stored linearly-aligned to the nearest
-block the size of a page.
+per-block basis.  This allows for a lightweight hash computation on first read
+into the page cache.  Block hashes are stored linearly, aligned to the nearest
+block size.

 Hash Tree
 ---------

 Each node in the tree is a cryptographic hash.  If it is a leaf node, the hash
-is of some block data on disk.  If it is an intermediary node, then the hash is
-of a number of child nodes.
+of some data block on disk is calculated. If it is an intermediary node,
+the hash of a number of child nodes is calculated.

 Each entry in the tree is a collection of neighboring nodes that fit in one
 block.  The number is determined based on block_size and the size of the
···
 On-disk format
 ==============

-Below is the recommended on-disk format. The verity kernel code does not
-read the on-disk header. It only reads the hash blocks which directly
-follow the header. It is expected that a user-space tool will verify the
-integrity of the verity_header and then call dmsetup with the correct
-parameters. Alternatively, the header can be omitted and the dmsetup
-parameters can be passed via the kernel command-line in a rooted chain
-of trust where the command-line is verified.
+The verity kernel code does not read the verity metadata on-disk header.
+It only reads the hash blocks which directly follow the header.
+It is expected that a user-space tool will verify the integrity of the
+verity header.

-The on-disk format is especially useful in cases where the hash blocks
-are on a separate partition. The magic number allows easy identification
-of the partition contents. Alternatively, the hash blocks can be stored
-in the same partition as the data to be verified. In such a configuration
-the filesystem on the partition would be sized a little smaller than
-the full-partition, leaving room for the hash blocks.
-
-struct superblock {
-	uint8_t signature[8]
-		"verity\0\0";
-
-	uint8_t version;
-		1 - current format
-
-	uint8_t data_block_bits;
-		log2(data block size)
-
-	uint8_t hash_block_bits;
-		log2(hash block size)
-
-	uint8_t pad1[1];
-		zero padding
-
-	uint16_t salt_size;
-		big-endian salt size
-
-	uint8_t pad2[2];
-		zero padding
-
-	uint32_t data_blocks_hi;
-		big-endian high 32 bits of the 64-bit number of data blocks
-
-	uint32_t data_blocks_lo;
-		big-endian low 32 bits of the 64-bit number of data blocks
-
-	uint8_t algorithm[16];
-		cryptographic algorithm
-
-	uint8_t salt[384];
-		salt (the salt size is specified above)
-
-	uint8_t pad3[88];
-		zero padding to 512-byte boundary
-}
+Alternatively, the header can be omitted and the dmsetup parameters can
+be passed via the kernel command-line in a rooted chain of trust where
+the command-line is verified.

 Directly following the header (and with sector number padded to the next hash
 block boundary) are the hash blocks which are stored a depth at a time
 (starting from the root), sorted in order of increasing index.
+
+The full specification of kernel parameters and on-disk metadata format
+is available at the cryptsetup project's wiki page
+  http://code.google.com/p/cryptsetup/wiki/DMVerity

 Status
 ======
···
 Example
 =======
-
-Setup a device:
-  dmsetup create vroot --table \
-    "0 2097152 "\
-    "verity 1 /dev/sda1 /dev/sda2 4096 4096 2097152 1 "\
+Set up a device:
+  # dmsetup create vroot --readonly --table \
+    "0 2097152 verity 1 /dev/sda1 /dev/sda2 4096 4096 262144 1 sha256 "\
     "4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076 "\
     "1234000000000000000000000000000000000000000000000000000000000000"

 A command line tool veritysetup is available to compute or verify
-the hash tree or activate the kernel driver.  This is available from
-the LVM2 upstream repository and may be supplied as a package called
-device-mapper-verity-tools:
-    git://sources.redhat.com/git/lvm2
-    http://sourceware.org/git/?p=lvm2.git
-    http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/verity?cvsroot=lvm2
+the hash tree or activate the kernel device. This is available from
+the cryptsetup upstream repository http://code.google.com/p/cryptsetup/
+(as a libcryptsetup extension).

-veritysetup -a vroot /dev/sda1 /dev/sda2 \
-	4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
+Create hash on the device:
+  # veritysetup format /dev/sda1 /dev/sda2
+  ...
+  Root hash: 4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
+
+Activate the device:
+  # veritysetup create vroot /dev/sda1 /dev/sda2 \
+    4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
···
 This isn't an exhaustive list, but you should add new prefixes to it before
 using them to avoid name-space collisions.

+ad	Avionic Design GmbH
 adi	Analog Devices, Inc.
 amcc	Applied Micro Circuits Corporation (APM, formerly AMCC)
 apm	Applied Micro Circuits Corporation (APM)
+57
Documentation/prctl/no_new_privs.txt
···
+The execve system call can grant a newly-started program privileges that
+its parent did not have.  The most obvious examples are setuid/setgid
+programs and file capabilities.  To prevent the parent program from
+gaining these privileges as well, the kernel and user code must be
+careful to prevent the parent from doing anything that could subvert the
+child.  For example:
+
+ - The dynamic loader handles LD_* environment variables differently if
+   a program is setuid.
+
+ - chroot is disallowed to unprivileged processes, since it would allow
+   /etc/passwd to be replaced from the point of view of a process that
+   inherited chroot.
+
+ - The exec code has special handling for ptrace.
+
+These are all ad-hoc fixes.  The no_new_privs bit (since Linux 3.5) is a
+new, generic mechanism to make it safe for a process to modify its
+execution environment in a manner that persists across execve.  Any task
+can set no_new_privs.  Once the bit is set, it is inherited across fork,
+clone, and execve and cannot be unset.  With no_new_privs set, execve
+promises not to grant the privilege to do anything that could not have
+been done without the execve call.  For example, the setuid and setgid
+bits will no longer change the uid or gid; file capabilities will not
+add to the permitted set, and LSMs will not relax constraints after
+execve.
+
+To set no_new_privs, use prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0).
+
+Be careful, though: LSMs might also not tighten constraints on exec
+in no_new_privs mode.  (This means that setting up a general-purpose
+service launcher to set no_new_privs before execing daemons may
+interfere with LSM-based sandboxing.)
+
+Note that no_new_privs does not prevent privilege changes that do not
+involve execve.  An appropriately privileged task can still call
+setuid(2) and receive SCM_RIGHTS datagrams.
+
+There are two main use cases for no_new_privs so far:
+
+ - Filters installed for the seccomp mode 2 sandbox persist across
+   execve and can change the behavior of newly-executed programs.
+   Unprivileged users are therefore only allowed to install such filters
+   if no_new_privs is set.
+
+ - By itself, no_new_privs can be used to reduce the attack surface
+   available to an unprivileged user.  If everything running with a
+   given uid has no_new_privs set, then that uid will be unable to
+   escalate its privileges by directly attacking setuid, setgid, and
+   fcap-using binaries; it will need to compromise something without the
+   no_new_privs bit set first.
+
+In the future, other potentially dangerous kernel features could become
+available to unprivileged tasks if no_new_privs is set.  In principle,
+several options to unshare(2) and clone(2) would be safe when
+no_new_privs is set, and no_new_privs + chroot is considerably less
+dangerous than chroot by itself.
+17
Documentation/virtual/kvm/api.txt
···
 PTE's RPN field (ie, it needs to be shifted left by 12 to OR it
 into the hash PTE second double word).

+4.75 KVM_IRQFD
+
+Capability: KVM_CAP_IRQFD
+Architectures: x86
+Type: vm ioctl
+Parameters: struct kvm_irqfd (in)
+Returns: 0 on success, -1 on error
+
+Allows setting an eventfd to directly trigger a guest interrupt.
+kvm_irqfd.fd specifies the file descriptor to use as the eventfd and
+kvm_irqfd.gsi specifies the irqchip pin toggled by this event.  When
+an event is triggered on the eventfd, an interrupt is injected into
+the guest using the specified gsi pin.  The irqfd is removed using
+the KVM_IRQFD_FLAG_DEASSIGN flag, specifying both kvm_irqfd.fd
+and kvm_irqfd.gsi.
+
+
 5. The kvm_run structure
 ------------------------
···
  */
 #define SWI_SYS_SIGRETURN	(0xef000000|(__NR_sigreturn)|(__NR_OABI_SYSCALL_BASE))
 #define SWI_SYS_RT_SIGRETURN	(0xef000000|(__NR_rt_sigreturn)|(__NR_OABI_SYSCALL_BASE))
+#define SWI_SYS_RESTART		(0xef000000|__NR_restart_syscall|__NR_OABI_SYSCALL_BASE)

 /*
  * With EABI, the syscall number has to be loaded into r7.
···
 const unsigned long sigreturn_codes[7] = {
 	MOV_R7_NR_SIGRETURN,    SWI_SYS_SIGRETURN,    SWI_THUMB_SIGRETURN,
 	MOV_R7_NR_RT_SIGRETURN, SWI_SYS_RT_SIGRETURN, SWI_THUMB_RT_SIGRETURN,
+};
+
+/*
+ * Either we support OABI only, or we have EABI with the OABI
+ * compat layer enabled.  In the latter case we don't know if
+ * user space is EABI or not, and if not we must not clobber r7.
+ * Always using the OABI syscall solves that issue and works for
+ * all those cases.
+ */
+const unsigned long syscall_restart_code[2] = {
+	SWI_SYS_RESTART,	/* swi	__NR_restart_syscall */
+	0xe49df004,		/* ldr	pc, [sp], #4 */
 };

 /*
···
 		case -ERESTARTNOHAND:
 		case -ERESTARTSYS:
 		case -ERESTARTNOINTR:
-		case -ERESTART_RESTARTBLOCK:
 			regs->ARM_r0 = regs->ARM_ORIG_r0;
 			regs->ARM_pc = restart_addr;
+			break;
+		case -ERESTART_RESTARTBLOCK:
+			regs->ARM_r0 = -EINTR;
 			break;
 		}
 	}
···
 	 * debugger has chosen to restart at a different PC.
 	 */
 	if (regs->ARM_pc == restart_addr) {
-		if (retval == -ERESTARTNOHAND ||
-		    retval == -ERESTART_RESTARTBLOCK
+		if (retval == -ERESTARTNOHAND
 		    || (retval == -ERESTARTSYS
 			&& !(ka.sa.sa_flags & SA_RESTART))) {
 			regs->ARM_r0 = -EINTR;
 			regs->ARM_pc = continue_addr;
 		}
-		clear_thread_flag(TIF_SYSCALL_RESTARTSYS);
 	}

 	handle_signal(signr, &ka, &info, regs);
···
 	 * ignore the restart.
 	 */
 	if (retval == -ERESTART_RESTARTBLOCK
-	    && regs->ARM_pc == restart_addr)
-		set_thread_flag(TIF_SYSCALL_RESTARTSYS);
+	    && regs->ARM_pc == continue_addr) {
+		if (thumb_mode(regs)) {
+			regs->ARM_r7 = __NR_restart_syscall - __NR_SYSCALL_BASE;
+			regs->ARM_pc -= 2;
+		} else {
+#if defined(CONFIG_AEABI) && !defined(CONFIG_OABI_COMPAT)
+			regs->ARM_r7 = __NR_restart_syscall;
+			regs->ARM_pc -= 4;
+#else
+			u32 __user *usp;
+
+			regs->ARM_sp -= 4;
+			usp = (u32 __user *)regs->ARM_sp;
+
+			if (put_user(regs->ARM_pc, usp) == 0) {
+				regs->ARM_pc = KERN_RESTART_CODE;
+			} else {
+				regs->ARM_sp += 4;
+				force_sigsegv(0, current);
+			}
+#endif
+		}
+	}
 }

 restore_saved_sigmask();
+2
arch/arm/kernel/signal.h
···
  * published by the Free Software Foundation.
  */
 #define KERN_SIGRETURN_CODE	(CONFIG_VECTORS_BASE + 0x00000500)
+#define KERN_RESTART_CODE	(KERN_SIGRETURN_CODE + sizeof(sigreturn_codes))

 extern const unsigned long sigreturn_codes[7];
+extern const unsigned long syscall_restart_code[2];
···
 		pr_err("i.MX35 clk %d: register failed with %ld\n",
 			i, PTR_ERR(clk[i]));

-
 	clk_register_clkdev(clk[pata_gate], NULL, "pata_imx");
 	clk_register_clkdev(clk[can1_gate], NULL, "flexcan.0");
 	clk_register_clkdev(clk[can2_gate], NULL, "flexcan.1");
···
 	clk_prepare_enable(clk[gpio3_gate]);
 	clk_prepare_enable(clk[iim_gate]);
 	clk_prepare_enable(clk[emi_gate]);
+
+	/*
+	 * SCC is needed to boot via mmc after a watchdog reset. The clock code
+	 * before conversion to common clk also enabled UART1 (which isn't
+	 * handled here and not needed for mmc) and IIM (which is enabled
+	 * unconditionally above).
+	 */
+	clk_prepare_enable(clk[scc_gate]);

 	imx_print_silicon_rev("i.MX35", mx35_revision());
···
 	}
 }

+#ifndef CONFIG_ARM_LPAE
+
+/*
+ * The Linux PMD is made of two consecutive section entries covering 2MB
+ * (see definition in include/asm/pgtable-2level.h).  However a call to
+ * create_mapping() may optimize static mappings by using individual
+ * 1MB section mappings.  This leaves the actual PMD potentially half
+ * initialized if the top or bottom section entry isn't used, leaving it
+ * open to problems if a subsequent ioremap() or vmalloc() tries to use
+ * the virtual space left free by that unused section entry.
+ *
+ * Let's avoid the issue by inserting dummy vm entries covering the unused
+ * PMD halves once the static mappings are in place.
+ */
+
+static void __init pmd_empty_section_gap(unsigned long addr)
+{
+	struct vm_struct *vm;
+
+	vm = early_alloc_aligned(sizeof(*vm), __alignof__(*vm));
+	vm->addr = (void *)addr;
+	vm->size = SECTION_SIZE;
+	vm->flags = VM_IOREMAP | VM_ARM_STATIC_MAPPING;
+	vm->caller = pmd_empty_section_gap;
+	vm_area_add_early(vm);
+}
+
+static void __init fill_pmd_gaps(void)
+{
+	struct vm_struct *vm;
+	unsigned long addr, next = 0;
+	pmd_t *pmd;
+
+	/* we're still single threaded hence no lock needed here */
+	for (vm = vmlist; vm; vm = vm->next) {
+		if (!(vm->flags & VM_ARM_STATIC_MAPPING))
+			continue;
+		addr = (unsigned long)vm->addr;
+		if (addr < next)
+			continue;
+
+		/*
+		 * Check if this vm starts on an odd section boundary.
+		 * If so and the first section entry for this PMD is free
+		 * then we block the corresponding virtual address.
+		 */
+		if ((addr & ~PMD_MASK) == SECTION_SIZE) {
+			pmd = pmd_off_k(addr);
+			if (pmd_none(*pmd))
+				pmd_empty_section_gap(addr & PMD_MASK);
+		}
+
+		/*
+		 * Then check if this vm ends on an odd section boundary.
+		 * If so and the second section entry for this PMD is empty
+		 * then we block the corresponding virtual address.
+		 */
+		addr += vm->size;
+		if ((addr & ~PMD_MASK) == SECTION_SIZE) {
+			pmd = pmd_off_k(addr) + 1;
+			if (pmd_none(*pmd))
+				pmd_empty_section_gap(addr);
+		}
+
+		/* no need to look at any vm entry until we hit the next PMD */
+		next = (addr + PMD_SIZE - 1) & PMD_MASK;
+	}
+}
+
+#else
+#define fill_pmd_gaps() do { } while (0)
+#endif
+
 static void * __initdata vmalloc_min =
 	(void *)(VMALLOC_END - (240 << 20) - VMALLOC_OFFSET);
···
 	 */
 	if (mdesc->map_io)
 		mdesc->map_io();
+	fill_pmd_gaps();

 	/*
 	 * Finally flush the caches and tlb to ensure that we're in a
+1-1
arch/powerpc/kvm/book3s_hv_rmhandlers.S
···
 	lwz	r3,VCORE_NAPPING_THREADS(r5)
 	lwz	r4,VCPU_PTID(r9)
 	li	r0,1
-	sldi	r0,r0,r4
+	sld	r0,r0,r4
 	andc.	r3,r3,r0		/* no sense IPI'ing ourselves */
 	beq	43f
 	mulli	r4,r4,PACA_SIZE		/* get paca for thread 0 */
+1-1
arch/powerpc/xmon/xmon.c
···
 	/* print cpus waiting or in xmon */
 	printf("cpus stopped:");
 	count = 0;
-	for (cpu = 0; cpu < NR_CPUS; ++cpu) {
+	for_each_possible_cpu(cpu) {
 		if (cpumask_test_cpu(cpu, &cpus_in_xmon)) {
 			if (count == 0)
 				printf(" %x", cpu);
···

 		blkg->pd[i] = pd;
 		pd->blkg = blkg;
-	}

-	/* invoke per-policy init */
-	for (i = 0; i < BLKCG_MAX_POLS; i++) {
-		struct blkcg_policy *pol = blkcg_policy[i];
-
+		/* invoke per-policy init */
 		if (blkcg_policy_enabled(blkg->q, pol))
 			pol->pd_init_fn(blkg);
 	}
···

 static void blkg_destroy(struct blkcg_gq *blkg)
 {
-	struct request_queue *q = blkg->q;
 	struct blkcg *blkcg = blkg->blkcg;

-	lockdep_assert_held(q->queue_lock);
+	lockdep_assert_held(blkg->q->queue_lock);
 	lockdep_assert_held(&blkcg->lock);

 	/* Something wrong if we are trying to remove same group twice */
+19-6
block/blk-core.c
···
  */
 void blk_drain_queue(struct request_queue *q, bool drain_all)
 {
+	int i;
+
 	while (true) {
 		bool drain = false;
-		int i;

 		spin_lock_irq(q->queue_lock);
···
 		if (!drain)
 			break;
 		msleep(10);
+	}
+
+	/*
+	 * With queue marked dead, any woken up waiter will fail the
+	 * allocation path, so the wakeup chaining is lost and we're
+	 * left with hung waiters. We need to wake up those waiters.
+	 */
+	if (q->request_fn) {
+		spin_lock_irq(q->queue_lock);
+		for (i = 0; i < ARRAY_SIZE(q->rq.wait); i++)
+			wake_up_all(&q->rq.wait[i]);
+		spin_unlock_irq(q->queue_lock);
 	}
 }
···
 	/* mark @q DEAD, no new request or merges will be allowed afterwards */
 	mutex_lock(&q->sysfs_lock);
 	queue_flag_set_unlocked(QUEUE_FLAG_DEAD, q);
-
 	spin_lock_irq(lock);

 	/*
···
 	queue_flag_set(QUEUE_FLAG_NOMERGES, q);
 	queue_flag_set(QUEUE_FLAG_NOXMERGES, q);
 	queue_flag_set(QUEUE_FLAG_DEAD, q);
-
-	if (q->queue_lock != &q->__queue_lock)
-		q->queue_lock = &q->__queue_lock;
-
 	spin_unlock_irq(lock);
 	mutex_unlock(&q->sysfs_lock);
···
 	/* @q won't process any more request, flush async actions */
 	del_timer_sync(&q->backing_dev_info.laptop_mode_wb_timer);
 	blk_sync_queue(q);
+
+	spin_lock_irq(lock);
+	if (q->queue_lock != &q->__queue_lock)
+		q->queue_lock = &q->__queue_lock;
+	spin_unlock_irq(lock);

 	/* @q is and will stay empty, shutdown and put */
 	blk_put_queue(q);
-41
block/blk-timeout.c
···
 	mod_timer(&q->timeout, expiry);
 }

-/**
- * blk_abort_queue -- Abort all request on given queue
- * @queue:	pointer to queue
- *
- */
-void blk_abort_queue(struct request_queue *q)
-{
-	unsigned long flags;
-	struct request *rq, *tmp;
-	LIST_HEAD(list);
-
-	/*
-	 * Not a request based block device, nothing to abort
-	 */
-	if (!q->request_fn)
-		return;
-
-	spin_lock_irqsave(q->queue_lock, flags);
-
-	elv_abort_queue(q);
-
-	/*
-	 * Splice entries to local list, to avoid deadlocking if entries
-	 * get readded to the timeout list by error handling
-	 */
-	list_splice_init(&q->timeout_list, &list);
-
-	list_for_each_entry_safe(rq, tmp, &list, timeout_list)
-		blk_abort_request(rq);
-
-	/*
-	 * Occasionally, blk_abort_request() will return without
-	 * deleting the element from the list. Make sure we add those back
-	 * instead of leaving them on the local stack list.
-	 */
-	list_splice(&list, &q->timeout_list);
-
-	spin_unlock_irqrestore(q->queue_lock, flags);
-
-}
-EXPORT_SYMBOL_GPL(blk_abort_queue);
···
 		break;
 	}

+	if (capable(CAP_SYS_RAWIO))
+		return 0;
+
 	/* In particular, rule out all resets and host-specific ioctls.  */
 	printk_ratelimited(KERN_WARNING
 			   "%s: sending ioctl %x to a partition!\n", current->comm, cmd);

-	return capable(CAP_SYS_RAWIO) ? 0 : -ENOIOCTLCMD;
+	return -ENOIOCTLCMD;
 }
 EXPORT_SYMBOL(scsi_verify_blk_ioctl);
+9-2
drivers/block/drbd/drbd_bitmap.c
···
 		first_word = 0;
 		spin_lock_irq(&b->bm_lock);
 	}
-
 	/* last page (respectively only page, for first page == last page) */
 	last_word = MLPP(el >> LN2_BPL);
-	bm_set_full_words_within_one_page(mdev->bitmap, last_page, first_word, last_word);
+
+	/* consider bitmap->bm_bits = 32768, bitmap->bm_number_of_pages = 1. (or multiples).
+	 * ==> e = 32767, el = 32768, last_page = 2,
+	 * and now last_word = 0.
+	 * We do not want to touch last_page in this case,
+	 * as we did not allocate it, it is not present in bitmap->bm_pages.
+	 */
+	if (last_word)
+		bm_set_full_words_within_one_page(mdev->bitmap, last_page, first_word, last_word);

 	/* possibly trailing bits.
 	 * example: (e & 63) == 63, el will be e+1.
+42-24
drivers/block/drbd/drbd_req.c
···
 		req->rq_state |= RQ_LOCAL_COMPLETED;
 		req->rq_state &= ~RQ_LOCAL_PENDING;

-		D_ASSERT(!(req->rq_state & RQ_NET_MASK));
+		if (req->rq_state & RQ_LOCAL_ABORTED) {
+			_req_may_be_done(req, m);
+			break;
+		}

 		__drbd_chk_io_error(mdev, false);

 	goto_queue_for_net_read:
+
+		D_ASSERT(!(req->rq_state & RQ_NET_MASK));

 		/* no point in retrying if there is no good remote data,
 		 * or we have no connection. */
···
 	return 0 == drbd_bm_count_bits(mdev, sbnr, ebnr);
 }

+static void maybe_pull_ahead(struct drbd_conf *mdev)
+{
+	int congested = 0;
+
+	/* If I don't even have good local storage, we can not reasonably try
+	 * to pull ahead of the peer. We also need the local reference to make
+	 * sure mdev->act_log is there.
+	 * Note: caller has to make sure that net_conf is there.
+	 */
+	if (!get_ldev_if_state(mdev, D_UP_TO_DATE))
+		return;
+
+	if (mdev->net_conf->cong_fill &&
+	    atomic_read(&mdev->ap_in_flight) >= mdev->net_conf->cong_fill) {
+		dev_info(DEV, "Congestion-fill threshold reached\n");
+		congested = 1;
+	}
+
+	if (mdev->act_log->used >= mdev->net_conf->cong_extents) {
+		dev_info(DEV, "Congestion-extents threshold reached\n");
+		congested = 1;
+	}
+
+	if (congested) {
+		queue_barrier(mdev); /* last barrier, after mirrored writes */
+
+		if (mdev->net_conf->on_congestion == OC_PULL_AHEAD)
+			_drbd_set_state(_NS(mdev, conn, C_AHEAD), 0, NULL);
+		else  /*mdev->net_conf->on_congestion == OC_DISCONNECT */
+			_drbd_set_state(_NS(mdev, conn, C_DISCONNECTING), 0, NULL);
+	}
+	put_ldev(mdev);
+}
+
 static int drbd_make_request_common(struct drbd_conf *mdev, struct bio *bio, unsigned long start_time)
 {
 	const int rw = bio_rw(bio);
···
 		_req_mod(req, queue_for_send_oos);

 	if (remote &&
-	    mdev->net_conf->on_congestion != OC_BLOCK && mdev->agreed_pro_version >= 96) {
-		int congested = 0;
-
-		if (mdev->net_conf->cong_fill &&
-		    atomic_read(&mdev->ap_in_flight) >= mdev->net_conf->cong_fill) {
-			dev_info(DEV, "Congestion-fill threshold reached\n");
-			congested = 1;
-		}
-
-		if (mdev->act_log->used >= mdev->net_conf->cong_extents) {
-			dev_info(DEV, "Congestion-extents threshold reached\n");
-			congested = 1;
-		}
-
-		if (congested) {
-			queue_barrier(mdev); /* last barrier, after mirrored writes */
-
-			if (mdev->net_conf->on_congestion == OC_PULL_AHEAD)
-				_drbd_set_state(_NS(mdev, conn, C_AHEAD), 0, NULL);
-			else  /*mdev->net_conf->on_congestion == OC_DISCONNECT */
-				_drbd_set_state(_NS(mdev, conn, C_DISCONNECTING), 0, NULL);
-		}
-	}
+	    mdev->net_conf->on_congestion != OC_BLOCK && mdev->agreed_pro_version >= 96)
+		maybe_pull_ahead(mdev);

 	spin_unlock_irq(&mdev->req_lock);
 	kfree(b); /* if someone else has beaten us to it... */
···
 #include <linux/kthread.h>
 #include <../drivers/ata/ahci.h>
 #include <linux/export.h>
+#include <linux/debugfs.h>
 #include "mtip32xx.h"

 #define HW_CMD_SLOT_SZ		(MTIP_MAX_COMMAND_SLOTS * 32)
···
  * allocated in mtip_init().
  */
 static int mtip_major;
+static struct dentry *dfs_parent;

 static DEFINE_SPINLOCK(rssd_index_lock);
 static DEFINE_IDA(rssd_index_ida);
···
 }

 /*
- * Sysfs register/status dump.
+ * Sysfs status dump.
  *
  * @dev  Pointer to the device structure, passed by the kernel.
  * @attr Pointer to the device_attribute structure passed by the kernel.
···
  * return value
  *	The size, in bytes, of the data copied into buf.
  */
-static ssize_t mtip_hw_show_registers(struct device *dev,
-				struct device_attribute *attr,
-				char *buf)
-{
-	u32 group_allocated;
-	struct driver_data *dd = dev_to_disk(dev)->private_data;
-	int size = 0;
-	int n;
-
-	size += sprintf(&buf[size], "Hardware\n--------\n");
-	size += sprintf(&buf[size], "S ACTive      : [ 0x");
-
-	for (n = dd->slot_groups-1; n >= 0; n--)
-		size += sprintf(&buf[size], "%08X ",
-					 readl(dd->port->s_active[n]));
-
-	size += sprintf(&buf[size], "]\n");
-	size += sprintf(&buf[size], "Command Issue : [ 0x");
-
-	for (n = dd->slot_groups-1; n >= 0; n--)
-		size += sprintf(&buf[size], "%08X ",
-					readl(dd->port->cmd_issue[n]));
-
-	size += sprintf(&buf[size], "]\n");
-	size += sprintf(&buf[size], "Completed     : [ 0x");
-
-	for (n = dd->slot_groups-1; n >= 0; n--)
-		size += sprintf(&buf[size], "%08X ",
-				readl(dd->port->completed[n]));
-
-	size += sprintf(&buf[size], "]\n");
-	size += sprintf(&buf[size], "PORT IRQ STAT : [ 0x%08X ]\n",
-				readl(dd->port->mmio + PORT_IRQ_STAT));
-	size += sprintf(&buf[size], "HOST IRQ STAT : [ 0x%08X ]\n",
-				readl(dd->mmio + HOST_IRQ_STAT));
-	size += sprintf(&buf[size], "\n");
-
-	size += sprintf(&buf[size], "Local\n-----\n");
-	size += sprintf(&buf[size], "Allocated    : [ 0x");
-
-	for (n = dd->slot_groups-1; n >= 0; n--) {
-		if (sizeof(long) > sizeof(u32))
-			group_allocated =
-				dd->port->allocated[n/2] >> (32*(n&1));
-		else
-			group_allocated = dd->port->allocated[n];
-		size += sprintf(&buf[size], "%08X ", group_allocated);
-	}
-	size += sprintf(&buf[size], "]\n");
-
-	size += sprintf(&buf[size], "Commands in Q: [ 0x");
-
-	for (n = dd->slot_groups-1; n >= 0; n--) {
-		if (sizeof(long) > sizeof(u32))
-			group_allocated =
-				dd->port->cmds_to_issue[n/2] >> (32*(n&1));
-		else
-			group_allocated = dd->port->cmds_to_issue[n];
-		size += sprintf(&buf[size], "%08X ", group_allocated);
-	}
-	size += sprintf(&buf[size], "]\n");
-
-	return size;
-}
-
 static ssize_t mtip_hw_show_status(struct device *dev,
 				struct device_attribute *attr,
 				char *buf)
···
 	return size;
 }

-static ssize_t mtip_hw_show_flags(struct device *dev,
-				struct device_attribute *attr,
-				char *buf)
+static DEVICE_ATTR(status, S_IRUGO, mtip_hw_show_status, NULL);
+
+static ssize_t mtip_hw_read_registers(struct file *f, char __user *ubuf,
+				  size_t len, loff_t *offset)
 {
-	struct driver_data *dd = dev_to_disk(dev)->private_data;
-	int size = 0;
+	struct driver_data *dd =  (struct driver_data *)f->private_data;
+	char buf[MTIP_DFS_MAX_BUF_SIZE];
+	u32 group_allocated;
+	int size = *offset;
+	int n;

-	size += sprintf(&buf[size], "Flag in port struct : [ %08lX ]\n",
-							dd->port->flags);
-	size += sprintf(&buf[size], "Flag in dd struct   : [ %08lX ]\n",
-							dd->dd_flag);
+	if (!len || size)
+		return 0;

-	return size;
+	if (size < 0)
+		return -EINVAL;
+
+	size += sprintf(&buf[size], "H/ S ACTive      : [ 0x");
+
+	for (n = dd->slot_groups-1; n >= 0; n--)
+		size += sprintf(&buf[size], "%08X ",
+					 readl(dd->port->s_active[n]));
+
+	size += sprintf(&buf[size], "]\n");
+	size += sprintf(&buf[size], "H/ Command Issue : [ 0x");
+
+	for (n = dd->slot_groups-1; n >= 0; n--)
+		size += sprintf(&buf[size], "%08X ",
+					readl(dd->port->cmd_issue[n]));
+
+	size += sprintf(&buf[size], "]\n");
+	size += sprintf(&buf[size], "H/ Completed     : [ 0x");
+
+	for (n = dd->slot_groups-1; n >= 0; n--)
+		size += sprintf(&buf[size], "%08X ",
+				readl(dd->port->completed[n]));
+
+	size += sprintf(&buf[size], "]\n");
+	size += sprintf(&buf[size], "H/ PORT IRQ STAT : [ 0x%08X ]\n",
+				readl(dd->port->mmio + PORT_IRQ_STAT));
+	size += sprintf(&buf[size], "H/ HOST IRQ STAT : [ 0x%08X ]\n",
+				readl(dd->mmio + HOST_IRQ_STAT));
+	size += sprintf(&buf[size], "\n");
+
+	size += sprintf(&buf[size], "L/ Allocated     : [ 0x");
+
+	for (n = dd->slot_groups-1; n >= 0; n--) {
+		if (sizeof(long) > sizeof(u32))
+			group_allocated =
+				dd->port->allocated[n/2] >> (32*(n&1));
+		else
+			group_allocated = dd->port->allocated[n];
+		size += sprintf(&buf[size], "%08X ", group_allocated);
+	}
+	size += sprintf(&buf[size], "]\n");
+
+	size += sprintf(&buf[size], "L/ Commands in Q : [ 0x");
+
+	for (n = dd->slot_groups-1; n >= 0; n--) {
+		if (sizeof(long) > sizeof(u32))
+			group_allocated =
+				dd->port->cmds_to_issue[n/2]
>> (32*(n&1));27022702+ else27032703+ group_allocated = dd->port->cmds_to_issue[n];27042704+ size += sprintf(&buf[size], "%08X ", group_allocated);27052705+ }27062706+ size += sprintf(&buf[size], "]\n");27072707+27082708+ *offset = size <= len ? size : len;27092709+ size = copy_to_user(ubuf, buf, *offset);27102710+ if (size)27112711+ return -EFAULT;27122712+27132713+ return *offset;25902714}2591271525922592-static DEVICE_ATTR(registers, S_IRUGO, mtip_hw_show_registers, NULL);25932593-static DEVICE_ATTR(status, S_IRUGO, mtip_hw_show_status, NULL);25942594-static DEVICE_ATTR(flags, S_IRUGO, mtip_hw_show_flags, NULL);27162716+static ssize_t mtip_hw_read_flags(struct file *f, char __user *ubuf,27172717+ size_t len, loff_t *offset)27182718+{27192719+ struct driver_data *dd = (struct driver_data *)f->private_data;27202720+ char buf[MTIP_DFS_MAX_BUF_SIZE];27212721+ int size = *offset;27222722+27232723+ if (!len || size)27242724+ return 0;27252725+27262726+ if (size < 0)27272727+ return -EINVAL;27282728+27292729+ size += sprintf(&buf[size], "Flag-port : [ %08lX ]\n",27302730+ dd->port->flags);27312731+ size += sprintf(&buf[size], "Flag-dd : [ %08lX ]\n",27322732+ dd->dd_flag);27332733+27342734+ *offset = size <= len ? 
size : len;27352735+ size = copy_to_user(ubuf, buf, *offset);27362736+ if (size)27372737+ return -EFAULT;27382738+27392739+ return *offset;27402740+}27412741+27422742+static const struct file_operations mtip_regs_fops = {27432743+ .owner = THIS_MODULE,27442744+ .open = simple_open,27452745+ .read = mtip_hw_read_registers,27462746+ .llseek = no_llseek,27472747+};27482748+27492749+static const struct file_operations mtip_flags_fops = {27502750+ .owner = THIS_MODULE,27512751+ .open = simple_open,27522752+ .read = mtip_hw_read_flags,27532753+ .llseek = no_llseek,27542754+};2595275525962756/*25972757 * Create the sysfs related attributes.···27052671 if (!kobj || !dd)27062672 return -EINVAL;2707267327082708- if (sysfs_create_file(kobj, &dev_attr_registers.attr))27092709- dev_warn(&dd->pdev->dev,27102710- "Error creating 'registers' sysfs entry\n");27112674 if (sysfs_create_file(kobj, &dev_attr_status.attr))27122675 dev_warn(&dd->pdev->dev,27132676 "Error creating 'status' sysfs entry\n");27142714- if (sysfs_create_file(kobj, &dev_attr_flags.attr))27152715- dev_warn(&dd->pdev->dev,27162716- "Error creating 'flags' sysfs entry\n");27172677 return 0;27182678}27192679···27262698 if (!kobj || !dd)27272699 return -EINVAL;2728270027292729- sysfs_remove_file(kobj, &dev_attr_registers.attr);27302701 sysfs_remove_file(kobj, &dev_attr_status.attr);27312731- sysfs_remove_file(kobj, &dev_attr_flags.attr);2732270227332703 return 0;27342704}27052705+27062706+static int mtip_hw_debugfs_init(struct driver_data *dd)27072707+{27082708+ if (!dfs_parent)27092709+ return -1;27102710+27112711+ dd->dfs_node = debugfs_create_dir(dd->disk->disk_name, dfs_parent);27122712+ if (IS_ERR_OR_NULL(dd->dfs_node)) {27132713+ dev_warn(&dd->pdev->dev,27142714+ "Error creating node %s under debugfs\n",27152715+ dd->disk->disk_name);27162716+ dd->dfs_node = NULL;27172717+ return -1;27182718+ }27192719+27202720+ debugfs_create_file("flags", S_IRUGO, dd->dfs_node, dd,27212721+ &mtip_flags_fops);27222722+ 
debugfs_create_file("registers", S_IRUGO, dd->dfs_node, dd,27232723+ &mtip_regs_fops);27242724+27252725+ return 0;27262726+}27272727+27282728+static void mtip_hw_debugfs_exit(struct driver_data *dd)27292729+{27302730+ debugfs_remove_recursive(dd->dfs_node);27312731+}27322732+2735273327362734/*27372735 * Perform any init/resume time hardware setup···37843730 mtip_hw_sysfs_init(dd, kobj);37853731 kobject_put(kobj);37863732 }37333733+ mtip_hw_debugfs_init(dd);3787373437883735 if (dd->mtip_svc_handler) {37893736 set_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag);···38103755 return rv;3811375638123757kthread_run_error:37583758+ mtip_hw_debugfs_exit(dd);37593759+38133760 /* Delete our gendisk. This also removes the device from /dev */38143761 del_gendisk(dd->disk);38153762···38623805 kobject_put(kobj);38633806 }38643807 }38083808+ mtip_hw_debugfs_exit(dd);3865380938663810 /*38673811 * Delete our gendisk structure. This also removes the device···42104152 }42114153 mtip_major = error;4212415441554155+ if (!dfs_parent) {41564156+ dfs_parent = debugfs_create_dir("rssd", NULL);41574157+ if (IS_ERR_OR_NULL(dfs_parent)) {41584158+ printk(KERN_WARNING "Error creating debugfs parent\n");41594159+ dfs_parent = NULL;41604160+ }41614161+ }41624162+42134163 /* Register our PCI operations. */42144164 error = pci_register_driver(&mtip_pci_driver);42154215- if (error)41654165+ if (error) {41664166+ debugfs_remove(dfs_parent);42164167 unregister_blkdev(mtip_major, MTIP_DRV_NAME);41684168+ }4217416942184170 return error;42194171}···42404172 */42414173static void __exit mtip_exit(void)42424174{41754175+ debugfs_remove_recursive(dfs_parent);41764176+42434177 /* Release the allocated major block device number. */42444178 unregister_blkdev(mtip_major, MTIP_DRV_NAME);42454179
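The converted debugfs read handlers above follow a "single-shot" contract: the whole dump is formatted into a stack buffer, at most `len` bytes are copied out on the first `read()`, and any later call (non-zero `*offset`) returns 0 for EOF. A minimal userspace sketch of that contract, with `snprintf`/`memcpy` standing in for the kernel's `sprintf`/`copy_to_user` and all names hypothetical:

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the single-shot read pattern used by the mtip debugfs
 * handlers: build the full dump locally, hand back at most `len`
 * bytes on the first call, and signal EOF (0) on any later call. */
static long single_shot_read(char *ubuf, size_t len, long *offset,
			     const char *dump)
{
	char buf[1024];
	size_t size;

	if (!len || *offset)	/* second read(): report EOF */
		return 0;

	size = (size_t)snprintf(buf, sizeof(buf), "%s", dump);

	*offset = size <= len ? (long)size : (long)len;
	memcpy(ubuf, buf, (size_t)*offset);	/* copy_to_user() stand-in */
	return *offset;
}
```

One consequence of this design (shared by the real handlers) is that a dump larger than the caller's buffer is silently truncated rather than resumed on the next read, which is acceptable for small fixed-size debug dumps.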
+4-1
drivers/block/mtip32xx/mtip32xx.h
···
 #include <linux/ata.h>
 #include <linux/interrupt.h>
 #include <linux/genhd.h>
-#include <linux/version.h>

 /* Offset of Subsystem Device ID in pci confoguration space */
 #define PCI_SUBSYSTEM_DEVICEID	0x2E
···
 #else
 #define dbg_printk(format, arg...)
 #endif
+
+#define MTIP_DFS_MAX_BUF_SIZE	1024

 #define __force_bit2int	(unsigned int __force)
···
 	unsigned long dd_flag; /* NOTE: use atomic bit operations on this */

 	struct task_struct *mtip_svc_handler; /* task_struct of svc thd */
+
+	struct dentry *dfs_node;
 };

 #endif
drivers/block/xen-blkfront.c
···
 		return free;
 }

-static void add_id_to_freelist(struct blkfront_info *info,
+static int add_id_to_freelist(struct blkfront_info *info,
 			       unsigned long id)
 {
+	if (info->shadow[id].req.u.rw.id != id)
+		return -EINVAL;
+	if (info->shadow[id].request == NULL)
+		return -EINVAL;
 	info->shadow[id].req.u.rw.id  = info->shadow_free;
 	info->shadow[id].request = NULL;
 	info->shadow_free = id;
+	return 0;
 }

+static const char *op_name(int op)
+{
+	static const char *const names[] = {
+		[BLKIF_OP_READ] = "read",
+		[BLKIF_OP_WRITE] = "write",
+		[BLKIF_OP_WRITE_BARRIER] = "barrier",
+		[BLKIF_OP_FLUSH_DISKCACHE] = "flush",
+		[BLKIF_OP_DISCARD] = "discard" };
+
+	if (op < 0 || op >= ARRAY_SIZE(names))
+		return "unknown";
+
+	if (!names[op])
+		return "reserved";
+
+	return names[op];
+}
 static int xlbd_reserve_minors(unsigned int minor, unsigned int nr)
 {
 	unsigned int end = minor + nr;
···
 		bret = RING_GET_RESPONSE(&info->ring, i);
 		id   = bret->id;
+		/*
+		 * The backend has messed up and given us an id that we would
+		 * never have given to it (we stamp it up to BLK_RING_SIZE -
+		 * look in get_id_from_freelist.
+		 */
+		if (id >= BLK_RING_SIZE) {
+			WARN(1, "%s: response to %s has incorrect id (%ld)\n",
+			     info->gd->disk_name, op_name(bret->operation), id);
+			/* We can't safely get the 'struct request' as
+			 * the id is busted. */
+			continue;
+		}
 		req  = info->shadow[id].request;

 		if (bret->operation != BLKIF_OP_DISCARD)
 			blkif_completion(&info->shadow[id]);

-		add_id_to_freelist(info, id);
+		if (add_id_to_freelist(info, id)) {
+			WARN(1, "%s: response to %s (id %ld) couldn't be recycled!\n",
+			     info->gd->disk_name, op_name(bret->operation), id);
+			continue;
+		}

 		error = (bret->status == BLKIF_RSP_OKAY) ? 0 : -EIO;
 		switch (bret->operation) {
 		case BLKIF_OP_DISCARD:
 			if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
 				struct request_queue *rq = info->rq;
-				printk(KERN_WARNING "blkfront: %s: discard op failed\n",
-					   info->gd->disk_name);
+				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
+					   info->gd->disk_name, op_name(bret->operation));
 				error = -EOPNOTSUPP;
 				info->feature_discard = 0;
 				info->feature_secdiscard = 0;
···
 		case BLKIF_OP_FLUSH_DISKCACHE:
 		case BLKIF_OP_WRITE_BARRIER:
 			if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
-				printk(KERN_WARNING "blkfront: %s: write %s op failed\n",
-				       info->flush_op == BLKIF_OP_WRITE_BARRIER ?
-				       "barrier" : "flush disk cache",
-				       info->gd->disk_name);
+				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
+				       info->gd->disk_name, op_name(bret->operation));
 				error = -EOPNOTSUPP;
 			}
 			if (unlikely(bret->status == BLKIF_RSP_ERROR &&
 				     info->shadow[id].req.u.rw.nr_segments == 0)) {
-				printk(KERN_WARNING "blkfront: %s: empty write %s op failed\n",
-				       info->flush_op == BLKIF_OP_WRITE_BARRIER ?
-				       "barrier" : "flush disk cache",
-				       info->gd->disk_name);
+				printk(KERN_WARNING "blkfront: %s: empty %s op failed\n",
+				       info->gd->disk_name, op_name(bret->operation));
 				error = -EOPNOTSUPP;
 			}
 			if (unlikely(error)) {
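The new `op_name()` helper is a defensive lookup over a sparse, designated-initializer table: an untrusted opcode from the backend can never index off the end, and holes left by unassigned opcodes map to a sentinel string. A standalone sketch of the same shape, with hypothetical opcode values standing in for the `BLKIF_OP_*` constants:

```c
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Hypothetical opcodes standing in for BLKIF_OP_*. */
enum { OP_READ = 0, OP_WRITE = 1, OP_DISCARD = 5 };

/* Same shape as blkfront's op_name(): sparse table, explicit bounds
 * check, and a NULL-hole check for opcodes with no assigned name. */
static const char *op_name(int op)
{
	static const char *const names[] = {
		[OP_READ]    = "read",
		[OP_WRITE]   = "write",
		[OP_DISCARD] = "discard",
	};

	if (op < 0 || op >= (int)ARRAY_SIZE(names))
		return "unknown";	/* out of table range entirely */
	if (!names[op])
		return "reserved";	/* hole in the sparse table */
	return names[op];
}
```

Because the array is sized by its largest designated index, the bounds check plus the NULL check together cover every possible `int` input, which is the property the driver relies on when logging a corrupted response.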
+13-15
drivers/clk/clk.c
···
 	old_parent = clk->parent;

-	/* find index of new parent clock using cached parent ptrs */
-	if (clk->parents)
-		for (i = 0; i < clk->num_parents; i++)
-			if (clk->parents[i] == parent)
-				break;
-	else
+	if (!clk->parents)
 		clk->parents = kzalloc((sizeof(struct clk*) * clk->num_parents),
 				GFP_KERNEL);

 	/*
-	 * find index of new parent clock using string name comparison
-	 * also try to cache the parent to avoid future calls to __clk_lookup
+	 * find index of new parent clock using cached parent ptrs,
+	 * or if not yet cached, use string name comparison and cache
+	 * them now to avoid future calls to __clk_lookup.
 	 */
-	if (i == clk->num_parents)
-		for (i = 0; i < clk->num_parents; i++)
-			if (!strcmp(clk->parent_names[i], parent->name)) {
-				if (clk->parents)
-					clk->parents[i] = __clk_lookup(parent->name);
-				break;
-			}
+	for (i = 0; i < clk->num_parents; i++) {
+		if (clk->parents && clk->parents[i] == parent)
+			break;
+		else if (!strcmp(clk->parent_names[i], parent->name)) {
+			if (clk->parents)
+				clk->parents[i] = __clk_lookup(parent->name);
+			break;
+		}
+	}

 	if (i == clk->num_parents) {
 		pr_debug("%s: clock %s is not a possible parent of clock %s\n",
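The removed code had a dangling-else bug (the `else` bound to the inner `if`, not the intended `if (clk->parents)`); the fix collapses the cached-pointer pass and the name-comparison fallback into one loop that also populates the cache on a name hit. A userspace sketch of the merged lookup, with all struct fields and names hypothetical stand-ins for the clk framework's:

```c
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in for struct clk; names are hypothetical. */
struct clk {
	const char *name;
	unsigned int num_parents;
	const char *const *parent_names;
	struct clk **parents;	/* lazily allocated cache, may be NULL */
};

/* One pass: match either the cached pointer or the parent name,
 * caching the pointer on a name match (stand-in for __clk_lookup). */
static int find_parent_index(struct clk *clk, struct clk *parent)
{
	unsigned int i;

	if (!clk->parents)
		clk->parents = calloc(clk->num_parents, sizeof(*clk->parents));

	for (i = 0; i < clk->num_parents; i++) {
		if (clk->parents && clk->parents[i] == parent)
			return (int)i;	/* cache hit */
		if (!strcmp(clk->parent_names[i], parent->name)) {
			if (clk->parents)
				clk->parents[i] = parent;	/* cache it */
			return (int)i;
		}
	}
	return -1;	/* not a possible parent */
}
```

The second call for the same parent takes the cheap pointer-comparison branch, which is the point of caching the lookup result.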
+24-3
drivers/gpu/drm/drm_edid.c
···
 	return true;
 }

+static bool valid_inferred_mode(const struct drm_connector *connector,
+				const struct drm_display_mode *mode)
+{
+	struct drm_display_mode *m;
+	bool ok = false;
+
+	list_for_each_entry(m, &connector->probed_modes, head) {
+		if (mode->hdisplay == m->hdisplay &&
+		    mode->vdisplay == m->vdisplay &&
+		    drm_mode_vrefresh(mode) == drm_mode_vrefresh(m))
+			return false; /* duplicated */
+		if (mode->hdisplay <= m->hdisplay &&
+		    mode->vdisplay <= m->vdisplay)
+			ok = true;
+	}
+	return ok;
+}
+
 static int
 drm_dmt_modes_for_range(struct drm_connector *connector, struct edid *edid,
 			struct detailed_timing *timing)
···
 	struct drm_device *dev = connector->dev;

 	for (i = 0; i < drm_num_dmt_modes; i++) {
-		if (mode_in_range(drm_dmt_modes + i, edid, timing)) {
+		if (mode_in_range(drm_dmt_modes + i, edid, timing) &&
+		    valid_inferred_mode(connector, drm_dmt_modes + i)) {
 			newmode = drm_mode_duplicate(dev, &drm_dmt_modes[i]);
 			if (newmode) {
 				drm_mode_probed_add(connector, newmode);
···
 		return modes;

 	fixup_mode_1366x768(newmode);
-	if (!mode_in_range(newmode, edid, timing)) {
+	if (!mode_in_range(newmode, edid, timing) ||
+	    !valid_inferred_mode(connector, newmode)) {
 		drm_mode_destroy(dev, newmode);
 		continue;
 	}
···
 		return modes;

 	fixup_mode_1366x768(newmode);
-	if (!mode_in_range(newmode, edid, timing)) {
+	if (!mode_in_range(newmode, edid, timing) ||
+	    !valid_inferred_mode(connector, newmode)) {
 		drm_mode_destroy(dev, newmode);
 		continue;
 	}
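The new `valid_inferred_mode()` filter accepts an inferred mode only if it is not a duplicate of an already-probed mode and fits inside at least one probed mode. A standalone sketch of the same predicate over a plain array instead of the connector's probed-mode list (the flattened `struct mode` is a hypothetical stand-in for `drm_display_mode`):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical flattened stand-in for drm_display_mode. */
struct mode {
	int hdisplay, vdisplay, vrefresh;
};

/* Reject duplicates of a probed mode; accept only modes no larger
 * than some probed mode (same logic as valid_inferred_mode above). */
static bool valid_inferred(const struct mode *probed, size_t n,
			   const struct mode *m)
{
	bool ok = false;
	size_t i;

	for (i = 0; i < n; i++) {
		if (m->hdisplay == probed[i].hdisplay &&
		    m->vdisplay == probed[i].vdisplay &&
		    m->vrefresh == probed[i].vrefresh)
			return false;	/* duplicated */
		if (m->hdisplay <= probed[i].hdisplay &&
		    m->vdisplay <= probed[i].vdisplay)
			ok = true;	/* fits inside a probed mode */
	}
	return ok;
}
```

Note the asymmetry: a duplicate is rejected immediately, but "fits" is only a latch (`ok = true`) that can still be overturned by a later duplicate match, exactly as in the list-walking original.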
+30-7
drivers/gpu/drm/i915/i915_dma.c
···
 	}
 }

+static void i915_kick_out_firmware_fb(struct drm_i915_private *dev_priv)
+{
+	struct apertures_struct *ap;
+	struct pci_dev *pdev = dev_priv->dev->pdev;
+	bool primary;
+
+	ap = alloc_apertures(1);
+	if (!ap)
+		return;
+
+	ap->ranges[0].base = dev_priv->dev->agp->base;
+	ap->ranges[0].size =
+		dev_priv->mm.gtt->gtt_mappable_entries << PAGE_SHIFT;
+	primary =
+		pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
+
+	remove_conflicting_framebuffers(ap, "inteldrmfb", primary);
+
+	kfree(ap);
+}
+
 /**
  * i915_driver_load - setup chip and create an initial config
  * @dev: DRM device
···
 		goto free_priv;
 	}

+	dev_priv->mm.gtt = intel_gtt_get();
+	if (!dev_priv->mm.gtt) {
+		DRM_ERROR("Failed to initialize GTT\n");
+		ret = -ENODEV;
+		goto put_bridge;
+	}
+
+	i915_kick_out_firmware_fb(dev_priv);
+
 	pci_set_master(dev->pdev);

 	/* overlay on gen2 is broken and can't address above 1G */
···
 		DRM_ERROR("failed to map registers\n");
 		ret = -EIO;
 		goto put_bridge;
-	}
-
-	dev_priv->mm.gtt = intel_gtt_get();
-	if (!dev_priv->mm.gtt) {
-		DRM_ERROR("Failed to initialize GTT\n");
-		ret = -ENODEV;
-		goto out_rmmap;
 	}

 	aperture_size = dev_priv->mm.gtt->gtt_mappable_entries << PAGE_SHIFT;
+11-2
drivers/gpu/drm/radeon/radeon_gart.c
···
 	rdev->vm_manager.enabled = false;

 	/* mark first vm as always in use, it's the system one */
+	/* allocate enough for 2 full VM pts */
 	r = radeon_sa_bo_manager_init(rdev, &rdev->vm_manager.sa_manager,
-				      rdev->vm_manager.max_pfn * 8,
+				      rdev->vm_manager.max_pfn * 8 * 2,
 				      RADEON_GEM_DOMAIN_VRAM);
 	if (r) {
 		dev_err(rdev->dev, "failed to allocate vm bo (%dKB)\n",
···
 	mutex_init(&vm->mutex);
 	INIT_LIST_HEAD(&vm->list);
 	INIT_LIST_HEAD(&vm->va);
-	vm->last_pfn = 0;
+	/* SI requires equal sized PTs for all VMs, so always set
+	 * last_pfn to max_pfn.  cayman allows variable sized
+	 * pts so we can grow then as needed.  Once we switch
+	 * to two level pts we can unify this again.
+	 */
+	if (rdev->family >= CHIP_TAHITI)
+		vm->last_pfn = rdev->vm_manager.max_pfn;
+	else
+		vm->last_pfn = 0;
 	/* map the ib pool buffer at 0 in virtual address space, set
 	 * read only
 	 */
+6-4
drivers/gpu/drm/radeon/radeon_gem.c
···
 int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
 			  struct drm_file *filp)
 {
+	struct radeon_device *rdev = dev->dev_private;
 	struct drm_radeon_gem_busy *args = data;
 	struct drm_gem_object *gobj;
 	struct radeon_bo *robj;
···
 		break;
 	}
 	drm_gem_object_unreference_unlocked(gobj);
-	r = radeon_gem_handle_lockup(robj->rdev, r);
+	r = radeon_gem_handle_lockup(rdev, r);
 	return r;
 }

 int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
 			      struct drm_file *filp)
 {
+	struct radeon_device *rdev = dev->dev_private;
 	struct drm_radeon_gem_wait_idle *args = data;
 	struct drm_gem_object *gobj;
 	struct radeon_bo *robj;
···
 	robj = gem_to_radeon_bo(gobj);
 	r = radeon_bo_wait(robj, NULL, false);
 	/* callback hw specific functions if any */
-	if (robj->rdev->asic->ioctl_wait_idle)
-		robj->rdev->asic->ioctl_wait_idle(robj->rdev, robj);
+	if (rdev->asic->ioctl_wait_idle)
+		robj->rdev->asic->ioctl_wait_idle(rdev, robj);
 	drm_gem_object_unreference_unlocked(gobj);
-	r = radeon_gem_handle_lockup(robj->rdev, r);
+	r = radeon_gem_handle_lockup(rdev, r);
 	return r;
 }
+2-2
drivers/gpu/drm/radeon/si.c
···
 	WREG32(0x15DC, 0);

 	/* empty context1-15 */
-	/* FIXME start with 1G, once using 2 level pt switch to full
+	/* FIXME start with 4G, once using 2 level pt switch to full
 	 * vm size space
 	 */
 	/* set vm size, must be a multiple of 4 */
 	WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
-	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, (1 << 30) / RADEON_GPU_PAGE_SIZE);
+	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
 	for (i = 1; i < 16; i++) {
 		if (i < 8)
 			WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
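The sizing arithmetic behind the radeon changes above is simple but easy to misread: the page-table allocation is one 8-byte entry per GPU page (`max_pfn * 8`), and the gart fix doubles it to hold two full sets of tables. A small sketch of that calculation (function name and parameters are hypothetical, used only to make the arithmetic explicit):

```c
#include <stdint.h>

/* One 8-byte page-table entry per GPU page, times the number of
 * full table sets the SA manager must hold (2 after the fix). */
static uint64_t vm_pt_bytes(uint64_t vm_size_bytes, uint64_t page_size,
			    unsigned int num_tables)
{
	uint64_t max_pfn = vm_size_bytes / page_size;

	return max_pfn * 8 * num_tables;	/* 8 bytes per PTE */
}
```

For a 1 GiB VM space with 4 KiB pages this is 262144 PTEs, i.e. 2 MiB per table set, which shows why doubling the reservation is cheap relative to VRAM.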
drivers/md/persistent-data/dm-transaction-manager.c
···
 }

 void dm_tm_destroy(struct dm_transaction_manager *tm)
 {
+	if (!tm->is_clone)
+		wipe_shadow_table(tm);
+
 	kfree(tm);
 }
 EXPORT_SYMBOL_GPL(dm_tm_destroy);
···
 		}

 		*sm = dm_sm_checker_create(inner);
-		if (!*sm)
+		if (IS_ERR(*sm)) {
+			r = PTR_ERR(*sm);
 			goto bad2;
+		}

 	} else {
 		r = dm_bm_write_lock(dm_tm_get_bm(*tm), sb_location,
···
 		}

 		*sm = dm_sm_checker_create(inner);
-		if (!*sm)
+		if (IS_ERR(*sm)) {
+			r = PTR_ERR(*sm);
 			goto bad2;
+		}
 	}

 	return 0;
+5-8
drivers/md/raid1.c
···
 		int bad_sectors;

 		int disk = start_disk + i;
-		if (disk >= conf->raid_disks)
-			disk -= conf->raid_disks;
+		if (disk >= conf->raid_disks * 2)
+			disk -= conf->raid_disks * 2;

 		rdev = rcu_dereference(conf->mirrors[disk].rdev);
 		if (r1_bio->bios[disk] == IO_BLOCKED
···
 	const unsigned long do_sync = (bio->bi_rw & REQ_SYNC);
 	const unsigned long do_flush_fua = (bio->bi_rw & (REQ_FLUSH | REQ_FUA));
 	struct md_rdev *blocked_rdev;
-	int plugged;
 	int first_clone;
 	int sectors_handled;
 	int max_sectors;
···
 	 * the bad blocks.  Each set of writes gets it's own r1bio
 	 * with a set of bios attached.
 	 */
-	plugged = mddev_check_plugged(mddev);

 	disks = conf->raid_disks * 2;
 retry_write:
···
 		bio_list_add(&conf->pending_bio_list, mbio);
 		conf->pending_count++;
 		spin_unlock_irqrestore(&conf->device_lock, flags);
+		if (!mddev_check_plugged(mddev))
+			md_wakeup_thread(mddev->thread);
 	}
 	/* Mustn't call r1_bio_write_done before this next test,
 	 * as it could result in the bio being freed.
···

 	/* In case raid1d snuck in to freeze_array */
 	wake_up(&conf->wait_barrier);
-
-	if (do_sync || !bitmap || !plugged)
-		md_wakeup_thread(mddev->thread);
 }

 static void status(struct seq_file *seq, struct mddev *mddev)
···
 		goto abort;
 	}
 	err = -ENOMEM;
-	conf->thread = md_register_thread(raid1d, mddev, NULL);
+	conf->thread = md_register_thread(raid1d, mddev, "raid1");
 	if (!conf->thread) {
 		printk(KERN_ERR
 		       "md/raid1:%s: couldn't allocate thread\n",
+16-10
drivers/md/raid10.c
···
 	const unsigned long do_fua = (bio->bi_rw & REQ_FUA);
 	unsigned long flags;
 	struct md_rdev *blocked_rdev;
-	int plugged;
 	int sectors_handled;
 	int max_sectors;
 	int sectors;
···
 	 * of r10_bios is recored in bio->bi_phys_segments just as with
 	 * the read case.
 	 */
-	plugged = mddev_check_plugged(mddev);

 	r10_bio->read_slot = -1; /* make sure repl_bio gets freed */
 	raid10_find_phys(conf, r10_bio);
···
 		bio_list_add(&conf->pending_bio_list, mbio);
 		conf->pending_count++;
 		spin_unlock_irqrestore(&conf->device_lock, flags);
+		if (!mddev_check_plugged(mddev))
+			md_wakeup_thread(mddev->thread);

 		if (!r10_bio->devs[i].repl_bio)
 			continue;
···
 		bio_list_add(&conf->pending_bio_list, mbio);
 		conf->pending_count++;
 		spin_unlock_irqrestore(&conf->device_lock, flags);
+		if (!mddev_check_plugged(mddev))
+			md_wakeup_thread(mddev->thread);
 	}

 	/* Don't remove the bias on 'remaining' (one_write_done) until
···

 	/* In case raid10d snuck in to freeze_array */
 	wake_up(&conf->wait_barrier);
-
-	if (do_sync || !mddev->bitmap || !plugged)
-		md_wakeup_thread(mddev->thread);
 }

 static void status(struct seq_file *seq, struct mddev *mddev)
···
 			if (r10_sync_page_io(rdev,
 					     r10_bio->devs[sl].addr +
 					     sect,
-					     s<<9, conf->tmppage, WRITE)
+					     s, conf->tmppage, WRITE)
 			    == 0) {
 				/* Well, this device is dead */
 				printk(KERN_NOTICE
···
 			switch (r10_sync_page_io(rdev,
 					     r10_bio->devs[sl].addr +
 					     sect,
-					     s<<9, conf->tmppage,
+					     s, conf->tmppage,
 						 READ)) {
 			case 0:
 				/* Well, this device is dead */
···
 		slot = r10_bio->read_slot;
 		printk_ratelimited(
 			KERN_ERR
-			"md/raid10:%s: %s: redirecting"
+			"md/raid10:%s: %s: redirecting "
 			"sector %llu to another mirror\n",
 			mdname(mddev),
 			bdevname(rdev->bdev, b),
···
 	blk_start_plug(&plug);
 	for (;;) {

-		flush_pending_writes(conf);
+		if (atomic_read(&mddev->plug_cnt) == 0)
+			flush_pending_writes(conf);

 		spin_lock_irqsave(&conf->device_lock, flags);
 		if (list_empty(head)) {
···
 			/* want to reconstruct this device */
 			rb2 = r10_bio;
 			sect = raid10_find_virt(conf, sector_nr, i);
+			if (sect >= mddev->resync_max_sectors) {
+				/* last stripe is not complete - don't
+				 * try to recover this sector.
+				 */
+				continue;
+			}
 			/* Unless we are doing a full sync, or a replacement
 			 * we only need to recover the block if it is set in
 			 * the bitmap
···
 	spin_lock_init(&conf->resync_lock);
 	init_waitqueue_head(&conf->wait_barrier);

-	conf->thread = md_register_thread(raid10d, mddev, NULL);
+	conf->thread = md_register_thread(raid10d, mddev, "raid10");
 	if (!conf->thread)
 		goto out;
+47-20
drivers/md/raid5.c
···
 		BUG_ON(!list_empty(&sh->lru));
 		BUG_ON(atomic_read(&conf->active_stripes)==0);
 		if (test_bit(STRIPE_HANDLE, &sh->state)) {
-			if (test_bit(STRIPE_DELAYED, &sh->state))
+			if (test_bit(STRIPE_DELAYED, &sh->state) &&
+			    !test_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
 				list_add_tail(&sh->lru, &conf->delayed_list);
 			else if (test_bit(STRIPE_BIT_DELAY, &sh->state) &&
 				   sh->bm_seq - conf->seq_write > 0)
 				list_add_tail(&sh->lru, &conf->bitmap_list);
 			else {
+				clear_bit(STRIPE_DELAYED, &sh->state);
 				clear_bit(STRIPE_BIT_DELAY, &sh->state);
 				list_add_tail(&sh->lru, &conf->handle_list);
 			}
···
 				 * a chance*/
 				md_check_recovery(conf->mddev);
 			}
+			/*
+			 * Because md_wait_for_blocked_rdev
+			 * will dec nr_pending, we must
+			 * increment it first.
+			 */
+			atomic_inc(&rdev->nr_pending);
 			md_wait_for_blocked_rdev(rdev, conf->mddev);
 		} else {
 			/* Acknowledged bad block - skip the write */
···
 	} else {
 		const char *bdn = bdevname(rdev->bdev, b);
 		int retry = 0;
+		int set_bad = 0;

 		clear_bit(R5_UPTODATE, &sh->dev[i].flags);
 		atomic_inc(&rdev->read_errors);
···
 				mdname(conf->mddev),
 				(unsigned long long)s,
 				bdn);
-		else if (conf->mddev->degraded >= conf->max_degraded)
+		else if (conf->mddev->degraded >= conf->max_degraded) {
+			set_bad = 1;
 			printk_ratelimited(
 				KERN_WARNING
 				"md/raid:%s: read error not correctable "
···
 				mdname(conf->mddev),
 				(unsigned long long)s,
 				bdn);
-		else if (test_bit(R5_ReWrite, &sh->dev[i].flags))
+		} else if (test_bit(R5_ReWrite, &sh->dev[i].flags)) {
 			/* Oh, no!!! */
+			set_bad = 1;
 			printk_ratelimited(
 				KERN_WARNING
 				"md/raid:%s: read error NOT corrected!! "
···
 				mdname(conf->mddev),
 				(unsigned long long)s,
 				bdn);
-		else if (atomic_read(&rdev->read_errors)
+		} else if (atomic_read(&rdev->read_errors)
 			 > conf->max_nr_stripes)
 			printk(KERN_WARNING
 			       "md/raid:%s: Too many read errors, failing device %s.\n",
···
 		else {
 			clear_bit(R5_ReadError, &sh->dev[i].flags);
 			clear_bit(R5_ReWrite, &sh->dev[i].flags);
-			md_error(conf->mddev, rdev);
+			if (!(set_bad
+			      && test_bit(In_sync, &rdev->flags)
+			      && rdev_set_badblocks(
+				      rdev, sh->sector, STRIPE_SECTORS, 0)))
+				md_error(conf->mddev, rdev);
 		}
 	}
 	rdev_dec_pending(rdev, conf->mddev);
···

 finish:
 	/* wait for this device to become unblocked */
-	if (conf->mddev->external && unlikely(s.blocked_rdev))
-		md_wait_for_blocked_rdev(s.blocked_rdev, conf->mddev);
+	if (unlikely(s.blocked_rdev)) {
+		if (conf->mddev->external)
+			md_wait_for_blocked_rdev(s.blocked_rdev,
+						 conf->mddev);
+		else
+			/* Internal metadata will immediately
+			 * be written by raid5d, so we don't
+			 * need to wait here.
+			 */
+			rdev_dec_pending(s.blocked_rdev,
+					 conf->mddev);
+	}

 	if (s.handle_bad_blocks)
 		for (i = disks; i--; ) {
···
 		raid_bio->bi_next = (void*)rdev;
 		align_bi->bi_bdev =  rdev->bdev;
 		align_bi->bi_flags &= ~(1 << BIO_SEG_VALID);
-		/* No reshape active, so we can trust rdev->data_offset */
-		align_bi->bi_sector += rdev->data_offset;

 		if (!bio_fits_rdev(align_bi) ||
 		    is_badblock(rdev, align_bi->bi_sector, align_bi->bi_size>>9,
···
 			rdev_dec_pending(rdev, mddev);
 			return 0;
 		}
+
+		/* No reshape active, so we can trust rdev->data_offset */
+		align_bi->bi_sector += rdev->data_offset;

 		spin_lock_irq(&conf->device_lock);
 		wait_event_lock_irq(conf->wait_for_stripe,
···
 	struct stripe_head *sh;
 	const int rw = bio_data_dir(bi);
 	int remaining;
-	int plugged;

 	if (unlikely(bi->bi_rw & REQ_FLUSH)) {
 		md_flush_request(mddev, bi);
···
 	bi->bi_next = NULL;
 	bi->bi_phys_segments = 1;	/* over-loaded to count active stripes */

-	plugged = mddev_check_plugged(mddev);
 	for (;logical_sector < last_sector; logical_sector += STRIPE_SECTORS) {
 		DEFINE_WAIT(w);
 		int previous;
···
 			if ((bi->bi_rw & REQ_SYNC) &&
 			    !test_and_set_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
 				atomic_inc(&conf->preread_active_stripes);
+			mddev_check_plugged(mddev);
 			release_stripe(sh);
 		} else {
 			/* cannot get stripe for read-ahead, just give-up */
···
 			finish_wait(&conf->wait_for_overlap, &w);
 			break;
 		}
-
 	}
-	if (!plugged)
-		md_wakeup_thread(mddev->thread);

 	spin_lock_irq(&conf->device_lock);
 	remaining = raid5_dec_bi_phys_segments(bi);
···
 	int raid_disk, memory, max_disks;
 	struct md_rdev *rdev;
 	struct disk_info *disk;
+	char pers_name[6];

 	if (mddev->new_level != 5
 	    && mddev->new_level != 4
···
 		printk(KERN_INFO "md/raid:%s: allocated %dkB\n",
 		       mdname(mddev), memory);

-	conf->thread = md_register_thread(raid5d, mddev, NULL);
+	sprintf(pers_name, "raid%d", mddev->new_level);
+	conf->thread = md_register_thread(raid5d, mddev, pers_name);
 	if (!conf->thread) {
 		printk(KERN_ERR
 		       "md/raid:%s: couldn't allocate thread.\n",
···
 	if (rdev->saved_raid_disk >= 0 &&
 	    rdev->saved_raid_disk >= first &&
 	    conf->disks[rdev->saved_raid_disk].rdev == NULL)
-		disk = rdev->saved_raid_disk;
-	else
-		disk = first;
-	for ( ; disk <= last ; disk++) {
+		first = rdev->saved_raid_disk;
+
+	for (disk = first; disk <= last; disk++) {
 		p = conf->disks + disk;
 		if (p->rdev == NULL) {
 			clear_bit(In_sync, &rdev->flags);
···
 			if (rdev->saved_raid_disk != disk)
 				conf->fullsync = 1;
 			rcu_assign_pointer(p->rdev, rdev);
-			break;
+			goto out;
 		}
+	}
+	for (disk = first; disk <= last; disk++) {
+		p = conf->disks + disk;
 		if (test_bit(WantReplacement, &p->rdev->flags) &&
 		    p->replacement == NULL) {
 			clear_bit(In_sync, &rdev->flags);
···
 			break;
 		}
 	}
+out:
 	print_raid5_conf(conf);
 	return err;
 }
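The raid5 hot-add change at the end of the diff above splits one loop into two passes: first search for an empty slot (preferring the device's old slot when it is free), and only if none is found look for a slot that wants a replacement. A standalone sketch of that two-pass search over an int array (slot states and the function name are hypothetical: 0 = empty, 1 = in use, 2 = wants a replacement):

```c
/* Pass 1 takes the first empty slot, with `first` bumped to the
 * device's saved slot when that slot is free; only if every slot is
 * occupied does pass 2 consider becoming a replacement device. */
static int find_slot(const int *slots, int first, int last, int saved)
{
	int disk;

	if (saved >= first && saved <= last && slots[saved] == 0)
		first = saved;

	for (disk = first; disk <= last; disk++)
		if (slots[disk] == 0)
			return disk;		/* pass 1: empty slot */

	for (disk = first; disk <= last; disk++)
		if (slots[disk] == 2)
			return disk;		/* pass 2: replacement */

	return -1;				/* nothing available */
}
```

The two-pass structure is what the original single loop got wrong: interleaving both checks in one pass could claim a replacement slot even though an empty slot existed further along.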
+1-1
drivers/mtd/nand/cafe_nand.c
···
 static int cafe_device_ready(struct mtd_info *mtd)
 {
 	struct cafe_priv *cafe = mtd->priv;
-	int result = !!(cafe_readl(cafe, NAND_STATUS) | 0x40000000);
+	int result = !!(cafe_readl(cafe, NAND_STATUS) & 0x40000000);
 	uint32_t irqs = cafe_readl(cafe, NAND_IRQ);

 	cafe_writel(cafe, irqs, NAND_IRQ);
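The cafe_nand fix above is a classic one-character bug: OR-ing in a non-zero mask makes the expression non-zero for every status value, so `!!` always yields 1 and the ready test can never fail. A pair of helpers contrasting the buggy and fixed forms makes the difference concrete:

```c
#include <stdint.h>

static int ready_buggy(uint32_t status)
{
	return !!(status | 0x40000000);	/* always 1: the mask is non-zero */
}

static int ready_fixed(uint32_t status)
{
	return !!(status & 0x40000000);	/* 1 only if bit 30 is set */
}
```

Compilers can flag this class of bug (`status | CONST` in a boolean context) with warnings such as GCC/Clang's `-Wtautological-bitwise-compare` territory, which is one reason to build with warnings enabled.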
+7
drivers/mtd/nand/nand_base.c
···
 	/* propagate ecc info to mtd_info */
 	mtd->ecclayout = chip->ecc.layout;
 	mtd->ecc_strength = chip->ecc.strength;
+	/*
+	 * Initialize bitflip_threshold to its default prior scan_bbt() call.
+	 * scan_bbt() might invoke mtd_read(), thus bitflip_threshold must be
+	 * properly set.
+	 */
+	if (!mtd->bitflip_threshold)
+		mtd->bitflip_threshold = mtd->ecc_strength;

 	/* Check, if we should skip the bad block table scan */
 	if (chip->options & NAND_SKIP_BBTSCAN)
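The nand_base fix is a default-if-unset rule: `bitflip_threshold` must be valid before `scan_bbt()` can trigger `mtd_read()`, so it is defaulted to the ECC strength, but a value the driver already configured is never overridden. The rule reduces to one expression (function name hypothetical):

```c
#include <stdint.h>

/* Default bitflip_threshold to the ECC strength, preserving any
 * value the driver set earlier (non-zero wins). */
static uint32_t effective_bitflip_threshold(uint32_t configured,
					    uint32_t ecc_strength)
{
	return configured ? configured : ecc_strength;
}
```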
+4-6
drivers/net/ethernet/freescale/gianfar.c
···
 	if (priv->mode == MQ_MG_MODE) {
 		baddr = &regs->txic0;
 		for_each_set_bit(i, &tx_mask, priv->num_tx_queues) {
-			if (likely(priv->tx_queue[i]->txcoalescing)) {
-				gfar_write(baddr + i, 0);
+			gfar_write(baddr + i, 0);
+			if (likely(priv->tx_queue[i]->txcoalescing))
 				gfar_write(baddr + i, priv->tx_queue[i]->txic);
-			}
 		}

 		baddr = &regs->rxic0;
 		for_each_set_bit(i, &rx_mask, priv->num_rx_queues) {
-			if (likely(priv->rx_queue[i]->rxcoalescing)) {
-				gfar_write(baddr + i, 0);
+			gfar_write(baddr + i, 0);
+			if (likely(priv->rx_queue[i]->rxcoalescing))
 				gfar_write(baddr + i, priv->rx_queue[i]->rxic);
-			}
 		}
 	}
 }
+1
drivers/net/ethernet/intel/e1000e/defines.h
···
 #define E1000_RXD_ERR_SEQ       0x04    /* Sequence Error */
 #define E1000_RXD_ERR_CXE       0x10    /* Carrier Extension Error */
 #define E1000_RXD_ERR_TCPE      0x20    /* TCP/UDP Checksum Error */
+#define E1000_RXD_ERR_IPE       0x40    /* IP Checksum Error */
 #define E1000_RXD_ERR_RXE       0x80    /* Rx Data Error */
 #define E1000_RXD_SPC_VLAN_MASK 0x0FFF  /* VLAN ID is in lower 12 bits */
+14-61
drivers/net/ethernet/intel/e1000e/netdev.c
···
  * @sk_buff: socket buffer with received data
  **/
 static void e1000_rx_checksum(struct e1000_adapter *adapter, u32 status_err,
-			      __le16 csum, struct sk_buff *skb)
+			      struct sk_buff *skb)
 {
 	u16 status = (u16)status_err;
 	u8 errors = (u8)(status_err >> 24);
···
 	if (status & E1000_RXD_STAT_IXSM)
 		return;

-	/* TCP/UDP checksum error bit is set */
-	if (errors & E1000_RXD_ERR_TCPE) {
+	/* TCP/UDP checksum error bit or IP checksum error bit is set */
+	if (errors & (E1000_RXD_ERR_TCPE | E1000_RXD_ERR_IPE)) {
 		/* let the stack verify checksum errors */
 		adapter->hw_csum_err++;
 		return;
···
 		return;

 	/* It must be a TCP or UDP packet with a valid checksum */
-	if (status & E1000_RXD_STAT_TCPCS) {
-		/* TCP checksum is good */
-		skb->ip_summed = CHECKSUM_UNNECESSARY;
-	} else {
-		/*
-		 * IP fragment with UDP payload
-		 * Hardware complements the payload checksum, so we undo it
-		 * and then put the value in host order for further stack use.
-		 */
-		__sum16 sum = (__force __sum16)swab16((__force u16)csum);
-		skb->csum = csum_unfold(~sum);
-		skb->ip_summed = CHECKSUM_COMPLETE;
-	}
+	skb->ip_summed = CHECKSUM_UNNECESSARY;
 	adapter->hw_csum_good++;
 }
···
 		skb_put(skb, length);

 		/* Receive Checksum Offload */
-		e1000_rx_checksum(adapter, staterr,
-				  rx_desc->wb.lower.hi_dword.csum_ip.csum, skb);
+		e1000_rx_checksum(adapter, staterr, skb);

 		e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb);
···
 		total_rx_bytes += skb->len;
 		total_rx_packets++;

-		e1000_rx_checksum(adapter, staterr,
-				  rx_desc->wb.lower.hi_dword.csum_ip.csum, skb);
+		e1000_rx_checksum(adapter, staterr, skb);

 		e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb);
···
 		}
 	}

-	/* Receive Checksum Offload XXX recompute due to CRC strip? */
-	e1000_rx_checksum(adapter, staterr,
-			  rx_desc->wb.lower.hi_dword.csum_ip.csum, skb);
+	/* Receive Checksum Offload */
+	e1000_rx_checksum(adapter, staterr, skb);

 	e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb);
···

 	/* Enable Receive Checksum Offload for TCP and UDP */
 	rxcsum = er32(RXCSUM);
-	if (adapter->netdev->features & NETIF_F_RXCSUM) {
+	if (adapter->netdev->features & NETIF_F_RXCSUM)
 		rxcsum |= E1000_RXCSUM_TUOFL;
-
-		/*
-		 * IPv4 payload checksum for UDP fragments must be
-		 * used in conjunction with packet-split.
-		 */
-		if (adapter->rx_ps_pages)
-			rxcsum |= E1000_RXCSUM_IPPCSE;
-	} else {
+	else
 		rxcsum &= ~E1000_RXCSUM_TUOFL;
-		/* no need to clear IPPCSE as it defaults to 0 */
-	}
 	ew32(RXCSUM, rxcsum);

 	if (adapter->hw.mac.type == e1000_pch2lan) {
···
 	int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;

 	/* Jumbo frame support */
-	if (max_frame > ETH_FRAME_LEN + ETH_FCS_LEN) {
-		if (!(adapter->flags & FLAG_HAS_JUMBO_FRAMES)) {
-			e_err("Jumbo Frames not supported.\n");
-			return -EINVAL;
-		}
-
-		/*
-		 * IP payload checksum (enabled with jumbos/packet-split when
-		 * Rx checksum is enabled) and generation of RSS hash is
-		 * mutually exclusive in the hardware.
-		 */
-		if ((netdev->features & NETIF_F_RXCSUM) &&
-		    (netdev->features & NETIF_F_RXHASH)) {
-			e_err("Jumbo frames cannot be enabled when both receive checksum offload and receive hashing are enabled.  Disable one of the receive offload features before enabling jumbos.\n");
-			return -EINVAL;
-		}
+	if ((max_frame > ETH_FRAME_LEN + ETH_FCS_LEN) &&
+	    !(adapter->flags & FLAG_HAS_JUMBO_FRAMES)) {
+		e_err("Jumbo Frames not supported.\n");
+		return -EINVAL;
 	}

 	/* Supported frame sizes */
···
 		     NETIF_F_RXCSUM | NETIF_F_RXHASH | NETIF_F_RXFCS |
 		     NETIF_F_RXALL)))
 		return 0;
-
-	/*
-	 * IP payload checksum (enabled with jumbos/packet-split when Rx
-	 * checksum is enabled) and generation of RSS hash is mutually
-	 * exclusive in the hardware.
-	 */
-	if (adapter->rx_ps_pages &&
-	    (features & NETIF_F_RXCSUM) && (features & NETIF_F_RXHASH)) {
-		e_err("Enabling both receive checksum offload and receive hashing is not possible with jumbo frames.  Disable jumbos or enable only one of the receive offload features.\n");
-		return -EINVAL;
-	}

 	if (changed & NETIF_F_RXFCS) {
 		if (features & NETIF_F_RXFCS) {
+19-12
drivers/net/ethernet/intel/igbvf/ethtool.c
···
 	struct igbvf_adapter *adapter = netdev_priv(netdev);
 	struct e1000_hw *hw = &adapter->hw;

-	if ((ec->rx_coalesce_usecs > IGBVF_MAX_ITR_USECS) ||
-	    ((ec->rx_coalesce_usecs > 3) &&
-	     (ec->rx_coalesce_usecs < IGBVF_MIN_ITR_USECS)) ||
-	    (ec->rx_coalesce_usecs == 2))
-		return -EINVAL;
-
-	/* convert to rate of irq's per second */
-	if (ec->rx_coalesce_usecs && ec->rx_coalesce_usecs <= 3) {
-		adapter->current_itr = IGBVF_START_ITR;
-		adapter->requested_itr = ec->rx_coalesce_usecs;
-	} else {
+	if ((ec->rx_coalesce_usecs >= IGBVF_MIN_ITR_USECS) &&
+	    (ec->rx_coalesce_usecs <= IGBVF_MAX_ITR_USECS)) {
 		adapter->current_itr = ec->rx_coalesce_usecs << 2;
 		adapter->requested_itr = 1000000000 /
 					(adapter->current_itr * 256);
-	}
+	} else if ((ec->rx_coalesce_usecs == 3) ||
+		   (ec->rx_coalesce_usecs == 2)) {
+		adapter->current_itr = IGBVF_START_ITR;
+		adapter->requested_itr = ec->rx_coalesce_usecs;
+	} else if (ec->rx_coalesce_usecs == 0) {
+		/*
+		 * The user's desire is to turn off interrupt throttling
+		 * altogether, but due to HW limitations, we can't do that.
+		 * Instead we set a very small value in EITR, which would
+		 * allow ~967k interrupts per second, but allow the adapter's
+		 * internal clocking to still function properly.
+		 */
+		adapter->current_itr = 4;
+		adapter->requested_itr = 1000000000 /
+					(adapter->current_itr * 256);
+	} else
+		return -EINVAL;

 	writel(adapter->current_itr,
 	       hw->hw_addr + adapter->rx_ring->itr_register);
···
 static int qmi_wwan_cdc_wdm_manage_power(struct usb_interface *intf, int on)
 {
 	struct usbnet *dev = usb_get_intfdata(intf);
+
+	/* can be called while disconnecting */
+	if (!dev)
+		return 0;
 	return qmi_wwan_manage_power(dev, on);
 }
···
 	switch (op) {
 	case ADD:
 		ret = iwlagn_mac_sta_add(hw, vif, sta);
+		if (ret)
+			break;
+		/*
+		 * Clear the in-progress flag, the AP station entry was added
+		 * but we'll initialize LQ only when we've associated (which
+		 * would also clear the in-progress flag). This is necessary
+		 * in case we never initialize LQ because association fails.
+		 */
+		spin_lock_bh(&priv->sta_lock);
+		priv->stations[iwl_sta_id(sta)].used &=
+			~IWL_STA_UCODE_INPROGRESS;
+		spin_unlock_bh(&priv->sta_lock);
 		break;
 	case REMOVE:
 		ret = iwlagn_mac_sta_remove(hw, vif, sta);
···
 config WLCORE
 	tristate "TI wlcore support"
 	depends on WL_TI && GENERIC_HARDIRQS && MAC80211
-	depends on INET
 	select FW_LOADER
 	---help---
 	  This module contains the main code for TI WLAN chips. It abstracts
+26-4
drivers/of/base.c
···
 }
 EXPORT_SYMBOL(of_find_node_with_property);

+static const struct of_device_id *of_match_compat(const struct of_device_id *matches,
+						  const char *compat)
+{
+	while (matches->name[0] || matches->type[0] || matches->compatible[0]) {
+		const char *cp = matches->compatible;
+		int len = strlen(cp);
+
+		if (len > 0 && of_compat_cmp(compat, cp, len) == 0)
+			return matches;
+
+		matches++;
+	}
+
+	return NULL;
+}
+
 /**
  * of_match_node - Tell if an device_node has a matching of_match structure
  * @matches:	array of of device match structures to search in
···
 const struct of_device_id *of_match_node(const struct of_device_id *matches,
					 const struct device_node *node)
 {
+	struct property *prop;
+	const char *cp;
+
 	if (!matches)
 		return NULL;
+
+	of_property_for_each_string(node, "compatible", prop, cp) {
+		const struct of_device_id *match = of_match_compat(matches, cp);
+		if (match)
+			return match;
+	}

 	while (matches->name[0] || matches->type[0] || matches->compatible[0]) {
 		int match = 1;
···
 		if (matches->type[0])
 			match &= node->type
 				&& !strcmp(matches->type, node->type);
-		if (matches->compatible[0])
-			match &= of_device_is_compatible(node,
-							 matches->compatible);
-		if (match)
+		if (match && !matches->compatible[0])
 			return matches;
 		matches++;
 	}
···
 	struct ft_tport *tport;
 	int i;

-	tport = rcu_dereference(lport->prov[FC_TYPE_FCP]);
+	tport = rcu_dereference_protected(lport->prov[FC_TYPE_FCP],
+					  lockdep_is_held(&ft_lport_lock));
 	if (tport && tport->tpg)
 		return tport;
+9-6
fs/btrfs/backref.c
···
 		goto out;

 	eb = path->nodes[level];
-	if (!eb) {
-		WARN_ON(1);
-		ret = 1;
-		goto out;
+	while (!eb) {
+		if (!level) {
+			WARN_ON(1);
+			ret = 1;
+			goto out;
+		}
+		level--;
+		eb = path->nodes[level];
 	}

 	ret = add_all_parents(root, path, parents, level, &ref->key_for_search,
···
 		}
 		ret = __add_delayed_refs(head, delayed_ref_seq,
					 &prefs_delayed);
+		mutex_unlock(&head->mutex);
 		if (ret) {
 			spin_unlock(&delayed_refs->lock);
 			goto out;
···
 	}

 out:
-	if (head)
-		mutex_unlock(&head->mutex);
 	btrfs_free_path(path);
 	while (!list_empty(&prefs)) {
 		ref = list_first_entry(&prefs, struct __prelim_ref, list);
+35-25
fs/btrfs/ctree.c
···
 		if (!looped && !tm)
 			return 0;
 		/*
-		 * we must have key remove operations in the log before the
-		 * replace operation.
+		 * if there are no tree operation for the oldest root, we simply
+		 * return it. this should only happen if that (old) root is at
+		 * level 0.
 		 */
-		BUG_ON(!tm);
+		if (!tm)
+			break;

+		/*
+		 * if there's an operation that's not a root replacement, we
+		 * found the oldest version of our root. normally, we'll find a
+		 * MOD_LOG_KEY_REMOVE_WHILE_FREEING operation here.
+		 */
 		if (tm->op != MOD_LOG_ROOT_REPLACE)
 			break;
···
 						      tm->generation);
 			break;
 		case MOD_LOG_KEY_ADD:
-			if (tm->slot != n - 1) {
-				o_dst = btrfs_node_key_ptr_offset(tm->slot);
-				o_src = btrfs_node_key_ptr_offset(tm->slot + 1);
-				memmove_extent_buffer(eb, o_dst, o_src, p_size);
-			}
+			/* if a move operation is needed it's in the log */
 			n--;
 			break;
 		case MOD_LOG_MOVE_KEYS:
···
 	}

 	tm = tree_mod_log_search(root->fs_info, logical, time_seq);
-	/*
-	 * there was an item in the log when __tree_mod_log_oldest_root
-	 * returned. this one must not go away, because the time_seq passed to
-	 * us must be blocking its removal.
-	 */
-	BUG_ON(!tm);
-
 	if (old_root)
-		eb = alloc_dummy_extent_buffer(tm->index << PAGE_CACHE_SHIFT,
-					       root->nodesize);
+		eb = alloc_dummy_extent_buffer(logical, root->nodesize);
 	else
 		eb = btrfs_clone_extent_buffer(root->node);
 	btrfs_tree_read_unlock(root->node);
···
 		btrfs_set_header_level(eb, old_root->level);
 		btrfs_set_header_generation(eb, old_generation);
 	}
-	__tree_mod_log_rewind(eb, time_seq, tm);
+	if (tm)
+		__tree_mod_log_rewind(eb, time_seq, tm);
+	else
+		WARN_ON(btrfs_header_level(eb) != 0);
 	extent_buffer_get(eb);

 	return eb;
···
 static void insert_ptr(struct btrfs_trans_handle *trans,
 		       struct btrfs_root *root, struct btrfs_path *path,
 		       struct btrfs_disk_key *key, u64 bytenr,
-		       int slot, int level, int tree_mod_log)
+		       int slot, int level)
 {
 	struct extent_buffer *lower;
 	int nritems;
···
 	BUG_ON(slot > nritems);
 	BUG_ON(nritems == BTRFS_NODEPTRS_PER_BLOCK(root));
 	if (slot != nritems) {
-		if (tree_mod_log && level)
+		if (level)
 			tree_mod_log_eb_move(root->fs_info, lower, slot + 1,
 					     slot, nritems - slot);
 		memmove_extent_buffer(lower,
···
 			      btrfs_node_key_ptr_offset(slot),
 			      (nritems - slot) * sizeof(struct btrfs_key_ptr));
 	}
-	if (tree_mod_log && level) {
+	if (level) {
 		ret = tree_mod_log_insert_key(root->fs_info, lower, slot,
 					      MOD_LOG_KEY_ADD);
 		BUG_ON(ret < 0);
···
 	btrfs_mark_buffer_dirty(split);

 	insert_ptr(trans, root, path, &disk_key, split->start,
-		   path->slots[level + 1] + 1, level + 1, 1);
+		   path->slots[level + 1] + 1, level + 1);

 	if (path->slots[level] >= mid) {
 		path->slots[level] -= mid;
···
 	btrfs_set_header_nritems(l, mid);
 	btrfs_item_key(right, &disk_key, 0);
 	insert_ptr(trans, root, path, &disk_key, right->start,
-		   path->slots[1] + 1, 1, 0);
+		   path->slots[1] + 1, 1);

 	btrfs_mark_buffer_dirty(right);
 	btrfs_mark_buffer_dirty(l);
···
 		if (mid <= slot) {
 			btrfs_set_header_nritems(right, 0);
 			insert_ptr(trans, root, path, &disk_key, right->start,
-				   path->slots[1] + 1, 1, 0);
+				   path->slots[1] + 1, 1);
 			btrfs_tree_unlock(path->nodes[0]);
 			free_extent_buffer(path->nodes[0]);
 			path->nodes[0] = right;
···
 		} else {
 			btrfs_set_header_nritems(right, 0);
 			insert_ptr(trans, root, path, &disk_key, right->start,
-				   path->slots[1], 1, 0);
+				   path->slots[1], 1);
 			btrfs_tree_unlock(path->nodes[0]);
 			free_extent_buffer(path->nodes[0]);
 			path->nodes[0] = right;
···

 		if (!path->skip_locking) {
 			ret = btrfs_try_tree_read_lock(next);
+			if (!ret && time_seq) {
+				/*
+				 * If we don't get the lock, we may be racing
+				 * with push_leaf_left, holding that lock while
+				 * itself waiting for the leaf we've currently
+				 * locked. To solve this situation, we give up
+				 * on our lock and cycle.
+				 */
+				btrfs_release_path(path);
+				cond_resched();
+				goto again;
+			}
 			if (!ret) {
 				btrfs_set_path_blocking(path);
 				btrfs_tree_read_lock(next);
+21-13
fs/btrfs/disk-io.c
···
 					  BTRFS_CSUM_TREE_OBJECTID, csum_root);
 	if (ret)
 		goto recovery_tree_root;
-
 	csum_root->track_dirty = 1;

 	fs_info->generation = generation;
 	fs_info->last_trans_committed = generation;
+
+	ret = btrfs_recover_balance(fs_info);
+	if (ret) {
+		printk(KERN_WARNING "btrfs: failed to recover balance\n");
+		goto fail_block_groups;
+	}

 	ret = btrfs_init_dev_stats(fs_info);
 	if (ret) {
···
 		goto fail_trans_kthread;
 	}

-	if (!(sb->s_flags & MS_RDONLY)) {
-		down_read(&fs_info->cleanup_work_sem);
-		err = btrfs_orphan_cleanup(fs_info->fs_root);
-		if (!err)
-			err = btrfs_orphan_cleanup(fs_info->tree_root);
+	if (sb->s_flags & MS_RDONLY)
+		return 0;
+
+	down_read(&fs_info->cleanup_work_sem);
+	if ((ret = btrfs_orphan_cleanup(fs_info->fs_root)) ||
+	    (ret = btrfs_orphan_cleanup(fs_info->tree_root))) {
 		up_read(&fs_info->cleanup_work_sem);
+		close_ctree(tree_root);
+		return ret;
+	}
+	up_read(&fs_info->cleanup_work_sem);

-	if (!err)
-		err = btrfs_recover_balance(fs_info->tree_root);
-
-	if (err) {
-		close_ctree(tree_root);
-		return err;
-	}
+	ret = btrfs_resume_balance_async(fs_info);
+	if (ret) {
+		printk(KERN_WARNING "btrfs: failed to resume balance\n");
+		close_ctree(tree_root);
+		return ret;
 	}

 	return 0;
+6-5
fs/btrfs/extent-tree.c
···
 	return count;
 }

-
 static void wait_for_more_refs(struct btrfs_delayed_ref_root *delayed_refs,
-			       unsigned long num_refs)
+			       unsigned long num_refs,
+			       struct list_head *first_seq)
 {
-	struct list_head *first_seq = delayed_refs->seq_head.next;
-
 	spin_unlock(&delayed_refs->lock);
 	pr_debug("waiting for more refs (num %ld, first %p)\n",
 		 num_refs, first_seq);
···
 	struct btrfs_delayed_ref_root *delayed_refs;
 	struct btrfs_delayed_ref_node *ref;
 	struct list_head cluster;
+	struct list_head *first_seq = NULL;
 	int ret;
 	u64 delayed_start;
 	int run_all = count == (unsigned long)-1;
···
 				 */
 				consider_waiting = 1;
 				num_refs = delayed_refs->num_entries;
+				first_seq = root->fs_info->tree_mod_seq_list.next;
 			} else {
-				wait_for_more_refs(delayed_refs, num_refs);
+				wait_for_more_refs(delayed_refs,
+						   num_refs, first_seq);
 				/*
 				 * after waiting, things have changed. we
 				 * dropped the lock and someone else might have
+14
fs/btrfs/extent_io.c
···
 			     writepage_t writepage, void *data,
 			     void (*flush_fn)(void *))
 {
+	struct inode *inode = mapping->host;
 	int ret = 0;
 	int done = 0;
 	int nr_to_write_done = 0;
···
 	pgoff_t end;		/* Inclusive */
 	int scanned = 0;
 	int tag;
+
+	/*
+	 * We have to hold onto the inode so that ordered extents can do their
+	 * work when the IO finishes. The alternative to this is failing to add
+	 * an ordered extent if the igrab() fails there and that is a huge pain
+	 * to deal with, so instead just hold onto the inode throughout the
+	 * writepages operation. If it fails here we are freeing up the inode
+	 * anyway and we'd rather not waste our time writing out stuff that is
+	 * going to be truncated anyway.
+	 */
+	if (!igrab(inode))
+		return 0;

 	pagevec_init(&pvec, 0);
 	if (wbc->range_cyclic) {
···
 		index = 0;
 		goto retry;
 	}
+	btrfs_add_delayed_iput(inode);
 	return ret;
 }
-13
fs/btrfs/file.c
···
 				    loff_t *ppos, size_t count, size_t ocount)
 {
 	struct file *file = iocb->ki_filp;
-	struct inode *inode = fdentry(file)->d_inode;
 	struct iov_iter i;
 	ssize_t written;
 	ssize_t written_buffered;
···

 	written = generic_file_direct_write(iocb, iov, &nr_segs, pos, ppos,
 					    count, ocount);
-
-	/*
-	 * the generic O_DIRECT will update in-memory i_size after the
-	 * DIOs are done.  But our endio handlers that update the on
-	 * disk i_size never update past the in memory i_size.  So we
-	 * need one more update here to catch any additions to the
-	 * file
-	 */
-	if (inode->i_size != BTRFS_I(inode)->disk_i_size) {
-		btrfs_ordered_update_i_size(inode, inode->i_size, NULL);
-		mark_inode_dirty(inode);
-	}

 	if (written < 0 || written == count)
 		return written;
+51-92
fs/btrfs/free-space-cache.c
···
 	end = bitmap_info->offset + (u64)(BITS_PER_BITMAP * ctl->unit) - 1;

 	/*
-	 * XXX - this can go away after a few releases.
-	 *
-	 * since the only user of btrfs_remove_free_space is the tree logging
-	 * stuff, and the only way to test that is under crash conditions, we
-	 * want to have this debug stuff here just in case somethings not
-	 * working.  Search the bitmap for the space we are trying to use to
-	 * make sure its actually there.  If its not there then we need to stop
-	 * because something has gone wrong.
+	 * We need to search for bits in this bitmap.  We could only cover some
+	 * of the extent in this bitmap thanks to how we add space, so we need
+	 * to search for as much as it as we can and clear that amount, and then
+	 * go searching for the next bit.
 	 */
 	search_start = *offset;
-	search_bytes = *bytes;
+	search_bytes = ctl->unit;
 	search_bytes = min(search_bytes, end - search_start + 1);
 	ret = search_bitmap(ctl, bitmap_info, &search_start, &search_bytes);
 	BUG_ON(ret < 0 || search_start != *offset);

-	if (*offset > bitmap_info->offset && *offset + *bytes > end) {
-		bitmap_clear_bits(ctl, bitmap_info, *offset, end - *offset + 1);
-		*bytes -= end - *offset + 1;
-		*offset = end + 1;
-	} else if (*offset >= bitmap_info->offset && *offset + *bytes <= end) {
-		bitmap_clear_bits(ctl, bitmap_info, *offset, *bytes);
-		*bytes = 0;
-	}
+	/* We may have found more bits than what we need */
+	search_bytes = min(search_bytes, *bytes);
+
+	/* Cannot clear past the end of the bitmap */
+	search_bytes = min(search_bytes, end - search_start + 1);
+
+	bitmap_clear_bits(ctl, bitmap_info, search_start, search_bytes);
+	*offset += search_bytes;
+	*bytes -= search_bytes;

 	if (*bytes) {
 		struct rb_node *next = rb_next(&bitmap_info->offset_index);
···
 		 * everything over again.
 		 */
 		search_start = *offset;
-		search_bytes = *bytes;
+		search_bytes = ctl->unit;
 		ret = search_bitmap(ctl, bitmap_info, &search_start,
 				    &search_bytes);
 		if (ret < 0 || search_start != *offset)
···
 {
 	struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;
 	struct btrfs_free_space *info;
-	struct btrfs_free_space *next_info = NULL;
 	int ret = 0;

 	spin_lock(&ctl->tree_lock);

 again:
+	if (!bytes)
+		goto out_lock;
+
 	info = tree_search_offset(ctl, offset, 0, 0);
 	if (!info) {
 		/*
···
 		}
 	}

-	if (info->bytes < bytes && rb_next(&info->offset_index)) {
-		u64 end;
-		next_info = rb_entry(rb_next(&info->offset_index),
-				     struct btrfs_free_space,
-				     offset_index);
-
-		if (next_info->bitmap)
-			end = next_info->offset +
-			      BITS_PER_BITMAP * ctl->unit - 1;
-		else
-			end = next_info->offset + next_info->bytes;
-
-		if (next_info->bytes < bytes ||
-		    next_info->offset > offset || offset > end) {
-			printk(KERN_CRIT "Found free space at %llu, size %llu,"
-			      " trying to use %llu\n",
-			      (unsigned long long)info->offset,
-			      (unsigned long long)info->bytes,
-			      (unsigned long long)bytes);
-			WARN_ON(1);
-			ret = -EINVAL;
-			goto out_lock;
-		}
-
-		info = next_info;
-	}
-
-	if (info->bytes == bytes) {
+	if (!info->bitmap) {
 		unlink_free_space(ctl, info);
-		if (info->bitmap) {
-			kfree(info->bitmap);
-			ctl->total_bitmaps--;
-		}
-		kmem_cache_free(btrfs_free_space_cachep, info);
-		ret = 0;
-		goto out_lock;
-	}
+		if (offset == info->offset) {
+			u64 to_free = min(bytes, info->bytes);

-	if (!info->bitmap && info->offset == offset) {
-		unlink_free_space(ctl, info);
-		info->offset += bytes;
-		info->bytes -= bytes;
-		ret = link_free_space(ctl, info);
-		WARN_ON(ret);
-		goto out_lock;
-	}
+			info->bytes -= to_free;
+			info->offset += to_free;
+			if (info->bytes) {
+				ret = link_free_space(ctl, info);
+				WARN_ON(ret);
+			} else {
+				kmem_cache_free(btrfs_free_space_cachep, info);
+			}

-	if (!info->bitmap && info->offset <= offset &&
-	    info->offset + info->bytes >= offset + bytes) {
-		u64 old_start = info->offset;
-		/*
-		 * we're freeing space in the middle of the info,
-		 * this can happen during tree log replay
-		 *
-		 * first unlink the old info and then
-		 * insert it again after the hole we're creating
-		 */
-		unlink_free_space(ctl, info);
-		if (offset + bytes < info->offset + info->bytes) {
-			u64 old_end = info->offset + info->bytes;
+			offset += to_free;
+			bytes -= to_free;
+			goto again;
+		} else {
+			u64 old_end = info->bytes + info->offset;

-			info->offset = offset + bytes;
-			info->bytes = old_end - info->offset;
+			info->bytes = offset - info->offset;
 			ret = link_free_space(ctl, info);
 			WARN_ON(ret);
 			if (ret)
 				goto out_lock;
-		} else {
-			/* the hole we're creating ends at the end
-			 * of the info struct, just free the info
-			 */
-			kmem_cache_free(btrfs_free_space_cachep, info);
-		}
-		spin_unlock(&ctl->tree_lock);

-		/* step two, insert a new info struct to cover
-		 * anything before the hole
-		 */
-		ret = btrfs_add_free_space(block_group, old_start,
-					   offset - old_start);
-		WARN_ON(ret); /* -ENOMEM */
-		goto out;
+			/* Not enough bytes in this entry to satisfy us */
+			if (old_end < offset + bytes) {
+				bytes -= old_end - offset;
+				offset = old_end;
+				goto again;
+			} else if (old_end == offset + bytes) {
+				/* all done */
+				goto out_lock;
+			}
+			spin_unlock(&ctl->tree_lock);
+
+			ret = btrfs_add_free_space(block_group, offset + bytes,
+						   old_end - (offset + bytes));
+			WARN_ON(ret);
+			goto out;
+		}
 	}

 	ret = remove_from_bitmap(ctl, info, &offset, &bytes);
+51-6
fs/btrfs/inode.c
···
 	btrfs_wait_ordered_range(inode, 0, (u64)-1);

 	if (root->fs_info->log_root_recovering) {
-		BUG_ON(!test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
+		BUG_ON(test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
 				 &BTRFS_I(inode)->runtime_flags));
 		goto no_delete;
 	}
···
 	bh_result->b_size = len;
 	bh_result->b_bdev = em->bdev;
 	set_buffer_mapped(bh_result);
-	if (create && !test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
-		set_buffer_new(bh_result);
+	if (create) {
+		if (!test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
+			set_buffer_new(bh_result);
+
+		/*
+		 * Need to update the i_size under the extent lock so buffered
+		 * readers will get the updated i_size when we unlock.
+		 */
+		if (start + len > i_size_read(inode))
+			i_size_write(inode, start + len);
+	}

 	free_extent_map(em);
···
 	 */
 	ordered = btrfs_lookup_ordered_range(inode, lockstart,
 					     lockend - lockstart + 1);
-	if (!ordered)
+
+	/*
+	 * We need to make sure there are no buffered pages in this
+	 * range either, we could have raced between the invalidate in
+	 * generic_file_direct_write and locking the extent.  The
+	 * invalidate needs to happen so that reads after a write do not
+	 * get stale data.
+	 */
+	if (!ordered && (!writing ||
+	    !test_range_bit(&BTRFS_I(inode)->io_tree,
+			    lockstart, lockend, EXTENT_UPTODATE, 0,
+			    cached_state)))
 		break;
+
 	unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend,
 			     &cached_state, GFP_NOFS);
-	btrfs_start_ordered_extent(inode, ordered, 1);
-	btrfs_put_ordered_extent(ordered);
+
+	if (ordered) {
+		btrfs_start_ordered_extent(inode, ordered, 1);
+		btrfs_put_ordered_extent(ordered);
+	} else {
+		/* Screw you mmap */
+		ret = filemap_write_and_wait_range(file->f_mapping,
+						   lockstart,
+						   lockend);
+		if (ret)
+			goto out;
+
+		/*
+		 * If we found a page that couldn't be invalidated just
+		 * fall back to buffered.
+		 */
+		ret = invalidate_inode_pages2_range(file->f_mapping,
+					lockstart >> PAGE_CACHE_SHIFT,
+					lockend >> PAGE_CACHE_SHIFT);
+		if (ret) {
+			if (ret == -EBUSY)
+				ret = 0;
+			goto out;
+		}
+	}
+
 	cond_resched();
 	}
···
 		 * If yes, we have encountered a double deliminator
 		 * reset the NULL character to the deliminator
 		 */
-		if (tmp_end < end && tmp_end[1] == delim)
+		if (tmp_end < end && tmp_end[1] == delim) {
 			tmp_end[0] = delim;

-			/* Keep iterating until we get to a single deliminator
-			 * OR the end
-			 */
-			while ((tmp_end = strchr(tmp_end, delim)) != NULL &&
-			       (tmp_end[1] == delim)) {
-				tmp_end = (char *) &tmp_end[2];
-			}
+			/* Keep iterating until we get to a single
+			 * deliminator OR the end
+			 */
+			while ((tmp_end = strchr(tmp_end, delim))
+				!= NULL && (tmp_end[1] == delim)) {
+				tmp_end = (char *) &tmp_end[2];
+			}

-		/* Reset var options to point to next element */
-		if (tmp_end) {
-			tmp_end[0] = '\0';
-			options = (char *) &tmp_end[1];
-		} else
-			/* Reached the end of the mount option string */
-			options = end;
+			/* Reset var options to point to next element */
+			if (tmp_end) {
+				tmp_end[0] = '\0';
+				options = (char *) &tmp_end[1];
+			} else
+				/* Reached the end of the mount option
+				 * string */
+				options = end;
+		}

 		/* Now build new password string */
 		temp_len = strlen(value);
···
 	 * MS-CIFS indicates that servers are only limited by the client's
 	 * bufsize for reads, testing against win98se shows that it throws
 	 * INVALID_PARAMETER errors if you try to request too large a read.
+	 * OS/2 just sends back short reads.
 	 *
-	 * If the server advertises a MaxBufferSize of less than one page,
-	 * assume that it also can't satisfy reads larger than that either.
-	 *
-	 * FIXME: Is there a better heuristic for this?
+	 * If the server doesn't advertise CAP_LARGE_READ_X, then assume that
+	 * it can't handle a read request larger than its MaxBufferSize either.
 	 */
 	if (tcon->unix_ext && (unix_cap & CIFS_UNIX_LARGE_READ_CAP))
 		defsize = CIFS_DEFAULT_IOSIZE;
 	else if (server->capabilities & CAP_LARGE_READ_X)
 		defsize = CIFS_DEFAULT_NON_POSIX_RSIZE;
-	else if (server->maxBuf >= PAGE_CACHE_SIZE)
-		defsize = CIFSMaxBufSize;
 	else
 		defsize = server->maxBuf - sizeof(READ_RSP);
+1-1
fs/ecryptfs/kthread.c
···
 	(*lower_file) = dentry_open(lower_dentry, lower_mnt, flags, cred);
 	if (!IS_ERR(*lower_file))
 		goto out;
-	if (flags & O_RDONLY) {
+	if ((flags & O_ACCMODE) == O_RDONLY) {
 		rc = PTR_ERR((*lower_file));
 		goto out;
 	}
···
 /**
  * EVIOCGMTSLOTS(len) - get MT slot values
+ * @len: size of the data buffer in bytes
  *
  * The ioctl buffer argument should be binary equivalent to
  *
+2-2
include/linux/kvm_host.h
···
 #ifdef CONFIG_HAVE_KVM_EVENTFD

 void kvm_eventfd_init(struct kvm *kvm);
-int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags);
+int kvm_irqfd(struct kvm *kvm, struct kvm_irqfd *args);
 void kvm_irqfd_release(struct kvm *kvm);
 void kvm_irq_routing_update(struct kvm *, struct kvm_irq_routing_table *);
 int kvm_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args);
···
 static inline void kvm_eventfd_init(struct kvm *kvm) {}

-static inline int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags)
+static inline int kvm_irqfd(struct kvm *kvm, struct kvm_irqfd *args)
 {
 	return -EINVAL;
 }
+2
include/linux/prctl.h
···
  * Changing LSM security domain is considered a new privilege. So, for example,
  * asking selinux for a specific new context (e.g. with runcon) will result
  * in execve returning -EPERM.
+ *
+ * See Documentation/prctl/no_new_privs.txt for more details.
  */
 #define PR_SET_NO_NEW_PRIVS 38
 #define PR_GET_NO_NEW_PRIVS 39
+4-4
include/linux/splice.h
···
 struct splice_pipe_desc {
 	struct page **pages;		/* page map */
 	struct partial_page *partial;	/* pages[] may not be contig */
-	int nr_pages;			/* number of pages in map */
+	int nr_pages;			/* number of populated pages in map */
+	unsigned int nr_pages_max;	/* pages[] & partial[] arrays size */
 	unsigned int flags;		/* splice flags */
 	const struct pipe_buf_operations *ops;/* ops associated with output pipe */
 	void (*spd_release)(struct splice_pipe_desc *, unsigned int);
···
 /*
  * for dynamic pipe sizing
  */
-extern int splice_grow_spd(struct pipe_inode_info *, struct splice_pipe_desc *);
-extern void splice_shrink_spd(struct pipe_inode_info *,
-			      struct splice_pipe_desc *);
+extern int splice_grow_spd(const struct pipe_inode_info *, struct splice_pipe_desc *);
+extern void splice_shrink_spd(struct splice_pipe_desc *);
 extern void spd_release_page(struct splice_pipe_desc *, unsigned int);

 extern const struct pipe_buf_operations page_cache_pipe_buf_ops;
+4
include/net/sctp/structs.h
···
 	    /* Is this structure kfree()able? */
 	    malloced:1;

+	/* Has this transport moved the ctsn since we last sacked */
+	__u32 sack_generation;
+
 	struct flowi fl;

 	/* This is the peer's IP address and port. */
···
 	 */
 	__u8	sack_needed;	/* Do we need to sack the peer? */
 	__u32	sack_cnt;
+	__u32	sack_generation;

 	/* These are capabilities which our peer advertised. */
 	__u8	ecn_capable:1,	/* Can peer do ECN? */
+2-1
include/net/sctp/tsnmap.h
···
 int sctp_tsnmap_check(const struct sctp_tsnmap *, __u32 tsn);

 /* Mark this TSN as seen. */
-int sctp_tsnmap_mark(struct sctp_tsnmap *, __u32 tsn);
+int sctp_tsnmap_mark(struct sctp_tsnmap *, __u32 tsn,
+		     struct sctp_transport *trans);

 /* Mark this TSN and all lower as seen. */
 void sctp_tsnmap_skip(struct sctp_tsnmap *map, __u32 tsn);
···
 #include <linux/sched.h>
 #include <linux/ksm.h>
 #include <linux/fs.h>
+#include <linux/file.h>

 /*
  * Any behaviour which results in changes to the vma->vm_flags needs to
···
 {
 	loff_t offset;
 	int error;
+	struct file *f;

 	*prev = NULL;	/* tell sys_madvise we drop mmap_sem */

 	if (vma->vm_flags & (VM_LOCKED|VM_NONLINEAR|VM_HUGETLB))
 		return -EINVAL;

-	if (!vma->vm_file || !vma->vm_file->f_mapping
-		|| !vma->vm_file->f_mapping->host) {
+	f = vma->vm_file;
+
+	if (!f || !f->f_mapping || !f->f_mapping->host) {
 		return -EINVAL;
 	}
···
 	offset = (loff_t)(start - vma->vm_start)
 			+ ((loff_t)vma->vm_pgoff << PAGE_SHIFT);

-	/* filesystem's fallocate may need to take i_mutex */
+	/*
+	 * Filesystem's fallocate may need to take i_mutex. We need to
+	 * explicitly grab a reference because the vma (and hence the
+	 * vma's reference to the file) can go away as soon as we drop
+	 * mmap_sem.
+	 */
+	get_file(f);
 	up_read(&current->mm->mmap_sem);
-	error = do_fallocate(vma->vm_file,
+	error = do_fallocate(f,
 				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
 				offset, end - start);
+	fput(f);
 	down_read(&current->mm->mmap_sem);
 	return error;
 }
···
 	no_module = request_module("netdev-%s", name);
 	if (no_module && capable(CAP_SYS_MODULE)) {
 		if (!request_module("%s", name))
-			pr_err("Loading kernel module for a network device with CAP_SYS_MODULE (deprecated).  Use CAP_NET_ADMIN and alias netdev-%s instead.\n",
-			       name);
+			pr_warn("Loading kernel module for a network device with CAP_SYS_MODULE (deprecated).  Use CAP_NET_ADMIN and alias netdev-%s instead.\n",
+				name);
 	}
 }
 EXPORT_SYMBOL(dev_load);
···
 	struct ieee80211_local *local = sdata->local;
 	struct sta_info *sta;
 	u32 changed = 0;
-	u8 bssid[ETH_ALEN];

 	ASSERT_MGD_MTX(ifmgd);
···

 	ieee80211_stop_poll(sdata);

-	memcpy(bssid, ifmgd->associated->bssid, ETH_ALEN);
-
 	ifmgd->associated = NULL;
-	memset(ifmgd->bssid, 0, ETH_ALEN);

 	/*
 	 * we need to commit the associated = NULL change because the
···
 	netif_carrier_off(sdata->dev);

 	mutex_lock(&local->sta_mtx);
-	sta = sta_info_get(sdata, bssid);
+	sta = sta_info_get(sdata, ifmgd->bssid);
 	if (sta) {
 		set_sta_flag(sta, WLAN_STA_BLOCK_BA);
 		ieee80211_sta_tear_down_BA_sessions(sta, tx);
···

 	/* deauthenticate/disassociate now */
 	if (tx || frame_buf)
-		ieee80211_send_deauth_disassoc(sdata, bssid, stype, reason,
-					       tx, frame_buf);
+		ieee80211_send_deauth_disassoc(sdata, ifmgd->bssid, stype,
+					       reason, tx, frame_buf);

 	/* flush out frame */
 	if (tx)
 		drv_flush(local, false);
+
+	/* clear bssid only after building the needed mgmt frames */
+	memset(ifmgd->bssid, 0, ETH_ALEN);

 	/* remove AP and TDLS peers */
 	sta_info_flush(local, sdata);
+4-1
net/mac80211/rx.c
···
 	 * frames that we didn't handle, including returning unknown
 	 * ones. For all other modes we will return them to the sender,
 	 * setting the 0x80 bit in the action category, as required by
-	 * 802.11-2007 7.3.1.11.
+	 * 802.11-2012 9.24.4.
 	 * Newer versions of hostapd shall also use the management frame
 	 * registration mechanisms, but older ones still use cooked
 	 * monitor interfaces so push all frames there.
···
 	if (!(status->rx_flags & IEEE80211_RX_MALFORMED_ACTION_FRM) &&
 	    (sdata->vif.type == NL80211_IFTYPE_AP ||
 	     sdata->vif.type == NL80211_IFTYPE_AP_VLAN))
+		return RX_DROP_MONITOR;
+
+	if (is_multicast_ether_addr(mgmt->da))
 		return RX_DROP_MONITOR;

 	/* do not return rejected action frames */
···
 	 */
 	asoc->peer.sack_needed = 1;
 	asoc->peer.sack_cnt = 0;
+	asoc->peer.sack_generation = 1;

 	/* Assume that the peer will tell us if he recognizes ASCONF
 	 * as part of INIT exchange.
+5
net/sctp/output.c
···
 	/* If the SACK timer is running, we have a pending SACK */
 	if (timer_pending(timer)) {
 		struct sctp_chunk *sack;
+
+		if (pkt->transport->sack_generation !=
+		    pkt->transport->asoc->peer.sack_generation)
+			return retval;
+
 		asoc->a_rwnd = asoc->rwnd;
 		sack = sctp_make_sack(asoc);
 		if (sack) {
+16
net/sctp/sm_make_chunk.c
···
 	int len;
 	__u32 ctsn;
 	__u16 num_gabs, num_dup_tsns;
+	struct sctp_association *aptr = (struct sctp_association *)asoc;
 	struct sctp_tsnmap *map = (struct sctp_tsnmap *)&asoc->peer.tsn_map;
 	struct sctp_gap_ack_block gabs[SCTP_MAX_GABS];
+	struct sctp_transport *trans;

 	memset(gabs, 0, sizeof(gabs));
 	ctsn = sctp_tsnmap_get_ctsn(map);
···
 		sctp_addto_chunk(retval, sizeof(__u32) * num_dup_tsns,
 				 sctp_tsnmap_get_dups(map));

+	/* Once we have a sack generated, check to see what our sack
+	 * generation is, if its 0, reset the transports to 0, and reset
+	 * the association generation to 1
+	 *
+	 * The idea is that zero is never used as a valid generation for the
+	 * association so no transport will match after a wrap event like this,
+	 * Until the next sack
+	 */
+	if (++aptr->peer.sack_generation == 0) {
+		list_for_each_entry(trans, &asoc->peer.transport_addr_list,
+				    transports)
+			trans->sack_generation = 0;
+		aptr->peer.sack_generation = 1;
+	}
 nodata:
 	return retval;
 }
+1-1
net/sctp/sm_sideeffect.c
···
 	case SCTP_CMD_REPORT_TSN:
 		/* Record the arrival of a TSN. */
 		error = sctp_tsnmap_mark(&asoc->peer.tsn_map,
-					 cmd->obj.u32);
+					 cmd->obj.u32, NULL);
 		break;

 	case SCTP_CMD_REPORT_FWDTSN:
+2
net/sctp/transport.c
···
 	peer->af_specific = sctp_get_af_specific(addr->sa.sa_family);
 	memset(&peer->saddr, 0, sizeof(union sctp_addr));

+	peer->sack_generation = 0;
+
 	/* From 6.3.1 RTO Calculation:
 	 *
 	 * C1) Until an RTT measurement has been made for a packet sent to the
+5-1
net/sctp/tsnmap.c
···

 /* Mark this TSN as seen. */
-int sctp_tsnmap_mark(struct sctp_tsnmap *map, __u32 tsn)
+int sctp_tsnmap_mark(struct sctp_tsnmap *map, __u32 tsn,
+		     struct sctp_transport *trans)
 {
 	u16 gap;
···
 		 */
 		map->max_tsn_seen++;
 		map->cumulative_tsn_ack_point++;
+		if (trans)
+			trans->sack_generation =
+				trans->asoc->peer.sack_generation;
 		map->base_tsn++;
 	} else {
 		/* Either we already have a gap, or about to record a gap, so
+2-1
net/sctp/ulpevent.c
···
 	 * can mark it as received so the tsn_map is updated correctly.
 	 */
 	if (sctp_tsnmap_mark(&asoc->peer.tsn_map,
-			     ntohl(chunk->subh.data_hdr->tsn)))
+			     ntohl(chunk->subh.data_hdr->tsn),
+			     chunk->transport))
 		goto fail_mark;

 	/* First calculate the padding, so we don't inadvertently