Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge commit 'v2.6.37-rc3' into perf/core

Merge reason: Pick up latest fixes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>

+2470 -1325
+3 -3
Documentation/DocBook/uio-howto.tmpl
···
 		</orgname>
 
 		<address>
-			<email>hjk@linutronix.de</email>
+			<email>hjk@hansjkoch.de</email>
 		</address>
 	</affiliation>
 </author>
···
 
 <para>If you know of any translations for this document, or you are
 interested in translating it, please email me
-<email>hjk@linutronix.de</email>.
+<email>hjk@hansjkoch.de</email>.
 </para>
 </sect1>
 
···
 <title>Feedback</title>
 <para>Find something wrong with this document? (Or perhaps something
 right?) I would love to hear from you. Please email me at
-<email>hjk@linutronix.de</email>.</para>
+<email>hjk@hansjkoch.de</email>.</para>
 </sect1>
 </chapter>
 
+22 -9
Documentation/development-process/2.Process
···
 inclusion, it should be accepted by a relevant subsystem maintainer -
 though this acceptance is not a guarantee that the patch will make it
 all the way to the mainline.  The patch will show up in the maintainer's
-subsystem tree and into the staging trees (described below).  When the
+subsystem tree and into the -next trees (described below).  When the
 process works, this step leads to more extensive review of the patch and
 the discovery of any problems resulting from the integration of this
 patch with work being done by others.
···
 normally the right way to go.
 
 
-2.4: STAGING TREES
+2.4: NEXT TREES
 
 The chain of subsystem trees guides the flow of patches into the kernel,
 but it also raises an interesting question: what if somebody wants to look
···
 the interesting subsystem trees, but that would be a big and error-prone
 job.
 
-The answer comes in the form of staging trees, where subsystem trees are
+The answer comes in the form of -next trees, where subsystem trees are
 collected for testing and review.  The older of these trees, maintained by
 Andrew Morton, is called "-mm" (for memory management, which is how it got
 started).  The -mm tree integrates patches from a long list of subsystem
···
 Use of the MMOTM tree is likely to be a frustrating experience, though;
 there is a definite chance that it will not even compile.
 
-The other staging tree, started more recently, is linux-next, maintained by
+The other -next tree, started more recently, is linux-next, maintained by
 Stephen Rothwell.  The linux-next tree is, by design, a snapshot of what
 the mainline is expected to look like after the next merge window closes.
 Linux-next trees are announced on the linux-kernel and linux-next mailing
···
 See http://lwn.net/Articles/289013/ for more information on this topic, and
 stay tuned; much is still in flux where linux-next is involved.
 
-Besides the mmotm and linux-next trees, the kernel source tree now contains
-the drivers/staging/ directory and many sub-directories for drivers or
-filesystems that are on their way to being added to the kernel tree
-proper, but they remain in drivers/staging/ while they still need more
-work.
+2.4.1: STAGING TREES
 
+The kernel source tree now contains the drivers/staging/ directory, where
+many sub-directories for drivers or filesystems that are on their way to
+being added to the kernel tree live.  They remain in drivers/staging while
+they still need more work; once complete, they can be moved into the
+kernel proper.  This is a way to keep track of drivers that aren't
+up to Linux kernel coding or quality standards, but people may want to use
+them and track development.
+
+Greg Kroah-Hartman currently (as of 2.6.36) maintains the staging tree.
+Drivers that still need work are sent to him, with each driver having
+its own subdirectory in drivers/staging/.  Along with the driver source
+files, a TODO file should be present in the directory as well.  The TODO
+file lists the pending work that the driver needs for acceptance into
+the kernel proper, as well as a list of people that should be Cc'd for any
+patches to the driver.  Staging drivers that don't currently build should
+have their config entries depend upon CONFIG_BROKEN.  Once they can
+be successfully built without outside patches, CONFIG_BROKEN can be removed.
 
 2.5: TOOLS
 
+1 -1
Documentation/filesystems/configfs/configfs_example_explicit.c
···
 	char *p = (char *) page;
 
 	tmp = simple_strtoul(p, &p, 10);
-	if (!p || (*p && (*p != '\n')))
+	if ((*p != '\0') && (*p != '\n'))
 		return -EINVAL;
 
 	if (tmp > INT_MAX)
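The corrected check rejects any trailing garbage after the parsed number while still tolerating a final newline, which is how sysfs/configfs stores typically receive their input. The same endptr pattern can be sketched in userspace with the standard strtoul; this is an illustrative helper (slightly stricter than the kernel snippet, since it also rejects an empty string), not code from the tree:

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Parse a decimal attribute string the way the fixed store() does:
 * digits, optionally followed by a single trailing newline; anything
 * else is rejected.  Returns 0 on success, -EINVAL/-ERANGE on error. */
static int parse_attr_ulong(const char *page, unsigned long *out)
{
	char *end;
	unsigned long tmp;

	errno = 0;
	tmp = strtoul(page, &end, 10);
	if (end == page)			/* no digits at all */
		return -EINVAL;
	if (*end != '\0' && *end != '\n')	/* trailing garbage */
		return -EINVAL;
	if (errno == ERANGE || tmp > INT_MAX)
		return -ERANGE;
	*out = tmp;
	return 0;
}
```

Checking `*end` rather than `p` is the point of the fix: strtoul never returns a NULL end pointer, so the old `!p` test was dead code and the old `*p && ...` form let a bare NUL terminator through by accident rather than by intent.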
+10
Documentation/gpio.txt
···
 	is configured as an output, this value may be written;
 	any nonzero value is treated as high.
 
+	If the pin can be configured as interrupt-generating interrupt
+	and if it has been configured to generate interrupts (see the
+	description of "edge"), you can poll(2) on that file and
+	poll(2) will return whenever the interrupt was triggered. If
+	you use poll(2), set the events POLLPRI and POLLERR. If you
+	use select(2), set the file descriptor in exceptfds. After
+	poll(2) returns, either lseek(2) to the beginning of the sysfs
+	file and read the new value or close the file and re-open it
+	to read the value.
+
 "edge" ... reads as either "none", "rising", "falling", or
 	"both". Write these strings to select the signal edge(s)
 	that will make poll(2) on the "value" file return.
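The recipe added above (poll for POLLPRI|POLLERR, then lseek back and re-read) can be sketched as a small userspace helper. This is an illustrative sketch only: the GPIO path is caller-supplied (e.g. "/sys/class/gpio/gpio42/value" on a board where gpio42 is exported and its "edge" file has been written), and `wait_for_edge` is a hypothetical name, not a kernel or libc API:

```c
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

/* Block until one interrupt fires on an exported GPIO "value" file and
 * return the freshly read value character ('0' or '1'), or -1 on error.
 * Follows the sysfs recipe above: poll(2) with POLLPRI|POLLERR (not
 * POLLIN), then lseek(2) to offset 0 and re-read the file. */
static int wait_for_edge(const char *path)
{
	char buf[8];
	struct pollfd pfd;
	int ret = -1;

	pfd.fd = open(path, O_RDONLY);
	if (pfd.fd < 0)
		return -1;
	pfd.events = POLLPRI | POLLERR;

	(void)read(pfd.fd, buf, sizeof(buf));	/* consume initial state */
	if (poll(&pfd, 1, -1) > 0 &&		/* blocks until an edge */
	    lseek(pfd.fd, 0, SEEK_SET) == 0 &&	/* rewind before re-read */
	    read(pfd.fd, buf, 1) == 1)
		ret = buf[0];
	close(pfd.fd);
	return ret;
}
```

The initial read matters in practice: without it, the first poll(2) tends to return immediately because the attribute is already readable when opened.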
+1 -1
Documentation/hwmon/lm93
···
 	Mark M. Hoffman <mhoffman@lightlink.com>
 	Ported to 2.6 by Eric J. Bowersox <ericb@aspsys.com>
 	Adapted to 2.6.20 by Carsten Emde <ce@osadl.org>
-	Modified for mainline integration by Hans J. Koch <hjk@linutronix.de>
+	Modified for mainline integration by Hans J. Koch <hjk@hansjkoch.de>
 
 Module Parameters
 -----------------
+1 -1
Documentation/hwmon/max6650
···
 Datasheet: http://pdfserv.maxim-ic.com/en/ds/MAX6650-MAX6651.pdf
 
 Authors:
-    Hans J. Koch <hjk@linutronix.de>
+    Hans J. Koch <hjk@hansjkoch.de>
     John Morris <john.morris@spirentcom.com>
     Claus Gindhart <claus.gindhart@kontron.com>
 
+3
Documentation/power/opp.txt
···
 SoC framework	-> modifies on required cases certain OPPs	-> OPP layer
 		-> queries to search/retrieve information ->
 
+Architectures that provide a SoC framework for OPP should select ARCH_HAS_OPP
+to make the OPP layer available.
+
 OPP layer expects each domain to be represented by a unique device pointer. SoC
 framework registers a set of initial OPPs per device with the OPP layer. This
 list is expected to be an optimally small number typically around 5 per device.
+9
MAINTAINERS
···
 S:	Supported
 F:	drivers/net/cxgb4vf/
 
+STMMAC ETHERNET DRIVER
+M:	Giuseppe Cavallaro <peppe.cavallaro@st.com>
+L:	netdev@vger.kernel.org
+W:	http://www.stlinux.com
+S:	Supported
+F:	drivers/net/stmmac/
+
 CYBERPRO FB DRIVER
 M:	Russell King <linux@arm.linux.org.uk>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 DOCBOOK FOR DOCUMENTATION
 M:	Randy Dunlap <rdunlap@xenotime.net>
 S:	Maintained
+F:	scripts/kernel-doc
 
 DOCKING STATION DRIVER
 M:	Shaohua Li <shaohua.li@intel.com>
···
 DOCUMENTATION
 M:	Randy Dunlap <rdunlap@xenotime.net>
 L:	linux-doc@vger.kernel.org
+T:	quilt oss.oracle.com/~rdunlap/kernel-doc-patches/current/
 S:	Maintained
 F:	Documentation/
 
+1 -1
Makefile
···
 VERSION = 2
 PATCHLEVEL = 6
 SUBLEVEL = 37
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc3
 NAME = Flesh-Eating Bats with Fangs
 
 # *DOCUMENTATION*
-1
arch/blackfin/kernel/process.c
···
  */
 
 #include <linux/module.h>
-#include <linux/smp_lock.h>
 #include <linux/unistd.h>
 #include <linux/user.h>
 #include <linux/uaccess.h>
-1
arch/frv/kernel/process.c
···
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/stddef.h>
 #include <linux/unistd.h>
 #include <linux/ptrace.h>
-1
arch/h8300/kernel/process.c
···
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/stddef.h>
 #include <linux/unistd.h>
 #include <linux/ptrace.h>
+3 -1
arch/ia64/hp/sim/simscsi.c
···
 }
 
 static int
-simscsi_queuecommand (struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
+simscsi_queuecommand_lck (struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
 {
 	unsigned int target_id = sc->device->id;
 	char fname[MAX_ROOT_LEN+16];
···
 	tasklet_schedule(&simscsi_tasklet);
 	return 0;
 }
+
+static DEF_SCSI_QCMD(simscsi_queuecommand)
 
 static int
 simscsi_host_reset (struct scsi_cmnd *sc)
-1
arch/m68k/kernel/process.c
···
 #include <linux/slab.h>
 #include <linux/fs.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/stddef.h>
 #include <linux/unistd.h>
 #include <linux/ptrace.h>
-1
arch/m68knommu/kernel/process.c
···
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/stddef.h>
 #include <linux/unistd.h>
 #include <linux/ptrace.h>
-1
arch/mn10300/kernel/process.c
···
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/stddef.h>
 #include <linux/unistd.h>
 #include <linux/ptrace.h>
-1
arch/parisc/hpux/sys_hpux.c
···
 #include <linux/namei.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
-#include <linux/smp_lock.h>
 #include <linux/syscalls.h>
 #include <linux/utsname.h>
 #include <linux/vfs.h>
-1
arch/parisc/kernel/sys_parisc32.c
···
 #include <linux/times.h>
 #include <linux/time.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/sem.h>
 #include <linux/msg.h>
 #include <linux/shm.h>
+4
arch/powerpc/Kconfig
···
 	bool
 	default y if !PPC64
 
+config 32BIT
+	bool
+	default y if PPC32
+
 config 64BIT
 	bool
 	default y if PPC64
+2 -1
arch/powerpc/boot/div64.S
···
 	cntlzw	r0,r5		# we are shifting the dividend right
 	li	r10,-1		# to make it < 2^32, and shifting
 	srw	r10,r10,r0	# the divisor right the same amount,
-	add	r9,r4,r10	# rounding up (so the estimate cannot
+	addc	r9,r4,r10	# rounding up (so the estimate cannot
 	andc	r11,r6,r10	# ever be too large, only too small)
 	andc	r9,r9,r10
+	addze	r9,r9
 	or	r11,r5,r11
 	rotlw	r9,r9,r0
 	rotlw	r11,r11,r0
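The point of the `add` → `addc` plus `addze` change is carry propagation: when rounding a 32-bit value up to the next multiple of a power of two, the low-word add can overflow, and a plain 32-bit add silently drops that 33rd bit. A C model of the idea (illustrative only; `round_up_pow2` is a hypothetical helper, not a literal translation of the register usage in the assembly above):

```c
#include <stdint.h>

/* Round v up to a multiple of 2^k, keeping the possible 33rd bit.
 * The 32-bit sum mirrors addc (carry may be set), the mask mirrors
 * andc, and adding the carry back mirrors addze.  Using a plain add
 * and ignoring the carry would return 0 for v = 0xFFFFFFFF, k = 4,
 * instead of 0x100000000. */
static uint64_t round_up_pow2(uint32_t v, unsigned k)
{
	uint32_t mask = (1u << k) - 1;
	uint32_t sum = v + mask;		/* addc: carry may occur  */
	uint64_t carry = (uint64_t)(sum < v) << 32;
	return (uint64_t)(sum & ~mask) + carry;	/* andc + addze */
}
```

In the divide routine this matters because the rounded-up, shifted divisor is used as an estimate that must never be too large; losing the carry made it wrap to a tiny value and produce wildly wrong quotients.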
+2 -2
arch/powerpc/kernel/kgdb.c
···
 		/* FP registers 32 -> 63 */
 #if defined(CONFIG_FSL_BOOKE) && defined(CONFIG_SPE)
 		if (current)
-			memcpy(mem, current->thread.evr[regno-32],
+			memcpy(mem, &current->thread.evr[regno-32],
 			       dbg_reg_def[regno].size);
 #else
 		/* fp registers not used by kernel, leave zero */
···
 	if (regno >= 32 && regno < 64) {
 		/* FP registers 32 -> 63 */
 #if defined(CONFIG_FSL_BOOKE) && defined(CONFIG_SPE)
-		memcpy(current->thread.evr[regno-32], mem,
+		memcpy(&current->thread.evr[regno-32], mem,
 		       dbg_reg_def[regno].size);
 #else
 		/* fp registers not used by kernel, leave zero */
+2 -3
arch/powerpc/kernel/setup_64.c
···
 }
 
 /*
- * Called into from start_kernel, after lock_kernel has been called.
- * Initializes bootmem, which is unsed to manage page allocation until
- * mem_init is called.
+ * Called into from start_kernel this initializes bootmem, which is used
+ * to manage page allocation until mem_init is called.
  */
 void __init setup_arch(char **cmdline_p)
 {
-1
arch/powerpc/kernel/sys_ppc32.c
···
 #include <linux/resource.h>
 #include <linux/times.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/sem.h>
 #include <linux/msg.h>
 #include <linux/shm.h>
+1 -1
arch/powerpc/mm/hash_utils_64.c
···
 	else
 #endif /* CONFIG_PPC_HAS_HASH_64K */
 		rc = __hash_page_4K(ea, access, vsid, ptep, trap, local, ssize,
-				    subpage_protection(pgdir, ea));
+				    subpage_protection(mm, ea));
 
 	/* Dump some info in case of hash insertion failure, they should
 	 * never happen so it is really useful to know if/when they do
+4 -1
arch/powerpc/mm/tlb_low_64e.S
···
 	cmpldi	cr0,r15,0			/* Check for user region */
 	std	r14,EX_TLB_ESR(r12)		/* write crazy -1 to frame */
 	beq	normal_tlb_miss
+
+	li	r11,_PAGE_PRESENT|_PAGE_BAP_SX	/* Base perm */
+	oris	r11,r11,_PAGE_ACCESSED@h
 	/* XXX replace the RMW cycles with immediate loads + writes */
-1:	mfspr	r10,SPRN_MAS1
+	mfspr	r10,SPRN_MAS1
 	cmpldi	cr0,r15,8			/* Check for vmalloc region */
 	rlwinm	r10,r10,0,16,1			/* Clear TID */
 	mtspr	SPRN_MAS1,r10
+1 -1
arch/powerpc/mm/tlb_nohash.c
···
 	ppc64_rma_size = min_t(u64, first_memblock_size, 0x40000000);
 
 	/* Finally limit subsequent allocations */
-	memblock_set_current_limit(ppc64_memblock_base + ppc64_rma_size);
+	memblock_set_current_limit(first_memblock_base + ppc64_rma_size);
 }
 #endif /* CONFIG_PPC64 */
+6
arch/powerpc/platforms/pseries/Kconfig
···
 config PPC_PSERIES_DEBUG
 	depends on PPC_PSERIES && PPC_EARLY_DEBUG
 	bool "Enable extra debug logging in platforms/pseries"
+	help
+	  Say Y here if you want the pseries core to produce a bunch of
+	  debug messages to the system log. Select this if you are having a
+	  problem with the pseries core and want to see more of what is
+	  going on. This does not enable debugging in lpar.c, which must
+	  be manually done due to its verbosity.
 	default y
 
 config PPC_SMLPAR
-2
arch/powerpc/platforms/pseries/eeh.c
···
  * Please address comments and feedback to Linas Vepstas <linas@austin.ibm.com>
  */
 
-#undef DEBUG
-
 #include <linux/delay.h>
 #include <linux/init.h>
 #include <linux/list.h>
-2
arch/powerpc/platforms/pseries/pci_dlpar.c
···
  * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
  */
 
-#undef DEBUG
-
 #include <linux/pci.h>
 #include <asm/pci-bridge.h>
 #include <asm/ppc-pci.h>
+12
arch/s390/Kconfig.debug
···
 
 source "lib/Kconfig.debug"
 
+config STRICT_DEVMEM
+	def_bool y
+	prompt "Filter access to /dev/mem"
+	---help---
+	  This option restricts access to /dev/mem.  If this option is
+	  disabled, you allow userspace access to all memory, including
+	  kernel and userspace memory. Accidental memory access is likely
+	  to be disastrous.
+	  Memory access is required for experts who want to debug the kernel.
+
+	  If you are unsure, say Y.
+
 config DEBUG_STRICT_USER_COPY_CHECKS
 	bool "Strict user copy size checks"
 	---help---
+5
arch/s390/include/asm/page.h
···
 void arch_free_page(struct page *page, int order);
 void arch_alloc_page(struct page *page, int order);
 
+static inline int devmem_is_allowed(unsigned long pfn)
+{
+	return 0;
+}
+
 #define HAVE_ARCH_FREE_PAGE
 #define HAVE_ARCH_ALLOC_PAGE
 
-1
arch/s390/kernel/compat_linux.c
···
 #include <linux/resource.h>
 #include <linux/times.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/sem.h>
 #include <linux/msg.h>
 #include <linux/shm.h>
+53 -17
arch/s390/kernel/kprobes.c
···
 #include <asm/sections.h>
 #include <linux/module.h>
 #include <linux/slab.h>
+#include <linux/hardirq.h>
 
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
 DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
···
 	/* Set the PER control regs, turns on single step for this address */
 	__ctl_load(kprobe_per_regs, 9, 11);
 	regs->psw.mask |= PSW_MASK_PER;
-	regs->psw.mask &= ~(PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK);
+	regs->psw.mask &= ~(PSW_MASK_IO | PSW_MASK_EXT);
 }
 
 static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
···
 	__get_cpu_var(current_kprobe) = p;
 	/* Save the interrupt and per flags */
 	kcb->kprobe_saved_imask = regs->psw.mask &
-		(PSW_MASK_PER | PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK);
+		(PSW_MASK_PER | PSW_MASK_IO | PSW_MASK_EXT);
 	/* Save the control regs that govern PER */
 	__ctl_store(kcb->kprobe_saved_ctl, 9, 11);
 }
···
 		return 1;
 
 ss_probe:
-	if (regs->psw.mask & (PSW_MASK_PER | PSW_MASK_IO))
-		local_irq_disable();
 	prepare_singlestep(p, regs);
 	kcb->kprobe_status = KPROBE_HIT_SS;
 	return 1;
···
 	struct hlist_node *node, *tmp;
 	unsigned long flags, orig_ret_address = 0;
 	unsigned long trampoline_address = (unsigned long)&kretprobe_trampoline;
+	kprobe_opcode_t *correct_ret_addr = NULL;
 
 	INIT_HLIST_HEAD(&empty_rp);
 	kretprobe_hash_lock(current, &head, &flags);
···
 			/* another task is sharing our hash bucket */
 			continue;
 
-		if (ri->rp && ri->rp->handler)
-			ri->rp->handler(ri, regs);
+		orig_ret_address = (unsigned long)ri->ret_addr;
+
+		if (orig_ret_address != trampoline_address)
+			/*
+			 * This is the real return address. Any other
+			 * instances associated with this task are for
+			 * other calls deeper on the call stack
+			 */
+			break;
+	}
+
+	kretprobe_assert(ri, orig_ret_address, trampoline_address);
+
+	correct_ret_addr = ri->ret_addr;
+	hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
+		if (ri->task != current)
+			/* another task is sharing our hash bucket */
+			continue;
 
 		orig_ret_address = (unsigned long)ri->ret_addr;
+
+		if (ri->rp && ri->rp->handler) {
+			ri->ret_addr = correct_ret_addr;
+			ri->rp->handler(ri, regs);
+		}
+
 		recycle_rp_inst(ri, &empty_rp);
 
 		if (orig_ret_address != trampoline_address) {
···
 			break;
 		}
 	}
-	kretprobe_assert(ri, orig_ret_address, trampoline_address);
+
 	regs->psw.addr = orig_ret_address | PSW_ADDR_AMODE;
 
 	reset_current_kprobe();
···
 		goto out;
 	}
 	reset_current_kprobe();
-	if (regs->psw.mask & (PSW_MASK_PER | PSW_MASK_IO))
-		local_irq_enable();
 out:
 	preempt_enable_no_resched();
 
···
 		return 1;
 }
 
-int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
+static int __kprobes kprobe_trap_handler(struct pt_regs *regs, int trapnr)
 {
 	struct kprobe *cur = kprobe_running();
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
···
 			restore_previous_kprobe(kcb);
 		else {
 			reset_current_kprobe();
-			if (regs->psw.mask & (PSW_MASK_PER | PSW_MASK_IO))
-				local_irq_enable();
 		}
 		preempt_enable_no_resched();
 		break;
···
 	return 0;
 }
 
+int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
+{
+	int ret;
+
+	if (regs->psw.mask & (PSW_MASK_IO | PSW_MASK_EXT))
+		local_irq_disable();
+	ret = kprobe_trap_handler(regs, trapnr);
+	if (regs->psw.mask & (PSW_MASK_IO | PSW_MASK_EXT))
+		local_irq_restore(regs->psw.mask & ~PSW_MASK_PER);
+	return ret;
+}
+
 /*
  * Wrapper routine to for handling exceptions.
  */
···
 			       unsigned long val, void *data)
 {
 	struct die_args *args = (struct die_args *)data;
+	struct pt_regs *regs = args->regs;
 	int ret = NOTIFY_DONE;
+
+	if (regs->psw.mask & (PSW_MASK_IO | PSW_MASK_EXT))
+		local_irq_disable();
 
 	switch (val) {
 	case DIE_BPT:
···
 		ret = NOTIFY_STOP;
 		break;
 	case DIE_TRAP:
-		/* kprobe_running() needs smp_processor_id() */
-		preempt_disable();
-		if (kprobe_running() &&
-		    kprobe_fault_handler(args->regs, args->trapnr))
+		if (!preemptible() && kprobe_running() &&
+		    kprobe_trap_handler(args->regs, args->trapnr))
 			ret = NOTIFY_STOP;
-		preempt_enable();
 		break;
 	default:
 		break;
 	}
+
+	if (regs->psw.mask & (PSW_MASK_IO | PSW_MASK_EXT))
+		local_irq_restore(regs->psw.mask & ~PSW_MASK_PER);
+
 	return ret;
 }
···
 
 	/* setup return addr to the jprobe handler routine */
 	regs->psw.addr = (unsigned long)(jp->entry) | PSW_ADDR_AMODE;
+	regs->psw.mask &= ~(PSW_MASK_IO | PSW_MASK_EXT);
 
 	/* r14 is the function return address */
 	kcb->jprobe_saved_r14 = (unsigned long)regs->gprs[14];
+3 -4
arch/s390/mm/gup.c
···
 static inline int gup_pte_range(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
 		unsigned long end, int write, struct page **pages, int *nr)
 {
-	unsigned long mask, result;
+	unsigned long mask;
 	pte_t *ptep, pte;
 	struct page *page;
 
-	result = write ? 0 : _PAGE_RO;
-	mask = result | _PAGE_INVALID | _PAGE_SPECIAL;
+	mask = (write ? _PAGE_RO : 0) | _PAGE_INVALID | _PAGE_SPECIAL;
 
 	ptep = ((pte_t *) pmd_deref(pmd)) + pte_index(addr);
 	do {
 		pte = *ptep;
 		barrier();
-		if ((pte_val(pte) & mask) != result)
+		if ((pte_val(pte) & mask) != 0)
 			return 0;
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
-1
arch/sparc/kernel/leon_smp.c
···
 #include <linux/sched.h>
 #include <linux/threads.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/interrupt.h>
 #include <linux/kernel_stat.h>
 #include <linux/init.h>
-1
arch/sparc/kernel/sys_sparc32.c
···
 #include <linux/resource.h>
 #include <linux/times.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/sem.h>
 #include <linux/msg.h>
 #include <linux/shm.h>
-1
arch/sparc/kernel/sys_sparc_32.c
···
 #include <linux/mman.h>
 #include <linux/utsname.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/ipc.h>
 
 #include <asm/uaccess.h>
-1
arch/sparc/kernel/unaligned_32.c
···
 #include <asm/system.h>
 #include <asm/uaccess.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/perf_event.h>
 
 enum direction {
-1
arch/sparc/kernel/windows.c
···
 #include <linux/string.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 
 #include <asm/uaccess.h>
 
-1
arch/tile/kernel/compat.c
···
 #include <linux/kdev_t.h>
 #include <linux/fs.h>
 #include <linux/fcntl.h>
-#include <linux/smp_lock.h>
 #include <linux/uaccess.h>
 #include <linux/signal.h>
 #include <asm/syscalls.h>
-1
arch/tile/kernel/compat_signal.c
···
 #include <linux/sched.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/kernel.h>
 #include <linux/signal.h>
 #include <linux/errno.h>
-1
arch/tile/kernel/signal.c
···
 #include <linux/sched.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/kernel.h>
 #include <linux/signal.h>
 #include <linux/errno.h>
-1
arch/tile/kernel/smpboot.c
···
 #include <linux/mm.h>
 #include <linux/sched.h>
 #include <linux/kernel_stat.h>
-#include <linux/smp_lock.h>
 #include <linux/bootmem.h>
 #include <linux/notifier.h>
 #include <linux/cpu.h>
-1
arch/tile/kernel/sys.c
···
 #include <linux/sched.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/syscalls.h>
 #include <linux/mman.h>
 #include <linux/file.h>
-1
arch/tile/mm/fault.c
···
 #include <linux/mman.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/interrupt.h>
 #include <linux/init.h>
 #include <linux/tty.h>
-1
arch/tile/mm/hugetlbpage.c
···
 #include <linux/mm.h>
 #include <linux/hugetlb.h>
 #include <linux/pagemap.h>
-#include <linux/smp_lock.h>
 #include <linux/slab.h>
 #include <linux/err.h>
 #include <linux/sysctl.h>
-1
arch/um/kernel/exec.c
···
 
 #include "linux/stddef.h"
 #include "linux/fs.h"
-#include "linux/smp_lock.h"
 #include "linux/ptrace.h"
 #include "linux/sched.h"
 #include "linux/slab.h"
-1
arch/x86/ia32/sys_ia32.c
···
 #include <linux/syscalls.h>
 #include <linux/times.h>
 #include <linux/utsname.h>
-#include <linux/smp_lock.h>
 #include <linux/mm.h>
 #include <linux/uio.h>
 #include <linux/poll.h>
-1
arch/x86/kernel/cpuid.c
···
 #include <linux/init.h>
 #include <linux/poll.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/major.h>
 #include <linux/fs.h>
 #include <linux/device.h>
+8 -4
arch/x86/kernel/kgdb.c
···
 		if (!breakinfo[i].enabled)
 			continue;
 		bp = *per_cpu_ptr(breakinfo[i].pev, cpu);
-		if (bp->attr.disabled == 1)
+		if (!bp->attr.disabled) {
+			arch_uninstall_hw_breakpoint(bp);
+			bp->attr.disabled = 1;
 			continue;
+		}
 		if (dbg_is_early)
 			early_dr7 &= ~encode_dr7(i, breakinfo[i].len,
 						 breakinfo[i].type);
-		else
-			arch_uninstall_hw_breakpoint(bp);
-		bp->attr.disabled = 1;
+		else if (hw_break_release_slot(i))
+			printk(KERN_ERR "KGDB: hw bpt remove failed %lx\n",
+			       breakinfo[i].addr);
+		breakinfo[i].enabled = 0;
 	}
 }
 
-1
arch/x86/kernel/msr.c
···
 #include <linux/init.h>
 #include <linux/poll.h>
 #include <linux/smp.h>
-#include <linux/smp_lock.h>
 #include <linux/major.h>
 #include <linux/fs.h>
 #include <linux/device.h>
+1 -1
arch/x86/kvm/svm.c
···
 	vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
 
 	load_host_msrs(vcpu);
+	kvm_load_ldt(ldt_selector);
 	loadsegment(fs, fs_selector);
 #ifdef CONFIG_X86_64
 	load_gs_index(gs_selector);
···
 #else
 	loadsegment(gs, gs_selector);
 #endif
-	kvm_load_ldt(ldt_selector);
 
 	reload_tss(vcpu);
 
+9 -10
arch/x86/kvm/vmx.c
···
 #endif
 
 #ifdef CONFIG_X86_64
-	if (is_long_mode(&vmx->vcpu)) {
-		rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
+	rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
+	if (is_long_mode(&vmx->vcpu))
 		wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
-	}
 #endif
 	for (i = 0; i < vmx->save_nmsrs; ++i)
 		kvm_set_shared_msr(vmx->guest_msrs[i].index,
···
 
 	++vmx->vcpu.stat.host_state_reload;
 	vmx->host_state.loaded = 0;
-	if (vmx->host_state.fs_reload_needed)
-		loadsegment(fs, vmx->host_state.fs_sel);
+#ifdef CONFIG_X86_64
+	if (is_long_mode(&vmx->vcpu))
+		rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
+#endif
 	if (vmx->host_state.gs_ldt_reload_needed) {
 		kvm_load_ldt(vmx->host_state.ldt_sel);
 #ifdef CONFIG_X86_64
 		load_gs_index(vmx->host_state.gs_sel);
-		wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gs);
 #else
 		loadsegment(gs, vmx->host_state.gs_sel);
 #endif
 	}
+	if (vmx->host_state.fs_reload_needed)
+		loadsegment(fs, vmx->host_state.fs_sel);
 	reload_tss();
 #ifdef CONFIG_X86_64
-	if (is_long_mode(&vmx->vcpu)) {
-		rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
-		wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
-	}
+	wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
 #endif
 	if (current_thread_info()->status & TS_USEDFPU)
 		clts();
-1
block/compat_ioctl.c
···
 #include <linux/hdreg.h>
 #include <linux/slab.h>
 #include <linux/syscalls.h>
-#include <linux/smp_lock.h>
 #include <linux/types.h>
 #include <linux/uaccess.h>
 
-1
block/ioctl.c
···
 #include <linux/hdreg.h>
 #include <linux/backing-dev.h>
 #include <linux/buffer_head.h>
-#include <linux/smp_lock.h>
 #include <linux/blktrace_api.h>
 #include <asm/uaccess.h>
 
+9 -10
drivers/ata/libata-scsi.c
···
 
 /**
  *	ata_scsi_queuecmd - Issue SCSI cdb to libata-managed device
+ *	@shost: SCSI host of command to be sent
  *	@cmd: SCSI command to be sent
- *	@done: Completion function, called when command is complete
  *
  *	In some cases, this function translates SCSI commands into
  *	ATA taskfiles, and queues the taskfiles to be sent to
···
  *	ATA and ATAPI devices appearing as SCSI devices.
  *
  *	LOCKING:
- *	Releases scsi-layer-held lock, and obtains host lock.
+ *	ATA host lock
  *
  *	RETURNS:
  *	Return value from __ata_scsi_queuecmd() if @cmd can be queued,
  *	0 otherwise.
  */
-int ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
+int ata_scsi_queuecmd(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
 {
 	struct ata_port *ap;
 	struct ata_device *dev;
 	struct scsi_device *scsidev = cmd->device;
-	struct Scsi_Host *shost = scsidev->host;
 	int rc = 0;
+	unsigned long irq_flags;
 
 	ap = ata_shost_to_port(shost);
 
-	spin_unlock(shost->host_lock);
-	spin_lock(ap->lock);
+	spin_lock_irqsave(ap->lock, irq_flags);
 
 	ata_scsi_dump_cdb(ap, cmd);
 
 	dev = ata_scsi_find_dev(ap, scsidev);
 	if (likely(dev))
-		rc = __ata_scsi_queuecmd(cmd, done, dev);
+		rc = __ata_scsi_queuecmd(cmd, cmd->scsi_done, dev);
 	else {
 		cmd->result = (DID_BAD_TARGET << 16);
-		done(cmd);
+		cmd->scsi_done(cmd);
 	}
 
-	spin_unlock(ap->lock);
-	spin_lock(shost->host_lock);
+	spin_unlock_irqrestore(ap->lock, irq_flags);
+
 	return rc;
 }
 
+5 -4
drivers/ata/sata_via.c
···
 	return 0;
 }
 
-static void svia_configure(struct pci_dev *pdev)
+static void svia_configure(struct pci_dev *pdev, int board_id)
 {
 	u8 tmp8;
 
···
 	}
 
 	/*
-	 * vt6421 has problems talking to some drives.  The following
+	 * vt6420/1 has problems talking to some drives.  The following
 	 * is the fix from Joseph Chan <JosephChan@via.com.tw>.
 	 *
 	 * When host issues HOLD, device may send up to 20DW of data
···
 	 *
 	 * https://bugzilla.kernel.org/show_bug.cgi?id=15173
 	 * http://article.gmane.org/gmane.linux.ide/46352
+	 * http://thread.gmane.org/gmane.linux.kernel/1062139
 	 */
-	if (pdev->device == 0x3249) {
+	if (board_id == vt6420 || board_id == vt6421) {
 		pci_read_config_byte(pdev, 0x52, &tmp8);
 		tmp8 |= 1 << 2;
 		pci_write_config_byte(pdev, 0x52, tmp8);
···
 	if (rc)
 		return rc;
 
-	svia_configure(pdev);
+	svia_configure(pdev, board_id);
 
 	pci_set_master(pdev);
 	return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,
+30 -4
drivers/base/power/main.c
··· 475 475 */ 476 476 void dpm_resume_noirq(pm_message_t state) 477 477 { 478 - struct device *dev; 478 + struct list_head list; 479 479 ktime_t starttime = ktime_get(); 480 480 481 + INIT_LIST_HEAD(&list); 481 482 mutex_lock(&dpm_list_mtx); 482 483 transition_started = false; 483 - list_for_each_entry(dev, &dpm_list, power.entry) 484 + while (!list_empty(&dpm_list)) { 485 + struct device *dev = to_device(dpm_list.next); 486 + 487 + get_device(dev); 484 488 if (dev->power.status > DPM_OFF) { 485 489 int error; 486 490 487 491 dev->power.status = DPM_OFF; 492 + mutex_unlock(&dpm_list_mtx); 493 + 488 494 error = device_resume_noirq(dev, state); 495 + 496 + mutex_lock(&dpm_list_mtx); 489 497 if (error) 490 498 pm_dev_err(dev, state, " early", error); 491 499 } 500 + if (!list_empty(&dev->power.entry)) 501 + list_move_tail(&dev->power.entry, &list); 502 + put_device(dev); 503 + } 504 + list_splice(&list, &dpm_list); 492 505 mutex_unlock(&dpm_list_mtx); 493 506 dpm_show_time(starttime, state, "early"); 494 507 resume_device_irqs(); ··· 802 789 */ 803 790 int dpm_suspend_noirq(pm_message_t state) 804 791 { 805 - struct device *dev; 792 + struct list_head list; 806 793 ktime_t starttime = ktime_get(); 807 794 int error = 0; 808 795 796 + INIT_LIST_HEAD(&list); 809 797 suspend_device_irqs(); 810 798 mutex_lock(&dpm_list_mtx); 811 - list_for_each_entry_reverse(dev, &dpm_list, power.entry) { 799 + while (!list_empty(&dpm_list)) { 800 + struct device *dev = to_device(dpm_list.prev); 801 + 802 + get_device(dev); 803 + mutex_unlock(&dpm_list_mtx); 804 + 812 805 error = device_suspend_noirq(dev, state); 806 + 807 + mutex_lock(&dpm_list_mtx); 813 808 if (error) { 814 809 pm_dev_err(dev, state, " late", error); 810 + put_device(dev); 815 811 break; 816 812 } 817 813 dev->power.status = DPM_OFF_IRQ; 814 + if (!list_empty(&dev->power.entry)) 815 + list_move(&dev->power.entry, &list); 816 + put_device(dev); 818 817 } 818 + list_splice_tail(&list, &dpm_list); 819 819 
mutex_unlock(&dpm_list_mtx); 820 820 if (error) 821 821 dpm_resume_noirq(resume_event(state));
+5 -3
drivers/block/cciss_scsi.c
··· 62 62 int length, /* length of data in buffer */ 63 63 int func); /* 0 == read, 1 == write */ 64 64 65 - static int cciss_scsi_queue_command (struct scsi_cmnd *cmd, 66 - void (* done)(struct scsi_cmnd *)); 65 + static int cciss_scsi_queue_command (struct Scsi_Host *h, 66 + struct scsi_cmnd *cmd); 67 67 static int cciss_eh_device_reset_handler(struct scsi_cmnd *); 68 68 static int cciss_eh_abort_handler(struct scsi_cmnd *); 69 69 ··· 1406 1406 1407 1407 1408 1408 static int 1409 - cciss_scsi_queue_command (struct scsi_cmnd *cmd, void (* done)(struct scsi_cmnd *)) 1409 + cciss_scsi_queue_command_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 1410 1410 { 1411 1411 ctlr_info_t *h; 1412 1412 int rc; ··· 1503 1503 /* the cmd'll come back via intr handler in complete_scsi_command() */ 1504 1504 return 0; 1505 1505 } 1506 + 1507 + static DEF_SCSI_QCMD(cciss_scsi_queue_command) 1506 1508 1507 1509 static void cciss_unregister_scsi(ctlr_info_t *h) 1508 1510 {
-1
drivers/block/drbd/drbd_receiver.c
··· 36 36 #include <linux/memcontrol.h> 37 37 #include <linux/mm_inline.h> 38 38 #include <linux/slab.h> 39 - #include <linux/smp_lock.h> 40 39 #include <linux/pkt_sched.h> 41 40 #define __KERNEL_SYSCALLS__ 42 41 #include <linux/unistd.h>
-1
drivers/block/drbd/drbd_worker.c
··· 26 26 #include <linux/module.h> 27 27 #include <linux/drbd.h> 28 28 #include <linux/sched.h> 29 - #include <linux/smp_lock.h> 30 29 #include <linux/wait.h> 31 30 #include <linux/mm.h> 32 31 #include <linux/memcontrol.h>
-1
drivers/char/agp/frontend.c
··· 39 39 #include <linux/mm.h> 40 40 #include <linux/fs.h> 41 41 #include <linux/sched.h> 42 - #include <linux/smp_lock.h> 43 42 #include <asm/uaccess.h> 44 43 #include <asm/pgtable.h> 45 44 #include "agp.h"
-1
drivers/char/amiserial.c
··· 81 81 #include <linux/mm.h> 82 82 #include <linux/seq_file.h> 83 83 #include <linux/slab.h> 84 - #include <linux/smp_lock.h> 85 84 #include <linux/init.h> 86 85 #include <linux/bitops.h> 87 86 #include <linux/platform_device.h>
-1
drivers/char/briq_panel.c
··· 6 6 7 7 #include <linux/module.h> 8 8 9 - #include <linux/smp_lock.h> 10 9 #include <linux/types.h> 11 10 #include <linux/errno.h> 12 11 #include <linux/tty.h>
-1
drivers/char/hpet.c
··· 14 14 #include <linux/interrupt.h> 15 15 #include <linux/module.h> 16 16 #include <linux/kernel.h> 17 - #include <linux/smp_lock.h> 18 17 #include <linux/types.h> 19 18 #include <linux/miscdevice.h> 20 19 #include <linux/major.h>
-1
drivers/char/hw_random/core.c
··· 37 37 #include <linux/kernel.h> 38 38 #include <linux/fs.h> 39 39 #include <linux/sched.h> 40 - #include <linux/smp_lock.h> 41 40 #include <linux/init.h> 42 41 #include <linux/miscdevice.h> 43 42 #include <linux/delay.h>
-1
drivers/char/istallion.c
··· 21 21 #include <linux/module.h> 22 22 #include <linux/sched.h> 23 23 #include <linux/slab.h> 24 - #include <linux/smp_lock.h> 25 24 #include <linux/interrupt.h> 26 25 #include <linux/tty.h> 27 26 #include <linux/tty_flip.h>
-1
drivers/char/serial167.c
··· 52 52 #include <linux/interrupt.h> 53 53 #include <linux/serial.h> 54 54 #include <linux/serialP.h> 55 - #include <linux/smp_lock.h> 56 55 #include <linux/string.h> 57 56 #include <linux/fcntl.h> 58 57 #include <linux/ptrace.h>
-1
drivers/char/specialix.c
··· 87 87 #include <linux/tty_flip.h> 88 88 #include <linux/mm.h> 89 89 #include <linux/serial.h> 90 - #include <linux/smp_lock.h> 91 90 #include <linux/fcntl.h> 92 91 #include <linux/major.h> 93 92 #include <linux/delay.h>
-1
drivers/char/stallion.c
··· 40 40 #include <linux/stallion.h> 41 41 #include <linux/ioport.h> 42 42 #include <linux/init.h> 43 - #include <linux/smp_lock.h> 44 43 #include <linux/device.h> 45 44 #include <linux/delay.h> 46 45 #include <linux/ctype.h>
-1
drivers/char/sx.c
··· 216 216 #include <linux/eisa.h> 217 217 #include <linux/pci.h> 218 218 #include <linux/slab.h> 219 - #include <linux/smp_lock.h> 220 219 #include <linux/init.h> 221 220 #include <linux/miscdevice.h> 222 221 #include <linux/bitops.h>
-1
drivers/char/uv_mmtimer.c
··· 23 23 #include <linux/interrupt.h> 24 24 #include <linux/time.h> 25 25 #include <linux/math64.h> 26 - #include <linux/smp_lock.h> 27 26 28 27 #include <asm/genapic.h> 29 28 #include <asm/uv/uv_hub.h>
+3 -1
drivers/firewire/sbp2.c
··· 1468 1468 1469 1469 /* SCSI stack integration */ 1470 1470 1471 - static int sbp2_scsi_queuecommand(struct scsi_cmnd *cmd, scsi_done_fn_t done) 1471 + static int sbp2_scsi_queuecommand_lck(struct scsi_cmnd *cmd, scsi_done_fn_t done) 1472 1472 { 1473 1473 struct sbp2_logical_unit *lu = cmd->device->hostdata; 1474 1474 struct fw_device *device = target_device(lu->tgt); ··· 1533 1533 kref_put(&orb->base.kref, free_orb); 1534 1534 return retval; 1535 1535 } 1536 + 1537 + static DEF_SCSI_QCMD(sbp2_scsi_queuecommand) 1536 1538 1537 1539 static int sbp2_scsi_slave_alloc(struct scsi_device *sdev) 1538 1540 {
-1
drivers/gpu/drm/drm_fops.c
··· 37 37 #include "drmP.h" 38 38 #include <linux/poll.h> 39 39 #include <linux/slab.h> 40 - #include <linux/smp_lock.h> 41 40 42 41 /* from BKL pushdown: note that nothing else serializes idr_find() */ 43 42 DEFINE_MUTEX(drm_global_mutex);
+2 -1
drivers/gpu/drm/i915/i915_drv.c
··· 150 150 151 151 static const struct intel_device_info intel_ironlake_m_info = { 152 152 .gen = 5, .is_mobile = 1, 153 - .need_gfx_hws = 1, .has_fbc = 1, .has_rc6 = 1, .has_hotplug = 1, 153 + .need_gfx_hws = 1, .has_rc6 = 1, .has_hotplug = 1, 154 + .has_fbc = 0, /* disabled due to buggy hardware */ 154 155 .has_bsd_ring = 1, 155 156 }; 156 157
+2
drivers/gpu/drm/i915/i915_drv.h
··· 1045 1045 int i915_gem_object_set_domain(struct drm_gem_object *obj, 1046 1046 uint32_t read_domains, 1047 1047 uint32_t write_domain); 1048 + int i915_gem_object_flush_gpu(struct drm_i915_gem_object *obj, 1049 + bool interruptible); 1048 1050 int i915_gem_init_ringbuffer(struct drm_device *dev); 1049 1051 void i915_gem_cleanup_ringbuffer(struct drm_device *dev); 1050 1052 int i915_gem_do_init(struct drm_device *dev, unsigned long start,
+41 -36
drivers/gpu/drm/i915/i915_gem.c
··· 547 547 struct drm_i915_gem_object *obj_priv; 548 548 int ret = 0; 549 549 550 + if (args->size == 0) 551 + return 0; 552 + 553 + if (!access_ok(VERIFY_WRITE, 554 + (char __user *)(uintptr_t)args->data_ptr, 555 + args->size)) 556 + return -EFAULT; 557 + 558 + ret = fault_in_pages_writeable((char __user *)(uintptr_t)args->data_ptr, 559 + args->size); 560 + if (ret) 561 + return -EFAULT; 562 + 550 563 ret = i915_mutex_lock_interruptible(dev); 551 564 if (ret) 552 565 return ret; ··· 574 561 /* Bounds check source. */ 575 562 if (args->offset > obj->size || args->size > obj->size - args->offset) { 576 563 ret = -EINVAL; 577 - goto out; 578 - } 579 - 580 - if (args->size == 0) 581 - goto out; 582 - 583 - if (!access_ok(VERIFY_WRITE, 584 - (char __user *)(uintptr_t)args->data_ptr, 585 - args->size)) { 586 - ret = -EFAULT; 587 - goto out; 588 - } 589 - 590 - ret = fault_in_pages_writeable((char __user *)(uintptr_t)args->data_ptr, 591 - args->size); 592 - if (ret) { 593 - ret = -EFAULT; 594 564 goto out; 595 565 } 596 566 ··· 977 981 struct drm_i915_gem_pwrite *args = data; 978 982 struct drm_gem_object *obj; 979 983 struct drm_i915_gem_object *obj_priv; 980 - int ret = 0; 984 + int ret; 985 + 986 + if (args->size == 0) 987 + return 0; 988 + 989 + if (!access_ok(VERIFY_READ, 990 + (char __user *)(uintptr_t)args->data_ptr, 991 + args->size)) 992 + return -EFAULT; 993 + 994 + ret = fault_in_pages_readable((char __user *)(uintptr_t)args->data_ptr, 995 + args->size); 996 + if (ret) 997 + return -EFAULT; 981 998 982 999 ret = i915_mutex_lock_interruptible(dev); 983 1000 if (ret) ··· 1003 994 } 1004 995 obj_priv = to_intel_bo(obj); 1005 996 1006 - 1007 997 /* Bounds check destination. 
*/ 1008 998 if (args->offset > obj->size || args->size > obj->size - args->offset) { 1009 999 ret = -EINVAL; 1010 - goto out; 1011 - } 1012 - 1013 - if (args->size == 0) 1014 - goto out; 1015 - 1016 - if (!access_ok(VERIFY_READ, 1017 - (char __user *)(uintptr_t)args->data_ptr, 1018 - args->size)) { 1019 - ret = -EFAULT; 1020 - goto out; 1021 - } 1022 - 1023 - ret = fault_in_pages_readable((char __user *)(uintptr_t)args->data_ptr, 1024 - args->size); 1025 - if (ret) { 1026 - ret = -EFAULT; 1027 1000 goto out; 1028 1001 } 1029 1002 ··· 2896 2905 obj->write_domain); 2897 2906 2898 2907 return 0; 2908 + } 2909 + 2910 + int 2911 + i915_gem_object_flush_gpu(struct drm_i915_gem_object *obj, 2912 + bool interruptible) 2913 + { 2914 + if (!obj->active) 2915 + return 0; 2916 + 2917 + if (obj->base.write_domain & I915_GEM_GPU_DOMAINS) 2918 + i915_gem_flush_ring(obj->base.dev, NULL, obj->ring, 2919 + 0, obj->base.write_domain); 2920 + 2921 + return i915_gem_object_wait_rendering(&obj->base, interruptible); 2899 2922 } 2900 2923 2901 2924 /**
+89 -60
drivers/gpu/drm/i915/intel_crt.c
··· 34 34 #include "i915_drm.h" 35 35 #include "i915_drv.h" 36 36 37 + /* Here's the desired hotplug mode */ 38 + #define ADPA_HOTPLUG_BITS (ADPA_CRT_HOTPLUG_PERIOD_128 | \ 39 + ADPA_CRT_HOTPLUG_WARMUP_10MS | \ 40 + ADPA_CRT_HOTPLUG_SAMPLE_4S | \ 41 + ADPA_CRT_HOTPLUG_VOLTAGE_50 | \ 42 + ADPA_CRT_HOTPLUG_VOLREF_325MV | \ 43 + ADPA_CRT_HOTPLUG_ENABLE) 44 + 45 + struct intel_crt { 46 + struct intel_encoder base; 47 + bool force_hotplug_required; 48 + }; 49 + 50 + static struct intel_crt *intel_attached_crt(struct drm_connector *connector) 51 + { 52 + return container_of(intel_attached_encoder(connector), 53 + struct intel_crt, base); 54 + } 55 + 37 56 static void intel_crt_dpms(struct drm_encoder *encoder, int mode) 38 57 { 39 58 struct drm_device *dev = encoder->dev; ··· 148 129 dpll_md & ~DPLL_MD_UDI_MULTIPLIER_MASK); 149 130 } 150 131 151 - adpa = 0; 132 + adpa = ADPA_HOTPLUG_BITS; 152 133 if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC) 153 134 adpa |= ADPA_HSYNC_ACTIVE_HIGH; 154 135 if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC) ··· 176 157 static bool intel_ironlake_crt_detect_hotplug(struct drm_connector *connector) 177 158 { 178 159 struct drm_device *dev = connector->dev; 160 + struct intel_crt *crt = intel_attached_crt(connector); 179 161 struct drm_i915_private *dev_priv = dev->dev_private; 180 - u32 adpa, temp; 162 + u32 adpa; 181 163 bool ret; 182 - bool turn_off_dac = false; 183 164 184 - temp = adpa = I915_READ(PCH_ADPA); 165 + /* The first time through, trigger an explicit detection cycle */ 166 + if (crt->force_hotplug_required) { 167 + bool turn_off_dac = HAS_PCH_SPLIT(dev); 168 + u32 save_adpa; 185 169 186 - if (HAS_PCH_SPLIT(dev)) 187 - turn_off_dac = true; 170 + crt->force_hotplug_required = 0; 188 171 189 - adpa &= ~ADPA_CRT_HOTPLUG_MASK; 190 - if (turn_off_dac) 191 - adpa &= ~ADPA_DAC_ENABLE; 172 + save_adpa = adpa = I915_READ(PCH_ADPA); 173 + DRM_DEBUG_KMS("trigger hotplug detect cycle: adpa=0x%x\n", adpa); 192 174 193 - /* disable HPD first 
*/ 194 - I915_WRITE(PCH_ADPA, adpa); 195 - (void)I915_READ(PCH_ADPA); 175 + adpa |= ADPA_CRT_HOTPLUG_FORCE_TRIGGER; 176 + if (turn_off_dac) 177 + adpa &= ~ADPA_DAC_ENABLE; 196 178 197 - adpa |= (ADPA_CRT_HOTPLUG_PERIOD_128 | 198 - ADPA_CRT_HOTPLUG_WARMUP_10MS | 199 - ADPA_CRT_HOTPLUG_SAMPLE_4S | 200 - ADPA_CRT_HOTPLUG_VOLTAGE_50 | /* default */ 201 - ADPA_CRT_HOTPLUG_VOLREF_325MV | 202 - ADPA_CRT_HOTPLUG_ENABLE | 203 - ADPA_CRT_HOTPLUG_FORCE_TRIGGER); 179 + I915_WRITE(PCH_ADPA, adpa); 204 180 205 - DRM_DEBUG_KMS("pch crt adpa 0x%x", adpa); 206 - I915_WRITE(PCH_ADPA, adpa); 181 + if (wait_for((I915_READ(PCH_ADPA) & ADPA_CRT_HOTPLUG_FORCE_TRIGGER) == 0, 182 + 1000)) 183 + DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER"); 207 184 208 - if (wait_for((I915_READ(PCH_ADPA) & ADPA_CRT_HOTPLUG_FORCE_TRIGGER) == 0, 209 - 1000)) 210 - DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER"); 211 - 212 - if (turn_off_dac) { 213 - /* Make sure hotplug is enabled */ 214 - I915_WRITE(PCH_ADPA, temp | ADPA_CRT_HOTPLUG_ENABLE); 215 - (void)I915_READ(PCH_ADPA); 185 + if (turn_off_dac) { 186 + I915_WRITE(PCH_ADPA, save_adpa); 187 + POSTING_READ(PCH_ADPA); 188 + } 216 189 } 217 190 218 191 /* Check the status to see if both blue and green are on now */ 219 192 adpa = I915_READ(PCH_ADPA); 220 - adpa &= ADPA_CRT_HOTPLUG_MONITOR_MASK; 221 - if ((adpa == ADPA_CRT_HOTPLUG_MONITOR_COLOR) || 222 - (adpa == ADPA_CRT_HOTPLUG_MONITOR_MONO)) 193 + if ((adpa & ADPA_CRT_HOTPLUG_MONITOR_MASK) != 0) 223 194 ret = true; 224 195 else 225 196 ret = false; 197 + DRM_DEBUG_KMS("ironlake hotplug adpa=0x%x, result %d\n", adpa, ret); 226 198 227 199 return ret; 228 200 } ··· 287 277 return i2c_transfer(&dev_priv->gmbus[ddc_bus].adapter, msgs, 1) == 1; 288 278 } 289 279 290 - static bool intel_crt_detect_ddc(struct drm_encoder *encoder) 280 + static bool intel_crt_detect_ddc(struct intel_crt *crt) 291 281 { 292 - struct intel_encoder *intel_encoder = to_intel_encoder(encoder); 293 - struct drm_i915_private 
*dev_priv = encoder->dev->dev_private; 282 + struct drm_i915_private *dev_priv = crt->base.base.dev->dev_private; 294 283 295 284 /* CRT should always be at 0, but check anyway */ 296 - if (intel_encoder->type != INTEL_OUTPUT_ANALOG) 285 + if (crt->base.type != INTEL_OUTPUT_ANALOG) 297 286 return false; 298 287 299 288 if (intel_crt_ddc_probe(dev_priv, dev_priv->crt_ddc_pin)) { ··· 300 291 return true; 301 292 } 302 293 303 - if (intel_ddc_probe(intel_encoder, dev_priv->crt_ddc_pin)) { 294 + if (intel_ddc_probe(&crt->base, dev_priv->crt_ddc_pin)) { 304 295 DRM_DEBUG_KMS("CRT detected via DDC:0x50 [EDID]\n"); 305 296 return true; 306 297 } ··· 309 300 } 310 301 311 302 static enum drm_connector_status 312 - intel_crt_load_detect(struct drm_crtc *crtc, struct intel_encoder *intel_encoder) 303 + intel_crt_load_detect(struct drm_crtc *crtc, struct intel_crt *crt) 313 304 { 314 - struct drm_encoder *encoder = &intel_encoder->base; 305 + struct drm_encoder *encoder = &crt->base.base; 315 306 struct drm_device *dev = encoder->dev; 316 307 struct drm_i915_private *dev_priv = dev->dev_private; 317 308 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); ··· 443 434 intel_crt_detect(struct drm_connector *connector, bool force) 444 435 { 445 436 struct drm_device *dev = connector->dev; 446 - struct intel_encoder *encoder = intel_attached_encoder(connector); 437 + struct intel_crt *crt = intel_attached_crt(connector); 447 438 struct drm_crtc *crtc; 448 439 int dpms_mode; 449 440 enum drm_connector_status status; ··· 452 443 if (intel_crt_detect_hotplug(connector)) { 453 444 DRM_DEBUG_KMS("CRT detected via hotplug\n"); 454 445 return connector_status_connected; 455 - } else 446 + } else { 447 + DRM_DEBUG_KMS("CRT not detected via hotplug\n"); 456 448 return connector_status_disconnected; 449 + } 457 450 } 458 451 459 - if (intel_crt_detect_ddc(&encoder->base)) 452 + if (intel_crt_detect_ddc(crt)) 460 453 return connector_status_connected; 461 454 462 455 if (!force) 463 456 
return connector->status; 464 457 465 458 /* for pre-945g platforms use load detect */ 466 - if (encoder->base.crtc && encoder->base.crtc->enabled) { 467 - status = intel_crt_load_detect(encoder->base.crtc, encoder); 459 + crtc = crt->base.base.crtc; 460 + if (crtc && crtc->enabled) { 461 + status = intel_crt_load_detect(crtc, crt); 468 462 } else { 469 - crtc = intel_get_load_detect_pipe(encoder, connector, 463 + crtc = intel_get_load_detect_pipe(&crt->base, connector, 470 464 NULL, &dpms_mode); 471 465 if (crtc) { 472 - if (intel_crt_detect_ddc(&encoder->base)) 466 + if (intel_crt_detect_ddc(crt)) 473 467 status = connector_status_connected; 474 468 else 475 - status = intel_crt_load_detect(crtc, encoder); 476 - intel_release_load_detect_pipe(encoder, 469 + status = intel_crt_load_detect(crtc, crt); 470 + intel_release_load_detect_pipe(&crt->base, 477 471 connector, dpms_mode); 478 472 } else 479 473 status = connector_status_unknown; ··· 548 536 void intel_crt_init(struct drm_device *dev) 549 537 { 550 538 struct drm_connector *connector; 551 - struct intel_encoder *intel_encoder; 539 + struct intel_crt *crt; 552 540 struct intel_connector *intel_connector; 553 541 struct drm_i915_private *dev_priv = dev->dev_private; 554 542 555 - intel_encoder = kzalloc(sizeof(struct intel_encoder), GFP_KERNEL); 556 - if (!intel_encoder) 543 + crt = kzalloc(sizeof(struct intel_crt), GFP_KERNEL); 544 + if (!crt) 557 545 return; 558 546 559 547 intel_connector = kzalloc(sizeof(struct intel_connector), GFP_KERNEL); 560 548 if (!intel_connector) { 561 - kfree(intel_encoder); 549 + kfree(crt); 562 550 return; 563 551 } 564 552 ··· 566 554 drm_connector_init(dev, &intel_connector->base, 567 555 &intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA); 568 556 569 - drm_encoder_init(dev, &intel_encoder->base, &intel_crt_enc_funcs, 557 + drm_encoder_init(dev, &crt->base.base, &intel_crt_enc_funcs, 570 558 DRM_MODE_ENCODER_DAC); 571 559 572 - intel_connector_attach_encoder(intel_connector, 
intel_encoder); 560 + intel_connector_attach_encoder(intel_connector, &crt->base); 573 561 574 - intel_encoder->type = INTEL_OUTPUT_ANALOG; 575 - intel_encoder->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) | 576 - (1 << INTEL_ANALOG_CLONE_BIT) | 577 - (1 << INTEL_SDVO_LVDS_CLONE_BIT); 578 - intel_encoder->crtc_mask = (1 << 0) | (1 << 1); 562 + crt->base.type = INTEL_OUTPUT_ANALOG; 563 + crt->base.clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT | 564 + 1 << INTEL_ANALOG_CLONE_BIT | 565 + 1 << INTEL_SDVO_LVDS_CLONE_BIT); 566 + crt->base.crtc_mask = (1 << 0) | (1 << 1); 579 567 connector->interlace_allowed = 1; 580 568 connector->doublescan_allowed = 0; 581 569 582 - drm_encoder_helper_add(&intel_encoder->base, &intel_crt_helper_funcs); 570 + drm_encoder_helper_add(&crt->base.base, &intel_crt_helper_funcs); 583 571 drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs); 584 572 585 573 drm_sysfs_connector_add(connector); ··· 588 576 connector->polled = DRM_CONNECTOR_POLL_HPD; 589 577 else 590 578 connector->polled = DRM_CONNECTOR_POLL_CONNECT; 579 + 580 + /* 581 + * Configure the automatic hotplug detection stuff 582 + */ 583 + crt->force_hotplug_required = 0; 584 + if (HAS_PCH_SPLIT(dev)) { 585 + u32 adpa; 586 + 587 + adpa = I915_READ(PCH_ADPA); 588 + adpa &= ~ADPA_CRT_HOTPLUG_MASK; 589 + adpa |= ADPA_HOTPLUG_BITS; 590 + I915_WRITE(PCH_ADPA, adpa); 591 + POSTING_READ(PCH_ADPA); 592 + 593 + DRM_DEBUG_KMS("pch crt adpa set to 0x%x\n", adpa); 594 + crt->force_hotplug_required = 1; 595 + } 591 596 592 597 dev_priv->hotplug_supported_mask |= CRT_HOTPLUG_INT_STATUS; 593 598 }
+12
drivers/gpu/drm/i915/intel_display.c
··· 1611 1611 1612 1612 wait_event(dev_priv->pending_flip_queue, 1613 1613 atomic_read(&obj_priv->pending_flip) == 0); 1614 + 1615 + /* Big Hammer, we also need to ensure that any pending 1616 + * MI_WAIT_FOR_EVENT inside a user batch buffer on the 1617 + * current scanout is retired before unpinning the old 1618 + * framebuffer. 1619 + */ 1620 + ret = i915_gem_object_flush_gpu(obj_priv, false); 1621 + if (ret) { 1622 + i915_gem_object_unpin(to_intel_framebuffer(crtc->fb)->obj); 1623 + mutex_unlock(&dev->struct_mutex); 1624 + return ret; 1625 + } 1614 1626 } 1615 1627 1616 1628 ret = intel_pipe_set_base_atomic(crtc, crtc->fb, x, y,
+6 -5
drivers/gpu/drm/i915/intel_i2c.c
··· 160 160 }; 161 161 struct intel_gpio *gpio; 162 162 163 - if (pin < 1 || pin > 7) 163 + if (pin >= ARRAY_SIZE(map_pin_to_reg) || !map_pin_to_reg[pin]) 164 164 return NULL; 165 165 166 166 gpio = kzalloc(sizeof(struct intel_gpio), GFP_KERNEL); ··· 172 172 gpio->reg += PCH_GPIOA - GPIOA; 173 173 gpio->dev_priv = dev_priv; 174 174 175 - snprintf(gpio->adapter.name, I2C_NAME_SIZE, "GPIO%c", "?BACDEF?"[pin]); 175 + snprintf(gpio->adapter.name, sizeof(gpio->adapter.name), 176 + "i915 GPIO%c", "?BACDE?F"[pin]); 176 177 gpio->adapter.owner = THIS_MODULE; 177 178 gpio->adapter.algo_data = &gpio->algo; 178 179 gpio->adapter.dev.parent = &dev_priv->dev->pdev->dev; ··· 350 349 "panel", 351 350 "dpc", 352 351 "dpb", 353 - "reserved" 352 + "reserved", 354 353 "dpd", 355 354 }; 356 355 struct drm_i915_private *dev_priv = dev->dev_private; ··· 367 366 bus->adapter.owner = THIS_MODULE; 368 367 bus->adapter.class = I2C_CLASS_DDC; 369 368 snprintf(bus->adapter.name, 370 - I2C_NAME_SIZE, 371 - "gmbus %s", 369 + sizeof(bus->adapter.name), 370 + "i915 gmbus %s", 372 371 names[i]); 373 372 374 373 bus->adapter.dev.parent = &dev->pdev->dev;
+9
drivers/gpu/drm/nouveau/nouveau_backlight.c
··· 31 31 */ 32 32 33 33 #include <linux/backlight.h> 34 + #include <linux/acpi.h> 34 35 35 36 #include "drmP.h" 36 37 #include "nouveau_drv.h" ··· 136 135 int nouveau_backlight_init(struct drm_device *dev) 137 136 { 138 137 struct drm_nouveau_private *dev_priv = dev->dev_private; 138 + 139 + #ifdef CONFIG_ACPI 140 + if (acpi_video_backlight_support()) { 141 + NV_INFO(dev, "ACPI backlight interface available, " 142 + "not registering our own\n"); 143 + return 0; 144 + } 145 + #endif 139 146 140 147 switch (dev_priv->card_type) { 141 148 case NV_40:
+1 -1
drivers/gpu/drm/nouveau/nouveau_bios.c
··· 6829 6829 struct drm_nouveau_private *dev_priv = dev->dev_private; 6830 6830 unsigned htotal; 6831 6831 6832 - if (dev_priv->chipset >= NV_50) { 6832 + if (dev_priv->card_type >= NV_50) { 6833 6833 if (NVReadVgaCrtc(dev, 0, 0x00) == 0 && 6834 6834 NVReadVgaCrtc(dev, 0, 0x1a) == 0) 6835 6835 return false;
+38 -5
drivers/gpu/drm/nouveau/nouveau_bo.c
··· 143 143 nvbo->no_vm = no_vm; 144 144 nvbo->tile_mode = tile_mode; 145 145 nvbo->tile_flags = tile_flags; 146 + nvbo->bo.bdev = &dev_priv->ttm.bdev; 146 147 147 - nouveau_bo_fixup_align(dev, tile_mode, tile_flags, &align, &size); 148 + nouveau_bo_fixup_align(dev, tile_mode, nouveau_bo_tile_layout(nvbo), 149 + &align, &size); 148 150 align >>= PAGE_SHIFT; 149 151 150 152 nouveau_bo_placement_set(nvbo, flags, 0); ··· 178 176 pl[(*n)++] = TTM_PL_FLAG_SYSTEM | flags; 179 177 } 180 178 179 + static void 180 + set_placement_range(struct nouveau_bo *nvbo, uint32_t type) 181 + { 182 + struct drm_nouveau_private *dev_priv = nouveau_bdev(nvbo->bo.bdev); 183 + 184 + if (dev_priv->card_type == NV_10 && 185 + nvbo->tile_mode && (type & TTM_PL_FLAG_VRAM)) { 186 + /* 187 + * Make sure that the color and depth buffers are handled 188 + * by independent memory controller units. Up to a 9x 189 + * speed up when alpha-blending and depth-test are enabled 190 + * at the same time. 191 + */ 192 + int vram_pages = dev_priv->vram_size >> PAGE_SHIFT; 193 + 194 + if (nvbo->tile_flags & NOUVEAU_GEM_TILE_ZETA) { 195 + nvbo->placement.fpfn = vram_pages / 2; 196 + nvbo->placement.lpfn = ~0; 197 + } else { 198 + nvbo->placement.fpfn = 0; 199 + nvbo->placement.lpfn = vram_pages / 2; 200 + } 201 + } 202 + } 203 + 181 204 void 182 205 nouveau_bo_placement_set(struct nouveau_bo *nvbo, uint32_t type, uint32_t busy) 183 206 { ··· 217 190 pl->busy_placement = nvbo->busy_placements; 218 191 set_placement_list(nvbo->busy_placements, &pl->num_busy_placement, 219 192 type | busy, flags); 193 + 194 + set_placement_range(nvbo, type); 220 195 } 221 196 222 197 int ··· 554 525 stride = 16 * 4; 555 526 height = amount / stride; 556 527 557 - if (new_mem->mem_type == TTM_PL_VRAM && nvbo->tile_flags) { 528 + if (new_mem->mem_type == TTM_PL_VRAM && 529 + nouveau_bo_tile_layout(nvbo)) { 558 530 ret = RING_SPACE(chan, 8); 559 531 if (ret) 560 532 return ret; ··· 576 546 BEGIN_RING(chan, NvSubM2MF, 0x0200, 1); 577 
547 OUT_RING (chan, 1); 578 548 } 579 - if (old_mem->mem_type == TTM_PL_VRAM && nvbo->tile_flags) { 549 + if (old_mem->mem_type == TTM_PL_VRAM && 550 + nouveau_bo_tile_layout(nvbo)) { 580 551 ret = RING_SPACE(chan, 8); 581 552 if (ret) 582 553 return ret; ··· 784 753 if (dev_priv->card_type == NV_50) { 785 754 ret = nv50_mem_vm_bind_linear(dev, 786 755 offset + dev_priv->vm_vram_base, 787 - new_mem->size, nvbo->tile_flags, 756 + new_mem->size, 757 + nouveau_bo_tile_layout(nvbo), 788 758 offset); 789 759 if (ret) 790 760 return ret; ··· 926 894 * nothing to do here. 927 895 */ 928 896 if (bo->mem.mem_type != TTM_PL_VRAM) { 929 - if (dev_priv->card_type < NV_50 || !nvbo->tile_flags) 897 + if (dev_priv->card_type < NV_50 || 898 + !nouveau_bo_tile_layout(nvbo)) 930 899 return 0; 931 900 } 932 901
+30 -47
drivers/gpu/drm/nouveau/nouveau_connector.c
··· 281 281 nv_encoder = find_encoder_by_type(connector, OUTPUT_ANALOG); 282 282 if (!nv_encoder && !nouveau_tv_disable) 283 283 nv_encoder = find_encoder_by_type(connector, OUTPUT_TV); 284 - if (nv_encoder) { 284 + if (nv_encoder && force) { 285 285 struct drm_encoder *encoder = to_drm_encoder(nv_encoder); 286 286 struct drm_encoder_helper_funcs *helper = 287 287 encoder->helper_private; ··· 641 641 return ret; 642 642 } 643 643 644 + static unsigned 645 + get_tmds_link_bandwidth(struct drm_connector *connector) 646 + { 647 + struct nouveau_connector *nv_connector = nouveau_connector(connector); 648 + struct drm_nouveau_private *dev_priv = connector->dev->dev_private; 649 + struct dcb_entry *dcb = nv_connector->detected_encoder->dcb; 650 + 651 + if (dcb->location != DCB_LOC_ON_CHIP || 652 + dev_priv->chipset >= 0x46) 653 + return 165000; 654 + else if (dev_priv->chipset >= 0x40) 655 + return 155000; 656 + else if (dev_priv->chipset >= 0x18) 657 + return 135000; 658 + else 659 + return 112000; 660 + } 661 + 644 662 static int 645 663 nouveau_connector_mode_valid(struct drm_connector *connector, 646 664 struct drm_display_mode *mode) 647 665 { 648 - struct drm_nouveau_private *dev_priv = connector->dev->dev_private; 649 666 struct nouveau_connector *nv_connector = nouveau_connector(connector); 650 667 struct nouveau_encoder *nv_encoder = nv_connector->detected_encoder; 651 668 struct drm_encoder *encoder = to_drm_encoder(nv_encoder); ··· 680 663 max_clock = 400000; 681 664 break; 682 665 case OUTPUT_TMDS: 683 - if ((dev_priv->card_type >= NV_50 && !nouveau_duallink) || 684 - !nv_encoder->dcb->duallink_possible) 685 - max_clock = 165000; 686 - else 687 - max_clock = 330000; 666 + max_clock = get_tmds_link_bandwidth(connector); 667 + if (nouveau_duallink && nv_encoder->dcb->duallink_possible) 668 + max_clock *= 2; 688 669 break; 689 670 case OUTPUT_ANALOG: 690 671 max_clock = nv_encoder->dcb->crtconf.maxfreq; ··· 722 707 return 
to_drm_encoder(nv_connector->detected_encoder); 723 708 724 709 return NULL; 725 - } 726 - 727 - void 728 - nouveau_connector_set_polling(struct drm_connector *connector) 729 - { 730 - struct drm_device *dev = connector->dev; 731 - struct drm_nouveau_private *dev_priv = dev->dev_private; 732 - struct drm_crtc *crtc; 733 - bool spare_crtc = false; 734 - 735 - list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) 736 - spare_crtc |= !crtc->enabled; 737 - 738 - connector->polled = 0; 739 - 740 - switch (connector->connector_type) { 741 - case DRM_MODE_CONNECTOR_VGA: 742 - case DRM_MODE_CONNECTOR_TV: 743 - if (dev_priv->card_type >= NV_50 || 744 - (nv_gf4_disp_arch(dev) && spare_crtc)) 745 - connector->polled = DRM_CONNECTOR_POLL_CONNECT; 746 - break; 747 - 748 - case DRM_MODE_CONNECTOR_DVII: 749 - case DRM_MODE_CONNECTOR_DVID: 750 - case DRM_MODE_CONNECTOR_HDMIA: 751 - case DRM_MODE_CONNECTOR_DisplayPort: 752 - case DRM_MODE_CONNECTOR_eDP: 753 - if (dev_priv->card_type >= NV_50) 754 - connector->polled = DRM_CONNECTOR_POLL_HPD; 755 - else if (connector->connector_type == DRM_MODE_CONNECTOR_DVID || 756 - spare_crtc) 757 - connector->polled = DRM_CONNECTOR_POLL_CONNECT; 758 - break; 759 - 760 - default: 761 - break; 762 - } 763 710 } 764 711 765 712 static const struct drm_connector_helper_funcs ··· 849 872 dev->mode_config.scaling_mode_property, 850 873 nv_connector->scaling_mode); 851 874 } 875 + connector->polled = DRM_CONNECTOR_POLL_CONNECT; 852 876 /* fall-through */ 853 877 case DCB_CONNECTOR_TV_0: 854 878 case DCB_CONNECTOR_TV_1: ··· 866 888 dev->mode_config.dithering_mode_property, 867 889 nv_connector->use_dithering ? 
868 890 DRM_MODE_DITHERING_ON : DRM_MODE_DITHERING_OFF); 891 + 892 + if (dcb->type != DCB_CONNECTOR_LVDS) { 893 + if (dev_priv->card_type >= NV_50) 894 + connector->polled = DRM_CONNECTOR_POLL_HPD; 895 + else 896 + connector->polled = DRM_CONNECTOR_POLL_CONNECT; 897 + } 869 898 break; 870 899 } 871 - 872 - nouveau_connector_set_polling(connector); 873 900 874 901 drm_sysfs_connector_add(connector); 875 902 dcb->drm = connector;
-3
drivers/gpu/drm/nouveau/nouveau_connector.h
··· 52 52 struct drm_connector * 53 53 nouveau_connector_create(struct drm_device *, int index); 54 54 55 - void 56 - nouveau_connector_set_polling(struct drm_connector *); 57 - 58 55 int 59 56 nouveau_connector_bpp(struct drm_connector *); 60 57
+17 -38
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 100 100 int pin_refcnt; 101 101 }; 102 102 103 + #define nouveau_bo_tile_layout(nvbo) \ 104 + ((nvbo)->tile_flags & NOUVEAU_GEM_TILE_LAYOUT_MASK) 105 + 103 106 static inline struct nouveau_bo * 104 107 nouveau_bo(struct ttm_buffer_object *bo) 105 108 { ··· 307 304 void (*destroy_context)(struct nouveau_channel *); 308 305 int (*load_context)(struct nouveau_channel *); 309 306 int (*unload_context)(struct drm_device *); 307 + void (*tlb_flush)(struct drm_device *dev); 310 308 }; 311 309 312 310 struct nouveau_pgraph_object_method { ··· 340 336 void (*destroy_context)(struct nouveau_channel *); 341 337 int (*load_context)(struct nouveau_channel *); 342 338 int (*unload_context)(struct drm_device *); 339 + void (*tlb_flush)(struct drm_device *dev); 343 340 344 341 void (*set_region_tiling)(struct drm_device *dev, int i, uint32_t addr, 345 342 uint32_t size, uint32_t pitch); ··· 490 485 }; 491 486 492 487 struct nv04_crtc_reg { 493 - unsigned char MiscOutReg; /* */ 488 + unsigned char MiscOutReg; 494 489 uint8_t CRTC[0xa0]; 495 490 uint8_t CR58[0x10]; 496 491 uint8_t Sequencer[5]; 497 492 uint8_t Graphics[9]; 498 493 uint8_t Attribute[21]; 499 - unsigned char DAC[768]; /* Internal Colorlookuptable */ 494 + unsigned char DAC[768]; 500 495 501 496 /* PCRTC regs */ 502 497 uint32_t fb_start; ··· 544 539 }; 545 540 546 541 struct nv04_mode_state { 547 - uint32_t bpp; 548 - uint32_t width; 549 - uint32_t height; 550 - uint32_t interlace; 551 - uint32_t repaint0; 552 - uint32_t repaint1; 553 - uint32_t screen; 554 - uint32_t scale; 555 - uint32_t dither; 556 - uint32_t extra; 557 - uint32_t fifo; 558 - uint32_t pixel; 559 - uint32_t horiz; 560 - int arbitration0; 561 - int arbitration1; 562 - uint32_t pll; 563 - uint32_t pllB; 564 - uint32_t vpll; 565 - uint32_t vpll2; 566 - uint32_t vpllB; 567 - uint32_t vpll2B; 542 + struct nv04_crtc_reg crtc_reg[2]; 568 543 uint32_t pllsel; 569 544 uint32_t sel_clk; 570 - uint32_t general; 571 - uint32_t crtcOwner; 572 - uint32_t 
head; 573 - uint32_t head2; 574 - uint32_t cursorConfig; 575 - uint32_t cursor0; 576 - uint32_t cursor1; 577 - uint32_t cursor2; 578 - uint32_t timingH; 579 - uint32_t timingV; 580 - uint32_t displayV; 581 - uint32_t crtcSync; 582 - 583 - struct nv04_crtc_reg crtc_reg[2]; 584 545 }; 585 546 586 547 enum nouveau_card_type { ··· 583 612 struct workqueue_struct *wq; 584 613 struct work_struct irq_work; 585 614 struct work_struct hpd_work; 615 + 616 + struct { 617 + spinlock_t lock; 618 + uint32_t hpd0_bits; 619 + uint32_t hpd1_bits; 620 + } hpd_state; 586 621 587 622 struct list_head vbl_waiting; 588 623 ··· 1022 1045 extern void nv50_fifo_destroy_context(struct nouveau_channel *); 1023 1046 extern int nv50_fifo_load_context(struct nouveau_channel *); 1024 1047 extern int nv50_fifo_unload_context(struct drm_device *); 1048 + extern void nv50_fifo_tlb_flush(struct drm_device *dev); 1025 1049 1026 1050 /* nvc0_fifo.c */ 1027 1051 extern int nvc0_fifo_init(struct drm_device *); ··· 1100 1122 extern int nv50_graph_unload_context(struct drm_device *); 1101 1123 extern void nv50_graph_context_switch(struct drm_device *); 1102 1124 extern int nv50_grctx_init(struct nouveau_grctx *); 1125 + extern void nv50_graph_tlb_flush(struct drm_device *dev); 1126 + extern void nv86_graph_tlb_flush(struct drm_device *dev); 1103 1127 1104 1128 /* nvc0_graph.c */ 1105 1129 extern int nvc0_graph_init(struct drm_device *); ··· 1219 1239 extern void nouveau_bo_wr16(struct nouveau_bo *nvbo, unsigned index, u16 val); 1220 1240 extern u32 nouveau_bo_rd32(struct nouveau_bo *nvbo, unsigned index); 1221 1241 extern void nouveau_bo_wr32(struct nouveau_bo *nvbo, unsigned index, u32 val); 1222 - extern int nouveau_bo_sync_gpu(struct nouveau_bo *, struct nouveau_channel *); 1223 1242 1224 1243 /* nouveau_fence.c */ 1225 1244 struct nouveau_fence;
+6 -1
drivers/gpu/drm/nouveau/nouveau_fence.c
··· 249 249 { 250 250 struct drm_nouveau_private *dev_priv = dev->dev_private; 251 251 struct nouveau_semaphore *sema; 252 + int ret; 252 253 253 254 if (!USE_SEMA(dev)) 254 255 return NULL; ··· 258 257 if (!sema) 259 258 goto fail; 260 259 260 + ret = drm_mm_pre_get(&dev_priv->fence.heap); 261 + if (ret) 262 + goto fail; 263 + 261 264 spin_lock(&dev_priv->fence.lock); 262 265 sema->mem = drm_mm_search_free(&dev_priv->fence.heap, 4, 0, 0); 263 266 if (sema->mem) 264 - sema->mem = drm_mm_get_block(sema->mem, 4, 0); 267 + sema->mem = drm_mm_get_block_atomic(sema->mem, 4, 0); 265 268 spin_unlock(&dev_priv->fence.lock); 266 269 267 270 if (!sema->mem)
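Editor's note: the nouveau_fence.c hunk above reserves a drm_mm node with `drm_mm_pre_get()` before taking `fence.lock`, then uses `drm_mm_get_block_atomic()` inside it, so no allocation can sleep under the spinlock. A toy, self-contained sketch of that pre-allocate-then-lock pattern (all names here are ours, not the real drm_mm API):

```c
#include <assert.h>
#include <stdlib.h>

struct node { int start; struct node *next; };

struct mm {
    struct node *prealloc;  /* nodes reserved by mm_pre_get() */
    int next_start;
};

/* May allocate (and thus sleep): call BEFORE taking any spinlock. */
static int mm_pre_get(struct mm *mm)
{
    struct node *n = malloc(sizeof(*n));
    if (!n)
        return -1;              /* -ENOMEM in kernel terms */
    n->next = mm->prealloc;
    mm->prealloc = n;
    return 0;
}

/* Never allocates: safe inside a spinlocked critical section. */
static struct node *mm_get_block_atomic(struct mm *mm)
{
    struct node *n = mm->prealloc;
    if (!n)
        return NULL;
    mm->prealloc = n->next;
    n->start = mm->next_start++;
    return n;
}
```

The same split shows up throughout the kernel: do anything that can block before the lock, and only consume pre-reserved resources while holding it.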
+21 -15
drivers/gpu/drm/nouveau/nouveau_gem.c
··· 107 107 } 108 108 109 109 static bool 110 - nouveau_gem_tile_flags_valid(struct drm_device *dev, uint32_t tile_flags) { 111 - switch (tile_flags) { 112 - case 0x0000: 113 - case 0x1800: 114 - case 0x2800: 115 - case 0x4800: 116 - case 0x7000: 117 - case 0x7400: 118 - case 0x7a00: 119 - case 0xe000: 120 - break; 121 - default: 122 - NV_ERROR(dev, "bad page flags: 0x%08x\n", tile_flags); 123 - return false; 110 + nouveau_gem_tile_flags_valid(struct drm_device *dev, uint32_t tile_flags) 111 + { 112 + struct drm_nouveau_private *dev_priv = dev->dev_private; 113 + 114 + if (dev_priv->card_type >= NV_50) { 115 + switch (tile_flags & NOUVEAU_GEM_TILE_LAYOUT_MASK) { 116 + case 0x0000: 117 + case 0x1800: 118 + case 0x2800: 119 + case 0x4800: 120 + case 0x7000: 121 + case 0x7400: 122 + case 0x7a00: 123 + case 0xe000: 124 + return true; 125 + } 126 + } else { 127 + if (!(tile_flags & NOUVEAU_GEM_TILE_LAYOUT_MASK)) 128 + return true; 124 129 } 125 130 126 - return true; 131 + NV_ERROR(dev, "bad page flags: 0x%08x\n", tile_flags); 132 + return false; 127 133 } 128 134 129 135 int
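Editor's note: the rewritten `nouveau_gem_tile_flags_valid()` masks `tile_flags` with `NOUVEAU_GEM_TILE_LAYOUT_MASK` before the switch, so unrelated flag bits no longer cause false rejections, and pre-NV50 cards simply require the layout bits to be zero. A standalone sketch of the NV50 branch (the mask value and function name here are assumptions for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TILE_LAYOUT_MASK 0x0000ff00  /* assumed layout-bit mask */

static bool layout_valid_nv50(uint32_t tile_flags)
{
    /* Only the layout field is compared; other bits pass through. */
    switch (tile_flags & TILE_LAYOUT_MASK) {
    case 0x0000: case 0x1800: case 0x2800: case 0x4800:
    case 0x7000: case 0x7400: case 0x7a00: case 0xe000:
        return true;
    }
    return false;
}
```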
+4 -4
drivers/gpu/drm/nouveau/nouveau_hw.c
··· 519 519 520 520 struct pll_lims pll_lim; 521 521 struct nouveau_pll_vals pv; 522 - uint32_t pllreg = head ? NV_RAMDAC_VPLL2 : NV_PRAMDAC_VPLL_COEFF; 522 + enum pll_types pll = head ? PLL_VPLL1 : PLL_VPLL0; 523 523 524 - if (get_pll_limits(dev, pllreg, &pll_lim)) 524 + if (get_pll_limits(dev, pll, &pll_lim)) 525 525 return; 526 - nouveau_hw_get_pllvals(dev, pllreg, &pv); 526 + nouveau_hw_get_pllvals(dev, pll, &pv); 527 527 528 528 if (pv.M1 >= pll_lim.vco1.min_m && pv.M1 <= pll_lim.vco1.max_m && 529 529 pv.N1 >= pll_lim.vco1.min_n && pv.N1 <= pll_lim.vco1.max_n && ··· 536 536 pv.M1 = pll_lim.vco1.max_m; 537 537 pv.N1 = pll_lim.vco1.min_n; 538 538 pv.log2P = pll_lim.max_usable_log2p; 539 - nouveau_hw_setpll(dev, pllreg, &pv); 539 + nouveau_hw_setpll(dev, pll_lim.reg, &pv); 540 540 } 541 541 542 542 /*
+19
drivers/gpu/drm/nouveau/nouveau_hw.h
··· 416 416 } 417 417 418 418 static inline void 419 + nv_set_crtc_base(struct drm_device *dev, int head, uint32_t offset) 420 + { 421 + struct drm_nouveau_private *dev_priv = dev->dev_private; 422 + 423 + NVWriteCRTC(dev, head, NV_PCRTC_START, offset); 424 + 425 + if (dev_priv->card_type == NV_04) { 426 + /* 427 + * Hilarious, the 24th bit doesn't want to stick to 428 + * PCRTC_START... 429 + */ 430 + int cre_heb = NVReadVgaCrtc(dev, head, NV_CIO_CRE_HEB__INDEX); 431 + 432 + NVWriteVgaCrtc(dev, head, NV_CIO_CRE_HEB__INDEX, 433 + (cre_heb & ~0x40) | ((offset >> 18) & 0x40)); 434 + } 435 + } 436 + 437 + static inline void 419 438 nv_show_cursor(struct drm_device *dev, int head, bool show) 420 439 { 421 440 struct drm_nouveau_private *dev_priv = dev->dev_private;
+1 -1
drivers/gpu/drm/nouveau/nouveau_i2c.c
··· 256 256 if (index >= DCB_MAX_NUM_I2C_ENTRIES) 257 257 return NULL; 258 258 259 - if (dev_priv->chipset >= NV_50 && (i2c->entry & 0x00000100)) { 259 + if (dev_priv->card_type >= NV_50 && (i2c->entry & 0x00000100)) { 260 260 uint32_t reg = 0xe500, val; 261 261 262 262 if (i2c->port_type == 6) {
+23 -19
drivers/gpu/drm/nouveau/nouveau_irq.c
··· 42 42 #include "nouveau_connector.h" 43 43 #include "nv50_display.h" 44 44 45 + static DEFINE_RATELIMIT_STATE(nouveau_ratelimit_state, 3 * HZ, 20); 46 + 47 + static int nouveau_ratelimit(void) 48 + { 49 + return __ratelimit(&nouveau_ratelimit_state); 50 + } 51 + 45 52 void 46 53 nouveau_irq_preinstall(struct drm_device *dev) 47 54 { ··· 60 53 if (dev_priv->card_type >= NV_50) { 61 54 INIT_WORK(&dev_priv->irq_work, nv50_display_irq_handler_bh); 62 55 INIT_WORK(&dev_priv->hpd_work, nv50_display_irq_hotplug_bh); 56 + spin_lock_init(&dev_priv->hpd_state.lock); 63 57 INIT_LIST_HEAD(&dev_priv->vbl_waiting); 64 58 } 65 59 } ··· 210 202 } 211 203 212 204 if (status & NV_PFIFO_INTR_DMA_PUSHER) { 213 - u32 get = nv_rd32(dev, 0x003244); 214 - u32 put = nv_rd32(dev, 0x003240); 205 + u32 dma_get = nv_rd32(dev, 0x003244); 206 + u32 dma_put = nv_rd32(dev, 0x003240); 215 207 u32 push = nv_rd32(dev, 0x003220); 216 208 u32 state = nv_rd32(dev, 0x003228); 217 209 ··· 221 213 u32 ib_get = nv_rd32(dev, 0x003334); 222 214 u32 ib_put = nv_rd32(dev, 0x003330); 223 215 224 - NV_INFO(dev, "PFIFO_DMA_PUSHER - Ch %d Get 0x%02x%08x " 216 + if (nouveau_ratelimit()) 217 + NV_INFO(dev, "PFIFO_DMA_PUSHER - Ch %d Get 0x%02x%08x " 225 218 "Put 0x%02x%08x IbGet 0x%08x IbPut 0x%08x " 226 219 "State 0x%08x Push 0x%08x\n", 227 - chid, ho_get, get, ho_put, put, ib_get, ib_put, 228 - state, push); 220 + chid, ho_get, dma_get, ho_put, 221 + dma_put, ib_get, ib_put, state, 222 + push); 229 223 230 224 /* METHOD_COUNT, in DMA_STATE on earlier chipsets */ 231 225 nv_wr32(dev, 0x003364, 0x00000000); 232 - if (get != put || ho_get != ho_put) { 233 - nv_wr32(dev, 0x003244, put); 226 + if (dma_get != dma_put || ho_get != ho_put) { 227 + nv_wr32(dev, 0x003244, dma_put); 234 228 nv_wr32(dev, 0x003328, ho_put); 235 229 } else 236 230 if (ib_get != ib_put) { ··· 241 231 } else { 242 232 NV_INFO(dev, "PFIFO_DMA_PUSHER - Ch %d Get 0x%08x " 243 233 "Put 0x%08x State 0x%08x Push 0x%08x\n", 244 - chid, get, put, 
state, push); 234 + chid, dma_get, dma_put, state, push); 245 235 246 - if (get != put) 247 - nv_wr32(dev, 0x003244, put); 236 + if (dma_get != dma_put) 237 + nv_wr32(dev, 0x003244, dma_put); 248 238 } 249 239 250 240 nv_wr32(dev, 0x003228, 0x00000000); ··· 276 266 } 277 267 278 268 if (status) { 279 - NV_INFO(dev, "PFIFO_INTR 0x%08x - Ch %d\n", 280 - status, chid); 269 + if (nouveau_ratelimit()) 270 + NV_INFO(dev, "PFIFO_INTR 0x%08x - Ch %d\n", 271 + status, chid); 281 272 nv_wr32(dev, NV03_PFIFO_INTR_0, status); 282 273 status = 0; 283 274 } ··· 553 542 554 543 if (unhandled) 555 544 nouveau_graph_dump_trap_info(dev, "PGRAPH_NOTIFY", &trap); 556 - } 557 - 558 - static DEFINE_RATELIMIT_STATE(nouveau_ratelimit_state, 3 * HZ, 20); 559 - 560 - static int nouveau_ratelimit(void) 561 - { 562 - return __ratelimit(&nouveau_ratelimit_state); 563 545 } 564 546 565 547
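Editor's note: the nouveau_irq.c hunk wraps the PFIFO messages in `nouveau_ratelimit()`, backed by `DEFINE_RATELIMIT_STATE(3 * HZ, 20)`, i.e. at most 20 messages per 3-second window. A toy version of that windowed-burst idea (names are ours; the kernel's `ratelimit_state` is locking-aware and more subtle):

```c
#include <assert.h>

struct ratelimit {
    long interval;   /* window length in ticks */
    int burst;       /* messages allowed per window */
    long begin;      /* start of the current window */
    int printed;     /* messages emitted this window */
};

static int ratelimit_ok(struct ratelimit *rs, long now)
{
    if (now - rs->begin >= rs->interval) {  /* window expired: reset */
        rs->begin = now;
        rs->printed = 0;
    }
    if (rs->printed < rs->burst) {
        rs->printed++;
        return 1;   /* caller may print */
    }
    return 0;       /* suppressed */
}
```

This keeps a wedged DMA pusher from flooding the log while still letting the first few reports through each window.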
+27 -22
drivers/gpu/drm/nouveau/nouveau_mem.c
··· 33 33 #include "drmP.h" 34 34 #include "drm.h" 35 35 #include "drm_sarea.h" 36 - #include "nouveau_drv.h" 37 36 38 - #define MIN(a,b) a < b ? a : b 37 + #include "nouveau_drv.h" 38 + #include "nouveau_pm.h" 39 39 40 40 /* 41 41 * NV10-NV40 tiling helpers ··· 175 175 } 176 176 } 177 177 } 178 - dev_priv->engine.instmem.flush(dev); 179 178 180 - nv50_vm_flush(dev, 5); 181 - nv50_vm_flush(dev, 0); 182 - nv50_vm_flush(dev, 4); 179 + dev_priv->engine.instmem.flush(dev); 180 + dev_priv->engine.fifo.tlb_flush(dev); 181 + dev_priv->engine.graph.tlb_flush(dev); 183 182 nv50_vm_flush(dev, 6); 184 183 return 0; 185 184 } ··· 208 209 pte++; 209 210 } 210 211 } 211 - dev_priv->engine.instmem.flush(dev); 212 212 213 - nv50_vm_flush(dev, 5); 214 - nv50_vm_flush(dev, 0); 215 - nv50_vm_flush(dev, 4); 213 + dev_priv->engine.instmem.flush(dev); 214 + dev_priv->engine.fifo.tlb_flush(dev); 215 + dev_priv->engine.graph.tlb_flush(dev); 216 216 nv50_vm_flush(dev, 6); 217 217 } 218 218 ··· 651 653 void 652 654 nouveau_mem_timing_init(struct drm_device *dev) 653 655 { 656 + /* cards < NVC0 only */ 654 657 struct drm_nouveau_private *dev_priv = dev->dev_private; 655 658 struct nouveau_pm_engine *pm = &dev_priv->engine.pm; 656 659 struct nouveau_pm_memtimings *memtimings = &pm->memtimings; ··· 718 719 tUNK_19 = 1; 719 720 tUNK_20 = 0; 720 721 tUNK_21 = 0; 721 - switch (MIN(recordlen,21)) { 722 - case 21: 722 + switch (min(recordlen, 22)) { 723 + case 22: 723 724 tUNK_21 = entry[21]; 724 - case 20: 725 + case 21: 725 726 tUNK_20 = entry[20]; 726 - case 19: 727 + case 20: 727 728 tUNK_19 = entry[19]; 728 - case 18: 729 + case 19: 729 730 tUNK_18 = entry[18]; 730 731 default: 731 732 tUNK_0 = entry[0]; ··· 755 756 timing->reg_100228 = (tUNK_12 << 16 | tUNK_11 << 8 | tUNK_10); 756 757 if(recordlen > 19) { 757 758 timing->reg_100228 += (tUNK_19 - 1) << 24; 758 - } else { 759 + }/* I cannot back-up this else-statement right now 760 + else { 759 761 timing->reg_100228 += tUNK_12 << 24; 760 - } 
762 + }*/ 761 763 762 764 /* XXX: reg_10022c */ 765 + timing->reg_10022c = tUNK_2 - 1; 763 766 764 767 timing->reg_100230 = (tUNK_20 << 24 | tUNK_21 << 16 | 765 768 tUNK_13 << 8 | tUNK_13); 766 769 767 770 /* XXX: +6? */ 768 771 timing->reg_100234 = (tRAS << 24 | (tUNK_19 + 6) << 8 | tRC); 769 - if(tUNK_10 > tUNK_11) { 770 - timing->reg_100234 += tUNK_10 << 16; 771 - } else { 772 - timing->reg_100234 += tUNK_11 << 16; 772 + timing->reg_100234 += max(tUNK_10,tUNK_11) << 16; 773 + 774 + /* XXX; reg_100238, reg_10023c 775 + * reg: 0x00?????? 776 + * reg_10023c: 777 + * 0 for pre-NV50 cards 778 + * 0x????0202 for NV50+ cards (empirical evidence) */ 779 + if(dev_priv->card_type >= NV_50) { 780 + timing->reg_10023c = 0x202; 773 781 } 774 782 775 - /* XXX; reg_100238, reg_10023c */ 776 783 NV_DEBUG(dev, "Entry %d: 220: %08x %08x %08x %08x\n", i, 777 784 timing->reg_100220, timing->reg_100224, 778 785 timing->reg_100228, timing->reg_10022c);
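Editor's note: the nouveau_mem.c hunk drops the local `#define MIN(a,b) a < b ? a : b` in favour of the kernel's `min()`, and moves the switch from `MIN(recordlen,21)` / `case 21` to `min(recordlen, 22)` / `case 22` so that `entry[21]` is actually read when the record is 22 bytes long. The macro itself was a precedence trap: without parentheses the expansion binds wrongly inside larger expressions. In isolation:

```c
#include <assert.h>

#define BAD_MIN(a, b) a < b ? a : b                 /* as removed above */
#define GOOD_MIN(a, b) ((a) < (b) ? (a) : (b))

/* 1 + BAD_MIN(x, y) expands to 1 + x < y ? x : y, which the compiler
 * parses as ((1 + x) < y) ? x : y -- not 1 plus the minimum. */
static int bad_sum(int x, int y)  { return 1 + BAD_MIN(x, y); }
static int good_sum(int x, int y) { return 1 + GOOD_MIN(x, y); }
```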
+1 -1
drivers/gpu/drm/nouveau/nouveau_object.c
··· 129 129 if (ramin == NULL) { 130 130 spin_unlock(&dev_priv->ramin_lock); 131 131 nouveau_gpuobj_ref(NULL, &gpuobj); 132 - return ret; 132 + return -ENOMEM; 133 133 } 134 134 135 135 ramin = drm_mm_get_block_atomic(ramin, size, align);
+6 -1
drivers/gpu/drm/nouveau/nouveau_pm.c
··· 284 284 } 285 285 } 286 286 287 + #ifdef CONFIG_HWMON 287 288 static ssize_t 288 289 nouveau_hwmon_show_temp(struct device *d, struct device_attribute *a, char *buf) 289 290 { ··· 396 395 static const struct attribute_group hwmon_attrgroup = { 397 396 .attrs = hwmon_attributes, 398 397 }; 398 + #endif 399 399 400 400 static int 401 401 nouveau_hwmon_init(struct drm_device *dev) 402 402 { 403 + #ifdef CONFIG_HWMON 403 404 struct drm_nouveau_private *dev_priv = dev->dev_private; 404 405 struct nouveau_pm_engine *pm = &dev_priv->engine.pm; 405 406 struct device *hwmon_dev; ··· 428 425 } 429 426 430 427 pm->hwmon = hwmon_dev; 431 - 428 + #endif 432 429 return 0; 433 430 } 434 431 435 432 static void 436 433 nouveau_hwmon_fini(struct drm_device *dev) 437 434 { 435 + #ifdef CONFIG_HWMON 438 436 struct drm_nouveau_private *dev_priv = dev->dev_private; 439 437 struct nouveau_pm_engine *pm = &dev_priv->engine.pm; 440 438 ··· 443 439 sysfs_remove_group(&pm->hwmon->kobj, &hwmon_attrgroup); 444 440 hwmon_device_unregister(pm->hwmon); 445 441 } 442 + #endif 446 443 } 447 444 448 445 int
+44 -27
drivers/gpu/drm/nouveau/nouveau_ramht.c
··· 153 153 return -ENOMEM; 154 154 } 155 155 156 + static struct nouveau_ramht_entry * 157 + nouveau_ramht_remove_entry(struct nouveau_channel *chan, u32 handle) 158 + { 159 + struct nouveau_ramht *ramht = chan ? chan->ramht : NULL; 160 + struct nouveau_ramht_entry *entry; 161 + unsigned long flags; 162 + 163 + if (!ramht) 164 + return NULL; 165 + 166 + spin_lock_irqsave(&ramht->lock, flags); 167 + list_for_each_entry(entry, &ramht->entries, head) { 168 + if (entry->channel == chan && 169 + (!handle || entry->handle == handle)) { 170 + list_del(&entry->head); 171 + spin_unlock_irqrestore(&ramht->lock, flags); 172 + 173 + return entry; 174 + } 175 + } 176 + spin_unlock_irqrestore(&ramht->lock, flags); 177 + 178 + return NULL; 179 + } 180 + 156 181 static void 157 - nouveau_ramht_remove_locked(struct nouveau_channel *chan, u32 handle) 182 + nouveau_ramht_remove_hash(struct nouveau_channel *chan, u32 handle) 158 183 { 159 184 struct drm_device *dev = chan->dev; 160 185 struct drm_nouveau_private *dev_priv = dev->dev_private; 161 186 struct nouveau_instmem_engine *instmem = &dev_priv->engine.instmem; 162 187 struct nouveau_gpuobj *ramht = chan->ramht->gpuobj; 163 - struct nouveau_ramht_entry *entry, *tmp; 188 + unsigned long flags; 164 189 u32 co, ho; 165 190 166 - list_for_each_entry_safe(entry, tmp, &chan->ramht->entries, head) { 167 - if (entry->channel != chan || entry->handle != handle) 168 - continue; 169 - 170 - nouveau_gpuobj_ref(NULL, &entry->gpuobj); 171 - list_del(&entry->head); 172 - kfree(entry); 173 - break; 174 - } 175 - 191 + spin_lock_irqsave(&chan->ramht->lock, flags); 176 192 co = ho = nouveau_ramht_hash_handle(chan, handle); 177 193 do { 178 194 if (nouveau_ramht_entry_valid(dev, ramht, co) && ··· 200 184 nv_wo32(ramht, co + 0, 0x00000000); 201 185 nv_wo32(ramht, co + 4, 0x00000000); 202 186 instmem->flush(dev); 203 - return; 187 + goto out; 204 188 } 205 189 206 190 co += 8; ··· 210 194 211 195 NV_ERROR(dev, "RAMHT entry not found. 
ch=%d, handle=0x%08x\n", 212 196 chan->id, handle); 197 + out: 198 + spin_unlock_irqrestore(&chan->ramht->lock, flags); 213 199 } 214 200 215 201 void 216 202 nouveau_ramht_remove(struct nouveau_channel *chan, u32 handle) 217 203 { 218 - struct nouveau_ramht *ramht = chan->ramht; 219 - unsigned long flags; 204 + struct nouveau_ramht_entry *entry; 220 205 221 - spin_lock_irqsave(&ramht->lock, flags); 222 - nouveau_ramht_remove_locked(chan, handle); 223 - spin_unlock_irqrestore(&ramht->lock, flags); 206 + entry = nouveau_ramht_remove_entry(chan, handle); 207 + if (!entry) 208 + return; 209 + 210 + nouveau_ramht_remove_hash(chan, entry->handle); 211 + nouveau_gpuobj_ref(NULL, &entry->gpuobj); 212 + kfree(entry); 224 213 } 225 214 226 215 struct nouveau_gpuobj * ··· 286 265 nouveau_ramht_ref(struct nouveau_ramht *ref, struct nouveau_ramht **ptr, 287 266 struct nouveau_channel *chan) 288 267 { 289 - struct nouveau_ramht_entry *entry, *tmp; 268 + struct nouveau_ramht_entry *entry; 290 269 struct nouveau_ramht *ramht; 291 - unsigned long flags; 292 270 293 271 if (ref) 294 272 kref_get(&ref->refcount); 295 273 296 274 ramht = *ptr; 297 275 if (ramht) { 298 - spin_lock_irqsave(&ramht->lock, flags); 299 - list_for_each_entry_safe(entry, tmp, &ramht->entries, head) { 300 - if (entry->channel != chan) 301 - continue; 302 - 303 - nouveau_ramht_remove_locked(chan, entry->handle); 276 + while ((entry = nouveau_ramht_remove_entry(chan, 0))) { 277 + nouveau_ramht_remove_hash(chan, entry->handle); 278 + nouveau_gpuobj_ref(NULL, &entry->gpuobj); 279 + kfree(entry); 304 280 } 305 - spin_unlock_irqrestore(&ramht->lock, flags); 306 281 307 282 kref_put(&ramht->refcount, nouveau_ramht_del); 308 283 }
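Editor's note: the nouveau_ramht.c rework splits removal into `nouveau_ramht_remove_entry()` (unlink under the spinlock, return the entry) and per-entry cleanup (`nouveau_gpuobj_ref(NULL, ...)` plus `kfree()`) done after the lock is dropped. A list-based sketch of that unlink-under-lock, free-outside pattern, with a flag standing in for the real `spin_lock_irqsave()` pair:

```c
#include <assert.h>
#include <stddef.h>

struct entry { unsigned handle; struct entry *next; };

struct table { struct entry *head; int locked; };

/* Unlink and return the first matching entry; handle == 0 matches any.
 * The caller frees the entry only after this returns, lock released. */
static struct entry *remove_entry(struct table *t, unsigned handle)
{
    struct entry **pp, *e;

    t->locked = 1;              /* stand-in for taking the spinlock */
    for (pp = &t->head; (e = *pp) != NULL; pp = &e->next) {
        if (!handle || e->handle == handle) {
            *pp = e->next;
            t->locked = 0;      /* drop lock before caller frees */
            return e;
        }
    }
    t->locked = 0;
    return NULL;
}
```

The `handle == 0` wildcard mirrors how `nouveau_ramht_ref()` above drains every entry belonging to a channel in a loop.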
+9 -5
drivers/gpu/drm/nouveau/nouveau_sgdma.c
··· 120 120 dev_priv->engine.instmem.flush(nvbe->dev); 121 121 122 122 if (dev_priv->card_type == NV_50) { 123 - nv50_vm_flush(dev, 5); /* PGRAPH */ 124 - nv50_vm_flush(dev, 0); /* PFIFO */ 123 + dev_priv->engine.fifo.tlb_flush(dev); 124 + dev_priv->engine.graph.tlb_flush(dev); 125 125 } 126 126 127 127 nvbe->bound = true; ··· 162 162 dev_priv->engine.instmem.flush(nvbe->dev); 163 163 164 164 if (dev_priv->card_type == NV_50) { 165 - nv50_vm_flush(dev, 5); 166 - nv50_vm_flush(dev, 0); 165 + dev_priv->engine.fifo.tlb_flush(dev); 166 + dev_priv->engine.graph.tlb_flush(dev); 167 167 } 168 168 169 169 nvbe->bound = false; ··· 224 224 int i, ret; 225 225 226 226 if (dev_priv->card_type < NV_50) { 227 - aper_size = (64 * 1024 * 1024); 227 + if(dev_priv->ramin_rsvd_vram < 2 * 1024 * 1024) 228 + aper_size = 64 * 1024 * 1024; 229 + else 230 + aper_size = 512 * 1024 * 1024; 231 + 228 232 obj_size = (aper_size >> NV_CTXDMA_PAGE_SHIFT) * 4; 229 233 obj_size += 8; /* ctxdma header */ 230 234 } else {
+15 -2
drivers/gpu/drm/nouveau/nouveau_state.c
··· 354 354 engine->graph.destroy_context = nv50_graph_destroy_context; 355 355 engine->graph.load_context = nv50_graph_load_context; 356 356 engine->graph.unload_context = nv50_graph_unload_context; 357 + if (dev_priv->chipset != 0x86) 358 + engine->graph.tlb_flush = nv50_graph_tlb_flush; 359 + else { 360 + /* from what i can see nvidia do this on every 361 + * pre-NVA3 board except NVAC, but, we've only 362 + * ever seen problems on NV86 363 + */ 364 + engine->graph.tlb_flush = nv86_graph_tlb_flush; 365 + } 357 366 engine->fifo.channels = 128; 358 367 engine->fifo.init = nv50_fifo_init; 359 368 engine->fifo.takedown = nv50_fifo_takedown; ··· 374 365 engine->fifo.destroy_context = nv50_fifo_destroy_context; 375 366 engine->fifo.load_context = nv50_fifo_load_context; 376 367 engine->fifo.unload_context = nv50_fifo_unload_context; 368 + engine->fifo.tlb_flush = nv50_fifo_tlb_flush; 377 369 engine->display.early_init = nv50_display_early_init; 378 370 engine->display.late_takedown = nv50_display_late_takedown; 379 371 engine->display.create = nv50_display_create; ··· 1051 1041 case NOUVEAU_GETPARAM_PTIMER_TIME: 1052 1042 getparam->value = dev_priv->engine.timer.read(dev); 1053 1043 break; 1044 + case NOUVEAU_GETPARAM_HAS_BO_USAGE: 1045 + getparam->value = 1; 1046 + break; 1054 1047 case NOUVEAU_GETPARAM_GRAPH_UNITS: 1055 1048 /* NV40 and NV50 versions are quite different, but register 1056 1049 * address is the same. User is supposed to know the card ··· 1064 1051 } 1065 1052 /* FALLTHRU */ 1066 1053 default: 1067 - NV_ERROR(dev, "unknown parameter %lld\n", getparam->param); 1054 + NV_DEBUG(dev, "unknown parameter %lld\n", getparam->param); 1068 1055 return -EINVAL; 1069 1056 } 1070 1057 ··· 1079 1066 1080 1067 switch (setparam->param) { 1081 1068 default: 1082 - NV_ERROR(dev, "unknown parameter %lld\n", setparam->param); 1069 + NV_DEBUG(dev, "unknown parameter %lld\n", setparam->param); 1083 1070 return -EINVAL; 1084 1071 } 1085 1072
+1 -1
drivers/gpu/drm/nouveau/nouveau_temp.c
··· 191 191 int offset = sensor->offset_mult / sensor->offset_div; 192 192 int core_temp; 193 193 194 - if (dev_priv->chipset >= 0x50) { 194 + if (dev_priv->card_type >= NV_50) { 195 195 core_temp = nv_rd32(dev, 0x20008); 196 196 } else { 197 197 core_temp = nv_rd32(dev, 0x0015b4) & 0x1fff;
+1 -6
drivers/gpu/drm/nouveau/nv04_crtc.c
··· 158 158 { 159 159 struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc); 160 160 struct drm_device *dev = crtc->dev; 161 - struct drm_connector *connector; 162 161 unsigned char seq1 = 0, crtc17 = 0; 163 162 unsigned char crtc1A; 164 163 ··· 212 213 NVVgaSeqReset(dev, nv_crtc->index, false); 213 214 214 215 NVWriteVgaCrtc(dev, nv_crtc->index, NV_CIO_CRE_RPC1_INDEX, crtc1A); 215 - 216 - /* Update connector polling modes */ 217 - list_for_each_entry(connector, &dev->mode_config.connector_list, head) 218 - nouveau_connector_set_polling(connector); 219 216 } 220 217 221 218 static bool ··· 826 831 /* Update the framebuffer location. */ 827 832 regp->fb_start = nv_crtc->fb.offset & ~3; 828 833 regp->fb_start += (y * drm_fb->pitch) + (x * drm_fb->bits_per_pixel / 8); 829 - NVWriteCRTC(dev, nv_crtc->index, NV_PCRTC_START, regp->fb_start); 834 + nv_set_crtc_base(dev, nv_crtc->index, regp->fb_start); 830 835 831 836 /* Update the arbitration parameters. */ 832 837 nouveau_calc_arb(dev, crtc->mode.clock, drm_fb->bits_per_pixel,
+7 -6
drivers/gpu/drm/nouveau/nv04_dfp.c
··· 185 185 struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder); 186 186 struct nouveau_connector *nv_connector = nouveau_encoder_connector_get(nv_encoder); 187 187 188 - /* For internal panels and gpu scaling on DVI we need the native mode */ 189 - if (nv_connector->scaling_mode != DRM_MODE_SCALE_NONE) { 190 - if (!nv_connector->native_mode) 191 - return false; 188 + if (!nv_connector->native_mode || 189 + nv_connector->scaling_mode == DRM_MODE_SCALE_NONE || 190 + mode->hdisplay > nv_connector->native_mode->hdisplay || 191 + mode->vdisplay > nv_connector->native_mode->vdisplay) { 192 + nv_encoder->mode = *adjusted_mode; 193 + 194 + } else { 192 195 nv_encoder->mode = *nv_connector->native_mode; 193 196 adjusted_mode->clock = nv_connector->native_mode->clock; 194 - } else { 195 - nv_encoder->mode = *adjusted_mode; 196 197 } 197 198 198 199 return true;
+9
drivers/gpu/drm/nouveau/nv04_pm.c
··· 76 76 reg += 4; 77 77 78 78 nouveau_hw_setpll(dev, reg, &state->calc); 79 + 80 + if (dev_priv->card_type < NV_30 && reg == NV_PRAMDAC_MPLL_COEFF) { 81 + if (dev_priv->card_type == NV_20) 82 + nv_mask(dev, 0x1002c4, 0, 1 << 20); 83 + 84 + /* Reset the DLLs */ 85 + nv_mask(dev, 0x1002c0, 0, 1 << 8); 86 + } 87 + 79 88 kfree(state); 80 89 } 81 90
+10 -6
drivers/gpu/drm/nouveau/nv50_calc.c
··· 51 51 int *N, int *fN, int *M, int *P) 52 52 { 53 53 fixed20_12 fb_div, a, b; 54 + u32 refclk = pll->refclk / 10; 55 + u32 max_vco_freq = pll->vco1.maxfreq / 10; 56 + u32 max_vco_inputfreq = pll->vco1.max_inputfreq / 10; 57 + clk /= 10; 54 58 55 - *P = pll->vco1.maxfreq / clk; 59 + *P = max_vco_freq / clk; 56 60 if (*P > pll->max_p) 57 61 *P = pll->max_p; 58 62 if (*P < pll->min_p) 59 63 *P = pll->min_p; 60 64 61 - /* *M = ceil(refclk / pll->vco.max_inputfreq); */ 62 - a.full = dfixed_const(pll->refclk); 63 - b.full = dfixed_const(pll->vco1.max_inputfreq); 65 + /* *M = floor((refclk + max_vco_inputfreq) / max_vco_inputfreq); */ 66 + a.full = dfixed_const(refclk + max_vco_inputfreq); 67 + b.full = dfixed_const(max_vco_inputfreq); 64 68 a.full = dfixed_div(a, b); 65 - a.full = dfixed_ceil(a); 69 + a.full = dfixed_floor(a); 66 70 *M = dfixed_trunc(a); 67 71 68 72 /* fb_div = (vco * *M) / refclk; */ 69 73 fb_div.full = dfixed_const(clk * *P); 70 74 fb_div.full = dfixed_mul(fb_div, a); 71 - a.full = dfixed_const(pll->refclk); 75 + a.full = dfixed_const(refclk); 72 76 fb_div.full = dfixed_div(fb_div, a); 73 77 74 78 /* *N = floor(fb_div); */
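Editor's note: the nv50_calc.c hunk rescales the PLL inputs to 10 kHz units and swaps `ceil(refclk / max_inputfreq)` for `floor((refclk + max_inputfreq) / max_inputfreq)` when choosing M. In integers the new form equals `refclk / max_inputfreq + 1`, which exceeds plain ceiling division exactly when `refclk` divides evenly, presumably to keep the VCO input strictly below its limit. Hypothetical helpers showing the difference:

```c
#include <assert.h>

/* Classic integer ceiling division: ceil(a / b). */
static unsigned ceil_div(unsigned a, unsigned b)
{
    return (a + b - 1) / b;
}

/* The new formula from the hunk: floor((a + b) / b) == a/b + 1. */
static unsigned floor_plus(unsigned a, unsigned b)
{
    return (a + b) / b;
}
```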
+2 -2
drivers/gpu/drm/nouveau/nv50_crtc.c
··· 546 546 } 547 547 548 548 nv_crtc->fb.offset = fb->nvbo->bo.offset - dev_priv->vm_vram_base; 549 - nv_crtc->fb.tile_flags = fb->nvbo->tile_flags; 549 + nv_crtc->fb.tile_flags = nouveau_bo_tile_layout(fb->nvbo); 550 550 nv_crtc->fb.cpp = drm_fb->bits_per_pixel / 8; 551 551 if (!nv_crtc->fb.blanked && dev_priv->chipset != 0x50) { 552 552 ret = RING_SPACE(evo, 2); ··· 578 578 fb->nvbo->tile_mode); 579 579 } 580 580 if (dev_priv->chipset == 0x50) 581 - OUT_RING(evo, (fb->nvbo->tile_flags << 8) | format); 581 + OUT_RING(evo, (nv_crtc->fb.tile_flags << 8) | format); 582 582 else 583 583 OUT_RING(evo, format); 584 584
+26 -9
drivers/gpu/drm/nouveau/nv50_display.c
··· 1032 1032 struct drm_connector *connector; 1033 1033 const uint32_t gpio_reg[4] = { 0xe104, 0xe108, 0xe280, 0xe284 }; 1034 1034 uint32_t unplug_mask, plug_mask, change_mask; 1035 - uint32_t hpd0, hpd1 = 0; 1035 + uint32_t hpd0, hpd1; 1036 1036 1037 - hpd0 = nv_rd32(dev, 0xe054) & nv_rd32(dev, 0xe050); 1037 + spin_lock_irq(&dev_priv->hpd_state.lock); 1038 + hpd0 = dev_priv->hpd_state.hpd0_bits; 1039 + dev_priv->hpd_state.hpd0_bits = 0; 1040 + hpd1 = dev_priv->hpd_state.hpd1_bits; 1041 + dev_priv->hpd_state.hpd1_bits = 0; 1042 + spin_unlock_irq(&dev_priv->hpd_state.lock); 1043 + 1044 + hpd0 &= nv_rd32(dev, 0xe050); 1038 1045 if (dev_priv->chipset >= 0x90) 1039 - hpd1 = nv_rd32(dev, 0xe074) & nv_rd32(dev, 0xe070); 1046 + hpd1 &= nv_rd32(dev, 0xe070); 1040 1047 1041 1048 plug_mask = (hpd0 & 0x0000ffff) | (hpd1 << 16); 1042 1049 unplug_mask = (hpd0 >> 16) | (hpd1 & 0xffff0000); ··· 1085 1078 helper->dpms(connector->encoder, DRM_MODE_DPMS_OFF); 1086 1079 } 1087 1080 1088 - nv_wr32(dev, 0xe054, nv_rd32(dev, 0xe054)); 1089 - if (dev_priv->chipset >= 0x90) 1090 - nv_wr32(dev, 0xe074, nv_rd32(dev, 0xe074)); 1091 - 1092 1081 drm_helper_hpd_irq_event(dev); 1093 1082 } 1094 1083 ··· 1095 1092 uint32_t delayed = 0; 1096 1093 1097 1094 if (nv_rd32(dev, NV50_PMC_INTR_0) & NV50_PMC_INTR_0_HOTPLUG) { 1098 - if (!work_pending(&dev_priv->hpd_work)) 1099 - queue_work(dev_priv->wq, &dev_priv->hpd_work); 1095 + uint32_t hpd0_bits, hpd1_bits = 0; 1096 + 1097 + hpd0_bits = nv_rd32(dev, 0xe054); 1098 + nv_wr32(dev, 0xe054, hpd0_bits); 1099 + 1100 + if (dev_priv->chipset >= 0x90) { 1101 + hpd1_bits = nv_rd32(dev, 0xe074); 1102 + nv_wr32(dev, 0xe074, hpd1_bits); 1103 + } 1104 + 1105 + spin_lock(&dev_priv->hpd_state.lock); 1106 + dev_priv->hpd_state.hpd0_bits |= hpd0_bits; 1107 + dev_priv->hpd_state.hpd1_bits |= hpd1_bits; 1108 + spin_unlock(&dev_priv->hpd_state.lock); 1109 + 1110 + queue_work(dev_priv->wq, &dev_priv->hpd_work); 1100 1111 } 1101 1112 1102 1113 while (nv_rd32(dev, 
NV50_PMC_INTR_0) & NV50_PMC_INTR_0_DISPLAY) {
+5
drivers/gpu/drm/nouveau/nv50_fifo.c
··· 464 464 return 0; 465 465 } 466 466 467 + void 468 + nv50_fifo_tlb_flush(struct drm_device *dev) 469 + { 470 + nv50_vm_flush(dev, 5); 471 + }
+52
drivers/gpu/drm/nouveau/nv50_graph.c
··· 402 402 { 0x8597, false, NULL }, /* tesla (nva3, nva5, nva8) */ 403 403 {} 404 404 }; 405 + 406 + void 407 + nv50_graph_tlb_flush(struct drm_device *dev) 408 + { 409 + nv50_vm_flush(dev, 0); 410 + } 411 + 412 + void 413 + nv86_graph_tlb_flush(struct drm_device *dev) 414 + { 415 + struct drm_nouveau_private *dev_priv = dev->dev_private; 416 + struct nouveau_timer_engine *ptimer = &dev_priv->engine.timer; 417 + bool idle, timeout = false; 418 + unsigned long flags; 419 + u64 start; 420 + u32 tmp; 421 + 422 + spin_lock_irqsave(&dev_priv->context_switch_lock, flags); 423 + nv_mask(dev, 0x400500, 0x00000001, 0x00000000); 424 + 425 + start = ptimer->read(dev); 426 + do { 427 + idle = true; 428 + 429 + for (tmp = nv_rd32(dev, 0x400380); tmp && idle; tmp >>= 3) { 430 + if ((tmp & 7) == 1) 431 + idle = false; 432 + } 433 + 434 + for (tmp = nv_rd32(dev, 0x400384); tmp && idle; tmp >>= 3) { 435 + if ((tmp & 7) == 1) 436 + idle = false; 437 + } 438 + 439 + for (tmp = nv_rd32(dev, 0x400388); tmp && idle; tmp >>= 3) { 440 + if ((tmp & 7) == 1) 441 + idle = false; 442 + } 443 + } while (!idle && !(timeout = ptimer->read(dev) - start > 2000000000)); 444 + 445 + if (timeout) { 446 + NV_ERROR(dev, "PGRAPH TLB flush idle timeout fail: " 447 + "0x%08x 0x%08x 0x%08x 0x%08x\n", 448 + nv_rd32(dev, 0x400700), nv_rd32(dev, 0x400380), 449 + nv_rd32(dev, 0x400384), nv_rd32(dev, 0x400388)); 450 + } 451 + 452 + nv50_vm_flush(dev, 0); 453 + 454 + nv_mask(dev, 0x400500, 0x00000001, 0x00000001); 455 + spin_unlock_irqrestore(&dev_priv->context_switch_lock, flags); 456 + }
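Editor's note: the `nv86_graph_tlb_flush()` workaround above polls three status registers, each read as a word of packed 3-bit fields, and treats a field value of 1 as "busy" before issuing the flush. The field scan on its own (register meanings are as inferred from the hunk):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Return true if no 3-bit field in `status` equals 1 (the busy code). */
static bool fields_idle(uint32_t status)
{
    uint32_t tmp;

    for (tmp = status; tmp; tmp >>= 3) {
        if ((tmp & 7) == 1)
            return false;
    }
    return true;
}
```

Stopping as soon as `tmp` reaches zero is safe because all remaining higher fields are then zero, which is never the busy code.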
-1
drivers/gpu/drm/nouveau/nv50_instmem.c
··· 402 402 } 403 403 dev_priv->engine.instmem.flush(dev); 404 404 405 - nv50_vm_flush(dev, 4); 406 405 nv50_vm_flush(dev, 6); 407 406 408 407 gpuobj->im_bound = 1;
+30 -1
drivers/gpu/drm/radeon/evergreen.c
··· 1650 1650 } 1651 1651 } 1652 1652 1653 - rdev->config.evergreen.tile_config = gb_addr_config; 1653 + /* setup tiling info dword. gb_addr_config is not adequate since it does 1654 + * not have bank info, so create a custom tiling dword. 1655 + * bits 3:0 num_pipes 1656 + * bits 7:4 num_banks 1657 + * bits 11:8 group_size 1658 + * bits 15:12 row_size 1659 + */ 1660 + rdev->config.evergreen.tile_config = 0; 1661 + switch (rdev->config.evergreen.max_tile_pipes) { 1662 + case 1: 1663 + default: 1664 + rdev->config.evergreen.tile_config |= (0 << 0); 1665 + break; 1666 + case 2: 1667 + rdev->config.evergreen.tile_config |= (1 << 0); 1668 + break; 1669 + case 4: 1670 + rdev->config.evergreen.tile_config |= (2 << 0); 1671 + break; 1672 + case 8: 1673 + rdev->config.evergreen.tile_config |= (3 << 0); 1674 + break; 1675 + } 1676 + rdev->config.evergreen.tile_config |= 1677 + ((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT) << 4; 1678 + rdev->config.evergreen.tile_config |= 1679 + ((mc_arb_ramcfg & BURSTLENGTH_MASK) >> BURSTLENGTH_SHIFT) << 8; 1680 + rdev->config.evergreen.tile_config |= 1681 + ((gb_addr_config & 0x30000000) >> 28) << 12; 1682 + 1654 1683 WREG32(GB_BACKEND_MAP, gb_backend_map); 1655 1684 WREG32(GB_ADDR_CONFIG, gb_addr_config); 1656 1685 WREG32(DMIF_ADDR_CONFIG, gb_addr_config);
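Editor's note: the evergreen.c hunk replaces the raw `gb_addr_config` with a custom tiling dword because the register lacks bank information; the comment spells out the layout (bits 3:0 num_pipes, 7:4 num_banks, 11:8 group_size, 15:12 row_size). Encoding and decoding such a dword looks like this (helper names are ours):

```c
#include <assert.h>
#include <stdint.h>

/* Pack four 4-bit values into the tile-config layout described above. */
static uint32_t pack_tile_config(uint32_t pipes, uint32_t banks,
                                 uint32_t group, uint32_t row)
{
    return (pipes & 0xf) | ((banks & 0xf) << 4) |
           ((group & 0xf) << 8) | ((row & 0xf) << 12);
}

static uint32_t tile_pipes(uint32_t cfg) { return cfg & 0xf; }
static uint32_t tile_banks(uint32_t cfg) { return (cfg >> 4) & 0xf; }
```

Userspace can then recover each field with a shift and mask instead of re-deriving it from hardware registers.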
+1 -1
drivers/gpu/drm/radeon/evergreen_blit_kms.c
··· 459 459 obj_size += evergreen_ps_size * 4; 460 460 obj_size = ALIGN(obj_size, 256); 461 461 462 - r = radeon_bo_create(rdev, NULL, obj_size, true, RADEON_GEM_DOMAIN_VRAM, 462 + r = radeon_bo_create(rdev, NULL, obj_size, PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM, 463 463 &rdev->r600_blit.shader_obj); 464 464 if (r) { 465 465 DRM_ERROR("evergreen failed to allocate shader\n");
+1 -1
drivers/gpu/drm/radeon/r600.c
··· 2718 2718 /* Allocate ring buffer */ 2719 2719 if (rdev->ih.ring_obj == NULL) { 2720 2720 r = radeon_bo_create(rdev, NULL, rdev->ih.ring_size, 2721 - true, 2721 + PAGE_SIZE, true, 2722 2722 RADEON_GEM_DOMAIN_GTT, 2723 2723 &rdev->ih.ring_obj); 2724 2724 if (r) {
+1 -1
drivers/gpu/drm/radeon/r600_blit_kms.c
··· 501 501 obj_size += r6xx_ps_size * 4; 502 502 obj_size = ALIGN(obj_size, 256); 503 503 504 - r = radeon_bo_create(rdev, NULL, obj_size, true, RADEON_GEM_DOMAIN_VRAM, 504 + r = radeon_bo_create(rdev, NULL, obj_size, PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM, 505 505 &rdev->r600_blit.shader_obj); 506 506 if (r) { 507 507 DRM_ERROR("r600 failed to allocate shader\n");
+194 -117
drivers/gpu/drm/radeon/r600_cs.c
··· 50 50 u32 nsamples; 51 51 u32 cb_color_base_last[8]; 52 52 struct radeon_bo *cb_color_bo[8]; 53 + u64 cb_color_bo_mc[8]; 53 54 u32 cb_color_bo_offset[8]; 54 55 struct radeon_bo *cb_color_frag_bo[8]; 55 56 struct radeon_bo *cb_color_tile_bo[8]; ··· 68 67 u32 db_depth_size; 69 68 u32 db_offset; 70 69 struct radeon_bo *db_bo; 70 + u64 db_bo_mc; 71 71 }; 72 72 73 73 static inline int r600_bpe_from_format(u32 *bpe, u32 format) ··· 142 140 return 0; 143 141 } 144 142 143 + struct array_mode_checker { 144 + int array_mode; 145 + u32 group_size; 146 + u32 nbanks; 147 + u32 npipes; 148 + u32 nsamples; 149 + u32 bpe; 150 + }; 151 + 152 + /* returns alignment in pixels for pitch/height/depth and bytes for base */ 153 + static inline int r600_get_array_mode_alignment(struct array_mode_checker *values, 154 + u32 *pitch_align, 155 + u32 *height_align, 156 + u32 *depth_align, 157 + u64 *base_align) 158 + { 159 + u32 tile_width = 8; 160 + u32 tile_height = 8; 161 + u32 macro_tile_width = values->nbanks; 162 + u32 macro_tile_height = values->npipes; 163 + u32 tile_bytes = tile_width * tile_height * values->bpe * values->nsamples; 164 + u32 macro_tile_bytes = macro_tile_width * macro_tile_height * tile_bytes; 165 + 166 + switch (values->array_mode) { 167 + case ARRAY_LINEAR_GENERAL: 168 + /* technically tile_width/_height for pitch/height */ 169 + *pitch_align = 1; /* tile_width */ 170 + *height_align = 1; /* tile_height */ 171 + *depth_align = 1; 172 + *base_align = 1; 173 + break; 174 + case ARRAY_LINEAR_ALIGNED: 175 + *pitch_align = max((u32)64, (u32)(values->group_size / values->bpe)); 176 + *height_align = tile_height; 177 + *depth_align = 1; 178 + *base_align = values->group_size; 179 + break; 180 + case ARRAY_1D_TILED_THIN1: 181 + *pitch_align = max((u32)tile_width, 182 + (u32)(values->group_size / 183 + (tile_height * values->bpe * values->nsamples))); 184 + *height_align = tile_height; 185 + *depth_align = 1; 186 + *base_align = values->group_size; 187 + break; 188 + 
case ARRAY_2D_TILED_THIN1: 189 + *pitch_align = max((u32)macro_tile_width, 190 + (u32)(((values->group_size / tile_height) / 191 + (values->bpe * values->nsamples)) * 192 + values->nbanks)) * tile_width; 193 + *height_align = macro_tile_height * tile_height; 194 + *depth_align = 1; 195 + *base_align = max(macro_tile_bytes, 196 + (*pitch_align) * values->bpe * (*height_align) * values->nsamples); 197 + break; 198 + default: 199 + return -EINVAL; 200 + } 201 + 202 + return 0; 203 + } 204 + 145 205 static void r600_cs_track_init(struct r600_cs_track *track) 146 206 { 147 207 int i; ··· 217 153 track->cb_color_info[i] = 0; 218 154 track->cb_color_bo[i] = NULL; 219 155 track->cb_color_bo_offset[i] = 0xFFFFFFFF; 156 + track->cb_color_bo_mc[i] = 0xFFFFFFFF; 220 157 } 221 158 track->cb_target_mask = 0xFFFFFFFF; 222 159 track->cb_shader_mask = 0xFFFFFFFF; 223 160 track->db_bo = NULL; 161 + track->db_bo_mc = 0xFFFFFFFF; 224 162 /* assume the biggest format and that htile is enabled */ 225 163 track->db_depth_info = 7 | (1 << 25); 226 164 track->db_depth_view = 0xFFFFC000; ··· 234 168 static inline int r600_cs_track_validate_cb(struct radeon_cs_parser *p, int i) 235 169 { 236 170 struct r600_cs_track *track = p->track; 237 - u32 bpe = 0, pitch, slice_tile_max, size, tmp, height, pitch_align; 171 + u32 bpe = 0, slice_tile_max, size, tmp; 172 + u32 height, height_align, pitch, pitch_align, depth_align; 173 + u64 base_offset, base_align; 174 + struct array_mode_checker array_check; 238 175 volatile u32 *ib = p->ib->ptr; 239 176 unsigned array_mode; 240 177 ··· 252 183 i, track->cb_color_info[i]); 253 184 return -EINVAL; 254 185 } 255 - /* pitch is the number of 8x8 tiles per row */ 256 - pitch = G_028060_PITCH_TILE_MAX(track->cb_color_size[i]) + 1; 186 + /* pitch in pixels */ 187 + pitch = (G_028060_PITCH_TILE_MAX(track->cb_color_size[i]) + 1) * 8; 257 188 slice_tile_max = G_028060_SLICE_TILE_MAX(track->cb_color_size[i]) + 1; 258 189 slice_tile_max *= 64; 259 - height = 
slice_tile_max / (pitch * 8); 190 + height = slice_tile_max / pitch; 260 191 if (height > 8192) 261 192 height = 8192; 262 193 array_mode = G_0280A0_ARRAY_MODE(track->cb_color_info[i]); 194 + 195 + base_offset = track->cb_color_bo_mc[i] + track->cb_color_bo_offset[i]; 196 + array_check.array_mode = array_mode; 197 + array_check.group_size = track->group_size; 198 + array_check.nbanks = track->nbanks; 199 + array_check.npipes = track->npipes; 200 + array_check.nsamples = track->nsamples; 201 + array_check.bpe = bpe; 202 + if (r600_get_array_mode_alignment(&array_check, 203 + &pitch_align, &height_align, &depth_align, &base_align)) { 204 + dev_warn(p->dev, "%s invalid tiling %d for %d (0x%08X)\n", __func__, 205 + G_0280A0_ARRAY_MODE(track->cb_color_info[i]), i, 206 + track->cb_color_info[i]); 207 + return -EINVAL; 208 + } 263 209 switch (array_mode) { 264 210 case V_0280A0_ARRAY_LINEAR_GENERAL: 265 - /* technically height & 0x7 */ 266 211 break; 267 212 case V_0280A0_ARRAY_LINEAR_ALIGNED: 268 - pitch_align = max((u32)64, (u32)(track->group_size / bpe)) / 8; 269 - if (!IS_ALIGNED(pitch, pitch_align)) { 270 - dev_warn(p->dev, "%s:%d cb pitch (%d) invalid\n", 271 - __func__, __LINE__, pitch); 272 - return -EINVAL; 273 - } 274 - if (!IS_ALIGNED(height, 8)) { 275 - dev_warn(p->dev, "%s:%d cb height (%d) invalid\n", 276 - __func__, __LINE__, height); 277 - return -EINVAL; 278 - } 279 213 break; 280 214 case V_0280A0_ARRAY_1D_TILED_THIN1: 281 - pitch_align = max((u32)8, (u32)(track->group_size / (8 * bpe * track->nsamples))) / 8; 282 - if (!IS_ALIGNED(pitch, pitch_align)) { 283 - dev_warn(p->dev, "%s:%d cb pitch (%d) invalid\n", 284 - __func__, __LINE__, pitch); 285 - return -EINVAL; 286 - } 287 215 /* avoid breaking userspace */ 288 216 if (height > 7) 289 217 height &= ~0x7; 290 - if (!IS_ALIGNED(height, 8)) { 291 - dev_warn(p->dev, "%s:%d cb height (%d) invalid\n", 292 - __func__, __LINE__, height); 293 - return -EINVAL; 294 - } 295 218 break; 296 219 case 
V_0280A0_ARRAY_2D_TILED_THIN1: 297 - pitch_align = max((u32)track->nbanks, 298 - (u32)(((track->group_size / 8) / (bpe * track->nsamples)) * track->nbanks)) / 8; 299 - if (!IS_ALIGNED(pitch, pitch_align)) { 300 - dev_warn(p->dev, "%s:%d cb pitch (%d) invalid\n", 301 - __func__, __LINE__, pitch); 302 - return -EINVAL; 303 - } 304 - if (!IS_ALIGNED((height / 8), track->npipes)) { 305 - dev_warn(p->dev, "%s:%d cb height (%d) invalid\n", 306 - __func__, __LINE__, height); 307 - return -EINVAL; 308 - } 309 220 break; 310 221 default: 311 222 dev_warn(p->dev, "%s invalid tiling %d for %d (0x%08X)\n", __func__, ··· 293 244 track->cb_color_info[i]); 294 245 return -EINVAL; 295 246 } 247 + 248 + if (!IS_ALIGNED(pitch, pitch_align)) { 249 + dev_warn(p->dev, "%s:%d cb pitch (%d) invalid\n", 250 + __func__, __LINE__, pitch); 251 + return -EINVAL; 252 + } 253 + if (!IS_ALIGNED(height, height_align)) { 254 + dev_warn(p->dev, "%s:%d cb height (%d) invalid\n", 255 + __func__, __LINE__, height); 256 + return -EINVAL; 257 + } 258 + if (!IS_ALIGNED(base_offset, base_align)) { 259 + dev_warn(p->dev, "%s offset[%d] 0x%llx not aligned\n", __func__, i, base_offset); 260 + return -EINVAL; 261 + } 262 + 296 263 /* check offset */ 297 - tmp = height * pitch * 8 * bpe; 264 + tmp = height * pitch * bpe; 298 265 if ((tmp + track->cb_color_bo_offset[i]) > radeon_bo_size(track->cb_color_bo[i])) { 299 266 if (array_mode == V_0280A0_ARRAY_LINEAR_GENERAL) { 300 267 /* the initial DDX does bad things with the CB size occasionally */ 301 268 /* it rounds up height too far for slice tile max but the BO is smaller */ 302 - tmp = (height - 7) * 8 * bpe; 269 + tmp = (height - 7) * pitch * bpe; 303 270 if ((tmp + track->cb_color_bo_offset[i]) > radeon_bo_size(track->cb_color_bo[i])) { 304 271 dev_warn(p->dev, "%s offset[%d] %d %d %lu too big\n", __func__, i, track->cb_color_bo_offset[i], tmp, radeon_bo_size(track->cb_color_bo[i])); 305 272 return -EINVAL; ··· 325 260 return -EINVAL; 326 261 } 327 262 } 
328 - if (!IS_ALIGNED(track->cb_color_bo_offset[i], track->group_size)) { 329 - dev_warn(p->dev, "%s offset[%d] %d not aligned\n", __func__, i, track->cb_color_bo_offset[i]); 330 - return -EINVAL; 331 - } 332 263 /* limit max tile */ 333 - tmp = (height * pitch * 8) >> 6; 264 + tmp = (height * pitch) >> 6; 334 265 if (tmp < slice_tile_max) 335 266 slice_tile_max = tmp; 336 - tmp = S_028060_PITCH_TILE_MAX(pitch - 1) | 267 + tmp = S_028060_PITCH_TILE_MAX((pitch / 8) - 1) | 337 268 S_028060_SLICE_TILE_MAX(slice_tile_max - 1); 338 269 ib[track->cb_color_size_idx[i]] = tmp; 339 270 return 0; ··· 371 310 /* Check depth buffer */ 372 311 if (G_028800_STENCIL_ENABLE(track->db_depth_control) || 373 312 G_028800_Z_ENABLE(track->db_depth_control)) { 374 - u32 nviews, bpe, ntiles, pitch, pitch_align, height, size, slice_tile_max; 313 + u32 nviews, bpe, ntiles, size, slice_tile_max; 314 + u32 height, height_align, pitch, pitch_align, depth_align; 315 + u64 base_offset, base_align; 316 + struct array_mode_checker array_check; 317 + int array_mode; 318 + 375 319 if (track->db_bo == NULL) { 376 320 dev_warn(p->dev, "z/stencil with no depth buffer\n"); 377 321 return -EINVAL; ··· 419 353 ib[track->db_depth_size_idx] = S_028000_SLICE_TILE_MAX(tmp - 1) | (track->db_depth_size & 0x3FF); 420 354 } else { 421 355 size = radeon_bo_size(track->db_bo); 422 - pitch = G_028000_PITCH_TILE_MAX(track->db_depth_size) + 1; 356 + /* pitch in pixels */ 357 + pitch = (G_028000_PITCH_TILE_MAX(track->db_depth_size) + 1) * 8; 423 358 slice_tile_max = G_028000_SLICE_TILE_MAX(track->db_depth_size) + 1; 424 359 slice_tile_max *= 64; 425 - height = slice_tile_max / (pitch * 8); 360 + height = slice_tile_max / pitch; 426 361 if (height > 8192) 427 362 height = 8192; 428 - switch (G_028010_ARRAY_MODE(track->db_depth_info)) { 363 + base_offset = track->db_bo_mc + track->db_offset; 364 + array_mode = G_028010_ARRAY_MODE(track->db_depth_info); 365 + array_check.array_mode = array_mode; 366 + 
array_check.group_size = track->group_size; 367 + array_check.nbanks = track->nbanks; 368 + array_check.npipes = track->npipes; 369 + array_check.nsamples = track->nsamples; 370 + array_check.bpe = bpe; 371 + if (r600_get_array_mode_alignment(&array_check, 372 + &pitch_align, &height_align, &depth_align, &base_align)) { 373 + dev_warn(p->dev, "%s invalid tiling %d (0x%08X)\n", __func__, 374 + G_028010_ARRAY_MODE(track->db_depth_info), 375 + track->db_depth_info); 376 + return -EINVAL; 377 + } 378 + switch (array_mode) { 429 379 case V_028010_ARRAY_1D_TILED_THIN1: 430 - pitch_align = (max((u32)8, (u32)(track->group_size / (8 * bpe))) / 8); 431 - if (!IS_ALIGNED(pitch, pitch_align)) { 432 - dev_warn(p->dev, "%s:%d db pitch (%d) invalid\n", 433 - __func__, __LINE__, pitch); 434 - return -EINVAL; 435 - } 436 380 /* don't break userspace */ 437 381 height &= ~0x7; 438 - if (!IS_ALIGNED(height, 8)) { 439 - dev_warn(p->dev, "%s:%d db height (%d) invalid\n", 440 - __func__, __LINE__, height); 441 - return -EINVAL; 442 - } 443 382 break; 444 383 case V_028010_ARRAY_2D_TILED_THIN1: 445 - pitch_align = max((u32)track->nbanks, 446 - (u32)(((track->group_size / 8) / bpe) * track->nbanks)) / 8; 447 - if (!IS_ALIGNED(pitch, pitch_align)) { 448 - dev_warn(p->dev, "%s:%d db pitch (%d) invalid\n", 449 - __func__, __LINE__, pitch); 450 - return -EINVAL; 451 - } 452 - if (!IS_ALIGNED((height / 8), track->npipes)) { 453 - dev_warn(p->dev, "%s:%d db height (%d) invalid\n", 454 - __func__, __LINE__, height); 455 - return -EINVAL; 456 - } 457 384 break; 458 385 default: 459 386 dev_warn(p->dev, "%s invalid tiling %d (0x%08X)\n", __func__, ··· 454 395 track->db_depth_info); 455 396 return -EINVAL; 456 397 } 457 - if (!IS_ALIGNED(track->db_offset, track->group_size)) { 458 - dev_warn(p->dev, "%s offset[%d] %d not aligned\n", __func__, i, track->db_offset); 398 + 399 + if (!IS_ALIGNED(pitch, pitch_align)) { 400 + dev_warn(p->dev, "%s:%d db pitch (%d) invalid\n", 401 + __func__, __LINE__, 
pitch); 459 402 return -EINVAL; 460 403 } 404 + if (!IS_ALIGNED(height, height_align)) { 405 + dev_warn(p->dev, "%s:%d db height (%d) invalid\n", 406 + __func__, __LINE__, height); 407 + return -EINVAL; 408 + } 409 + if (!IS_ALIGNED(base_offset, base_align)) { 410 + dev_warn(p->dev, "%s offset[%d] 0x%llx not aligned\n", __func__, i, base_offset); 411 + return -EINVAL; 412 + } 413 + 461 414 ntiles = G_028000_SLICE_TILE_MAX(track->db_depth_size) + 1; 462 415 nviews = G_028004_SLICE_MAX(track->db_depth_view) + 1; 463 416 tmp = ntiles * bpe * 64 * nviews; 464 417 if ((tmp + track->db_offset) > radeon_bo_size(track->db_bo)) { 465 - dev_warn(p->dev, "z/stencil buffer too small (0x%08X %d %d %d -> %d have %ld)\n", 418 + dev_warn(p->dev, "z/stencil buffer too small (0x%08X %d %d %d -> %u have %lu)\n", 466 419 track->db_depth_size, ntiles, nviews, bpe, tmp + track->db_offset, 467 420 radeon_bo_size(track->db_bo)); 468 421 return -EINVAL; ··· 1025 954 ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff); 1026 955 track->cb_color_base_last[tmp] = ib[idx]; 1027 956 track->cb_color_bo[tmp] = reloc->robj; 957 + track->cb_color_bo_mc[tmp] = reloc->lobj.gpu_offset; 1028 958 break; 1029 959 case DB_DEPTH_BASE: 1030 960 r = r600_cs_packet_next_reloc(p, &reloc); ··· 1037 965 track->db_offset = radeon_get_ib_value(p, idx) << 8; 1038 966 ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff); 1039 967 track->db_bo = reloc->robj; 968 + track->db_bo_mc = reloc->lobj.gpu_offset; 1040 969 break; 1041 970 case DB_HTILE_DATA_BASE: 1042 971 case SQ_PGM_START_FS: ··· 1159 1086 static inline int r600_check_texture_resource(struct radeon_cs_parser *p, u32 idx, 1160 1087 struct radeon_bo *texture, 1161 1088 struct radeon_bo *mipmap, 1089 + u64 base_offset, 1090 + u64 mip_offset, 1162 1091 u32 tiling_flags) 1163 1092 { 1164 1093 struct r600_cs_track *track = p->track; 1165 1094 u32 nfaces, nlevels, blevel, w0, h0, d0, bpe = 0; 1166 - u32 word0, word1, l0_size, mipmap_size, pitch, 
pitch_align; 1095 + u32 word0, word1, l0_size, mipmap_size; 1096 + u32 height_align, pitch, pitch_align, depth_align; 1097 + u64 base_align; 1098 + struct array_mode_checker array_check; 1167 1099 1168 1100 /* on legacy kernel we don't perform advanced check */ 1169 1101 if (p->rdev == NULL) 1170 1102 return 0; 1103 + 1104 + /* convert to bytes */ 1105 + base_offset <<= 8; 1106 + mip_offset <<= 8; 1171 1107 1172 1108 word0 = radeon_get_ib_value(p, idx + 0); 1173 1109 if (tiling_flags & RADEON_TILING_MACRO) ··· 1210 1128 return -EINVAL; 1211 1129 } 1212 1130 1213 - pitch = G_038000_PITCH(word0) + 1; 1214 - switch (G_038000_TILE_MODE(word0)) { 1215 - case V_038000_ARRAY_LINEAR_GENERAL: 1216 - pitch_align = 1; 1217 - /* XXX check height align */ 1218 - break; 1219 - case V_038000_ARRAY_LINEAR_ALIGNED: 1220 - pitch_align = max((u32)64, (u32)(track->group_size / bpe)) / 8; 1221 - if (!IS_ALIGNED(pitch, pitch_align)) { 1222 - dev_warn(p->dev, "%s:%d tex pitch (%d) invalid\n", 1223 - __func__, __LINE__, pitch); 1224 - return -EINVAL; 1225 - } 1226 - /* XXX check height align */ 1227 - break; 1228 - case V_038000_ARRAY_1D_TILED_THIN1: 1229 - pitch_align = max((u32)8, (u32)(track->group_size / (8 * bpe))) / 8; 1230 - if (!IS_ALIGNED(pitch, pitch_align)) { 1231 - dev_warn(p->dev, "%s:%d tex pitch (%d) invalid\n", 1232 - __func__, __LINE__, pitch); 1233 - return -EINVAL; 1234 - } 1235 - /* XXX check height align */ 1236 - break; 1237 - case V_038000_ARRAY_2D_TILED_THIN1: 1238 - pitch_align = max((u32)track->nbanks, 1239 - (u32)(((track->group_size / 8) / bpe) * track->nbanks)) / 8; 1240 - if (!IS_ALIGNED(pitch, pitch_align)) { 1241 - dev_warn(p->dev, "%s:%d tex pitch (%d) invalid\n", 1242 - __func__, __LINE__, pitch); 1243 - return -EINVAL; 1244 - } 1245 - /* XXX check height align */ 1246 - break; 1247 - default: 1248 - dev_warn(p->dev, "%s invalid tiling %d (0x%08X)\n", __func__, 1249 - G_038000_TILE_MODE(word0), word0); 1131 + /* pitch in texels */ 1132 + pitch = 
(G_038000_PITCH(word0) + 1) * 8; 1133 + array_check.array_mode = G_038000_TILE_MODE(word0); 1134 + array_check.group_size = track->group_size; 1135 + array_check.nbanks = track->nbanks; 1136 + array_check.npipes = track->npipes; 1137 + array_check.nsamples = 1; 1138 + array_check.bpe = bpe; 1139 + if (r600_get_array_mode_alignment(&array_check, 1140 + &pitch_align, &height_align, &depth_align, &base_align)) { 1141 + dev_warn(p->dev, "%s:%d tex array mode (%d) invalid\n", 1142 + __func__, __LINE__, G_038000_TILE_MODE(word0)); 1250 1143 return -EINVAL; 1251 1144 } 1252 - /* XXX check offset align */ 1145 + 1146 + /* XXX check height as well... */ 1147 + 1148 + if (!IS_ALIGNED(pitch, pitch_align)) { 1149 + dev_warn(p->dev, "%s:%d tex pitch (%d) invalid\n", 1150 + __func__, __LINE__, pitch); 1151 + return -EINVAL; 1152 + } 1153 + if (!IS_ALIGNED(base_offset, base_align)) { 1154 + dev_warn(p->dev, "%s:%d tex base offset (0x%llx) invalid\n", 1155 + __func__, __LINE__, base_offset); 1156 + return -EINVAL; 1157 + } 1158 + if (!IS_ALIGNED(mip_offset, base_align)) { 1159 + dev_warn(p->dev, "%s:%d tex mip offset (0x%llx) invalid\n", 1160 + __func__, __LINE__, mip_offset); 1161 + return -EINVAL; 1162 + } 1253 1163 1254 1164 word0 = radeon_get_ib_value(p, idx + 4); 1255 1165 word1 = radeon_get_ib_value(p, idx + 5); ··· 1476 1402 mip_offset = (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff); 1477 1403 mipmap = reloc->robj; 1478 1404 r = r600_check_texture_resource(p, idx+(i*7)+1, 1479 - texture, mipmap, reloc->lobj.tiling_flags); 1405 + texture, mipmap, 1406 + base_offset + radeon_get_ib_value(p, idx+1+(i*7)+2), 1407 + mip_offset + radeon_get_ib_value(p, idx+1+(i*7)+3), 1408 + reloc->lobj.tiling_flags); 1480 1409 if (r) 1481 1410 return r; 1482 1411 ib[idx+1+(i*7)+2] += base_offset;
+6
drivers/gpu/drm/radeon/r600d.h
··· 51 51 #define PTE_READABLE (1 << 5) 52 52 #define PTE_WRITEABLE (1 << 6) 53 53 54 + /* tiling bits */ 55 + #define ARRAY_LINEAR_GENERAL 0x00000000 56 + #define ARRAY_LINEAR_ALIGNED 0x00000001 57 + #define ARRAY_1D_TILED_THIN1 0x00000002 58 + #define ARRAY_2D_TILED_THIN1 0x00000004 59 + 54 60 /* Registers */ 55 61 #define ARB_POP 0x2418 56 62 #define ENABLE_TC128 (1 << 30)
+4
drivers/gpu/drm/radeon/radeon.h
··· 1262 1262 (rdev->family == CHIP_RS400) || \ 1263 1263 (rdev->family == CHIP_RS480)) 1264 1264 #define ASIC_IS_AVIVO(rdev) ((rdev->family >= CHIP_RS600)) 1265 + #define ASIC_IS_DCE2(rdev) ((rdev->family == CHIP_RS600) || \ 1266 + (rdev->family == CHIP_RS690) || \ 1267 + (rdev->family == CHIP_RS740) || \ 1268 + (rdev->family >= CHIP_R600)) 1265 1269 #define ASIC_IS_DCE3(rdev) ((rdev->family >= CHIP_RV620)) 1266 1270 #define ASIC_IS_DCE32(rdev) ((rdev->family >= CHIP_RV730)) 1267 1271 #define ASIC_IS_DCE4(rdev) ((rdev->family >= CHIP_CEDAR))
+2 -2
drivers/gpu/drm/radeon/radeon_benchmark.c
··· 41 41 42 42 size = bsize; 43 43 n = 1024; 44 - r = radeon_bo_create(rdev, NULL, size, true, sdomain, &sobj); 44 + r = radeon_bo_create(rdev, NULL, size, PAGE_SIZE, true, sdomain, &sobj); 45 45 if (r) { 46 46 goto out_cleanup; 47 47 } ··· 53 53 if (r) { 54 54 goto out_cleanup; 55 55 } 56 - r = radeon_bo_create(rdev, NULL, size, true, ddomain, &dobj); 56 + r = radeon_bo_create(rdev, NULL, size, PAGE_SIZE, true, ddomain, &dobj); 57 57 if (r) { 58 58 goto out_cleanup; 59 59 }
+13
drivers/gpu/drm/radeon/radeon_combios.c
··· 571 571 } 572 572 573 573 if (clk_mask && data_mask) { 574 + /* system specific masks */ 574 575 i2c.mask_clk_mask = clk_mask; 575 576 i2c.mask_data_mask = data_mask; 576 577 i2c.a_clk_mask = clk_mask; ··· 580 579 i2c.en_data_mask = data_mask; 581 580 i2c.y_clk_mask = clk_mask; 582 581 i2c.y_data_mask = data_mask; 582 + } else if ((ddc_line == RADEON_GPIOPAD_MASK) || 583 + (ddc_line == RADEON_MDGPIO_MASK)) { 584 + /* default gpiopad masks */ 585 + i2c.mask_clk_mask = (0x20 << 8); 586 + i2c.mask_data_mask = 0x80; 587 + i2c.a_clk_mask = (0x20 << 8); 588 + i2c.a_data_mask = 0x80; 589 + i2c.en_clk_mask = (0x20 << 8); 590 + i2c.en_data_mask = 0x80; 591 + i2c.y_clk_mask = (0x20 << 8); 592 + i2c.y_data_mask = 0x80; 583 593 } else { 594 + /* default masks for ddc pads */ 584 595 i2c.mask_clk_mask = RADEON_GPIO_EN_1; 585 596 i2c.mask_data_mask = RADEON_GPIO_EN_0; 586 597 i2c.a_clk_mask = RADEON_GPIO_A_1;
+18
drivers/gpu/drm/radeon/radeon_connectors.c
··· 1008 1008 static int radeon_dp_get_modes(struct drm_connector *connector) 1009 1009 { 1010 1010 struct radeon_connector *radeon_connector = to_radeon_connector(connector); 1011 + struct radeon_connector_atom_dig *radeon_dig_connector = radeon_connector->con_priv; 1011 1012 int ret; 1012 1013 1014 + if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { 1015 + if (!radeon_dig_connector->edp_on) 1016 + atombios_set_edp_panel_power(connector, 1017 + ATOM_TRANSMITTER_ACTION_POWER_ON); 1018 + } 1013 1019 ret = radeon_ddc_get_modes(radeon_connector); 1020 + if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { 1021 + if (!radeon_dig_connector->edp_on) 1022 + atombios_set_edp_panel_power(connector, 1023 + ATOM_TRANSMITTER_ACTION_POWER_OFF); 1024 + } 1025 + 1014 1026 return ret; 1015 1027 } 1016 1028 ··· 1041 1029 if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { 1042 1030 /* eDP is always DP */ 1043 1031 radeon_dig_connector->dp_sink_type = CONNECTOR_OBJECT_ID_DISPLAYPORT; 1032 + if (!radeon_dig_connector->edp_on) 1033 + atombios_set_edp_panel_power(connector, 1034 + ATOM_TRANSMITTER_ACTION_POWER_ON); 1044 1035 if (radeon_dp_getdpcd(radeon_connector)) 1045 1036 ret = connector_status_connected; 1037 + if (!radeon_dig_connector->edp_on) 1038 + atombios_set_edp_panel_power(connector, 1039 + ATOM_TRANSMITTER_ACTION_POWER_OFF); 1046 1040 } else { 1047 1041 radeon_dig_connector->dp_sink_type = radeon_dp_getsinktype(radeon_connector); 1048 1042 if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) {
+1 -1
drivers/gpu/drm/radeon/radeon_device.c
··· 180 180 int r; 181 181 182 182 if (rdev->wb.wb_obj == NULL) { 183 - r = radeon_bo_create(rdev, NULL, RADEON_GPU_PAGE_SIZE, true, 183 + r = radeon_bo_create(rdev, NULL, RADEON_GPU_PAGE_SIZE, PAGE_SIZE, true, 184 184 RADEON_GEM_DOMAIN_GTT, &rdev->wb.wb_obj); 185 185 if (r) { 186 186 dev_warn(rdev->dev, "(%d) create WB bo failed\n", r);
+307 -47
drivers/gpu/drm/radeon/radeon_encoders.c
··· 176 176 return false; 177 177 } 178 178 } 179 + 179 180 void 180 181 radeon_link_encoder_connector(struct drm_device *dev) 181 182 { ··· 225 224 radeon_connector = to_radeon_connector(connector); 226 225 if (radeon_encoder->active_device & radeon_connector->devices) 227 226 return connector; 227 + } 228 + return NULL; 229 + } 230 + 231 + struct drm_encoder *radeon_atom_get_external_encoder(struct drm_encoder *encoder) 232 + { 233 + struct drm_device *dev = encoder->dev; 234 + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 235 + struct drm_encoder *other_encoder; 236 + struct radeon_encoder *other_radeon_encoder; 237 + 238 + if (radeon_encoder->is_ext_encoder) 239 + return NULL; 240 + 241 + list_for_each_entry(other_encoder, &dev->mode_config.encoder_list, head) { 242 + if (other_encoder == encoder) 243 + continue; 244 + other_radeon_encoder = to_radeon_encoder(other_encoder); 245 + if (other_radeon_encoder->is_ext_encoder && 246 + (radeon_encoder->devices & other_radeon_encoder->devices)) 247 + return other_encoder; 228 248 } 229 249 return NULL; 230 250 } ··· 448 426 449 427 } 450 428 429 + union dvo_encoder_control { 430 + ENABLE_EXTERNAL_TMDS_ENCODER_PS_ALLOCATION ext_tmds; 431 + DVO_ENCODER_CONTROL_PS_ALLOCATION dvo; 432 + DVO_ENCODER_CONTROL_PS_ALLOCATION_V3 dvo_v3; 433 + }; 434 + 451 435 void 452 - atombios_external_tmds_setup(struct drm_encoder *encoder, int action) 436 + atombios_dvo_setup(struct drm_encoder *encoder, int action) 453 437 { 454 438 struct drm_device *dev = encoder->dev; 455 439 struct radeon_device *rdev = dev->dev_private; 456 440 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 457 - ENABLE_EXTERNAL_TMDS_ENCODER_PS_ALLOCATION args; 458 - int index = 0; 441 + union dvo_encoder_control args; 442 + int index = GetIndexIntoMasterTable(COMMAND, DVOEncoderControl); 459 443 460 444 memset(&args, 0, sizeof(args)); 461 445 462 - index = GetIndexIntoMasterTable(COMMAND, DVOEncoderControl); 446 + if 
(ASIC_IS_DCE3(rdev)) { 447 + /* DCE3+ */ 448 + args.dvo_v3.ucAction = action; 449 + args.dvo_v3.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); 450 + args.dvo_v3.ucDVOConfig = 0; /* XXX */ 451 + } else if (ASIC_IS_DCE2(rdev)) { 452 + /* DCE2 (pre-DCE3 R6xx, RS600/690/740 */ 453 + args.dvo.sDVOEncoder.ucAction = action; 454 + args.dvo.sDVOEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); 455 + /* DFP1, CRT1, TV1 depending on the type of port */ 456 + args.dvo.sDVOEncoder.ucDeviceType = ATOM_DEVICE_DFP1_INDEX; 463 457 464 - args.sXTmdsEncoder.ucEnable = action; 458 + if (radeon_encoder->pixel_clock > 165000) 459 + args.dvo.sDVOEncoder.usDevAttr.sDigAttrib.ucAttribute |= PANEL_ENCODER_MISC_DUAL; 460 + } else { 461 + /* R4xx, R5xx */ 462 + args.ext_tmds.sXTmdsEncoder.ucEnable = action; 465 463 466 - if (radeon_encoder->pixel_clock > 165000) 467 - args.sXTmdsEncoder.ucMisc = PANEL_ENCODER_MISC_DUAL; 464 + if (radeon_encoder->pixel_clock > 165000) 465 + args.ext_tmds.sXTmdsEncoder.ucMisc |= PANEL_ENCODER_MISC_DUAL; 468 466 469 - /*if (pScrn->rgbBits == 8)*/ 470 - args.sXTmdsEncoder.ucMisc |= (1 << 1); 471 - 472 - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 473 - 474 - } 475 - 476 - static void 477 - atombios_ddia_setup(struct drm_encoder *encoder, int action) 478 - { 479 - struct drm_device *dev = encoder->dev; 480 - struct radeon_device *rdev = dev->dev_private; 481 - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 482 - DVO_ENCODER_CONTROL_PS_ALLOCATION args; 483 - int index = 0; 484 - 485 - memset(&args, 0, sizeof(args)); 486 - 487 - index = GetIndexIntoMasterTable(COMMAND, DVOEncoderControl); 488 - 489 - args.sDVOEncoder.ucAction = action; 490 - args.sDVOEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); 491 - 492 - if (radeon_encoder->pixel_clock > 165000) 493 - args.sDVOEncoder.usDevAttr.sDigAttrib.ucAttribute = PANEL_ENCODER_MISC_DUAL; 467 + /*if 
(pScrn->rgbBits == 8)*/ 468 + args.ext_tmds.sXTmdsEncoder.ucMisc |= ATOM_PANEL_MISC_888RGB; 469 + } 494 470 495 471 atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 496 - 497 472 } 498 473 499 474 union lvds_encoder_control { ··· 551 532 if (dig->lcd_misc & ATOM_PANEL_MISC_DUAL) 552 533 args.v1.ucMisc |= PANEL_ENCODER_MISC_DUAL; 553 534 if (dig->lcd_misc & ATOM_PANEL_MISC_888RGB) 554 - args.v1.ucMisc |= (1 << 1); 535 + args.v1.ucMisc |= ATOM_PANEL_MISC_888RGB; 555 536 } else { 556 537 if (dig->linkb) 557 538 args.v1.ucMisc |= PANEL_ENCODER_MISC_TMDS_LINKB; 558 539 if (radeon_encoder->pixel_clock > 165000) 559 540 args.v1.ucMisc |= PANEL_ENCODER_MISC_DUAL; 560 541 /*if (pScrn->rgbBits == 8) */ 561 - args.v1.ucMisc |= (1 << 1); 542 + args.v1.ucMisc |= ATOM_PANEL_MISC_888RGB; 562 543 } 563 544 break; 564 545 case 2: ··· 614 595 int 615 596 atombios_get_encoder_mode(struct drm_encoder *encoder) 616 597 { 598 + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 617 599 struct drm_device *dev = encoder->dev; 618 600 struct radeon_device *rdev = dev->dev_private; 619 601 struct drm_connector *connector; ··· 622 602 struct radeon_connector_atom_dig *dig_connector; 623 603 624 604 connector = radeon_get_connector_for_encoder(encoder); 625 - if (!connector) 626 - return 0; 627 - 605 + if (!connector) { 606 + switch (radeon_encoder->encoder_id) { 607 + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: 608 + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 609 + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 610 + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: 611 + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: 612 + return ATOM_ENCODER_MODE_DVI; 613 + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: 614 + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: 615 + default: 616 + return ATOM_ENCODER_MODE_CRT; 617 + } 618 + } 628 619 radeon_connector = to_radeon_connector(connector); 629 620 630 621 switch (connector->connector_type) { ··· 865 834 memset(&args, 0, 
sizeof(args)); 866 835 867 836 switch (radeon_encoder->encoder_id) { 837 + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: 838 + index = GetIndexIntoMasterTable(COMMAND, DVOOutputControl); 839 + break; 868 840 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: 869 841 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 870 842 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: ··· 1012 978 atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 1013 979 } 1014 980 981 + void 982 + atombios_set_edp_panel_power(struct drm_connector *connector, int action) 983 + { 984 + struct radeon_connector *radeon_connector = to_radeon_connector(connector); 985 + struct drm_device *dev = radeon_connector->base.dev; 986 + struct radeon_device *rdev = dev->dev_private; 987 + union dig_transmitter_control args; 988 + int index = GetIndexIntoMasterTable(COMMAND, UNIPHYTransmitterControl); 989 + uint8_t frev, crev; 990 + 991 + if (connector->connector_type != DRM_MODE_CONNECTOR_eDP) 992 + return; 993 + 994 + if (!ASIC_IS_DCE4(rdev)) 995 + return; 996 + 997 + if ((action != ATOM_TRANSMITTER_ACTION_POWER_ON) && 998 + (action != ATOM_TRANSMITTER_ACTION_POWER_OFF)) 999 + return; 1000 + 1001 + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) 1002 + return; 1003 + 1004 + memset(&args, 0, sizeof(args)); 1005 + 1006 + args.v1.ucAction = action; 1007 + 1008 + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 1009 + } 1010 + 1011 + union external_encoder_control { 1012 + EXTERNAL_ENCODER_CONTROL_PS_ALLOCATION v1; 1013 + }; 1014 + 1015 + static void 1016 + atombios_external_encoder_setup(struct drm_encoder *encoder, 1017 + struct drm_encoder *ext_encoder, 1018 + int action) 1019 + { 1020 + struct drm_device *dev = encoder->dev; 1021 + struct radeon_device *rdev = dev->dev_private; 1022 + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1023 + union external_encoder_control args; 1024 + struct drm_connector *connector = 
radeon_get_connector_for_encoder(encoder); 1025 + int index = GetIndexIntoMasterTable(COMMAND, ExternalEncoderControl); 1026 + u8 frev, crev; 1027 + int dp_clock = 0; 1028 + int dp_lane_count = 0; 1029 + int connector_object_id = 0; 1030 + 1031 + if (connector) { 1032 + struct radeon_connector *radeon_connector = to_radeon_connector(connector); 1033 + struct radeon_connector_atom_dig *dig_connector = 1034 + radeon_connector->con_priv; 1035 + 1036 + dp_clock = dig_connector->dp_clock; 1037 + dp_lane_count = dig_connector->dp_lane_count; 1038 + connector_object_id = 1039 + (radeon_connector->connector_object_id & OBJECT_ID_MASK) >> OBJECT_ID_SHIFT; 1040 + } 1041 + 1042 + memset(&args, 0, sizeof(args)); 1043 + 1044 + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) 1045 + return; 1046 + 1047 + switch (frev) { 1048 + case 1: 1049 + /* no params on frev 1 */ 1050 + break; 1051 + case 2: 1052 + switch (crev) { 1053 + case 1: 1054 + case 2: 1055 + args.v1.sDigEncoder.ucAction = action; 1056 + args.v1.sDigEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); 1057 + args.v1.sDigEncoder.ucEncoderMode = atombios_get_encoder_mode(encoder); 1058 + 1059 + if (args.v1.sDigEncoder.ucEncoderMode == ATOM_ENCODER_MODE_DP) { 1060 + if (dp_clock == 270000) 1061 + args.v1.sDigEncoder.ucConfig |= ATOM_ENCODER_CONFIG_DPLINKRATE_2_70GHZ; 1062 + args.v1.sDigEncoder.ucLaneNum = dp_lane_count; 1063 + } else if (radeon_encoder->pixel_clock > 165000) 1064 + args.v1.sDigEncoder.ucLaneNum = 8; 1065 + else 1066 + args.v1.sDigEncoder.ucLaneNum = 4; 1067 + break; 1068 + default: 1069 + DRM_ERROR("Unknown table version: %d, %d\n", frev, crev); 1070 + return; 1071 + } 1072 + break; 1073 + default: 1074 + DRM_ERROR("Unknown table version: %d, %d\n", frev, crev); 1075 + return; 1076 + } 1077 + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 1078 + } 1079 + 1015 1080 static void 1016 1081 atombios_yuv_setup(struct drm_encoder 
*encoder, bool enable) 1017 1082 { ··· 1154 1021 struct drm_device *dev = encoder->dev; 1155 1022 struct radeon_device *rdev = dev->dev_private; 1156 1023 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1024 + struct drm_encoder *ext_encoder = radeon_atom_get_external_encoder(encoder); 1157 1025 DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION args; 1158 1026 int index = 0; 1159 1027 bool is_dig = false; ··· 1177 1043 break; 1178 1044 case ENCODER_OBJECT_ID_INTERNAL_DVO1: 1179 1045 case ENCODER_OBJECT_ID_INTERNAL_DDI: 1180 - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: 1181 1046 index = GetIndexIntoMasterTable(COMMAND, DVOOutputControl); 1047 + break; 1048 + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: 1049 + if (ASIC_IS_DCE3(rdev)) 1050 + is_dig = true; 1051 + else 1052 + index = GetIndexIntoMasterTable(COMMAND, DVOOutputControl); 1182 1053 break; 1183 1054 case ENCODER_OBJECT_ID_INTERNAL_LVDS: 1184 1055 index = GetIndexIntoMasterTable(COMMAND, LCD1OutputControl); ··· 1221 1082 if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_DP) { 1222 1083 struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 1223 1084 1085 + if (connector && 1086 + (connector->connector_type == DRM_MODE_CONNECTOR_eDP)) { 1087 + struct radeon_connector *radeon_connector = to_radeon_connector(connector); 1088 + struct radeon_connector_atom_dig *radeon_dig_connector = 1089 + radeon_connector->con_priv; 1090 + atombios_set_edp_panel_power(connector, 1091 + ATOM_TRANSMITTER_ACTION_POWER_ON); 1092 + radeon_dig_connector->edp_on = true; 1093 + } 1224 1094 dp_link_train(encoder, connector); 1225 1095 if (ASIC_IS_DCE4(rdev)) 1226 1096 atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON); 1227 1097 } 1098 + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) 1099 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_LCD_BLON, 0, 0); 1228 1100 break; 1229 1101 case DRM_MODE_DPMS_STANDBY: 1230 1102 case DRM_MODE_DPMS_SUSPEND: 
1231 1103 case DRM_MODE_DPMS_OFF: 1232 1104 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0); 1233 1105 if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_DP) { 1106 + struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 1107 + 1234 1108 if (ASIC_IS_DCE4(rdev)) 1235 1109 atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF); 1110 + if (connector && 1111 + (connector->connector_type == DRM_MODE_CONNECTOR_eDP)) { 1112 + struct radeon_connector *radeon_connector = to_radeon_connector(connector); 1113 + struct radeon_connector_atom_dig *radeon_dig_connector = 1114 + radeon_connector->con_priv; 1115 + atombios_set_edp_panel_power(connector, 1116 + ATOM_TRANSMITTER_ACTION_POWER_OFF); 1117 + radeon_dig_connector->edp_on = false; 1118 + } 1236 1119 } 1120 + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) 1121 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_LCD_BLOFF, 0, 0); 1237 1122 break; 1238 1123 } 1239 1124 } else { 1240 1125 switch (mode) { 1241 1126 case DRM_MODE_DPMS_ON: 1242 1127 args.ucAction = ATOM_ENABLE; 1128 + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 1129 + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { 1130 + args.ucAction = ATOM_LCD_BLON; 1131 + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 1132 + } 1243 1133 break; 1244 1134 case DRM_MODE_DPMS_STANDBY: 1245 1135 case DRM_MODE_DPMS_SUSPEND: 1246 1136 case DRM_MODE_DPMS_OFF: 1247 1137 args.ucAction = ATOM_DISABLE; 1138 + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 1139 + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { 1140 + args.ucAction = ATOM_LCD_BLOFF; 1141 + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 1142 + } 1248 1143 break; 1249 1144 } 1250 - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 1251 1145 } 1146 + 1147 + 
if (ext_encoder) { 1148 + int action; 1149 + 1150 + switch (mode) { 1151 + case DRM_MODE_DPMS_ON: 1152 + default: 1153 + action = ATOM_ENABLE; 1154 + break; 1155 + case DRM_MODE_DPMS_STANDBY: 1156 + case DRM_MODE_DPMS_SUSPEND: 1157 + case DRM_MODE_DPMS_OFF: 1158 + action = ATOM_DISABLE; 1159 + break; 1160 + } 1161 + atombios_external_encoder_setup(encoder, ext_encoder, action); 1162 + } 1163 + 1252 1164 radeon_atombios_encoder_dpms_scratch_regs(encoder, (mode == DRM_MODE_DPMS_ON) ? true : false); 1253 1165 1254 1166 } ··· 1432 1242 break; 1433 1243 default: 1434 1244 DRM_ERROR("Unknown table version: %d, %d\n", frev, crev); 1435 - break; 1245 + return; 1436 1246 } 1437 1247 1438 1248 atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); ··· 1547 1357 struct drm_device *dev = encoder->dev; 1548 1358 struct radeon_device *rdev = dev->dev_private; 1549 1359 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1360 + struct drm_encoder *ext_encoder = radeon_atom_get_external_encoder(encoder); 1550 1361 1551 1362 radeon_encoder->pixel_clock = adjusted_mode->clock; 1552 1363 ··· 1591 1400 } 1592 1401 break; 1593 1402 case ENCODER_OBJECT_ID_INTERNAL_DDI: 1594 - atombios_ddia_setup(encoder, ATOM_ENABLE); 1595 - break; 1596 1403 case ENCODER_OBJECT_ID_INTERNAL_DVO1: 1597 1404 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: 1598 - atombios_external_tmds_setup(encoder, ATOM_ENABLE); 1405 + atombios_dvo_setup(encoder, ATOM_ENABLE); 1599 1406 break; 1600 1407 case ENCODER_OBJECT_ID_INTERNAL_DAC1: 1601 1408 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: ··· 1608 1419 } 1609 1420 break; 1610 1421 } 1422 + 1423 + if (ext_encoder) { 1424 + atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE); 1425 + } 1426 + 1611 1427 atombios_apply_encoder_quirks(encoder, adjusted_mode); 1612 1428 1613 1429 if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) { ··· 1789 1595 } 1790 1596 break; 1791 1597 case ENCODER_OBJECT_ID_INTERNAL_DDI: 
1792 - atombios_ddia_setup(encoder, ATOM_DISABLE); 1793 - break; 1794 1598 case ENCODER_OBJECT_ID_INTERNAL_DVO1: 1795 1599 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: 1796 - atombios_external_tmds_setup(encoder, ATOM_DISABLE); 1600 + atombios_dvo_setup(encoder, ATOM_DISABLE); 1797 1601 break; 1798 1602 case ENCODER_OBJECT_ID_INTERNAL_DAC1: 1799 1603 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: ··· 1812 1620 } 1813 1621 radeon_encoder->active_device = 0; 1814 1622 } 1623 + 1624 + /* these are handled by the primary encoders */ 1625 + static void radeon_atom_ext_prepare(struct drm_encoder *encoder) 1626 + { 1627 + 1628 + } 1629 + 1630 + static void radeon_atom_ext_commit(struct drm_encoder *encoder) 1631 + { 1632 + 1633 + } 1634 + 1635 + static void 1636 + radeon_atom_ext_mode_set(struct drm_encoder *encoder, 1637 + struct drm_display_mode *mode, 1638 + struct drm_display_mode *adjusted_mode) 1639 + { 1640 + 1641 + } 1642 + 1643 + static void radeon_atom_ext_disable(struct drm_encoder *encoder) 1644 + { 1645 + 1646 + } 1647 + 1648 + static void 1649 + radeon_atom_ext_dpms(struct drm_encoder *encoder, int mode) 1650 + { 1651 + 1652 + } 1653 + 1654 + static bool radeon_atom_ext_mode_fixup(struct drm_encoder *encoder, 1655 + struct drm_display_mode *mode, 1656 + struct drm_display_mode *adjusted_mode) 1657 + { 1658 + return true; 1659 + } 1660 + 1661 + static const struct drm_encoder_helper_funcs radeon_atom_ext_helper_funcs = { 1662 + .dpms = radeon_atom_ext_dpms, 1663 + .mode_fixup = radeon_atom_ext_mode_fixup, 1664 + .prepare = radeon_atom_ext_prepare, 1665 + .mode_set = radeon_atom_ext_mode_set, 1666 + .commit = radeon_atom_ext_commit, 1667 + .disable = radeon_atom_ext_disable, 1668 + /* no detect for TMDS/LVDS yet */ 1669 + }; 1815 1670 1816 1671 static const struct drm_encoder_helper_funcs radeon_atom_dig_helper_funcs = { 1817 1672 .dpms = radeon_atom_encoder_dpms, ··· 1969 1730 radeon_encoder->devices = supported_device; 1970 1731 radeon_encoder->rmx_type = 
RMX_OFF; 1971 1732 radeon_encoder->underscan_type = UNDERSCAN_OFF; 1733 + radeon_encoder->is_ext_encoder = false; 1972 1734 1973 1735 switch (radeon_encoder->encoder_id) { 1974 1736 case ENCODER_OBJECT_ID_INTERNAL_LVDS: ··· 2011 1771 radeon_encoder->rmx_type = RMX_FULL; 2012 1772 drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_LVDS); 2013 1773 radeon_encoder->enc_priv = radeon_atombios_get_lvds_info(radeon_encoder); 1774 + } else if (radeon_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) { 1775 + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_DAC); 1776 + radeon_encoder->enc_priv = radeon_atombios_set_dig_info(radeon_encoder); 2014 1777 } else { 2015 1778 drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TMDS); 2016 1779 radeon_encoder->enc_priv = radeon_atombios_set_dig_info(radeon_encoder); ··· 2021 1778 radeon_encoder->underscan_type = UNDERSCAN_AUTO; 2022 1779 } 2023 1780 drm_encoder_helper_add(encoder, &radeon_atom_dig_helper_funcs); 1781 + break; 1782 + case ENCODER_OBJECT_ID_SI170B: 1783 + case ENCODER_OBJECT_ID_CH7303: 1784 + case ENCODER_OBJECT_ID_EXTERNAL_SDVOA: 1785 + case ENCODER_OBJECT_ID_EXTERNAL_SDVOB: 1786 + case ENCODER_OBJECT_ID_TITFP513: 1787 + case ENCODER_OBJECT_ID_VT1623: 1788 + case ENCODER_OBJECT_ID_HDMI_SI1930: 1789 + /* these are handled by the primary encoders */ 1790 + radeon_encoder->is_ext_encoder = true; 1791 + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) 1792 + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_LVDS); 1793 + else if (radeon_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) 1794 + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_DAC); 1795 + else 1796 + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TMDS); 1797 + drm_encoder_helper_add(encoder, &radeon_atom_ext_helper_funcs); 2024 1798 break; 2025 1799 } 2026 1800 }
+2 -2
drivers/gpu/drm/radeon/radeon_gart.c
··· 79 79 80 80 if (rdev->gart.table.vram.robj == NULL) { 81 81 r = radeon_bo_create(rdev, NULL, rdev->gart.table_size, 82 - true, RADEON_GEM_DOMAIN_VRAM, 83 - &rdev->gart.table.vram.robj); 82 + PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM, 83 + &rdev->gart.table.vram.robj); 84 84 if (r) { 85 85 return r; 86 86 }
+1 -1
drivers/gpu/drm/radeon/radeon_gem.c
··· 67 67 if (alignment < PAGE_SIZE) { 68 68 alignment = PAGE_SIZE; 69 69 } 70 - r = radeon_bo_create(rdev, gobj, size, kernel, initial_domain, &robj); 70 + r = radeon_bo_create(rdev, gobj, size, alignment, kernel, initial_domain, &robj); 71 71 if (r) { 72 72 if (r != -ERESTARTSYS) 73 73 DRM_ERROR("Failed to allocate GEM object (%d, %d, %u, %d)\n",
+6 -2
drivers/gpu/drm/radeon/radeon_i2c.c
··· 896 896 ((rdev->family <= CHIP_RS480) || 897 897 ((rdev->family >= CHIP_RV515) && (rdev->family <= CHIP_R580))))) { 898 898 /* set the radeon hw i2c adapter */ 899 - sprintf(i2c->adapter.name, "Radeon i2c hw bus %s", name); 899 + snprintf(i2c->adapter.name, sizeof(i2c->adapter.name), 900 + "Radeon i2c hw bus %s", name); 900 901 i2c->adapter.algo = &radeon_i2c_algo; 901 902 ret = i2c_add_adapter(&i2c->adapter); 902 903 if (ret) { ··· 906 905 } 907 906 } else { 908 907 /* set the radeon bit adapter */ 909 - sprintf(i2c->adapter.name, "Radeon i2c bit bus %s", name); 908 + snprintf(i2c->adapter.name, sizeof(i2c->adapter.name), 909 + "Radeon i2c bit bus %s", name); 910 910 i2c->adapter.algo_data = &i2c->algo.bit; 911 911 i2c->algo.bit.pre_xfer = pre_xfer; 912 912 i2c->algo.bit.post_xfer = post_xfer; ··· 948 946 i2c->rec = *rec; 949 947 i2c->adapter.owner = THIS_MODULE; 950 948 i2c->dev = dev; 949 + snprintf(i2c->adapter.name, sizeof(i2c->adapter.name), 950 + "Radeon aux bus %s", name); 951 951 i2c_set_adapdata(&i2c->adapter, i2c); 952 952 i2c->adapter.algo_data = &i2c->algo.dp; 953 953 i2c->algo.dp.aux_ch = radeon_dp_i2c_aux_ch;
+2 -2
drivers/gpu/drm/radeon/radeon_irq.c
··· 76 76 default: 77 77 DRM_ERROR("tried to enable vblank on non-existent crtc %d\n", 78 78 crtc); 79 - return EINVAL; 79 + return -EINVAL; 80 80 } 81 81 } else { 82 82 switch (crtc) { ··· 89 89 default: 90 90 DRM_ERROR("tried to enable vblank on non-existent crtc %d\n", 91 91 crtc); 92 - return EINVAL; 92 + return -EINVAL; 93 93 } 94 94 } 95 95
+1 -1
drivers/gpu/drm/radeon/radeon_legacy_encoders.c
··· 670 670 671 671 if (rdev->is_atom_bios) { 672 672 radeon_encoder->pixel_clock = adjusted_mode->clock; 673 - atombios_external_tmds_setup(encoder, ATOM_ENABLE); 673 + atombios_dvo_setup(encoder, ATOM_ENABLE); 674 674 fp2_gen_cntl = RREG32(RADEON_FP2_GEN_CNTL); 675 675 } else { 676 676 fp2_gen_cntl = RREG32(RADEON_FP2_GEN_CNTL);
+4 -1
drivers/gpu/drm/radeon/radeon_mode.h
··· 375 375 int hdmi_config_offset; 376 376 int hdmi_audio_workaround; 377 377 int hdmi_buffer_status; 378 + bool is_ext_encoder; 378 379 }; 379 380 380 381 struct radeon_connector_atom_dig { ··· 386 385 u8 dp_sink_type; 387 386 int dp_clock; 388 387 int dp_lane_count; 388 + bool edp_on; 389 389 }; 390 390 391 391 struct radeon_gpio_rec { ··· 525 523 struct drm_encoder *radeon_encoder_legacy_tv_dac_add(struct drm_device *dev, int bios_index, int with_tv); 526 524 struct drm_encoder *radeon_encoder_legacy_tmds_int_add(struct drm_device *dev, int bios_index); 527 525 struct drm_encoder *radeon_encoder_legacy_tmds_ext_add(struct drm_device *dev, int bios_index); 528 - extern void atombios_external_tmds_setup(struct drm_encoder *encoder, int action); 526 + extern void atombios_dvo_setup(struct drm_encoder *encoder, int action); 529 527 extern void atombios_digital_setup(struct drm_encoder *encoder, int action); 530 528 extern int atombios_get_encoder_mode(struct drm_encoder *encoder); 529 + extern void atombios_set_edp_panel_power(struct drm_connector *connector, int action); 531 530 extern void radeon_encoder_set_active_device(struct drm_encoder *encoder); 532 531 533 532 extern void radeon_crtc_load_lut(struct drm_crtc *crtc);
+4 -3
drivers/gpu/drm/radeon/radeon_object.c
··· 86 86 } 87 87 88 88 int radeon_bo_create(struct radeon_device *rdev, struct drm_gem_object *gobj, 89 - unsigned long size, bool kernel, u32 domain, 90 - struct radeon_bo **bo_ptr) 89 + unsigned long size, int byte_align, bool kernel, u32 domain, 90 + struct radeon_bo **bo_ptr) 91 91 { 92 92 struct radeon_bo *bo; 93 93 enum ttm_bo_type type; 94 + int page_align = roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT; 94 95 int r; 95 96 96 97 if (unlikely(rdev->mman.bdev.dev_mapping == NULL)) { ··· 116 115 /* Kernel allocation are uninterruptible */ 117 116 mutex_lock(&rdev->vram_mutex); 118 117 r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type, 119 - &bo->placement, 0, 0, !kernel, NULL, size, 118 + &bo->placement, page_align, 0, !kernel, NULL, size, 120 119 &radeon_ttm_bo_destroy); 121 120 mutex_unlock(&rdev->vram_mutex); 122 121 if (unlikely(r != 0)) {
+4 -3
drivers/gpu/drm/radeon/radeon_object.h
··· 137 137 } 138 138 139 139 extern int radeon_bo_create(struct radeon_device *rdev, 140 - struct drm_gem_object *gobj, unsigned long size, 141 - bool kernel, u32 domain, 142 - struct radeon_bo **bo_ptr); 140 + struct drm_gem_object *gobj, unsigned long size, 141 + int byte_align, 142 + bool kernel, u32 domain, 143 + struct radeon_bo **bo_ptr); 143 144 extern int radeon_bo_kmap(struct radeon_bo *bo, void **ptr); 144 145 extern void radeon_bo_kunmap(struct radeon_bo *bo); 145 146 extern void radeon_bo_unref(struct radeon_bo **bo);
+3 -3
drivers/gpu/drm/radeon/radeon_ring.c
··· 176 176 INIT_LIST_HEAD(&rdev->ib_pool.bogus_ib); 177 177 /* Allocate 1M object buffer */ 178 178 r = radeon_bo_create(rdev, NULL, RADEON_IB_POOL_SIZE*64*1024, 179 - true, RADEON_GEM_DOMAIN_GTT, 180 - &rdev->ib_pool.robj); 179 + PAGE_SIZE, true, RADEON_GEM_DOMAIN_GTT, 180 + &rdev->ib_pool.robj); 181 181 if (r) { 182 182 DRM_ERROR("radeon: failed to ib pool (%d).\n", r); 183 183 return r; ··· 332 332 rdev->cp.ring_size = ring_size; 333 333 /* Allocate ring buffer */ 334 334 if (rdev->cp.ring_obj == NULL) { 335 - r = radeon_bo_create(rdev, NULL, rdev->cp.ring_size, true, 335 + r = radeon_bo_create(rdev, NULL, rdev->cp.ring_size, PAGE_SIZE, true, 336 336 RADEON_GEM_DOMAIN_GTT, 337 337 &rdev->cp.ring_obj); 338 338 if (r) {
+2 -2
drivers/gpu/drm/radeon/radeon_test.c
··· 52 52 goto out_cleanup; 53 53 } 54 54 55 - r = radeon_bo_create(rdev, NULL, size, true, RADEON_GEM_DOMAIN_VRAM, 55 + r = radeon_bo_create(rdev, NULL, size, PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM, 56 56 &vram_obj); 57 57 if (r) { 58 58 DRM_ERROR("Failed to create VRAM object\n"); ··· 71 71 void **gtt_start, **gtt_end; 72 72 void **vram_start, **vram_end; 73 73 74 - r = radeon_bo_create(rdev, NULL, size, true, 74 + r = radeon_bo_create(rdev, NULL, size, PAGE_SIZE, true, 75 75 RADEON_GEM_DOMAIN_GTT, gtt_obj + i); 76 76 if (r) { 77 77 DRM_ERROR("Failed to create GTT object %d\n", i);
+1 -1
drivers/gpu/drm/radeon/radeon_ttm.c
··· 529 529 DRM_ERROR("Failed initializing VRAM heap.\n"); 530 530 return r; 531 531 } 532 - r = radeon_bo_create(rdev, NULL, 256 * 1024, true, 532 + r = radeon_bo_create(rdev, NULL, 256 * 1024, PAGE_SIZE, true, 533 533 RADEON_GEM_DOMAIN_VRAM, 534 534 &rdev->stollen_vga_memory); 535 535 if (r) {
+2 -2
drivers/gpu/drm/radeon/rv770.c
··· 915 915 916 916 if (rdev->vram_scratch.robj == NULL) { 917 917 r = radeon_bo_create(rdev, NULL, RADEON_GPU_PAGE_SIZE, 918 - true, RADEON_GEM_DOMAIN_VRAM, 919 - &rdev->vram_scratch.robj); 918 + PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM, 919 + &rdev->vram_scratch.robj); 920 920 if (r) { 921 921 return r; 922 922 }
+11
drivers/gpu/drm/ttm/ttm_bo.c
··· 224 224 int ret; 225 225 226 226 while (unlikely(atomic_cmpxchg(&bo->reserved, 0, 1) != 0)) { 227 + /** 228 + * Deadlock avoidance for multi-bo reserving. 229 + */ 227 230 if (use_sequence && bo->seq_valid && 228 231 (sequence - bo->val_seq < (1 << 31))) { 229 232 return -EAGAIN; ··· 244 241 } 245 242 246 243 if (use_sequence) { 244 + /** 245 + * Wake up waiters that may need to recheck for deadlock, 246 + * if we decreased the sequence number. 247 + */ 248 + if (unlikely((bo->val_seq - sequence < (1 << 31)) 249 + || !bo->seq_valid)) 250 + wake_up_all(&bo->event_queue); 251 + 247 252 bo->val_seq = sequence; 248 253 bo->seq_valid = true; 249 254 } else {
+8 -6
drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
··· 862 862 &vmw_vram_sys_placement, true, 863 863 &vmw_user_dmabuf_destroy); 864 864 if (unlikely(ret != 0)) 865 - return ret; 865 + goto out_no_dmabuf; 866 866 867 867 tmp = ttm_bo_reference(&vmw_user_bo->dma.base); 868 868 ret = ttm_base_object_init(vmw_fpriv(file_priv)->tfile, ··· 870 870 false, 871 871 ttm_buffer_type, 872 872 &vmw_user_dmabuf_release, NULL); 873 - if (unlikely(ret != 0)) { 874 - ttm_bo_unref(&tmp); 875 - } else { 873 + if (unlikely(ret != 0)) 874 + goto out_no_base_object; 875 + else { 876 876 rep->handle = vmw_user_bo->base.hash.key; 877 877 rep->map_handle = vmw_user_bo->dma.base.addr_space_offset; 878 878 rep->cur_gmr_id = vmw_user_bo->base.hash.key; 879 879 rep->cur_gmr_offset = 0; 880 880 } 881 - ttm_bo_unref(&tmp); 882 881 882 + out_no_base_object: 883 + ttm_bo_unref(&tmp); 884 + out_no_dmabuf: 883 885 ttm_read_unlock(&vmaster->lock); 884 886 885 - return 0; 887 + return ret; 886 888 } 887 889 888 890 int vmw_dmabuf_unref_ioctl(struct drm_device *dev, void *data,
-1
drivers/hid/hidraw.c
··· 32 32 #include <linux/hid.h> 33 33 #include <linux/mutex.h> 34 34 #include <linux/sched.h> 35 - #include <linux/smp_lock.h> 36 35 37 36 #include <linux/hidraw.h> 38 37
-1
drivers/hid/usbhid/hiddev.c
··· 29 29 #include <linux/slab.h> 30 30 #include <linux/module.h> 31 31 #include <linux/init.h> 32 - #include <linux/smp_lock.h> 33 32 #include <linux/input.h> 34 33 #include <linux/usb.h> 35 34 #include <linux/hid.h>
-1
drivers/infiniband/hw/ipath/ipath_file_ops.c
··· 40 40 #include <linux/highmem.h> 41 41 #include <linux/io.h> 42 42 #include <linux/jiffies.h> 43 - #include <linux/smp_lock.h> 44 43 #include <asm/pgtable.h> 45 44 46 45 #include "ipath_kernel.h"
+3 -1
drivers/infiniband/ulp/srp/ib_srp.c
··· 1123 1123 } 1124 1124 } 1125 1125 1126 - static int srp_queuecommand(struct scsi_cmnd *scmnd, 1126 + static int srp_queuecommand_lck(struct scsi_cmnd *scmnd, 1127 1127 void (*done)(struct scsi_cmnd *)) 1128 1128 { 1129 1129 struct srp_target_port *target = host_to_target(scmnd->device->host); ··· 1195 1195 err: 1196 1196 return SCSI_MLQUEUE_HOST_BUSY; 1197 1197 } 1198 + 1199 + static DEF_SCSI_QCMD(srp_queuecommand) 1198 1200 1199 1201 static int srp_alloc_iu_bufs(struct srp_target_port *target) 1200 1202 {
+1 -2
drivers/input/input.c
··· 24 24 #include <linux/device.h> 25 25 #include <linux/mutex.h> 26 26 #include <linux/rcupdate.h> 27 - #include <linux/smp_lock.h> 28 27 #include "input-compat.h" 29 28 30 29 MODULE_AUTHOR("Vojtech Pavlik <vojtech@suse.cz>"); ··· 752 753 if (index >= dev->keycodemax) 753 754 return -EINVAL; 754 755 755 - if (dev->keycodesize < sizeof(dev->keycode) && 756 + if (dev->keycodesize < sizeof(ke->keycode) && 756 757 (ke->keycode >> (dev->keycodesize * 8))) 757 758 return -EINVAL; 758 759
-1
drivers/input/serio/serio_raw.c
··· 11 11 12 12 #include <linux/sched.h> 13 13 #include <linux/slab.h> 14 - #include <linux/smp_lock.h> 15 14 #include <linux/poll.h> 16 15 #include <linux/module.h> 17 16 #include <linux/serio.h>
+14 -14
drivers/input/tablet/aiptek.c
··· 1097 1097 } 1098 1098 1099 1099 static DEVICE_ATTR(pointer_mode, 1100 - S_IRUGO | S_IWUGO, 1100 + S_IRUGO | S_IWUSR, 1101 1101 show_tabletPointerMode, store_tabletPointerMode); 1102 1102 1103 1103 /*********************************************************************** ··· 1134 1134 } 1135 1135 1136 1136 static DEVICE_ATTR(coordinate_mode, 1137 - S_IRUGO | S_IWUGO, 1137 + S_IRUGO | S_IWUSR, 1138 1138 show_tabletCoordinateMode, store_tabletCoordinateMode); 1139 1139 1140 1140 /*********************************************************************** ··· 1176 1176 } 1177 1177 1178 1178 static DEVICE_ATTR(tool_mode, 1179 - S_IRUGO | S_IWUGO, 1179 + S_IRUGO | S_IWUSR, 1180 1180 show_tabletToolMode, store_tabletToolMode); 1181 1181 1182 1182 /*********************************************************************** ··· 1219 1219 } 1220 1220 1221 1221 static DEVICE_ATTR(xtilt, 1222 - S_IRUGO | S_IWUGO, show_tabletXtilt, store_tabletXtilt); 1222 + S_IRUGO | S_IWUSR, show_tabletXtilt, store_tabletXtilt); 1223 1223 1224 1224 /*********************************************************************** 1225 1225 * support routines for the 'ytilt' file. Note that this file ··· 1261 1261 } 1262 1262 1263 1263 static DEVICE_ATTR(ytilt, 1264 - S_IRUGO | S_IWUGO, show_tabletYtilt, store_tabletYtilt); 1264 + S_IRUGO | S_IWUSR, show_tabletYtilt, store_tabletYtilt); 1265 1265 1266 1266 /*********************************************************************** 1267 1267 * support routines for the 'jitter' file. 
Note that this file ··· 1288 1288 } 1289 1289 1290 1290 static DEVICE_ATTR(jitter, 1291 - S_IRUGO | S_IWUGO, 1291 + S_IRUGO | S_IWUSR, 1292 1292 show_tabletJitterDelay, store_tabletJitterDelay); 1293 1293 1294 1294 /*********************************************************************** ··· 1317 1317 } 1318 1318 1319 1319 static DEVICE_ATTR(delay, 1320 - S_IRUGO | S_IWUGO, 1320 + S_IRUGO | S_IWUSR, 1321 1321 show_tabletProgrammableDelay, store_tabletProgrammableDelay); 1322 1322 1323 1323 /*********************************************************************** ··· 1406 1406 } 1407 1407 1408 1408 static DEVICE_ATTR(stylus_upper, 1409 - S_IRUGO | S_IWUGO, 1409 + S_IRUGO | S_IWUSR, 1410 1410 show_tabletStylusUpper, store_tabletStylusUpper); 1411 1411 1412 1412 /*********************************************************************** ··· 1437 1437 } 1438 1438 1439 1439 static DEVICE_ATTR(stylus_lower, 1440 - S_IRUGO | S_IWUGO, 1440 + S_IRUGO | S_IWUSR, 1441 1441 show_tabletStylusLower, store_tabletStylusLower); 1442 1442 1443 1443 /*********************************************************************** ··· 1475 1475 } 1476 1476 1477 1477 static DEVICE_ATTR(mouse_left, 1478 - S_IRUGO | S_IWUGO, 1478 + S_IRUGO | S_IWUSR, 1479 1479 show_tabletMouseLeft, store_tabletMouseLeft); 1480 1480 1481 1481 /*********************************************************************** ··· 1505 1505 } 1506 1506 1507 1507 static DEVICE_ATTR(mouse_middle, 1508 - S_IRUGO | S_IWUGO, 1508 + S_IRUGO | S_IWUSR, 1509 1509 show_tabletMouseMiddle, store_tabletMouseMiddle); 1510 1510 1511 1511 /*********************************************************************** ··· 1535 1535 } 1536 1536 1537 1537 static DEVICE_ATTR(mouse_right, 1538 - S_IRUGO | S_IWUGO, 1538 + S_IRUGO | S_IWUSR, 1539 1539 show_tabletMouseRight, store_tabletMouseRight); 1540 1540 1541 1541 /*********************************************************************** ··· 1567 1567 } 1568 1568 1569 1569 static DEVICE_ATTR(wheel, 1570 - 
S_IRUGO | S_IWUGO, show_tabletWheel, store_tabletWheel); 1570 + S_IRUGO | S_IWUSR, show_tabletWheel, store_tabletWheel); 1571 1571 1572 1572 /*********************************************************************** 1573 1573 * support routines for the 'execute' file. Note that this file ··· 1600 1600 } 1601 1601 1602 1602 static DEVICE_ATTR(execute, 1603 - S_IRUGO | S_IWUGO, show_tabletExecute, store_tabletExecute); 1603 + S_IRUGO | S_IWUSR, show_tabletExecute, store_tabletExecute); 1604 1604 1605 1605 /*********************************************************************** 1606 1606 * support routines for the 'odm_code' file. Note that this file
-1
drivers/media/dvb/dvb-core/dvb_ca_en50221.c
··· 36 36 #include <linux/delay.h> 37 37 #include <linux/spinlock.h> 38 38 #include <linux/sched.h> 39 - #include <linux/smp_lock.h> 40 39 #include <linux/kthread.h> 41 40 42 41 #include "dvb_ca_en50221.h"
-1
drivers/media/dvb/dvb-core/dvb_frontend.c
··· 36 36 #include <linux/list.h> 37 37 #include <linux/freezer.h> 38 38 #include <linux/jiffies.h> 39 - #include <linux/smp_lock.h> 40 39 #include <linux/kthread.h> 41 40 #include <asm/processor.h> 42 41
-1
drivers/media/dvb/ngene/ngene-core.c
··· 34 34 #include <linux/io.h> 35 35 #include <asm/div64.h> 36 36 #include <linux/pci.h> 37 - #include <linux/smp_lock.h> 38 37 #include <linux/timer.h> 39 38 #include <linux/byteorder/generic.h> 40 39 #include <linux/firmware.h>
-1
drivers/media/dvb/ngene/ngene-dvb.c
··· 35 35 #include <linux/io.h> 36 36 #include <asm/div64.h> 37 37 #include <linux/pci.h> 38 - #include <linux/smp_lock.h> 39 38 #include <linux/timer.h> 40 39 #include <linux/byteorder/generic.h> 41 40 #include <linux/firmware.h>
-1
drivers/media/dvb/ngene/ngene-i2c.c
··· 37 37 #include <asm/div64.h> 38 38 #include <linux/pci.h> 39 39 #include <linux/pci_ids.h> 40 - #include <linux/smp_lock.h> 41 40 #include <linux/timer.h> 42 41 #include <linux/byteorder/generic.h> 43 42 #include <linux/firmware.h>
-1
drivers/media/radio/radio-mr800.c
··· 58 58 #include <linux/module.h> 59 59 #include <linux/init.h> 60 60 #include <linux/slab.h> 61 - #include <linux/smp_lock.h> 62 61 #include <linux/input.h> 63 62 #include <linux/videodev2.h> 64 63 #include <media/v4l2-device.h>
-1
drivers/media/radio/si470x/radio-si470x.h
··· 31 31 #include <linux/init.h> 32 32 #include <linux/sched.h> 33 33 #include <linux/slab.h> 34 - #include <linux/smp_lock.h> 35 34 #include <linux/input.h> 36 35 #include <linux/version.h> 37 36 #include <linux/videodev2.h>
-1
drivers/media/video/bt8xx/bttv-driver.c
··· 42 42 #include <linux/fs.h> 43 43 #include <linux/kernel.h> 44 44 #include <linux/sched.h> 45 - #include <linux/smp_lock.h> 46 45 #include <linux/interrupt.h> 47 46 #include <linux/kdev_t.h> 48 47 #include "bttvp.h"
-1
drivers/media/video/cx88/cx88-blackbird.c
··· 33 33 #include <linux/delay.h> 34 34 #include <linux/device.h> 35 35 #include <linux/firmware.h> 36 - #include <linux/smp_lock.h> 37 36 #include <media/v4l2-common.h> 38 37 #include <media/v4l2-ioctl.h> 39 38 #include <media/cx2341x.h>
-1
drivers/media/video/cx88/cx88-video.c
··· 31 31 #include <linux/kmod.h> 32 32 #include <linux/kernel.h> 33 33 #include <linux/slab.h> 34 - #include <linux/smp_lock.h> 35 34 #include <linux/interrupt.h> 36 35 #include <linux/dma-mapping.h> 37 36 #include <linux/delay.h>
-1
drivers/media/video/pwc/pwc-if.c
··· 62 62 #include <linux/module.h> 63 63 #include <linux/poll.h> 64 64 #include <linux/slab.h> 65 - #include <linux/smp_lock.h> 66 65 #ifdef CONFIG_USB_PWC_INPUT_EVDEV 67 66 #include <linux/usb/input.h> 68 67 #endif
-1
drivers/media/video/s2255drv.c
··· 49 49 #include <linux/videodev2.h> 50 50 #include <linux/version.h> 51 51 #include <linux/mm.h> 52 - #include <linux/smp_lock.h> 53 52 #include <media/videobuf-vmalloc.h> 54 53 #include <media/v4l2-common.h> 55 54 #include <media/v4l2-device.h>
-1
drivers/media/video/saa7134/saa7134-empress.c
··· 21 21 #include <linux/list.h> 22 22 #include <linux/module.h> 23 23 #include <linux/kernel.h> 24 - #include <linux/smp_lock.h> 25 24 #include <linux/delay.h> 26 25 27 26 #include "saa7134-reg.h"
-1
drivers/media/video/saa7164/saa7164.h
··· 58 58 #include <media/tveeprom.h> 59 59 #include <media/videobuf-dma-sg.h> 60 60 #include <media/videobuf-dvb.h> 61 - #include <linux/smp_lock.h> 62 61 #include <dvb_demux.h> 63 62 #include <dvb_frontend.h> 64 63 #include <dvb_net.h>
-1
drivers/media/video/usbvision/usbvision-video.c
··· 50 50 #include <linux/list.h> 51 51 #include <linux/timer.h> 52 52 #include <linux/slab.h> 53 - #include <linux/smp_lock.h> 54 53 #include <linux/mm.h> 55 54 #include <linux/highmem.h> 56 55 #include <linux/vmalloc.h>
-1
drivers/media/video/v4l2-compat-ioctl32.c
··· 18 18 #include <linux/videodev.h> 19 19 #include <linux/videodev2.h> 20 20 #include <linux/module.h> 21 - #include <linux/smp_lock.h> 22 21 #include <media/v4l2-ioctl.h> 23 22 24 23 #ifdef CONFIG_COMPAT
+4 -3
drivers/message/fusion/mptfc.c
··· 97 97 98 98 static int mptfc_target_alloc(struct scsi_target *starget); 99 99 static int mptfc_slave_alloc(struct scsi_device *sdev); 100 - static int mptfc_qcmd(struct scsi_cmnd *SCpnt, 101 - void (*done)(struct scsi_cmnd *)); 100 + static int mptfc_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *SCpnt); 102 101 static void mptfc_target_destroy(struct scsi_target *starget); 103 102 static void mptfc_set_rport_loss_tmo(struct fc_rport *rport, uint32_t timeout); 104 103 static void __devexit mptfc_remove(struct pci_dev *pdev); ··· 649 650 } 650 651 651 652 static int 652 - mptfc_qcmd(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 653 + mptfc_qcmd_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 653 654 { 654 655 struct mptfc_rport_info *ri; 655 656 struct fc_rport *rport = starget_to_rport(scsi_target(SCpnt->device)); ··· 679 680 680 681 return mptscsih_qcmd(SCpnt,done); 681 682 } 683 + 684 + static DEF_SCSI_QCMD(mptfc_qcmd) 682 685 683 686 /* 684 687 * mptfc_display_port_link_speed - displaying link speed
+3 -1
drivers/message/fusion/mptsas.c
··· 1889 1889 } 1890 1890 1891 1891 static int 1892 - mptsas_qcmd(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 1892 + mptsas_qcmd_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 1893 1893 { 1894 1894 MPT_SCSI_HOST *hd; 1895 1895 MPT_ADAPTER *ioc; ··· 1912 1912 1913 1913 return mptscsih_qcmd(SCpnt,done); 1914 1914 } 1915 + 1916 + static DEF_SCSI_QCMD(mptsas_qcmd) 1915 1917 1916 1918 /** 1917 1919 * mptsas_mptsas_eh_timed_out - resets the scsi_cmnd timeout
+3 -1
drivers/message/fusion/mptspi.c
··· 780 780 } 781 781 782 782 static int 783 - mptspi_qcmd(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 783 + mptspi_qcmd_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 784 784 { 785 785 struct _MPT_SCSI_HOST *hd = shost_priv(SCpnt->device->host); 786 786 VirtDevice *vdevice = SCpnt->device->hostdata; ··· 804 804 805 805 return mptscsih_qcmd(SCpnt,done); 806 806 } 807 + 808 + static DEF_SCSI_QCMD(mptspi_qcmd) 807 809 808 810 static void mptspi_slave_destroy(struct scsi_device *sdev) 809 811 {
+4 -2
drivers/message/i2o/i2o_scsi.c
··· 506 506 * Locks: takes the controller lock on error path only 507 507 */ 508 508 509 - static int i2o_scsi_queuecommand(struct scsi_cmnd *SCpnt, 509 + static int i2o_scsi_queuecommand_lck(struct scsi_cmnd *SCpnt, 510 510 void (*done) (struct scsi_cmnd *)) 511 511 { 512 512 struct i2o_controller *c; ··· 688 688 689 689 exit: 690 690 return rc; 691 - }; 691 + } 692 + 693 + static DEF_SCSI_QCMD(i2o_scsi_queuecommand) 692 694 693 695 /** 694 696 * i2o_scsi_abort - abort a running command
+4 -2
drivers/net/3c59x.c
··· 699 699 #define DEVICE_PCI(dev) NULL 700 700 #endif 701 701 702 - #define VORTEX_PCI(vp) (((vp)->gendev) ? DEVICE_PCI((vp)->gendev) : NULL) 702 + #define VORTEX_PCI(vp) \ 703 + ((struct pci_dev *) (((vp)->gendev) ? DEVICE_PCI((vp)->gendev) : NULL)) 703 704 704 705 #ifdef CONFIG_EISA 705 706 #define DEVICE_EISA(dev) (((dev)->bus == &eisa_bus_type) ? to_eisa_device((dev)) : NULL) ··· 708 707 #define DEVICE_EISA(dev) NULL 709 708 #endif 710 709 711 - #define VORTEX_EISA(vp) (((vp)->gendev) ? DEVICE_EISA((vp)->gendev) : NULL) 710 + #define VORTEX_EISA(vp) \ 711 + ((struct eisa_device *) (((vp)->gendev) ? DEVICE_EISA((vp)->gendev) : NULL)) 712 712 713 713 /* The action to take with a media selection timer tick. 714 714 Note that we deviate from the 3Com order by checking 10base2 before AUI.
+4 -6
drivers/net/8139cp.c
··· 490 490 { 491 491 unsigned int protocol = (status >> 16) & 0x3; 492 492 493 - if (likely((protocol == RxProtoTCP) && (!(status & TCPFail)))) 493 + if (((protocol == RxProtoTCP) && !(status & TCPFail)) || 494 + ((protocol == RxProtoUDP) && !(status & UDPFail))) 494 495 return 1; 495 - else if ((protocol == RxProtoUDP) && (!(status & UDPFail))) 496 - return 1; 497 - else if ((protocol == RxProtoIP) && (!(status & IPFail))) 498 - return 1; 499 - return 0; 496 + else 497 + return 0; 500 498 } 501 499 502 500 static int cp_rx_poll(struct napi_struct *napi, int budget)
+6
drivers/net/benet/be_main.c
··· 2458 2458 int status, i = 0, num_imgs = 0; 2459 2459 const u8 *p; 2460 2460 2461 + if (!netif_running(adapter->netdev)) { 2462 + dev_err(&adapter->pdev->dev, 2463 + "Firmware load not allowed (interface is down)\n"); 2464 + return -EPERM; 2465 + } 2466 + 2461 2467 strcpy(fw_file, func); 2462 2468 2463 2469 status = request_firmware(&fw, fw_file, &adapter->pdev->dev);
+1 -1
drivers/net/bnx2x/bnx2x_main.c
··· 9064 9064 default: 9065 9065 pr_err("Unknown board_type (%ld), aborting\n", 9066 9066 ent->driver_data); 9067 - return ENODEV; 9067 + return -ENODEV; 9068 9068 } 9069 9069 9070 9070 cid_count += CNIC_CONTEXT_USE;
+2
drivers/net/bonding/bond_main.c
··· 878 878 rcu_read_lock(); 879 879 in_dev = __in_dev_get_rcu(dev); 880 880 if (in_dev) { 881 + read_lock(&in_dev->mc_list_lock); 881 882 for (im = in_dev->mc_list; im; im = im->next) 882 883 ip_mc_rejoin_group(im); 884 + read_unlock(&in_dev->mc_list_lock); 883 885 } 884 886 885 887 rcu_read_unlock();
+2 -2
drivers/net/caif/caif_spi.c
··· 635 635 636 636 ndev = alloc_netdev(sizeof(struct cfspi), 637 637 "cfspi%d", cfspi_setup); 638 - if (!dev) 639 - return -ENODEV; 638 + if (!ndev) 639 + return -ENOMEM; 640 640 641 641 cfspi = netdev_priv(ndev); 642 642 netif_stop_queue(ndev);
+3 -4
drivers/net/gianfar.c
··· 577 577 irq_of_parse_and_map(np, 1); 578 578 priv->gfargrp[priv->num_grps].interruptError = 579 579 irq_of_parse_and_map(np,2); 580 - if (priv->gfargrp[priv->num_grps].interruptTransmit < 0 || 581 - priv->gfargrp[priv->num_grps].interruptReceive < 0 || 582 - priv->gfargrp[priv->num_grps].interruptError < 0) { 580 + if (priv->gfargrp[priv->num_grps].interruptTransmit == NO_IRQ || 581 + priv->gfargrp[priv->num_grps].interruptReceive == NO_IRQ || 582 + priv->gfargrp[priv->num_grps].interruptError == NO_IRQ) 583 583 return -EINVAL; 584 - } 585 584 } 586 585 587 586 priv->gfargrp[priv->num_grps].grp_id = priv->num_grps;
+2 -4
drivers/net/ipg.c
··· 88 88 "IC PLUS IP1000 1000/100/10 based NIC", 89 89 "Sundance Technology ST2021 based NIC", 90 90 "Tamarack Microelectronics TC9020/9021 based NIC", 91 - "Tamarack Microelectronics TC9020/9021 based NIC", 92 91 "D-Link NIC IP1000A" 93 92 }; 94 93 95 94 static DEFINE_PCI_DEVICE_TABLE(ipg_pci_tbl) = { 96 95 { PCI_VDEVICE(SUNDANCE, 0x1023), 0 }, 97 96 { PCI_VDEVICE(SUNDANCE, 0x2021), 1 }, 98 - { PCI_VDEVICE(SUNDANCE, 0x1021), 2 }, 99 - { PCI_VDEVICE(DLINK, 0x9021), 3 }, 100 - { PCI_VDEVICE(DLINK, 0x4020), 4 }, 97 + { PCI_VDEVICE(DLINK, 0x9021), 2 }, 98 + { PCI_VDEVICE(DLINK, 0x4020), 3 }, 101 99 { 0, } 102 100 }; 103 101
+1 -2
drivers/net/r8169.c
··· 4440 4440 u32 status = opts1 & RxProtoMask; 4441 4441 4442 4442 if (((status == RxProtoTCP) && !(opts1 & TCPFail)) || 4443 - ((status == RxProtoUDP) && !(opts1 & UDPFail)) || 4444 - ((status == RxProtoIP) && !(opts1 & IPFail))) 4443 + ((status == RxProtoUDP) && !(opts1 & UDPFail))) 4445 4444 skb->ip_summed = CHECKSUM_UNNECESSARY; 4446 4445 else 4447 4446 skb_checksum_none_assert(skb);
+1 -1
drivers/net/wireless/ath/ath9k/eeprom_9287.c
··· 37 37 int addr, eep_start_loc; 38 38 eep_data = (u16 *)eep; 39 39 40 - if (ah->hw_version.devid == 0x7015) 40 + if (AR9287_HTC_DEVID(ah)) 41 41 eep_start_loc = AR9287_HTC_EEP_START_LOC; 42 42 else 43 43 eep_start_loc = AR9287_EEP_START_LOC;
+9
drivers/net/wireless/ath/ath9k/hif_usb.c
··· 36 36 { USB_DEVICE(0x13D3, 0x3327) }, /* Azurewave */ 37 37 { USB_DEVICE(0x13D3, 0x3328) }, /* Azurewave */ 38 38 { USB_DEVICE(0x13D3, 0x3346) }, /* IMC Networks */ 39 + { USB_DEVICE(0x13D3, 0x3348) }, /* Azurewave */ 40 + { USB_DEVICE(0x13D3, 0x3349) }, /* Azurewave */ 41 + { USB_DEVICE(0x13D3, 0x3350) }, /* Azurewave */ 39 42 { USB_DEVICE(0x04CA, 0x4605) }, /* Liteon */ 40 43 { USB_DEVICE(0x083A, 0xA704) }, /* SMC Networks */ 44 + { USB_DEVICE(0x040D, 0x3801) }, /* VIA */ 45 + { USB_DEVICE(0x1668, 0x1200) }, /* Verizon */ 41 46 { }, 42 47 }; 43 48 ··· 811 806 case 0x7010: 812 807 case 0x7015: 813 808 case 0x9018: 809 + case 0xA704: 810 + case 0x1200: 814 811 firm_offset = AR7010_FIRMWARE_TEXT; 815 812 break; 816 813 default: ··· 935 928 case 0x7010: 936 929 case 0x7015: 937 930 case 0x9018: 931 + case 0xA704: 932 + case 0x1200: 938 933 if (le16_to_cpu(udev->descriptor.bcdDevice) == 0x0202) 939 934 hif_dev->fw_name = FIRMWARE_AR7010_1_1; 940 935 else
+2
drivers/net/wireless/ath/ath9k/htc_drv_init.c
··· 249 249 case 0x7010: 250 250 case 0x7015: 251 251 case 0x9018: 252 + case 0xA704: 253 + case 0x1200: 252 254 priv->htc->credits = 45; 253 255 break; 254 256 default:
+1 -1
drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
··· 121 121 tx_hdr.data_type = ATH9K_HTC_NORMAL; 122 122 } 123 123 124 - if (ieee80211_is_data(fc)) { 124 + if (ieee80211_is_data_qos(fc)) { 125 125 qc = ieee80211_get_qos_ctl(hdr); 126 126 tx_hdr.tidno = qc[0] & IEEE80211_QOS_CTL_TID_MASK; 127 127 }
+1 -2
drivers/net/wireless/ath/ath9k/init.c
··· 817 817 818 818 ath9k_ps_wakeup(sc); 819 819 820 - pm_qos_remove_request(&ath9k_pm_qos_req); 821 - 822 820 wiphy_rfkill_stop_polling(sc->hw->wiphy); 823 821 ath_deinit_leds(sc); 824 822 ··· 830 832 } 831 833 832 834 ieee80211_unregister_hw(hw); 835 + pm_qos_remove_request(&ath9k_pm_qos_req); 833 836 ath_rx_cleanup(sc); 834 837 ath_tx_cleanup(sc); 835 838 ath9k_deinit_softc(sc);
+7 -1
drivers/net/wireless/ath/ath9k/reg.h
··· 866 866 #define AR_DEVID_7010(_ah) \ 867 867 (((_ah)->hw_version.devid == 0x7010) || \ 868 868 ((_ah)->hw_version.devid == 0x7015) || \ 869 - ((_ah)->hw_version.devid == 0x9018)) 869 + ((_ah)->hw_version.devid == 0x9018) || \ 870 + ((_ah)->hw_version.devid == 0xA704) || \ 871 + ((_ah)->hw_version.devid == 0x1200)) 872 + 873 + #define AR9287_HTC_DEVID(_ah) \ 874 + (((_ah)->hw_version.devid == 0x7015) || \ 875 + ((_ah)->hw_version.devid == 0x1200)) 870 876 871 877 #define AR_RADIO_SREV_MAJOR 0xf0 872 878 #define AR_RAD5133_SREV_MAJOR 0xc0
+2 -2
drivers/net/wireless/ath/carl9170/usb.c
··· 553 553 usb_free_urb(urb); 554 554 } 555 555 556 - ret = usb_wait_anchor_empty_timeout(&ar->tx_cmd, HZ); 556 + ret = usb_wait_anchor_empty_timeout(&ar->tx_cmd, 1000); 557 557 if (ret == 0) 558 558 err = -ETIMEDOUT; 559 559 560 560 /* lets wait a while until the tx - queues are dried out */ 561 - ret = usb_wait_anchor_empty_timeout(&ar->tx_anch, HZ); 561 + ret = usb_wait_anchor_empty_timeout(&ar->tx_anch, 1000); 562 562 if (ret == 0) 563 563 err = -ETIMEDOUT; 564 564
-1
drivers/net/wireless/orinoco/orinoco_usb.c
··· 57 57 #include <linux/fcntl.h> 58 58 #include <linux/spinlock.h> 59 59 #include <linux/list.h> 60 - #include <linux/smp_lock.h> 61 60 #include <linux/usb.h> 62 61 #include <linux/timer.h> 63 62
-1
drivers/parisc/eisa_eeprom.c
··· 24 24 #include <linux/kernel.h> 25 25 #include <linux/miscdevice.h> 26 26 #include <linux/slab.h> 27 - #include <linux/smp_lock.h> 28 27 #include <linux/fs.h> 29 28 #include <asm/io.h> 30 29 #include <asm/uaccess.h>
+1 -1
drivers/pci/pci-sysfs.c
··· 715 715 nr = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; 716 716 start = vma->vm_pgoff; 717 717 size = ((pci_resource_len(pdev, resno) - 1) >> PAGE_SHIFT) + 1; 718 - pci_start = (mmap_api == PCI_MMAP_SYSFS) ? 718 + pci_start = (mmap_api == PCI_MMAP_PROCFS) ? 719 719 pci_resource_start(pdev, resno) >> PAGE_SHIFT : 0; 720 720 if (start >= pci_start && start < pci_start + size && 721 721 start + nr <= pci_start + size)
-1
drivers/pci/proc.c
··· 10 10 #include <linux/module.h> 11 11 #include <linux/proc_fs.h> 12 12 #include <linux/seq_file.h> 13 - #include <linux/smp_lock.h> 14 13 #include <linux/capability.h> 15 14 #include <asm/uaccess.h> 16 15 #include <asm/byteorder.h>
-1
drivers/pnp/isapnp/proc.c
··· 21 21 #include <linux/isapnp.h> 22 22 #include <linux/proc_fs.h> 23 23 #include <linux/init.h> 24 - #include <linux/smp_lock.h> 25 24 #include <asm/uaccess.h> 26 25 27 26 extern struct pnp_protocol isapnp_protocol;
-1
drivers/s390/block/dasd_eer.c
··· 17 17 #include <linux/device.h> 18 18 #include <linux/poll.h> 19 19 #include <linux/mutex.h> 20 - #include <linux/smp_lock.h> 21 20 #include <linux/err.h> 22 21 #include <linux/slab.h> 23 22
-1
drivers/s390/char/fs3270.c
··· 14 14 #include <linux/list.h> 15 15 #include <linux/slab.h> 16 16 #include <linux/types.h> 17 - #include <linux/smp_lock.h> 18 17 19 18 #include <asm/compat.h> 20 19 #include <asm/ccwdev.h>
-1
drivers/s390/char/tape_char.c
··· 17 17 #include <linux/types.h> 18 18 #include <linux/proc_fs.h> 19 19 #include <linux/mtio.h> 20 - #include <linux/smp_lock.h> 21 20 #include <linux/compat.h> 22 21 23 22 #include <asm/uaccess.h>
+59 -9
drivers/s390/char/tape_core.c
··· 209 209 wake_up(&device->state_change_wq); 210 210 } 211 211 212 + struct tape_med_state_work_data { 213 + struct tape_device *device; 214 + enum tape_medium_state state; 215 + struct work_struct work; 216 + }; 217 + 218 + static void 219 + tape_med_state_work_handler(struct work_struct *work) 220 + { 221 + static char env_state_loaded[] = "MEDIUM_STATE=LOADED"; 222 + static char env_state_unloaded[] = "MEDIUM_STATE=UNLOADED"; 223 + struct tape_med_state_work_data *p = 224 + container_of(work, struct tape_med_state_work_data, work); 225 + struct tape_device *device = p->device; 226 + char *envp[] = { NULL, NULL }; 227 + 228 + switch (p->state) { 229 + case MS_UNLOADED: 230 + pr_info("%s: The tape cartridge has been successfully " 231 + "unloaded\n", dev_name(&device->cdev->dev)); 232 + envp[0] = env_state_unloaded; 233 + kobject_uevent_env(&device->cdev->dev.kobj, KOBJ_CHANGE, envp); 234 + break; 235 + case MS_LOADED: 236 + pr_info("%s: A tape cartridge has been mounted\n", 237 + dev_name(&device->cdev->dev)); 238 + envp[0] = env_state_loaded; 239 + kobject_uevent_env(&device->cdev->dev.kobj, KOBJ_CHANGE, envp); 240 + break; 241 + default: 242 + break; 243 + } 244 + tape_put_device(device); 245 + kfree(p); 246 + } 247 + 248 + static void 249 + tape_med_state_work(struct tape_device *device, enum tape_medium_state state) 250 + { 251 + struct tape_med_state_work_data *p; 252 + 253 + p = kzalloc(sizeof(*p), GFP_ATOMIC); 254 + if (p) { 255 + INIT_WORK(&p->work, tape_med_state_work_handler); 256 + p->device = tape_get_device(device); 257 + p->state = state; 258 + schedule_work(&p->work); 259 + } 260 + } 261 + 212 262 void 213 263 tape_med_state_set(struct tape_device *device, enum tape_medium_state newstate) 214 264 { 215 - if (device->medium_state == newstate) 265 + enum tape_medium_state oldstate; 266 + 267 + oldstate = device->medium_state; 268 + if (oldstate == newstate) 216 269 return; 270 + device->medium_state = newstate; 217 271 switch(newstate){ 218 272 
case MS_UNLOADED: 219 273 device->tape_generic_status |= GMT_DR_OPEN(~0); 220 - if (device->medium_state == MS_LOADED) 221 - pr_info("%s: The tape cartridge has been successfully " 222 - "unloaded\n", dev_name(&device->cdev->dev)); 274 + if (oldstate == MS_LOADED) 275 + tape_med_state_work(device, MS_UNLOADED); 223 276 break; 224 277 case MS_LOADED: 225 278 device->tape_generic_status &= ~GMT_DR_OPEN(~0); 226 - if (device->medium_state == MS_UNLOADED) 227 - pr_info("%s: A tape cartridge has been mounted\n", 228 - dev_name(&device->cdev->dev)); 279 + if (oldstate == MS_UNLOADED) 280 + tape_med_state_work(device, MS_LOADED); 229 281 break; 230 282 default: 231 - // print nothing 232 283 break; 233 284 } 234 - device->medium_state = newstate; 235 285 wake_up(&device->state_change_wq); 236 286 } 237 287
+24 -13
drivers/s390/char/vmlogrdr.c
··· 30 30 #include <linux/kmod.h> 31 31 #include <linux/cdev.h> 32 32 #include <linux/device.h> 33 - #include <linux/smp_lock.h> 34 33 #include <linux/string.h> 35 34 36 35 MODULE_AUTHOR ··· 248 249 char cp_command[80]; 249 250 char cp_response[160]; 250 251 char *onoff, *qid_string; 252 + int rc; 251 253 252 - memset(cp_command, 0x00, sizeof(cp_command)); 253 - memset(cp_response, 0x00, sizeof(cp_response)); 254 - 255 - onoff = ((action == 1) ? "ON" : "OFF"); 254 + onoff = ((action == 1) ? "ON" : "OFF"); 256 255 qid_string = ((recording_class_AB == 1) ? " QID * " : ""); 257 256 258 - /* 257 + /* 259 258 * The recording commands needs to be called with option QID 260 259 * for guests that have previlege classes A or B. 261 260 * Purging has to be done as separate step, because recording 262 261 * can't be switched on as long as records are on the queue. 263 262 * Doing both at the same time doesn't work. 264 263 */ 265 - 266 - if (purge) { 264 + if (purge && (action == 1)) { 265 + memset(cp_command, 0x00, sizeof(cp_command)); 266 + memset(cp_response, 0x00, sizeof(cp_response)); 267 267 snprintf(cp_command, sizeof(cp_command), 268 268 "RECORDING %s PURGE %s", 269 269 logptr->recording_name, 270 270 qid_string); 271 - 272 271 cpcmd(cp_command, cp_response, sizeof(cp_response), NULL); 273 272 } 274 273 ··· 276 279 logptr->recording_name, 277 280 onoff, 278 281 qid_string); 279 - 280 282 cpcmd(cp_command, cp_response, sizeof(cp_response), NULL); 281 283 /* The recording command will usually answer with 'Command complete' 282 284 * on success, but when the specific service was never connected 283 285 * before then there might be an additional informational message 284 286 * 'HCPCRC8072I Recording entry not found' before the 285 - * 'Command complete'. So I use strstr rather then the strncmp. 287 + * 'Command complete'. So I use strstr rather then the strncmp. 
286 288 */ 287 289 if (strstr(cp_response,"Command complete")) 288 - return 0; 290 + rc = 0; 289 291 else 290 - return -EIO; 292 + rc = -EIO; 293 + /* 294 + * If we turn recording off, we have to purge any remaining records 295 + * afterwards, as a large number of queued records may impact z/VM 296 + * performance. 297 + */ 298 + if (purge && (action == 0)) { 299 + memset(cp_command, 0x00, sizeof(cp_command)); 300 + memset(cp_response, 0x00, sizeof(cp_response)); 301 + snprintf(cp_command, sizeof(cp_command), 302 + "RECORDING %s PURGE %s", 303 + logptr->recording_name, 304 + qid_string); 305 + cpcmd(cp_command, cp_response, sizeof(cp_response), NULL); 306 + } 291 307 308 + return rc; 292 309 } 293 310 294 311
-1
drivers/s390/char/vmur.c
··· 13 13 14 14 #include <linux/cdev.h> 15 15 #include <linux/slab.h> 16 - #include <linux/smp_lock.h> 17 16 18 17 #include <asm/uaccess.h> 19 18 #include <asm/cio.h>
+10 -1
drivers/s390/cio/device.c
··· 1455 1455 break; 1456 1456 case IO_SCH_UNREG_ATTACH: 1457 1457 case IO_SCH_UNREG: 1458 - if (cdev) 1458 + if (!cdev) 1459 + break; 1460 + if (cdev->private->state == DEV_STATE_SENSE_ID) { 1461 + /* 1462 + * Note: delayed work triggered by this event 1463 + * and repeated calls to sch_event are synchronized 1464 + * by the above check for work_pending(cdev). 1465 + */ 1466 + dev_fsm_event(cdev, DEV_EVENT_NOTOPER); 1467 + } else 1459 1468 ccw_device_set_notoper(cdev); 1460 1469 break; 1461 1470 case IO_SCH_NOP:
-1
drivers/s390/crypto/zcrypt_api.c
··· 35 35 #include <linux/proc_fs.h> 36 36 #include <linux/seq_file.h> 37 37 #include <linux/compat.h> 38 - #include <linux/smp_lock.h> 39 38 #include <linux/slab.h> 40 39 #include <asm/atomic.h> 41 40 #include <asm/uaccess.h>
+3 -1
drivers/s390/scsi/zfcp_scsi.c
··· 76 76 scpnt->scsi_done(scpnt); 77 77 } 78 78 79 - static int zfcp_scsi_queuecommand(struct scsi_cmnd *scpnt, 79 + static int zfcp_scsi_queuecommand_lck(struct scsi_cmnd *scpnt, 80 80 void (*done) (struct scsi_cmnd *)) 81 81 { 82 82 struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(scpnt->device); ··· 126 126 127 127 return ret; 128 128 } 129 + 130 + static DEF_SCSI_QCMD(zfcp_scsi_queuecommand) 129 131 130 132 static int zfcp_scsi_slave_alloc(struct scsi_device *sdev) 131 133 {
+3 -1
drivers/scsi/3w-9xxx.c
··· 1765 1765 } /* End twa_scsi_eh_reset() */ 1766 1766 1767 1767 /* This is the main scsi queue function to handle scsi opcodes */ 1768 - static int twa_scsi_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 1768 + static int twa_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 1769 1769 { 1770 1770 int request_id, retval; 1771 1771 TW_Device_Extension *tw_dev = (TW_Device_Extension *)SCpnt->device->host->hostdata; ··· 1811 1811 out: 1812 1812 return retval; 1813 1813 } /* End twa_scsi_queue() */ 1814 + 1815 + static DEF_SCSI_QCMD(twa_scsi_queue) 1814 1816 1815 1817 /* This function hands scsi cdb's to the firmware */ 1816 1818 static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Entry *sglistarg)
+3 -1
drivers/scsi/3w-sas.c
··· 1501 1501 } /* End twl_scsi_eh_reset() */ 1502 1502 1503 1503 /* This is the main scsi queue function to handle scsi opcodes */ 1504 - static int twl_scsi_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 1504 + static int twl_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 1505 1505 { 1506 1506 int request_id, retval; 1507 1507 TW_Device_Extension *tw_dev = (TW_Device_Extension *)SCpnt->device->host->hostdata; ··· 1535 1535 out: 1536 1536 return retval; 1537 1537 } /* End twl_scsi_queue() */ 1538 + 1539 + static DEF_SCSI_QCMD(twl_scsi_queue) 1538 1540 1539 1541 /* This function tells the controller to shut down */ 1540 1542 static void __twl_shutdown(TW_Device_Extension *tw_dev)
+3 -1
drivers/scsi/3w-xxxx.c
··· 1947 1947 } /* End tw_scsiop_test_unit_ready_complete() */ 1948 1948 1949 1949 /* This is the main scsi queue function to handle scsi opcodes */ 1950 - static int tw_scsi_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 1950 + static int tw_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 1951 1951 { 1952 1952 unsigned char *command = SCpnt->cmnd; 1953 1953 int request_id = 0; ··· 2022 2022 } 2023 2023 return retval; 2024 2024 } /* End tw_scsi_queue() */ 2025 + 2026 + static DEF_SCSI_QCMD(tw_scsi_queue) 2025 2027 2026 2028 /* This function is the interrupt service routine */ 2027 2029 static irqreturn_t tw_interrupt(int irq, void *dev_instance)
+5 -3
drivers/scsi/53c700.c
··· 167 167 #include "53c700_d.h" 168 168 169 169 170 - STATIC int NCR_700_queuecommand(struct scsi_cmnd *, void (*done)(struct scsi_cmnd *)); 170 + STATIC int NCR_700_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *); 171 171 STATIC int NCR_700_abort(struct scsi_cmnd * SCpnt); 172 172 STATIC int NCR_700_bus_reset(struct scsi_cmnd * SCpnt); 173 173 STATIC int NCR_700_host_reset(struct scsi_cmnd * SCpnt); ··· 1749 1749 return IRQ_RETVAL(handled); 1750 1750 } 1751 1751 1752 - STATIC int 1753 - NCR_700_queuecommand(struct scsi_cmnd *SCp, void (*done)(struct scsi_cmnd *)) 1752 + static int 1753 + NCR_700_queuecommand_lck(struct scsi_cmnd *SCp, void (*done)(struct scsi_cmnd *)) 1754 1754 { 1755 1755 struct NCR_700_Host_Parameters *hostdata = 1756 1756 (struct NCR_700_Host_Parameters *)SCp->device->host->hostdata[0]; ··· 1903 1903 NCR_700_start_command(SCp); 1904 1904 return 0; 1905 1905 } 1906 + 1907 + STATIC DEF_SCSI_QCMD(NCR_700_queuecommand) 1906 1908 1907 1909 STATIC int 1908 1910 NCR_700_abort(struct scsi_cmnd * SCp)
+2 -1
drivers/scsi/BusLogic.c
··· 2807 2807 Outgoing Mailbox for execution by the associated Host Adapter. 2808 2808 */ 2809 2809 2810 - static int BusLogic_QueueCommand(struct scsi_cmnd *Command, void (*CompletionRoutine) (struct scsi_cmnd *)) 2810 + static int BusLogic_QueueCommand_lck(struct scsi_cmnd *Command, void (*CompletionRoutine) (struct scsi_cmnd *)) 2811 2811 { 2812 2812 struct BusLogic_HostAdapter *HostAdapter = (struct BusLogic_HostAdapter *) Command->device->host->hostdata; 2813 2813 struct BusLogic_TargetFlags *TargetFlags = &HostAdapter->TargetFlags[Command->device->id]; ··· 2994 2994 return 0; 2995 2995 } 2996 2996 2997 + static DEF_SCSI_QCMD(BusLogic_QueueCommand) 2997 2998 2998 2999 #if 0 2999 3000 /*
+1 -1
drivers/scsi/BusLogic.h
··· 1319 1319 */ 1320 1320 1321 1321 static const char *BusLogic_DriverInfo(struct Scsi_Host *); 1322 - static int BusLogic_QueueCommand(struct scsi_cmnd *, void (*CompletionRoutine) (struct scsi_cmnd *)); 1322 + static int BusLogic_QueueCommand(struct Scsi_Host *h, struct scsi_cmnd *); 1323 1323 static int BusLogic_BIOSDiskParameters(struct scsi_device *, struct block_device *, sector_t, int *); 1324 1324 static int BusLogic_ProcDirectoryInfo(struct Scsi_Host *, char *, char **, off_t, int, int); 1325 1325 static int BusLogic_SlaveConfigure(struct scsi_device *);
+2 -1
drivers/scsi/NCR5380.c
··· 952 952 * Locks: host lock taken by caller 953 953 */ 954 954 955 - static int NCR5380_queue_command(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *)) 955 + static int NCR5380_queue_command_lck(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *)) 956 956 { 957 957 struct Scsi_Host *instance = cmd->device->host; 958 958 struct NCR5380_hostdata *hostdata = (struct NCR5380_hostdata *) instance->hostdata; ··· 1021 1021 return 0; 1022 1022 } 1023 1023 1024 + static DEF_SCSI_QCMD(NCR5380_queue_command) 1024 1025 1025 1026 /** 1026 1027 * NCR5380_main - NCR state machines
+1 -1
drivers/scsi/NCR5380.h
··· 313 313 #endif 314 314 static int NCR5380_abort(Scsi_Cmnd * cmd); 315 315 static int NCR5380_bus_reset(Scsi_Cmnd * cmd); 316 - static int NCR5380_queue_command(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *)); 316 + static int NCR5380_queue_command(struct Scsi_Host *, struct scsi_cmnd *); 317 317 static int __maybe_unused NCR5380_proc_info(struct Scsi_Host *instance, 318 318 char *buffer, char **start, off_t offset, int length, int inout); 319 319
+3 -1
drivers/scsi/NCR53c406a.c
··· 693 693 } 694 694 #endif 695 695 696 - static int NCR53c406a_queue(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *)) 696 + static int NCR53c406a_queue_lck(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *)) 697 697 { 698 698 int i; 699 699 ··· 725 725 rtrc(1); 726 726 return 0; 727 727 } 728 + 729 + static DEF_SCSI_QCMD(NCR53c406a_queue) 728 730 729 731 static int NCR53c406a_host_reset(Scsi_Cmnd * SCpnt) 730 732 {
+3 -1
drivers/scsi/a100u2w.c
··· 911 911 * queue the command down to the controller 912 912 */ 913 913 914 - static int inia100_queue(struct scsi_cmnd * cmd, void (*done) (struct scsi_cmnd *)) 914 + static int inia100_queue_lck(struct scsi_cmnd * cmd, void (*done) (struct scsi_cmnd *)) 915 915 { 916 916 struct orc_scb *scb; 917 917 struct orc_host *host; /* Point to Host adapter control block */ ··· 929 929 orc_exec_scb(host, scb); /* Start execute SCB */ 930 930 return 0; 931 931 } 932 + 933 + static DEF_SCSI_QCMD(inia100_queue) 932 934 933 935 /***************************************************************************** 934 936 Function name : inia100_abort
+3 -1
drivers/scsi/aacraid/linit.c
··· 248 248 * TODO: unify with aac_scsi_cmd(). 249 249 */ 250 250 251 - static int aac_queuecommand(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 251 + static int aac_queuecommand_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 252 252 { 253 253 struct Scsi_Host *host = cmd->device->host; 254 254 struct aac_dev *dev = (struct aac_dev *)host->hostdata; ··· 266 266 cmd->SCp.phase = AAC_OWNER_LOWLEVEL; 267 267 return (aac_scsi_cmd(cmd) ? FAILED : 0); 268 268 } 269 + 270 + static DEF_SCSI_QCMD(aac_queuecommand) 269 271 270 272 /** 271 273 * aac_info - Returns the host adapter name
+3 -1
drivers/scsi/advansys.c
··· 9500 9500 * in the 'scp' result field. 9501 9501 */ 9502 9502 static int 9503 - advansys_queuecommand(struct scsi_cmnd *scp, void (*done)(struct scsi_cmnd *)) 9503 + advansys_queuecommand_lck(struct scsi_cmnd *scp, void (*done)(struct scsi_cmnd *)) 9504 9504 { 9505 9505 struct Scsi_Host *shost = scp->device->host; 9506 9506 int asc_res, result = 0; ··· 9524 9524 9525 9525 return result; 9526 9526 } 9527 + 9528 + static DEF_SCSI_QCMD(advansys_queuecommand) 9527 9529 9528 9530 static ushort __devinit AscGetEisaChipCfg(PortAddr iop_base) 9529 9531 {
+3 -1
drivers/scsi/aha152x.c
··· 1056 1056 * queue a command 1057 1057 * 1058 1058 */ 1059 - static int aha152x_queue(Scsi_Cmnd *SCpnt, void (*done)(Scsi_Cmnd *)) 1059 + static int aha152x_queue_lck(Scsi_Cmnd *SCpnt, void (*done)(Scsi_Cmnd *)) 1060 1060 { 1061 1061 #if 0 1062 1062 if(*SCpnt->cmnd == REQUEST_SENSE) { ··· 1069 1069 1070 1070 return aha152x_internal_queue(SCpnt, NULL, 0, done); 1071 1071 } 1072 + 1073 + static DEF_SCSI_QCMD(aha152x_queue) 1072 1074 1073 1075 1074 1076 /*
+3 -1
drivers/scsi/aha1542.c
··· 558 558 }; 559 559 } 560 560 561 - static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *)) 561 + static int aha1542_queuecommand_lck(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *)) 562 562 { 563 563 unchar ahacmd = CMD_START_SCSI; 564 564 unchar direction; ··· 717 717 718 718 return 0; 719 719 } 720 + 721 + static DEF_SCSI_QCMD(aha1542_queuecommand) 720 722 721 723 /* Initialize mailboxes */ 722 724 static void setup_mailboxes(int bse, struct Scsi_Host *shpnt)
+1 -1
drivers/scsi/aha1542.h
··· 132 132 }; 133 133 134 134 static int aha1542_detect(struct scsi_host_template *); 135 - static int aha1542_queuecommand(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *)); 135 + static int aha1542_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); 136 136 static int aha1542_bus_reset(Scsi_Cmnd * SCpnt); 137 137 static int aha1542_dev_reset(Scsi_Cmnd * SCpnt); 138 138 static int aha1542_host_reset(Scsi_Cmnd * SCpnt);
+3 -1
drivers/scsi/aha1740.c
··· 331 331 return IRQ_RETVAL(handled); 332 332 } 333 333 334 - static int aha1740_queuecommand(Scsi_Cmnd * SCpnt, void (*done)(Scsi_Cmnd *)) 334 + static int aha1740_queuecommand_lck(Scsi_Cmnd * SCpnt, void (*done)(Scsi_Cmnd *)) 335 335 { 336 336 unchar direction; 337 337 unchar *cmd = (unchar *) SCpnt->cmnd; ··· 502 502 printk(KERN_ALERT "aha1740_queuecommand: done can't be NULL\n"); 503 503 return 0; 504 504 } 505 + 506 + static DEF_SCSI_QCMD(aha1740_queuecommand) 505 507 506 508 /* Query the board for its irq_level and irq_type. Nothing else matters 507 509 in enhanced mode on an EISA bus. */
+3 -1
drivers/scsi/aic7xxx/aic79xx_osm.c
··· 573 573 * Queue an SCB to the controller. 574 574 */ 575 575 static int 576 - ahd_linux_queue(struct scsi_cmnd * cmd, void (*scsi_done) (struct scsi_cmnd *)) 576 + ahd_linux_queue_lck(struct scsi_cmnd * cmd, void (*scsi_done) (struct scsi_cmnd *)) 577 577 { 578 578 struct ahd_softc *ahd; 579 579 struct ahd_linux_device *dev = scsi_transport_device_data(cmd->device); ··· 587 587 588 588 return rtn; 589 589 } 590 + 591 + static DEF_SCSI_QCMD(ahd_linux_queue) 590 592 591 593 static struct scsi_target ** 592 594 ahd_linux_target_in_softc(struct scsi_target *starget)
+3 -1
drivers/scsi/aic7xxx/aic7xxx_osm.c
··· 528 528 * Queue an SCB to the controller. 529 529 */ 530 530 static int 531 - ahc_linux_queue(struct scsi_cmnd * cmd, void (*scsi_done) (struct scsi_cmnd *)) 531 + ahc_linux_queue_lck(struct scsi_cmnd * cmd, void (*scsi_done) (struct scsi_cmnd *)) 532 532 { 533 533 struct ahc_softc *ahc; 534 534 struct ahc_linux_device *dev = scsi_transport_device_data(cmd->device); ··· 547 547 548 548 return rtn; 549 549 } 550 + 551 + static DEF_SCSI_QCMD(ahc_linux_queue) 550 552 551 553 static inline struct scsi_target ** 552 554 ahc_linux_target_in_softc(struct scsi_target *starget)
+3 -1
drivers/scsi/aic7xxx_old.c
··· 10234 10234 * Description: 10235 10235 * Queue a SCB to the controller. 10236 10236 *-F*************************************************************************/ 10237 - static int aic7xxx_queue(struct scsi_cmnd *cmd, void (*fn)(struct scsi_cmnd *)) 10237 + static int aic7xxx_queue_lck(struct scsi_cmnd *cmd, void (*fn)(struct scsi_cmnd *)) 10238 10238 { 10239 10239 struct aic7xxx_host *p; 10240 10240 struct aic7xxx_scb *scb; ··· 10291 10291 aic7xxx_run_waiting_queues(p); 10292 10292 return (0); 10293 10293 } 10294 + 10295 + static DEF_SCSI_QCMD(aic7xxx_queue) 10294 10296 10295 10297 /*+F************************************************************************* 10296 10298 * Function:
+4 -3
drivers/scsi/arcmsr/arcmsr_hba.c
··· 85 85 static int arcmsr_bus_reset(struct scsi_cmnd *); 86 86 static int arcmsr_bios_param(struct scsi_device *sdev, 87 87 struct block_device *bdev, sector_t capacity, int *info); 88 - static int arcmsr_queue_command(struct scsi_cmnd *cmd, 89 - void (*done) (struct scsi_cmnd *)); 88 + static int arcmsr_queue_command(struct Scsi_Host *h, struct scsi_cmnd *cmd); 90 89 static int arcmsr_probe(struct pci_dev *pdev, 91 90 const struct pci_device_id *id); 92 91 static void arcmsr_remove(struct pci_dev *pdev); ··· 2080 2081 } 2081 2082 } 2082 2083 2083 - static int arcmsr_queue_command(struct scsi_cmnd *cmd, 2084 + static int arcmsr_queue_command_lck(struct scsi_cmnd *cmd, 2084 2085 void (* done)(struct scsi_cmnd *)) 2085 2086 { 2086 2087 struct Scsi_Host *host = cmd->device->host; ··· 2122 2123 arcmsr_post_ccb(acb, ccb); 2123 2124 return 0; 2124 2125 } 2126 + 2127 + static DEF_SCSI_QCMD(arcmsr_queue_command) 2125 2128 2126 2129 static bool arcmsr_get_hba_config(struct AdapterControlBlock *acb) 2127 2130 {
+3 -1
drivers/scsi/arm/acornscsi.c
··· 2511 2511 * done - function called on completion, with pointer to command descriptor 2512 2512 * Returns : 0, or < 0 on error. 2513 2513 */ 2514 - int acornscsi_queuecmd(struct scsi_cmnd *SCpnt, 2514 + static int acornscsi_queuecmd_lck(struct scsi_cmnd *SCpnt, 2515 2515 void (*done)(struct scsi_cmnd *)) 2516 2516 { 2517 2517 AS_Host *host = (AS_Host *)SCpnt->device->host->hostdata; ··· 2560 2560 } 2561 2561 return 0; 2562 2562 } 2563 + 2564 + DEF_SCSI_QCMD(acornscsi_queuecmd) 2563 2565 2564 2566 /* 2565 2567 * Prototype: void acornscsi_reportstatus(struct scsi_cmnd **SCpntp1, struct scsi_cmnd **SCpntp2, int result)
+7 -3
drivers/scsi/arm/fas216.c
··· 2198 2198 * Returns: 0 on success, else error. 2199 2199 * Notes: io_request_lock is held, interrupts are disabled. 2200 2200 */ 2201 - int fas216_queue_command(struct scsi_cmnd *SCpnt, 2201 + static int fas216_queue_command_lck(struct scsi_cmnd *SCpnt, 2202 2202 void (*done)(struct scsi_cmnd *)) 2203 2203 { 2204 2204 FAS216_Info *info = (FAS216_Info *)SCpnt->device->host->hostdata; ··· 2240 2240 return result; 2241 2241 } 2242 2242 2243 + DEF_SCSI_QCMD(fas216_queue_command) 2244 + 2243 2245 /** 2244 2246 * fas216_internal_done - trigger restart of a waiting thread in fas216_noqueue_command 2245 2247 * @SCpnt: Command to wake ··· 2265 2263 * Returns: scsi result code. 2266 2264 * Notes: io_request_lock is held, interrupts are disabled. 2267 2265 */ 2268 - int fas216_noqueue_command(struct scsi_cmnd *SCpnt, 2266 + static int fas216_noqueue_command_lck(struct scsi_cmnd *SCpnt, 2269 2267 void (*done)(struct scsi_cmnd *)) 2270 2268 { 2271 2269 FAS216_Info *info = (FAS216_Info *)SCpnt->device->host->hostdata; ··· 2279 2277 BUG_ON(info->scsi.irq != NO_IRQ); 2280 2278 2281 2279 info->internal_done = 0; 2282 - fas216_queue_command(SCpnt, fas216_internal_done); 2280 + fas216_queue_command_lck(SCpnt, fas216_internal_done); 2283 2281 2284 2282 /* 2285 2283 * This wastes time, since we can't return until the command is ··· 2311 2309 2312 2310 return 0; 2313 2311 } 2312 + 2313 + DEF_SCSI_QCMD(fas216_noqueue_command) 2314 2314 2315 2315 /* 2316 2316 * Error handler timeout function. Indicate that we timed out,
+8 -10
drivers/scsi/arm/fas216.h
··· 331 331 */ 332 332 extern int fas216_add (struct Scsi_Host *instance, struct device *dev); 333 333 334 - /* Function: int fas216_queue_command(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 334 + /* Function: int fas216_queue_command(struct Scsi_Host *h, struct scsi_cmnd *SCpnt) 335 335 * Purpose : queue a command for adapter to process. 336 - * Params : SCpnt - Command to queue 337 - * done - done function to call once command is complete 336 + * Params : h - host adapter 337 + * : SCpnt - Command to queue 338 338 * Returns : 0 - success, else error 339 339 */ 340 - extern int fas216_queue_command(struct scsi_cmnd *, 341 - void (*done)(struct scsi_cmnd *)); 340 + extern int fas216_queue_command(struct Scsi_Host *h, struct scsi_cmnd *SCpnt); 342 341 343 - /* Function: int fas216_noqueue_command(istruct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 342 + /* Function: int fas216_noqueue_command(struct Scsi_Host *h, struct scsi_cmnd *SCpnt) 344 343 * Purpose : queue a command for adapter to process, and process it to completion. 345 - * Params : SCpnt - Command to queue 346 - * done - done function to call once command is complete 344 + * Params : h - host adapter 345 + * : SCpnt - Command to queue 347 346 * Returns : 0 - success, else error 348 347 */ 349 - extern int fas216_noqueue_command(struct scsi_cmnd *, 350 - void (*done)(struct scsi_cmnd *)); 348 + extern int fas216_noqueue_command(struct Scsi_Host *, struct scsi_cmnd *) 351 349 352 350 /* Function: irqreturn_t fas216_intr (FAS216_Info *info) 353 351 * Purpose : handle interrupts from the interface to progress a command
+3 -1
drivers/scsi/atari_NCR5380.c
··· 910 910 * 911 911 */ 912 912 913 - static int NCR5380_queue_command(Scsi_Cmnd *cmd, void (*done)(Scsi_Cmnd *)) 913 + static int NCR5380_queue_command_lck(Scsi_Cmnd *cmd, void (*done)(Scsi_Cmnd *)) 914 914 { 915 915 SETUP_HOSTDATA(cmd->device->host); 916 916 Scsi_Cmnd *tmp; ··· 1021 1021 NCR5380_main(NULL); 1022 1022 return 0; 1023 1023 } 1024 + 1025 + static DEF_SCSI_QCMD(NCR5380_queue_command) 1024 1026 1025 1027 /* 1026 1028 * Function : NCR5380_main (void)
-17
drivers/scsi/atari_scsi.c
··· 572 572 } 573 573 574 574 575 - /* This is the wrapper function for NCR5380_queue_command(). It just 576 - * tries to get the lock on the ST-DMA (see above) and then calls the 577 - * original function. 578 - */ 579 - 580 - #if 0 581 - int atari_queue_command(Scsi_Cmnd *cmd, void (*done)(Scsi_Cmnd *)) 582 - { 583 - /* falcon_get_lock(); 584 - * ++guenther: moved to NCR5380_queue_command() to prevent 585 - * race condition, see there for an explanation. 586 - */ 587 - return NCR5380_queue_command(cmd, done); 588 - } 589 - #endif 590 - 591 - 592 575 int __init atari_scsi_detect(struct scsi_host_template *host) 593 576 { 594 577 static int called = 0;
+3 -1
drivers/scsi/atp870u.c
··· 605 605 * 606 606 * Queue a command to the ATP queue. Called with the host lock held. 607 607 */ 608 - static int atp870u_queuecommand(struct scsi_cmnd * req_p, 608 + static int atp870u_queuecommand_lck(struct scsi_cmnd *req_p, 609 609 void (*done) (struct scsi_cmnd *)) 610 610 { 611 611 unsigned char c; ··· 693 693 #endif 694 694 return 0; 695 695 } 696 + 697 + static DEF_SCSI_QCMD(atp870u_queuecommand) 696 698 697 699 /** 698 700 * send_s870 - send a command to the controller
+4 -3
drivers/scsi/bfa/bfad_im.c
··· 30 30 struct scsi_transport_template *bfad_im_scsi_transport_template; 31 31 struct scsi_transport_template *bfad_im_scsi_vport_transport_template; 32 32 static void bfad_im_itnim_work_handler(struct work_struct *work); 33 - static int bfad_im_queuecommand(struct scsi_cmnd *cmnd, 34 - void (*done)(struct scsi_cmnd *)); 33 + static int bfad_im_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *cmnd); 35 34 static int bfad_im_slave_alloc(struct scsi_device *sdev); 36 35 static void bfad_im_fc_rport_add(struct bfad_im_port_s *im_port, 37 36 struct bfad_itnim_s *itnim); ··· 1119 1120 * Scsi_Host template entry, queue a SCSI command to the BFAD. 1120 1121 */ 1121 1122 static int 1122 - bfad_im_queuecommand(struct scsi_cmnd *cmnd, void (*done) (struct scsi_cmnd *)) 1123 + bfad_im_queuecommand_lck(struct scsi_cmnd *cmnd, void (*done) (struct scsi_cmnd *)) 1123 1124 { 1124 1125 struct bfad_im_port_s *im_port = 1125 1126 (struct bfad_im_port_s *) cmnd->device->host->hostdata[0]; ··· 1185 1186 1186 1187 return 0; 1187 1188 } 1189 + 1190 + static DEF_SCSI_QCMD(bfad_im_queuecommand) 1188 1191 1189 1192 void 1190 1193 bfad_os_rport_online_wait(struct bfad_s *bfad)
+2 -1
drivers/scsi/dc395x.c
··· 1080 1080 * and is expected to be held on return. 1081 1081 * 1082 1082 **/ 1083 - static int dc395x_queue_command(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 1083 + static int dc395x_queue_command_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 1084 1084 { 1085 1085 struct DeviceCtlBlk *dcb; 1086 1086 struct ScsiReqBlk *srb; ··· 1154 1154 return 0; 1155 1155 } 1156 1156 1157 + static DEF_SCSI_QCMD(dc395x_queue_command) 1157 1158 1158 1159 /* 1159 1160 * Return the disk geometry for the given SCSI device.
+3 -1
drivers/scsi/dpt_i2o.c
··· 423 423 return 0; 424 424 } 425 425 426 - static int adpt_queue(struct scsi_cmnd * cmd, void (*done) (struct scsi_cmnd *)) 426 + static int adpt_queue_lck(struct scsi_cmnd * cmd, void (*done) (struct scsi_cmnd *)) 427 427 { 428 428 adpt_hba* pHba = NULL; 429 429 struct adpt_device* pDev = NULL; /* dpt per device information */ ··· 490 490 } 491 491 return adpt_scsi_to_i2o(pHba, cmd, pDev); 492 492 } 493 + 494 + static DEF_SCSI_QCMD(adpt_queue) 493 495 494 496 static int adpt_bios_param(struct scsi_device *sdev, struct block_device *dev, 495 497 sector_t capacity, int geom[])
+1 -1
drivers/scsi/dpti.h
··· 29 29 */ 30 30 31 31 static int adpt_detect(struct scsi_host_template * sht); 32 - static int adpt_queue(struct scsi_cmnd * cmd, void (*cmdcomplete) (struct scsi_cmnd *)); 32 + static int adpt_queue(struct Scsi_Host *h, struct scsi_cmnd * cmd); 33 33 static int adpt_abort(struct scsi_cmnd * cmd); 34 34 static int adpt_reset(struct scsi_cmnd* cmd); 35 35 static int adpt_release(struct Scsi_Host *host);
+1 -1
drivers/scsi/dtc.h
··· 36 36 static int dtc_biosparam(struct scsi_device *, struct block_device *, 37 37 sector_t, int*); 38 38 static int dtc_detect(struct scsi_host_template *); 39 - static int dtc_queue_command(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *)); 39 + static int dtc_queue_command(struct Scsi_Host *, struct scsi_cmnd *); 40 40 static int dtc_bus_reset(Scsi_Cmnd *); 41 41 42 42 #ifndef CMD_PER_LUN
+4 -3
drivers/scsi/eata.c
··· 505 505 506 506 static int eata2x_detect(struct scsi_host_template *); 507 507 static int eata2x_release(struct Scsi_Host *); 508 - static int eata2x_queuecommand(struct scsi_cmnd *, 509 - void (*done) (struct scsi_cmnd *)); 508 + static int eata2x_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); 510 509 static int eata2x_eh_abort(struct scsi_cmnd *); 511 510 static int eata2x_eh_host_reset(struct scsi_cmnd *); 512 511 static int eata2x_bios_param(struct scsi_device *, struct block_device *, ··· 1757 1758 1758 1759 } 1759 1760 1760 - static int eata2x_queuecommand(struct scsi_cmnd *SCpnt, 1761 + static int eata2x_queuecommand_lck(struct scsi_cmnd *SCpnt, 1761 1762 void (*done) (struct scsi_cmnd *)) 1762 1763 { 1763 1764 struct Scsi_Host *shost = SCpnt->device->host; ··· 1841 1842 ha->cp_stat[i] = IN_USE; 1842 1843 return 0; 1843 1844 } 1845 + 1846 + static DEF_SCSI_QCMD(eata2x_queuecommand) 1844 1847 1845 1848 static int eata2x_eh_abort(struct scsi_cmnd *SCarg) 1846 1849 {
+3 -1
drivers/scsi/eata_pio.c
··· 335 335 return 0; 336 336 } 337 337 338 - static int eata_pio_queue(struct scsi_cmnd *cmd, 338 + static int eata_pio_queue_lck(struct scsi_cmnd *cmd, 339 339 void (*done)(struct scsi_cmnd *)) 340 340 { 341 341 unsigned int x, y; ··· 437 437 438 438 return 0; 439 439 } 440 + 441 + static DEF_SCSI_QCMD(eata_pio_queue) 440 442 441 443 static int eata_pio_abort(struct scsi_cmnd *cmd) 442 444 {
+3 -1
drivers/scsi/esp_scsi.c
··· 916 916 scsi_track_queue_full(dev, lp->num_tagged - 1); 917 917 } 918 918 919 - static int esp_queuecommand(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 919 + static int esp_queuecommand_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 920 920 { 921 921 struct scsi_device *dev = cmd->device; 922 922 struct esp *esp = shost_priv(dev->host); ··· 940 940 941 941 return 0; 942 942 } 943 + 944 + static DEF_SCSI_QCMD(esp_queuecommand) 943 945 944 946 static int esp_check_gross_error(struct esp *esp) 945 947 {
+3 -1
drivers/scsi/fd_mcs.c
··· 1072 1072 return 0; 1073 1073 } 1074 1074 1075 - static int fd_mcs_queue(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *)) 1075 + static int fd_mcs_queue_lck(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *)) 1076 1076 { 1077 1077 struct Scsi_Host *shpnt = SCpnt->device->host; 1078 1078 ··· 1121 1121 1122 1122 return 0; 1123 1123 } 1124 + 1125 + static DEF_SCSI_QCMD(fd_mcs_queue) 1124 1126 1125 1127 #if DEBUG_ABORT || DEBUG_RESET 1126 1128 static void fd_mcs_print_info(Scsi_Cmnd * SCpnt)
+3 -1
drivers/scsi/fdomain.c
··· 1419 1419 return IRQ_HANDLED; 1420 1420 } 1421 1421 1422 - static int fdomain_16x0_queue(struct scsi_cmnd *SCpnt, 1422 + static int fdomain_16x0_queue_lck(struct scsi_cmnd *SCpnt, 1423 1423 void (*done)(struct scsi_cmnd *)) 1424 1424 { 1425 1425 if (in_command) { ··· 1468 1468 1469 1469 return 0; 1470 1470 } 1471 + 1472 + static DEF_SCSI_QCMD(fdomain_16x0_queue) 1471 1473 1472 1474 #if DEBUG_ABORT 1473 1475 static void print_info(struct scsi_cmnd *SCpnt)
+1 -1
drivers/scsi/fnic/fnic.h
··· 246 246 void fnic_update_mac(struct fc_lport *, u8 *new); 247 247 void fnic_update_mac_locked(struct fnic *, u8 *new); 248 248 249 - int fnic_queuecommand(struct scsi_cmnd *, void (*done)(struct scsi_cmnd *)); 249 + int fnic_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); 250 250 int fnic_abort_cmd(struct scsi_cmnd *); 251 251 int fnic_device_reset(struct scsi_cmnd *); 252 252 int fnic_host_reset(struct scsi_cmnd *);
+3 -1
drivers/scsi/fnic/fnic_scsi.c
··· 349 349 * Routine to send a scsi cdb 350 350 * Called with host_lock held and interrupts disabled. 351 351 */ 352 - int fnic_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *)) 352 + static int fnic_queuecommand_lck(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *)) 353 353 { 354 354 struct fc_lport *lp; 355 355 struct fc_rport *rport; ··· 456 456 spin_lock(lp->host->host_lock); 457 457 return ret; 458 458 } 459 + 460 + DEF_SCSI_QCMD(fnic_queuecommand) 459 461 460 462 /* 461 463 * fnic_fcpio_fw_reset_cmpl_handler
+1 -1
drivers/scsi/g_NCR5380.h
··· 46 46 static int generic_NCR5380_abort(Scsi_Cmnd *); 47 47 static int generic_NCR5380_detect(struct scsi_host_template *); 48 48 static int generic_NCR5380_release_resources(struct Scsi_Host *); 49 - static int generic_NCR5380_queue_command(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *)); 49 + static int generic_NCR5380_queue_command(struct Scsi_Host *, struct scsi_cmnd *); 50 50 static int generic_NCR5380_bus_reset(Scsi_Cmnd *); 51 51 static const char* generic_NCR5380_info(struct Scsi_Host *); 52 52
+4 -2
drivers/scsi/gdth.c
··· 185 185 unsigned long arg); 186 186 187 187 static void gdth_flush(gdth_ha_str *ha); 188 - static int gdth_queuecommand(Scsi_Cmnd *scp,void (*done)(Scsi_Cmnd *)); 188 + static int gdth_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *cmd); 189 189 static int __gdth_queuecommand(gdth_ha_str *ha, struct scsi_cmnd *scp, 190 190 struct gdth_cmndinfo *cmndinfo); 191 191 static void gdth_scsi_done(struct scsi_cmnd *scp); ··· 4004 4004 } 4005 4005 4006 4006 4007 - static int gdth_queuecommand(struct scsi_cmnd *scp, 4007 + static int gdth_queuecommand_lck(struct scsi_cmnd *scp, 4008 4008 void (*done)(struct scsi_cmnd *)) 4009 4009 { 4010 4010 gdth_ha_str *ha = shost_priv(scp->device->host); ··· 4021 4021 4022 4022 return __gdth_queuecommand(ha, scp, cmndinfo); 4023 4023 } 4024 + 4025 + static DEF_SCSI_QCMD(gdth_queuecommand) 4024 4026 4025 4027 static int __gdth_queuecommand(gdth_ha_str *ha, struct scsi_cmnd *scp, 4026 4028 struct gdth_cmndinfo *cmndinfo)
+4 -4
drivers/scsi/hpsa.c
··· 31 31 #include <linux/seq_file.h> 32 32 #include <linux/init.h> 33 33 #include <linux/spinlock.h> 34 - #include <linux/smp_lock.h> 35 34 #include <linux/compat.h> 36 35 #include <linux/blktrace_api.h> 37 36 #include <linux/uaccess.h> ··· 142 143 void *buff, size_t size, u8 page_code, unsigned char *scsi3addr, 143 144 int cmd_type); 144 145 145 - static int hpsa_scsi_queue_command(struct scsi_cmnd *cmd, 146 - void (*done)(struct scsi_cmnd *)); 146 + static int hpsa_scsi_queue_command(struct Scsi_Host *h, struct scsi_cmnd *cmd); 147 147 static void hpsa_scan_start(struct Scsi_Host *); 148 148 static int hpsa_scan_finished(struct Scsi_Host *sh, 149 149 unsigned long elapsed_time); ··· 1924 1926 } 1925 1927 1926 1928 1927 - static int hpsa_scsi_queue_command(struct scsi_cmnd *cmd, 1929 + static int hpsa_scsi_queue_command_lck(struct scsi_cmnd *cmd, 1928 1930 void (*done)(struct scsi_cmnd *)) 1929 1931 { 1930 1932 struct ctlr_info *h; ··· 2017 2019 /* the cmd'll come back via intr handler in complete_scsi_command() */ 2018 2020 return 0; 2019 2021 } 2022 + 2023 + static DEF_SCSI_QCMD(hpsa_scsi_queue_command) 2020 2024 2021 2025 static void hpsa_scan_start(struct Scsi_Host *sh) 2022 2026 {
+3 -1
drivers/scsi/hptiop.c
··· 751 751 MVIOP_MU_QUEUE_ADDR_HOST_BIT | size_bit, hba); 752 752 } 753 753 754 - static int hptiop_queuecommand(struct scsi_cmnd *scp, 754 + static int hptiop_queuecommand_lck(struct scsi_cmnd *scp, 755 755 void (*done)(struct scsi_cmnd *)) 756 756 { 757 757 struct Scsi_Host *host = scp->device->host; ··· 818 818 scp->scsi_done(scp); 819 819 return 0; 820 820 } 821 + 822 + static DEF_SCSI_QCMD(hptiop_queuecommand) 821 823 822 824 static const char *hptiop_info(struct Scsi_Host *host) 823 825 {
+4 -2
drivers/scsi/ibmmca.c
··· 39 39 #include <scsi/scsi_host.h> 40 40 41 41 /* Common forward declarations for all Linux-versions: */ 42 - static int ibmmca_queuecommand (Scsi_Cmnd *, void (*done) (Scsi_Cmnd *)); 42 + static int ibmmca_queuecommand (struct Scsi_Host *, struct scsi_cmnd *); 43 43 static int ibmmca_abort (Scsi_Cmnd *); 44 44 static int ibmmca_host_reset (Scsi_Cmnd *); 45 45 static int ibmmca_biosparam (struct scsi_device *, struct block_device *, sector_t, int *); ··· 1691 1691 } 1692 1692 1693 1693 /* The following routine is the SCSI command queue for the midlevel driver */ 1694 - static int ibmmca_queuecommand(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *)) 1694 + static int ibmmca_queuecommand_lck(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *)) 1695 1695 { 1696 1696 unsigned int ldn; 1697 1697 unsigned int scsi_cmd; ··· 1995 1995 } 1996 1996 return 0; 1997 1997 } 1998 + 1999 + static DEF_SCSI_QCMD(ibmmca_queuecommand) 1998 2000 1999 2001 static int __ibmmca_abort(Scsi_Cmnd * cmd) 2000 2002 {
+3 -1
drivers/scsi/ibmvscsi/ibmvfc.c
··· 1606 1606 * Returns: 1607 1607 * 0 on success / other on failure 1608 1608 **/ 1609 - static int ibmvfc_queuecommand(struct scsi_cmnd *cmnd, 1609 + static int ibmvfc_queuecommand_lck(struct scsi_cmnd *cmnd, 1610 1610 void (*done) (struct scsi_cmnd *)) 1611 1611 { 1612 1612 struct ibmvfc_host *vhost = shost_priv(cmnd->device->host); ··· 1671 1671 done(cmnd); 1672 1672 return 0; 1673 1673 } 1674 + 1675 + static DEF_SCSI_QCMD(ibmvfc_queuecommand) 1674 1676 1675 1677 /** 1676 1678 * ibmvfc_sync_completion - Signal that a synchronous command has completed
+3 -1
drivers/scsi/ibmvscsi/ibmvscsi.c
··· 713 713 * @cmd: struct scsi_cmnd to be executed 714 714 * @done: Callback function to be called when cmd is completed 715 715 */ 716 - static int ibmvscsi_queuecommand(struct scsi_cmnd *cmnd, 716 + static int ibmvscsi_queuecommand_lck(struct scsi_cmnd *cmnd, 717 717 void (*done) (struct scsi_cmnd *)) 718 718 { 719 719 struct srp_cmd *srp_cmd; ··· 765 765 766 766 return ibmvscsi_send_srp_event(evt_struct, hostdata, 0); 767 767 } 768 + 769 + static DEF_SCSI_QCMD(ibmvscsi_queuecommand) 768 770 769 771 /* ------------------------------------------------------------ 770 772 * Routines for driver initialization
+3 -1
drivers/scsi/imm.c
··· 926 926 return 0; 927 927 } 928 928 929 - static int imm_queuecommand(struct scsi_cmnd *cmd, 929 + static int imm_queuecommand_lck(struct scsi_cmnd *cmd, 930 930 void (*done)(struct scsi_cmnd *)) 931 931 { 932 932 imm_struct *dev = imm_dev(cmd->device->host); ··· 948 948 949 949 return 0; 950 950 } 951 + 952 + static DEF_SCSI_QCMD(imm_queuecommand) 951 953 952 954 /* 953 955 * Apparently the disk->capacity attribute is off by 1 sector
+3 -1
drivers/scsi/in2000.c
··· 334 334 335 335 static void in2000_execute(struct Scsi_Host *instance); 336 336 337 - static int in2000_queuecommand(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *)) 337 + static int in2000_queuecommand_lck(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *)) 338 338 { 339 339 struct Scsi_Host *instance; 340 340 struct IN2000_hostdata *hostdata; ··· 430 430 DB(DB_QUEUE_COMMAND, printk(")Q-%ld ", cmd->serial_number)) 431 431 return 0; 432 432 } 433 + 434 + static DEF_SCSI_QCMD(in2000_queuecommand) 433 435 434 436 435 437
+1 -1
drivers/scsi/in2000.h
··· 396 396 flags) 397 397 398 398 static int in2000_detect(struct scsi_host_template *) in2000__INIT; 399 - static int in2000_queuecommand(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *)); 399 + static int in2000_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); 400 400 static int in2000_abort(Scsi_Cmnd *); 401 401 static void in2000_setup(char *, int *) in2000__INIT; 402 402 static int in2000_biosparam(struct scsi_device *, struct block_device *,
+3 -1
drivers/scsi/initio.c
··· 2639 2639 * will cause the mid layer to call us again later with the command) 2640 2640 */ 2641 2641 2642 - static int i91u_queuecommand(struct scsi_cmnd *cmd, 2642 + static int i91u_queuecommand_lck(struct scsi_cmnd *cmd, 2643 2643 void (*done)(struct scsi_cmnd *)) 2644 2644 { 2645 2645 struct initio_host *host = (struct initio_host *) cmd->device->host->hostdata; ··· 2655 2655 initio_exec_scb(host, cmnd); 2656 2656 return 0; 2657 2657 } 2658 + 2659 + static DEF_SCSI_QCMD(i91u_queuecommand) 2658 2660 2659 2661 /** 2660 2662 * i91u_bus_reset - reset the SCSI bus
+3 -1
drivers/scsi/ipr.c
··· 5709 5709 * SCSI_MLQUEUE_DEVICE_BUSY if device is busy 5710 5710 * SCSI_MLQUEUE_HOST_BUSY if host is busy 5711 5711 **/ 5712 - static int ipr_queuecommand(struct scsi_cmnd *scsi_cmd, 5712 + static int ipr_queuecommand_lck(struct scsi_cmnd *scsi_cmd, 5713 5713 void (*done) (struct scsi_cmnd *)) 5714 5714 { 5715 5715 struct ipr_ioa_cfg *ioa_cfg; ··· 5791 5791 5792 5792 return 0; 5793 5793 } 5794 + 5795 + static DEF_SCSI_QCMD(ipr_queuecommand) 5794 5796 5795 5797 /** 5796 5798 * ipr_ioctl - IOCTL handler
+4 -2
drivers/scsi/ips.c
··· 232 232 static int ips_release(struct Scsi_Host *); 233 233 static int ips_eh_abort(struct scsi_cmnd *); 234 234 static int ips_eh_reset(struct scsi_cmnd *); 235 - static int ips_queue(struct scsi_cmnd *, void (*)(struct scsi_cmnd *)); 235 + static int ips_queue(struct Scsi_Host *, struct scsi_cmnd *); 236 236 static const char *ips_info(struct Scsi_Host *); 237 237 static irqreturn_t do_ipsintr(int, void *); 238 238 static int ips_hainit(ips_ha_t *); ··· 1046 1046 /* Linux obtains io_request_lock before calling this function */ 1047 1047 /* */ 1048 1048 /****************************************************************************/ 1049 - static int ips_queue(struct scsi_cmnd *SC, void (*done) (struct scsi_cmnd *)) 1049 + static int ips_queue_lck(struct scsi_cmnd *SC, void (*done) (struct scsi_cmnd *)) 1050 1050 { 1051 1051 ips_ha_t *ha; 1052 1052 ips_passthru_t *pt; ··· 1136 1136 1137 1137 return (0); 1138 1138 } 1139 + 1140 + static DEF_SCSI_QCMD(ips_queue) 1139 1141 1140 1142 /****************************************************************************/ 1141 1143 /* */
+3 -1
drivers/scsi/libfc/fc_fcp.c
··· 1753 1753 * This is the i/o strategy routine, called by the SCSI layer. This routine 1754 1754 * is called with the host_lock held. 1755 1755 */ 1756 - int fc_queuecommand(struct scsi_cmnd *sc_cmd, void (*done)(struct scsi_cmnd *)) 1756 + static int fc_queuecommand_lck(struct scsi_cmnd *sc_cmd, void (*done)(struct scsi_cmnd *)) 1757 1757 { 1758 1758 struct fc_lport *lport; 1759 1759 struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device)); ··· 1851 1851 spin_lock_irq(lport->host->host_lock); 1852 1852 return rc; 1853 1853 } 1854 + 1855 + DEF_SCSI_QCMD(fc_queuecommand) 1854 1856 EXPORT_SYMBOL(fc_queuecommand); 1855 1857 1856 1858 /**
+3 -1
drivers/scsi/libiscsi.c
··· 1599 1599 FAILURE_SESSION_NOT_READY, 1600 1600 }; 1601 1601 1602 - int iscsi_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *)) 1602 + static int iscsi_queuecommand_lck(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *)) 1603 1603 { 1604 1604 struct iscsi_cls_session *cls_session; 1605 1605 struct Scsi_Host *host; ··· 1736 1736 spin_lock(host->host_lock); 1737 1737 return 0; 1738 1738 } 1739 + 1740 + DEF_SCSI_QCMD(iscsi_queuecommand) 1739 1741 EXPORT_SYMBOL_GPL(iscsi_queuecommand); 1740 1742 1741 1743 int iscsi_change_queue_depth(struct scsi_device *sdev, int depth, int reason)
+3 -1
drivers/scsi/libsas/sas_scsi_host.c
··· 189 189 * Note: XXX: Remove the host unlock/lock pair when SCSI Core can 190 190 * call us without holding an IRQ spinlock... 191 191 */ 192 - int sas_queuecommand(struct scsi_cmnd *cmd, 192 + static int sas_queuecommand_lck(struct scsi_cmnd *cmd, 193 193 void (*scsi_done)(struct scsi_cmnd *)) 194 194 __releases(host->host_lock) 195 195 __acquires(dev->sata_dev.ap->lock) ··· 253 253 spin_lock_irq(host->host_lock); 254 254 return res; 255 255 } 256 + 257 + DEF_SCSI_QCMD(sas_queuecommand) 256 258 257 259 static void sas_eh_finish_cmd(struct scsi_cmnd *cmd) 258 260 {
+3 -1
drivers/scsi/lpfc/lpfc_scsi.c
··· 2899 2899 * SCSI_MLQUEUE_HOST_BUSY - Block all devices served by this host temporarily. 2900 2900 **/ 2901 2901 static int 2902 - lpfc_queuecommand(struct scsi_cmnd *cmnd, void (*done) (struct scsi_cmnd *)) 2902 + lpfc_queuecommand_lck(struct scsi_cmnd *cmnd, void (*done) (struct scsi_cmnd *)) 2903 2903 { 2904 2904 struct Scsi_Host *shost = cmnd->device->host; 2905 2905 struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata; ··· 3059 3059 done(cmnd); 3060 3060 return 0; 3061 3061 } 3062 + 3063 + static DEF_SCSI_QCMD(lpfc_queuecommand) 3062 3064 3063 3065 /** 3064 3066 * lpfc_abort_handler - scsi_host_template eh_abort_handler entry point
+3 -1
drivers/scsi/mac53c94.c
··· 66 66 static void set_dma_cmds(struct fsc_state *, struct scsi_cmnd *); 67 67 68 68 69 - static int mac53c94_queue(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 69 + static int mac53c94_queue_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 70 70 { 71 71 struct fsc_state *state; 72 72 ··· 98 98 99 99 return 0; 100 100 } 101 + 102 + static DEF_SCSI_QCMD(mac53c94_queue) 101 103 102 104 static int mac53c94_host_reset(struct scsi_cmnd *cmd) 103 105 {
+4 -2
drivers/scsi/megaraid.c
··· 366 366 * The command queuing entry point for the mid-layer. 367 367 */ 368 368 static int 369 - megaraid_queue(Scsi_Cmnd *scmd, void (*done)(Scsi_Cmnd *)) 369 + megaraid_queue_lck(Scsi_Cmnd *scmd, void (*done)(Scsi_Cmnd *)) 370 370 { 371 371 adapter_t *adapter; 372 372 scb_t *scb; ··· 408 408 spin_unlock_irqrestore(&adapter->lock, flags); 409 409 return busy; 410 410 } 411 + 412 + static DEF_SCSI_QCMD(megaraid_queue) 411 413 412 414 /** 413 415 * mega_allocate_scb() ··· 4458 4456 4459 4457 scb->idx = CMDID_INT_CMDS; 4460 4458 4461 - megaraid_queue(scmd, mega_internal_done); 4459 + megaraid_queue_lck(scmd, mega_internal_done); 4462 4460 4463 4461 wait_for_completion(&adapter->int_waitq); 4464 4462
+1 -1
drivers/scsi/megaraid.h
··· 987 987 static int issue_scb(adapter_t *, scb_t *); 988 988 static int mega_setup_mailbox(adapter_t *); 989 989 990 - static int megaraid_queue (Scsi_Cmnd *, void (*)(Scsi_Cmnd *)); 990 + static int megaraid_queue (struct Scsi_Host *, struct scsi_cmnd *); 991 991 static scb_t * mega_build_cmd(adapter_t *, Scsi_Cmnd *, int *); 992 992 static void __mega_runpendq(adapter_t *); 993 993 static int issue_scb_block(adapter_t *, u_char *);
+4 -3
drivers/scsi/megaraid/megaraid_mbox.c
··· 113 113 static void megaraid_mbox_display_scb(adapter_t *, scb_t *); 114 114 static void megaraid_mbox_setup_device_map(adapter_t *); 115 115 116 - static int megaraid_queue_command(struct scsi_cmnd *, 117 - void (*)(struct scsi_cmnd *)); 116 + static int megaraid_queue_command(struct Scsi_Host *, struct scsi_cmnd *); 118 117 static scb_t *megaraid_mbox_build_cmd(adapter_t *, struct scsi_cmnd *, int *); 119 118 static void megaraid_mbox_runpendq(adapter_t *, scb_t *); 120 119 static void megaraid_mbox_prepare_pthru(adapter_t *, scb_t *, ··· 1483 1484 * Queue entry point for mailbox based controllers. 1484 1485 */ 1485 1486 static int 1486 - megaraid_queue_command(struct scsi_cmnd *scp, void (*done)(struct scsi_cmnd *)) 1487 + megaraid_queue_command_lck(struct scsi_cmnd *scp, void (*done)(struct scsi_cmnd *)) 1487 1488 { 1488 1489 adapter_t *adapter; 1489 1490 scb_t *scb; ··· 1511 1512 megaraid_mbox_runpendq(adapter, scb); 1512 1513 return if_busy; 1513 1514 } 1515 + 1516 + static DEF_SCSI_QCMD(megaraid_queue_command) 1514 1517 1515 1518 /** 1516 1519 * megaraid_mbox_build_cmd - transform the mid-layer scsi commands
+3 -1
drivers/scsi/megaraid/megaraid_sas.c
··· 1334 1334 * @done: Callback entry point 1335 1335 */ 1336 1336 static int 1337 - megasas_queue_command(struct scsi_cmnd *scmd, void (*done) (struct scsi_cmnd *)) 1337 + megasas_queue_command_lck(struct scsi_cmnd *scmd, void (*done) (struct scsi_cmnd *)) 1338 1338 { 1339 1339 u32 frame_count; 1340 1340 struct megasas_cmd *cmd; ··· 1416 1416 done(scmd); 1417 1417 return 0; 1418 1418 } 1419 + 1420 + static DEF_SCSI_QCMD(megasas_queue_command) 1419 1421 1420 1422 static struct megasas_instance *megasas_lookup_instance(u16 host_no) 1421 1423 {
+3 -1
drivers/scsi/mesh.c
··· 1627 1627 * Called by midlayer with host locked to queue a new 1628 1628 * request 1629 1629 */ 1630 - static int mesh_queue(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 1630 + static int mesh_queue_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 1631 1631 { 1632 1632 struct mesh_state *ms; 1633 1633 ··· 1647 1647 1648 1648 return 0; 1649 1649 } 1650 + 1651 + static DEF_SCSI_QCMD(mesh_queue) 1650 1652 1651 1653 /* 1652 1654 * Called to handle interrupts, either call by the interrupt
+3 -1
drivers/scsi/mpt2sas/mpt2sas_scsih.c
··· 3315 3315 * SCSI_MLQUEUE_HOST_BUSY if the entire host queue is full 3316 3316 */ 3317 3317 static int 3318 - _scsih_qcmd(struct scsi_cmnd *scmd, void (*done)(struct scsi_cmnd *)) 3318 + _scsih_qcmd_lck(struct scsi_cmnd *scmd, void (*done)(struct scsi_cmnd *)) 3319 3319 { 3320 3320 struct MPT2SAS_ADAPTER *ioc = shost_priv(scmd->device->host); 3321 3321 struct MPT2SAS_DEVICE *sas_device_priv_data; ··· 3440 3440 out: 3441 3441 return SCSI_MLQUEUE_HOST_BUSY; 3442 3442 } 3443 + 3444 + static DEF_SCSI_QCMD(_scsih_qcmd) 3443 3445 3444 3446 /** 3445 3447 * _scsih_normalize_sense - normalize descriptor and fixed format sense data
+3 -1
drivers/scsi/ncr53c8xx.c
··· 8029 8029 return 0; 8030 8030 } 8031 8031 8032 - static int ncr53c8xx_queue_command (struct scsi_cmnd *cmd, void (* done)(struct scsi_cmnd *)) 8032 + static int ncr53c8xx_queue_command_lck (struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 8033 8033 { 8034 8034 struct ncb *np = ((struct host_data *) cmd->device->host->hostdata)->ncb; 8035 8035 unsigned long flags; ··· 8067 8067 8068 8068 return sts; 8069 8069 } 8070 + 8071 + static DEF_SCSI_QCMD(ncr53c8xx_queue_command) 8070 8072 8071 8073 irqreturn_t ncr53c8xx_intr(int irq, void *dev_id) 8072 8074 {
+4 -3
drivers/scsi/nsp32.c
··· 196 196 static int nsp32_proc_info (struct Scsi_Host *, char *, char **, off_t, int, int); 197 197 198 198 static int nsp32_detect (struct pci_dev *pdev); 199 - static int nsp32_queuecommand(struct scsi_cmnd *, 200 - void (*done)(struct scsi_cmnd *)); 199 + static int nsp32_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); 201 200 static const char *nsp32_info (struct Scsi_Host *); 202 201 static int nsp32_release (struct Scsi_Host *); 203 202 ··· 908 909 return TRUE; 909 910 } 910 911 911 - static int nsp32_queuecommand(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 912 + static int nsp32_queuecommand_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 912 913 { 913 914 nsp32_hw_data *data = (nsp32_hw_data *)SCpnt->device->host->hostdata; 914 915 nsp32_target *target; ··· 1048 1049 1049 1050 return 0; 1050 1051 } 1052 + 1053 + static DEF_SCSI_QCMD(nsp32_queuecommand) 1051 1054 1052 1055 /* initialize asic */ 1053 1056 static int nsp32hw_init(nsp32_hw_data *data)
+1 -1
drivers/scsi/pas16.h
··· 118 118 static int pas16_biosparam(struct scsi_device *, struct block_device *, 119 119 sector_t, int*); 120 120 static int pas16_detect(struct scsi_host_template *); 121 - static int pas16_queue_command(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *)); 121 + static int pas16_queue_command(struct Scsi_Host *, struct scsi_cmnd *); 122 122 static int pas16_bus_reset(Scsi_Cmnd *); 123 123 124 124 #ifndef CMD_PER_LUN
+3 -1
drivers/scsi/pcmcia/nsp_cs.c
··· 184 184 SCpnt->scsi_done(SCpnt); 185 185 } 186 186 187 - static int nsp_queuecommand(struct scsi_cmnd *SCpnt, 187 + static int nsp_queuecommand_lck(struct scsi_cmnd *SCpnt, 188 188 void (*done)(struct scsi_cmnd *)) 189 189 { 190 190 #ifdef NSP_DEBUG ··· 263 263 #endif 264 264 return 0; 265 265 } 266 + 267 + static DEF_SCSI_QCMD(nsp_queuecommand) 266 268 267 269 /* 268 270 * setup PIO FIFO transfer mode and enable/disable to data out
+1 -2
drivers/scsi/pcmcia/nsp_cs.h
··· 299 299 off_t offset, 300 300 int length, 301 301 int inout); 302 - static int nsp_queuecommand(struct scsi_cmnd *SCpnt, 303 - void (* done)(struct scsi_cmnd *SCpnt)); 302 + static int nsp_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *SCpnt); 304 303 305 304 /* Error handler */ 306 305 /*static int nsp_eh_abort (struct scsi_cmnd *SCpnt);*/
+3 -1
drivers/scsi/pcmcia/sym53c500_cs.c
··· 547 547 } 548 548 549 549 static int 550 - SYM53C500_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 550 + SYM53C500_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) 551 551 { 552 552 int i; 553 553 int port_base = SCpnt->device->host->io_port; ··· 582 582 583 583 return 0; 584 584 } 585 + 586 + static DEF_SCSI_QCMD(SYM53C500_queue) 585 587 586 588 static int 587 589 SYM53C500_host_reset(struct scsi_cmnd *SCpnt)
-1
drivers/scsi/pm8001/pm8001_sas.h
··· 50 50 #include <linux/dma-mapping.h> 51 51 #include <linux/pci.h> 52 52 #include <linux/interrupt.h> 53 - #include <linux/smp_lock.h> 54 53 #include <scsi/libsas.h> 55 54 #include <scsi/scsi_tcq.h> 56 55 #include <scsi/sas_ata.h>
+3 -1
drivers/scsi/pmcraid.c
··· 3478 3478 * SCSI_MLQUEUE_DEVICE_BUSY if device is busy 3479 3479 * SCSI_MLQUEUE_HOST_BUSY if host is busy 3480 3480 */ 3481 - static int pmcraid_queuecommand( 3481 + static int pmcraid_queuecommand_lck( 3482 3482 struct scsi_cmnd *scsi_cmd, 3483 3483 void (*done) (struct scsi_cmnd *) 3484 3484 ) ··· 3583 3583 3584 3584 return rc; 3585 3585 } 3586 + 3587 + static DEF_SCSI_QCMD(pmcraid_queuecommand) 3586 3588 3587 3589 /** 3588 3590 * pmcraid_open -char node "open" entry, allowed only users with admin access
+3 -1
drivers/scsi/ppa.c
··· 798 798 return 0; 799 799 } 800 800 801 - static int ppa_queuecommand(struct scsi_cmnd *cmd, 801 + static int ppa_queuecommand_lck(struct scsi_cmnd *cmd, 802 802 void (*done) (struct scsi_cmnd *)) 803 803 { 804 804 ppa_struct *dev = ppa_dev(cmd->device->host); ··· 820 820 821 821 return 0; 822 822 } 823 + 824 + static DEF_SCSI_QCMD(ppa_queuecommand) 823 825 824 826 /* 825 827 * Apparently the disk->capacity attribute is off by 1 sector
+3 -1
drivers/scsi/ps3rom.c
··· 211 211 return 0; 212 212 } 213 213 214 - static int ps3rom_queuecommand(struct scsi_cmnd *cmd, 214 + static int ps3rom_queuecommand_lck(struct scsi_cmnd *cmd, 215 215 void (*done)(struct scsi_cmnd *)) 216 216 { 217 217 struct ps3rom_private *priv = shost_priv(cmd->device->host); ··· 259 259 260 260 return 0; 261 261 } 262 + 263 + static DEF_SCSI_QCMD(ps3rom_queuecommand) 262 264 263 265 static int decode_lv1_status(u64 status, unsigned char *sense_key, 264 266 unsigned char *asc, unsigned char *ascq)
+3 -1
drivers/scsi/qla1280.c
··· 727 727 * context which is a big NO! NO!. 728 728 **************************************************************************/ 729 729 static int 730 - qla1280_queuecommand(struct scsi_cmnd *cmd, void (*fn)(struct scsi_cmnd *)) 730 + qla1280_queuecommand_lck(struct scsi_cmnd *cmd, void (*fn)(struct scsi_cmnd *)) 731 731 { 732 732 struct Scsi_Host *host = cmd->device->host; 733 733 struct scsi_qla_host *ha = (struct scsi_qla_host *)host->hostdata; ··· 755 755 #endif 756 756 return status; 757 757 } 758 + 759 + static DEF_SCSI_QCMD(qla1280_queuecommand) 758 760 759 761 enum action { 760 762 ABORT_COMMAND,
+4 -3
drivers/scsi/qla2xxx/qla_os.c
··· 179 179 static int qla2xxx_scan_finished(struct Scsi_Host *, unsigned long time); 180 180 static void qla2xxx_scan_start(struct Scsi_Host *); 181 181 static void qla2xxx_slave_destroy(struct scsi_device *); 182 - static int qla2xxx_queuecommand(struct scsi_cmnd *cmd, 183 - void (*fn)(struct scsi_cmnd *)); 182 + static int qla2xxx_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *cmd); 184 183 static int qla2xxx_eh_abort(struct scsi_cmnd *); 185 184 static int qla2xxx_eh_device_reset(struct scsi_cmnd *); 186 185 static int qla2xxx_eh_target_reset(struct scsi_cmnd *); ··· 534 535 } 535 536 536 537 static int 537 - qla2xxx_queuecommand(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 538 + qla2xxx_queuecommand_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 538 539 { 539 540 scsi_qla_host_t *vha = shost_priv(cmd->device->host); 540 541 fc_port_t *fcport = (struct fc_port *) cmd->device->hostdata; ··· 607 608 608 609 return 0; 609 610 } 611 + 612 + static DEF_SCSI_QCMD(qla2xxx_queuecommand) 610 613 611 614 612 615 /*
+4 -3
drivers/scsi/qla4xxx/ql4_os.c
··· 79 79 /* 80 80 * SCSI host template entry points 81 81 */ 82 - static int qla4xxx_queuecommand(struct scsi_cmnd *cmd, 83 - void (*done) (struct scsi_cmnd *)); 82 + static int qla4xxx_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *cmd); 84 83 static int qla4xxx_eh_abort(struct scsi_cmnd *cmd); 85 84 static int qla4xxx_eh_device_reset(struct scsi_cmnd *cmd); 86 85 static int qla4xxx_eh_target_reset(struct scsi_cmnd *cmd); ··· 463 464 * completion handling). Unfortunely, it sometimes calls the scheduler 464 465 * in interrupt context which is a big NO! NO!. 465 466 **/ 466 - static int qla4xxx_queuecommand(struct scsi_cmnd *cmd, 467 + static int qla4xxx_queuecommand_lck(struct scsi_cmnd *cmd, 467 468 void (*done)(struct scsi_cmnd *)) 468 469 { 469 470 struct scsi_qla_host *ha = to_qla_host(cmd->device->host); ··· 536 537 537 538 return 0; 538 539 } 540 + 541 + static DEF_SCSI_QCMD(qla4xxx_queuecommand) 539 542 540 543 /** 541 544 * qla4xxx_mem_free - frees memory allocated to adapter
+3 -1
drivers/scsi/qlogicfas408.c
··· 439 439 * Queued command 440 440 */ 441 441 442 - int qlogicfas408_queuecommand(struct scsi_cmnd *cmd, 442 + static int qlogicfas408_queuecommand_lck(struct scsi_cmnd *cmd, 443 443 void (*done) (struct scsi_cmnd *)) 444 444 { 445 445 struct qlogicfas408_priv *priv = get_priv_by_cmd(cmd); ··· 458 458 ql_icmd(cmd); 459 459 return 0; 460 460 } 461 + 462 + DEF_SCSI_QCMD(qlogicfas408_queuecommand) 461 463 462 464 /* 463 465 * Return bios parameters
+1 -2
drivers/scsi/qlogicfas408.h
··· 103 103 #define get_priv_by_host(x) (struct qlogicfas408_priv *)&((x)->hostdata[0]) 104 104 105 105 irqreturn_t qlogicfas408_ihandl(int irq, void *dev_id); 106 - int qlogicfas408_queuecommand(struct scsi_cmnd * cmd, 107 - void (*done) (struct scsi_cmnd *)); 106 + int qlogicfas408_queuecommand(struct Scsi_Host *h, struct scsi_cmnd * cmd); 108 107 int qlogicfas408_biosparam(struct scsi_device * disk, 109 108 struct block_device *dev, 110 109 sector_t capacity, int ip[]);
+3 -1
drivers/scsi/qlogicpti.c
··· 1003 1003 * 1004 1004 * "This code must fly." -davem 1005 1005 */ 1006 - static int qlogicpti_queuecommand(struct scsi_cmnd *Cmnd, void (*done)(struct scsi_cmnd *)) 1006 + static int qlogicpti_queuecommand_lck(struct scsi_cmnd *Cmnd, void (*done)(struct scsi_cmnd *)) 1007 1007 { 1008 1008 struct Scsi_Host *host = Cmnd->device->host; 1009 1009 struct qlogicpti *qpti = (struct qlogicpti *) host->hostdata; ··· 1051 1051 done(Cmnd); 1052 1052 return 1; 1053 1053 } 1054 + 1055 + static DEF_SCSI_QCMD(qlogicpti_queuecommand) 1054 1056 1055 1057 static int qlogicpti_return_status(struct Status_Entry *sts, int id) 1056 1058 {
+5 -13
drivers/scsi/scsi.c
··· 634 634 * Description: a serial number identifies a request for error recovery 635 635 * and debugging purposes. Protected by the Host_Lock of host. 636 636 */ 637 - static inline void scsi_cmd_get_serial(struct Scsi_Host *host, struct scsi_cmnd *cmd) 637 + void scsi_cmd_get_serial(struct Scsi_Host *host, struct scsi_cmnd *cmd) 638 638 { 639 639 cmd->serial_number = host->cmd_serial_number++; 640 640 if (cmd->serial_number == 0) 641 641 cmd->serial_number = host->cmd_serial_number++; 642 642 } 643 + EXPORT_SYMBOL(scsi_cmd_get_serial); 643 644 644 645 /** 645 646 * scsi_dispatch_command - Dispatch a command to the low-level driver. ··· 652 651 int scsi_dispatch_cmd(struct scsi_cmnd *cmd) 653 652 { 654 653 struct Scsi_Host *host = cmd->device->host; 655 - unsigned long flags = 0; 656 654 unsigned long timeout; 657 655 int rtn = 0; 658 656 ··· 737 737 goto out; 738 738 } 739 739 740 - spin_lock_irqsave(host->host_lock, flags); 741 - /* 742 - * AK: unlikely race here: for some reason the timer could 743 - * expire before the serial number is set up below. 744 - * 745 - * TODO: kill serial or move to blk layer 746 - */ 747 - scsi_cmd_get_serial(host, cmd); 748 - 749 740 if (unlikely(host->shost_state == SHOST_DEL)) { 750 741 cmd->result = (DID_NO_CONNECT << 16); 751 742 scsi_done(cmd); 752 743 } else { 753 744 trace_scsi_dispatch_cmd_start(cmd); 754 - rtn = host->hostt->queuecommand(cmd, scsi_done); 745 + cmd->scsi_done = scsi_done; 746 + rtn = host->hostt->queuecommand(host, cmd); 755 747 } 756 - spin_unlock_irqrestore(host->host_lock, flags); 748 + 757 749 if (rtn) { 758 750 trace_scsi_dispatch_cmd_error(cmd, rtn); 759 751 if (rtn != SCSI_MLQUEUE_DEVICE_BUSY &&
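The caller side of the conversion is visible in the `scsi.c` hunk above: `scsi_dispatch_cmd()` no longer wraps the call in `host_lock` or assigns a serial number; it records the completion callback in `cmd->scsi_done` and calls `queuecommand(host, cmd)` directly, leaving serialization to the driver (or to its `DEF_SCSI_QCMD` wrapper). A stand-alone sketch of that new dispatch path, with illustrative stand-in structs rather than the kernel ones:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins; not the kernel structs. */
struct cmnd;
struct host {
    int lock_held;                                   /* models host_lock */
    int (*queuecommand)(struct host *, struct cmnd *);
};
struct cmnd {
    struct host *host;
    void (*scsi_done)(struct cmnd *);
    int queued;
};

static void scsi_done(struct cmnd *cmd) { (void)cmd; }

/*
 * New-style dispatch, as in the hunk above: no host_lock, no serial
 * number; just record the completion callback and call the driver.
 */
static int dispatch(struct cmnd *cmd)
{
    struct host *host = cmd->host;
    cmd->scsi_done = scsi_done;
    return host->queuecommand(host, cmd);
}

/* A fully converted ("lockless") driver sees the lock not held on entry. */
static int lockless_qcmd(struct host *h, struct cmnd *cmd)
{
    assert(!h->lock_held);   /* the midlayer no longer holds host_lock */
    cmd->queued = 1;
    return 0;
}
```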
+3 -1
drivers/scsi/scsi_debug.c
··· 3538 3538 } 3539 3539 3540 3540 static 3541 - int scsi_debug_queuecommand(struct scsi_cmnd *SCpnt, done_funct_t done) 3541 + int scsi_debug_queuecommand_lck(struct scsi_cmnd *SCpnt, done_funct_t done) 3542 3542 { 3543 3543 unsigned char *cmd = (unsigned char *) SCpnt->cmnd; 3544 3544 int len, k; ··· 3883 3883 return schedule_resp(SCpnt, devip, done, errsts, 3884 3884 (delay_override ? 0 : scsi_debug_delay)); 3885 3885 } 3886 + 3887 + static DEF_SCSI_QCMD(scsi_debug_queuecommand) 3886 3888 3887 3889 static struct scsi_host_template sdebug_driver_template = { 3888 3890 .proc_info = scsi_debug_proc_info,
+2 -4
drivers/scsi/scsi_error.c
··· 773 773 struct Scsi_Host *shost = sdev->host; 774 774 DECLARE_COMPLETION_ONSTACK(done); 775 775 unsigned long timeleft; 776 - unsigned long flags; 777 776 struct scsi_eh_save ses; 778 777 int rtn; 779 778 780 779 scsi_eh_prep_cmnd(scmd, &ses, cmnd, cmnd_size, sense_bytes); 781 780 shost->eh_action = &done; 782 781 783 - spin_lock_irqsave(shost->host_lock, flags); 784 782 scsi_log_send(scmd); 785 - shost->hostt->queuecommand(scmd, scsi_eh_done); 786 - spin_unlock_irqrestore(shost->host_lock, flags); 783 + scmd->scsi_done = scsi_eh_done; 784 + shost->hostt->queuecommand(shost, scmd); 787 785 788 786 timeleft = wait_for_completion_timeout(&done, timeout); 789 787
-1
drivers/scsi/sd.c
··· 46 46 #include <linux/blkdev.h> 47 47 #include <linux/blkpg.h> 48 48 #include <linux/delay.h> 49 - #include <linux/smp_lock.h> 50 49 #include <linux/mutex.h> 51 50 #include <linux/string_helpers.h> 52 51 #include <linux/async.h>
+3 -1
drivers/scsi/stex.c
··· 572 572 } 573 573 574 574 static int 575 - stex_queuecommand(struct scsi_cmnd *cmd, void (* done)(struct scsi_cmnd *)) 575 + stex_queuecommand_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 576 576 { 577 577 struct st_hba *hba; 578 578 struct Scsi_Host *host; ··· 697 697 hba->send(hba, req, tag); 698 698 return 0; 699 699 } 700 + 701 + static DEF_SCSI_QCMD(stex_queuecommand) 700 702 701 703 static void stex_scsi_done(struct st_ccb *ccb) 702 704 {
+3 -1
drivers/scsi/sun3_NCR5380.c
··· 908 908 */ 909 909 910 910 /* Only make static if a wrapper function is used */ 911 - static int NCR5380_queue_command(struct scsi_cmnd *cmd, 911 + static int NCR5380_queue_command_lck(struct scsi_cmnd *cmd, 912 912 void (*done)(struct scsi_cmnd *)) 913 913 { 914 914 SETUP_HOSTDATA(cmd->device->host); ··· 1018 1018 NCR5380_main(NULL); 1019 1019 return 0; 1020 1020 } 1021 + 1022 + static DEF_SCSI_QCMD(NCR5380_queue_command) 1021 1023 1022 1024 /* 1023 1025 * Function : NCR5380_main (void)
+1 -2
drivers/scsi/sun3_scsi.h
··· 51 51 static int sun3scsi_detect (struct scsi_host_template *); 52 52 static const char *sun3scsi_info (struct Scsi_Host *); 53 53 static int sun3scsi_bus_reset(struct scsi_cmnd *); 54 - static int sun3scsi_queue_command(struct scsi_cmnd *, 55 - void (*done)(struct scsi_cmnd *)); 54 + static int sun3scsi_queue_command(struct Scsi_Host *, struct scsi_cmnd *); 56 55 static int sun3scsi_release (struct Scsi_Host *); 57 56 58 57 #ifndef CMD_PER_LUN
+3 -1
drivers/scsi/sym53c416.c
··· 734 734 return info; 735 735 } 736 736 737 - int sym53c416_queuecommand(Scsi_Cmnd *SCpnt, void (*done)(Scsi_Cmnd *)) 737 + static int sym53c416_queuecommand_lck(Scsi_Cmnd *SCpnt, void (*done)(Scsi_Cmnd *)) 738 738 { 739 739 int base; 740 740 unsigned long flags = 0; ··· 760 760 spin_unlock_irqrestore(&sym53c416_lock, flags); 761 761 return 0; 762 762 } 763 + 764 + DEF_SCSI_QCMD(sym53c416_queuecommand) 763 765 764 766 static int sym53c416_host_reset(Scsi_Cmnd *SCpnt) 765 767 {
+1 -1
drivers/scsi/sym53c416.h
··· 25 25 static int sym53c416_detect(struct scsi_host_template *); 26 26 static const char *sym53c416_info(struct Scsi_Host *); 27 27 static int sym53c416_release(struct Scsi_Host *); 28 - static int sym53c416_queuecommand(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *)); 28 + static int sym53c416_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); 29 29 static int sym53c416_host_reset(Scsi_Cmnd *); 30 30 static int sym53c416_bios_param(struct scsi_device *, struct block_device *, 31 31 sector_t, int *);
+3 -1
drivers/scsi/sym53c8xx_2/sym_glue.c
··· 505 505 * queuecommand method. Entered with the host adapter lock held and 506 506 * interrupts disabled. 507 507 */ 508 - static int sym53c8xx_queue_command(struct scsi_cmnd *cmd, 508 + static int sym53c8xx_queue_command_lck(struct scsi_cmnd *cmd, 509 509 void (*done)(struct scsi_cmnd *)) 510 510 { 511 511 struct sym_hcb *np = SYM_SOFTC_PTR(cmd); ··· 535 535 return SCSI_MLQUEUE_HOST_BUSY; 536 536 return 0; 537 537 } 538 + 539 + static DEF_SCSI_QCMD(sym53c8xx_queue_command) 538 540 539 541 /* 540 542 * Linux entry point of the interrupt handler.
+1 -2
drivers/scsi/t128.h
··· 96 96 static int t128_biosparam(struct scsi_device *, struct block_device *, 97 97 sector_t, int*); 98 98 static int t128_detect(struct scsi_host_template *); 99 - static int t128_queue_command(struct scsi_cmnd *, 100 - void (*done)(struct scsi_cmnd *)); 99 + static int t128_queue_command(struct Scsi_Host *, struct scsi_cmnd *); 101 100 static int t128_bus_reset(struct scsi_cmnd *); 102 101 103 102 #ifndef CMD_PER_LUN
+3 -1
drivers/scsi/tmscsim.c
··· 1883 1883 return; 1884 1884 } 1885 1885 1886 - static int DC390_queuecommand(struct scsi_cmnd *cmd, 1886 + static int DC390_queuecommand_lck(struct scsi_cmnd *cmd, 1887 1887 void (*done)(struct scsi_cmnd *)) 1888 1888 { 1889 1889 struct scsi_device *sdev = cmd->device; ··· 1943 1943 device_busy: 1944 1944 return SCSI_MLQUEUE_DEVICE_BUSY; 1945 1945 } 1946 + 1947 + static DEF_SCSI_QCMD(DC390_queuecommand) 1946 1948 1947 1949 static void dc390_dumpinfo (struct dc390_acb* pACB, struct dc390_dcb* pDCB, struct dc390_srb* pSRB) 1948 1950 {
+4 -2
drivers/scsi/u14-34f.c
··· 433 433 434 434 static int u14_34f_detect(struct scsi_host_template *); 435 435 static int u14_34f_release(struct Scsi_Host *); 436 - static int u14_34f_queuecommand(struct scsi_cmnd *, void (*done)(struct scsi_cmnd *)); 436 + static int u14_34f_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); 437 437 static int u14_34f_eh_abort(struct scsi_cmnd *); 438 438 static int u14_34f_eh_host_reset(struct scsi_cmnd *); 439 439 static int u14_34f_bios_param(struct scsi_device *, struct block_device *, ··· 1248 1248 1249 1249 } 1250 1250 1251 - static int u14_34f_queuecommand(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) { 1251 + static int u14_34f_queuecommand_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *)) { 1252 1252 unsigned int i, j, k; 1253 1253 struct mscp *cpp; 1254 1254 ··· 1328 1328 HD(j)->cp_stat[i] = IN_USE; 1329 1329 return 0; 1330 1330 } 1331 + 1332 + static DEF_SCSI_QCMD(u14_34f_queuecommand) 1331 1333 1332 1334 static int u14_34f_eh_abort(struct scsi_cmnd *SCarg) { 1333 1335 unsigned int i, j;
+3 -1
drivers/scsi/ultrastor.c
··· 700 700 mscp->transfer_data_length = transfer_length; 701 701 } 702 702 703 - static int ultrastor_queuecommand(struct scsi_cmnd *SCpnt, 703 + static int ultrastor_queuecommand_lck(struct scsi_cmnd *SCpnt, 704 704 void (*done) (struct scsi_cmnd *)) 705 705 { 706 706 struct mscp *my_mscp; ··· 824 824 825 825 return 0; 826 826 } 827 + 828 + static DEF_SCSI_QCMD(ultrastor_queuecommand) 827 829 828 830 /* This code must deal with 2 cases: 829 831
+1 -2
drivers/scsi/ultrastor.h
··· 15 15 16 16 static int ultrastor_detect(struct scsi_host_template *); 17 17 static const char *ultrastor_info(struct Scsi_Host *shpnt); 18 - static int ultrastor_queuecommand(struct scsi_cmnd *, 19 - void (*done)(struct scsi_cmnd *)); 18 + static int ultrastor_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); 20 19 static int ultrastor_abort(struct scsi_cmnd *); 21 20 static int ultrastor_host_reset(struct scsi_cmnd *); 22 21 static int ultrastor_biosparam(struct scsi_device *, struct block_device *,
+3 -1
drivers/scsi/vmw_pvscsi.c
··· 690 690 return 0; 691 691 } 692 692 693 - static int pvscsi_queue(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 693 + static int pvscsi_queue_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 694 694 { 695 695 struct Scsi_Host *host = cmd->device->host; 696 696 struct pvscsi_adapter *adapter = shost_priv(host); ··· 718 718 719 719 return 0; 720 720 } 721 + 722 + static DEF_SCSI_QCMD(pvscsi_queue) 721 723 722 724 static int pvscsi_abort(struct scsi_cmnd *cmd) 723 725 {
+4 -2
drivers/scsi/wd33c93.c
··· 371 371 msg[1] = offset; 372 372 } 373 373 374 - int 375 - wd33c93_queuecommand(struct scsi_cmnd *cmd, 374 + static int 375 + wd33c93_queuecommand_lck(struct scsi_cmnd *cmd, 376 376 void (*done)(struct scsi_cmnd *)) 377 377 { 378 378 struct WD33C93_hostdata *hostdata; ··· 467 467 spin_unlock_irq(&hostdata->lock); 468 468 return 0; 469 469 } 470 + 471 + DEF_SCSI_QCMD(wd33c93_queuecommand) 470 472 471 473 /* 472 474 * This routine attempts to start a scsi command. If the host_card is
+1 -2
drivers/scsi/wd33c93.h
··· 343 343 void wd33c93_init (struct Scsi_Host *instance, const wd33c93_regs regs, 344 344 dma_setup_t setup, dma_stop_t stop, int clock_freq); 345 345 int wd33c93_abort (struct scsi_cmnd *cmd); 346 - int wd33c93_queuecommand (struct scsi_cmnd *cmd, 347 - void (*done)(struct scsi_cmnd *)); 346 + int wd33c93_queuecommand (struct Scsi_Host *h, struct scsi_cmnd *cmd); 348 347 void wd33c93_intr (struct Scsi_Host *instance); 349 348 int wd33c93_proc_info(struct Scsi_Host *, char *, char **, off_t, int, int); 350 349 int wd33c93_host_reset (struct scsi_cmnd *);
+3 -1
drivers/scsi/wd7000.c
··· 1082 1082 return IRQ_HANDLED; 1083 1083 } 1084 1084 1085 - static int wd7000_queuecommand(struct scsi_cmnd *SCpnt, 1085 + static int wd7000_queuecommand_lck(struct scsi_cmnd *SCpnt, 1086 1086 void (*done)(struct scsi_cmnd *)) 1087 1087 { 1088 1088 Scb *scb; ··· 1138 1138 1139 1139 return 0; 1140 1140 } 1141 + 1142 + static DEF_SCSI_QCMD(wd7000_queuecommand) 1141 1143 1142 1144 static int wd7000_diagnostics(Adapter * host, int code) 1143 1145 {
-1
drivers/serial/crisv10.c
··· 18 18 #include <linux/tty.h> 19 19 #include <linux/tty_flip.h> 20 20 #include <linux/major.h> 21 - #include <linux/smp_lock.h> 22 21 #include <linux/string.h> 23 22 #include <linux/fcntl.h> 24 23 #include <linux/mm.h>
-1
drivers/serial/serial_core.c
··· 29 29 #include <linux/console.h> 30 30 #include <linux/proc_fs.h> 31 31 #include <linux/seq_file.h> 32 - #include <linux/smp_lock.h> 33 32 #include <linux/device.h> 34 33 #include <linux/serial.h> /* for serial_state and serial_icounter_struct */ 35 34 #include <linux/serial_core.h>
-1
drivers/staging/easycap/easycap.h
··· 77 77 #include <linux/slab.h> 78 78 #include <linux/module.h> 79 79 #include <linux/kref.h> 80 - #include <linux/smp_lock.h> 81 80 #include <linux/usb.h> 82 81 #include <linux/uaccess.h> 83 82
+4 -3
drivers/staging/hv/storvsc_drv.c
··· 72 72 73 73 /* Static decl */ 74 74 static int storvsc_probe(struct device *dev); 75 - static int storvsc_queuecommand(struct scsi_cmnd *scmnd, 76 - void (*done)(struct scsi_cmnd *)); 75 + static int storvsc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd); 77 76 static int storvsc_device_alloc(struct scsi_device *); 78 77 static int storvsc_device_configure(struct scsi_device *); 79 78 static int storvsc_host_reset_handler(struct scsi_cmnd *scmnd); ··· 594 595 /* 595 596 * storvsc_queuecommand - Initiate command processing 596 597 */ 597 - static int storvsc_queuecommand(struct scsi_cmnd *scmnd, 598 + static int storvsc_queuecommand_lck(struct scsi_cmnd *scmnd, 598 599 void (*done)(struct scsi_cmnd *)) 599 600 { 600 601 int ret; ··· 781 782 782 783 return ret; 783 784 } 785 + 786 + static DEF_SCSI_QCMD(storvsc_queuecommand) 784 787 785 788 static int storvsc_merge_bvec(struct request_queue *q, 786 789 struct bvec_merge_data *bmd, struct bio_vec *bvec)
-1
drivers/staging/intel_sst/intel_sst_app_interface.c
··· 34 34 #include <linux/uaccess.h> 35 35 #include <linux/firmware.h> 36 36 #include <linux/ioctl.h> 37 - #include <linux/smp_lock.h> 38 37 #ifdef CONFIG_MRST_RAR_HANDLER 39 38 #include <linux/rar_register.h> 40 39 #include "../../../drivers/staging/memrar/memrar.h"
+3 -1
drivers/staging/keucr/scsiglue.c
··· 87 87 88 88 /* This is always called with scsi_lock(host) held */ 89 89 //----- queuecommand() --------------------- 90 - static int queuecommand(struct scsi_cmnd *srb, void (*done)(struct scsi_cmnd *)) 90 + static int queuecommand_lck(struct scsi_cmnd *srb, void (*done)(struct scsi_cmnd *)) 91 91 { 92 92 struct us_data *us = host_to_us(srb->device->host); 93 93 ··· 116 116 117 117 return 0; 118 118 } 119 + 120 + static DEF_SCSI_QCMD(queuecommand) 119 121 120 122 /*********************************************************************** 121 123 * Error handling functions
-1
drivers/staging/rtl8712/osdep_service.h
··· 22 22 #include <linux/module.h> 23 23 #include <linux/sched.h> 24 24 #include <linux/kref.h> 25 - #include <linux/smp_lock.h> 26 25 #include <linux/netdevice.h> 27 26 #include <linux/skbuff.h> 28 27 #include <linux/usb.h>
-1
drivers/staging/speakup/buffers.c
··· 1 1 #include <linux/console.h> 2 - #include <linux/smp_lock.h> 3 2 #include <linux/types.h> 4 3 #include <linux/wait.h> 5 4
+1 -1
drivers/staging/stradis/Kconfig
··· 1 1 config VIDEO_STRADIS 2 2 tristate "Stradis 4:2:2 MPEG-2 video driver (DEPRECATED)" 3 - depends on EXPERIMENTAL && PCI && VIDEO_V4L1 && VIRT_TO_BUS 3 + depends on EXPERIMENTAL && PCI && VIDEO_V4L1 && VIRT_TO_BUS && BKL 4 4 help 5 5 Say Y here to enable support for the Stradis 4:2:2 MPEG-2 video 6 6 driver for PCI. There is a product page at
+127 -48
drivers/tty/sysrq.c
··· 554 554 #ifdef CONFIG_INPUT 555 555 556 556 /* Simple translation table for the SysRq keys */ 557 - static const unsigned char sysrq_xlate[KEY_MAX + 1] = 557 + static const unsigned char sysrq_xlate[KEY_CNT] = 558 558 "\000\0331234567890-=\177\t" /* 0x00 - 0x0f */ 559 559 "qwertyuiop[]\r\000as" /* 0x10 - 0x1f */ 560 560 "dfghjkl;'`\000\\zxcv" /* 0x20 - 0x2f */ ··· 563 563 "230\177\000\000\213\214\000\000\000\000\000\000\000\000\000\000" /* 0x50 - 0x5f */ 564 564 "\r\000/"; /* 0x60 - 0x6f */ 565 565 566 - static bool sysrq_down; 567 - static int sysrq_alt_use; 568 - static int sysrq_alt; 569 - static DEFINE_SPINLOCK(sysrq_event_lock); 566 + struct sysrq_state { 567 + struct input_handle handle; 568 + struct work_struct reinject_work; 569 + unsigned long key_down[BITS_TO_LONGS(KEY_CNT)]; 570 + unsigned int alt; 571 + unsigned int alt_use; 572 + bool active; 573 + bool need_reinject; 574 + }; 570 575 571 - static bool sysrq_filter(struct input_handle *handle, unsigned int type, 572 - unsigned int code, int value) 576 + static void sysrq_reinject_alt_sysrq(struct work_struct *work) 573 577 { 578 + struct sysrq_state *sysrq = 579 + container_of(work, struct sysrq_state, reinject_work); 580 + struct input_handle *handle = &sysrq->handle; 581 + unsigned int alt_code = sysrq->alt_use; 582 + 583 + if (sysrq->need_reinject) { 584 + /* Simulate press and release of Alt + SysRq */ 585 + input_inject_event(handle, EV_KEY, alt_code, 1); 586 + input_inject_event(handle, EV_KEY, KEY_SYSRQ, 1); 587 + input_inject_event(handle, EV_SYN, SYN_REPORT, 1); 588 + 589 + input_inject_event(handle, EV_KEY, KEY_SYSRQ, 0); 590 + input_inject_event(handle, EV_KEY, alt_code, 0); 591 + input_inject_event(handle, EV_SYN, SYN_REPORT, 1); 592 + } 593 + } 594 + 595 + static bool sysrq_filter(struct input_handle *handle, 596 + unsigned int type, unsigned int code, int value) 597 + { 598 + struct sysrq_state *sysrq = handle->private; 599 + bool was_active = sysrq->active; 574 600 bool suppress; 575 
601 576 - /* We are called with interrupts disabled, just take the lock */ 577 - spin_lock(&sysrq_event_lock); 602 + switch (type) { 578 603 579 - if (type != EV_KEY) 580 - goto out; 581 - 582 - switch (code) { 583 - 584 - case KEY_LEFTALT: 585 - case KEY_RIGHTALT: 586 - if (value) 587 - sysrq_alt = code; 588 - else { 589 - if (sysrq_down && code == sysrq_alt_use) 590 - sysrq_down = false; 591 - 592 - sysrq_alt = 0; 593 - } 604 + case EV_SYN: 605 + suppress = false; 594 606 break; 595 607 596 - case KEY_SYSRQ: 597 - if (value == 1 && sysrq_alt) { 598 - sysrq_down = true; 599 - sysrq_alt_use = sysrq_alt; 608 + case EV_KEY: 609 + switch (code) { 610 + 611 + case KEY_LEFTALT: 612 + case KEY_RIGHTALT: 613 + if (!value) { 614 + /* One of ALTs is being released */ 615 + if (sysrq->active && code == sysrq->alt_use) 616 + sysrq->active = false; 617 + 618 + sysrq->alt = KEY_RESERVED; 619 + 620 + } else if (value != 2) { 621 + sysrq->alt = code; 622 + sysrq->need_reinject = false; 623 + } 624 + break; 625 + 626 + case KEY_SYSRQ: 627 + if (value == 1 && sysrq->alt != KEY_RESERVED) { 628 + sysrq->active = true; 629 + sysrq->alt_use = sysrq->alt; 630 + /* 631 + * If nothing else will be pressed we'll need 632 + * to * re-inject Alt-SysRq keysroke. 633 + */ 634 + sysrq->need_reinject = true; 635 + } 636 + 637 + /* 638 + * Pretend that sysrq was never pressed at all. This 639 + * is needed to properly handle KGDB which will try 640 + * to release all keys after exiting debugger. If we 641 + * do not clear key bit it KGDB will end up sending 642 + * release events for Alt and SysRq, potentially 643 + * triggering print screen function. 
644 + */ 645 + if (sysrq->active) 646 + clear_bit(KEY_SYSRQ, handle->dev->key); 647 + 648 + break; 649 + 650 + default: 651 + if (sysrq->active && value && value != 2) { 652 + sysrq->need_reinject = false; 653 + __handle_sysrq(sysrq_xlate[code], true); 654 + } 655 + break; 656 + } 657 + 658 + suppress = sysrq->active; 659 + 660 + if (!sysrq->active) { 661 + /* 662 + * If we are not suppressing key presses keep track of 663 + * keyboard state so we can release keys that have been 664 + * pressed before entering SysRq mode. 665 + */ 666 + if (value) 667 + set_bit(code, sysrq->key_down); 668 + else 669 + clear_bit(code, sysrq->key_down); 670 + 671 + if (was_active) 672 + schedule_work(&sysrq->reinject_work); 673 + 674 + } else if (value == 0 && 675 + test_and_clear_bit(code, sysrq->key_down)) { 676 + /* 677 + * Pass on release events for keys that was pressed before 678 + * entering SysRq mode. 679 + */ 680 + suppress = false; 600 681 } 601 682 break; 602 683 603 684 default: 604 - if (sysrq_down && value && value != 2) 605 - __handle_sysrq(sysrq_xlate[code], true); 685 + suppress = sysrq->active; 606 686 break; 607 687 } 608 - 609 - out: 610 - suppress = sysrq_down; 611 - spin_unlock(&sysrq_event_lock); 612 688 613 689 return suppress; 614 690 } ··· 693 617 struct input_dev *dev, 694 618 const struct input_device_id *id) 695 619 { 696 - struct input_handle *handle; 620 + struct sysrq_state *sysrq; 697 621 int error; 698 622 699 - sysrq_down = false; 700 - sysrq_alt = 0; 701 - 702 - handle = kzalloc(sizeof(struct input_handle), GFP_KERNEL); 703 - if (!handle) 623 + sysrq = kzalloc(sizeof(struct sysrq_state), GFP_KERNEL); 624 + if (!sysrq) 704 625 return -ENOMEM; 705 626 706 - handle->dev = dev; 707 - handle->handler = handler; 708 - handle->name = "sysrq"; 627 + INIT_WORK(&sysrq->reinject_work, sysrq_reinject_alt_sysrq); 709 628 710 - error = input_register_handle(handle); 629 + sysrq->handle.dev = dev; 630 + sysrq->handle.handler = handler; 631 + sysrq->handle.name = 
"sysrq"; 632 + sysrq->handle.private = sysrq; 633 + 634 + error = input_register_handle(&sysrq->handle); 711 635 if (error) { 712 636 pr_err("Failed to register input sysrq handler, error %d\n", 713 637 error); 714 638 goto err_free; 715 639 } 716 640 717 - error = input_open_device(handle); 641 + error = input_open_device(&sysrq->handle); 718 642 if (error) { 719 643 pr_err("Failed to open input device, error %d\n", error); 720 644 goto err_unregister; ··· 723 647 return 0; 724 648 725 649 err_unregister: 726 - input_unregister_handle(handle); 650 + input_unregister_handle(&sysrq->handle); 727 651 err_free: 728 - kfree(handle); 652 + kfree(sysrq); 729 653 return error; 730 654 } 731 655 732 656 static void sysrq_disconnect(struct input_handle *handle) 733 657 { 658 + struct sysrq_state *sysrq = handle->private; 659 + 734 660 input_close_device(handle); 661 + cancel_work_sync(&sysrq->reinject_work); 735 662 input_unregister_handle(handle); 736 - kfree(handle); 663 + kfree(sysrq); 737 664 } 738 665 739 666 /*
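A key detail of the sysrq rewrite above is the per-handle `key_down` bitmap: while SysRq is inactive the filter records every key press, so that while it is suppressing events during an Alt-SysRq chord it can still pass through release events for keys that were already down beforehand. A self-contained sketch of that bookkeeping, with plain bit operations standing in for the kernel's `set_bit`/`clear_bit`/`test_and_clear_bit`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define KEY_CNT 256
static uint64_t key_down[KEY_CNT / 64];   /* models sysrq->key_down */
static bool active;                       /* models sysrq->active */

static void set_key(unsigned c) { key_down[c / 64] |=  (UINT64_C(1) << (c % 64)); }
static void clr_key(unsigned c) { key_down[c / 64] &= ~(UINT64_C(1) << (c % 64)); }
static bool test_clr(unsigned c)
{
    bool was = key_down[c / 64] & (UINT64_C(1) << (c % 64));
    clr_key(c);
    return was;
}

/*
 * Decide whether to suppress one EV_KEY event, mirroring the logic in
 * the sysrq_filter() hunk above: while inactive, track key state;
 * while active, suppress everything except releases of keys that were
 * pressed before SysRq mode was entered.
 */
static bool filter_key(unsigned code, int value)
{
    bool suppress = active;

    if (!active) {
        if (value)
            set_key(code);
        else
            clr_key(code);
    } else if (value == 0 && test_clr(code)) {
        suppress = false;   /* pass on release of a pre-SysRq key */
    }
    return suppress;
}
```

This is why the old single `sysrq_down` flag was not enough: without per-key state, a key held across entry into SysRq mode would never deliver its release event to the rest of the input stack.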
-1
drivers/usb/core/devices.c
··· 54 54 #include <linux/gfp.h> 55 55 #include <linux/poll.h> 56 56 #include <linux/usb.h> 57 - #include <linux/smp_lock.h> 58 57 #include <linux/usbdevice_fs.h> 59 58 #include <linux/usb/hcd.h> 60 59 #include <linux/mutex.h>
-1
drivers/usb/core/devio.c
··· 37 37 #include <linux/fs.h> 38 38 #include <linux/mm.h> 39 39 #include <linux/slab.h> 40 - #include <linux/smp_lock.h> 41 40 #include <linux/signal.h> 42 41 #include <linux/poll.h> 43 42 #include <linux/module.h>
-1
drivers/usb/core/file.c
··· 19 19 #include <linux/errno.h> 20 20 #include <linux/rwsem.h> 21 21 #include <linux/slab.h> 22 - #include <linux/smp_lock.h> 23 22 #include <linux/usb.h> 24 23 25 24 #include "usb.h"
-1
drivers/usb/core/inode.c
··· 39 39 #include <linux/parser.h> 40 40 #include <linux/notifier.h> 41 41 #include <linux/seq_file.h> 42 - #include <linux/smp_lock.h> 43 42 #include <linux/usb/hcd.h> 44 43 #include <asm/byteorder.h> 45 44 #include "usb.h"
-1
drivers/usb/gadget/f_fs.c
··· 30 30 #include <linux/blkdev.h> 31 31 #include <linux/pagemap.h> 32 32 #include <asm/unaligned.h> 33 - #include <linux/smp_lock.h> 34 33 35 34 #include <linux/usb/composite.h> 36 35 #include <linux/usb/functionfs.h>
-1
drivers/usb/gadget/f_hid.c
··· 25 25 #include <linux/cdev.h> 26 26 #include <linux/mutex.h> 27 27 #include <linux/poll.h> 28 - #include <linux/smp_lock.h> 29 28 #include <linux/uaccess.h> 30 29 #include <linux/wait.h> 31 30 #include <linux/usb/g_hid.h>
-1
drivers/usb/host/isp1362-hcd.c
··· 70 70 #include <linux/ioport.h> 71 71 #include <linux/sched.h> 72 72 #include <linux/slab.h> 73 - #include <linux/smp_lock.h> 74 73 #include <linux/errno.h> 75 74 #include <linux/init.h> 76 75 #include <linux/list.h>
-1
drivers/usb/host/uhci-debug.c
··· 12 12 #include <linux/slab.h> 13 13 #include <linux/kernel.h> 14 14 #include <linux/debugfs.h> 15 - #include <linux/smp_lock.h> 16 15 #include <asm/io.h> 17 16 18 17 #include "uhci-hcd.h"
+4 -2
drivers/usb/image/microtek.c
··· 364 364 } 365 365 366 366 static int 367 - mts_scsi_queuecommand(struct scsi_cmnd *srb, mts_scsi_cmnd_callback callback); 367 + mts_scsi_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *srb); 368 368 369 369 static void mts_transfer_cleanup( struct urb *transfer ); 370 370 static void mts_do_sg(struct urb * transfer); ··· 573 573 574 574 575 575 static int 576 - mts_scsi_queuecommand(struct scsi_cmnd *srb, mts_scsi_cmnd_callback callback) 576 + mts_scsi_queuecommand_lck(struct scsi_cmnd *srb, mts_scsi_cmnd_callback callback) 577 577 { 578 578 struct mts_desc* desc = (struct mts_desc*)(srb->device->host->hostdata[0]); 579 579 int err = 0; ··· 625 625 out: 626 626 return err; 627 627 } 628 + 629 + static DEF_SCSI_QCMD(mts_scsi_queuecommand) 628 630 629 631 static struct scsi_host_template mts_scsi_host_template = { 630 632 .module = THIS_MODULE,
-1
drivers/usb/mon/mon_bin.c
··· 15 15 #include <linux/poll.h> 16 16 #include <linux/compat.h> 17 17 #include <linux/mm.h> 18 - #include <linux/smp_lock.h> 19 18 #include <linux/scatterlist.h> 20 19 #include <linux/slab.h> 21 20
-1
drivers/usb/mon/mon_stat.c
··· 11 11 #include <linux/slab.h> 12 12 #include <linux/usb.h> 13 13 #include <linux/fs.h> 14 - #include <linux/smp_lock.h> 15 14 #include <asm/uaccess.h> 16 15 17 16 #include "usb_mon.h"
-1
drivers/usb/serial/usb-serial.c
··· 21 21 #include <linux/errno.h> 22 22 #include <linux/init.h> 23 23 #include <linux/slab.h> 24 - #include <linux/smp_lock.h> 25 24 #include <linux/tty.h> 26 25 #include <linux/tty_driver.h> 27 26 #include <linux/tty_flip.h>
+3 -1
drivers/usb/storage/scsiglue.c
··· 285 285 286 286 /* queue a command */ 287 287 /* This is always called with scsi_lock(host) held */ 288 - static int queuecommand(struct scsi_cmnd *srb, 288 + static int queuecommand_lck(struct scsi_cmnd *srb, 289 289 void (*done)(struct scsi_cmnd *)) 290 290 { 291 291 struct us_data *us = host_to_us(srb->device->host); ··· 314 314 315 315 return 0; 316 316 } 317 + 318 + static DEF_SCSI_QCMD(queuecommand) 317 319 318 320 /*********************************************************************** 319 321 * Error handling functions
+3 -1
drivers/usb/storage/uas.c
··· 430 430 return 0; 431 431 } 432 432 433 - static int uas_queuecommand(struct scsi_cmnd *cmnd, 433 + static int uas_queuecommand_lck(struct scsi_cmnd *cmnd, 434 434 void (*done)(struct scsi_cmnd *)) 435 435 { 436 436 struct scsi_device *sdev = cmnd->device; ··· 487 487 488 488 return 0; 489 489 } 490 + 491 + static DEF_SCSI_QCMD(uas_queuecommand) 490 492 491 493 static int uas_eh_abort_handler(struct scsi_cmnd *cmnd) 492 494 {
-1
drivers/video/console/vgacon.c
··· 47 47 #include <linux/ioport.h> 48 48 #include <linux/init.h> 49 49 #include <linux/screen_info.h> 50 - #include <linux/smp_lock.h> 51 50 #include <video/vga.h> 52 51 #include <asm/io.h> 53 52
-1
drivers/xen/xenfs/privcmd.c
··· 15 15 #include <linux/mman.h> 16 16 #include <linux/uaccess.h> 17 17 #include <linux/swap.h> 18 - #include <linux/smp_lock.h> 19 18 #include <linux/highmem.h> 20 19 #include <linux/pagemap.h> 21 20 #include <linux/seq_file.h>
-1
drivers/zorro/proc.c
··· 13 13 #include <linux/proc_fs.h> 14 14 #include <linux/seq_file.h> 15 15 #include <linux/init.h> 16 - #include <linux/smp_lock.h> 17 16 #include <asm/uaccess.h> 18 17 #include <asm/amigahw.h> 19 18 #include <asm/setup.h>
-1
fs/block_dev.c
··· 11 11 #include <linux/slab.h> 12 12 #include <linux/kmod.h> 13 13 #include <linux/major.h> 14 - #include <linux/smp_lock.h> 15 14 #include <linux/device_cgroup.h> 16 15 #include <linux/highmem.h> 17 16 #include <linux/blkdev.h>
+3 -3
fs/ceph/addr.c
··· 204 204 err = ceph_osdc_readpages(osdc, ceph_vino(inode), &ci->i_layout, 205 205 page->index << PAGE_CACHE_SHIFT, &len, 206 206 ci->i_truncate_seq, ci->i_truncate_size, 207 - &page, 1); 207 + &page, 1, 0); 208 208 if (err == -ENOENT) 209 209 err = 0; 210 210 if (err < 0) { ··· 287 287 rc = ceph_osdc_readpages(osdc, ceph_vino(inode), &ci->i_layout, 288 288 offset, &len, 289 289 ci->i_truncate_seq, ci->i_truncate_size, 290 - pages, nr_pages); 290 + pages, nr_pages, 0); 291 291 if (rc == -ENOENT) 292 292 rc = 0; 293 293 if (rc < 0) ··· 774 774 snapc, do_sync, 775 775 ci->i_truncate_seq, 776 776 ci->i_truncate_size, 777 - &inode->i_mtime, true, 1); 777 + &inode->i_mtime, true, 1, 0); 778 778 max_pages = req->r_num_pages; 779 779 780 780 alloc_page_vec(fsc, req);
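The ceph hunks in this merge thread a new trailing argument through `ceph_osdc_readpages()` and the write path: the offset of the data within the first page of the page vector (the callers above pass `0` because page-cache pages hold the data starting at offset zero). In the `striped_read()` hunk further down, that alignment is computed one way for buffered I/O and another for O_DIRECT, where the vector wraps the user's buffer and alignment is measured relative to the start of the original request. A small sketch of that arithmetic (function name and scaffolding are illustrative, mirroring the expressions in the hunk):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define PAGE_MASK (~(PAGE_SIZE - 1))

/*
 * Offset of the data within the first page of the vector, as computed
 * in the striped_read() hunk: buffered reads use pos's offset within
 * its page-cache page; O_DIRECT measures relative to the request start
 * (io_align = the original file offset's offset within a page).
 */
static unsigned page_align(uint64_t pos, unsigned io_align, int o_direct)
{
    if (o_direct)
        return (unsigned)((pos - io_align) & ~PAGE_MASK);
    return (unsigned)(pos & ~PAGE_MASK);
}
```

Getting this per-call instead of assuming page-aligned buffers is what lets direct I/O work on non-page-aligned user buffers.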
+10 -7
fs/ceph/caps.c
··· 1430 1430 invalidating_gen == ci->i_rdcache_gen) { 1431 1431 /* success. */ 1432 1432 dout("try_nonblocking_invalidate %p success\n", inode); 1433 - ci->i_rdcache_gen = 0; 1434 - ci->i_rdcache_revoking = 0; 1433 + /* save any racing async invalidate some trouble */ 1434 + ci->i_rdcache_revoking = ci->i_rdcache_gen - 1; 1435 1435 return 0; 1436 1436 } 1437 1437 dout("try_nonblocking_invalidate %p failed\n", inode); ··· 2273 2273 { 2274 2274 struct ceph_inode_info *ci = ceph_inode(inode); 2275 2275 int mds = session->s_mds; 2276 - unsigned seq = le32_to_cpu(grant->seq); 2277 - unsigned issue_seq = le32_to_cpu(grant->issue_seq); 2276 + int seq = le32_to_cpu(grant->seq); 2278 2277 int newcaps = le32_to_cpu(grant->caps); 2279 2278 int issued, implemented, used, wanted, dirty; 2280 2279 u64 size = le64_to_cpu(grant->size); ··· 2285 2286 int revoked_rdcache = 0; 2286 2287 int queue_invalidate = 0; 2287 2288 2288 - dout("handle_cap_grant inode %p cap %p mds%d seq %u/%u %s\n", 2289 - inode, cap, mds, seq, issue_seq, ceph_cap_string(newcaps)); 2289 + dout("handle_cap_grant inode %p cap %p mds%d seq %d %s\n", 2290 + inode, cap, mds, seq, ceph_cap_string(newcaps)); 2290 2291 dout(" size %llu max_size %llu, i_size %llu\n", size, max_size, 2291 2292 inode->i_size); 2292 2293 ··· 2382 2383 } 2383 2384 2384 2385 cap->seq = seq; 2385 - cap->issue_seq = issue_seq; 2386 2386 2387 2387 /* file layout may have changed */ 2388 2388 ci->i_layout = grant->layout; ··· 2689 2691 NULL /* no caps context */); 2690 2692 try_flush_caps(inode, session, NULL); 2691 2693 up_read(&mdsc->snap_rwsem); 2694 + 2695 + /* make sure we re-request max_size, if necessary */ 2696 + spin_lock(&inode->i_lock); 2697 + ci->i_requested_max_size = 0; 2698 + spin_unlock(&inode->i_lock); 2692 2699 } 2693 2700 2694 2701 /*
+12 -4
fs/ceph/dir.c
··· 336 336 if (req->r_reply_info.dir_end) { 337 337 kfree(fi->last_name); 338 338 fi->last_name = NULL; 339 - fi->next_offset = 2; 339 + if (ceph_frag_is_rightmost(frag)) 340 + fi->next_offset = 2; 341 + else 342 + fi->next_offset = 0; 340 343 } else { 341 344 rinfo = &req->r_reply_info; 342 345 err = note_last_dentry(fi, ··· 358 355 u64 pos = ceph_make_fpos(frag, off); 359 356 struct ceph_mds_reply_inode *in = 360 357 rinfo->dir_in[off - fi->offset].in; 358 + struct ceph_vino vino; 359 + ino_t ino; 360 + 361 361 dout("readdir off %d (%d/%d) -> %lld '%.*s' %p\n", 362 362 off, off - fi->offset, rinfo->dir_nr, pos, 363 363 rinfo->dir_dname_len[off - fi->offset], 364 364 rinfo->dir_dname[off - fi->offset], in); 365 365 BUG_ON(!in); 366 366 ftype = le32_to_cpu(in->mode) >> 12; 367 + vino.ino = le64_to_cpu(in->ino); 368 + vino.snap = le64_to_cpu(in->snapid); 369 + ino = ceph_vino_to_ino(vino); 367 370 if (filldir(dirent, 368 371 rinfo->dir_dname[off - fi->offset], 369 372 rinfo->dir_dname_len[off - fi->offset], 370 - pos, 371 - le64_to_cpu(in->ino), 372 - ftype) < 0) { 373 + pos, ino, ftype) < 0) { 373 374 dout("filldir stopping us...\n"); 374 375 return 0; 375 376 } ··· 421 414 fi->last_readdir = NULL; 422 415 } 423 416 kfree(fi->last_name); 417 + fi->last_name = NULL; 424 418 fi->next_offset = 2; /* compensate for . and .. */ 425 419 if (fi->dentry) { 426 420 dput(fi->dentry);
+34 -18
fs/ceph/file.c
··· 154 154 } 155 155 156 156 /* 157 - * No need to block if we have any caps. Update wanted set 157 + * No need to block if we have caps on the auth MDS (for 158 + * write) or any MDS (for read). Update wanted set 158 159 * asynchronously. 159 160 */ 160 161 spin_lock(&inode->i_lock); 161 - if (__ceph_is_any_real_caps(ci)) { 162 + if (__ceph_is_any_real_caps(ci) && 163 + (((fmode & CEPH_FILE_MODE_WR) == 0) || ci->i_auth_cap)) { 162 164 int mds_wanted = __ceph_caps_mds_wanted(ci); 163 165 int issued = __ceph_caps_issued(ci, NULL); 164 166 ··· 282 280 static int striped_read(struct inode *inode, 283 281 u64 off, u64 len, 284 282 struct page **pages, int num_pages, 285 - int *checkeof) 283 + int *checkeof, bool align_to_pages) 286 284 { 287 285 struct ceph_fs_client *fsc = ceph_inode_to_client(inode); 288 286 struct ceph_inode_info *ci = ceph_inode(inode); 289 287 u64 pos, this_len; 288 + int io_align, page_align; 290 289 int page_off = off & ~PAGE_CACHE_MASK; /* first byte's offset in page */ 291 290 int left, pages_left; 292 291 int read; ··· 303 300 page_pos = pages; 304 301 pages_left = num_pages; 305 302 read = 0; 303 + io_align = off & ~PAGE_MASK; 306 304 307 305 more: 306 + if (align_to_pages) 307 + page_align = (pos - io_align) & ~PAGE_MASK; 308 + else 309 + page_align = pos & ~PAGE_MASK; 308 310 this_len = left; 309 311 ret = ceph_osdc_readpages(&fsc->client->osdc, ceph_vino(inode), 310 312 &ci->i_layout, pos, &this_len, 311 313 ci->i_truncate_seq, 312 314 ci->i_truncate_size, 313 - page_pos, pages_left); 315 + page_pos, pages_left, page_align); 314 316 hit_stripe = this_len < left; 315 317 was_short = ret >= 0 && ret < this_len; 316 318 if (ret == -ENOENT) ··· 382 374 dout("sync_read on file %p %llu~%u %s\n", file, off, len, 383 375 (file->f_flags & O_DIRECT) ? "O_DIRECT" : ""); 384 376 385 - if (file->f_flags & O_DIRECT) { 386 - pages = ceph_get_direct_page_vector(data, num_pages, off, len); 387 - 388 - /* 389 - * flush any page cache pages in this range. 
this 390 - * will make concurrent normal and O_DIRECT io slow, 391 - * but it will at least behave sensibly when they are 392 - * in sequence. 393 - */ 394 - } else { 377 + if (file->f_flags & O_DIRECT) 378 + pages = ceph_get_direct_page_vector(data, num_pages); 379 + else 395 380 pages = ceph_alloc_page_vector(num_pages, GFP_NOFS); 396 - } 397 381 if (IS_ERR(pages)) 398 382 return PTR_ERR(pages); 399 383 384 + /* 385 + * flush any page cache pages in this range. this 386 + * will make concurrent normal and sync io slow, 387 + * but it will at least behave sensibly when they are 388 + * in sequence. 389 + */ 400 390 ret = filemap_write_and_wait(inode->i_mapping); 401 391 if (ret < 0) 402 392 goto done; 403 393 404 - ret = striped_read(inode, off, len, pages, num_pages, checkeof); 394 + ret = striped_read(inode, off, len, pages, num_pages, checkeof, 395 + file->f_flags & O_DIRECT); 405 396 406 397 if (ret >= 0 && (file->f_flags & O_DIRECT) == 0) 407 398 ret = ceph_copy_page_vector_to_user(pages, data, off, ret); ··· 455 448 int flags; 456 449 int do_sync = 0; 457 450 int check_caps = 0; 451 + int page_align, io_align; 458 452 int ret; 459 453 struct timespec mtime = CURRENT_TIME; 460 454 ··· 469 461 pos = i_size_read(inode); 470 462 else 471 463 pos = *offset; 464 + 465 + io_align = pos & ~PAGE_MASK; 472 466 473 467 ret = filemap_write_and_wait_range(inode->i_mapping, pos, pos + left); 474 468 if (ret < 0) ··· 496 486 */ 497 487 more: 498 488 len = left; 489 + if (file->f_flags & O_DIRECT) 490 + /* write from beginning of first page, regardless of 491 + io alignment */ 492 + page_align = (pos - io_align) & ~PAGE_MASK; 493 + else 494 + page_align = pos & ~PAGE_MASK; 499 495 req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout, 500 496 ceph_vino(inode), pos, &len, 501 497 CEPH_OSD_OP_WRITE, flags, 502 498 ci->i_snap_realm->cached_context, 503 499 do_sync, 504 500 ci->i_truncate_seq, ci->i_truncate_size, 505 - &mtime, false, 2); 501 + &mtime, false, 2, 
page_align); 506 502 if (!req) 507 503 return -ENOMEM; 508 504 509 505 num_pages = calc_pages_for(pos, len); 510 506 511 507 if (file->f_flags & O_DIRECT) { 512 - pages = ceph_get_direct_page_vector(data, num_pages, pos, len); 508 + pages = ceph_get_direct_page_vector(data, num_pages); 513 509 if (IS_ERR(pages)) { 514 510 ret = PTR_ERR(pages); 515 511 goto out;
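The file.c hunks above compute a `page_align` for the OSD request: with O_DIRECT the user buffer fills pages from byte 0, so each chunk's in-page offset is taken relative to the start of the whole I/O rather than to the absolute file position. A small sketch of the arithmetic, assuming a 4096-byte page (the kernel's `pos & ~PAGE_MASK` is the offset of `pos` within its page):

```python
PAGE_SIZE = 4096  # assumed for the demo; the kernel value is arch-dependent

def offset_in_page(pos):
    # kernel spelling: pos & ~PAGE_MASK
    return pos & (PAGE_SIZE - 1)

def page_align(pos, io_start, o_direct):
    if o_direct:
        # O_DIRECT: pages are filled from byte 0, so each chunk's
        # in-page offset is measured from the start of the I/O
        return offset_in_page(pos - offset_in_page(io_start))
    # buffered/sync: pages mirror the page cache, so the offset
    # tracks the absolute file position
    return offset_in_page(pos)
```

For an I/O starting at file offset 4196 (`io_align` = 100), every O_DIRECT chunk aligns to 0 within its page, while the buffered path keeps the 100-byte in-page offset.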
+31 -19
fs/ceph/inode.c
··· 2 2 3 3 #include <linux/module.h> 4 4 #include <linux/fs.h> 5 - #include <linux/smp_lock.h> 6 5 #include <linux/slab.h> 7 6 #include <linux/string.h> 8 7 #include <linux/uaccess.h> ··· 470 471 471 472 if (issued & (CEPH_CAP_FILE_EXCL| 472 473 CEPH_CAP_FILE_WR| 473 - CEPH_CAP_FILE_BUFFER)) { 474 + CEPH_CAP_FILE_BUFFER| 475 + CEPH_CAP_AUTH_EXCL| 476 + CEPH_CAP_XATTR_EXCL)) { 474 477 if (timespec_compare(ctime, &inode->i_ctime) > 0) { 475 478 dout("ctime %ld.%09ld -> %ld.%09ld inc w/ cap\n", 476 479 inode->i_ctime.tv_sec, inode->i_ctime.tv_nsec, ··· 512 511 warn = 1; 513 512 } 514 513 } else { 515 - /* we have no write caps; whatever the MDS says is true */ 514 + /* we have no write|excl caps; whatever the MDS says is true */ 516 515 if (ceph_seq_cmp(time_warp_seq, ci->i_time_warp_seq) >= 0) { 517 516 inode->i_ctime = *ctime; 518 517 inode->i_mtime = *mtime; ··· 568 567 569 568 /* 570 569 * provided version will be odd if inode value is projected, 571 - * even if stable. skip the update if we have a newer info 572 - * (e.g., due to inode info racing form multiple MDSs), or if 573 - * we are getting projected (unstable) inode info. 570 + * even if stable. skip the update if we have newer stable 571 + * info (ours>=theirs, e.g. due to racing mds replies), unless 572 + * we are getting projected (unstable) info (in which case the 573 + * version is odd, and we want ours>theirs). 
574 + * us them 575 + * 2 2 skip 576 + * 3 2 skip 577 + * 3 3 update 574 578 */ 575 579 if (le64_to_cpu(info->version) > 0 && 576 - (ci->i_version & ~1) > le64_to_cpu(info->version)) 580 + (ci->i_version & ~1) >= le64_to_cpu(info->version)) 577 581 goto no_change; 578 582 579 583 issued = __ceph_caps_issued(ci, &implemented); ··· 612 606 le32_to_cpu(info->time_warp_seq), 613 607 &ctime, &mtime, &atime); 614 608 615 - ci->i_max_size = le64_to_cpu(info->max_size); 609 + /* only update max_size on auth cap */ 610 + if ((info->cap.flags & CEPH_CAP_FLAG_AUTH) && 611 + ci->i_max_size != le64_to_cpu(info->max_size)) { 612 + dout("max_size %lld -> %llu\n", ci->i_max_size, 613 + le64_to_cpu(info->max_size)); 614 + ci->i_max_size = le64_to_cpu(info->max_size); 615 + } 616 + 616 617 ci->i_layout = info->layout; 617 618 inode->i_blkbits = fls(le32_to_cpu(info->layout.fl_stripe_unit)) - 1; 618 619 ··· 1068 1055 ininfo = rinfo->targeti.in; 1069 1056 vino.ino = le64_to_cpu(ininfo->ino); 1070 1057 vino.snap = le64_to_cpu(ininfo->snapid); 1071 - if (!dn->d_inode) { 1058 + in = dn->d_inode; 1059 + if (!in) { 1072 1060 in = ceph_get_inode(sb, vino); 1073 1061 if (IS_ERR(in)) { 1074 1062 pr_err("fill_trace bad get_inode " ··· 1400 1386 spin_lock(&inode->i_lock); 1401 1387 dout("invalidate_pages %p gen %d revoking %d\n", inode, 1402 1388 ci->i_rdcache_gen, ci->i_rdcache_revoking); 1403 - if (ci->i_rdcache_gen == 0 || 1404 - ci->i_rdcache_revoking != ci->i_rdcache_gen) { 1405 - BUG_ON(ci->i_rdcache_revoking > ci->i_rdcache_gen); 1389 + if (ci->i_rdcache_revoking != ci->i_rdcache_gen) { 1406 1390 /* nevermind! 
*/ 1407 - ci->i_rdcache_revoking = 0; 1408 1391 spin_unlock(&inode->i_lock); 1409 1392 goto out; 1410 1393 } ··· 1411 1400 ceph_invalidate_nondirty_pages(inode->i_mapping); 1412 1401 1413 1402 spin_lock(&inode->i_lock); 1414 - if (orig_gen == ci->i_rdcache_gen) { 1403 + if (orig_gen == ci->i_rdcache_gen && 1404 + orig_gen == ci->i_rdcache_revoking) { 1415 1405 dout("invalidate_pages %p gen %d successful\n", inode, 1416 1406 ci->i_rdcache_gen); 1417 - ci->i_rdcache_gen = 0; 1418 - ci->i_rdcache_revoking = 0; 1407 + ci->i_rdcache_revoking--; 1419 1408 check = 1; 1420 1409 } else { 1421 - dout("invalidate_pages %p gen %d raced, gen now %d\n", 1422 - inode, orig_gen, ci->i_rdcache_gen); 1410 + dout("invalidate_pages %p gen %d raced, now %d revoking %d\n", 1411 + inode, orig_gen, ci->i_rdcache_gen, 1412 + ci->i_rdcache_revoking); 1423 1413 } 1424 1414 spin_unlock(&inode->i_lock); 1425 1415 ··· 1751 1739 return 0; 1752 1740 } 1753 1741 1754 - dout("do_getattr inode %p mask %s\n", inode, ceph_cap_string(mask)); 1742 + dout("do_getattr inode %p mask %s mode 0%o\n", inode, ceph_cap_string(mask), inode->i_mode); 1755 1743 if (ceph_caps_issued_mask(ceph_inode(inode), mask, 1)) 1756 1744 return 0; 1757 1745
+5 -3
fs/ceph/mds_client.c
··· 6 6 #include <linux/sched.h> 7 7 #include <linux/debugfs.h> 8 8 #include <linux/seq_file.h> 9 - #include <linux/smp_lock.h> 10 9 11 10 #include "super.h" 12 11 #include "mds_client.h" ··· 527 528 dout("__register_request %p tid %lld\n", req, req->r_tid); 528 529 ceph_mdsc_get_request(req); 529 530 __insert_request(mdsc, req); 531 + 532 + req->r_uid = current_fsuid(); 533 + req->r_gid = current_fsgid(); 530 534 531 535 if (dir) { 532 536 struct ceph_inode_info *ci = ceph_inode(dir); ··· 1590 1588 1591 1589 head->mdsmap_epoch = cpu_to_le32(mdsc->mdsmap->m_epoch); 1592 1590 head->op = cpu_to_le32(req->r_op); 1593 - head->caller_uid = cpu_to_le32(current_fsuid()); 1594 - head->caller_gid = cpu_to_le32(current_fsgid()); 1591 + head->caller_uid = cpu_to_le32(req->r_uid); 1592 + head->caller_gid = cpu_to_le32(req->r_gid); 1595 1593 head->args = req->r_args; 1596 1594 1597 1595 ceph_encode_filepath(&p, end, ino1, path1);
+2
fs/ceph/mds_client.h
··· 170 170 171 171 union ceph_mds_request_args r_args; 172 172 int r_fmode; /* file mode, if expecting cap */ 173 + uid_t r_uid; 174 + gid_t r_gid; 173 175 174 176 /* for choosing which mds to send this request to */ 175 177 int r_direct_mode;
+1 -3
fs/ceph/super.h
··· 293 293 int i_rd_ref, i_rdcache_ref, i_wr_ref; 294 294 int i_wrbuffer_ref, i_wrbuffer_ref_head; 295 295 u32 i_shared_gen; /* increment each time we get FILE_SHARED */ 296 - u32 i_rdcache_gen; /* we increment this each time we get 297 - FILE_CACHE. If it's non-zero, we 298 - _may_ have cached pages. */ 296 + u32 i_rdcache_gen; /* incremented each time we get FILE_CACHE. */ 299 297 u32 i_rdcache_revoking; /* RDCACHE gen to async invalidate, if any */ 300 298 301 299 struct list_head i_unsafe_writes; /* uncommitted sync writes */
-1
fs/compat_ioctl.c
··· 19 19 #include <linux/compiler.h> 20 20 #include <linux/sched.h> 21 21 #include <linux/smp.h> 22 - #include <linux/smp_lock.h> 23 22 #include <linux/ioctl.h> 24 23 #include <linux/if.h> 25 24 #include <linux/if_bridge.h>
-1
fs/ecryptfs/super.c
··· 28 28 #include <linux/key.h> 29 29 #include <linux/slab.h> 30 30 #include <linux/seq_file.h> 31 - #include <linux/smp_lock.h> 32 31 #include <linux/file.h> 33 32 #include <linux/crypto.h> 34 33 #include "ecryptfs_kernel.h"
-1
fs/ext3/super.c
··· 27 27 #include <linux/init.h> 28 28 #include <linux/blkdev.h> 29 29 #include <linux/parser.h> 30 - #include <linux/smp_lock.h> 31 30 #include <linux/buffer_head.h> 32 31 #include <linux/exportfs.h> 33 32 #include <linux/vfs.h>
+24
fs/ext4/ioctl.c
··· 331 331 return err; 332 332 } 333 333 334 + case FITRIM: 335 + { 336 + struct super_block *sb = inode->i_sb; 337 + struct fstrim_range range; 338 + int ret = 0; 339 + 340 + if (!capable(CAP_SYS_ADMIN)) 341 + return -EPERM; 342 + 343 + if (copy_from_user(&range, (struct fstrim_range *)arg, 344 + sizeof(range))) 345 + return -EFAULT; 346 + 347 + ret = ext4_trim_fs(sb, &range); 348 + if (ret < 0) 349 + return ret; 350 + 351 + if (copy_to_user((struct fstrim_range *)arg, &range, 352 + sizeof(range))) 353 + return -EFAULT; 354 + 355 + return 0; 356 + } 357 + 334 358 default: 335 359 return -ENOTTY; 336 360 }
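The new ext4 FITRIM handler above copies a `struct fstrim_range` (three `__u64` fields: start, len, minlen) from userspace. As a hedged illustration of what a caller passes, here is the request encoding under asm-generic `_IOWR('X', 121, struct fstrim_range)`; the packing below is a demo, not kernel code:

```python
import struct

_IOC_WRITE, _IOC_READ = 1, 2

def _iowr(ioc_type, nr, size):
    # dir:2 | size:14 | type:8 | nr:8, per asm-generic/ioctl.h
    return ((_IOC_READ | _IOC_WRITE) << 30) | (size << 16) | (ord(ioc_type) << 8) | nr

FSTRIM_RANGE = struct.Struct("=QQQ")         # __u64 start, len, minlen
FITRIM = _iowr("X", 121, FSTRIM_RANGE.size)  # matches _IOWR('X', 121, ...)

# Trim the whole filesystem: the defaults the old generic ioctl_fstrim()
# used when passed a NULL argument.
ULLONG_MAX = (1 << 64) - 1
arg = FSTRIM_RANGE.pack(0, ULLONG_MAX, 0)
```

With CAP_SYS_ADMIN, userspace would issue this via `fcntl.ioctl(fd, FITRIM, arg)` on a descriptor for the mounted filesystem; on return the kernel copies the range back, with `len` updated to the number of bytes trimmed.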
+2 -2
fs/ext4/page-io.c
··· 237 237 } while (bh != head); 238 238 } 239 239 240 - put_io_page(io_end->pages[i]); 241 - 242 240 /* 243 241 * If this is a partial write which happened to make 244 242 * all buffers uptodate then we can optimize away a ··· 246 248 */ 247 249 if (!partial_write) 248 250 SetPageUptodate(page); 251 + 252 + put_io_page(io_end->pages[i]); 249 253 } 250 254 io_end->num_io_pages = 0; 251 255 inode = io_end->inode;
+3 -6
fs/ext4/super.c
··· 1197 1197 .quota_write = ext4_quota_write, 1198 1198 #endif 1199 1199 .bdev_try_to_free_page = bdev_try_to_free_page, 1200 - .trim_fs = ext4_trim_fs 1201 1200 }; 1202 1201 1203 1202 static const struct super_operations ext4_nojournal_sops = { ··· 2798 2799 struct ext4_li_request *elr; 2799 2800 2800 2801 mutex_lock(&ext4_li_info->li_list_mtx); 2801 - if (list_empty(&ext4_li_info->li_request_list)) 2802 - return; 2803 - 2804 2802 list_for_each_safe(pos, n, &ext4_li_info->li_request_list) { 2805 2803 elr = list_entry(pos, struct ext4_li_request, 2806 2804 lr_request); ··· 3264 3268 * Test whether we have more sectors than will fit in sector_t, 3265 3269 * and whether the max offset is addressable by the page cache. 3266 3270 */ 3267 - ret = generic_check_addressable(sb->s_blocksize_bits, 3271 + err = generic_check_addressable(sb->s_blocksize_bits, 3268 3272 ext4_blocks_count(es)); 3269 - if (ret) { 3273 + if (err) { 3270 3274 ext4_msg(sb, KERN_ERR, "filesystem" 3271 3275 " too large to mount safely on this system"); 3272 3276 if (sizeof(sector_t) < 8) 3273 3277 ext4_msg(sb, KERN_WARNING, "CONFIG_LBDAF not enabled"); 3278 + ret = err; 3274 3279 goto failed_mount; 3275 3280 } 3276 3281
-40
fs/ioctl.c
··· 6 6 7 7 #include <linux/syscalls.h> 8 8 #include <linux/mm.h> 9 - #include <linux/smp_lock.h> 10 9 #include <linux/capability.h> 11 10 #include <linux/file.h> 12 11 #include <linux/fs.h> ··· 529 530 return thaw_super(sb); 530 531 } 531 532 532 - static int ioctl_fstrim(struct file *filp, void __user *argp) 533 - { 534 - struct super_block *sb = filp->f_path.dentry->d_inode->i_sb; 535 - struct fstrim_range range; 536 - int ret = 0; 537 - 538 - if (!capable(CAP_SYS_ADMIN)) 539 - return -EPERM; 540 - 541 - /* If filesystem doesn't support trim feature, return. */ 542 - if (sb->s_op->trim_fs == NULL) 543 - return -EOPNOTSUPP; 544 - 545 - /* If a blockdevice-backed filesystem isn't specified, return EINVAL. */ 546 - if (sb->s_bdev == NULL) 547 - return -EINVAL; 548 - 549 - if (argp == NULL) { 550 - range.start = 0; 551 - range.len = ULLONG_MAX; 552 - range.minlen = 0; 553 - } else if (copy_from_user(&range, argp, sizeof(range))) 554 - return -EFAULT; 555 - 556 - ret = sb->s_op->trim_fs(sb, &range); 557 - if (ret < 0) 558 - return ret; 559 - 560 - if ((argp != NULL) && 561 - (copy_to_user(argp, &range, sizeof(range)))) 562 - return -EFAULT; 563 - 564 - return 0; 565 - } 566 - 567 533 /* 568 534 * When you add any new common ioctls to the switches above and below 569 535 * please update compat_sys_ioctl() too. ··· 577 613 578 614 case FITHAW: 579 615 error = ioctl_fsthaw(filp); 580 - break; 581 - 582 - case FITRIM: 583 - error = ioctl_fstrim(filp, argp); 584 616 break; 585 617 586 618 case FS_IOC_FIEMAP:
+8 -8
fs/jbd2/journal.c
··· 899 899 900 900 /* journal descriptor can store up to n blocks -bzzz */ 901 901 journal->j_blocksize = blocksize; 902 + journal->j_dev = bdev; 903 + journal->j_fs_dev = fs_dev; 904 + journal->j_blk_offset = start; 905 + journal->j_maxlen = len; 906 + bdevname(journal->j_dev, journal->j_devname); 907 + p = journal->j_devname; 908 + while ((p = strchr(p, '/'))) 909 + *p = '!'; 902 910 jbd2_stats_proc_init(journal); 903 911 n = journal->j_blocksize / sizeof(journal_block_tag_t); 904 912 journal->j_wbufsize = n; ··· 916 908 __func__); 917 909 goto out_err; 918 910 } 919 - journal->j_dev = bdev; 920 - journal->j_fs_dev = fs_dev; 921 - journal->j_blk_offset = start; 922 - journal->j_maxlen = len; 923 - bdevname(journal->j_dev, journal->j_devname); 924 - p = journal->j_devname; 925 - while ((p = strchr(p, '/'))) 926 - *p = '!'; 927 911 928 912 bh = __getblk(journal->j_dev, start, journal->j_blocksize); 929 913 if (!bh) {
-1
fs/lockd/clntlock.c
··· 14 14 #include <linux/sunrpc/clnt.h> 15 15 #include <linux/sunrpc/svc.h> 16 16 #include <linux/lockd/lockd.h> 17 - #include <linux/smp_lock.h> 18 17 #include <linux/kthread.h> 19 18 20 19 #define NLMDBG_FACILITY NLMDBG_CLIENT
-1
fs/lockd/clntproc.c
··· 7 7 */ 8 8 9 9 #include <linux/module.h> 10 - #include <linux/smp_lock.h> 11 10 #include <linux/slab.h> 12 11 #include <linux/types.h> 13 12 #include <linux/errno.h>
+4 -7
fs/lockd/host.c
··· 124 124 continue; 125 125 if (host->h_server != ni->server) 126 126 continue; 127 - if (ni->server && 127 + if (ni->server && ni->src_len != 0 && 128 128 !rpc_cmp_addr(nlm_srcaddr(host), ni->src_sap)) 129 129 continue; 130 130 ··· 167 167 host->h_addrlen = ni->salen; 168 168 rpc_set_port(nlm_addr(host), 0); 169 169 memcpy(nlm_srcaddr(host), ni->src_sap, ni->src_len); 170 + host->h_srcaddrlen = ni->src_len; 170 171 host->h_version = ni->version; 171 172 host->h_proto = ni->protocol; 172 173 host->h_rpcclnt = NULL; ··· 239 238 const char *hostname, 240 239 int noresvport) 241 240 { 242 - const struct sockaddr source = { 243 - .sa_family = AF_UNSPEC, 244 - }; 245 241 struct nlm_lookup_host_info ni = { 246 242 .server = 0, 247 243 .sap = sap, ··· 247 249 .version = version, 248 250 .hostname = hostname, 249 251 .hostname_len = strlen(hostname), 250 - .src_sap = &source, 251 - .src_len = sizeof(source), 252 252 .noresvport = noresvport, 253 253 }; 254 254 ··· 353 357 .protocol = host->h_proto, 354 358 .address = nlm_addr(host), 355 359 .addrsize = host->h_addrlen, 356 - .saddress = nlm_srcaddr(host), 357 360 .timeout = &timeparms, 358 361 .servername = host->h_name, 359 362 .program = &nlm_program, ··· 371 376 args.flags |= RPC_CLNT_CREATE_HARDRTRY; 372 377 if (host->h_noresvport) 373 378 args.flags |= RPC_CLNT_CREATE_NONPRIVPORT; 379 + if (host->h_srcaddrlen) 380 + args.saddress = nlm_srcaddr(host); 374 381 375 382 clnt = rpc_create(&args); 376 383 if (!IS_ERR(clnt))
-1
fs/lockd/svc4proc.c
··· 9 9 10 10 #include <linux/types.h> 11 11 #include <linux/time.h> 12 - #include <linux/smp_lock.h> 13 12 #include <linux/lockd/lockd.h> 14 13 #include <linux/lockd/share.h> 15 14
-1
fs/lockd/svclock.c
··· 25 25 #include <linux/errno.h> 26 26 #include <linux/kernel.h> 27 27 #include <linux/sched.h> 28 - #include <linux/smp_lock.h> 29 28 #include <linux/sunrpc/clnt.h> 30 29 #include <linux/sunrpc/svc.h> 31 30 #include <linux/lockd/nlm.h>
-1
fs/lockd/svcproc.c
··· 9 9 10 10 #include <linux/types.h> 11 11 #include <linux/time.h> 12 - #include <linux/smp_lock.h> 13 12 #include <linux/lockd/lockd.h> 14 13 #include <linux/lockd/share.h> 15 14
-1
fs/locks.c
··· 122 122 #include <linux/module.h> 123 123 #include <linux/security.h> 124 124 #include <linux/slab.h> 125 - #include <linux/smp_lock.h> 126 125 #include <linux/syscalls.h> 127 126 #include <linux/time.h> 128 127 #include <linux/rcupdate.h>
-1
fs/namespace.c
··· 13 13 #include <linux/sched.h> 14 14 #include <linux/spinlock.h> 15 15 #include <linux/percpu.h> 16 - #include <linux/smp_lock.h> 17 16 #include <linux/init.h> 18 17 #include <linux/kernel.h> 19 18 #include <linux/acct.h>
-1
fs/ncpfs/dir.c
··· 19 19 #include <linux/mm.h> 20 20 #include <asm/uaccess.h> 21 21 #include <asm/byteorder.h> 22 - #include <linux/smp_lock.h> 23 22 24 23 #include <linux/ncp_fs.h> 25 24
-1
fs/ncpfs/file.c
··· 17 17 #include <linux/mm.h> 18 18 #include <linux/vmalloc.h> 19 19 #include <linux/sched.h> 20 - #include <linux/smp_lock.h> 21 20 22 21 #include <linux/ncp_fs.h> 23 22 #include "ncplib_kernel.h"
-1
fs/ncpfs/inode.c
··· 26 26 #include <linux/slab.h> 27 27 #include <linux/vmalloc.h> 28 28 #include <linux/init.h> 29 - #include <linux/smp_lock.h> 30 29 #include <linux/vfs.h> 31 30 #include <linux/mount.h> 32 31 #include <linux/seq_file.h>
-1
fs/ncpfs/ioctl.c
··· 17 17 #include <linux/mount.h> 18 18 #include <linux/slab.h> 19 19 #include <linux/highuid.h> 20 - #include <linux/smp_lock.h> 21 20 #include <linux/vmalloc.h> 22 21 #include <linux/sched.h> 23 22
-1
fs/nfs/callback.c
··· 9 9 #include <linux/completion.h> 10 10 #include <linux/ip.h> 11 11 #include <linux/module.h> 12 - #include <linux/smp_lock.h> 13 12 #include <linux/sunrpc/svc.h> 14 13 #include <linux/sunrpc/svcsock.h> 15 14 #include <linux/nfs_fs.h>
-1
fs/nfs/delegation.c
··· 11 11 #include <linux/module.h> 12 12 #include <linux/sched.h> 13 13 #include <linux/slab.h> 14 - #include <linux/smp_lock.h> 15 14 #include <linux/spinlock.h> 16 15 17 16 #include <linux/nfs4.h>
+65 -35
fs/nfs/dir.c
··· 34 34 #include <linux/mount.h> 35 35 #include <linux/sched.h> 36 36 #include <linux/vmalloc.h> 37 + #include <linux/kmemleak.h> 37 38 38 39 #include "delegation.h" 39 40 #include "iostat.h" ··· 195 194 static 196 195 struct nfs_cache_array *nfs_readdir_get_array(struct page *page) 197 196 { 197 + void *ptr; 198 198 if (page == NULL) 199 199 return ERR_PTR(-EIO); 200 - return (struct nfs_cache_array *)kmap(page); 200 + ptr = kmap(page); 201 + if (ptr == NULL) 202 + return ERR_PTR(-ENOMEM); 203 + return ptr; 201 204 } 202 205 203 206 static ··· 218 213 { 219 214 struct nfs_cache_array *array = nfs_readdir_get_array(page); 220 215 int i; 216 + 217 + if (IS_ERR(array)) 218 + return PTR_ERR(array); 221 219 for (i = 0; i < array->size; i++) 222 220 kfree(array->array[i].string.name); 223 221 nfs_readdir_release_array(page); ··· 239 231 string->name = kmemdup(name, len, GFP_KERNEL); 240 232 if (string->name == NULL) 241 233 return -ENOMEM; 234 + /* 235 + * Avoid a kmemleak false positive. The pointer to the name is stored 236 + * in a page cache page which kmemleak does not scan. 
237 + */ 238 + kmemleak_not_leak(string->name); 242 239 string->hash = full_name_hash(name, len); 243 240 return 0; 244 241 } ··· 257 244 258 245 if (IS_ERR(array)) 259 246 return PTR_ERR(array); 260 - ret = -EIO; 247 + ret = -ENOSPC; 261 248 if (array->size >= MAX_READDIR_ARRAY) 262 249 goto out; 263 250 ··· 268 255 if (ret) 269 256 goto out; 270 257 array->last_cookie = entry->cookie; 258 + array->size++; 271 259 if (entry->eof == 1) 272 260 array->eof_index = array->size; 273 - array->size++; 274 261 out: 275 262 nfs_readdir_release_array(page); 276 263 return ret; ··· 285 272 if (diff < 0) 286 273 goto out_eof; 287 274 if (diff >= array->size) { 288 - if (array->eof_index > 0) 275 + if (array->eof_index >= 0) 289 276 goto out_eof; 290 277 desc->current_index += array->size; 291 278 return -EAGAIN; ··· 294 281 index = (unsigned int)diff; 295 282 *desc->dir_cookie = array->array[index].cookie; 296 283 desc->cache_entry_index = index; 297 - if (index == array->eof_index) 298 - desc->eof = 1; 299 284 return 0; 300 285 out_eof: 301 286 desc->eof = 1; ··· 307 296 int status = -EAGAIN; 308 297 309 298 for (i = 0; i < array->size; i++) { 310 - if (i == array->eof_index) { 311 - desc->eof = 1; 312 - status = -EBADCOOKIE; 313 - } 314 299 if (array->array[i].cookie == *desc->dir_cookie) { 315 300 desc->cache_entry_index = i; 316 301 status = 0; 317 - break; 302 + goto out; 318 303 } 319 304 } 320 - 305 + if (i == array->eof_index) { 306 + desc->eof = 1; 307 + status = -EBADCOOKIE; 308 + } 309 + out: 321 310 return status; 322 311 } 323 312 ··· 460 449 461 450 /* Perform conversion from xdr to cache array */ 462 451 static 463 - void nfs_readdir_page_filler(nfs_readdir_descriptor_t *desc, struct nfs_entry *entry, 452 + int nfs_readdir_page_filler(nfs_readdir_descriptor_t *desc, struct nfs_entry *entry, 464 453 void *xdr_page, struct page *page, unsigned int buflen) 465 454 { 466 455 struct xdr_stream stream; ··· 482 471 483 472 do { 484 473 status = xdr_decode(desc, entry, 
&stream); 485 - if (status != 0) 474 + if (status != 0) { 475 + if (status == -EAGAIN) 476 + status = 0; 486 477 break; 478 + } 487 479 488 - if (nfs_readdir_add_to_array(entry, page) == -1) 489 - break; 490 480 if (desc->plus == 1) 491 481 nfs_prime_dcache(desc->file->f_path.dentry, entry); 482 + 483 + status = nfs_readdir_add_to_array(entry, page); 484 + if (status != 0) 485 + break; 492 486 } while (!entry->eof); 493 487 494 488 if (status == -EBADCOOKIE && entry->eof) { 495 489 array = nfs_readdir_get_array(page); 496 - array->eof_index = array->size - 1; 497 - status = 0; 498 - nfs_readdir_release_array(page); 490 + if (!IS_ERR(array)) { 491 + array->eof_index = array->size; 492 + status = 0; 493 + nfs_readdir_release_array(page); 494 + } 499 495 } 496 + return status; 500 497 } 501 498 502 499 static ··· 556 537 struct nfs_entry entry; 557 538 struct file *file = desc->file; 558 539 struct nfs_cache_array *array; 559 - int status = 0; 540 + int status = -ENOMEM; 560 541 unsigned int array_size = ARRAY_SIZE(pages); 561 542 562 543 entry.prev_cookie = 0; ··· 568 549 goto out; 569 550 570 551 array = nfs_readdir_get_array(page); 552 + if (IS_ERR(array)) { 553 + status = PTR_ERR(array); 554 + goto out; 555 + } 571 556 memset(array, 0, sizeof(struct nfs_cache_array)); 572 557 array->eof_index = -1; 573 558 ··· 579 556 if (!pages_ptr) 580 557 goto out_release_array; 581 558 do { 559 + unsigned int pglen; 582 560 status = nfs_readdir_xdr_filler(pages, desc, &entry, file, inode); 583 561 584 562 if (status < 0) 585 563 break; 586 - nfs_readdir_page_filler(desc, &entry, pages_ptr, page, array_size * PAGE_SIZE); 587 - } while (array->eof_index < 0 && array->size < MAX_READDIR_ARRAY); 564 + pglen = status; 565 + status = nfs_readdir_page_filler(desc, &entry, pages_ptr, page, pglen); 566 + if (status < 0) { 567 + if (status == -ENOSPC) 568 + status = 0; 569 + break; 570 + } 571 + } while (array->eof_index < 0); 588 572 589 573 nfs_readdir_free_large_page(pages_ptr, 
pages, array_size); 590 574 out_release_array: ··· 612 582 int nfs_readdir_filler(nfs_readdir_descriptor_t *desc, struct page* page) 613 583 { 614 584 struct inode *inode = desc->file->f_path.dentry->d_inode; 585 + int ret; 615 586 616 - if (nfs_readdir_xdr_to_array(desc, page, inode) < 0) 587 + ret = nfs_readdir_xdr_to_array(desc, page, inode); 588 + if (ret < 0) 617 589 goto error; 618 590 SetPageUptodate(page); 619 591 ··· 627 595 return 0; 628 596 error: 629 597 unlock_page(page); 630 - return -EIO; 598 + return ret; 631 599 } 632 600 633 601 static ··· 640 608 static 641 609 struct page *get_cache_page(nfs_readdir_descriptor_t *desc) 642 610 { 643 - struct page *page; 644 - page = read_cache_page(desc->file->f_path.dentry->d_inode->i_mapping, 611 + return read_cache_page(desc->file->f_path.dentry->d_inode->i_mapping, 645 612 desc->page_index, (filler_t *)nfs_readdir_filler, desc); 646 - if (IS_ERR(page)) 647 - desc->eof = 1; 648 - return page; 649 613 } 650 614 651 615 /* ··· 667 639 static inline 668 640 int readdir_search_pagecache(nfs_readdir_descriptor_t *desc) 669 641 { 670 - int res = -EAGAIN; 642 + int res; 671 643 644 + if (desc->page_index == 0) 645 + desc->current_index = 0; 672 646 while (1) { 673 647 res = find_cache_page(desc); 674 648 if (res != -EAGAIN) ··· 700 670 struct dentry *dentry = NULL; 701 671 702 672 array = nfs_readdir_get_array(desc->page); 673 + if (IS_ERR(array)) 674 + return PTR_ERR(array); 703 675 704 676 for (i = desc->cache_entry_index; i < array->size; i++) { 705 677 d_type = DT_UNKNOWN; ··· 717 685 *desc->dir_cookie = array->array[i+1].cookie; 718 686 else 719 687 *desc->dir_cookie = array->last_cookie; 720 - if (i == array->eof_index) { 721 - desc->eof = 1; 722 - break; 723 - } 724 688 } 689 + if (i == array->eof_index) 690 + desc->eof = 1; 725 691 726 692 nfs_readdir_release_array(desc->page); 727 693 cache_page_release(desc); ··· 1375 1345 res = NULL; 1376 1346 goto out; 1377 1347 /* This turned out not to be a regular 
file */ 1378 - case -EISDIR: 1379 1348 case -ENOTDIR: 1380 1349 goto no_open; 1381 1350 case -ELOOP: 1382 1351 if (!(nd->intent.open.flags & O_NOFOLLOW)) 1383 1352 goto no_open; 1353 + /* case -EISDIR: */ 1384 1354 /* case -EINVAL: */ 1385 1355 default: 1386 1356 res = ERR_CAST(inode);
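Several of the dir.c hunks above change the readdir filler's error contract: a full cache page is now reported as `-ENOSPC` by `nfs_readdir_add_to_array()`, and the caller treats that as "page complete" rather than as a hard error. A toy model of that contract (constants and names invented for the demo):

```python
# Sketch of the "full page is not a failure" contract from the patch.
ENOSPC = 28
MAX_READDIR_ARRAY = 3  # demo value; the kernel's limit is larger

def add_to_array(array, entry):
    if len(array) >= MAX_READDIR_ARRAY:
        return -ENOSPC  # page full; stop filling, but not an error
    array.append(entry)
    return 0

def fill_page(entries):
    array, status = [], 0
    for entry in entries:
        status = add_to_array(array, entry)
        if status != 0:
            break
    if status == -ENOSPC:
        status = 0  # caller: a full page just ends this page's fill
    return status, array
```

Earlier the filler returned `-1`/`-EIO` here, which made a merely-full page look like I/O failure; distinguishing `-ENOSPC` lets the next page continue the listing.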
+2 -2
fs/nfs/nfs2xdr.c
··· 423 423 struct page **page; 424 424 size_t hdrlen; 425 425 unsigned int pglen, recvd; 426 - int status, nr = 0; 426 + int status; 427 427 428 428 if ((status = ntohl(*p++))) 429 429 return nfs_stat_to_errno(status); ··· 443 443 if (pglen > recvd) 444 444 pglen = recvd; 445 445 page = rcvbuf->pages; 446 - return nr; 446 + return pglen; 447 447 } 448 448 449 449 static void print_overflow_msg(const char *func, const struct xdr_stream *xdr)
+2 -2
fs/nfs/nfs3xdr.c
··· 555 555 struct page **page; 556 556 size_t hdrlen; 557 557 u32 recvd, pglen; 558 - int status, nr = 0; 558 + int status; 559 559 560 560 status = ntohl(*p++); 561 561 /* Decode post_op_attrs */ ··· 586 586 pglen = recvd; 587 587 page = rcvbuf->pages; 588 588 589 - return nr; 589 + return pglen; 590 590 } 591 591 592 592 __be32 *
+3 -1
fs/nfs/nfs4proc.c
··· 2852 2852 nfs4_setup_readdir(cookie, NFS_COOKIEVERF(dir), dentry, &args); 2853 2853 res.pgbase = args.pgbase; 2854 2854 status = nfs4_call_sync(NFS_SERVER(dir), &msg, &args, &res, 0); 2855 - if (status == 0) 2855 + if (status >= 0) { 2856 2856 memcpy(NFS_COOKIEVERF(dir), res.verifier.data, NFS4_VERIFIER_SIZE); 2857 + status += args.pgbase; 2858 + } 2857 2859 2858 2860 nfs_invalidate_atime(dir); 2859 2861
+1 -1
fs/nfs/nfs4xdr.c
··· 4518 4518 xdr_read_pages(xdr, pglen); 4519 4519 4520 4520 4521 - return 0; 4521 + return pglen; 4522 4522 } 4523 4523 4524 4524 static int decode_readlink(struct xdr_stream *xdr, struct rpc_rqst *req)
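The readdir decode changes above (nfs2xdr, nfs3xdr, nfs4xdr) make the decoders return the usable page-data length (`pglen`, clamped to what was received) instead of 0, and the NFSv4 caller then adds `args.pgbase` on success. A toy sketch of the new return-value contract, with simplified names invented for the demo:

```python
# Sketch of the decoder/caller contract after the patch (simplified).
def decode_readdir(recvd, pglen):
    # new contract: return usable page-data bytes, clamped to what the
    # transport actually received (previously: return 0 on success)
    return min(pglen, recvd)

def nfs4_readdir_done(status, pgbase):
    # NFSv4 caller: on success, account for the offset of the entry
    # data within the first page
    if status >= 0:
        status += pgbase
    return status
```

Negative statuses still propagate unchanged, so error handling in the callers is unaffected.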
+7 -2
fs/nfs/super.c
··· 39 39 #include <linux/nfs_mount.h> 40 40 #include <linux/nfs4_mount.h> 41 41 #include <linux/lockd/bind.h> 42 - #include <linux/smp_lock.h> 43 42 #include <linux/seq_file.h> 44 43 #include <linux/mount.h> 45 44 #include <linux/mnt_namespace.h> ··· 65 66 #include "fscache.h" 66 67 67 68 #define NFSDBG_FACILITY NFSDBG_VFS 69 + 70 + #ifdef CONFIG_NFS_V3 71 + #define NFS_DEFAULT_VERSION 3 72 + #else 73 + #define NFS_DEFAULT_VERSION 2 74 + #endif 68 75 69 76 enum { 70 77 /* Mount options that take no arguments */ ··· 2282 2277 }; 2283 2278 int error = -ENOMEM; 2284 2279 2285 - data = nfs_alloc_parsed_mount_data(3); 2280 + data = nfs_alloc_parsed_mount_data(NFS_DEFAULT_VERSION); 2286 2281 mntfh = nfs_alloc_fhandle(); 2287 2282 if (data == NULL || mntfh == NULL) 2288 2283 goto out_free_fh;
+4 -4
fs/nfsd/nfs4state.c
··· 2262 2262 * Spawn a thread to perform a recall on the delegation represented 2263 2263 * by the lease (file_lock) 2264 2264 * 2265 - * Called from break_lease() with lock_kernel() held. 2265 + * Called from break_lease() with lock_flocks() held. 2266 2266 * Note: we assume break_lease will only call this *once* for any given 2267 2267 * lease. 2268 2268 */ ··· 2286 2286 list_add_tail(&dp->dl_recall_lru, &del_recall_lru); 2287 2287 spin_unlock(&recall_lock); 2288 2288 2289 - /* only place dl_time is set. protected by lock_kernel*/ 2289 + /* only place dl_time is set. protected by lock_flocks*/ 2290 2290 dp->dl_time = get_seconds(); 2291 2291 2292 2292 /* ··· 2303 2303 /* 2304 2304 * The file_lock is being reapd. 2305 2305 * 2306 - * Called by locks_free_lock() with lock_kernel() held. 2306 + * Called by locks_free_lock() with lock_flocks() held. 2307 2307 */ 2308 2308 static 2309 2309 void nfsd_release_deleg_cb(struct file_lock *fl) ··· 2318 2318 } 2319 2319 2320 2320 /* 2321 - * Called from setlease() with lock_kernel() held 2321 + * Called from setlease() with lock_flocks() held 2322 2322 */ 2323 2323 static 2324 2324 int nfsd_same_client_deleg_cb(struct file_lock *onlist, struct file_lock *try)
-1
fs/ocfs2/super.c
··· 41 41 #include <linux/mount.h> 42 42 #include <linux/seq_file.h> 43 43 #include <linux/quotaops.h> 44 - #include <linux/smp_lock.h> 45 44 46 45 #define MLOG_MASK_PREFIX ML_SUPER 47 46 #include <cluster/masklog.h>
-1
fs/proc/inode.c
··· 16 16 #include <linux/limits.h> 17 17 #include <linux/init.h> 18 18 #include <linux/module.h> 19 - #include <linux/smp_lock.h> 20 19 #include <linux/sysctl.h> 21 20 #include <linux/slab.h> 22 21
-1
fs/read_write.c
··· 9 9 #include <linux/fcntl.h> 10 10 #include <linux/file.h> 11 11 #include <linux/uio.h> 12 - #include <linux/smp_lock.h> 13 12 #include <linux/fsnotify.h> 14 13 #include <linux/security.h> 15 14 #include <linux/module.h>
-1
fs/reiserfs/inode.c
··· 8 8 #include <linux/reiserfs_acl.h> 9 9 #include <linux/reiserfs_xattr.h> 10 10 #include <linux/exportfs.h> 11 - #include <linux/smp_lock.h> 12 11 #include <linux/pagemap.h> 13 12 #include <linux/highmem.h> 14 13 #include <linux/slab.h>
-1
fs/reiserfs/ioctl.c
··· 9 9 #include <linux/time.h> 10 10 #include <asm/uaccess.h> 11 11 #include <linux/pagemap.h> 12 - #include <linux/smp_lock.h> 13 12 #include <linux/compat.h> 14 13 15 14 /*
-1
fs/reiserfs/journal.c
··· 43 43 #include <linux/fcntl.h> 44 44 #include <linux/stat.h> 45 45 #include <linux/string.h> 46 - #include <linux/smp_lock.h> 47 46 #include <linux/buffer_head.h> 48 47 #include <linux/workqueue.h> 49 48 #include <linux/writeback.h>
-1
fs/reiserfs/super.c
··· 28 28 #include <linux/mount.h> 29 29 #include <linux/namei.h> 30 30 #include <linux/crc32.h> 31 - #include <linux/smp_lock.h> 32 31 33 32 struct file_system_type reiserfs_fs_type; 34 33
+7
include/drm/nouveau_drm.h
··· 80 80 #define NOUVEAU_GETPARAM_VM_VRAM_BASE 12 81 81 #define NOUVEAU_GETPARAM_GRAPH_UNITS 13 82 82 #define NOUVEAU_GETPARAM_PTIMER_TIME 14 83 + #define NOUVEAU_GETPARAM_HAS_BO_USAGE 15 83 84 struct drm_nouveau_getparam { 84 85 uint64_t param; 85 86 uint64_t value; ··· 95 94 #define NOUVEAU_GEM_DOMAIN_VRAM (1 << 1) 96 95 #define NOUVEAU_GEM_DOMAIN_GART (1 << 2) 97 96 #define NOUVEAU_GEM_DOMAIN_MAPPABLE (1 << 3) 97 + 98 + #define NOUVEAU_GEM_TILE_LAYOUT_MASK 0x0000ff00 99 + #define NOUVEAU_GEM_TILE_16BPP 0x00000001 100 + #define NOUVEAU_GEM_TILE_32BPP 0x00000002 101 + #define NOUVEAU_GEM_TILE_ZETA 0x00000004 102 + #define NOUVEAU_GEM_TILE_NONCONTIG 0x00000008 98 103 99 104 struct drm_nouveau_gem_info { 100 105 uint32_t handle;
+1 -2
include/linux/ceph/libceph.h
··· 227 227 extern void ceph_release_page_vector(struct page **pages, int num_pages); 228 228 229 229 extern struct page **ceph_get_direct_page_vector(const char __user *data, 230 - int num_pages, 231 - loff_t off, size_t len); 230 + int num_pages); 232 231 extern void ceph_put_page_vector(struct page **pages, int num_pages); 233 232 extern void ceph_release_page_vector(struct page **pages, int num_pages); 234 233 extern struct page **ceph_alloc_page_vector(int num_pages, gfp_t flags);
+1
include/linux/ceph/messenger.h
··· 82 82 struct ceph_buffer *middle; 83 83 struct page **pages; /* data payload. NOT OWNER. */ 84 84 unsigned nr_pages; /* size of page array */ 85 + unsigned page_alignment; /* io offset in first page */ 85 86 struct ceph_pagelist *pagelist; /* instead of pages */ 86 87 struct list_head list_head; 87 88 struct kref kref;
+5 -2
include/linux/ceph/osd_client.h
··· 79 79 struct ceph_file_layout r_file_layout; 80 80 struct ceph_snap_context *r_snapc; /* snap context for writes */ 81 81 unsigned r_num_pages; /* size of page array (follows) */ 82 + unsigned r_page_alignment; /* io offset in first page */ 82 83 struct page **r_pages; /* pages for data payload */ 83 84 int r_pages_from_pool; 84 85 int r_own_pages; /* if true, i own page list */ ··· 195 194 int do_sync, u32 truncate_seq, 196 195 u64 truncate_size, 197 196 struct timespec *mtime, 198 - bool use_mempool, int num_reply); 197 + bool use_mempool, int num_reply, 198 + int page_align); 199 199 200 200 static inline void ceph_osdc_get_request(struct ceph_osd_request *req) 201 201 { ··· 220 218 struct ceph_file_layout *layout, 221 219 u64 off, u64 *plen, 222 220 u32 truncate_seq, u64 truncate_size, 223 - struct page **pages, int nr_pages); 221 + struct page **pages, int nr_pages, 222 + int page_align); 224 223 225 224 extern int ceph_osdc_writepages(struct ceph_osd_client *osdc, 226 225 struct ceph_vino vino,
-1
include/linux/fs.h
··· 1612 1612 ssize_t (*quota_write)(struct super_block *, int, const char *, size_t, loff_t); 1613 1613 #endif 1614 1614 int (*bdev_try_to_free_page)(struct super_block*, struct page*, gfp_t); 1615 - int (*trim_fs) (struct super_block *, struct fstrim_range *); 1616 1615 }; 1617 1616 1618 1617 /*
+2 -4
include/linux/hardirq.h
··· 2 2 #define LINUX_HARDIRQ_H 3 3 4 4 #include <linux/preempt.h> 5 - #ifdef CONFIG_PREEMPT 6 - #include <linux/smp_lock.h> 7 - #endif 8 5 #include <linux/lockdep.h> 9 6 #include <linux/ftrace_irq.h> 10 7 #include <asm/hardirq.h> ··· 94 97 #define in_nmi() (preempt_count() & NMI_MASK) 95 98 96 99 #if defined(CONFIG_PREEMPT) && defined(CONFIG_BKL) 97 - # define PREEMPT_INATOMIC_BASE kernel_locked() 100 + # include <linux/sched.h> 101 + # define PREEMPT_INATOMIC_BASE (current->lock_depth >= 0) 98 102 #else 99 103 # define PREEMPT_INATOMIC_BASE 0 100 104 #endif
+1 -1
include/linux/libata.h
··· 986 986 unsigned long, struct ata_port_operations *); 987 987 extern int ata_scsi_detect(struct scsi_host_template *sht); 988 988 extern int ata_scsi_ioctl(struct scsi_device *dev, int cmd, void __user *arg); 989 - extern int ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)); 989 + extern int ata_scsi_queuecmd(struct Scsi_Host *h, struct scsi_cmnd *cmd); 990 990 extern int ata_sas_scsi_ioctl(struct ata_port *ap, struct scsi_device *dev, 991 991 int cmd, void __user *arg); 992 992 extern void ata_sas_port_destroy(struct ata_port *);
+1
include/linux/lockd/lockd.h
··· 43 43 struct sockaddr_storage h_addr; /* peer address */ 44 44 size_t h_addrlen; 45 45 struct sockaddr_storage h_srcaddr; /* our address (optional) */ 46 + size_t h_srcaddrlen; 46 47 struct rpc_clnt *h_rpcclnt; /* RPC client to talk to peer */ 47 48 char *h_name; /* remote hostname */ 48 49 u32 h_version; /* interface version */
-6
include/linux/nfs_fs.h
··· 593 593 return ino; 594 594 } 595 595 596 - #define nfs_wait_event(clnt, wq, condition) \ 597 - ({ \ 598 - int __retval = wait_event_killable(wq, condition); \ 599 - __retval; \ 600 - }) 601 - 602 596 #define NFS_JUKEBOX_RETRY_TIME (5 * HZ) 603 597 604 598 #endif /* __KERNEL__ */
-1
include/linux/reiserfs_fs.h
··· 22 22 #include <asm/unaligned.h> 23 23 #include <linux/bitops.h> 24 24 #include <linux/proc_fs.h> 25 - #include <linux/smp_lock.h> 26 25 #include <linux/buffer_head.h> 27 26 #include <linux/reiserfs_fs_i.h> 28 27 #include <linux/reiserfs_fs_sb.h>
+1 -1
include/linux/rtnetlink.h
··· 6 6 #include <linux/if_link.h> 7 7 #include <linux/if_addr.h> 8 8 #include <linux/neighbour.h> 9 - #include <linux/netdevice.h> 10 9 11 10 /* rtnetlink families. Values up to 127 are reserved for real address 12 11 * families, values above 128 may be used arbitrarily. ··· 605 606 #ifdef __KERNEL__ 606 607 607 608 #include <linux/mutex.h> 609 + #include <linux/netdevice.h> 608 610 609 611 static __inline__ int rtattr_strcmp(const struct rtattr *rta, const char *str) 610 612 {
+1
include/linux/sched.h
··· 862 862 * single CPU. 863 863 */ 864 864 unsigned int cpu_power, cpu_power_orig; 865 + unsigned int group_weight; 865 866 866 867 /* 867 868 * The CPUs this group covers.
-3
include/linux/smp_lock.h
··· 4 4 #ifdef CONFIG_LOCK_KERNEL 5 5 #include <linux/sched.h> 6 6 7 - #define kernel_locked() (current->lock_depth >= 0) 8 - 9 7 extern int __lockfunc __reacquire_kernel_lock(void); 10 8 extern void __lockfunc __release_kernel_lock(void); 11 9 ··· 56 58 #define lock_kernel() 57 59 #define unlock_kernel() 58 60 #define cycle_kernel_lock() do { } while(0) 59 - #define kernel_locked() 1 60 61 #endif /* CONFIG_BKL */ 61 62 62 63 #define release_kernel_lock(task) do { } while(0)
-1
include/linux/tty.h
··· 13 13 #include <linux/tty_driver.h> 14 14 #include <linux/tty_ldisc.h> 15 15 #include <linux/mutex.h> 16 - #include <linux/smp_lock.h> 17 16 18 17 #include <asm/system.h> 19 18
+1 -1
include/net/cfg80211.h
··· 1355 1355 WIPHY_FLAG_4ADDR_AP = BIT(5), 1356 1356 WIPHY_FLAG_4ADDR_STATION = BIT(6), 1357 1357 WIPHY_FLAG_CONTROL_PORT_PROTOCOL = BIT(7), 1358 - WIPHY_FLAG_IBSS_RSN = BIT(7), 1358 + WIPHY_FLAG_IBSS_RSN = BIT(8), 1359 1359 }; 1360 1360 1361 1361 struct mac_address {
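The cfg80211 hunk above exists because WIPHY_FLAG_CONTROL_PORT_PROTOCOL and WIPHY_FLAG_IBSS_RSN were both defined as BIT(7), so testing either flag matched the other. A minimal userspace sketch of that failure mode (the FLAG_* names here are stand-ins, not the real cfg80211 enum):

```c
#include <stdint.h>

#define BIT(n) (1U << (n))

/* Stand-in flag values mirroring the buggy and fixed headers. */
#define FLAG_CONTROL_PORT_PROTOCOL BIT(7)
#define OLD_FLAG_IBSS_RSN          BIT(7)  /* collided with the flag above */
#define NEW_FLAG_IBSS_RSN          BIT(8)  /* the fix: a distinct bit */

/* Returns 1 if `flag` reads as set in `flags`, 0 otherwise. */
static int flag_set(uint32_t flags, uint32_t flag)
{
    return (flags & flag) != 0;
}
```

With only the control-port flag set, the old IBSS_RSN test is a false positive while the fixed one correctly reads as clear.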
+1 -1
include/net/neighbour.h
··· 303 303 304 304 static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb) 305 305 { 306 - unsigned long now = ACCESS_ONCE(jiffies); 306 + unsigned long now = jiffies; 307 307 308 308 if (neigh->used != now) 309 309 neigh->used = now;
+1 -2
include/scsi/libfc.h
··· 1006 1006 /* 1007 1007 * SCSI INTERACTION LAYER 1008 1008 *****************************/ 1009 - int fc_queuecommand(struct scsi_cmnd *, 1010 - void (*done)(struct scsi_cmnd *)); 1009 + int fc_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); 1011 1010 int fc_eh_abort(struct scsi_cmnd *); 1012 1011 int fc_eh_device_reset(struct scsi_cmnd *); 1013 1012 int fc_eh_host_reset(struct scsi_cmnd *);
+1 -2
include/scsi/libiscsi.h
··· 341 341 extern int iscsi_eh_recover_target(struct scsi_cmnd *sc); 342 342 extern int iscsi_eh_session_reset(struct scsi_cmnd *sc); 343 343 extern int iscsi_eh_device_reset(struct scsi_cmnd *sc); 344 - extern int iscsi_queuecommand(struct scsi_cmnd *sc, 345 - void (*done)(struct scsi_cmnd *)); 344 + extern int iscsi_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *sc); 346 345 347 346 /* 348 347 * iSCSI host helpers.
+1 -2
include/scsi/libsas.h
··· 621 621 int sas_phy_enable(struct sas_phy *phy, int enabled); 622 622 int sas_phy_reset(struct sas_phy *phy, int hard_reset); 623 623 int sas_queue_up(struct sas_task *task); 624 - extern int sas_queuecommand(struct scsi_cmnd *, 625 - void (*scsi_done)(struct scsi_cmnd *)); 624 + extern int sas_queuecommand(struct Scsi_Host * ,struct scsi_cmnd *); 626 625 extern int sas_target_alloc(struct scsi_target *); 627 626 extern int sas_slave_alloc(struct scsi_device *); 628 627 extern int sas_slave_configure(struct scsi_device *);
+21 -2
include/scsi/scsi_host.h
··· 127 127 * 128 128 * STATUS: REQUIRED 129 129 */ 130 - int (* queuecommand)(struct scsi_cmnd *, 131 - void (*done)(struct scsi_cmnd *)); 130 + int (* queuecommand)(struct Scsi_Host *, struct scsi_cmnd *); 132 131 133 132 /* 134 133 * The transfer functions are used to queue a scsi command to ··· 504 505 }; 505 506 506 507 /* 508 + * Temporary #define for host lock push down. Can be removed when all 509 + * drivers have been updated to take advantage of unlocked 510 + * queuecommand. 511 + * 512 + */ 513 + #define DEF_SCSI_QCMD(func_name) \ 514 + int func_name(struct Scsi_Host *shost, struct scsi_cmnd *cmd) \ 515 + { \ 516 + unsigned long irq_flags; \ 517 + int rc; \ 518 + spin_lock_irqsave(shost->host_lock, irq_flags); \ 519 + scsi_cmd_get_serial(shost, cmd); \ 520 + rc = func_name##_lck (cmd, cmd->scsi_done); \ 521 + spin_unlock_irqrestore(shost->host_lock, irq_flags); \ 522 + return rc; \ 523 + } 524 + 525 + 526 + /* 507 527 * shost state: If you alter this, you also need to alter scsi_sysfs.c 508 528 * (for the ascii descriptions) and the state model enforcer: 509 529 * scsi_host_set_state() ··· 770 752 extern void scsi_host_put(struct Scsi_Host *t); 771 753 extern struct Scsi_Host *scsi_host_lookup(unsigned short); 772 754 extern const char *scsi_host_state_name(enum scsi_host_state); 755 + extern void scsi_cmd_get_serial(struct Scsi_Host *, struct scsi_cmnd *); 773 756 774 757 extern u64 scsi_calculate_bounce_limit(struct Scsi_Host *); 775 758
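The DEF_SCSI_QCMD macro added above lets unconverted drivers keep their old locked queuecommand handler while the mid-layer switches to the unlocked `(struct Scsi_Host *, struct scsi_cmnd *)` prototype. A rough userspace sketch of the same push-down pattern, with a pthread mutex standing in for the host spinlock and `toy_*` types standing in for the SCSI structures (nothing here is the real SCSI API):

```c
#include <pthread.h>

/* Toy stand-ins for struct Scsi_Host and struct scsi_cmnd. */
struct toy_host { pthread_mutex_t host_lock; int serial_number; };
struct toy_cmnd { int serial; };

/* Legacy handler written to run with the host lock already held. */
static int toy_queuecommand_lck(struct toy_cmnd *cmd)
{
    (void)cmd;
    return 0;                       /* 0 == command queued OK */
}

/* Roughly what DEF_SCSI_QCMD(toy_queuecommand) expands to:
 * take the host lock, stamp a serial (cf. scsi_cmd_get_serial),
 * call the legacy _lck handler, then release the lock. */
static int toy_queuecommand(struct toy_host *shost, struct toy_cmnd *cmd)
{
    int rc;

    pthread_mutex_lock(&shost->host_lock);
    cmd->serial = ++shost->serial_number;
    rc = toy_queuecommand_lck(cmd);
    pthread_mutex_unlock(&shost->host_lock);
    return rc;
}
```

The point of the macro is that converted drivers can drop the wrapper entirely and queue without taking the host lock at all.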
-1
init/main.c
··· 20 20 #include <linux/delay.h> 21 21 #include <linux/ioport.h> 22 22 #include <linux/init.h> 23 - #include <linux/smp_lock.h> 24 23 #include <linux/initrd.h> 25 24 #include <linux/bootmem.h> 26 25 #include <linux/acpi.h>
+11 -10
kernel/debug/kdb/kdb_main.c
··· 82 82 #define for_each_kdbcmd(cmd, num) \ 83 83 for ((cmd) = kdb_base_commands, (num) = 0; \ 84 84 num < kdb_max_commands; \ 85 - num == KDB_BASE_CMD_MAX ? cmd = kdb_commands : cmd++, num++) 85 + num++, num == KDB_BASE_CMD_MAX ? cmd = kdb_commands : cmd++) 86 86 87 87 typedef struct _kdbmsg { 88 88 int km_diag; /* kdb diagnostic */ ··· 646 646 } 647 647 if (!s->usable) 648 648 return KDB_NOTIMP; 649 - s->command = kmalloc((s->count + 1) * sizeof(*(s->command)), GFP_KDB); 649 + s->command = kzalloc((s->count + 1) * sizeof(*(s->command)), GFP_KDB); 650 650 if (!s->command) { 651 651 kdb_printf("Could not allocate new kdb_defcmd table for %s\n", 652 652 cmdstr); ··· 2361 2361 */ 2362 2362 static int kdb_ll(int argc, const char **argv) 2363 2363 { 2364 - int diag; 2364 + int diag = 0; 2365 2365 unsigned long addr; 2366 2366 long offset = 0; 2367 2367 unsigned long va; ··· 2400 2400 char buf[80]; 2401 2401 2402 2402 if (KDB_FLAG(CMD_INTERRUPT)) 2403 - return 0; 2403 + goto out; 2404 2404 2405 2405 sprintf(buf, "%s " kdb_machreg_fmt "\n", command, va); 2406 2406 diag = kdb_parse(buf); 2407 2407 if (diag) 2408 - return diag; 2408 + goto out; 2409 2409 2410 2410 addr = va + linkoffset; 2411 2411 if (kdb_getword(&va, addr, sizeof(va))) 2412 - return 0; 2412 + goto out; 2413 2413 } 2414 - kfree(command); 2415 2414 2416 - return 0; 2415 + out: 2416 + kfree(command); 2417 + return diag; 2417 2418 } 2418 2419 2419 2420 static int kdb_kgdb(int argc, const char **argv) ··· 2740 2739 } 2741 2740 if (kdb_commands) { 2742 2741 memcpy(new, kdb_commands, 2743 - kdb_max_commands * sizeof(*new)); 2742 + (kdb_max_commands - KDB_BASE_CMD_MAX) * sizeof(*new)); 2744 2743 kfree(kdb_commands); 2745 2744 } 2746 2745 memset(new + kdb_max_commands, 0, 2747 2746 kdb_command_extend * sizeof(*new)); 2748 2747 kdb_commands = new; 2749 2748 kp = kdb_commands + kdb_max_commands - KDB_BASE_CMD_MAX; 2750 2749 kdb_max_commands += kdb_command_extend; 2751 2750 } 2752 2751
+2 -1
kernel/futex.c
··· 2489 2489 { 2490 2490 struct robust_list_head __user *head = curr->robust_list; 2491 2491 struct robust_list __user *entry, *next_entry, *pending; 2492 - unsigned int limit = ROBUST_LIST_LIMIT, pi, next_pi, pip; 2492 + unsigned int limit = ROBUST_LIST_LIMIT, pi, pip; 2493 + unsigned int uninitialized_var(next_pi); 2493 2494 unsigned long futex_offset; 2494 2495 int rc; 2495 2496
+2 -1
kernel/futex_compat.c
··· 49 49 { 50 50 struct compat_robust_list_head __user *head = curr->compat_robust_list; 51 51 struct robust_list __user *entry, *next_entry, *pending; 52 - unsigned int limit = ROBUST_LIST_LIMIT, pi, next_pi, pip; 52 + unsigned int limit = ROBUST_LIST_LIMIT, pi, pip; 53 + unsigned int uninitialized_var(next_pi); 53 54 compat_uptr_t uentry, next_uentry, upending; 54 55 compat_long_t futex_offset; 55 56 int rc;
+2 -2
kernel/pm_qos_params.c
··· 121 121 122 122 switch (o->type) { 123 123 case PM_QOS_MIN: 124 - return plist_last(&o->requests)->prio; 124 + return plist_first(&o->requests)->prio; 125 125 126 126 case PM_QOS_MAX: 127 - return plist_first(&o->requests)->prio; 127 + return plist_last(&o->requests)->prio; 128 128 129 129 default: 130 130 /* runtime check for not using enum */
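The pm_qos fix above hinges on plist ordering: a kernel plist keeps its nodes sorted by ascending prio, so the minimum target value is at plist_first() and the maximum at plist_last(); the old code returned the opposite end for each. A sketch with a sorted array standing in for the plist (`pm_qos_extreme` is a hypothetical helper, not a kernel function):

```c
/* `prios` must be sorted ascending, as a plist is. A MIN-style
 * constraint reads the first (smallest) entry, a MAX-style one
 * the last (largest) -- the bug returned the wrong end for each. */
static int pm_qos_extreme(const int *prios, int n, int want_min)
{
    return want_min ? prios[0] : prios[n - 1];
}
```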
+4
kernel/power/Kconfig
··· 246 246 depends on PM_SLEEP || PM_RUNTIME 247 247 default y 248 248 249 + config ARCH_HAS_OPP 250 + bool 251 + 249 252 config PM_OPP 250 253 bool "Operating Performance Point (OPP) Layer library" 251 254 depends on PM 255 + depends on ARCH_HAS_OPP 252 256 ---help--- 253 257 SOCs have a standard set of tuples consisting of frequency and 254 258 voltage pairs that the device will support per voltage domain. This
+28 -11
kernel/sched.c
··· 560 560 561 561 static DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues); 562 562 563 - static inline 564 - void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags) 565 - { 566 - rq->curr->sched_class->check_preempt_curr(rq, p, flags); 567 563 568 - /* 569 - * A queue event has occurred, and we're going to schedule. In 570 - * this case, we can save a useless back to back clock update. 571 - */ 572 - if (test_tsk_need_resched(p)) 573 - rq->skip_clock_update = 1; 574 - } 564 + static void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags); 575 565 576 566 static inline int cpu_of(struct rq *rq) 577 567 { ··· 2106 2116 p->sched_class->switched_to(rq, p, running); 2107 2117 } else 2108 2118 p->sched_class->prio_changed(rq, p, oldprio, running); 2119 + } 2120 + 2121 + static void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags) 2122 + { 2123 + const struct sched_class *class; 2124 + 2125 + if (p->sched_class == rq->curr->sched_class) { 2126 + rq->curr->sched_class->check_preempt_curr(rq, p, flags); 2127 + } else { 2128 + for_each_class(class) { 2129 + if (class == rq->curr->sched_class) 2130 + break; 2131 + if (class == p->sched_class) { 2132 + resched_task(rq->curr); 2133 + break; 2134 + } 2135 + } 2136 + } 2137 + 2138 + /* 2139 + * A queue event has occurred, and we're going to schedule. In 2140 + * this case, we can save a useless back to back clock update. 2141 + */ 2142 + if (test_tsk_need_resched(rq->curr)) 2143 + rq->skip_clock_update = 1; 2109 2144 } 2110 2145 2111 2146 #ifdef CONFIG_SMP ··· 6974 6959 6975 6960 if (cpu != group_first_cpu(sd->groups)) 6976 6961 return; 6962 + 6963 + sd->groups->group_weight = cpumask_weight(sched_group_cpus(sd->groups)); 6977 6964 6978 6965 child = sd->child; 6979 6966
+31 -9
kernel/sched_fair.c
··· 1654 1654 struct cfs_rq *cfs_rq = task_cfs_rq(curr); 1655 1655 int scale = cfs_rq->nr_running >= sched_nr_latency; 1656 1656 1657 - if (unlikely(rt_prio(p->prio))) 1658 - goto preempt; 1659 - 1660 - if (unlikely(p->sched_class != &fair_sched_class)) 1661 - return; 1662 - 1663 1657 if (unlikely(se == pse)) 1664 1658 return; 1665 1659 ··· 2029 2035 unsigned long this_load_per_task; 2030 2036 unsigned long this_nr_running; 2031 2037 unsigned long this_has_capacity; 2038 + unsigned int this_idle_cpus; 2032 2039 2033 2040 /* Statistics of the busiest group */ 2041 + unsigned int busiest_idle_cpus; 2034 2042 unsigned long max_load; 2035 2043 unsigned long busiest_load_per_task; 2036 2044 unsigned long busiest_nr_running; 2037 2045 unsigned long busiest_group_capacity; 2038 2046 unsigned long busiest_has_capacity; 2047 + unsigned int busiest_group_weight; 2039 2048 2040 2049 int group_imb; /* Is there imbalance in this sd */ 2041 2050 #if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT) ··· 2060 2063 unsigned long sum_nr_running; /* Nr tasks running in the group */ 2061 2064 unsigned long sum_weighted_load; /* Weighted load of group's tasks */ 2062 2065 unsigned long group_capacity; 2066 + unsigned long idle_cpus; 2067 + unsigned long group_weight; 2063 2068 int group_imb; /* Is there an imbalance in the group ? */ 2064 2069 int group_has_capacity; /* Is there extra capacity in the group? */ 2065 2070 }; ··· 2430 2431 sgs->group_load += load; 2431 2432 sgs->sum_nr_running += rq->nr_running; 2432 2433 sgs->sum_weighted_load += weighted_cpuload(i); 2433 - 2434 + if (idle_cpu(i)) 2435 + sgs->idle_cpus++; 2434 2436 } 2435 2437 2436 2438 /* ··· 2469 2469 sgs->group_capacity = DIV_ROUND_CLOSEST(group->cpu_power, SCHED_LOAD_SCALE); 2470 2470 if (!sgs->group_capacity) 2471 2471 sgs->group_capacity = fix_small_capacity(sd, group); 2472 + sgs->group_weight = group->group_weight; 2472 2473 2473 2474 if (sgs->group_capacity > sgs->sum_nr_running) 2474 2475 sgs->group_has_capacity = 1; ··· 2577 2576 sds->this_nr_running = sgs.sum_nr_running; 2578 2577 sds->this_load_per_task = sgs.sum_weighted_load; 2579 2578 sds->this_has_capacity = sgs.group_has_capacity; 2579 + sds->this_idle_cpus = sgs.idle_cpus; 2580 2580 } else if (update_sd_pick_busiest(sd, sds, sg, &sgs, this_cpu)) { 2581 2581 sds->max_load = sgs.avg_load; 2582 2582 sds->busiest = sg; 2583 2583 sds->busiest_nr_running = sgs.sum_nr_running; 2584 + sds->busiest_idle_cpus = sgs.idle_cpus; 2584 2585 sds->busiest_group_capacity = sgs.group_capacity; 2585 2586 sds->busiest_load_per_task = sgs.sum_weighted_load; 2586 2587 sds->busiest_has_capacity = sgs.group_has_capacity; 2588 + sds->busiest_group_weight = sgs.group_weight; 2587 2589 sds->group_imb = sgs.group_imb; 2588 2590 } 2589 2591 ··· 2864 2860 if (sds.this_load >= sds.avg_load) 2865 2861 goto out_balanced; 2866 2862 2867 - if (100 * sds.max_load <= sd->imbalance_pct * sds.this_load) 2868 - goto out_balanced; 2863 + /* 2864 + * In the CPU_NEWLY_IDLE, use imbalance_pct to be conservative. 2865 + * And to check for busy balance use !idle_cpu instead of 2866 + * CPU_NOT_IDLE. This is because HT siblings will use CPU_NOT_IDLE 2867 + * even when they are idle. 2868 + */ 2869 + if (idle == CPU_NEWLY_IDLE || !idle_cpu(this_cpu)) { 2870 + if (100 * sds.max_load <= sd->imbalance_pct * sds.this_load) 2871 + goto out_balanced; 2872 + } else { 2873 + /* 2874 + * This cpu is idle. If the busiest group load doesn't 2875 + * have more tasks than the number of available cpu's and 2876 + * there is no imbalance between this and busiest group 2877 + * wrt to idle cpu's, it is balanced. 2878 + */ 2879 + if ((sds.this_idle_cpus <= sds.busiest_idle_cpus + 1) && 2880 + sds.busiest_nr_running <= sds.busiest_group_weight) 2881 + goto out_balanced; 2882 + } 2869 2883 2870 2884 force_balance: 2871 2885 /* Looks like there is an imbalance. Compute it */
+2 -2
kernel/sched_stoptask.c
··· 19 19 static void 20 20 check_preempt_curr_stop(struct rq *rq, struct task_struct *p, int flags) 21 21 { 22 - resched_task(rq->curr); /* we preempt everything */ 22 + /* we're never preempted */ 23 23 } 24 24 25 25 static struct task_struct *pick_next_task_stop(struct rq *rq) 26 26 { 27 27 struct task_struct *stop = rq->stop; 28 28 29 - if (stop && stop->state == TASK_RUNNING) 29 + if (stop && stop->se.on_rq) 30 30 return stop; 31 31 32 32 return NULL;
+1 -1
kernel/sysctl.c
··· 702 702 .extra1 = &zero, 703 703 .extra2 = &ten_thousand, 704 704 }, 705 - #endif 706 705 { 707 706 .procname = "dmesg_restrict", 708 707 .data = &dmesg_restrict, ··· 711 712 .extra1 = &zero, 712 713 .extra2 = &one, 713 714 }, 715 + #endif 714 716 { 715 717 .procname = "ngroups_max", 716 718 .data = &ngroups_max,
+1 -1
kernel/trace/Kconfig
··· 126 126 config FUNCTION_TRACER 127 127 bool "Kernel Function Tracer" 128 128 depends on HAVE_FUNCTION_TRACER 129 - select FRAME_POINTER if (!ARM_UNWIND) 129 + select FRAME_POINTER if !ARM_UNWIND && !S390 130 130 select KALLSYMS 131 131 select GENERIC_TRACER 132 132 select CONTEXT_SWITCH_TRACER
-1
kernel/trace/trace.c
··· 17 17 #include <linux/writeback.h> 18 18 #include <linux/kallsyms.h> 19 19 #include <linux/seq_file.h> 20 - #include <linux/smp_lock.h> 21 20 #include <linux/notifier.h> 22 21 #include <linux/irqflags.h> 23 22 #include <linux/debugfs.h>
+6 -7
net/ceph/messenger.c
··· 540 540 /* initialize page iterator */ 541 541 con->out_msg_pos.page = 0; 542 542 if (m->pages) 543 - con->out_msg_pos.page_pos = 544 - le16_to_cpu(m->hdr.data_off) & ~PAGE_MASK; 543 + con->out_msg_pos.page_pos = m->page_alignment; 545 544 else 546 545 con->out_msg_pos.page_pos = 0; 547 546 con->out_msg_pos.data_pos = 0; ··· 1490 1491 struct ceph_msg *m = con->in_msg; 1491 1492 int ret; 1492 1493 int to, left; 1493 - unsigned front_len, middle_len, data_len, data_off; 1494 + unsigned front_len, middle_len, data_len; 1494 1495 int datacrc = con->msgr->nocrc; 1495 1496 int skip; 1496 1497 u64 seq; ··· 1526 1527 data_len = le32_to_cpu(con->in_hdr.data_len); 1527 1528 if (data_len > CEPH_MSG_MAX_DATA_LEN) 1528 1529 return -EIO; 1529 - data_off = le16_to_cpu(con->in_hdr.data_off); 1530 1530 1531 1531 /* verify seq# */ 1532 1532 seq = le64_to_cpu(con->in_hdr.seq); 1533 1533 if ((s64)seq - (s64)con->in_seq < 1) { 1534 - pr_info("skipping %s%lld %s seq %lld, expected %lld\n", 1534 + pr_info("skipping %s%lld %s seq %lld expected %lld\n", 1535 1535 ENTITY_NAME(con->peer_name), 1536 1536 ceph_pr_addr(&con->peer_addr.in_addr), 1537 1537 seq, con->in_seq + 1); 1538 1538 con->in_base_pos = -front_len - middle_len - data_len - 1539 1539 sizeof(m->footer); 1540 1540 con->in_tag = CEPH_MSGR_TAG_READY; 1541 - con->in_seq++; 1542 1541 return 0; 1543 1542 } else if ((s64)seq - (s64)con->in_seq > 1) { 1544 1543 pr_err("read_partial_message bad seq %lld expected %lld\n", ··· 1573 1576 1574 1577 con->in_msg_pos.page = 0; 1575 1578 if (m->pages) 1576 - con->in_msg_pos.page_pos = data_off & ~PAGE_MASK; 1579 + con->in_msg_pos.page_pos = m->page_alignment; 1577 1580 else 1578 1581 con->in_msg_pos.page_pos = 0; 1579 1582 con->in_msg_pos.data_pos = 0; ··· 2298 2301 2299 2302 /* data */ 2300 2303 m->nr_pages = 0; 2304 + m->page_alignment = 0; 2301 2305 m->pages = NULL; 2302 2306 m->pagelist = NULL; 2303 2307 m->bio = NULL; ··· 2368 2370 type, front_len); 2369 2371 return NULL; 2370 2372 } 2373 + msg->page_alignment = le16_to_cpu(hdr->data_off); 2371 2374 } 2372 2375 memcpy(&msg->hdr, &con->in_hdr, sizeof(con->in_hdr)); 2373 2376
+17 -8
net/ceph/osd_client.c
··· 71 71 op->extent.length = objlen; 72 72 } 73 73 req->r_num_pages = calc_pages_for(off, *plen); 74 + req->r_page_alignment = off & ~PAGE_MASK; 74 75 if (op->op == CEPH_OSD_OP_WRITE) 75 76 op->payload_len = *plen; 76 77 ··· 391 390 req->r_request->hdr.data_len = cpu_to_le32(data_len); 392 391 } 393 392 393 + req->r_request->page_alignment = req->r_page_alignment; 394 + 394 395 BUG_ON(p > msg->front.iov_base + msg->front.iov_len); 395 396 msg_size = p - msg->front.iov_base; 396 397 msg->front.iov_len = msg_size; ··· 422 419 u32 truncate_seq, 423 420 u64 truncate_size, 424 421 struct timespec *mtime, 425 - bool use_mempool, int num_reply) 422 + bool use_mempool, int num_reply, 423 + int page_align) 426 424 { 427 425 struct ceph_osd_req_op ops[3]; 428 426 struct ceph_osd_request *req; ··· 450 446 /* calculate max write size */ 451 447 calc_layout(osdc, vino, layout, off, plen, req, ops); 452 448 req->r_file_layout = *layout; /* keep a copy */ 449 + 450 + /* in case it differs from natural alignment that calc_layout 451 + filled in for us */ 452 + req->r_page_alignment = page_align; 453 453 454 454 ceph_osdc_build_request(req, off, plen, ops, 455 455 snapc, ··· 1497 1489 struct ceph_vino vino, struct ceph_file_layout *layout, 1498 1490 u64 off, u64 *plen, 1499 1491 u32 truncate_seq, u64 truncate_size, 1500 - struct page **pages, int num_pages) 1492 + struct page **pages, int num_pages, int page_align) 1501 1493 { 1502 1494 struct ceph_osd_request *req; 1503 1495 int rc = 0; ··· 1507 1499 req = ceph_osdc_new_request(osdc, layout, vino, off, plen, 1508 1500 CEPH_OSD_OP_READ, CEPH_OSD_FLAG_READ, 1509 1501 NULL, 0, truncate_seq, truncate_size, NULL, 1510 1502 false, 1, page_align); 1511 1503 if (!req) 1512 1504 return -ENOMEM; 1513 1505 1514 1506 /* it may be a short read due to an object boundary */ 1515 1507 req->r_pages = pages; 1516 1508 1517 - dout("readpages final extent is %llu~%llu (%d pages)\n", 1518 - off, *plen, req->r_num_pages); 1509 - dout("readpages final extent is %llu~%llu (%d pages align %d)\n", 1510 + off, *plen, req->r_num_pages, page_align); 1519 1511 1520 1512 rc = ceph_osdc_start_request(osdc, req, false); 1521 1513 if (!rc) ··· 1541 1533 { 1542 1534 struct ceph_osd_request *req; 1543 1535 int rc = 0; 1536 + int page_align = off & ~PAGE_MASK; 1544 1537 1545 1538 BUG_ON(vino.snap != CEPH_NOSNAP); 1546 1539 req = ceph_osdc_new_request(osdc, layout, vino, off, &len, ··· 1550 1541 CEPH_OSD_FLAG_WRITE, 1551 1542 snapc, do_sync, 1552 1543 truncate_seq, truncate_size, mtime, 1553 - nofail, 1); 1544 + nofail, 1, page_align); 1554 1545 if (!req) 1555 1546 return -ENOMEM; 1556 1547 ··· 1647 1638 m = ceph_msg_get(req->r_reply); 1648 1639 1649 1640 if (data_len > 0) { 1650 - unsigned data_off = le16_to_cpu(hdr->data_off); 1651 - int want = calc_pages_for(data_off & ~PAGE_MASK, data_len); 1641 + int want = calc_pages_for(req->r_page_alignment, data_len); 1652 1642 1653 1643 if (unlikely(req->r_num_pages < want)) { 1654 1644 pr_warning("tid %lld reply %d > expected %d pages\n", ··· 1659 1651 } 1660 1652 m->pages = req->r_pages; 1661 1653 m->nr_pages = req->r_num_pages; 1654 + m->page_alignment = req->r_page_alignment; 1662 1655 #ifdef CONFIG_BLOCK 1663 1656 m->bio = req->r_bio; 1664 1657 #endif
+1 -2
net/ceph/pagevec.c
··· 13 13 * build a vector of user pages 14 14 */ 15 15 struct page **ceph_get_direct_page_vector(const char __user *data, 16 - int num_pages, 17 - loff_t off, size_t len) 16 + int num_pages) 18 17 { 19 18 struct page **pages; 20 19 int rc;
+1 -1
net/core/filter.c
··· 589 589 EXPORT_SYMBOL(sk_chk_filter); 590 590 591 591 /** 592 - * sk_filter_rcu_release: Release a socket filter by rcu_head 592 + * sk_filter_rcu_release - Release a socket filter by rcu_head 593 593 * @rcu: rcu_head that contains the sk_filter to free 594 594 */ 595 595 static void sk_filter_rcu_release(struct rcu_head *rcu)
+8 -2
net/core/net-sysfs.c
··· 712 712 713 713 714 714 map = rcu_dereference_raw(queue->rps_map); 715 - if (map) 715 + if (map) { 716 + RCU_INIT_POINTER(queue->rps_map, NULL); 716 717 call_rcu(&map->rcu, rps_map_release); 718 + } 717 719 718 720 flow_table = rcu_dereference_raw(queue->rps_flow_table); 719 - if (flow_table) 721 + if (flow_table) { 722 + RCU_INIT_POINTER(queue->rps_flow_table, NULL); 720 723 call_rcu(&flow_table->rcu, rps_dev_flow_table_release); 724 + } 721 725 722 726 if (atomic_dec_and_test(&first->count)) 723 727 kfree(first); 728 + else 729 + memset(kobj, 0, sizeof(*kobj)); 724 730 } 725 731 726 732 static struct kobj_type rx_queue_ktype = {
+3
net/ipv4/icmp.c
··· 569 569 /* No need to clone since we're just using its address. */ 570 570 rt2 = rt; 571 571 572 + if (!fl.nl_u.ip4_u.saddr) 573 + fl.nl_u.ip4_u.saddr = rt->rt_src; 574 + 572 575 err = xfrm_lookup(net, (struct dst_entry **)&rt, &fl, NULL, 0); 573 576 switch (err) { 574 577 case 0:
+16 -12
net/ipv6/addrconf.c
··· 98 98 #endif 99 99 100 100 #define INFINITY_LIFE_TIME 0xFFFFFFFF 101 - #define TIME_DELTA(a, b) ((unsigned long)((long)(a) - (long)(b))) 101 + 102 + static inline u32 cstamp_delta(unsigned long cstamp) 103 + { 104 + return (cstamp - INITIAL_JIFFIES) * 100UL / HZ; 105 + } 102 106 103 107 #define ADDRCONF_TIMER_FUZZ_MINUS (HZ > 50 ? HZ/50 : 1) 104 108 #define ADDRCONF_TIMER_FUZZ (HZ / 4) ··· 3448 3444 { 3449 3445 struct ifa_cacheinfo ci; 3450 3446 3451 - ci.cstamp = (u32)(TIME_DELTA(cstamp, INITIAL_JIFFIES) / HZ * 100 3452 - + TIME_DELTA(cstamp, INITIAL_JIFFIES) % HZ * 100 / HZ); 3453 - ci.tstamp = (u32)(TIME_DELTA(tstamp, INITIAL_JIFFIES) / HZ * 100 3454 - + TIME_DELTA(tstamp, INITIAL_JIFFIES) % HZ * 100 / HZ); 3447 + ci.cstamp = cstamp_delta(cstamp); 3448 + ci.tstamp = cstamp_delta(tstamp); 3455 3449 ci.ifa_prefered = preferred; 3456 3450 ci.ifa_valid = valid; 3457 3451 ··· 3800 3798 array[DEVCONF_AUTOCONF] = cnf->autoconf; 3801 3799 array[DEVCONF_DAD_TRANSMITS] = cnf->dad_transmits; 3802 3800 array[DEVCONF_RTR_SOLICITS] = cnf->rtr_solicits; 3803 - array[DEVCONF_RTR_SOLICIT_INTERVAL] = cnf->rtr_solicit_interval; 3804 - array[DEVCONF_RTR_SOLICIT_DELAY] = cnf->rtr_solicit_delay; 3801 + array[DEVCONF_RTR_SOLICIT_INTERVAL] = 3802 + jiffies_to_msecs(cnf->rtr_solicit_interval); 3803 + array[DEVCONF_RTR_SOLICIT_DELAY] = 3804 + jiffies_to_msecs(cnf->rtr_solicit_delay); 3805 3805 array[DEVCONF_FORCE_MLD_VERSION] = cnf->force_mld_version; 3806 3806 #ifdef CONFIG_IPV6_PRIVACY 3807 3807 array[DEVCONF_USE_TEMPADDR] = cnf->use_tempaddr; ··· 3817 3813 array[DEVCONF_ACCEPT_RA_PINFO] = cnf->accept_ra_pinfo; 3818 3814 #ifdef CONFIG_IPV6_ROUTER_PREF 3819 3815 array[DEVCONF_ACCEPT_RA_RTR_PREF] = cnf->accept_ra_rtr_pref; 3820 3816 array[DEVCONF_RTR_PROBE_INTERVAL] = 3817 + jiffies_to_msecs(cnf->rtr_probe_interval); 3821 3818 #ifdef CONFIG_IPV6_ROUTE_INFO 3822 3819 array[DEVCONF_ACCEPT_RA_RT_INFO_MAX_PLEN] = cnf->accept_ra_rt_info_max_plen; 3823 3820 #endif ··· 3934 3929 NLA_PUT_U32(skb, IFLA_INET6_FLAGS, idev->if_flags); 3935 3930 3936 3931 ci.max_reasm_len = IPV6_MAXPLEN; 3937 3932 ci.tstamp = cstamp_delta(idev->tstamp); 3938 - ci.tstamp = (__u32)(TIME_DELTA(idev->tstamp, INITIAL_JIFFIES) / HZ * 100 3939 - + TIME_DELTA(idev->tstamp, INITIAL_JIFFIES) % HZ * 100 / HZ); 3939 - ci.reachable_time = idev->nd_parms->reachable_time; 3940 - ci.retrans_time = idev->nd_parms->retrans_time; 3933 + ci.reachable_time = jiffies_to_msecs(idev->nd_parms->reachable_time); 3934 + ci.retrans_time = jiffies_to_msecs(idev->nd_parms->retrans_time); 3941 3935 NLA_PUT(skb, IFLA_INET6_CACHEINFO, sizeof(ci), &ci); 3942 3936 3943 3937 nla = nla_reserve(skb, IFLA_INET6_CONF, DEVCONF_MAX * sizeof(s32));
-1
net/irda/af_irda.c
···
45 45 #include <linux/capability.h>
46 46 #include <linux/module.h>
47 47 #include <linux/types.h>
48 - #include <linux/smp_lock.h>
49 48 #include <linux/socket.h>
50 49 #include <linux/sockios.h>
51 50 #include <linux/slab.h>
-1
net/irda/irnet/irnet_ppp.c
···
15 15 
16 16 #include <linux/sched.h>
17 17 #include <linux/slab.h>
18 - #include <linux/smp_lock.h>
19 18 #include "irnet_ppp.h" /* Private header */
20 19 /* Please put other headers in irnet.h - Thanks */
21 20 
+22 -8
net/irda/irttp.c
···
550 550 */
551 551 int irttp_udata_request(struct tsap_cb *self, struct sk_buff *skb)
552 552 {
553 + int ret;
554 + 
553 555 IRDA_ASSERT(self != NULL, return -1;);
554 556 IRDA_ASSERT(self->magic == TTP_TSAP_MAGIC, return -1;);
555 557 IRDA_ASSERT(skb != NULL, return -1;);
556 558 
557 559 IRDA_DEBUG(4, "%s()\n", __func__);
558 560 
561 + /* Take shortcut on zero byte packets */
562 + if (skb->len == 0) {
563 + ret = 0;
564 + goto err;
565 + }
566 + 
559 567 /* Check that nothing bad happens */
560 - if ((skb->len == 0) || (!self->connected)) {
561 - IRDA_DEBUG(1, "%s(), No data, or not connected\n",
562 - __func__);
568 + if (!self->connected) {
569 + IRDA_WARNING("%s(), Not connected\n", __func__);
570 + ret = -ENOTCONN;
563 571 goto err;
564 572 }
565 573 
566 574 if (skb->len > self->max_seg_size) {
567 - IRDA_DEBUG(1, "%s(), UData is too large for IrLAP!\n",
568 - __func__);
575 + IRDA_ERROR("%s(), UData is too large for IrLAP!\n", __func__);
576 + ret = -EMSGSIZE;
569 577 goto err;
570 578 }
571 579 
···
584 576 
585 577 err:
586 578 dev_kfree_skb(skb);
587 - return -1;
579 + return ret;
588 580 }
589 581 EXPORT_SYMBOL(irttp_udata_request);
590 582 
···
607 599 IRDA_DEBUG(2, "%s() : queue len = %d\n", __func__,
608 600 skb_queue_len(&self->tx_queue));
609 601 
602 + /* Take shortcut on zero byte packets */
603 + if (skb->len == 0) {
604 + ret = 0;
605 + goto err;
606 + }
607 + 
610 608 /* Check that nothing bad happens */
611 - if ((skb->len == 0) || (!self->connected)) {
612 - IRDA_WARNING("%s: No data, or not connected\n", __func__);
609 + if (!self->connected) {
610 + IRDA_WARNING("%s: Not connected\n", __func__);
613 611 ret = -ENOTCONN;
614 612 goto err;
615 613 }
+1
net/netfilter/ipvs/Kconfig
···
4 4 menuconfig IP_VS
5 5 tristate "IP virtual server support"
6 6 depends on NET && INET && NETFILTER
7 + depends on (NF_CONNTRACK || NF_CONNTRACK=n)
7 8 ---help---
8 9 IP Virtual Server support will let you build a high-performance
9 10 virtual server based on cluster of two or more real servers. This
+1 -1
net/rds/rdma.c
···
567 567 goto out;
568 568 }
569 569 
570 - if (args->nr_local > (u64)UINT_MAX) {
570 + if (args->nr_local > UIO_MAXIOV) {
571 571 ret = -EMSGSIZE;
572 572 goto out;
573 573 }
+1 -3
net/sunrpc/stats.c
···
115 115 */
116 116 struct rpc_iostats *rpc_alloc_iostats(struct rpc_clnt *clnt)
117 117 {
118 - struct rpc_iostats *new;
119 - new = kcalloc(clnt->cl_maxproc, sizeof(struct rpc_iostats), GFP_KERNEL);
120 - return new;
118 + return kcalloc(clnt->cl_maxproc, sizeof(struct rpc_iostats), GFP_KERNEL);
121 119 }
122 120 EXPORT_SYMBOL_GPL(rpc_alloc_iostats);
123 121 
-1
net/sunrpc/svc_xprt.c
···
5 5 */
6 6 
7 7 #include <linux/sched.h>
8 - #include <linux/smp_lock.h>
9 8 #include <linux/errno.h>
10 9 #include <linux/freezer.h>
11 10 #include <linux/kthread.h>
+54
net/wireless/chan.c
···
44 44 return chan;
45 45 }
46 46 
47 + static bool can_beacon_sec_chan(struct wiphy *wiphy,
48 + struct ieee80211_channel *chan,
49 + enum nl80211_channel_type channel_type)
50 + {
51 + struct ieee80211_channel *sec_chan;
52 + int diff;
53 + 
54 + switch (channel_type) {
55 + case NL80211_CHAN_HT40PLUS:
56 + diff = 20;
57 + break;
58 + case NL80211_CHAN_HT40MINUS:
59 + diff = -20;
60 + break;
61 + default:
62 + return false;
63 + }
64 + 
65 + sec_chan = ieee80211_get_channel(wiphy, chan->center_freq + diff);
66 + if (!sec_chan)
67 + return false;
68 + 
69 + /* we'll need a DFS capability later */
70 + if (sec_chan->flags & (IEEE80211_CHAN_DISABLED |
71 + IEEE80211_CHAN_PASSIVE_SCAN |
72 + IEEE80211_CHAN_NO_IBSS |
73 + IEEE80211_CHAN_RADAR))
74 + return false;
75 + 
76 + return true;
77 + }
78 + 
47 79 int cfg80211_set_freq(struct cfg80211_registered_device *rdev,
48 80 struct wireless_dev *wdev, int freq,
49 81 enum nl80211_channel_type channel_type)
···
99 67 chan = rdev_freq_to_chan(rdev, freq, channel_type);
100 68 if (!chan)
101 69 return -EINVAL;
70 + 
71 + /* Both channels should be able to initiate communication */
72 + if (wdev && (wdev->iftype == NL80211_IFTYPE_ADHOC ||
73 + wdev->iftype == NL80211_IFTYPE_AP ||
74 + wdev->iftype == NL80211_IFTYPE_AP_VLAN ||
75 + wdev->iftype == NL80211_IFTYPE_MESH_POINT ||
76 + wdev->iftype == NL80211_IFTYPE_P2P_GO)) {
77 + switch (channel_type) {
78 + case NL80211_CHAN_HT40PLUS:
79 + case NL80211_CHAN_HT40MINUS:
80 + if (!can_beacon_sec_chan(&rdev->wiphy, chan,
81 + channel_type)) {
82 + printk(KERN_DEBUG
83 + "cfg80211: Secondary channel not "
84 + "allowed to initiate communication\n");
85 + return -EINVAL;
86 + }
87 + break;
88 + default:
89 + break;
90 + }
91 + }
102 92 
103 93 result = rdev->ops->set_channel(&rdev->wiphy,
104 94 wdev ? wdev->netdev : NULL,
+9 -3
scripts/kernel-doc
···
5 5 ## Copyright (c) 1998 Michael Zucchi, All Rights Reserved ##
6 6 ## Copyright (C) 2000, 1 Tim Waugh <twaugh@redhat.com> ##
7 7 ## Copyright (C) 2001 Simon Huggins ##
8 - ## Copyright (C) 2005-2009 Randy Dunlap ##
8 + ## Copyright (C) 2005-2010 Randy Dunlap ##
9 9 ## ##
10 10 ## #define enhancements by Armin Kuster <akuster@mvista.com> ##
11 11 ## Copyright (c) 2000 MontaVista Software, Inc. ##
···
453 453 if ($output_mode eq "html" || $output_mode eq "xml") {
454 454 $contents = local_unescape($contents);
455 455 # convert data read & converted thru xml_escape() into &xyz; format:
456 - $contents =~ s/\\\\\\/&/g;
456 + $contents =~ s/\\\\\\/\&/g;
457 457 }
458 458 # print STDERR "contents b4:$contents\n";
459 459 eval $dohighlight;
···
770 770 print $args{'type'} . " " . $args{'struct'} . " {\n";
771 771 foreach $parameter (@{$args{'parameterlist'}}) {
772 772 if ($parameter =~ /^#/) {
773 - print "$parameter\n";
773 + my $prm = $parameter;
774 + # convert data read & converted thru xml_escape() into &xyz; format:
775 + # This allows us to have #define macros interspersed in a struct.
776 + $prm =~ s/\\\\\\/\&/g;
777 + print "$prm\n";
774 778 next;
775 779 }
776 780 
···
1704 1700 ++$warnings;
1705 1701 }
1706 1702 }
1703 + 
1704 + $param = xml_escape($param);
1707 1705 
1708 1706 # strip spaces from $param so that it is one continous string
1709 1707 # on @parameterlist;
-1
sound/core/info.c
···
23 23 #include <linux/time.h>
24 24 #include <linux/mm.h>
25 25 #include <linux/slab.h>
26 - #include <linux/smp_lock.h>
27 26 #include <linux/string.h>
28 27 #include <sound/core.h>
29 28 #include <sound/minors.h>
-1
sound/core/pcm_native.c
···
22 22 #include <linux/mm.h>
23 23 #include <linux/file.h>
24 24 #include <linux/slab.h>
25 - #include <linux/smp_lock.h>
26 25 #include <linux/time.h>
27 26 #include <linux/pm_qos_params.h>
28 27 #include <linux/uio.h>
-1
sound/core/sound.c
···
21 21 
22 22 #include <linux/init.h>
23 23 #include <linux/slab.h>
24 - #include <linux/smp_lock.h>
25 24 #include <linux/time.h>
26 25 #include <linux/device.h>
27 26 #include <linux/moduleparam.h>
-1
sound/sound_core.c
···
104 104 
105 105 #include <linux/init.h>
106 106 #include <linux/slab.h>
107 - #include <linux/smp_lock.h>
108 107 #include <linux/types.h>
109 108 #include <linux/kernel.h>
110 109 #include <linux/sound.h>