···
     </orgname>

     <address>
-      <email>hjk@linutronix.de</email>
+      <email>hjk@hansjkoch.de</email>
     </address>
    </affiliation>
</author>
···
<para>If you know of any translations for this document, or you are
interested in translating it, please email me
-<email>hjk@linutronix.de</email>.
+<email>hjk@hansjkoch.de</email>.
</para>
</sect1>
···
<title>Feedback</title>
 <para>Find something wrong with this document? (Or perhaps something
 right?) I would love to hear from you. Please email me at
- <email>hjk@linutronix.de</email>.</para>
+ <email>hjk@hansjkoch.de</email>.</para>
</sect1>
</chapter>
+22-9
Documentation/development-process/2.Process
···
 inclusion, it should be accepted by a relevant subsystem maintainer -
 though this acceptance is not a guarantee that the patch will make it
 all the way to the mainline. The patch will show up in the maintainer's
-subsystem tree and into the staging trees (described below). When the
+subsystem tree and into the -next trees (described below). When the
 process works, this step leads to more extensive review of the patch and
 the discovery of any problems resulting from the integration of this
 patch with work being done by others.
···
 normally the right way to go.


-2.4: STAGING TREES
+2.4: NEXT TREES

 The chain of subsystem trees guides the flow of patches into the kernel,
 but it also raises an interesting question: what if somebody wants to look
···
 the interesting subsystem trees, but that would be a big and error-prone
 job.

-The answer comes in the form of staging trees, where subsystem trees are
+The answer comes in the form of -next trees, where subsystem trees are
 collected for testing and review. The older of these trees, maintained by
 Andrew Morton, is called "-mm" (for memory management, which is how it got
 started). The -mm tree integrates patches from a long list of subsystem
···
 Use of the MMOTM tree is likely to be a frustrating experience, though;
 there is a definite chance that it will not even compile.

-The other staging tree, started more recently, is linux-next, maintained by
+The other -next tree, started more recently, is linux-next, maintained by
 Stephen Rothwell. The linux-next tree is, by design, a snapshot of what
 the mainline is expected to look like after the next merge window closes.
 Linux-next trees are announced on the linux-kernel and linux-next mailing
···
 See http://lwn.net/Articles/289013/ for more information on this topic, and
 stay tuned; much is still in flux where linux-next is involved.

-Besides the mmotm and linux-next trees, the kernel source tree now contains
-the drivers/staging/ directory and many sub-directories for drivers or
-filesystems that are on their way to being added to the kernel tree
-proper, but they remain in drivers/staging/ while they still need more
-work.
+2.4.1: STAGING TREES
+
+The kernel source tree now contains the drivers/staging/ directory, where
+many sub-directories for drivers or filesystems that are on their way to
+being added to the kernel tree live. They remain in drivers/staging while
+they still need more work; once complete, they can be moved into the
+kernel proper. This is a way to keep track of drivers that aren't
+up to Linux kernel coding or quality standards, but people may want to use
+them and track development.
+
+Greg Kroah-Hartman currently (as of 2.6.36) maintains the staging tree.
+Drivers that still need work are sent to him, with each driver having
+its own subdirectory in drivers/staging/. Along with the driver source
+files, a TODO file should be present in the directory as well. The TODO
+file lists the pending work that the driver needs for acceptance into
+the kernel proper, as well as a list of people that should be Cc'd for any
+patches to the driver. Staging drivers that don't currently build should
+have their config entries depend upon CONFIG_BROKEN. Once they can
+be successfully built without outside patches, CONFIG_BROKEN can be removed.

 2.5: TOOLS
···
        is configured as an output, this value may be written;
        any nonzero value is treated as high.

+       If the pin can be configured as interrupt-generating interrupt
+       and if it has been configured to generate interrupts (see the
+       description of "edge"), you can poll(2) on that file and
+       poll(2) will return whenever the interrupt was triggered. If
+       you use poll(2), set the events POLLPRI and POLLERR. If you
+       use select(2), set the file descriptor in exceptfds. After
+       poll(2) returns, either lseek(2) to the beginning of the sysfs
+       file and read the new value or close the file and re-open it
+       to read the value.
+
    "edge" ... reads as either "none", "rising", "falling", or
        "both". Write these strings to select the signal edge(s)
        that will make poll(2) on the "value" file return.
+1-1
Documentation/hwmon/lm93
···
    Mark M. Hoffman <mhoffman@lightlink.com>
    Ported to 2.6 by Eric J. Bowersox <ericb@aspsys.com>
    Adapted to 2.6.20 by Carsten Emde <ce@osadl.org>
-   Modified for mainline integration by Hans J. Koch <hjk@linutronix.de>
+   Modified for mainline integration by Hans J. Koch <hjk@hansjkoch.de>

Module Parameters
-----------------
+1-1
Documentation/hwmon/max6650
···
  Datasheet: http://pdfserv.maxim-ic.com/en/ds/MAX6650-MAX6651.pdf

Authors:
-    Hans J. Koch <hjk@linutronix.de>
+    Hans J. Koch <hjk@hansjkoch.de>
    John Morris <john.morris@spirentcom.com>
    Claus Gindhart <claus.gindhart@kontron.com>
+3
Documentation/power/opp.txt
···
SoC framework   -> modifies on required cases certain OPPs  -> OPP layer
                -> queries to search/retrieve information ->

+Architectures that provide a SoC framework for OPP should select ARCH_HAS_OPP
+to make the OPP layer available.
+
OPP layer expects each domain to be represented by a unique device pointer. SoC
framework registers a set of initial OPPs per device with the OPP layer. This
list is expected to be an optimally small number typically around 5 per device.
+9
MAINTAINERS
···
S:  Supported
F:  drivers/net/cxgb4vf/

+STMMAC ETHERNET DRIVER
+M:  Giuseppe Cavallaro <peppe.cavallaro@st.com>
+L:  netdev@vger.kernel.org
+W:  http://www.stlinux.com
+S:  Supported
+F:  drivers/net/stmmac/
+
CYBERPRO FB DRIVER
M:  Russell King <linux@arm.linux.org.uk>
L:  linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
DOCBOOK FOR DOCUMENTATION
M:  Randy Dunlap <rdunlap@xenotime.net>
S:  Maintained
+F:  scripts/kernel-doc

DOCKING STATION DRIVER
M:  Shaohua Li <shaohua.li@intel.com>
···
DOCUMENTATION
M:  Randy Dunlap <rdunlap@xenotime.net>
L:  linux-doc@vger.kernel.org
+T:  quilt oss.oracle.com/~rdunlap/kernel-doc-patches/current/
S:  Maintained
F:  Documentation/
···
    bool
    default y if !PPC64

+config 32BIT
+   bool
+   default y if PPC32
+
config 64BIT
    bool
    default y if PPC64
+2-1
arch/powerpc/boot/div64.S
···
    cntlzw  r0,r5       # we are shifting the dividend right
    li  r10,-1      # to make it < 2^32, and shifting
    srw r10,r10,r0  # the divisor right the same amount,
-   add r9,r4,r10   # rounding up (so the estimate cannot
+   addc    r9,r4,r10   # rounding up (so the estimate cannot
    andc    r11,r6,r10  # ever be too large, only too small)
    andc    r9,r9,r10
+   addze   r9,r9
    or  r11,r5,r11
    rotlw   r9,r9,r0
    rotlw   r11,r11,r0
+2-2
arch/powerpc/kernel/kgdb.c
···
        /* FP registers 32 -> 63 */
#if defined(CONFIG_FSL_BOOKE) && defined(CONFIG_SPE)
        if (current)
-           memcpy(mem, current->thread.evr[regno-32],
+           memcpy(mem, &current->thread.evr[regno-32],
                   dbg_reg_def[regno].size);
#else
        /* fp registers not used by kernel, leave zero */
···
    if (regno >= 32 && regno < 64) {
        /* FP registers 32 -> 63 */
#if defined(CONFIG_FSL_BOOKE) && defined(CONFIG_SPE)
-       memcpy(current->thread.evr[regno-32], mem,
+       memcpy(&current->thread.evr[regno-32], mem,
               dbg_reg_def[regno].size);
#else
        /* fp registers not used by kernel, leave zero */
+2-3
arch/powerpc/kernel/setup_64.c
···
}

/*
- * Called into from start_kernel, after lock_kernel has been called.
- * Initializes bootmem, which is unsed to manage page allocation until
- * mem_init is called.
+ * Called into from start_kernel this initializes bootmem, which is used
+ * to manage page allocation until mem_init is called.
 */
void __init setup_arch(char **cmdline_p)
{
···
    else
#endif /* CONFIG_PPC_HAS_HASH_64K */
        rc = __hash_page_4K(ea, access, vsid, ptep, trap, local, ssize,
-                   subpage_protection(pgdir, ea));
+                   subpage_protection(mm, ea));

    /* Dump some info in case of hash insertion failure, they should
     * never happen so it is really useful to know if/when they do
+4-1
arch/powerpc/mm/tlb_low_64e.S
···
    cmpldi  cr0,r15,0           /* Check for user region */
    std r14,EX_TLB_ESR(r12) /* write crazy -1 to frame */
    beq normal_tlb_miss
+
+   li  r11,_PAGE_PRESENT|_PAGE_BAP_SX  /* Base perm */
+   oris    r11,r11,_PAGE_ACCESSED@h
    /* XXX replace the RMW cycles with immediate loads + writes */
-1: mfspr   r10,SPRN_MAS1
+   mfspr   r10,SPRN_MAS1
    cmpldi  cr0,r15,8           /* Check for vmalloc region */
    rlwinm  r10,r10,0,16,1      /* Clear TID */
    mtspr   SPRN_MAS1,r10
···
config PPC_PSERIES_DEBUG
    depends on PPC_PSERIES && PPC_EARLY_DEBUG
    bool "Enable extra debug logging in platforms/pseries"
+   help
+     Say Y here if you want the pseries core to produce a bunch of
+     debug messages to the system log. Select this if you are having a
+     problem with the pseries core and want to see more of what is
+     going on. This does not enable debugging in lpar.c, which must
+     be manually done due to its verbosity.
    default y

config PPC_SMLPAR
-2
arch/powerpc/platforms/pseries/eeh.c
···
 * Please address comments and feedback to Linas Vepstas <linas@austin.ibm.com>
 */

-#undef DEBUG
-
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/list.h>
-2
arch/powerpc/platforms/pseries/pci_dlpar.c
···
 * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */

-#undef DEBUG
-
#include <linux/pci.h>
#include <asm/pci-bridge.h>
#include <asm/ppc-pci.h>
+12
arch/s390/Kconfig.debug
···

source "lib/Kconfig.debug"

+config STRICT_DEVMEM
+   def_bool y
+   prompt "Filter access to /dev/mem"
+   ---help---
+     This option restricts access to /dev/mem. If this option is
+     disabled, you allow userspace access to all memory, including
+     kernel and userspace memory. Accidental memory access is likely
+     to be disastrous.
+     Memory access is required for experts who want to debug the kernel.
+
+     If you are unsure, say Y.
+
config DEBUG_STRICT_USER_COPY_CHECKS
    bool "Strict user copy size checks"
    ---help---
+5
arch/s390/include/asm/page.h
···
void arch_free_page(struct page *page, int order);
void arch_alloc_page(struct page *page, int order);

+static inline int devmem_is_allowed(unsigned long pfn)
+{
+   return 0;
+}
+
#define HAVE_ARCH_FREE_PAGE
#define HAVE_ARCH_ALLOC_PAGE
···

/**
 * ata_scsi_queuecmd - Issue SCSI cdb to libata-managed device
+ * @shost: SCSI host of command to be sent
 * @cmd: SCSI command to be sent
- * @done: Completion function, called when command is complete
 *
 * In some cases, this function translates SCSI commands into
 * ATA taskfiles, and queues the taskfiles to be sent to
···
 * ATA and ATAPI devices appearing as SCSI devices.
 *
 * LOCKING:
- * Releases scsi-layer-held lock, and obtains host lock.
+ * ATA host lock
 *
 * RETURNS:
 * Return value from __ata_scsi_queuecmd() if @cmd can be queued,
 * 0 otherwise.
 */
-int ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
+int ata_scsi_queuecmd(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
{
    struct ata_port *ap;
    struct ata_device *dev;
    struct scsi_device *scsidev = cmd->device;
-   struct Scsi_Host *shost = scsidev->host;
    int rc = 0;
+   unsigned long irq_flags;

    ap = ata_shost_to_port(shost);

-   spin_unlock(shost->host_lock);
-   spin_lock(ap->lock);
+   spin_lock_irqsave(ap->lock, irq_flags);

    ata_scsi_dump_cdb(ap, cmd);

    dev = ata_scsi_find_dev(ap, scsidev);
    if (likely(dev))
-       rc = __ata_scsi_queuecmd(cmd, done, dev);
+       rc = __ata_scsi_queuecmd(cmd, cmd->scsi_done, dev);
    else {
        cmd->result = (DID_BAD_TARGET << 16);
-       done(cmd);
+       cmd->scsi_done(cmd);
    }

-   spin_unlock(ap->lock);
-   spin_lock(shost->host_lock);
+   spin_unlock_irqrestore(ap->lock, irq_flags);
+
    return rc;
}
+5-4
drivers/ata/sata_via.c
···
    return 0;
}

-static void svia_configure(struct pci_dev *pdev)
+static void svia_configure(struct pci_dev *pdev, int board_id)
{
    u8 tmp8;
···
    }

    /*
-    * vt6421 has problems talking to some drives. The following
+    * vt6420/1 has problems talking to some drives. The following
     * is the fix from Joseph Chan <JosephChan@via.com.tw>.
     *
     * When host issues HOLD, device may send up to 20DW of data
···
     *
     * https://bugzilla.kernel.org/show_bug.cgi?id=15173
     * http://article.gmane.org/gmane.linux.ide/46352
+    * http://thread.gmane.org/gmane.linux.kernel/1062139
     */
-   if (pdev->device == 0x3249) {
+   if (board_id == vt6420 || board_id == vt6421) {
        pci_read_config_byte(pdev, 0x52, &tmp8);
        tmp8 |= 1 << 2;
        pci_write_config_byte(pdev, 0x52, tmp8);
···
    if (rc)
        return rc;

-   svia_configure(pdev);
+   svia_configure(pdev, board_id);

    pci_set_master(pdev);
    return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,
···
    struct drm_i915_gem_object *obj_priv;
    int ret = 0;

+   if (args->size == 0)
+       return 0;
+
+   if (!access_ok(VERIFY_WRITE,
+              (char __user *)(uintptr_t)args->data_ptr,
+              args->size))
+       return -EFAULT;
+
+   ret = fault_in_pages_writeable((char __user *)(uintptr_t)args->data_ptr,
+                      args->size);
+   if (ret)
+       return -EFAULT;
+
    ret = i915_mutex_lock_interruptible(dev);
    if (ret)
        return ret;
···
    /* Bounds check source. */
    if (args->offset > obj->size || args->size > obj->size - args->offset) {
        ret = -EINVAL;
-       goto out;
-   }
-
-   if (args->size == 0)
-       goto out;
-
-   if (!access_ok(VERIFY_WRITE,
-              (char __user *)(uintptr_t)args->data_ptr,
-              args->size)) {
-       ret = -EFAULT;
-       goto out;
-   }
-
-   ret = fault_in_pages_writeable((char __user *)(uintptr_t)args->data_ptr,
-                      args->size);
-   if (ret) {
-       ret = -EFAULT;
        goto out;
    }
···
    struct drm_i915_gem_pwrite *args = data;
    struct drm_gem_object *obj;
    struct drm_i915_gem_object *obj_priv;
-   int ret = 0;
+   int ret;
+
+   if (args->size == 0)
+       return 0;
+
+   if (!access_ok(VERIFY_READ,
+              (char __user *)(uintptr_t)args->data_ptr,
+              args->size))
+       return -EFAULT;
+
+   ret = fault_in_pages_readable((char __user *)(uintptr_t)args->data_ptr,
+                     args->size);
+   if (ret)
+       return -EFAULT;

    ret = i915_mutex_lock_interruptible(dev);
    if (ret)
···
    }
    obj_priv = to_intel_bo(obj);

-
    /* Bounds check destination. */
    if (args->offset > obj->size || args->size > obj->size - args->offset) {
        ret = -EINVAL;
-       goto out;
-   }
-
-   if (args->size == 0)
-       goto out;
-
-   if (!access_ok(VERIFY_READ,
-              (char __user *)(uintptr_t)args->data_ptr,
-              args->size)) {
-       ret = -EFAULT;
-       goto out;
-   }
-
-   ret = fault_in_pages_readable((char __user *)(uintptr_t)args->data_ptr,
-                     args->size);
-   if (ret) {
-       ret = -EFAULT;
        goto out;
    }
···
             obj->write_domain);

    return 0;
+}
+
+int
+i915_gem_object_flush_gpu(struct drm_i915_gem_object *obj,
+             bool interruptible)
+{
+   if (!obj->active)
+       return 0;
+
+   if (obj->base.write_domain & I915_GEM_GPU_DOMAINS)
+       i915_gem_flush_ring(obj->base.dev, NULL, obj->ring,
+                   0, obj->base.write_domain);
+
+   return i915_gem_object_wait_rendering(&obj->base, interruptible);
}

/**
+89-60
drivers/gpu/drm/i915/intel_crt.c
···
#include "i915_drm.h"
#include "i915_drv.h"

+/* Here's the desired hotplug mode */
+#define ADPA_HOTPLUG_BITS (ADPA_CRT_HOTPLUG_PERIOD_128 |       \
+              ADPA_CRT_HOTPLUG_WARMUP_10MS |       \
+              ADPA_CRT_HOTPLUG_SAMPLE_4S |         \
+              ADPA_CRT_HOTPLUG_VOLTAGE_50 |        \
+              ADPA_CRT_HOTPLUG_VOLREF_325MV |      \
+              ADPA_CRT_HOTPLUG_ENABLE)
+
+struct intel_crt {
+   struct intel_encoder base;
+   bool force_hotplug_required;
+};
+
+static struct intel_crt *intel_attached_crt(struct drm_connector *connector)
+{
+   return container_of(intel_attached_encoder(connector),
+               struct intel_crt, base);
+}
+
static void intel_crt_dpms(struct drm_encoder *encoder, int mode)
{
    struct drm_device *dev = encoder->dev;
···
               dpll_md & ~DPLL_MD_UDI_MULTIPLIER_MASK);
    }

-   adpa = 0;
+   adpa = ADPA_HOTPLUG_BITS;
    if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC)
        adpa |= ADPA_HSYNC_ACTIVE_HIGH;
    if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC)
···
static bool intel_ironlake_crt_detect_hotplug(struct drm_connector *connector)
{
    struct drm_device *dev = connector->dev;
+   struct intel_crt *crt = intel_attached_crt(connector);
    struct drm_i915_private *dev_priv = dev->dev_private;
-   u32 adpa, temp;
+   u32 adpa;
    bool ret;
-   bool turn_off_dac = false;

-   temp = adpa = I915_READ(PCH_ADPA);
+   /* The first time through, trigger an explicit detection cycle */
+   if (crt->force_hotplug_required) {
+       bool turn_off_dac = HAS_PCH_SPLIT(dev);
+       u32 save_adpa;

-   if (HAS_PCH_SPLIT(dev))
-       turn_off_dac = true;
+       crt->force_hotplug_required = 0;

-   adpa &= ~ADPA_CRT_HOTPLUG_MASK;
-   if (turn_off_dac)
-       adpa &= ~ADPA_DAC_ENABLE;
+       save_adpa = adpa = I915_READ(PCH_ADPA);
+       DRM_DEBUG_KMS("trigger hotplug detect cycle: adpa=0x%x\n", adpa);

-   /* disable HPD first */
-   I915_WRITE(PCH_ADPA, adpa);
-   (void)I915_READ(PCH_ADPA);
+       adpa |= ADPA_CRT_HOTPLUG_FORCE_TRIGGER;
+       if (turn_off_dac)
+           adpa &= ~ADPA_DAC_ENABLE;

-   adpa |= (ADPA_CRT_HOTPLUG_PERIOD_128 |
-        ADPA_CRT_HOTPLUG_WARMUP_10MS |
-        ADPA_CRT_HOTPLUG_SAMPLE_4S |
-        ADPA_CRT_HOTPLUG_VOLTAGE_50 | /* default */
-        ADPA_CRT_HOTPLUG_VOLREF_325MV |
-        ADPA_CRT_HOTPLUG_ENABLE |
-        ADPA_CRT_HOTPLUG_FORCE_TRIGGER);
+       I915_WRITE(PCH_ADPA, adpa);

-   DRM_DEBUG_KMS("pch crt adpa 0x%x", adpa);
-   I915_WRITE(PCH_ADPA, adpa);
+       if (wait_for((I915_READ(PCH_ADPA) & ADPA_CRT_HOTPLUG_FORCE_TRIGGER) == 0,
+                1000))
+           DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER");

-   if (wait_for((I915_READ(PCH_ADPA) & ADPA_CRT_HOTPLUG_FORCE_TRIGGER) == 0,
-            1000))
-       DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER");
-
-   if (turn_off_dac) {
-       /* Make sure hotplug is enabled */
-       I915_WRITE(PCH_ADPA, temp | ADPA_CRT_HOTPLUG_ENABLE);
-       (void)I915_READ(PCH_ADPA);
+       if (turn_off_dac) {
+           I915_WRITE(PCH_ADPA, save_adpa);
+           POSTING_READ(PCH_ADPA);
+       }
    }

    /* Check the status to see if both blue and green are on now */
    adpa = I915_READ(PCH_ADPA);
-   adpa &= ADPA_CRT_HOTPLUG_MONITOR_MASK;
-   if ((adpa == ADPA_CRT_HOTPLUG_MONITOR_COLOR) ||
-       (adpa == ADPA_CRT_HOTPLUG_MONITOR_MONO))
+   if ((adpa & ADPA_CRT_HOTPLUG_MONITOR_MASK) != 0)
        ret = true;
    else
        ret = false;
+   DRM_DEBUG_KMS("ironlake hotplug adpa=0x%x, result %d\n", adpa, ret);

    return ret;
}
···
    return i2c_transfer(&dev_priv->gmbus[ddc_bus].adapter, msgs, 1) == 1;
}

-static bool intel_crt_detect_ddc(struct drm_encoder *encoder)
+static bool intel_crt_detect_ddc(struct intel_crt *crt)
{
-   struct intel_encoder *intel_encoder = to_intel_encoder(encoder);
-   struct drm_i915_private *dev_priv = encoder->dev->dev_private;
+   struct drm_i915_private *dev_priv = crt->base.base.dev->dev_private;

    /* CRT should always be at 0, but check anyway */
-   if (intel_encoder->type != INTEL_OUTPUT_ANALOG)
+   if (crt->base.type != INTEL_OUTPUT_ANALOG)
        return false;

    if (intel_crt_ddc_probe(dev_priv, dev_priv->crt_ddc_pin)) {
···
        return true;
    }

-   if (intel_ddc_probe(intel_encoder, dev_priv->crt_ddc_pin)) {
+   if (intel_ddc_probe(&crt->base, dev_priv->crt_ddc_pin)) {
        DRM_DEBUG_KMS("CRT detected via DDC:0x50 [EDID]\n");
        return true;
    }
···
}

static enum drm_connector_status
-intel_crt_load_detect(struct drm_crtc *crtc, struct intel_encoder *intel_encoder)
+intel_crt_load_detect(struct drm_crtc *crtc, struct intel_crt *crt)
{
-   struct drm_encoder *encoder = &intel_encoder->base;
+   struct drm_encoder *encoder = &crt->base.base;
    struct drm_device *dev = encoder->dev;
    struct drm_i915_private *dev_priv = dev->dev_private;
    struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
···
intel_crt_detect(struct drm_connector *connector, bool force)
{
    struct drm_device *dev = connector->dev;
-   struct intel_encoder *encoder = intel_attached_encoder(connector);
+   struct intel_crt *crt = intel_attached_crt(connector);
    struct drm_crtc *crtc;
    int dpms_mode;
    enum drm_connector_status status;
···
        if (intel_crt_detect_hotplug(connector)) {
            DRM_DEBUG_KMS("CRT detected via hotplug\n");
            return connector_status_connected;
-       } else
+       } else {
+           DRM_DEBUG_KMS("CRT not detected via hotplug\n");
            return connector_status_disconnected;
+       }
    }

-   if (intel_crt_detect_ddc(&encoder->base))
+   if (intel_crt_detect_ddc(crt))
        return connector_status_connected;

    if (!force)
        return connector->status;

    /* for pre-945g platforms use load detect */
-   if (encoder->base.crtc && encoder->base.crtc->enabled) {
-       status = intel_crt_load_detect(encoder->base.crtc, encoder);
+   crtc = crt->base.base.crtc;
+   if (crtc && crtc->enabled) {
+       status = intel_crt_load_detect(crtc, crt);
    } else {
-       crtc = intel_get_load_detect_pipe(encoder, connector,
+       crtc = intel_get_load_detect_pipe(&crt->base, connector,
                          NULL, &dpms_mode);
        if (crtc) {
-           if (intel_crt_detect_ddc(&encoder->base))
+           if (intel_crt_detect_ddc(crt))
                status = connector_status_connected;
            else
-               status = intel_crt_load_detect(crtc, encoder);
-           intel_release_load_detect_pipe(encoder,
+               status = intel_crt_load_detect(crtc, crt);
+           intel_release_load_detect_pipe(&crt->base,
                               connector, dpms_mode);
        } else
            status = connector_status_unknown;
···
void intel_crt_init(struct drm_device *dev)
{
    struct drm_connector *connector;
-   struct intel_encoder *intel_encoder;
+   struct intel_crt *crt;
    struct intel_connector *intel_connector;
    struct drm_i915_private *dev_priv = dev->dev_private;

-   intel_encoder = kzalloc(sizeof(struct intel_encoder), GFP_KERNEL);
-   if (!intel_encoder)
+   crt = kzalloc(sizeof(struct intel_crt), GFP_KERNEL);
+   if (!crt)
        return;

    intel_connector = kzalloc(sizeof(struct intel_connector), GFP_KERNEL);
    if (!intel_connector) {
-       kfree(intel_encoder);
+       kfree(crt);
        return;
    }
···
    drm_connector_init(dev, &intel_connector->base,
               &intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);

-   drm_encoder_init(dev, &intel_encoder->base, &intel_crt_enc_funcs,
+   drm_encoder_init(dev, &crt->base.base, &intel_crt_enc_funcs,
             DRM_MODE_ENCODER_DAC);

-   intel_connector_attach_encoder(intel_connector, intel_encoder);
+   intel_connector_attach_encoder(intel_connector, &crt->base);

-   intel_encoder->type = INTEL_OUTPUT_ANALOG;
-   intel_encoder->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) |
-                   (1 << INTEL_ANALOG_CLONE_BIT) |
-                   (1 << INTEL_SDVO_LVDS_CLONE_BIT);
-   intel_encoder->crtc_mask = (1 << 0) | (1 << 1);
+   crt->base.type = INTEL_OUTPUT_ANALOG;
+   crt->base.clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT |
+               1 << INTEL_ANALOG_CLONE_BIT |
+               1 << INTEL_SDVO_LVDS_CLONE_BIT);
+   crt->base.crtc_mask = (1 << 0) | (1 << 1);
    connector->interlace_allowed = 1;
    connector->doublescan_allowed = 0;

-   drm_encoder_helper_add(&intel_encoder->base, &intel_crt_helper_funcs);
+   drm_encoder_helper_add(&crt->base.base, &intel_crt_helper_funcs);
    drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs);

    drm_sysfs_connector_add(connector);
···
        connector->polled = DRM_CONNECTOR_POLL_HPD;
    else
        connector->polled = DRM_CONNECTOR_POLL_CONNECT;
+
+   /*
+    * Configure the automatic hotplug detection stuff
+    */
+   crt->force_hotplug_required = 0;
+   if (HAS_PCH_SPLIT(dev)) {
+       u32 adpa;
+
+       adpa = I915_READ(PCH_ADPA);
+       adpa &= ~ADPA_CRT_HOTPLUG_MASK;
+       adpa |= ADPA_HOTPLUG_BITS;
+       I915_WRITE(PCH_ADPA, adpa);
+       POSTING_READ(PCH_ADPA);
+
+       DRM_DEBUG_KMS("pch crt adpa set to 0x%x\n", adpa);
+       crt->force_hotplug_required = 1;
+   }

    dev_priv->hotplug_supported_mask |= CRT_HOTPLUG_INT_STATUS;
}
+12
drivers/gpu/drm/i915/intel_display.c
···

        wait_event(dev_priv->pending_flip_queue,
               atomic_read(&obj_priv->pending_flip) == 0);
+
+       /* Big Hammer, we also need to ensure that any pending
+        * MI_WAIT_FOR_EVENT inside a user batch buffer on the
+        * current scanout is retired before unpinning the old
+        * framebuffer.
+        */
+       ret = i915_gem_object_flush_gpu(obj_priv, false);
+       if (ret) {
+           i915_gem_object_unpin(to_intel_framebuffer(crtc->fb)->obj);
+           mutex_unlock(&dev->struct_mutex);
+           return ret;
+       }
    }

    ret = intel_pipe_set_base_atomic(crtc, crtc->fb, x, y,
···
    int hdmi_config_offset;
    int hdmi_audio_workaround;
    int hdmi_buffer_status;
+   bool is_ext_encoder;
};

struct radeon_connector_atom_dig {
···
    u8 dp_sink_type;
    int dp_clock;
    int dp_lane_count;
+   bool edp_on;
};

struct radeon_gpio_rec {
···
struct drm_encoder *radeon_encoder_legacy_tv_dac_add(struct drm_device *dev, int bios_index, int with_tv);
struct drm_encoder *radeon_encoder_legacy_tmds_int_add(struct drm_device *dev, int bios_index);
struct drm_encoder *radeon_encoder_legacy_tmds_ext_add(struct drm_device *dev, int bios_index);
-extern void atombios_external_tmds_setup(struct drm_encoder *encoder, int action);
+extern void atombios_dvo_setup(struct drm_encoder *encoder, int action);
extern void atombios_digital_setup(struct drm_encoder *encoder, int action);
extern int atombios_get_encoder_mode(struct drm_encoder *encoder);
+extern void atombios_set_edp_panel_power(struct drm_connector *connector, int action);
extern void radeon_encoder_set_active_device(struct drm_encoder *encoder);

extern void radeon_crtc_load_lut(struct drm_crtc *crtc);
+4-3
drivers/gpu/drm/radeon/radeon_object.c
···
}

int radeon_bo_create(struct radeon_device *rdev, struct drm_gem_object *gobj,
-           unsigned long size, bool kernel, u32 domain,
-           struct radeon_bo **bo_ptr)
+           unsigned long size, int byte_align, bool kernel, u32 domain,
+           struct radeon_bo **bo_ptr)
{
    struct radeon_bo *bo;
    enum ttm_bo_type type;
+   int page_align = roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT;
    int r;

    if (unlikely(rdev->mman.bdev.dev_mapping == NULL)) {
···
    /* Kernel allocation are uninterruptible */
    mutex_lock(&rdev->vram_mutex);
    r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type,
-           &bo->placement, 0, 0, !kernel, NULL, size,
+           &bo->placement, page_align, 0, !kernel, NULL, size,
            &radeon_ttm_bo_destroy);
    mutex_unlock(&rdev->vram_mutex);
    if (unlikely(r != 0)) {
+4-3
drivers/gpu/drm/radeon/radeon_object.h
···
}

extern int radeon_bo_create(struct radeon_device *rdev,
-               struct drm_gem_object *gobj, unsigned long size,
-               bool kernel, u32 domain,
-               struct radeon_bo **bo_ptr);
+               struct drm_gem_object *gobj, unsigned long size,
+               int byte_align,
+               bool kernel, u32 domain,
+               struct radeon_bo **bo_ptr);
extern int radeon_bo_kmap(struct radeon_bo *bo, void **ptr);
extern void radeon_bo_kunmap(struct radeon_bo *bo);
extern void radeon_bo_unref(struct radeon_bo **bo);
+3-3
drivers/gpu/drm/radeon/radeon_ring.c
···
    INIT_LIST_HEAD(&rdev->ib_pool.bogus_ib);
    /* Allocate 1M object buffer */
    r = radeon_bo_create(rdev, NULL, RADEON_IB_POOL_SIZE*64*1024,
-                true, RADEON_GEM_DOMAIN_GTT,
-                &rdev->ib_pool.robj);
+                PAGE_SIZE, true, RADEON_GEM_DOMAIN_GTT,
+                &rdev->ib_pool.robj);
    if (r) {
        DRM_ERROR("radeon: failed to ib pool (%d).\n", r);
        return r;
···
    rdev->cp.ring_size = ring_size;
    /* Allocate ring buffer */
    if (rdev->cp.ring_obj == NULL) {
-       r = radeon_bo_create(rdev, NULL, rdev->cp.ring_size, true,
+       r = radeon_bo_create(rdev, NULL, rdev->cp.ring_size, PAGE_SIZE, true,
                     RADEON_GEM_DOMAIN_GTT,
                     &rdev->cp.ring_obj);
        if (r) {
+2-2
drivers/gpu/drm/radeon/radeon_test.c
···
        goto out_cleanup;
    }

-   r = radeon_bo_create(rdev, NULL, size, true, RADEON_GEM_DOMAIN_VRAM,
+   r = radeon_bo_create(rdev, NULL, size, PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
                 &vram_obj);
    if (r) {
        DRM_ERROR("Failed to create VRAM object\n");
···
        void **gtt_start, **gtt_end;
        void **vram_start, **vram_end;

-       r = radeon_bo_create(rdev, NULL, size, true,
+       r = radeon_bo_create(rdev, NULL, size, PAGE_SIZE, true,
                     RADEON_GEM_DOMAIN_GTT, gtt_obj + i);
        if (r) {
            DRM_ERROR("Failed to create GTT object %d\n", i);
+1-1
drivers/gpu/drm/radeon/radeon_ttm.c
···
        DRM_ERROR("Failed initializing VRAM heap.\n");
        return r;
    }
-   r = radeon_bo_create(rdev, NULL, 256 * 1024, true,
+   r = radeon_bo_create(rdev, NULL, 256 * 1024, PAGE_SIZE, true,
                 RADEON_GEM_DOMAIN_VRAM,
                 &rdev->stollen_vga_memory);
    if (r) {
+2-2
drivers/gpu/drm/radeon/rv770.c
···
 
 	if (rdev->vram_scratch.robj == NULL) {
 		r = radeon_bo_create(rdev, NULL, RADEON_GPU_PAGE_SIZE,
-				     true, RADEON_GEM_DOMAIN_VRAM,
-				     &rdev->vram_scratch.robj);
+				     PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
+				     &rdev->vram_scratch.robj);
 		if (r) {
 			return r;
 		}
+11
drivers/gpu/drm/ttm/ttm_bo.c
···
 	int ret;
 
 	while (unlikely(atomic_cmpxchg(&bo->reserved, 0, 1) != 0)) {
+		/**
+		 * Deadlock avoidance for multi-bo reserving.
+		 */
 		if (use_sequence && bo->seq_valid &&
 		    (sequence - bo->val_seq < (1 << 31))) {
 			return -EAGAIN;
···
 	}
 
 	if (use_sequence) {
+		/**
+		 * Wake up waiters that may need to recheck for deadlock,
+		 * if we decreased the sequence number.
+		 */
+		if (unlikely((bo->val_seq - sequence < (1 << 31))
+			     || !bo->seq_valid))
+			wake_up_all(&bo->event_queue);
+
 		bo->val_seq = sequence;
 		bo->seq_valid = true;
 	} else {
···
 		usb_free_urb(urb);
 	}
 
-	ret = usb_wait_anchor_empty_timeout(&ar->tx_cmd, HZ);
+	ret = usb_wait_anchor_empty_timeout(&ar->tx_cmd, 1000);
 	if (ret == 0)
 		err = -ETIMEDOUT;
 
 	/* lets wait a while until the tx - queues are dried out */
-	ret = usb_wait_anchor_empty_timeout(&ar->tx_anch, HZ);
+	ret = usb_wait_anchor_empty_timeout(&ar->tx_anch, 1000);
 	if (ret == 0)
 		err = -ETIMEDOUT;
 
···
 	wake_up(&device->state_change_wq);
 }
 
+struct tape_med_state_work_data {
+	struct tape_device *device;
+	enum tape_medium_state state;
+	struct work_struct work;
+};
+
+static void
+tape_med_state_work_handler(struct work_struct *work)
+{
+	static char env_state_loaded[] = "MEDIUM_STATE=LOADED";
+	static char env_state_unloaded[] = "MEDIUM_STATE=UNLOADED";
+	struct tape_med_state_work_data *p =
+		container_of(work, struct tape_med_state_work_data, work);
+	struct tape_device *device = p->device;
+	char *envp[] = { NULL, NULL };
+
+	switch (p->state) {
+	case MS_UNLOADED:
+		pr_info("%s: The tape cartridge has been successfully "
+			"unloaded\n", dev_name(&device->cdev->dev));
+		envp[0] = env_state_unloaded;
+		kobject_uevent_env(&device->cdev->dev.kobj, KOBJ_CHANGE, envp);
+		break;
+	case MS_LOADED:
+		pr_info("%s: A tape cartridge has been mounted\n",
+			dev_name(&device->cdev->dev));
+		envp[0] = env_state_loaded;
+		kobject_uevent_env(&device->cdev->dev.kobj, KOBJ_CHANGE, envp);
+		break;
+	default:
+		break;
+	}
+	tape_put_device(device);
+	kfree(p);
+}
+
+static void
+tape_med_state_work(struct tape_device *device, enum tape_medium_state state)
+{
+	struct tape_med_state_work_data *p;
+
+	p = kzalloc(sizeof(*p), GFP_ATOMIC);
+	if (p) {
+		INIT_WORK(&p->work, tape_med_state_work_handler);
+		p->device = tape_get_device(device);
+		p->state = state;
+		schedule_work(&p->work);
+	}
+}
+
 void
 tape_med_state_set(struct tape_device *device, enum tape_medium_state newstate)
 {
-	if (device->medium_state == newstate)
+	enum tape_medium_state oldstate;
+
+	oldstate = device->medium_state;
+	if (oldstate == newstate)
 		return;
+	device->medium_state = newstate;
 	switch(newstate){
 	case MS_UNLOADED:
 		device->tape_generic_status |= GMT_DR_OPEN(~0);
-		if (device->medium_state == MS_LOADED)
-			pr_info("%s: The tape cartridge has been successfully "
-				"unloaded\n", dev_name(&device->cdev->dev));
+		if (oldstate == MS_LOADED)
+			tape_med_state_work(device, MS_UNLOADED);
 		break;
 	case MS_LOADED:
 		device->tape_generic_status &= ~GMT_DR_OPEN(~0);
-		if (device->medium_state == MS_UNLOADED)
-			pr_info("%s: A tape cartridge has been mounted\n",
-				dev_name(&device->cdev->dev));
+		if (oldstate == MS_UNLOADED)
+			tape_med_state_work(device, MS_LOADED);
 		break;
 	default:
-		// print nothing
 		break;
 	}
-	device->medium_state = newstate;
 	wake_up(&device->state_change_wq);
}
+24-13
drivers/s390/char/vmlogrdr.c
···
 #include <linux/kmod.h>
 #include <linux/cdev.h>
 #include <linux/device.h>
-#include <linux/smp_lock.h>
 #include <linux/string.h>
 
 MODULE_AUTHOR
···
 	char cp_command[80];
 	char cp_response[160];
 	char *onoff, *qid_string;
+	int rc;
 
-	memset(cp_command, 0x00, sizeof(cp_command));
-	memset(cp_response, 0x00, sizeof(cp_response));
-
-	onoff = ((action == 1) ? "ON" : "OFF");
+	onoff = ((action == 1) ? "ON" : "OFF");
 	qid_string = ((recording_class_AB == 1) ? " QID * " : "");
 
-	/*
+	/*
 	 * The recording commands needs to be called with option QID
 	 * for guests that have previlege classes A or B.
 	 * Purging has to be done as separate step, because recording
 	 * can't be switched on as long as records are on the queue.
 	 * Doing both at the same time doesn't work.
 	 */
-
-	if (purge) {
+	if (purge && (action == 1)) {
+		memset(cp_command, 0x00, sizeof(cp_command));
+		memset(cp_response, 0x00, sizeof(cp_response));
 		snprintf(cp_command, sizeof(cp_command),
 			 "RECORDING %s PURGE %s",
 			 logptr->recording_name,
 			 qid_string);
-
 		cpcmd(cp_command, cp_response, sizeof(cp_response), NULL);
 	}
···
 		 logptr->recording_name,
 		 onoff,
 		 qid_string);
-
 	cpcmd(cp_command, cp_response, sizeof(cp_response), NULL);
 	/* The recording command will usually answer with 'Command complete'
 	 * on success, but when the specific service was never connected
 	 * before then there might be an additional informational message
 	 * 'HCPCRC8072I Recording entry not found' before the
-	 * 'Command complete'. So I use strstr rather then the strncmp.
+	 * 'Command complete'. So I use strstr rather then the strncmp.
 	 */
 	if (strstr(cp_response,"Command complete"))
-		return 0;
+		rc = 0;
 	else
-		return -EIO;
+		rc = -EIO;
+	/*
+	 * If we turn recording off, we have to purge any remaining records
+	 * afterwards, as a large number of queued records may impact z/VM
+	 * performance.
+	 */
+	if (purge && (action == 0)) {
+		memset(cp_command, 0x00, sizeof(cp_command));
+		memset(cp_response, 0x00, sizeof(cp_response));
+		snprintf(cp_command, sizeof(cp_command),
+			 "RECORDING %s PURGE %s",
+			 logptr->recording_name,
+			 qid_string);
+		cpcmd(cp_command, cp_response, sizeof(cp_response), NULL);
+	}
 
+	return rc;
 }
···
 		break;
 	case IO_SCH_UNREG_ATTACH:
 	case IO_SCH_UNREG:
-		if (cdev)
+		if (!cdev)
+			break;
+		if (cdev->private->state == DEV_STATE_SENSE_ID) {
+			/*
+			 * Note: delayed work triggered by this event
+			 * and repeated calls to sch_event are synchronized
+			 * by the above check for work_pending(cdev).
+			 */
+			dev_fsm_event(cdev, DEV_EVENT_NOTOPER);
+		} else
 			ccw_device_set_notoper(cdev);
 		break;
 	case IO_SCH_NOP:
···
 	scpnt->scsi_done(scpnt);
 }
 
-static int zfcp_scsi_queuecommand(struct scsi_cmnd *scpnt,
+static int zfcp_scsi_queuecommand_lck(struct scsi_cmnd *scpnt,
 				  void (*done) (struct scsi_cmnd *))
 {
 	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(scpnt->device);
···
 
 	return ret;
 }
+
+static DEF_SCSI_QCMD(zfcp_scsi_queuecommand)
 
 static int zfcp_scsi_slave_alloc(struct scsi_device *sdev)
 {
+3-1
drivers/scsi/3w-9xxx.c
···
 } /* End twa_scsi_eh_reset() */
 
 /* This is the main scsi queue function to handle scsi opcodes */
-static int twa_scsi_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
+static int twa_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
 {
 	int request_id, retval;
 	TW_Device_Extension *tw_dev = (TW_Device_Extension *)SCpnt->device->host->hostdata;
···
 out:
 	return retval;
 } /* End twa_scsi_queue() */
+
+static DEF_SCSI_QCMD(twa_scsi_queue)
 
 /* This function hands scsi cdb's to the firmware */
 static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Entry *sglistarg)
+3-1
drivers/scsi/3w-sas.c
···
 } /* End twl_scsi_eh_reset() */
 
 /* This is the main scsi queue function to handle scsi opcodes */
-static int twl_scsi_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
+static int twl_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
 {
 	int request_id, retval;
 	TW_Device_Extension *tw_dev = (TW_Device_Extension *)SCpnt->device->host->hostdata;
···
 out:
 	return retval;
 } /* End twl_scsi_queue() */
+
+static DEF_SCSI_QCMD(twl_scsi_queue)
 
 /* This function tells the controller to shut down */
 static void __twl_shutdown(TW_Device_Extension *tw_dev)
+3-1
drivers/scsi/3w-xxxx.c
···
 } /* End tw_scsiop_test_unit_ready_complete() */
 
 /* This is the main scsi queue function to handle scsi opcodes */
-static int tw_scsi_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
+static int tw_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
 {
 	unsigned char *command = SCpnt->cmnd;
 	int request_id = 0;
···
 	}
 	return retval;
 } /* End tw_scsi_queue() */
+
+static DEF_SCSI_QCMD(tw_scsi_queue)
 
 /* This function is the interrupt service routine */
 static irqreturn_t tw_interrupt(int irq, void *dev_instance)
···
 };
 
 static int aha1542_detect(struct scsi_host_template *);
-static int aha1542_queuecommand(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));
+static int aha1542_queuecommand(struct Scsi_Host *, struct scsi_cmnd *);
 static int aha1542_bus_reset(Scsi_Cmnd * SCpnt);
 static int aha1542_dev_reset(Scsi_Cmnd * SCpnt);
 static int aha1542_host_reset(Scsi_Cmnd * SCpnt);
+3-1
drivers/scsi/aha1740.c
···
 	return IRQ_RETVAL(handled);
 }
 
-static int aha1740_queuecommand(Scsi_Cmnd * SCpnt, void (*done)(Scsi_Cmnd *))
+static int aha1740_queuecommand_lck(Scsi_Cmnd * SCpnt, void (*done)(Scsi_Cmnd *))
 {
 	unchar direction;
 	unchar *cmd = (unchar *) SCpnt->cmnd;
···
 	printk(KERN_ALERT "aha1740_queuecommand: done can't be NULL\n");
 	return 0;
 }
+
+static DEF_SCSI_QCMD(aha1740_queuecommand)
 
 /* Query the board for its irq_level and irq_type.  Nothing else matters
    in enhanced mode on an EISA bus. */
···
  * Description:
  *   Queue a SCB to the controller.
  *-F*************************************************************************/
-static int aic7xxx_queue(struct scsi_cmnd *cmd, void (*fn)(struct scsi_cmnd *))
+static int aic7xxx_queue_lck(struct scsi_cmnd *cmd, void (*fn)(struct scsi_cmnd *))
 {
 	struct aic7xxx_host *p;
 	struct aic7xxx_scb *scb;
···
 	aic7xxx_run_waiting_queues(p);
 	return (0);
 }
+
+static DEF_SCSI_QCMD(aic7xxx_queue)
 
 /*+F*************************************************************************
  * Function:
+4-3
drivers/scsi/arcmsr/arcmsr_hba.c
···
 static int arcmsr_bus_reset(struct scsi_cmnd *);
 static int arcmsr_bios_param(struct scsi_device *sdev,
 		struct block_device *bdev, sector_t capacity, int *info);
-static int arcmsr_queue_command(struct scsi_cmnd *cmd,
-	void (*done) (struct scsi_cmnd *));
+static int arcmsr_queue_command(struct Scsi_Host *h, struct scsi_cmnd *cmd);
 static int arcmsr_probe(struct pci_dev *pdev,
 				const struct pci_device_id *id);
 static void arcmsr_remove(struct pci_dev *pdev);
···
 	}
 }
 
-static int arcmsr_queue_command(struct scsi_cmnd *cmd,
+static int arcmsr_queue_command_lck(struct scsi_cmnd *cmd,
 	void (* done)(struct scsi_cmnd *))
 {
 	struct Scsi_Host *host = cmd->device->host;
···
 	arcmsr_post_ccb(acb, ccb);
 	return 0;
 }
+
+static DEF_SCSI_QCMD(arcmsr_queue_command)
 
 static bool arcmsr_get_hba_config(struct AdapterControlBlock *acb)
 {
+3-1
drivers/scsi/arm/acornscsi.c
···
  * done - function called on completion, with pointer to command descriptor
  * Returns  : 0, or < 0 on error.
  */
-int acornscsi_queuecmd(struct scsi_cmnd *SCpnt,
+static int acornscsi_queuecmd_lck(struct scsi_cmnd *SCpnt,
 		       void (*done)(struct scsi_cmnd *))
 {
 	AS_Host *host = (AS_Host *)SCpnt->device->host->hostdata;
···
 	}
 	return 0;
 }
+
+DEF_SCSI_QCMD(acornscsi_queuecmd)
 
 /*
  * Prototype: void acornscsi_reportstatus(struct scsi_cmnd **SCpntp1, struct scsi_cmnd **SCpntp2, int result)
+7-3
drivers/scsi/arm/fas216.c
···
  * Returns: 0 on success, else error.
  * Notes: io_request_lock is held, interrupts are disabled.
  */
-int fas216_queue_command(struct scsi_cmnd *SCpnt,
+static int fas216_queue_command_lck(struct scsi_cmnd *SCpnt,
 			 void (*done)(struct scsi_cmnd *))
 {
 	FAS216_Info *info = (FAS216_Info *)SCpnt->device->host->hostdata;
···
 	return result;
 }
 
+DEF_SCSI_QCMD(fas216_queue_command)
+
 /**
  * fas216_internal_done - trigger restart of a waiting thread in fas216_noqueue_command
  * @SCpnt: Command to wake
···
  * Returns: scsi result code.
  * Notes: io_request_lock is held, interrupts are disabled.
  */
-int fas216_noqueue_command(struct scsi_cmnd *SCpnt,
+static int fas216_noqueue_command_lck(struct scsi_cmnd *SCpnt,
 			   void (*done)(struct scsi_cmnd *))
 {
 	FAS216_Info *info = (FAS216_Info *)SCpnt->device->host->hostdata;
···
 	BUG_ON(info->scsi.irq != NO_IRQ);
 
 	info->internal_done = 0;
-	fas216_queue_command(SCpnt, fas216_internal_done);
+	fas216_queue_command_lck(SCpnt, fas216_internal_done);
 
 	/*
 	 * This wastes time, since we can't return until the command is
···
 
 	return 0;
 }
+
+DEF_SCSI_QCMD(fas216_noqueue_command)
 
 /*
  * Error handler timeout function. Indicate that we timed out,
+8-10
drivers/scsi/arm/fas216.h
···
  */
 extern int fas216_add (struct Scsi_Host *instance, struct device *dev);
 
-/* Function: int fas216_queue_command(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
+/* Function: int fas216_queue_command(struct Scsi_Host *h, struct scsi_cmnd *SCpnt)
  * Purpose : queue a command for adapter to process.
- * Params  : SCpnt - Command to queue
- *	     done  - done function to call once command is complete
+ * Params  : h - host adapter
+ *	   : SCpnt - Command to queue
  * Returns : 0 - success, else error
  */
-extern int fas216_queue_command(struct scsi_cmnd *,
-				void (*done)(struct scsi_cmnd *));
+extern int fas216_queue_command(struct Scsi_Host *h, struct scsi_cmnd *SCpnt);
 
-/* Function: int fas216_noqueue_command(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
+/* Function: int fas216_noqueue_command(struct Scsi_Host *h, struct scsi_cmnd *SCpnt)
  * Purpose : queue a command for adapter to process, and process it to completion.
- * Params  : SCpnt - Command to queue
- *	     done  - done function to call once command is complete
+ * Params  : h - host adapter
+ *	   : SCpnt - Command to queue
  * Returns : 0 - success, else error
  */
-extern int fas216_noqueue_command(struct scsi_cmnd *,
-				  void (*done)(struct scsi_cmnd *));
+extern int fas216_noqueue_command(struct Scsi_Host *, struct scsi_cmnd *);
 
 /* Function: irqreturn_t fas216_intr (FAS216_Info *info)
  * Purpose : handle interrupts from the interface to progress a command
+3-1
drivers/scsi/atari_NCR5380.c
···
  *
  */
 
-static int NCR5380_queue_command(Scsi_Cmnd *cmd, void (*done)(Scsi_Cmnd *))
+static int NCR5380_queue_command_lck(Scsi_Cmnd *cmd, void (*done)(Scsi_Cmnd *))
 {
 	SETUP_HOSTDATA(cmd->device->host);
 	Scsi_Cmnd *tmp;
···
 	NCR5380_main(NULL);
 	return 0;
 }
+
+static DEF_SCSI_QCMD(NCR5380_queue_command)
 
 /*
  * Function : NCR5380_main (void)
-17
drivers/scsi/atari_scsi.c
···
 }
 
 
-/* This is the wrapper function for NCR5380_queue_command(). It just
- * tries to get the lock on the ST-DMA (see above) and then calls the
- * original function.
- */
-
-#if 0
-int atari_queue_command(Scsi_Cmnd *cmd, void (*done)(Scsi_Cmnd *))
-{
-	/* falcon_get_lock();
-	 * ++guenther: moved to NCR5380_queue_command() to prevent
-	 * race condition, see there for an explanation.
-	 */
-	return NCR5380_queue_command(cmd, done);
-}
-#endif
-
-
 int __init atari_scsi_detect(struct scsi_host_template *host)
 {
 	static int called = 0;
+3-1
drivers/scsi/atp870u.c
···
  *
  *	Queue a command to the ATP queue. Called with the host lock held.
  */
-static int atp870u_queuecommand(struct scsi_cmnd * req_p,
+static int atp870u_queuecommand_lck(struct scsi_cmnd *req_p,
 			 void (*done) (struct scsi_cmnd *))
 {
 	unsigned char c;
···
 #endif
 	return 0;
 }
+
+static DEF_SCSI_QCMD(atp870u_queuecommand)
 
 /**
  *	send_s870 - send a command to the controller
···
  * and is expected to be held on return.
  *
  **/
-static int dc395x_queue_command(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
+static int dc395x_queue_command_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
 {
 	struct DeviceCtlBlk *dcb;
 	struct ScsiReqBlk *srb;
···
 	return 0;
 }
 
+static DEF_SCSI_QCMD(dc395x_queue_command)
 
 /*
  * Return the disk geometry for the given SCSI device.
+3-1
drivers/scsi/dpt_i2o.c
···
 	return 0;
 }
 
-static int adpt_queue(struct scsi_cmnd * cmd, void (*done) (struct scsi_cmnd *))
+static int adpt_queue_lck(struct scsi_cmnd * cmd, void (*done) (struct scsi_cmnd *))
 {
 	adpt_hba* pHba = NULL;
 	struct adpt_device* pDev = NULL;	/* dpt per device information */
···
 	}
 	return adpt_scsi_to_i2o(pHba, cmd, pDev);
 }
+
+static DEF_SCSI_QCMD(adpt_queue)
 
 static int adpt_bios_param(struct scsi_device *sdev, struct block_device *dev,
 		sector_t capacity, int geom[])
+1-1
drivers/scsi/dpti.h
···
  */
 
 static int adpt_detect(struct scsi_host_template * sht);
-static int adpt_queue(struct scsi_cmnd * cmd, void (*cmdcomplete) (struct scsi_cmnd *));
+static int adpt_queue(struct Scsi_Host *h, struct scsi_cmnd * cmd);
 static int adpt_abort(struct scsi_cmnd * cmd);
 static int adpt_reset(struct scsi_cmnd* cmd);
 static int adpt_release(struct Scsi_Host *host);
+1-1
drivers/scsi/dtc.h
···
 static int dtc_biosparam(struct scsi_device *, struct block_device *,
 			 sector_t, int*);
 static int dtc_detect(struct scsi_host_template *);
-static int dtc_queue_command(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));
+static int dtc_queue_command(struct Scsi_Host *, struct scsi_cmnd *);
 static int dtc_bus_reset(Scsi_Cmnd *);
 
 #ifndef CMD_PER_LUN
+4-3
drivers/scsi/eata.c
···
 
 static int eata2x_detect(struct scsi_host_template *);
 static int eata2x_release(struct Scsi_Host *);
-static int eata2x_queuecommand(struct scsi_cmnd *,
-			       void (*done) (struct scsi_cmnd *));
+static int eata2x_queuecommand(struct Scsi_Host *, struct scsi_cmnd *);
 static int eata2x_eh_abort(struct scsi_cmnd *);
 static int eata2x_eh_host_reset(struct scsi_cmnd *);
 static int eata2x_bios_param(struct scsi_device *, struct block_device *,
···
 
 }
 
-static int eata2x_queuecommand(struct scsi_cmnd *SCpnt,
+static int eata2x_queuecommand_lck(struct scsi_cmnd *SCpnt,
 			       void (*done) (struct scsi_cmnd *))
 {
 	struct Scsi_Host *shost = SCpnt->device->host;
···
 	ha->cp_stat[i] = IN_USE;
 	return 0;
 }
+
+static DEF_SCSI_QCMD(eata2x_queuecommand)
 
 static int eata2x_eh_abort(struct scsi_cmnd *SCarg)
 {
+3-1
drivers/scsi/eata_pio.c
···
 	return 0;
 }
 
-static int eata_pio_queue(struct scsi_cmnd *cmd,
+static int eata_pio_queue_lck(struct scsi_cmnd *cmd,
 			  void (*done)(struct scsi_cmnd *))
 {
 	unsigned int x, y;
···
 
 	return 0;
 }
+
+static DEF_SCSI_QCMD(eata_pio_queue)
 
 static int eata_pio_abort(struct scsi_cmnd *cmd)
 {
···
 #include <scsi/scsi_host.h>
 
 /* Common forward declarations for all Linux-versions: */
-static int ibmmca_queuecommand (Scsi_Cmnd *, void (*done) (Scsi_Cmnd *));
+static int ibmmca_queuecommand (struct Scsi_Host *, struct scsi_cmnd *);
 static int ibmmca_abort (Scsi_Cmnd *);
 static int ibmmca_host_reset (Scsi_Cmnd *);
 static int ibmmca_biosparam (struct scsi_device *, struct block_device *, sector_t, int *);
···
 }
 
 /* The following routine is the SCSI command queue for the midlevel driver */
-static int ibmmca_queuecommand(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *))
+static int ibmmca_queuecommand_lck(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *))
 {
 	unsigned int ldn;
 	unsigned int scsi_cmd;
···
 	}
 	return 0;
 }
+
+static DEF_SCSI_QCMD(ibmmca_queuecommand)
 
 static int __ibmmca_abort(Scsi_Cmnd * cmd)
 {
+3-1
drivers/scsi/ibmvscsi/ibmvfc.c
···
  * Returns:
  *	0 on success / other on failure
  **/
-static int ibmvfc_queuecommand(struct scsi_cmnd *cmnd,
+static int ibmvfc_queuecommand_lck(struct scsi_cmnd *cmnd,
 			       void (*done) (struct scsi_cmnd *))
 {
 	struct ibmvfc_host *vhost = shost_priv(cmnd->device->host);
···
 	done(cmnd);
 	return 0;
 }
+
+static DEF_SCSI_QCMD(ibmvfc_queuecommand)
 
 /**
  * ibmvfc_sync_completion - Signal that a synchronous command has completed
+3-1
drivers/scsi/ibmvscsi/ibmvscsi.c
···
  * @cmd: struct scsi_cmnd to be executed
  * @done: Callback function to be called when cmd is completed
*/
-static int ibmvscsi_queuecommand(struct scsi_cmnd *cmnd,
+static int ibmvscsi_queuecommand_lck(struct scsi_cmnd *cmnd,
 				 void (*done) (struct scsi_cmnd *))
 {
 	struct srp_cmd *srp_cmd;
···
 
 	return ibmvscsi_send_srp_event(evt_struct, hostdata, 0);
 }
+
+static DEF_SCSI_QCMD(ibmvscsi_queuecommand)
 
 /* ------------------------------------------------------------
  * Routines for driver initialization
+3-1
drivers/scsi/imm.c
···
 	return 0;
 }
 
-static int imm_queuecommand(struct scsi_cmnd *cmd,
+static int imm_queuecommand_lck(struct scsi_cmnd *cmd,
 			    void (*done)(struct scsi_cmnd *))
 {
 	imm_struct *dev = imm_dev(cmd->device->host);
···
 
 	return 0;
 }
+
+static DEF_SCSI_QCMD(imm_queuecommand)
 
 /*
  * Apparently the disk->capacity attribute is off by 1 sector
···
 		     flags)
 
 static int in2000_detect(struct scsi_host_template *) in2000__INIT;
-static int in2000_queuecommand(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));
+static int in2000_queuecommand(struct Scsi_Host *, struct scsi_cmnd *);
 static int in2000_abort(Scsi_Cmnd *);
 static void in2000_setup(char *, int *) in2000__INIT;
 static int in2000_biosparam(struct scsi_device *, struct block_device *,
+3-1
drivers/scsi/initio.c
···
  *	will cause the mid layer to call us again later with the command)
  */
 
-static int i91u_queuecommand(struct scsi_cmnd *cmd,
+static int i91u_queuecommand_lck(struct scsi_cmnd *cmd,
 		void (*done)(struct scsi_cmnd *))
 {
 	struct initio_host *host = (struct initio_host *) cmd->device->host->hostdata;
···
 	initio_exec_scb(host, cmnd);
 	return 0;
 }
+
+static DEF_SCSI_QCMD(i91u_queuecommand)
 
 /**
  *	i91u_bus_reset - reset the SCSI bus
+3-1
drivers/scsi/ipr.c
···
  *	SCSI_MLQUEUE_DEVICE_BUSY if device is busy
  *	SCSI_MLQUEUE_HOST_BUSY if host is busy
  **/
-static int ipr_queuecommand(struct scsi_cmnd *scsi_cmd,
+static int ipr_queuecommand_lck(struct scsi_cmnd *scsi_cmd,
 			    void (*done) (struct scsi_cmnd *))
 {
 	struct ipr_ioa_cfg *ioa_cfg;
···
 
 	return 0;
 }
+
+static DEF_SCSI_QCMD(ipr_queuecommand)
 
 /**
  * ipr_ioctl - IOCTL handler
+4-2
drivers/scsi/ips.c
···
 static int ips_release(struct Scsi_Host *);
 static int ips_eh_abort(struct scsi_cmnd *);
 static int ips_eh_reset(struct scsi_cmnd *);
-static int ips_queue(struct scsi_cmnd *, void (*)(struct scsi_cmnd *));
+static int ips_queue(struct Scsi_Host *, struct scsi_cmnd *);
 static const char *ips_info(struct Scsi_Host *);
 static irqreturn_t do_ipsintr(int, void *);
 static int ips_hainit(ips_ha_t *);
···
 /* Linux obtains io_request_lock before calling this function            */
 /*                                                                       */
 /****************************************************************************/
-static int ips_queue(struct scsi_cmnd *SC, void (*done) (struct scsi_cmnd *))
+static int ips_queue_lck(struct scsi_cmnd *SC, void (*done) (struct scsi_cmnd *))
 {
 	ips_ha_t *ha;
 	ips_passthru_t *pt;
···
 
 	return (0);
 }
+
+static DEF_SCSI_QCMD(ips_queue)
 
 /****************************************************************************/
 /*                                                                          */
+3-1
drivers/scsi/libfc/fc_fcp.c
···
  * This is the i/o strategy routine, called by the SCSI layer. This routine
  * is called with the host_lock held.
  */
-int fc_queuecommand(struct scsi_cmnd *sc_cmd, void (*done)(struct scsi_cmnd *))
+static int fc_queuecommand_lck(struct scsi_cmnd *sc_cmd, void (*done)(struct scsi_cmnd *))
 {
 	struct fc_lport *lport;
 	struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device));
···
 	spin_lock_irq(lport->host->host_lock);
 	return rc;
 }
+
+DEF_SCSI_QCMD(fc_queuecommand)
 EXPORT_SYMBOL(fc_queuecommand);
 
 /**
···
  * Called by midlayer with host locked to queue a new
  * request
  */
-static int mesh_queue(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
+static int mesh_queue_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
 {
 	struct mesh_state *ms;
 
···
 
 	return 0;
 }
+
+static DEF_SCSI_QCMD(mesh_queue)
 
 /*
  * Called to handle interrupts, either call by the interrupt
+3-1
drivers/scsi/mpt2sas/mpt2sas_scsih.c
···
  * SCSI_MLQUEUE_HOST_BUSY if the entire host queue is full
  */
 static int
-_scsih_qcmd(struct scsi_cmnd *scmd, void (*done)(struct scsi_cmnd *))
+_scsih_qcmd_lck(struct scsi_cmnd *scmd, void (*done)(struct scsi_cmnd *))
 {
 	struct MPT2SAS_ADAPTER *ioc = shost_priv(scmd->device->host);
 	struct MPT2SAS_DEVICE *sas_device_priv_data;
···
  out:
 	return SCSI_MLQUEUE_HOST_BUSY;
 }
+
+static DEF_SCSI_QCMD(_scsih_qcmd)
 
 /**
  * _scsih_normalize_sense - normalize descriptor and fixed format sense data
···
 static int  nsp32_proc_info   (struct Scsi_Host *, char *, char **, off_t, int, int);
 
 static int  nsp32_detect      (struct pci_dev *pdev);
-static int  nsp32_queuecommand(struct scsi_cmnd *,
-			       void (*done)(struct scsi_cmnd *));
+static int  nsp32_queuecommand(struct Scsi_Host *, struct scsi_cmnd *);
 static const char *nsp32_info (struct Scsi_Host *);
 static int  nsp32_release     (struct Scsi_Host *);
···
 	return TRUE;
 }
 
-static int nsp32_queuecommand(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
+static int nsp32_queuecommand_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
 {
 	nsp32_hw_data *data = (nsp32_hw_data *)SCpnt->device->host->hostdata;
 	nsp32_target *target;
···
 
 	return 0;
 }
+
+static DEF_SCSI_QCMD(nsp32_queuecommand)
 
 /* initialize asic */
 static int nsp32hw_init(nsp32_hw_data *data)
+1-1
drivers/scsi/pas16.h
···
 static int pas16_biosparam(struct scsi_device *, struct block_device *,
 			   sector_t, int*);
 static int pas16_detect(struct scsi_host_template *);
-static int pas16_queue_command(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));
+static int pas16_queue_command(struct Scsi_Host *, struct scsi_cmnd *);
 static int pas16_bus_reset(Scsi_Cmnd *);
 
 #ifndef CMD_PER_LUN
+3-1
drivers/scsi/pcmcia/nsp_cs.c
···
 	SCpnt->scsi_done(SCpnt);
 }
 
-static int nsp_queuecommand(struct scsi_cmnd *SCpnt,
+static int nsp_queuecommand_lck(struct scsi_cmnd *SCpnt,
 			    void (*done)(struct scsi_cmnd *))
 {
 #ifdef NSP_DEBUG
···
 #endif
 	return 0;
 }
+
+static DEF_SCSI_QCMD(nsp_queuecommand)
 
 /*
  * setup PIO FIFO transfer mode and enable/disable to data out
+1-2
drivers/scsi/pcmcia/nsp_cs.h
···
 			    off_t   offset,
 			    int     length,
 			    int     inout);
-static int nsp_queuecommand(struct scsi_cmnd *SCpnt,
-			    void (* done)(struct scsi_cmnd *SCpnt));
+static int nsp_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *SCpnt);
 
 /* Error handler */
 /*static int nsp_eh_abort       (struct scsi_cmnd *SCpnt);*/
+3-1
drivers/scsi/pcmcia/sym53c500_cs.c
···
 }
 
 static int
-SYM53C500_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
+SYM53C500_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
 {
 	int i;
 	int port_base = SCpnt->device->host->io_port;
···
 
 	return 0;
 }
+
+static DEF_SCSI_QCMD(SYM53C500_queue)
 
 static int
 SYM53C500_host_reset(struct scsi_cmnd *SCpnt)
···
  *
  * "This code must fly." -davem
  */
-static int qlogicpti_queuecommand(struct scsi_cmnd *Cmnd, void (*done)(struct scsi_cmnd *))
+static int qlogicpti_queuecommand_lck(struct scsi_cmnd *Cmnd, void (*done)(struct scsi_cmnd *))
 {
 	struct Scsi_Host *host = Cmnd->device->host;
 	struct qlogicpti *qpti = (struct qlogicpti *) host->hostdata;
···
 	done(Cmnd);
 	return 1;
 }
+
+static DEF_SCSI_QCMD(qlogicpti_queuecommand)
 
 static int qlogicpti_return_status(struct Status_Entry *sts, int id)
 {
+5-13
drivers/scsi/scsi.c
···
  * Description: a serial number identifies a request for error recovery
  * and debugging purposes.  Protected by the Host_Lock of host.
  */
-static inline void scsi_cmd_get_serial(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+void scsi_cmd_get_serial(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 {
 	cmd->serial_number = host->cmd_serial_number++;
 	if (cmd->serial_number == 0)
 		cmd->serial_number = host->cmd_serial_number++;
 }
+EXPORT_SYMBOL(scsi_cmd_get_serial);
 
 /**
  * scsi_dispatch_command - Dispatch a command to the low-level driver.
···
 int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
 {
 	struct Scsi_Host *host = cmd->device->host;
-	unsigned long flags = 0;
 	unsigned long timeout;
 	int rtn = 0;
···
 		goto out;
 	}
 
-	spin_lock_irqsave(host->host_lock, flags);
-	/*
-	 * AK: unlikely race here: for some reason the timer could
-	 * expire before the serial number is set up below.
-	 *
-	 * TODO: kill serial or move to blk layer
-	 */
-	scsi_cmd_get_serial(host, cmd);
-
 	if (unlikely(host->shost_state == SHOST_DEL)) {
 		cmd->result = (DID_NO_CONNECT << 16);
 		scsi_done(cmd);
 	} else {
 		trace_scsi_dispatch_cmd_start(cmd);
-		rtn = host->hostt->queuecommand(cmd, scsi_done);
+		cmd->scsi_done = scsi_done;
+		rtn = host->hostt->queuecommand(host, cmd);
 	}
-	spin_unlock_irqrestore(host->host_lock, flags);
+
 	if (rtn) {
 		trace_scsi_dispatch_cmd_error(cmd, rtn);
 		if (rtn != SCSI_MLQUEUE_DEVICE_BUSY &&
···908908 */909909910910/* Only make static if a wrapper function is used */911911-static int NCR5380_queue_command(struct scsi_cmnd *cmd,911911+static int NCR5380_queue_command_lck(struct scsi_cmnd *cmd,912912 void (*done)(struct scsi_cmnd *))913913{914914 SETUP_HOSTDATA(cmd->device->host);···10181018 NCR5380_main(NULL);10191019 return 0;10201020}10211021+10221022+static DEF_SCSI_QCMD(NCR5380_queue_command)1021102310221024/*10231025 * Function : NCR5380_main (void)
+1-2
drivers/scsi/sun3_scsi.h
···5151static int sun3scsi_detect (struct scsi_host_template *);5252static const char *sun3scsi_info (struct Scsi_Host *);5353static int sun3scsi_bus_reset(struct scsi_cmnd *);5454-static int sun3scsi_queue_command(struct scsi_cmnd *,5555- void (*done)(struct scsi_cmnd *));5454+static int sun3scsi_queue_command(struct Scsi_Host *, struct scsi_cmnd *);5655static int sun3scsi_release (struct Scsi_Host *);57565857#ifndef CMD_PER_LUN
+3-1
drivers/scsi/sym53c416.c
···734734 return info;735735}736736737737-int sym53c416_queuecommand(Scsi_Cmnd *SCpnt, void (*done)(Scsi_Cmnd *))737737+static int sym53c416_queuecommand_lck(Scsi_Cmnd *SCpnt, void (*done)(Scsi_Cmnd *))738738{739739 int base;740740 unsigned long flags = 0;···760760 spin_unlock_irqrestore(&sym53c416_lock, flags);761761 return 0;762762}763763+764764+DEF_SCSI_QCMD(sym53c416_queuecommand)763765764766static int sym53c416_host_reset(Scsi_Cmnd *SCpnt)765767{
+1-1
drivers/scsi/sym53c416.h
···2525static int sym53c416_detect(struct scsi_host_template *);2626static const char *sym53c416_info(struct Scsi_Host *);2727static int sym53c416_release(struct Scsi_Host *);2828-static int sym53c416_queuecommand(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));2828+static int sym53c416_queuecommand(struct Scsi_Host *, struct scsi_cmnd *);2929static int sym53c416_host_reset(Scsi_Cmnd *);3030static int sym53c416_bios_param(struct scsi_device *, struct block_device *,3131 sector_t, int *);
+3-1
drivers/scsi/sym53c8xx_2/sym_glue.c
···505505 * queuecommand method. Entered with the host adapter lock held and506506 * interrupts disabled.507507 */508508-static int sym53c8xx_queue_command(struct scsi_cmnd *cmd,508508+static int sym53c8xx_queue_command_lck(struct scsi_cmnd *cmd,509509 void (*done)(struct scsi_cmnd *))510510{511511 struct sym_hcb *np = SYM_SOFTC_PTR(cmd);···535535 return SCSI_MLQUEUE_HOST_BUSY;536536 return 0;537537}538538+539539+static DEF_SCSI_QCMD(sym53c8xx_queue_command)538540539541/*540542 * Linux entry point of the interrupt handler.
+1-2
drivers/scsi/t128.h
···9696static int t128_biosparam(struct scsi_device *, struct block_device *,9797 sector_t, int*);9898static int t128_detect(struct scsi_host_template *);9999-static int t128_queue_command(struct scsi_cmnd *,100100- void (*done)(struct scsi_cmnd *));9999+static int t128_queue_command(struct Scsi_Host *, struct scsi_cmnd *);101100static int t128_bus_reset(struct scsi_cmnd *);102101103102#ifndef CMD_PER_LUN
···11config VIDEO_STRADIS22 tristate "Stradis 4:2:2 MPEG-2 video driver (DEPRECATED)"33- depends on EXPERIMENTAL && PCI && VIDEO_V4L1 && VIRT_TO_BUS33+ depends on EXPERIMENTAL && PCI && VIDEO_V4L1 && VIRT_TO_BUS && BKL44 help55 Say Y here to enable support for the Stradis 4:2:2 MPEG-2 video66 driver for PCI. There is a product page at
+127-48
drivers/tty/sysrq.c
···554554#ifdef CONFIG_INPUT555555556556/* Simple translation table for the SysRq keys */557557-static const unsigned char sysrq_xlate[KEY_MAX + 1] =557557+static const unsigned char sysrq_xlate[KEY_CNT] =558558 "\000\0331234567890-=\177\t" /* 0x00 - 0x0f */559559 "qwertyuiop[]\r\000as" /* 0x10 - 0x1f */560560 "dfghjkl;'`\000\\zxcv" /* 0x20 - 0x2f */···563563 "230\177\000\000\213\214\000\000\000\000\000\000\000\000\000\000" /* 0x50 - 0x5f */564564 "\r\000/"; /* 0x60 - 0x6f */565565566566-static bool sysrq_down;567567-static int sysrq_alt_use;568568-static int sysrq_alt;569569-static DEFINE_SPINLOCK(sysrq_event_lock);566566+struct sysrq_state {567567+ struct input_handle handle;568568+ struct work_struct reinject_work;569569+ unsigned long key_down[BITS_TO_LONGS(KEY_CNT)];570570+ unsigned int alt;571571+ unsigned int alt_use;572572+ bool active;573573+ bool need_reinject;574574+};570575571571-static bool sysrq_filter(struct input_handle *handle, unsigned int type,572572- unsigned int code, int value)576576+static void sysrq_reinject_alt_sysrq(struct work_struct *work)573577{578578+ struct sysrq_state *sysrq =579579+ container_of(work, struct sysrq_state, reinject_work);580580+ struct input_handle *handle = &sysrq->handle;581581+ unsigned int alt_code = sysrq->alt_use;582582+583583+ if (sysrq->need_reinject) {584584+ /* Simulate press and release of Alt + SysRq */585585+ input_inject_event(handle, EV_KEY, alt_code, 1);586586+ input_inject_event(handle, EV_KEY, KEY_SYSRQ, 1);587587+ input_inject_event(handle, EV_SYN, SYN_REPORT, 1);588588+589589+ input_inject_event(handle, EV_KEY, KEY_SYSRQ, 0);590590+ input_inject_event(handle, EV_KEY, alt_code, 0);591591+ input_inject_event(handle, EV_SYN, SYN_REPORT, 1);592592+ }593593+}594594+595595+static bool sysrq_filter(struct input_handle *handle,596596+ unsigned int type, unsigned int code, int value)597597+{598598+ struct sysrq_state *sysrq = handle->private;599599+ bool was_active = sysrq->active;574600 bool 
suppress;575601576576- /* We are called with interrupts disabled, just take the lock */577577- spin_lock(&sysrq_event_lock);602602+ switch (type) {578603579579- if (type != EV_KEY)580580- goto out;581581-582582- switch (code) {583583-584584- case KEY_LEFTALT:585585- case KEY_RIGHTALT:586586- if (value)587587- sysrq_alt = code;588588- else {589589- if (sysrq_down && code == sysrq_alt_use)590590- sysrq_down = false;591591-592592- sysrq_alt = 0;593593- }604604+ case EV_SYN:605605+ suppress = false;594606 break;595607596596- case KEY_SYSRQ:597597- if (value == 1 && sysrq_alt) {598598- sysrq_down = true;599599- sysrq_alt_use = sysrq_alt;608608+ case EV_KEY:609609+ switch (code) {610610+611611+ case KEY_LEFTALT:612612+ case KEY_RIGHTALT:613613+ if (!value) {614614+ /* One of ALTs is being released */615615+ if (sysrq->active && code == sysrq->alt_use)616616+ sysrq->active = false;617617+618618+ sysrq->alt = KEY_RESERVED;619619+620620+ } else if (value != 2) {621621+ sysrq->alt = code;622622+ sysrq->need_reinject = false;623623+ }624624+ break;625625+626626+ case KEY_SYSRQ:627627+ if (value == 1 && sysrq->alt != KEY_RESERVED) {628628+ sysrq->active = true;629629+ sysrq->alt_use = sysrq->alt;630630+ /*631631+ * If nothing else will be pressed we'll need632632+ * to * re-inject Alt-SysRq keysroke.633633+ */634634+ sysrq->need_reinject = true;635635+ }636636+637637+ /*638638+ * Pretend that sysrq was never pressed at all. This639639+ * is needed to properly handle KGDB which will try640640+ * to release all keys after exiting debugger. 
If we641641+ * do not clear key bit it KGDB will end up sending642642+ * release events for Alt and SysRq, potentially643643+ * triggering print screen function.644644+ */645645+ if (sysrq->active)646646+ clear_bit(KEY_SYSRQ, handle->dev->key);647647+648648+ break;649649+650650+ default:651651+ if (sysrq->active && value && value != 2) {652652+ sysrq->need_reinject = false;653653+ __handle_sysrq(sysrq_xlate[code], true);654654+ }655655+ break;656656+ }657657+658658+ suppress = sysrq->active;659659+660660+ if (!sysrq->active) {661661+ /*662662+ * If we are not suppressing key presses keep track of663663+ * keyboard state so we can release keys that have been664664+ * pressed before entering SysRq mode.665665+ */666666+ if (value)667667+ set_bit(code, sysrq->key_down);668668+ else669669+ clear_bit(code, sysrq->key_down);670670+671671+ if (was_active)672672+ schedule_work(&sysrq->reinject_work);673673+674674+ } else if (value == 0 &&675675+ test_and_clear_bit(code, sysrq->key_down)) {676676+ /*677677+ * Pass on release events for keys that was pressed before678678+ * entering SysRq mode.679679+ */680680+ suppress = false;600681 }601682 break;602683603684 default:604604- if (sysrq_down && value && value != 2)605605- __handle_sysrq(sysrq_xlate[code], true);685685+ suppress = sysrq->active;606686 break;607687 }608608-609609-out:610610- suppress = sysrq_down;611611- spin_unlock(&sysrq_event_lock);612688613689 return suppress;614690}···693617 struct input_dev *dev,694618 const struct input_device_id *id)695619{696696- struct input_handle *handle;620620+ struct sysrq_state *sysrq;697621 int error;698622699699- sysrq_down = false;700700- sysrq_alt = 0;701701-702702- handle = kzalloc(sizeof(struct input_handle), GFP_KERNEL);703703- if (!handle)623623+ sysrq = kzalloc(sizeof(struct sysrq_state), GFP_KERNEL);624624+ if (!sysrq)704625 return -ENOMEM;705626706706- handle->dev = dev;707707- handle->handler = handler;708708- handle->name = "sysrq";627627+ 
INIT_WORK(&sysrq->reinject_work, sysrq_reinject_alt_sysrq);709628710710- error = input_register_handle(handle);629629+ sysrq->handle.dev = dev;630630+ sysrq->handle.handler = handler;631631+ sysrq->handle.name = "sysrq";632632+ sysrq->handle.private = sysrq;633633+634634+ error = input_register_handle(&sysrq->handle);711635 if (error) {712636 pr_err("Failed to register input sysrq handler, error %d\n",713637 error);714638 goto err_free;715639 }716640717717- error = input_open_device(handle);641641+ error = input_open_device(&sysrq->handle);718642 if (error) {719643 pr_err("Failed to open input device, error %d\n", error);720644 goto err_unregister;···723647 return 0;724648725649 err_unregister:726726- input_unregister_handle(handle);650650+ input_unregister_handle(&sysrq->handle);727651 err_free:728728- kfree(handle);652652+ kfree(sysrq);729653 return error;730654}731655732656static void sysrq_disconnect(struct input_handle *handle)733657{658658+ struct sysrq_state *sysrq = handle->private;659659+734660 input_close_device(handle);661661+ cancel_work_sync(&sysrq->reinject_work);735662 input_unregister_handle(handle);736736- kfree(handle);663663+ kfree(sysrq);737664}738665739666/*
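The rewritten sysrq filter above keeps a bitmap (`key_down`) of keys pressed *before* SysRq mode was entered, so their release events can still pass through while everything else is suppressed. A minimal userspace sketch of that bitmap bookkeeping (the struct and helper names here are illustrative stand-ins, not the kernel's):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <string.h>

#define KEY_CNT 768   /* mirrors include/linux/input.h */
#define BITS_TO_LONGS(n) \
    (((n) + CHAR_BIT * sizeof(long) - 1) / (CHAR_BIT * sizeof(long)))

struct keystate {
    unsigned long down[BITS_TO_LONGS(KEY_CNT)];
    bool active;      /* are we in "SysRq mode"? */
};

static void set_bit_(unsigned n, unsigned long *map)
{
    map[n / (CHAR_BIT * sizeof(long))] |= 1UL << (n % (CHAR_BIT * sizeof(long)));
}

static bool test_and_clear_bit_(unsigned n, unsigned long *map)
{
    unsigned long mask = 1UL << (n % (CHAR_BIT * sizeof(long)));
    unsigned long *w = &map[n / (CHAR_BIT * sizeof(long))];
    bool was_set = (*w & mask) != 0;

    *w &= ~mask;
    return was_set;
}

/*
 * Returns true when the event should be suppressed, mimicking the
 * filter's rule: while active, swallow everything except releases of
 * keys that went down before we became active; while inactive, just
 * track the keyboard state.
 */
static bool filter(struct keystate *s, unsigned code, int value)
{
    if (!s->active) {
        if (value)
            set_bit_(code, s->down);
        else
            test_and_clear_bit_(code, s->down);
        return false;
    }
    if (value == 0 && test_and_clear_bit_(code, s->down))
        return false;   /* release of a pre-SysRq key passes through */
    return true;
}
```

Usage: press a key normally, enter SysRq mode, and its release still reaches the handler while newly pressed keys are swallowed.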
···285285286286/* queue a command */287287/* This is always called with scsi_lock(host) held */288288-static int queuecommand(struct scsi_cmnd *srb,288288+static int queuecommand_lck(struct scsi_cmnd *srb,289289 void (*done)(struct scsi_cmnd *))290290{291291 struct us_data *us = host_to_us(srb->device->host);···314314315315 return 0;316316}317317+318318+static DEF_SCSI_QCMD(queuecommand)317319318320/***********************************************************************319321 * Error handling functions
+3-1
drivers/usb/storage/uas.c
···430430 return 0;431431}432432433433-static int uas_queuecommand(struct scsi_cmnd *cmnd,433433+static int uas_queuecommand_lck(struct scsi_cmnd *cmnd,434434 void (*done)(struct scsi_cmnd *))435435{436436 struct scsi_device *sdev = cmnd->device;···487487488488 return 0;489489}490490+491491+static DEF_SCSI_QCMD(uas_queuecommand)490492491493static int uas_eh_abort_handler(struct scsi_cmnd *cmnd)492494{
···170170171171 union ceph_mds_request_args r_args;172172 int r_fmode; /* file mode, if expecting cap */173173+ uid_t r_uid;174174+ gid_t r_gid;173175174176 /* for choosing which mds to send this request to */175177 int r_direct_mode;
+1-3
fs/ceph/super.h
···293293 int i_rd_ref, i_rdcache_ref, i_wr_ref;294294 int i_wrbuffer_ref, i_wrbuffer_ref_head;295295 u32 i_shared_gen; /* increment each time we get FILE_SHARED */296296- u32 i_rdcache_gen; /* we increment this each time we get297297- FILE_CACHE. If it's non-zero, we298298- _may_ have cached pages. */296296+ u32 i_rdcache_gen; /* incremented each time we get FILE_CACHE. */299297 u32 i_rdcache_revoking; /* RDCACHE gen to async invalidate, if any */300298301299 struct list_head i_unsafe_writes; /* uncommitted sync writes */
···331331 return err;332332 }333333334334+ case FITRIM:335335+ {336336+ struct super_block *sb = inode->i_sb;337337+ struct fstrim_range range;338338+ int ret = 0;339339+340340+ if (!capable(CAP_SYS_ADMIN))341341+ return -EPERM;342342+343343+ if (copy_from_user(&range, (struct fstrim_range *)arg,344344+ sizeof(range)))345345+ return -EFAULT;346346+347347+ ret = ext4_trim_fs(sb, &range);348348+ if (ret < 0)349349+ return ret;350350+351351+ if (copy_to_user((struct fstrim_range *)arg, &range,352352+ sizeof(range)))353353+ return -EFAULT;354354+355355+ return 0;356356+ }357357+334358 default:335359 return -ENOTTY;336360 }
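The hunk above moves FITRIM handling into ext4's own ioctl switch. From userspace the ioctl is driven as sketched below (Linux-specific; the `fs_trim` helper name is ours, and a real call needs CAP_SYS_ADMIN plus a filesystem whose backing device supports discard, so this only shows the call shape):

```c
#include <assert.h>
#include <errno.h>
#include <linux/fs.h>      /* FITRIM, struct fstrim_range */
#include <stdint.h>
#include <sys/ioctl.h>

/*
 * Ask the filesystem containing fd to discard free blocks in
 * [start, start + len); on success the kernel rewrites range.len
 * with the number of bytes actually trimmed, exactly as the ext4
 * handler copies the range back to userspace.
 * Returns 0, or -1 with errno set.
 */
static int fs_trim(int fd, uint64_t start, uint64_t len, uint64_t minlen,
                   uint64_t *trimmed)
{
    struct fstrim_range range = {
        .start  = start,
        .len    = len,
        .minlen = minlen,
    };

    if (ioctl(fd, FITRIM, &range) != 0)
        return -1;
    if (trimmed)
        *trimmed = range.len;
    return 0;
}
```

A typical invocation would pass an fd for the mount point's root directory with `start = 0`, `len = UINT64_MAX`, `minlen = 0` to trim the whole filesystem.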
+2-2
fs/ext4/page-io.c
···237237 } while (bh != head);238238 }239239240240- put_io_page(io_end->pages[i]);241241-242240 /*243241 * If this is a partial write which happened to make244242 * all buffers uptodate then we can optimize away a···246248 */247249 if (!partial_write)248250 SetPageUptodate(page);251251+252252+ put_io_page(io_end->pages[i]);249253 }250254 io_end->num_io_pages = 0;251255 inode = io_end->inode;
+3-6
fs/ext4/super.c
···11971197 .quota_write = ext4_quota_write,11981198#endif11991199 .bdev_try_to_free_page = bdev_try_to_free_page,12001200- .trim_fs = ext4_trim_fs12011200};1202120112031202static const struct super_operations ext4_nojournal_sops = {···27982799 struct ext4_li_request *elr;2799280028002801 mutex_lock(&ext4_li_info->li_list_mtx);28012801- if (list_empty(&ext4_li_info->li_request_list))28022802- return;28032803-28042802 list_for_each_safe(pos, n, &ext4_li_info->li_request_list) {28052803 elr = list_entry(pos, struct ext4_li_request,28062804 lr_request);···32643268 * Test whether we have more sectors than will fit in sector_t,32653269 * and whether the max offset is addressable by the page cache.32663270 */32673267- ret = generic_check_addressable(sb->s_blocksize_bits,32713271+ err = generic_check_addressable(sb->s_blocksize_bits,32683272 ext4_blocks_count(es));32693269- if (ret) {32733273+ if (err) {32703274 ext4_msg(sb, KERN_ERR, "filesystem"32713275 " too large to mount safely on this system");32723276 if (sizeof(sector_t) < 8)32733277 ext4_msg(sb, KERN_WARNING, "CONFIG_LBDAF not enabled");32783278+ ret = err;32743279 goto failed_mount;32753280 }32763281
-40
fs/ioctl.c
···6677#include <linux/syscalls.h>88#include <linux/mm.h>99-#include <linux/smp_lock.h>109#include <linux/capability.h>1110#include <linux/file.h>1211#include <linux/fs.h>···529530 return thaw_super(sb);530531}531532532532-static int ioctl_fstrim(struct file *filp, void __user *argp)533533-{534534- struct super_block *sb = filp->f_path.dentry->d_inode->i_sb;535535- struct fstrim_range range;536536- int ret = 0;537537-538538- if (!capable(CAP_SYS_ADMIN))539539- return -EPERM;540540-541541- /* If filesystem doesn't support trim feature, return. */542542- if (sb->s_op->trim_fs == NULL)543543- return -EOPNOTSUPP;544544-545545- /* If a blockdevice-backed filesystem isn't specified, return EINVAL. */546546- if (sb->s_bdev == NULL)547547- return -EINVAL;548548-549549- if (argp == NULL) {550550- range.start = 0;551551- range.len = ULLONG_MAX;552552- range.minlen = 0;553553- } else if (copy_from_user(&range, argp, sizeof(range)))554554- return -EFAULT;555555-556556- ret = sb->s_op->trim_fs(sb, &range);557557- if (ret < 0)558558- return ret;559559-560560- if ((argp != NULL) &&561561- (copy_to_user(argp, &range, sizeof(range))))562562- return -EFAULT;563563-564564- return 0;565565-}566566-567533/*568534 * When you add any new common ioctls to the switches above and below569535 * please update compat_sys_ioctl() too.···577613578614 case FITHAW:579615 error = ioctl_fsthaw(filp);580580- break;581581-582582- case FITRIM:583583- error = ioctl_fstrim(filp, argp);584616 break;585617586618 case FS_IOC_FIEMAP:
+8-8
fs/jbd2/journal.c
···899899900900 /* journal descriptor can store up to n blocks -bzzz */901901 journal->j_blocksize = blocksize;902902+ journal->j_dev = bdev;903903+ journal->j_fs_dev = fs_dev;904904+ journal->j_blk_offset = start;905905+ journal->j_maxlen = len;906906+ bdevname(journal->j_dev, journal->j_devname);907907+ p = journal->j_devname;908908+ while ((p = strchr(p, '/')))909909+ *p = '!';902910 jbd2_stats_proc_init(journal);903911 n = journal->j_blocksize / sizeof(journal_block_tag_t);904912 journal->j_wbufsize = n;···916908 __func__);917909 goto out_err;918910 }919919- journal->j_dev = bdev;920920- journal->j_fs_dev = fs_dev;921921- journal->j_blk_offset = start;922922- journal->j_maxlen = len;923923- bdevname(journal->j_dev, journal->j_devname);924924- p = journal->j_devname;925925- while ((p = strchr(p, '/')))926926- *p = '!';927911928912 bh = __getblk(journal->j_dev, start, journal->j_blocksize);929913 if (!bh) {
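The jbd2 hunk above moves the `j_devname` setup earlier so `jbd2_stats_proc_init()` sees a valid name. The name itself is derived by replacing every '/' in the bdev name with '!', since it becomes a single path component under /proc/fs/jbd2/. That transform in isolation (helper name is ours):

```c
#include <assert.h>
#include <string.h>

/*
 * Make a block device name proc-friendly the way jbd2 does:
 * "cciss/c0d0p1" may contain slashes, which cannot appear inside
 * a single procfs directory entry, so each '/' becomes '!'.
 */
static char *journal_name_sanitize(char *name)
{
    char *p = name;

    while ((p = strchr(p, '/')))
        *p = '!';
    return name;
}
```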
···3939#include <linux/nfs_mount.h>4040#include <linux/nfs4_mount.h>4141#include <linux/lockd/bind.h>4242-#include <linux/smp_lock.h>4342#include <linux/seq_file.h>4443#include <linux/mount.h>4544#include <linux/mnt_namespace.h>···6566#include "fscache.h"66676768#define NFSDBG_FACILITY NFSDBG_VFS6969+7070+#ifdef CONFIG_NFS_V37171+#define NFS_DEFAULT_VERSION 37272+#else7373+#define NFS_DEFAULT_VERSION 27474+#endif68756976enum {7077 /* Mount options that take no arguments */···22822277 };22832278 int error = -ENOMEM;2284227922852285- data = nfs_alloc_parsed_mount_data(3);22802280+ data = nfs_alloc_parsed_mount_data(NFS_DEFAULT_VERSION);22862281 mntfh = nfs_alloc_fhandle();22872282 if (data == NULL || mntfh == NULL)22882283 goto out_free_fh;
+4-4
fs/nfsd/nfs4state.c
···22622262 * Spawn a thread to perform a recall on the delegation represented22632263 * by the lease (file_lock)22642264 *22652265- * Called from break_lease() with lock_kernel() held.22652265+ * Called from break_lease() with lock_flocks() held.22662266 * Note: we assume break_lease will only call this *once* for any given22672267 * lease.22682268 */···22862286 list_add_tail(&dp->dl_recall_lru, &del_recall_lru);22872287 spin_unlock(&recall_lock);2288228822892289- /* only place dl_time is set. protected by lock_kernel*/22892289+ /* only place dl_time is set. protected by lock_flocks */22902290 dp->dl_time = get_seconds();2291229122922292 /*···23032303/*23042304 * The file_lock is being reaped.23052305 *23062306- * Called by locks_free_lock() with lock_kernel() held.23062306+ * Called by locks_free_lock() with lock_flocks() held.23072307 */23082308static23092309void nfsd_release_deleg_cb(struct file_lock *fl)···23182318}2319231923202320/*23212321- * Called from setlease() with lock_kernel() held23212321+ * Called from setlease() with lock_flocks() held23222322 */23232323static23242324int nfsd_same_client_deleg_cb(struct file_lock *onlist, struct file_lock *try)
···66#include <linux/if_link.h>77#include <linux/if_addr.h>88#include <linux/neighbour.h>99-#include <linux/netdevice.h>1091110/* rtnetlink families. Values up to 127 are reserved for real address1211 * families, values above 128 may be used arbitrarily.···605606#ifdef __KERNEL__606607607608#include <linux/mutex.h>609609+#include <linux/netdevice.h>608610609611static __inline__ int rtattr_strcmp(const struct rtattr *rta, const char *str)610612{
+1
include/linux/sched.h
···862862 * single CPU.863863 */864864 unsigned int cpu_power, cpu_power_orig;865865+ unsigned int group_weight;865866866867 /*867868 * The CPUs this group covers.
-3
include/linux/smp_lock.h
···44#ifdef CONFIG_LOCK_KERNEL55#include <linux/sched.h>6677-#define kernel_locked() (current->lock_depth >= 0)88-97extern int __lockfunc __reacquire_kernel_lock(void);108extern void __lockfunc __release_kernel_lock(void);119···5658#define lock_kernel()5759#define unlock_kernel()5860#define cycle_kernel_lock() do { } while(0)5959-#define kernel_locked() 16061#endif /* CONFIG_BKL */61626263#define release_kernel_lock(task) do { } while(0)
···303303304304static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)305305{306306- unsigned long now = ACCESS_ONCE(jiffies);306306+ unsigned long now = jiffies;307307308308 if (neigh->used != now)309309 neigh->used = now;
···341341extern int iscsi_eh_recover_target(struct scsi_cmnd *sc);342342extern int iscsi_eh_session_reset(struct scsi_cmnd *sc);343343extern int iscsi_eh_device_reset(struct scsi_cmnd *sc);344344-extern int iscsi_queuecommand(struct scsi_cmnd *sc,345345- void (*done)(struct scsi_cmnd *));344344+extern int iscsi_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *sc);346345347346/*348347 * iSCSI host helpers.
+1-2
include/scsi/libsas.h
···621621int sas_phy_enable(struct sas_phy *phy, int enabled);622622int sas_phy_reset(struct sas_phy *phy, int hard_reset);623623int sas_queue_up(struct sas_task *task);624624-extern int sas_queuecommand(struct scsi_cmnd *,625625- void (*scsi_done)(struct scsi_cmnd *));624624+extern int sas_queuecommand(struct Scsi_Host * ,struct scsi_cmnd *);626625extern int sas_target_alloc(struct scsi_target *);627626extern int sas_slave_alloc(struct scsi_device *);628627extern int sas_slave_configure(struct scsi_device *);
+21-2
include/scsi/scsi_host.h
···127127 *128128 * STATUS: REQUIRED129129 */130130- int (* queuecommand)(struct scsi_cmnd *,131131- void (*done)(struct scsi_cmnd *));130130+ int (* queuecommand)(struct Scsi_Host *, struct scsi_cmnd *);132131133132 /*134133 * The transfer functions are used to queue a scsi command to···504505};505506506507/*508508+ * Temporary #define for host lock push down. Can be removed when all509509+ * drivers have been updated to take advantage of unlocked510510+ * queuecommand.511511+ *512512+ */513513+#define DEF_SCSI_QCMD(func_name) \514514+ int func_name(struct Scsi_Host *shost, struct scsi_cmnd *cmd) \515515+ { \516516+ unsigned long irq_flags; \517517+ int rc; \518518+ spin_lock_irqsave(shost->host_lock, irq_flags); \519519+ scsi_cmd_get_serial(shost, cmd); \520520+ rc = func_name##_lck (cmd, cmd->scsi_done); \521521+ spin_unlock_irqrestore(shost->host_lock, irq_flags); \522522+ return rc; \523523+ }524524+525525+526526+/*507527 * shost state: If you alter this, you also need to alter scsi_sysfs.c508528 * (for the ascii descriptions) and the state model enforcer:509529 * scsi_host_set_state()···770752extern void scsi_host_put(struct Scsi_Host *t);771753extern struct Scsi_Host *scsi_host_lookup(unsigned short);772754extern const char *scsi_host_state_name(enum scsi_host_state);755755+extern void scsi_cmd_get_serial(struct Scsi_Host *, struct scsi_cmnd *);773756774757extern u64 scsi_calculate_bounce_limit(struct Scsi_Host *);775758
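The `DEF_SCSI_QCMD` macro added above is the heart of this whole series: it wraps a legacy `_lck` handler so it still runs under the host lock with a serial number assigned, while the midlayer now calls the new unlocked `(struct Scsi_Host *, struct scsi_cmnd *)` signature. A userspace mock of the same wrapper pattern (the structs and the `lock_held` flag are trivial stand-ins for `Scsi_Host`, `scsi_cmnd`, and the host spinlock, not the kernel's types):

```c
#include <assert.h>

struct cmnd;
struct host { int lock_held; unsigned serial; };
struct cmnd {
    struct host *host;
    unsigned serial;
    void (*done)(struct cmnd *);
};

/*
 * Mirrors DEF_SCSI_QCMD: take the host lock, assign a serial number,
 * call the old-style locked handler, drop the lock.
 */
#define DEF_QCMD(func_name)                                      \
    static int func_name(struct host *h, struct cmnd *c)         \
    {                                                            \
        int rc;                                                  \
        h->lock_held = 1;            /* spin_lock_irqsave */     \
        c->serial = ++h->serial;     /* scsi_cmd_get_serial */   \
        rc = func_name##_lck(c, c->done);                        \
        h->lock_held = 0;            /* spin_unlock_irqrestore */\
        return rc;                                               \
    }

/* A "legacy" handler written against the old locked interface. */
static int my_queuecommand_lck(struct cmnd *c, void (*done)(struct cmnd *))
{
    (void)c;
    (void)done;
    return 0;   /* pretend the command was queued successfully */
}

DEF_QCMD(my_queuecommand)
```

This is why nearly every driver in the series gains only a `_lck` suffix plus one `DEF_SCSI_QCMD(...)` line: behavior is unchanged until a driver is audited and converted to a genuinely lock-free queuecommand.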
···8282#define for_each_kdbcmd(cmd, num) \8383 for ((cmd) = kdb_base_commands, (num) = 0; \8484 num < kdb_max_commands; \8585- num == KDB_BASE_CMD_MAX ? cmd = kdb_commands : cmd++, num++)8585+ num++, num == KDB_BASE_CMD_MAX ? cmd = kdb_commands : cmd++)86868787typedef struct _kdbmsg {8888 int km_diag; /* kdb diagnostic */···646646 }647647 if (!s->usable)648648 return KDB_NOTIMP;649649- s->command = kmalloc((s->count + 1) * sizeof(*(s->command)), GFP_KDB);649649+ s->command = kzalloc((s->count + 1) * sizeof(*(s->command)), GFP_KDB);650650 if (!s->command) {651651 kdb_printf("Could not allocate new kdb_defcmd table for %s\n",652652 cmdstr);···23612361 */23622362static int kdb_ll(int argc, const char **argv)23632363{23642364- int diag;23642364+ int diag = 0;23652365 unsigned long addr;23662366 long offset = 0;23672367 unsigned long va;···24002400 char buf[80];2401240124022402 if (KDB_FLAG(CMD_INTERRUPT))24032403- return 0;24032403+ goto out;2404240424052405 sprintf(buf, "%s " kdb_machreg_fmt "\n", command, va);24062406 diag = kdb_parse(buf);24072407 if (diag)24082408- return diag;24082408+ goto out;2409240924102410 addr = va + linkoffset;24112411 if (kdb_getword(&va, addr, sizeof(va)))24122412- return 0;24122412+ goto out;24132413 }24142414- kfree(command);2415241424162416- return 0;24152415+out:24162416+ kfree(command);24172417+ return diag;24172418}2418241924192420static int kdb_kgdb(int argc, const char **argv)···27402739 }27412740 if (kdb_commands) {27422741 memcpy(new, kdb_commands,27432743- kdb_max_commands * sizeof(*new));27422742+ (kdb_max_commands - KDB_BASE_CMD_MAX) * sizeof(*new));27442743 kfree(kdb_commands);27452744 }27462745 memset(new + kdb_max_commands, 0,27472746 kdb_command_extend * sizeof(*new));27482747 kdb_commands = new;27492749- kp = kdb_commands + kdb_max_commands;27482748+ kp = kdb_commands + kdb_max_commands - KDB_BASE_CMD_MAX;27502749 kdb_max_commands += kdb_command_extend;27512750 }27522751
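The `for_each_kdbcmd` fix above reorders the comma expression so the counter is incremented *before* deciding whether the cursor jumps from the static command table to the dynamically allocated one; with the old order the jump happened one iteration late and the walk read past the end of the base table. A sketch of the corrected iteration over two stand-in tables (names and sizes are illustrative):

```c
#include <assert.h>

#define BASE_MAX 3   /* stand-in for KDB_BASE_CMD_MAX */

static int base_tbl[BASE_MAX] = { 0, 1, 2 };
static int extra_tbl[3]       = { 3, 4, 5 };

/*
 * Mirrors the fixed macro: num++ runs first, and only then does the
 * ternary decide whether cmd jumps to the second table or advances.
 */
#define for_each_cmd(cmd, num, total)                                   \
    for ((cmd) = base_tbl, (num) = 0;                                   \
         (num) < (total);                                               \
         (num)++, (num) == BASE_MAX ? (cmd) = extra_tbl : (cmd)++)

/* Collect every visited value into out[]; returns how many we saw. */
static int walk(int *out, int total)
{
    int *cmd, num, n = 0;

    for_each_cmd(cmd, num, total)
        out[n++] = *cmd;
    return n;
}
```

With the pre-fix ordering, the comparison saw the not-yet-incremented `num`, so `cmd` advanced one slot past `base_tbl` before jumping, dereferencing out-of-bounds memory.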
+2-1
kernel/futex.c
···24892489{24902490 struct robust_list_head __user *head = curr->robust_list;24912491 struct robust_list __user *entry, *next_entry, *pending;24922492- unsigned int limit = ROBUST_LIST_LIMIT, pi, next_pi, pip;24922492+ unsigned int limit = ROBUST_LIST_LIMIT, pi, pip;24932493+ unsigned int uninitialized_var(next_pi);24932494 unsigned long futex_offset;24942495 int rc;24952496
+2-1
kernel/futex_compat.c
···4949{5050 struct compat_robust_list_head __user *head = curr->compat_robust_list;5151 struct robust_list __user *entry, *next_entry, *pending;5252- unsigned int limit = ROBUST_LIST_LIMIT, pi, next_pi, pip;5252+ unsigned int limit = ROBUST_LIST_LIMIT, pi, pip;5353+ unsigned int uninitialized_var(next_pi);5354 compat_uptr_t uentry, next_uentry, upending;5455 compat_long_t futex_offset;5556 int rc;
+2-2
kernel/pm_qos_params.c
···121121122122 switch (o->type) {123123 case PM_QOS_MIN:124124- return plist_last(&o->requests)->prio;124124+ return plist_first(&o->requests)->prio;125125126126 case PM_QOS_MAX:127127- return plist_first(&o->requests)->prio;127127+ return plist_last(&o->requests)->prio;128128129129 default:130130 /* runtime check for not using enum */
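The pm_qos hunk above swaps a reversed pair: kernel plists are sorted with the smallest prio first, so the aggregate minimum is `plist_first()` and the maximum is `plist_last()`. The invariant in miniature, with a sorted array standing in for the plist (names are ours):

```c
#include <assert.h>

enum qos_type { QOS_MIN, QOS_MAX };

/*
 * Given request values kept in ascending order (as a plist keeps
 * its nodes by prio), the aggregate min is the first element and
 * the aggregate max is the last -- the pre-fix code had it backwards.
 */
static int qos_value(enum qos_type type, const int *sorted, int n)
{
    if (n == 0)
        return -1;   /* stand-in for the class default value */
    return type == QOS_MIN ? sorted[0]       /* plist_first */
                           : sorted[n - 1];  /* plist_last  */
}
```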
+4
kernel/power/Kconfig
···246246 depends on PM_SLEEP || PM_RUNTIME247247 default y248248249249+config ARCH_HAS_OPP250250+ bool251251+249252config PM_OPP250253 bool "Operating Performance Point (OPP) Layer library"251254 depends on PM255255+ depends on ARCH_HAS_OPP252256 ---help---253257 SOCs have a standard set of tuples consisting of frequency and254258 voltage pairs that the device will support per voltage domain. This
+28-11
kernel/sched.c
···560560561561static DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);562562563563-static inline564564-void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)565565-{566566- rq->curr->sched_class->check_preempt_curr(rq, p, flags);567563568568- /*569569- * A queue event has occurred, and we're going to schedule. In570570- * this case, we can save a useless back to back clock update.571571- */572572- if (test_tsk_need_resched(p))573573- rq->skip_clock_update = 1;574574-}564564+static void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);575565576566static inline int cpu_of(struct rq *rq)577567{···21062116 p->sched_class->switched_to(rq, p, running);21072117 } else21082118 p->sched_class->prio_changed(rq, p, oldprio, running);21192119+}21202120+21212121+static void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)21222122+{21232123+ const struct sched_class *class;21242124+21252125+ if (p->sched_class == rq->curr->sched_class) {21262126+ rq->curr->sched_class->check_preempt_curr(rq, p, flags);21272127+ } else {21282128+ for_each_class(class) {21292129+ if (class == rq->curr->sched_class)21302130+ break;21312131+ if (class == p->sched_class) {21322132+ resched_task(rq->curr);21332133+ break;21342134+ }21352135+ }21362136+ }21372137+21382138+ /*21392139+ * A queue event has occurred, and we're going to schedule. In21402140+ * this case, we can save a useless back to back clock update.21412141+ */21422142+ if (test_tsk_need_resched(rq->curr))21432143+ rq->skip_clock_update = 1;21092144}2110214521112146#ifdef CONFIG_SMP···6974695969756960 if (cpu != group_first_cpu(sd->groups))69766961 return;69626962+69636963+ sd->groups->group_weight = cpumask_weight(sched_group_cpus(sd->groups));6977696469786965 child = sd->child;69796966
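The new `check_preempt_curr()` above handles cross-class wakeups itself: it walks the scheduling classes in priority order and rescheds the running task only if the waking task's class is reached first (i.e. outranks it); same-class decisions are still delegated to the class's own hook. The walk's logic, abstracted to an enum standing in for the stop → rt → fair → idle chain (names illustrative, same-class case not modeled):

```c
#include <assert.h>
#include <stdbool.h>

/* Scheduling classes in descending priority. */
enum sched_class_id { CLASS_STOP, CLASS_RT, CLASS_FAIR, CLASS_IDLE, CLASS_NR };

/*
 * A cross-class wakeup preempts iff the waking task's class outranks
 * the running task's: whichever class we reach first in the walk wins.
 */
static bool cross_class_preempts(enum sched_class_id curr,
                                 enum sched_class_id p)
{
    int cls;

    for (cls = CLASS_STOP; cls < CLASS_NR; cls++) {
        if (cls == (int)curr)
            return false;  /* curr's class outranks p's: no preempt */
        if (cls == (int)p)
            return true;   /* p outranks curr: resched_task(curr) */
    }
    return false;
}
```

So an RT wakeup preempts a fair task without consulting the fair class at all, which is what the removed early-exit special cases in sched_fair.c used to approximate.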
+31-9
kernel/sched_fair.c
···16541654 struct cfs_rq *cfs_rq = task_cfs_rq(curr);16551655 int scale = cfs_rq->nr_running >= sched_nr_latency;1656165616571657- if (unlikely(rt_prio(p->prio)))16581658- goto preempt;16591659-16601660- if (unlikely(p->sched_class != &fair_sched_class))16611661- return;16621662-16631657 if (unlikely(se == pse))16641658 return;16651659···20292035 unsigned long this_load_per_task;20302036 unsigned long this_nr_running;20312037 unsigned long this_has_capacity;20382038+ unsigned int this_idle_cpus;2032203920332040 /* Statistics of the busiest group */20412041+ unsigned int busiest_idle_cpus;20342042 unsigned long max_load;20352043 unsigned long busiest_load_per_task;20362044 unsigned long busiest_nr_running;20372045 unsigned long busiest_group_capacity;20382046 unsigned long busiest_has_capacity;20472047+ unsigned int busiest_group_weight;2039204820402049 int group_imb; /* Is there imbalance in this sd */20412050#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)···20602063 unsigned long sum_nr_running; /* Nr tasks running in the group */20612064 unsigned long sum_weighted_load; /* Weighted load of group's tasks */20622065 unsigned long group_capacity;20662066+ unsigned long idle_cpus;20672067+ unsigned long group_weight;20632068 int group_imb; /* Is there an imbalance in the group ? */20642069 int group_has_capacity; /* Is there extra capacity in the group? 
*/20652070};···24302431 sgs->group_load += load;24312432 sgs->sum_nr_running += rq->nr_running;24322433 sgs->sum_weighted_load += weighted_cpuload(i);24332433-24342434+ if (idle_cpu(i))24352435+ sgs->idle_cpus++;24342436 }2435243724362438 /*···24692469 sgs->group_capacity = DIV_ROUND_CLOSEST(group->cpu_power, SCHED_LOAD_SCALE);24702470 if (!sgs->group_capacity)24712471 sgs->group_capacity = fix_small_capacity(sd, group);24722472+ sgs->group_weight = group->group_weight;2472247324732474 if (sgs->group_capacity > sgs->sum_nr_running)24742475 sgs->group_has_capacity = 1;···25772576 sds->this_nr_running = sgs.sum_nr_running;25782577 sds->this_load_per_task = sgs.sum_weighted_load;25792578 sds->this_has_capacity = sgs.group_has_capacity;25792579+ sds->this_idle_cpus = sgs.idle_cpus;25802580 } else if (update_sd_pick_busiest(sd, sds, sg, &sgs, this_cpu)) {25812581 sds->max_load = sgs.avg_load;25822582 sds->busiest = sg;25832583 sds->busiest_nr_running = sgs.sum_nr_running;25842584+ sds->busiest_idle_cpus = sgs.idle_cpus;25842585 sds->busiest_group_capacity = sgs.group_capacity;25852586 sds->busiest_load_per_task = sgs.sum_weighted_load;25862587 sds->busiest_has_capacity = sgs.group_has_capacity;25882588+ sds->busiest_group_weight = sgs.group_weight;25872589 sds->group_imb = sgs.group_imb;25882590 }25892591···28642860 if (sds.this_load >= sds.avg_load)28652861 goto out_balanced;2866286228672867- if (100 * sds.max_load <= sd->imbalance_pct * sds.this_load)28682868- goto out_balanced;28632863+ /*28642864+ * In the CPU_NEWLY_IDLE, use imbalance_pct to be conservative.28652865+ * And to check for busy balance use !idle_cpu instead of28662866+ * CPU_NOT_IDLE. This is because HT siblings will use CPU_NOT_IDLE28672867+ * even when they are idle.28682868+ */28692869+ if (idle == CPU_NEWLY_IDLE || !idle_cpu(this_cpu)) {28702870+ if (100 * sds.max_load <= sd->imbalance_pct * sds.this_load)28712871+ goto out_balanced;28722872+ } else {28732873+ /*28742874+ * This cpu is idle. 
If the busiest group load doesn't28752875+ * have more tasks than the number of available cpu's and28762876+ * there is no imbalance between this and busiest group28772877+ * wrt to idle cpu's, it is balanced.28782878+ */28792879+ if ((sds.this_idle_cpus <= sds.busiest_idle_cpus + 1) &&28802880+ sds.busiest_nr_running <= sds.busiest_group_weight)28812881+ goto out_balanced;28822882+ }2869288328702884force_balance:28712885 /* Looks like there is an imbalance. Compute it */
+2-2
kernel/sched_stoptask.c
···1919static void2020check_preempt_curr_stop(struct rq *rq, struct task_struct *p, int flags)2121{2222- resched_task(rq->curr); /* we preempt everything */2222+ /* we're never preempted */2323}24242525static struct task_struct *pick_next_task_stop(struct rq *rq)2626{2727 struct task_struct *stop = rq->stop;28282929- if (stop && stop->state == TASK_RUNNING)2929+ if (stop && stop->se.on_rq)3030 return stop;31313232 return NULL;
···
 		op->extent.length = objlen;
 	}
 	req->r_num_pages = calc_pages_for(off, *plen);
+	req->r_page_alignment = off & ~PAGE_MASK;
 	if (op->op == CEPH_OSD_OP_WRITE)
 		op->payload_len = *plen;
···
 		req->r_request->hdr.data_len = cpu_to_le32(data_len);
 	}

+	req->r_request->page_alignment = req->r_page_alignment;
+
 	BUG_ON(p > msg->front.iov_base + msg->front.iov_len);
 	msg_size = p - msg->front.iov_base;
 	msg->front.iov_len = msg_size;
···
 				u32 truncate_seq,
 				u64 truncate_size,
 				struct timespec *mtime,
-				bool use_mempool, int num_reply)
+				bool use_mempool, int num_reply,
+				int page_align)
 {
 	struct ceph_osd_req_op ops[3];
 	struct ceph_osd_request *req;
···
 	/* calculate max write size */
 	calc_layout(osdc, vino, layout, off, plen, req, ops);
 	req->r_file_layout = *layout; /* keep a copy */
+
+	/* in case it differs from natural alignment that calc_layout
+	   filled in for us */
+	req->r_page_alignment = page_align;

 	ceph_osdc_build_request(req, off, plen, ops,
 				snapc,
···
 			struct ceph_vino vino, struct ceph_file_layout *layout,
 			u64 off, u64 *plen,
 			u32 truncate_seq, u64 truncate_size,
-			struct page **pages, int num_pages)
+			struct page **pages, int num_pages, int page_align)
 {
 	struct ceph_osd_request *req;
 	int rc = 0;
···
 	req = ceph_osdc_new_request(osdc, layout, vino, off, plen,
 				    CEPH_OSD_OP_READ, CEPH_OSD_FLAG_READ,
 				    NULL, 0, truncate_seq, truncate_size, NULL,
-				    false, 1);
+				    false, 1, page_align);
 	if (!req)
 		return -ENOMEM;

 	/* it may be a short read due to an object boundary */
 	req->r_pages = pages;

-	dout("readpages final extent is %llu~%llu (%d pages)\n",
-	     off, *plen, req->r_num_pages);
+	dout("readpages final extent is %llu~%llu (%d pages align %d)\n",
+	     off, *plen, req->r_num_pages, page_align);

 	rc = ceph_osdc_start_request(osdc, req, false);
 	if (!rc)
···
 {
 	struct ceph_osd_request *req;
 	int rc = 0;
+	int page_align = off & ~PAGE_MASK;

 	BUG_ON(vino.snap != CEPH_NOSNAP);
 	req = ceph_osdc_new_request(osdc, layout, vino, off, &len,
···
 				    CEPH_OSD_FLAG_WRITE,
 				    snapc, do_sync,
 				    truncate_seq, truncate_size, mtime,
-				    nofail, 1);
+				    nofail, 1, page_align);
 	if (!req)
 		return -ENOMEM;
···
 	m = ceph_msg_get(req->r_reply);

 	if (data_len > 0) {
-		unsigned data_off = le16_to_cpu(hdr->data_off);
-		int want = calc_pages_for(data_off & ~PAGE_MASK, data_len);
+		int want = calc_pages_for(req->r_page_alignment, data_len);

 		if (unlikely(req->r_num_pages < want)) {
 			pr_warning("tid %lld reply %d > expected %d pages\n",
···
 		}
 		m->pages = req->r_pages;
 		m->nr_pages = req->r_num_pages;
+		m->page_alignment = req->r_page_alignment;
 #ifdef CONFIG_BLOCK
 		m->bio = req->r_bio;
 #endif
+1-2
net/ceph/pagevec.c
···
  * build a vector of user pages
  */
 struct page **ceph_get_direct_page_vector(const char __user *data,
-					  int num_pages,
-					  loff_t off, size_t len)
+					  int num_pages)
 {
 	struct page **pages;
 	int rc;
+1-1
net/core/filter.c
···
 EXPORT_SYMBOL(sk_chk_filter);

 /**
- * sk_filter_rcu_release: Release a socket filter by rcu_head
+ * sk_filter_rcu_release - Release a socket filter by rcu_head
  * @rcu: rcu_head that contains the sk_filter to free
  */
 static void sk_filter_rcu_release(struct rcu_head *rcu)
+8-2
net/core/net-sysfs.c
···

 	map = rcu_dereference_raw(queue->rps_map);
-	if (map)
+	if (map) {
+		RCU_INIT_POINTER(queue->rps_map, NULL);
 		call_rcu(&map->rcu, rps_map_release);
+	}

 	flow_table = rcu_dereference_raw(queue->rps_flow_table);
-	if (flow_table)
+	if (flow_table) {
+		RCU_INIT_POINTER(queue->rps_flow_table, NULL);
 		call_rcu(&flow_table->rcu, rps_dev_flow_table_release);
+	}

 	if (atomic_dec_and_test(&first->count))
 		kfree(first);
+	else
+		memset(kobj, 0, sizeof(*kobj));
 }

 static struct kobj_type rx_queue_ktype = {
+3
net/ipv4/icmp.c
···
 		/* No need to clone since we're just using its address. */
 		rt2 = rt;

+		if (!fl.nl_u.ip4_u.saddr)
+			fl.nl_u.ip4_u.saddr = rt->rt_src;
+
 		err = xfrm_lookup(net, (struct dst_entry **)&rt, &fl, NULL, 0);
 		switch (err) {
 		case 0:
···
 #include <linux/sched.h>
 #include <linux/slab.h>
-#include <linux/smp_lock.h>
 #include "irnet_ppp.h"		/* Private header */
 /* Please put other headers in irnet.h - Thanks */
+22-8
net/irda/irttp.c
···
  */
 int irttp_udata_request(struct tsap_cb *self, struct sk_buff *skb)
 {
+	int ret;
+
 	IRDA_ASSERT(self != NULL, return -1;);
 	IRDA_ASSERT(self->magic == TTP_TSAP_MAGIC, return -1;);
 	IRDA_ASSERT(skb != NULL, return -1;);

 	IRDA_DEBUG(4, "%s()\n", __func__);

+	/* Take shortcut on zero byte packets */
+	if (skb->len == 0) {
+		ret = 0;
+		goto err;
+	}
+
 	/* Check that nothing bad happens */
-	if ((skb->len == 0) || (!self->connected)) {
-		IRDA_DEBUG(1, "%s(), No data, or not connected\n",
-			   __func__);
+	if (!self->connected) {
+		IRDA_WARNING("%s(), Not connected\n", __func__);
+		ret = -ENOTCONN;
 		goto err;
 	}

 	if (skb->len > self->max_seg_size) {
-		IRDA_DEBUG(1, "%s(), UData is too large for IrLAP!\n",
-			   __func__);
+		IRDA_ERROR("%s(), UData is too large for IrLAP!\n", __func__);
+		ret = -EMSGSIZE;
 		goto err;
 	}
···
 err:
 	dev_kfree_skb(skb);
-	return -1;
+	return ret;
 }
 EXPORT_SYMBOL(irttp_udata_request);
···
 	IRDA_DEBUG(2, "%s() : queue len = %d\n", __func__,
 		   skb_queue_len(&self->tx_queue));

+	/* Take shortcut on zero byte packets */
+	if (skb->len == 0) {
+		ret = 0;
+		goto err;
+	}
+
 	/* Check that nothing bad happens */
-	if ((skb->len == 0) || (!self->connected)) {
-		IRDA_WARNING("%s: No data, or not connected\n", __func__);
+	if (!self->connected) {
+		IRDA_WARNING("%s: Not connected\n", __func__);
 		ret = -ENOTCONN;
 		goto err;
 	}
+1
net/netfilter/ipvs/Kconfig
···
 menuconfig IP_VS
 	tristate "IP virtual server support"
 	depends on NET && INET && NETFILTER
+	depends on (NF_CONNTRACK || NF_CONNTRACK=n)
 	---help---
 	  IP Virtual Server support will let you build a high-performance
 	  virtual server based on cluster of two or more real servers. This