<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
	"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>

<book id="drmDevelopersGuide">
  <bookinfo>
    <title>Linux DRM Developer's Guide</title>

    <copyright>
      <year>2008-2009</year>
      <holder>
	Intel Corporation (Jesse Barnes &lt;jesse.barnes@intel.com&gt;)
      </holder>
    </copyright>

    <legalnotice>
      <para>
	The contents of this file may be used under the terms of the GNU
	General Public License version 2 (the "GPL") as distributed in
	the kernel source COPYING file.
      </para>
    </legalnotice>
  </bookinfo>

<toc></toc>

  <!-- Introduction -->

  <chapter id="drmIntroduction">
    <title>Introduction</title>
    <para>
      The Linux DRM layer contains code intended to support the needs
      of complex graphics devices, usually containing programmable
      pipelines well suited to 3D graphics acceleration.  Graphics
      drivers in the kernel can make use of DRM functions to make
      tasks like memory management, interrupt handling and DMA easier,
      and provide a uniform interface to applications.
    </para>
    <para>
      A note on versions: this guide covers features found in the DRM
      tree, including the TTM memory manager, output configuration and
      mode setting, and the new vblank internals, in addition to all
      the regular features found in current kernels.
    </para>
    <para>
      [Insert diagram of typical DRM stack here]
    </para>
  </chapter>

  <!-- Internals -->

  <chapter id="drmInternals">
    <title>DRM Internals</title>
    <para>
      This chapter documents DRM internals relevant to driver authors
      and developers working to add support for the latest features to
      existing drivers.
    </para>
    <para>
      First, we'll go over some typical driver initialization
      requirements, like setting up command buffers, creating an
      initial output configuration, and initializing core services.
      Subsequent sections will cover core internals in more detail,
      providing implementation notes and examples.
    </para>
    <para>
      The DRM layer provides several services to graphics drivers,
      many of them driven by the application interfaces it provides
      through libdrm, the library that wraps most of the DRM ioctls.
      These include vblank event handling, memory management, output
      management, framebuffer management, command submission &amp;
      fencing, suspend/resume support, and DMA services.
    </para>
    <para>
      The core of every DRM driver is struct drm_driver.  Drivers
      will typically statically initialize a drm_driver structure,
      then pass it to drm_init() at load time.
    </para>

  <!-- Internals: driver init -->

  <sect1>
    <title>Driver initialization</title>
    <para>
      Before calling the DRM initialization routines, the driver must
      first create and fill out a struct drm_driver structure.
    </para>
    <programlisting>
      static struct drm_driver driver = {
	/* don't use mtrr's here, the Xserver or user space app should
	 * deal with them for intel hardware.
	 */
	.driver_features =
	    DRIVER_USE_AGP | DRIVER_REQUIRE_AGP |
	    DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | DRIVER_MODESET,
	.load = i915_driver_load,
	.unload = i915_driver_unload,
	.firstopen = i915_driver_firstopen,
	.lastclose = i915_driver_lastclose,
	.preclose = i915_driver_preclose,
	.save = i915_save,
	.restore = i915_restore,
	.device_is_agp = i915_driver_device_is_agp,
	.get_vblank_counter = i915_get_vblank_counter,
	.enable_vblank = i915_enable_vblank,
	.disable_vblank = i915_disable_vblank,
	.irq_preinstall = i915_driver_irq_preinstall,
	.irq_postinstall = i915_driver_irq_postinstall,
	.irq_uninstall = i915_driver_irq_uninstall,
	.irq_handler = i915_driver_irq_handler,
	.reclaim_buffers = drm_core_reclaim_buffers,
	.get_map_ofs = drm_core_get_map_ofs,
	.get_reg_ofs = drm_core_get_reg_ofs,
	.fb_probe = intelfb_probe,
	.fb_remove = intelfb_remove,
	.fb_resize = intelfb_resize,
	.master_create = i915_master_create,
	.master_destroy = i915_master_destroy,
#if defined(CONFIG_DEBUG_FS)
	.debugfs_init = i915_debugfs_init,
	.debugfs_cleanup = i915_debugfs_cleanup,
#endif
	.gem_init_object = i915_gem_init_object,
	.gem_free_object = i915_gem_free_object,
	.gem_vm_ops = &amp;i915_gem_vm_ops,
	.ioctls = i915_ioctls,
	.fops = {
		.owner = THIS_MODULE,
		.open = drm_open,
		.release = drm_release,
		.ioctl = drm_ioctl,
		.mmap = drm_mmap,
		.poll = drm_poll,
		.fasync = drm_fasync,
#ifdef CONFIG_COMPAT
		.compat_ioctl = i915_compat_ioctl,
#endif
		.llseek = noop_llseek,
	},
	.pci_driver = {
		.name = DRIVER_NAME,
		.id_table = pciidlist,
		.probe = probe,
		.remove = __devexit_p(drm_cleanup_pci),
	},
	.name = DRIVER_NAME,
	.desc = DRIVER_DESC,
	.date = DRIVER_DATE,
	.major = DRIVER_MAJOR,
	.minor = DRIVER_MINOR,
	.patchlevel = DRIVER_PATCHLEVEL,
      };
    </programlisting>
    <para>
      In the example above, taken from the i915 DRM driver, the driver
      sets several flags indicating what core features it supports.
      We'll go over the individual callbacks in later sections.  Since
      the flags tell the DRM core which features your driver supports,
      you need to set most of them prior to calling drm_init().  Some,
      like DRIVER_MODESET, can be set later based on user supplied
      parameters, but that's the exception rather than the rule.
    </para>
    <variablelist>
      <title>Driver flags</title>
      <varlistentry>
	<term>DRIVER_USE_AGP</term>
	<listitem><para>
	    Driver uses the AGP interface.
	</para></listitem>
      </varlistentry>
      <varlistentry>
	<term>DRIVER_REQUIRE_AGP</term>
	<listitem><para>
	    Driver needs the AGP interface to function.
	</para></listitem>
      </varlistentry>
      <varlistentry>
	<term>DRIVER_USE_MTRR</term>
	<listitem>
	  <para>
	    Driver uses the MTRR interface for mapping memory.  Deprecated.
	  </para>
	</listitem>
      </varlistentry>
      <varlistentry>
	<term>DRIVER_PCI_DMA</term>
	<listitem><para>
	    Driver is capable of PCI DMA.  Deprecated.
	</para></listitem>
      </varlistentry>
      <varlistentry>
	<term>DRIVER_SG</term>
	<listitem><para>
	    Driver can perform scatter/gather DMA.  Deprecated.
	</para></listitem>
      </varlistentry>
      <varlistentry>
	<term>DRIVER_HAVE_DMA</term>
	<listitem><para>
	    Driver supports DMA.
	    Deprecated.
	</para></listitem>
      </varlistentry>
      <varlistentry>
	<term>DRIVER_HAVE_IRQ</term><term>DRIVER_IRQ_SHARED</term>
	<listitem>
	  <para>
	    DRIVER_HAVE_IRQ indicates whether the driver has an IRQ
	    handler; DRIVER_IRQ_SHARED indicates whether the device &amp;
	    handler support shared IRQs (note that this is required of
	    PCI drivers).
	  </para>
	</listitem>
      </varlistentry>
      <varlistentry>
	<term>DRIVER_DMA_QUEUE</term>
	<listitem>
	  <para>
	    If the driver queues DMA requests and completes them
	    asynchronously, this flag should be set.  Deprecated.
	  </para>
	</listitem>
      </varlistentry>
      <varlistentry>
	<term>DRIVER_FB_DMA</term>
	<listitem>
	  <para>
	    Driver supports DMA to/from the framebuffer.  Deprecated.
	  </para>
	</listitem>
      </varlistentry>
      <varlistentry>
	<term>DRIVER_MODESET</term>
	<listitem>
	  <para>
	    Driver supports mode setting interfaces.
	  </para>
	</listitem>
      </varlistentry>
    </variablelist>
    <para>
      In this specific case, the driver requires AGP and supports
      IRQs.  DMA, as we'll see, is handled by device specific ioctls.
      It also supports the kernel mode setting APIs, though unlike in
      the actual i915 driver source, this example unconditionally
      exports KMS capability.
    </para>
  </sect1>

  <!-- Internals: driver load -->

  <sect1>
    <title>Driver load</title>
    <para>
      In the previous section, we saw what a typical drm_driver
      structure might look like.  One of the more important fields in
      the structure is the hook for the load function.
    </para>
    <programlisting>
      static struct drm_driver driver = {
	...
	.load = i915_driver_load,
	...
      };
    </programlisting>
    <para>
      The load function has many responsibilities: allocating a driver
      private structure, specifying supported performance counters,
      configuring the device (e.g. mapping registers &amp; command
      buffers), initializing the memory manager, and setting up the
      initial output configuration.
    </para>
    <para>
      Note that the tasks performed at driver load time must not
      conflict with DRM client requirements.  For instance, if user
      level mode setting drivers are in use, it would be problematic
      to perform output discovery &amp; configuration at load time.
      Likewise, if pre-memory management aware user level drivers are
      in use, memory management and command buffer setup may need to
      be omitted.  These requirements are driver specific, and care
      needs to be taken to keep both old and new applications and
      libraries working.  The i915 driver supports the "modeset"
      module parameter to control whether advanced features are
      enabled at load time or in legacy fashion.  If compatibility is
      a concern (e.g. with drivers converted over to the new interfaces
      from the old ones), care must be taken to prevent incompatible
      device initialization and control with the currently active
      userspace drivers.
    </para>

    <sect2>
      <title>Driver private &amp; performance counters</title>
      <para>
	The driver private hangs off the main drm_device structure and
	can be used for tracking various device specific bits of
	information, like register offsets, command buffer status,
	register state for suspend/resume, etc.  At load time, a
	driver can simply allocate one and set drm_device.dev_private
	appropriately; at unload the driver can free it and set
	drm_device.dev_private to NULL.
      </para>
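      <para>
	A minimal sketch of this pattern might look like the following.
	The struct foo_private type and the foo_* function names are
	invented for illustration; in the i915 driver the equivalent
	work is done in i915_driver_load() and i915_driver_unload().
      </para>
      <programlisting>
<![CDATA[
/* Hypothetical driver private structure. */
struct foo_private {
	void __iomem *regs;	/* MMIO mapping */
	/* command buffer state, saved registers, etc. */
};

static int foo_driver_load(struct drm_device *dev, unsigned long flags)
{
	struct foo_private *dev_priv;

	dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL);
	if (dev_priv == NULL)
		return -ENOMEM;

	dev->dev_private = dev_priv;

	/* ... map registers, init the memory manager, set up outputs ... */

	return 0;
}

static int foo_driver_unload(struct drm_device *dev)
{
	struct foo_private *dev_priv = dev->dev_private;

	/* ... tear down outputs, memory manager and register mappings ... */

	kfree(dev_priv);
	dev->dev_private = NULL;

	return 0;
}
]]>
      </programlisting>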
      <para>
	The DRM supports several counters which can be used for rough
	performance characterization.  Note that the DRM stat counter
	system is not often used by applications, and supporting
	additional counters is completely optional.
      </para>
      <para>
	These interfaces are deprecated and should not be used.  If
	performance monitoring is desired, the developer should
	investigate and potentially enhance the kernel perf and tracing
	infrastructure to export GPU related performance information to
	performance monitoring tools and applications.
      </para>
    </sect2>

    <sect2>
      <title>Configuring the device</title>
      <para>
	Obviously, device configuration will be device specific.
	However, there are several common operations: finding a
	device's PCI resources, mapping them, and potentially setting
	up an IRQ handler.
      </para>
      <para>
	Finding &amp; mapping resources is fairly straightforward.  The
	DRM wrapper functions, drm_get_resource_start() and
	drm_get_resource_len(), can be used to find BARs on the given
	drm_device struct.  Once those values have been retrieved, the
	driver load function can call drm_addmap() to create a new
	mapping for the BAR in question.  Note you'll probably want a
	drm_local_map_t in your driver private structure to track any
	mappings you create.
<!-- !Fdrivers/gpu/drm/drm_bufs.c drm_get_resource_* -->
<!-- !Finclude/drm/drmP.h drm_local_map_t -->
      </para>
      <para>
	If compatibility with other operating systems isn't a concern
	(DRM drivers can run under various BSD variants and OpenSolaris),
	native Linux calls can be used for the above, e.g. pci_resource_*
	and iomap*/iounmap.  See the Linux device driver book for more
	info.
      </para>
      <para>
	Once you have a register map, you can use the DRM_READn() and
	DRM_WRITEn() macros to access the registers on your device, or
	use driver specific versions to offset into your MMIO space
	relative to a driver specific base pointer (see I915_READ for
	an example).
      </para>
      <para>
	If your device supports interrupt generation, you may want to
	set up an interrupt handler at driver load time as well.  This
	is done using the drm_irq_install() function.  If your device
	supports vertical blank interrupts, it should call
	drm_vblank_init() to initialize the core vblank handling code before
	enabling interrupts on your device.  This ensures the vblank related
	structures are allocated and allows the core to handle vblank events.
      </para>
<!--!Fdrivers/char/drm/drm_irq.c drm_irq_install-->
      <para>
	Once your interrupt handler is registered (it'll use your
	drm_driver.irq_handler as the actual interrupt handling
	function), you can safely enable interrupts on your device,
	assuming any other state your interrupt handler uses is also
	initialized.
      </para>
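      <para>
	As a rough sketch of that ordering (the foo_irq_init() helper
	and FOO_NUM_CRTCS are invented for illustration; a driver would
	typically do this near the end of its load hook):
      </para>
      <programlisting>
<![CDATA[
static int foo_irq_init(struct drm_device *dev)
{
	int ret;

	/* Allocate core vblank state for each CRTC before IRQs can fire.
	 * FOO_NUM_CRTCS is the (hypothetical) CRTC count for this device. */
	ret = drm_vblank_init(dev, FOO_NUM_CRTCS);
	if (ret)
		return ret;

	/*
	 * drm_irq_install() calls the driver's irq_preinstall hook,
	 * requests the interrupt, then calls irq_postinstall.
	 */
	ret = drm_irq_install(dev);
	if (ret)
		drm_vblank_cleanup(dev);

	return ret;
}
]]>
      </programlisting>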
      <para>
	Another task that may be necessary during configuration is
	mapping the video BIOS.  On many devices, the VBIOS describes
	device configuration, LCD panel timings (if any), and contains
	flags indicating device state.  Mapping the BIOS can be done
	using the pci_map_rom() call, a convenience function that
	takes care of mapping the actual ROM, whether it has been
	shadowed into memory (typically at address 0xc0000) or exists
	on the PCI device in the ROM BAR.  Once you've mapped the ROM
	and extracted any necessary information, be sure to unmap it;
	on many devices the ROM address decoder is shared with other
	BARs, so leaving it mapped can cause undesired behavior like
	hangs or memory corruption.
<!--!Fdrivers/pci/rom.c pci_map_rom-->
      </para>
    </sect2>

    <sect2>
      <title>Memory manager initialization</title>
      <para>
	In order to allocate command buffers, cursor memory, scanout
	buffers, etc., as well as support the latest features provided
	by packages like Mesa and the X.Org X server, your driver
	should support a memory manager.
      </para>
      <para>
	If your driver supports memory management (it should!), you'll
	need to set that up at load time as well.  How you initialize
	it depends on which memory manager you're using, TTM or GEM.
      </para>
      <sect3>
	<title>TTM initialization</title>
	<para>
	  TTM (for Translation Table Manager) manages video memory and
	  aperture space for graphics devices.  TTM supports both UMA devices
	  and devices with dedicated video RAM (VRAM), i.e. most discrete
	  graphics devices.  If your device has dedicated RAM, supporting
	  TTM is desirable.  TTM also integrates tightly with your
	  driver specific buffer execution function.  See the radeon
	  driver for examples.
	</para>
	<para>
	  The core TTM structure is the ttm_bo_driver struct.  It contains
	  several fields with function pointers for initializing the TTM,
	  allocating and freeing memory, waiting for command completion
	  and fence synchronization, and memory migration.  See the
	  radeon_ttm.c file for an example of usage.
	</para>
	<para>
	  The ttm_global_reference structure is made up of several fields:
	</para>
	<programlisting>
	  struct ttm_global_reference {
		enum ttm_global_types global_type;
		size_t size;
		void *object;
		int (*init) (struct ttm_global_reference *);
		void (*release) (struct ttm_global_reference *);
	  };
	</programlisting>
	<para>
	  There should be one global reference structure for your memory
	  manager as a whole, and there will be others for each object
	  created by the memory manager at runtime.  Your global TTM should
	  have a type of TTM_GLOBAL_TTM_MEM.  The size field for the global
	  object should be sizeof(struct ttm_mem_global), and the init and
	  release hooks should point at your driver specific init and
	  release routines, which will probably eventually call
	  ttm_mem_global_init and ttm_mem_global_release, respectively.
	</para>
	<para>
	  Once your global TTM accounting structure is set up and initialized
	  (done by calling ttm_global_item_ref on the global object you
	  just created), you'll need to create a buffer object TTM to
	  provide a pool for buffer object allocation by clients and the
	  kernel itself.  The type of this object should be TTM_GLOBAL_TTM_BO,
	  and its size should be sizeof(struct ttm_bo_global).  Again,
	  driver specific init and release functions can be provided,
	  likely eventually calling ttm_bo_global_init and
	  ttm_bo_global_release, respectively.  Also like the previous
	  object, ttm_global_item_ref is used to create an initial reference
	  count for the TTM, which will call your initialization function.
	</para>
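	<para>
	  A condensed sketch of this two-step setup, loosely modeled on
	  the radeon driver, is shown below.  The foo_* names and the
	  placement of the reference structures in the driver private
	  are invented for illustration, and the exact names of the
	  global reference helpers have shifted between kernel versions.
	</para>
	<programlisting>
<![CDATA[
static int foo_ttm_mem_global_init(struct ttm_global_reference *ref)
{
	return ttm_mem_global_init(ref->object);
}

static void foo_ttm_mem_global_release(struct ttm_global_reference *ref)
{
	ttm_mem_global_release(ref->object);
}

static int foo_ttm_global_init(struct foo_private *dev_priv)
{
	struct ttm_global_reference *ref;
	int ret;

	/* Global memory accounting object. */
	ref = &dev_priv->mem_global_ref;
	ref->global_type = TTM_GLOBAL_TTM_MEM;
	ref->size = sizeof(struct ttm_mem_global);
	ref->init = &foo_ttm_mem_global_init;
	ref->release = &foo_ttm_mem_global_release;
	ret = ttm_global_item_ref(ref);
	if (ret)
		return ret;

	/* Buffer object pool, built on top of the accounting object. */
	dev_priv->bo_global_ref.mem_glob = dev_priv->mem_global_ref.object;
	ref = &dev_priv->bo_global_ref.ref;
	ref->global_type = TTM_GLOBAL_TTM_BO;
	ref->size = sizeof(struct ttm_bo_global);
	ref->init = &ttm_bo_global_init;
	ref->release = &ttm_bo_global_release;
	ret = ttm_global_item_ref(ref);
	if (ret)
		ttm_global_item_unref(&dev_priv->mem_global_ref);

	return ret;
}
]]>
	</programlisting>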
      </sect3>
      <sect3>
	<title>GEM initialization</title>
	<para>
	  GEM is an alternative to TTM, designed specifically for UMA
	  devices.  It has simpler initialization and execution requirements
	  than TTM, but has no VRAM management capability.  Core GEM
	  initialization consists of a basic drm_mm_init call to create
	  a GTT DRM MM object, which provides an address space pool for
	  object allocation.  In a KMS configuration, the driver will
	  need to allocate and initialize a command ring buffer following
	  basic GEM initialization.  Most UMA devices have a so-called
	  "stolen" memory region, which provides space for the initial
	  framebuffer and large, contiguous memory regions required by the
	  device.  This space is not typically managed by GEM, and must
	  be initialized separately into its own DRM MM object.
	</para>
	<para>
	  Initialization will be driver specific, and will depend on
	  the architecture of the device.  In the case of Intel
	  integrated graphics chips like 965GM, GEM initialization can
	  be done by calling the internal GEM init function,
	  i915_gem_do_init().  Since the 965GM is a UMA device
	  (i.e. it doesn't have dedicated VRAM), GEM will manage
	  making regular RAM available for GPU operations.  Memory set
	  aside by the BIOS (called "stolen" memory by the i915
	  driver) will be managed by the DRM memrange allocator; the
	  rest of the aperture will be managed by GEM.
	  <programlisting>
	    /* Basic memrange allocator for stolen space (aka vram) */
	    drm_memrange_init(&amp;dev_priv->vram, 0, prealloc_size);
	    /* Let GEM Manage from end of prealloc space to end of aperture */
	    i915_gem_do_init(dev, prealloc_size, agp_size);
	  </programlisting>
<!--!Edrivers/char/drm/drm_memrange.c-->
	</para>
	<para>
	  Once the memory manager has been set up, we can allocate the
	  command buffer.  In the i915 case, this is also done with a
	  GEM function, i915_gem_init_ringbuffer().
	</para>
      </sect3>
    </sect2>

    <sect2>
      <title>Output configuration</title>
      <para>
	The final initialization task is output configuration.  This involves
	finding and initializing the CRTCs, encoders and connectors
	for your device, creating an initial configuration and
	registering a framebuffer console driver.
      </para>
      <sect3>
	<title>Output discovery and initialization</title>
	<para>
	  Several core functions exist to create CRTCs, encoders and
	  connectors, namely drm_crtc_init(), drm_connector_init() and
	  drm_encoder_init(), along with several "helper" functions to
	  perform common tasks.
	</para>
	<para>
	  Connectors should be registered with sysfs once they've been
	  detected and initialized, using the
	  drm_sysfs_connector_add() function.  Likewise, when they're
	  removed from the system, they should be destroyed with
	  drm_sysfs_connector_remove().
	</para>
	<programlisting>
<![CDATA[
void intel_crt_init(struct drm_device *dev)
{
	struct drm_connector *connector;
	struct intel_output *intel_output;

	intel_output = kzalloc(sizeof(struct intel_output), GFP_KERNEL);
	if (!intel_output)
		return;

	connector = &intel_output->base;
	drm_connector_init(dev, &intel_output->base,
			   &intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);

	drm_encoder_init(dev, &intel_output->enc, &intel_crt_enc_funcs,
			 DRM_MODE_ENCODER_DAC);

	drm_mode_connector_attach_encoder(&intel_output->base,
					  &intel_output->enc);

	/* Set up the DDC bus. */
	intel_output->ddc_bus = intel_i2c_create(dev, GPIOA, "CRTDDC_A");
	if (!intel_output->ddc_bus) {
		dev_printk(KERN_ERR, &dev->pdev->dev, "DDC bus registration "
			   "failed.\n");
		return;
	}

	intel_output->type = INTEL_OUTPUT_ANALOG;
	connector->interlace_allowed = 0;
	connector->doublescan_allowed = 0;

	drm_encoder_helper_add(&intel_output->enc, &intel_crt_helper_funcs);
	drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs);

	drm_sysfs_connector_add(connector);
}
]]>
	</programlisting>
	<para>
	  In the example above (again, taken from the i915 driver), a
	  CRT connector and encoder combination is created.  A device
	  specific i2c bus is also created, for fetching EDID data and
	  performing monitor detection.  Once the process is complete,
	  the new connector is registered with sysfs, to make its
	  properties available to applications.
	</para>
	<sect4>
	  <title>Helper functions and core functions</title>
	  <para>
	    Since many PC-class graphics devices have similar display output
	    designs, the DRM provides a set of helper functions to make
	    output management easier.  The core helper routines handle
	    encoder re-routing and disabling of unused functions following
	    mode set.  Using the helpers is optional, but recommended for
	    devices with PC-style architectures (i.e. a set of display planes
	    for feeding pixels to encoders which are in turn routed to
	    connectors).  Devices with more complex requirements needing
	    finer grained management can opt to use the core callbacks
	    directly.
	  </para>
	  <para>
	    [Insert typical diagram here.] [Insert OMAP style config here.]
	  </para>
	</sect4>
	<para>
	  For each encoder, CRTC and connector, several functions must
	  be provided, depending on the object type.  Encoder objects
	  need to provide a DPMS (basically on/off) function, mode fixup
	  (for converting requested modes into native hardware timings),
	  and prepare, set and commit functions for use by the core DRM
	  helper functions.  Connector helpers need to provide mode fetch and
	  validity functions as well as an encoder matching function for
	  returning an ideal encoder for a given connector.  The core
	  connector functions include a DPMS callback, (deprecated)
	  save/restore routines, detection, mode probing, property handling,
	  and cleanup functions.
	</para>
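	<para>
	  A skeletal set of these vtables for a hypothetical CRT output
	  might look like the following.  All of the foo_* functions are
	  placeholders for driver code; drm_helper_connector_dpms() and
	  drm_helper_probe_single_connector_modes() are generic
	  implementations provided by the DRM helper layer.
	</para>
	<programlisting>
<![CDATA[
static const struct drm_encoder_helper_funcs foo_crt_helper_funcs = {
	.dpms = foo_crt_dpms,
	.mode_fixup = foo_crt_mode_fixup,
	.prepare = foo_encoder_prepare,
	.mode_set = foo_crt_mode_set,
	.commit = foo_encoder_commit,
};

static const struct drm_connector_helper_funcs foo_crt_connector_helper_funcs = {
	.get_modes = foo_crt_get_modes,
	.mode_valid = foo_crt_mode_valid,
	.best_encoder = foo_crt_best_encoder,
};

static const struct drm_connector_funcs foo_crt_connector_funcs = {
	.dpms = drm_helper_connector_dpms,
	.detect = foo_crt_detect,
	.fill_modes = drm_helper_probe_single_connector_modes,
	.destroy = foo_crt_destroy,
};
]]>
	</programlisting>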
<!--!Edrivers/char/drm/drm_crtc.h-->
<!--!Edrivers/char/drm/drm_crtc.c-->
<!--!Edrivers/char/drm/drm_crtc_helper.c-->
      </sect3>
    </sect2>
  </sect1>

  <!-- Internals: vblank handling -->

  <sect1>
    <title>VBlank event handling</title>
    <para>
      The DRM core exposes two vertical blank related ioctls:
      DRM_IOCTL_WAIT_VBLANK and DRM_IOCTL_MODESET_CTL.
<!--!Edrivers/char/drm/drm_irq.c-->
    </para>
    <para>
      DRM_IOCTL_WAIT_VBLANK takes a struct drm_wait_vblank structure
      as its argument, and is used to block or request a signal when a
      specified vblank event occurs.
    </para>
    <para>
      DRM_IOCTL_MODESET_CTL should be called by application level
      drivers before and after mode setting, since on many devices the
      vertical blank counter will be reset at that time.  Internally,
      the DRM snapshots the last vblank count when the ioctl is called
      with the _DRM_PRE_MODESET command so that the counter won't go
      backwards (which is dealt with when _DRM_POST_MODESET is used).
    </para>
    <para>
      To support the functions above, the DRM core provides several
      helper functions for tracking vertical blank counters, and
      requires drivers to provide several callbacks:
      get_vblank_counter(), enable_vblank() and disable_vblank().  The
      core uses get_vblank_counter() to keep the counter accurate
      across interrupt disable periods.  It should return the current
      vertical blank event count, which is often tracked in a device
      register.  The enable and disable vblank callbacks should enable
      and disable vertical blank interrupts, respectively.  In the
      absence of DRM clients waiting on vblank events, the core DRM
      code will use the disable_vblank() function to disable
      interrupts, which saves power.  They'll be re-enabled when
      a client calls the vblank wait ioctl above.
    </para>
    <para>
      Devices that don't provide a count register can simply use an
      internal atomic counter incremented on every vertical blank
      interrupt, and can make their enable and disable vblank
      functions into no-ops.
    </para>
  </sect1>

  <sect1>
    <title>Memory management</title>
    <para>
      The memory manager lies at the heart of many DRM operations, and
      is also required to support advanced client features like OpenGL
      pbuffers.  The DRM currently contains two memory managers, TTM
      and GEM.
    </para>

    <sect2>
      <title>The Translation Table Manager (TTM)</title>
      <para>
	TTM was developed by Tungsten Graphics, primarily by Thomas
	Hellström, and is intended to be a flexible, high performance
	graphics memory manager.
      </para>
      <para>
	Drivers wishing to support TTM must fill out a drm_bo_driver
	structure.
      </para>
      <para>
	TTM design background and information belongs here.
      </para>
    </sect2>

    <sect2>
      <title>The Graphics Execution Manager (GEM)</title>
      <para>
	GEM is an Intel project, authored by Eric Anholt and Keith
	Packard.  It provides simpler interfaces than TTM, and is well
	suited for UMA devices.
      </para>
      <para>
	GEM-enabled drivers must provide gem_init_object() and
	gem_free_object() callbacks to support the core memory
	allocation routines.  They should also provide several driver
	specific ioctls to support command execution, pinning, buffer
	read &amp; write, mapping, and domain ownership transfers.
      </para>
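      <para>
	A minimal sketch of the two object callbacks follows.  The
	struct foo_gem_object type and foo_* names are invented for
	illustration, and exactly how much cleanup the core performs
	around the free callback has varied between kernel versions;
	real drivers typically track per-object state such as cache
	domains, GTT bindings and tiling in their private structure.
      </para>
      <programlisting>
<![CDATA[
/* Hypothetical per-object driver state hung off drm_gem_object. */
struct foo_gem_object {
	struct drm_gem_object *obj;
	/* driver specific per-object state (domains, fencing, ...) */
};

static int foo_gem_init_object(struct drm_gem_object *obj)
{
	struct foo_gem_object *foo_obj;

	foo_obj = kzalloc(sizeof(*foo_obj), GFP_KERNEL);
	if (foo_obj == NULL)
		return -ENOMEM;

	foo_obj->obj = obj;
	obj->driver_private = foo_obj;

	return 0;
}

static void foo_gem_free_object(struct drm_gem_object *obj)
{
	/* Unbind from the aperture and tear down any driver specific
	 * state before the object and its backing store go away. */
	kfree(obj->driver_private);
	obj->driver_private = NULL;
}
]]>
      </programlisting>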
      <para>
	On a fundamental level, GEM involves several operations: memory
	allocation and freeing, command execution, and aperture management
	at command execution time.  Buffer object allocation is relatively
	straightforward and largely provided by Linux's shmem layer, which
	provides memory to back each object.  When mapped into the GTT
	or used in a command buffer, the backing pages for an object are
	flushed to memory and marked write combined so as to be coherent
	with the GPU.  Likewise, if the CPU accesses an object after the
	GPU has finished rendering to it, the object must be made coherent
	with the CPU's view of memory, usually involving GPU cache flushing
	of various kinds.  This core CPU&lt;-&gt;GPU coherency management is
	provided by the GEM set domain function, which evaluates an object's
	current domain and performs any necessary flushing or synchronization
	to put the object into the desired coherency domain (note that the
	object may be busy, i.e. an active render target; in that case the
	set domain function will block the client and wait for rendering to
	complete before performing any necessary flushing operations).
      </para>
      <para>
	Perhaps the most important GEM function is providing a command
	execution interface to clients.  Client programs construct command
	buffers containing references to previously allocated memory objects
	and submit them to GEM.  At that point, GEM will take care to bind
	all the objects into the GTT, execute the buffer, and provide
	necessary synchronization between clients accessing the same buffers.
	This often involves evicting some objects from the GTT and re-binding
	others (a fairly expensive operation), and providing relocation
	support which hides fixed GTT offsets from clients.  Clients must
	take care not to submit command buffers that reference more objects
	than can fit in the GTT or GEM will reject them and no rendering
	will occur.  Similarly, if several objects in the buffer require
	fence registers to be allocated for correct rendering (e.g. 2D blits
	on pre-965 chips), care must be taken not to require more fence
	registers than are available to the client.  Such resource management
	should be abstracted from the client in libdrm.
      </para>
    </sect2>

  </sect1>

  <!-- Output management -->
  <sect1>
    <title>Output management</title>
    <para>
      At the core of the DRM output management code is a set of
      structures representing CRTCs, encoders and connectors.
    </para>
    <para>
      A CRTC is an abstraction representing a part of the chip that
      contains a pointer to a scanout buffer.  Therefore, the number
      of CRTCs available determines how many independent scanout
      buffers can be active at any given time.  The CRTC structure
      contains several fields to support this: a pointer to some video
      memory, a display mode, and an (x, y) offset into the video
      memory to support panning or configurations where one piece of
      video memory spans multiple CRTCs.
    </para>
    <para>
      An encoder takes pixel data from a CRTC and converts it to a
      format suitable for any attached connectors.  On some devices,
      it may be possible to have a CRTC send data to more than one
      encoder.  In that case, both encoders would receive data from
      the same scanout buffer, resulting in a "cloned" display
      configuration across the connectors attached to each encoder.
    </para>
    <para>
      A connector is the final destination for pixel data on a device,
      and usually connects directly to an external display device like
      a monitor or laptop panel.  A connector can only be attached to
      one encoder at a time.
      The connector is also the structure
      where information about the attached display is kept, so it
      contains fields for display data, EDID data, DPMS &amp;
      connection status, and information about modes supported on the
      attached displays.
    </para>
<!--!Edrivers/char/drm/drm_crtc.c-->
  </sect1>

  <sect1>
    <title>Framebuffer management</title>
    <para>
      In order to set a mode on a given CRTC, encoder and connector
      configuration, clients need to provide a framebuffer object which
      will provide a source of pixels for the CRTC to deliver to the encoder(s)
      and ultimately the connector(s) in the configuration.  A framebuffer
      is fundamentally a driver specific memory object, made into an opaque
      handle by the DRM addfb function.  Once an fb has been created this
      way, it can be passed to the KMS mode setting routines for use in
      a configuration.
    </para>
  </sect1>

  <sect1>
    <title>Command submission &amp; fencing</title>
    <para>
      This should cover a few device specific command submission
      implementations.
    </para>
  </sect1>

  <sect1>
    <title>Suspend/resume</title>
    <para>
      The DRM core provides some suspend/resume code, but drivers
      wanting full suspend/resume support should provide save() and
      restore() functions.  These will be called at suspend,
      hibernate, or resume time, and should perform any state save or
      restore required by your device across suspend or hibernate
      states.
    </para>
  </sect1>

  <sect1>
    <title>DMA services</title>
    <para>
      This should cover how DMA mapping etc. is supported by the core.
      These functions are deprecated and should not be used.
    </para>
  </sect1>
  </chapter>

  <!-- External interfaces -->

  <chapter id="drmExternals">
    <title>Userland interfaces</title>
    <para>
      The DRM core exports several interfaces to applications,
      generally intended to be used through corresponding libdrm
      wrapper functions.  In addition, drivers export device specific
      interfaces for use by userspace drivers &amp; device aware
      applications through ioctls and sysfs files.
    </para>
    <para>
      External interfaces include: memory mapping, context management,
      DMA operations, AGP management, vblank control, fence
      management, memory management, and output management.
    </para>
    <para>
      Cover generic ioctls and sysfs layout here.  Only need high
      level info, since man pages will cover the rest.
    </para>
  </chapter>

  <!-- API reference -->

  <appendix id="drmDriverApi">
    <title>DRM Driver API</title>
    <para>
      Include auto-generated API reference here (need to reference it
      from paragraphs above too).
    </para>
  </appendix>

</book>