Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2016-12-30' of git://anongit.freedesktop.org/git/drm-misc into drm-next

First -misc pull for 4.11:
- drm_mm rework + lots of selftests (Chris Wilson)
- new connector_list locking+iterators
- plenty of kerneldoc updates
- format handling rework from Ville
- atomic helper changes from Maarten for better plane corner-case handling
in drivers, plus the i915 legacy cursor patch that needs this
- bridge cleanup from Laurent
- plus plenty of small stuff all over
- also contains a merge of the 4.10 docs tree so that we could apply the
dma-buf kerneldoc patches

It's a lot more than usual, but due to the merge window blackout it also
covers about 4 weeks, so it's all in line again on a per-week basis. The
more annoying part of having no pull request for 4 weeks is managing
cross-tree work. The -intel pull request I'll follow up with conflicts
quite a bit with -misc here. Longer-term (if drm-misc keeps growing), a
drm-next-queued tree that accepts pull requests for the next merge window
during this blackout might be useful.

I'd also like to backmerge -rc2+this into drm-intel next week; we have
quite a pile of patches waiting for the stuff in here.

* tag 'drm-misc-next-2016-12-30' of git://anongit.freedesktop.org/git/drm-misc: (126 commits)
drm: Add kerneldoc markup for new @scan parameters in drm_mm
drm/mm: Document locking rules
drm: Use drm_mm_insert_node_in_range_generic() for everyone
drm: Apply range restriction after color adjustment when allocation
drm: Wrap drm_mm_node.hole_follows
drm: Apply tight eviction scanning to color_adjust
drm: Simplify drm_mm scan-list manipulation
drm: Optimise power-of-two alignments in drm_mm_scan_add_block()
drm: Compute tight evictions for drm_mm_scan
drm: Fix application of color vs range restriction when scanning drm_mm
drm: Unconditionally do the range check in drm_mm_scan_add_block()
drm: Rename prev_node to hole in drm_mm_scan_add_block()
drm: Fix O= out-of-tree builds for selftests
drm: Extract struct drm_mm_scan from struct drm_mm
drm: Add asserts to catch overflow in drm_mm_init() and drm_mm_init_scan()
drm: Simplify drm_mm_clean()
drm: Detect overflow in drm_mm_reserve_node()
drm: Fix kerneldoc for drm_mm_scan_remove_block()
drm: Promote drm_mm alignment to u64
drm: kselftest for drm_mm and restricted color eviction
...

Diffstat: +5272 -2031

Documentation/devicetree/bindings/display/bridge/ti,ths8135.txt (new file, +46)

THS8135 Video DAC
-----------------

This is the binding for Texas Instruments THS8135 Video DAC bridge.

Required properties:

- compatible: Must be "ti,ths8135"

Required nodes:

This device has two video ports. Their connections are modelled using the OF
graph bindings specified in Documentation/devicetree/bindings/graph.txt.

- Video port 0 for RGB input
- Video port 1 for VGA output

Example
-------

vga-bridge {
	compatible = "ti,ths8135";
	#address-cells = <1>;
	#size-cells = <0>;

	ports {
		#address-cells = <1>;
		#size-cells = <0>;

		port@0 {
			reg = <0>;

			vga_bridge_in: endpoint {
				remote-endpoint = <&lcdc_out_vga>;
			};
		};

		port@1 {
			reg = <1>;

			vga_bridge_out: endpoint {
				remote-endpoint = <&vga_con_in>;
			};
		};
	};
};
Documentation/devicetree/bindings/display/hisilicon/hisi-ade.txt (+1 -1)

 "clk_ade_core" for the ADE core clock.
 "clk_codec_jpeg" for the media NOC QoS clock, which use the same clock with
 jpeg codec.
-"clk_ade_pix" for the ADE pixel clok.
+"clk_ade_pix" for the ADE pixel clock.
 - assigned-clocks: Should contain "clk_ade_core" and "clk_codec_jpeg" clocks'
 phandle + clock-specifier pairs.
 - assigned-clock-rates: clock rates, one for each entry in assigned-clocks.
Documentation/dma-buf-sharing.txt (deleted, -482; full text of the removed file follows)

DMA Buffer Sharing API Guide
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

            Sumit Semwal
    <sumit dot semwal at linaro dot org>
     <sumit dot semwal at ti dot com>

This document serves as a guide to device-driver writers on what is the dma-buf
buffer sharing API, how to use it for exporting and using shared buffers.

Any device driver which wishes to be a part of DMA buffer sharing, can do so as
either the 'exporter' of buffers, or the 'user' of buffers.

Say a driver A wants to use buffers created by driver B, then we call B as the
exporter, and A as buffer-user.

The exporter
- implements and manages operations[1] for the buffer
- allows other users to share the buffer by using dma_buf sharing APIs,
- manages the details of buffer allocation,
- decides about the actual backing storage where this allocation happens,
- takes care of any migration of scatterlist - for all (shared) users of this
  buffer,

The buffer-user
- is one of (many) sharing users of the buffer.
- doesn't need to worry about how the buffer is allocated, or where.
- needs a mechanism to get access to the scatterlist that makes up this buffer
  in memory, mapped into its own address space, so it can access the same area
  of memory.

dma-buf operations for device dma only
--------------------------------------

The dma_buf buffer sharing API usage contains the following steps:

1. Exporter announces that it wishes to export a buffer
2. Userspace gets the file descriptor associated with the exported buffer, and
   passes it around to potential buffer-users based on use case
3. Each buffer-user 'connects' itself to the buffer
4. When needed, buffer-user requests access to the buffer from exporter
5. When finished with its use, the buffer-user notifies end-of-DMA to exporter
6. when buffer-user is done using this buffer completely, it 'disconnects'
   itself from the buffer.

1. Exporter's announcement of buffer export

   The buffer exporter announces its wish to export a buffer. In this, it
   connects its own private buffer data, provides implementation for operations
   that can be performed on the exported dma_buf, and flags for the file
   associated with this buffer. All these fields are filled in struct
   dma_buf_export_info, defined via the DEFINE_DMA_BUF_EXPORT_INFO macro.

   Interface:
      DEFINE_DMA_BUF_EXPORT_INFO(exp_info)
      struct dma_buf *dma_buf_export(struct dma_buf_export_info *exp_info)

   If this succeeds, dma_buf_export allocates a dma_buf structure, and
   returns a pointer to the same. It also associates an anonymous file with this
   buffer, so it can be exported. On failure to allocate the dma_buf object,
   it returns NULL.

   'exp_name' in struct dma_buf_export_info is the name of exporter - to
   facilitate information while debugging. It is set to KBUILD_MODNAME by
   default, so exporters don't have to provide a specific name, if they don't
   wish to.

   DEFINE_DMA_BUF_EXPORT_INFO macro defines the struct dma_buf_export_info,
   zeroes it out and pre-populates exp_name in it.

2. Userspace gets a handle to pass around to potential buffer-users

   Userspace entity requests for a file-descriptor (fd) which is a handle to the
   anonymous file associated with the buffer. It can then share the fd with other
   drivers and/or processes.

   Interface:
      int dma_buf_fd(struct dma_buf *dmabuf, int flags)

   This API installs an fd for the anonymous file associated with this buffer;
   returns either 'fd', or error.

3. Each buffer-user 'connects' itself to the buffer

   Each buffer-user now gets a reference to the buffer, using the fd passed to
   it.

   Interface:
      struct dma_buf *dma_buf_get(int fd)

   This API will return a reference to the dma_buf, and increment refcount for
   it.

   After this, the buffer-user needs to attach its device with the buffer, which
   helps the exporter to know of device buffer constraints.

   Interface:
      struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
                                                struct device *dev)

   This API returns reference to an attachment structure, which is then used
   for scatterlist operations. It will optionally call the 'attach' dma_buf
   operation, if provided by the exporter.

   The dma-buf sharing framework does the bookkeeping bits related to managing
   the list of all attachments to a buffer.

   Until this stage, the buffer-exporter has the option to choose not to actually
   allocate the backing storage for this buffer, but wait for the first buffer-user
   to request use of buffer for allocation.

4. When needed, buffer-user requests access to the buffer

   Whenever a buffer-user wants to use the buffer for any DMA, it asks for
   access to the buffer using dma_buf_map_attachment API. At least one attach to
   the buffer must have happened before map_dma_buf can be called.

   Interface:
      struct sg_table * dma_buf_map_attachment(struct dma_buf_attachment *,
                                               enum dma_data_direction);

   This is a wrapper to dma_buf->ops->map_dma_buf operation, which hides the
   "dma_buf->ops->" indirection from the users of this interface.

   In struct dma_buf_ops, map_dma_buf is defined as
      struct sg_table * (*map_dma_buf)(struct dma_buf_attachment *,
                                       enum dma_data_direction);

   It is one of the buffer operations that must be implemented by the exporter.
   It should return the sg_table containing scatterlist for this buffer, mapped
   into caller's address space.

   If this is being called for the first time, the exporter can now choose to
   scan through the list of attachments for this buffer, collate the requirements
   of the attached devices, and choose an appropriate backing storage for the
   buffer.

   Based on enum dma_data_direction, it might be possible to have multiple users
   accessing at the same time (for reading, maybe), or any other kind of sharing
   that the exporter might wish to make available to buffer-users.

   map_dma_buf() operation can return -EINTR if it is interrupted by a signal.

5. When finished, the buffer-user notifies end-of-DMA to exporter

   Once the DMA for the current buffer-user is over, it signals 'end-of-DMA' to
   the exporter using the dma_buf_unmap_attachment API.

   Interface:
      void dma_buf_unmap_attachment(struct dma_buf_attachment *,
                                    struct sg_table *);

   This is a wrapper to dma_buf->ops->unmap_dma_buf() operation, which hides the
   "dma_buf->ops->" indirection from the users of this interface.

   In struct dma_buf_ops, unmap_dma_buf is defined as
      void (*unmap_dma_buf)(struct dma_buf_attachment *,
                            struct sg_table *,
                            enum dma_data_direction);

   unmap_dma_buf signifies the end-of-DMA for the attachment provided. Like
   map_dma_buf, this API also must be implemented by the exporter.

6. when buffer-user is done using this buffer, it 'disconnects' itself from the
   buffer.

   After the buffer-user has no more interest in using this buffer, it should
   disconnect itself from the buffer:

   - it first detaches itself from the buffer.

     Interface:
        void dma_buf_detach(struct dma_buf *dmabuf,
                            struct dma_buf_attachment *dmabuf_attach);

     This API removes the attachment from the list in dmabuf, and optionally calls
     dma_buf->ops->detach(), if provided by exporter, for any housekeeping bits.

   - Then, the buffer-user returns the buffer reference to exporter.

     Interface:
        void dma_buf_put(struct dma_buf *dmabuf);

     This API then reduces the refcount for this buffer.

     If, as a result of this call, the refcount becomes 0, the 'release' file
     operation related to this fd is called. It calls the dmabuf->ops->release()
     operation in turn, and frees the memory allocated for dmabuf when exported.

NOTES:
- Importance of attach-detach and {map,unmap}_dma_buf operation pairs
  The attach-detach calls allow the exporter to figure out backing-storage
  constraints for the currently-interested devices. This allows preferential
  allocation, and/or migration of pages across different types of storage
  available, if possible.

  Bracketing of DMA access with {map,unmap}_dma_buf operations is essential
  to allow just-in-time backing of storage, and migration mid-way through a
  use-case.

- Migration of backing storage if needed
  If after
  - at least one map_dma_buf has happened,
  - and the backing storage has been allocated for this buffer,
  another new buffer-user intends to attach itself to this buffer, it might
  be allowed, if possible for the exporter.

  In case it is allowed by the exporter:
    if the new buffer-user has stricter 'backing-storage constraints', and the
    exporter can handle these constraints, the exporter can just stall on the
    map_dma_buf until all outstanding access is completed (as signalled by
    unmap_dma_buf).
    Once all users have finished accessing and have unmapped this buffer, the
    exporter could potentially move the buffer to the stricter backing-storage,
    and then allow further {map,unmap}_dma_buf operations from any buffer-user
    from the migrated backing-storage.

    If the exporter cannot fulfill the backing-storage constraints of the new
    buffer-user device as requested, dma_buf_attach() would return an error to
    denote non-compatibility of the new buffer-sharing request with the current
    buffer.

    If the exporter chooses not to allow an attach() operation once a
    map_dma_buf() API has been called, it simply returns an error.

Kernel cpu access to a dma-buf buffer object
--------------------------------------------

The motivation to allow cpu access from the kernel to a dma-buf object from the
importers side are:
- fallback operations, e.g. if the devices is connected to a usb bus and the
  kernel needs to shuffle the data around first before sending it away.
- full transparency for existing users on the importer side, i.e. userspace
  should not notice the difference between a normal object from that subsystem
  and an imported one backed by a dma-buf. This is really important for drm
  opengl drivers that expect to still use all the existing upload/download
  paths.

Access to a dma_buf from the kernel context involves three steps:

1. Prepare access, which invalidate any necessary caches and make the object
   available for cpu access.
2. Access the object page-by-page with the dma_buf map apis
3. Finish access, which will flush any necessary cpu caches and free reserved
   resources.

1. Prepare access

   Before an importer can access a dma_buf object with the cpu from the kernel
   context, it needs to notify the exporter of the access that is about to
   happen.

   Interface:
      int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
                                   enum dma_data_direction direction)

   This allows the exporter to ensure that the memory is actually available for
   cpu access - the exporter might need to allocate or swap-in and pin the
   backing storage. The exporter also needs to ensure that cpu access is
   coherent for the access direction. The direction can be used by the exporter
   to optimize the cache flushing, i.e. access with a different direction (read
   instead of write) might return stale or even bogus data (e.g. when the
   exporter needs to copy the data to temporary storage).

   This step might fail, e.g. in oom conditions.

2. Accessing the buffer

   To support dma_buf objects residing in highmem cpu access is page-based using
   an api similar to kmap. Accessing a dma_buf is done in aligned chunks of
   PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which returns
   a pointer in kernel virtual address space. Afterwards the chunk needs to be
   unmapped again. There is no limit on how often a given chunk can be mapped
   and unmapped, i.e. the importer does not need to call begin_cpu_access again
   before mapping the same chunk again.

   Interfaces:
      void *dma_buf_kmap(struct dma_buf *, unsigned long);
      void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);

   There are also atomic variants of these interfaces. Like for kmap they
   facilitate non-blocking fast-paths. Neither the importer nor the exporter (in
   the callback) is allowed to block when using these.

   Interfaces:
      void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
      void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);

   For importers all the restrictions of using kmap apply, like the limited
   supply of kmap_atomic slots. Hence an importer shall only hold onto at most 2
   atomic dma_buf kmaps at the same time (in any given process context).

   dma_buf kmap calls outside of the range specified in begin_cpu_access are
   undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
   the partial chunks at the beginning and end but may return stale or bogus
   data outside of the range (in these partial chunks).

   Note that these calls need to always succeed. The exporter needs to complete
   any preparations that might fail in begin_cpu_access.

   For some cases the overhead of kmap can be too high, a vmap interface
   is introduced. This interface should be used very carefully, as vmalloc
   space is a limited resources on many architectures.

   Interfaces:
      void *dma_buf_vmap(struct dma_buf *dmabuf)
      void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)

   The vmap call can fail if there is no vmap support in the exporter, or if it
   runs out of vmalloc space. Fallback to kmap should be implemented. Note that
   the dma-buf layer keeps a reference count for all vmap access and calls down
   into the exporter's vmap function only when no vmapping exists, and only
   unmaps it once. Protection against concurrent vmap/vunmap calls is provided
   by taking the dma_buf->lock mutex.

3. Finish access

   When the importer is done accessing the CPU, it needs to announce this to
   the exporter (to facilitate cache flushing and unpinning of any pinned
   resources). The result of any dma_buf kmap calls after end_cpu_access is
   undefined.

   Interface:
      void dma_buf_end_cpu_access(struct dma_buf *dma_buf,
                                  enum dma_data_direction dir);

Direct Userspace Access/mmap Support
------------------------------------

Being able to mmap an export dma-buf buffer object has 2 main use-cases:
- CPU fallback processing in a pipeline and
- supporting existing mmap interfaces in importers.

1. CPU fallback processing in a pipeline

   In many processing pipelines it is sometimes required that the cpu can access
   the data in a dma-buf (e.g. for thumbnail creation, snapshots, ...). To avoid
   the need to handle this specially in userspace frameworks for buffer sharing
   it's ideal if the dma_buf fd itself can be used to access the backing storage
   from userspace using mmap.

   Furthermore Android's ION framework already supports this (and is otherwise
   rather similar to dma-buf from a userspace consumer side with using fds as
   handles, too). So it's beneficial to support this in a similar fashion on
   dma-buf to have a good transition path for existing Android userspace.

   No special interfaces, userspace simply calls mmap on the dma-buf fd, making
   sure that the cache synchronization ioctl (DMA_BUF_IOCTL_SYNC) is *always*
   used when the access happens. Note that DMA_BUF_IOCTL_SYNC can fail with
   -EAGAIN or -EINTR, in which case it must be restarted.

   Some systems might need some sort of cache coherency management e.g. when
   CPU and GPU domains are being accessed through dma-buf at the same time. To
   circumvent this problem there are begin/end coherency markers, that forward
   directly to existing dma-buf device drivers vfunc hooks. Userspace can make
   use of those markers through the DMA_BUF_IOCTL_SYNC ioctl. The sequence
   would be used like following:
     - mmap dma-buf fd
     - for each drawing/upload cycle in CPU 1. SYNC_START ioctl, 2. read/write
       to mmap area 3. SYNC_END ioctl. This can be repeated as often as you
       want (with the new data being consumed by the GPU or say scanout device)
     - munmap once you don't need the buffer any more

   For correctness and optimal performance, it is always required to use
   SYNC_START and SYNC_END before and after, respectively, when accessing the
   mapped address. Userspace cannot rely on coherent access, even when there
   are systems where it just works without calling these ioctls.

2. Supporting existing mmap interfaces in importers

   Similar to the motivation for kernel cpu access it is again important that
   the userspace code of a given importing subsystem can use the same interfaces
   with a imported dma-buf buffer object as with a native buffer object. This is
   especially important for drm where the userspace part of contemporary OpenGL,
   X, and other drivers is huge, and reworking them to use a different way to
   mmap a buffer rather invasive.

   The assumption in the current dma-buf interfaces is that redirecting the
   initial mmap is all that's needed. A survey of some of the existing
   subsystems shows that no driver seems to do any nefarious thing like syncing
   up with outstanding asynchronous processing on the device or allocating
   special resources at fault time. So hopefully this is good enough, since
   adding interfaces to intercept pagefaults and allow pte shootdowns would
   increase the complexity quite a bit.

   Interface:
      int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
                       unsigned long);

   If the importing subsystem simply provides a special-purpose mmap call to set
   up a mapping in userspace, calling do_mmap with dma_buf->file will equally
   achieve that for a dma-buf object.

3. Implementation notes for exporters

   Because dma-buf buffers have invariant size over their lifetime, the dma-buf
   core checks whether a vma is too large and rejects such mappings. The
   exporter hence does not need to duplicate this check.

   Because existing importing subsystems might presume coherent mappings for
   userspace, the exporter needs to set up a coherent mapping. If that's not
   possible, it needs to fake coherency by manually shooting down ptes when
   leaving the cpu domain and flushing caches at fault time. Note that all the
   dma_buf files share the same anon inode, hence the exporter needs to replace
   the dma_buf file stored in vma->vm_file with it's own if pte shootdown is
   required. This is because the kernel uses the underlying inode's address_space
   for vma tracking (and hence pte tracking at shootdown time with
   unmap_mapping_range).

   If the above shootdown dance turns out to be too expensive in certain
   scenarios, we can extend dma-buf with a more explicit cache tracking scheme
   for userspace mappings. But the current assumption is that using mmap is
   always a slower path, so some inefficiencies should be acceptable.

   Exporters that shoot down mappings (for any reasons) shall not do any
   synchronization at fault time with outstanding device operations.
   Synchronization is an orthogonal issue to sharing the backing storage of a
   buffer and hence should not be handled by dma-buf itself. This is explicitly
   mentioned here because many people seem to want something like this, but if
   different exporters handle this differently, buffer sharing can fail in
   interesting ways depending upong the exporter (if userspace starts depending
   upon this implicit synchronization).

Other Interfaces Exposed to Userspace on the dma-buf FD
------------------------------------------------------

- Since kernel 3.12 the dma-buf FD supports the llseek system call, but only
  with offset=0 and whence=SEEK_END|SEEK_SET. SEEK_SET is supported to allow
  the usual size discover pattern size = SEEK_END(0); SEEK_SET(0). Every other
  llseek operation will report -EINVAL.

  If llseek on dma-buf FDs isn't support the kernel will report -ESPIPE for all
  cases. Userspace can use this to detect support for discovering the dma-buf
  size using llseek.

Miscellaneous notes
-------------------

- Any exporters or users of the dma-buf buffer sharing framework must have
  a 'select DMA_SHARED_BUFFER' in their respective Kconfigs.

- In order to avoid fd leaks on exec, the FD_CLOEXEC flag must be set
  on the file descriptor. This is not just a resource leak, but a
  potential security hole. It could give the newly exec'd application
  access to buffers, via the leaked fd, to which it should otherwise
  not be permitted access.

  The problem with doing this via a separate fcntl() call, versus doing it
  atomically when the fd is created, is that this is inherently racy in a
  multi-threaded app[3]. The issue is made worse when it is library code
  opening/creating the file descriptor, as the application may not even be
  aware of the fd's.

  To avoid this problem, userspace must have a way to request O_CLOEXEC
  flag be set when the dma-buf fd is created. So any API provided by
  the exporting driver to create a dmabuf fd must provide a way to let
  userspace control setting of O_CLOEXEC flag passed in to dma_buf_fd().

- If an exporter needs to manually flush caches and hence needs to fake
  coherency for mmap support, it needs to be able to zap all the ptes pointing
  at the backing storage. Now linux mm needs a struct address_space associated
  with the struct file stored in vma->vm_file to do that with the function
  unmap_mapping_range. But the dma_buf framework only backs every dma_buf fd
  with the anon_file struct file, i.e. all dma_bufs share the same file.

  Hence exporters need to setup their own file (and address_space) association
  by setting vma->vm_file and adjusting vma->vm_pgoff in the dma_buf mmap
  callback. In the specific case of a gem driver the exporter could use the
  shmem file already provided by gem (and set vm_pgoff = 0). Exporters can then
  zap ptes by unmapping the corresponding range of the struct address_space
  associated with their own file.

References:
[1] struct dma_buf_ops in include/linux/dma-buf.h
[2] All interfaces mentioned above defined in include/linux/dma-buf.h
[3] https://lwn.net/Articles/236486/
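Editor's note: the llseek size-discovery pattern described in the dma-buf documentation above (size = SEEK_END(0); SEEK_SET(0)) is plain lseek usage, so it can be sketched without kernel code. The following is a minimal sketch using an ordinary temporary file in place of a real dma-buf fd; on an actual dma-buf fd these two calls are the only valid llseek forms.

```python
import os
import tempfile

def fd_size(fd):
    """Size discovery as the dma-buf docs describe it:
    SEEK_END(0) reports the size, SEEK_SET(0) rewinds."""
    size = os.lseek(fd, 0, os.SEEK_END)
    os.lseek(fd, 0, os.SEEK_SET)
    return size

# Demonstrated on a regular temp file. On a real dma-buf fd, any other
# llseek reports -EINVAL, and -ESPIPE signals that llseek support is
# absent entirely (which userspace can use as a feature probe).
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"\x00" * 4096)
    assert fd_size(fd) == 4096
finally:
    os.close(fd)
    os.remove(path)
```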
Documentation/driver-api/dma-buf.rst (+92; added text shown, unchanged context unmarked)

Shared DMA Buffers
------------------

This document serves as a guide to device-driver writers on what is the dma-buf
buffer sharing API, how to use it for exporting and using shared buffers.

Any device driver which wishes to be a part of DMA buffer sharing, can do so as
either the 'exporter' of buffers, or the 'user' or 'importer' of buffers.

Say a driver A wants to use buffers created by driver B, then we call B as the
exporter, and A as buffer-user/importer.

The exporter

 - implements and manages operations in :c:type:`struct dma_buf_ops
   <dma_buf_ops>` for the buffer,
 - allows other users to share the buffer by using dma_buf sharing APIs,
 - manages the details of buffer allocation, wrapped int a :c:type:`struct
   dma_buf <dma_buf>`,
 - decides about the actual backing storage where this allocation happens,
 - and takes care of any migration of scatterlist - for all (shared) users of
   this buffer.

The buffer-user

 - is one of (many) sharing users of the buffer.
 - doesn't need to worry about how the buffer is allocated, or where.
 - and needs a mechanism to get access to the scatterlist that makes up this
   buffer in memory, mapped into its own address space, so it can access the
   same area of memory. This interface is provided by :c:type:`struct
   dma_buf_attachment <dma_buf_attachment>`.

Any exporters or users of the dma-buf buffer sharing framework must have a
'select DMA_SHARED_BUFFER' in their respective Kconfigs.

Userspace Interface Notes
~~~~~~~~~~~~~~~~~~~~~~~~~

Mostly a DMA buffer file descriptor is simply an opaque object for userspace,
and hence the generic interface exposed is very minimal. There's a few things to
consider though:

- Since kernel 3.12 the dma-buf FD supports the llseek system call, but only
  with offset=0 and whence=SEEK_END|SEEK_SET. SEEK_SET is supported to allow
  the usual size discover pattern size = SEEK_END(0); SEEK_SET(0). Every other
  llseek operation will report -EINVAL.

  If llseek on dma-buf FDs isn't support the kernel will report -ESPIPE for all
  cases. Userspace can use this to detect support for discovering the dma-buf
  size using llseek.

- In order to avoid fd leaks on exec, the FD_CLOEXEC flag must be set
  on the file descriptor. This is not just a resource leak, but a
  potential security hole. It could give the newly exec'd application
  access to buffers, via the leaked fd, to which it should otherwise
  not be permitted access.

  The problem with doing this via a separate fcntl() call, versus doing it
  atomically when the fd is created, is that this is inherently racy in a
  multi-threaded app[3]. The issue is made worse when it is library code
  opening/creating the file descriptor, as the application may not even be
  aware of the fd's.

  To avoid this problem, userspace must have a way to request O_CLOEXEC
  flag be set when the dma-buf fd is created. So any API provided by
  the exporting driver to create a dmabuf fd must provide a way to let
  userspace control setting of O_CLOEXEC flag passed in to dma_buf_fd().

- Memory mapping the contents of the DMA buffer is also supported. See the
  discussion below on `CPU Access to DMA Buffer Objects`_ for the full details.

- The DMA buffer FD is also pollable, see `Fence Poll Support`_ below for
  details.

Basic Operation and Device DMA Access
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :doc: dma buf device access

CPU Access to DMA Buffer Objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :doc: cpu access

Fence Poll Support
~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :doc: fence polling

Kernel Functions and Structures Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :export:
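Editor's note: the FD_CLOEXEC advice carried over in the hunk above concerns ordinary fd semantics, so the race it warns about can be sketched with any fd. The following uses a regular file descriptor in place of a dma-buf fd; a real exporter would instead pass O_CLOEXEC through to dma_buf_fd() so the flag is set atomically at creation.

```python
import fcntl
import os

# Python sets close-on-exec on new fds by default (PEP 446); clear it
# here to simulate an fd that was handed out without O_CLOEXEC.
fd = os.open("/dev/null", os.O_RDONLY)
os.set_inheritable(fd, True)
assert not (fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC)

# The after-the-fact fixup the text warns about: between open() and
# this fcntl() another thread could fork+exec and leak the descriptor,
# which is why the flag should be requested at fd creation instead.
flags = fcntl.fcntl(fd, fcntl.F_GETFD)
fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
assert fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC
os.close(fd)
```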
+1 -1
MAINTAINERS
··· 3966 3966 F: include/linux/dma-buf* 3967 3967 F: include/linux/reservation.h 3968 3968 F: include/linux/*fence.h 3969 - F: Documentation/dma-buf-sharing.txt 3969 + F: Documentation/driver-api/dma-buf.rst 3970 3970 T: git git://anongit.freedesktop.org/drm/drm-misc 3971 3971 3972 3972 SYNC FILE FRAMEWORK
+203 -5
drivers/dma-buf/dma-buf.c
··· 124 124 return base + offset; 125 125 } 126 126 127 + /** 128 + * DOC: fence polling 129 + * 130 + * To support cross-device and cross-driver synchronization of buffer access, 131 + * implicit fences (represented internally in the kernel with struct &fence) can 132 + * be attached to a &dma_buf. The glue for that and a few related things are 133 + * provided in the &reservation_object structure. 134 + * 135 + * Userspace can query the state of these implicitly tracked fences using poll() 136 + * and related system calls: 137 + * 138 + * - Checking for POLLIN, i.e. read access, can be used to query the state of the 139 + * most recent write or exclusive fence. 140 + * 141 + * - Checking for POLLOUT, i.e. write access, can be used to query the state of 142 + * all attached fences, shared and exclusive ones. 143 + * 144 + * Note that this only signals the completion of the respective fences, i.e. the 145 + * DMA transfers are complete. Cache flushing and any other necessary 146 + * preparations before CPU access can begin still need to happen. 147 + */ 148 + 127 149 static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb) 128 150 { 129 151 struct dma_buf_poll_cb_t *dcb = (struct dma_buf_poll_cb_t *)cb; ··· 336 314 } 337 315 338 316 /** 317 + * DOC: dma buf device access 318 + * 319 + * For device DMA access to a shared DMA buffer the usual sequence of operations 320 + * is fairly simple: 321 + * 322 + * 1. The exporter defines its exporter instance using 323 + * DEFINE_DMA_BUF_EXPORT_INFO() and calls dma_buf_export() to wrap a private 324 + * buffer object into a &dma_buf. It then exports that &dma_buf to userspace 325 + * as a file descriptor by calling dma_buf_fd(). 326 + * 327 + * 2. Userspace passes these file descriptors to all drivers it wants this buffer 328 + * to share with: first the file descriptor is converted to a &dma_buf using 329 + * dma_buf_get(). Then the buffer is attached to the device using 330 + * dma_buf_attach(). 
331 + * 332 + * Up to this stage the exporter is still free to migrate or reallocate the 333 + * backing storage. 334 + * 335 + * 3. Once the buffer is attached to all devices, userspace can initiate DMA 336 + * access to the shared buffer. In the kernel this is done by calling 337 + * dma_buf_map_attachment() and dma_buf_unmap_attachment(). 338 + * 339 + * 4. Once a driver is done with a shared buffer it needs to call 340 + * dma_buf_detach() (after cleaning up any mappings) and then release the 341 + * reference acquired with dma_buf_get() by calling dma_buf_put(). 342 + * 343 + * For the detailed semantics exporters are expected to implement, see 344 + * &dma_buf_ops. 345 + */ 346 + 347 + /** 339 348 * dma_buf_export - Creates a new dma_buf, and associates an anon file 340 349 * with this buffer, so it can be exported. 341 350 * Also connect the allocator specific data and ops to the buffer. 342 351 * Additionally, provide a name string for exporter; useful in debugging. 343 352 * 344 353 * @exp_info: [in] holds all the export related information provided 345 - * by the exporter. see struct dma_buf_export_info 354 + * by the exporter. See struct &dma_buf_export_info 346 355 * for further details. 347 356 * 348 357 * Returns, on success, a newly created dma_buf object, which wraps the 349 358 * supplied private data and operations for dma_buf_ops. On either missing 350 359 * ops, or error in allocating struct dma_buf, will return negative error. 351 360 * 361 + * For most cases the easiest way to create @exp_info is through the 362 + * %DEFINE_DMA_BUF_EXPORT_INFO macro. 352 363 */ 353 364 struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) 354 365 { ··· 513 458 * dma_buf_put - decreases refcount of the buffer 514 459 * @dmabuf: [in] buffer to reduce refcount of 515 460 * 516 - * Uses file's refcounting done implicitly by fput() 461 + * Uses file's refcounting done implicitly by fput(). 
462 + * 463 + * If, as a result of this call, the refcount becomes 0, the 'release' file 464 + * operation related to this fd is called. It calls the release operation of 465 + * struct &dma_buf_ops in turn, and frees the memory allocated for dmabuf when 466 + * exported. 517 467 */ 518 468 void dma_buf_put(struct dma_buf *dmabuf) 519 469 { ··· 535 475 * @dmabuf: [in] buffer to attach device to. 536 476 * @dev: [in] device to be attached. 537 477 * 538 - * Returns struct dma_buf_attachment * for this attachment; returns ERR_PTR on 539 - * error. 478 + * Returns struct dma_buf_attachment pointer for this attachment. Attachments 479 + * must be cleaned up by calling dma_buf_detach(). 480 + * 481 + * Returns: 482 + * 483 + * A pointer to a newly created &dma_buf_attachment on success, or a negative 484 + * error code wrapped into a pointer on failure. 485 + * 486 + * Note that this can fail if the backing storage of @dmabuf is in a place not 487 + * accessible to @dev, and cannot be moved to a more suitable place. This is 488 + * indicated with the error code -EBUSY. 540 489 */ 541 490 struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, 542 491 struct device *dev) ··· 588 519 * @dmabuf: [in] buffer to detach from. 589 520 * @attach: [in] attachment to be detached; is free'd after this call. 590 521 * 522 + * Clean up a device attachment obtained by calling dma_buf_attach(). 591 523 */ 592 524 void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) 593 525 { ··· 613 543 * @direction: [in] direction of DMA transfer 614 544 * 615 545 * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR 616 - * on error. 546 + * on error. May return -EINTR if it is interrupted by a signal. 547 + * 548 + * A mapping must be unmapped again using dma_buf_unmap_attachment(). 
Note that 549 + * the underlying backing storage is pinned for as long as a mapping exists, 550 + * therefore users/importers should not hold onto a mapping for undue amounts of 551 + * time. 617 552 */ 618 553 struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, 619 554 enum dma_data_direction direction) ··· 646 571 * @sg_table: [in] scatterlist info of the buffer to unmap 647 572 * @direction: [in] direction of DMA transfer 648 573 * 574 + * This unmaps a DMA mapping for @attach obtained by dma_buf_map_attachment(). 649 575 */ 650 576 void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, 651 577 struct sg_table *sg_table, ··· 661 585 direction); 662 586 } 663 587 EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment); 588 + 589 + /** 590 + * DOC: cpu access 591 + * 592 + * There are multiple reasons for supporting CPU access to a dma buffer object: 593 + * 594 + * - Fallback operations in the kernel, for example when a device is connected 595 + * over USB and the kernel needs to shuffle the data around first before 596 + * sending it away. Cache coherency is handled by bracketing any transactions 597 + * with calls to dma_buf_begin_cpu_access() and 598 + * dma_buf_end_cpu_access(). 599 + * 600 + * To support dma_buf objects residing in highmem, cpu access is page-based 601 + * using an API similar to kmap. Accessing a dma_buf is done in aligned chunks 602 + * of PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which 603 + * returns a pointer in kernel virtual address space. Afterwards the chunk 604 + * needs to be unmapped again. There is no limit on how often a given chunk 605 + * can be mapped and unmapped, i.e. the importer does not need to call 606 + * begin_cpu_access again before mapping the same chunk again. 
607 + * 608 + * Interfaces:: 609 + * void \*dma_buf_kmap(struct dma_buf \*, unsigned long); 610 + * void dma_buf_kunmap(struct dma_buf \*, unsigned long, void \*); 611 + * 612 + * There are also atomic variants of these interfaces. Like for kmap they 613 + * facilitate non-blocking fast-paths. Neither the importer nor the exporter 614 + * (in the callback) is allowed to block when using these. 615 + * 616 + * Interfaces:: 617 + * void \*dma_buf_kmap_atomic(struct dma_buf \*, unsigned long); 618 + * void dma_buf_kunmap_atomic(struct dma_buf \*, unsigned long, void \*); 619 + * 620 + * For importers all the restrictions of using kmap apply, like the limited 621 + * supply of kmap_atomic slots. Hence an importer shall only hold onto at 622 + * most 2 atomic dma_buf kmaps at the same time (in any given process context). 623 + * 624 + * dma_buf kmap calls outside of the range specified in begin_cpu_access are 625 + * undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on 626 + * the partial chunks at the beginning and end but may return stale or bogus 627 + * data outside of the range (in these partial chunks). 628 + * 629 + * Note that these calls need to always succeed. The exporter needs to 630 + * complete any preparations that might fail in begin_cpu_access. 631 + * 632 + * For some cases the overhead of kmap can be too high, so a vmap interface 633 + * is provided. This interface should be used very carefully, as vmalloc 634 + * space is a limited resource on many architectures. 635 + * 636 + * Interfaces:: 637 + * void \*dma_buf_vmap(struct dma_buf \*dmabuf) 638 + * void dma_buf_vunmap(struct dma_buf \*dmabuf, void \*vaddr) 639 + * 640 + * The vmap call can fail if there is no vmap support in the exporter, or if 641 + * it runs out of vmalloc space. Fallback to kmap should be implemented. 
Note 642 + * that the dma-buf layer keeps a reference count for all vmap access and 643 + * calls down into the exporter's vmap function only when no vmapping exists, 644 + * and only unmaps it once. Protection against concurrent vmap/vunmap calls is 645 + * provided by taking the dma_buf->lock mutex. 646 + * 647 + * - For full compatibility on the importer side with existing userspace 648 + * interfaces, which might already support mmap'ing buffers. This is needed in 649 + * many processing pipelines (e.g. feeding a software rendered image into a 650 + * hardware pipeline, thumbnail creation, snapshots, ...). Also, Android's ION 651 + * framework already supported this, and mmap support was needed for DMA 652 + * buffer file descriptors to replace ION buffers. 653 + * 654 + * There are no special interfaces; userspace simply calls mmap on the dma-buf 655 + * fd. But like for CPU access there's a need to bracket the actual access, 656 + * which is handled by the ioctl (DMA_BUF_IOCTL_SYNC). Note that 657 + * DMA_BUF_IOCTL_SYNC can fail with -EAGAIN or -EINTR, in which case it must 658 + * be restarted. 659 + * 660 + * Some systems might need some sort of cache coherency management, e.g. when 661 + * CPU and GPU domains are being accessed through dma-buf at the same time. 662 + * To address this problem there are begin/end coherency markers that 663 + * forward directly to the existing dma-buf device drivers' vfunc hooks. 664 + * Userspace can make use of those markers through the DMA_BUF_IOCTL_SYNC 665 + * ioctl. The sequence would be used like the following: 666 + * 667 + * - mmap dma-buf fd 668 + * - for each drawing/upload cycle on the CPU: 1. SYNC_START ioctl, 2. read/write 669 + * to the mmap area, 3. SYNC_END ioctl. 
This can be repeated as often as you 670 + * want (with the new data being consumed by, say, the GPU or the scanout 671 + * device) 672 + * - munmap once you don't need the buffer any more 673 + * 674 + * For correctness and optimal performance, it is always required to use 675 + * SYNC_START and SYNC_END before and after, respectively, when accessing the 676 + * mapped address. Userspace cannot rely on coherent access, even when there 677 + * are systems where it just works without calling these ioctls. 678 + * 679 + * - And as a CPU fallback in userspace processing pipelines. 680 + * 681 + * Similar to the motivation for kernel cpu access, it is again important that 682 + * the userspace code of a given importing subsystem can use the same 683 + * interfaces with an imported dma-buf buffer object as with a native buffer 684 + * object. This is especially important for drm where the userspace part of 685 + * contemporary OpenGL, X, and other drivers is huge, and reworking them to 686 + * use a different way to mmap a buffer is rather invasive. 687 + * 688 + * The assumption in the current dma-buf interfaces is that redirecting the 689 + * initial mmap is all that's needed. A survey of some of the existing 690 + * subsystems shows that no driver seems to do any nefarious thing like 691 + * syncing up with outstanding asynchronous processing on the device or 692 + * allocating special resources at fault time. So hopefully this is good 693 + * enough, since adding interfaces to intercept pagefaults and allow pte 694 + * shootdowns would increase the complexity quite a bit. 695 + * 696 + * Interface:: 697 + * int dma_buf_mmap(struct dma_buf \*, struct vm_area_struct \*, 698 + * unsigned long); 699 + * 700 + * If the importing subsystem simply provides a special-purpose mmap call to 701 + * set up a mapping in userspace, calling do_mmap with dma_buf->file will 702 + * equally achieve that for a dma-buf object. 
703 + */ 664 704 665 705 static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf, 666 706 enum dma_data_direction direction) ··· 802 610 * specified access direction. 803 611 * @dmabuf: [in] buffer to prepare cpu access for. 804 612 * @direction: [in] length of range for cpu access. 613 + * 614 + * After the cpu access is complete the caller should call 615 + * dma_buf_end_cpu_access(). Only when cpu access is bracketed by both calls is 616 + * it guaranteed to be coherent with other DMA access. 805 617 * 806 618 * Can return negative error values, returns 0 on success. 807 619 */ ··· 838 642 * specified access direction. 839 643 * @dmabuf: [in] buffer to complete cpu access for. 840 644 * @direction: [in] length of range for cpu access. 645 + * 646 + * This terminates CPU access started with dma_buf_begin_cpu_access(). 841 647 * 842 648 * Can return negative error values, returns 0 on success. 843 649 */
+4 -11
drivers/dma-buf/sync_file.c
··· 67 67 * sync_file_create() - creates a sync file 68 68 * @fence: fence to add to the sync_fence 69 69 * 70 - * Creates a sync_file containg @fence. Once this is called, the sync_file 71 - * takes ownership of @fence. The sync_file can be released with 72 - * fput(sync_file->file). Returns the sync_file or NULL in case of error. 70 + * Creates a sync_file containing @fence. This function acquires an additional 71 + * reference of @fence for the newly-created &sync_file, if it succeeds. The 72 + * sync_file can be released with fput(sync_file->file). Returns the 73 + * sync_file or NULL in case of error. 73 74 */ 74 75 struct sync_file *sync_file_create(struct dma_fence *fence) 75 76 { ··· 91 90 } 92 91 EXPORT_SYMBOL(sync_file_create); 93 92 94 - /** 95 - * sync_file_fdget() - get a sync_file from an fd 96 - * @fd: fd referencing a fence 97 - * 98 - * Ensures @fd references a valid sync_file, increments the refcount of the 99 - * backing file. Returns the sync_file or NULL in case of error. 100 - */ 101 93 static struct sync_file *sync_file_fdget(int fd) 102 94 { 103 95 struct file *file = fget(fd); ··· 462 468 .unlocked_ioctl = sync_file_ioctl, 463 469 .compat_ioctl = sync_file_ioctl, 464 470 }; 465 -
+19
drivers/gpu/drm/Kconfig
··· 48 48 49 49 If in doubt, say "N". 50 50 51 + config DRM_DEBUG_MM_SELFTEST 52 + tristate "kselftests for DRM range manager (struct drm_mm)" 53 + depends on DRM 54 + depends on DEBUG_KERNEL 55 + select PRIME_NUMBERS 56 + select DRM_LIB_RANDOM 57 + default n 58 + help 59 + This option provides a kernel module that can be used to test 60 + the DRM range manager (drm_mm) and its API. This option is not 61 + useful for distributions or general kernels, but only for kernel 62 + developers working on DRM and associated drivers. 63 + 64 + If in doubt, say "N". 65 + 51 66 config DRM_KMS_HELPER 52 67 tristate 53 68 depends on DRM ··· 336 321 chipset. If M is selected the module will be called savage. 337 322 338 323 endif # DRM_LEGACY 324 + 325 + config DRM_LIB_RANDOM 326 + bool 327 + default n
+2
drivers/gpu/drm/Makefile
··· 18 18 drm_plane.o drm_color_mgmt.o drm_print.o \ 19 19 drm_dumb_buffers.o drm_mode_config.o 20 20 21 + drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o 21 22 drm-$(CONFIG_COMPAT) += drm_ioc32.o 22 23 drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o 23 24 drm-$(CONFIG_PCI) += ati_pcigart.o ··· 38 37 drm_kms_helper-$(CONFIG_DRM_DP_AUX_CHARDEV) += drm_dp_aux_dev.o 39 38 40 39 obj-$(CONFIG_DRM_KMS_HELPER) += drm_kms_helper.o 40 + obj-$(CONFIG_DRM_DEBUG_MM_SELFTEST) += selftests/ 41 41 42 42 CFLAGS_drm_trace_points.o := -I$(src) 43 43
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
··· 508 508 { 509 509 int ret; 510 510 rfb->obj = obj; 511 - drm_helper_mode_fill_fb_struct(&rfb->base, mode_cmd); 511 + drm_helper_mode_fill_fb_struct(dev, &rfb->base, mode_cmd); 512 512 ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs); 513 513 if (ret) { 514 514 rfb->obj = NULL;
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
··· 245 245 246 246 strcpy(info->fix.id, "amdgpudrmfb"); 247 247 248 - drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); 248 + drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth); 249 249 250 250 info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT; 251 251 info->fbops = &amdgpufb_ops; ··· 272 272 DRM_INFO("fb mappable at 0x%lX\n", info->fix.smem_start); 273 273 DRM_INFO("vram apper at 0x%lX\n", (unsigned long)adev->mc.aper_base); 274 274 DRM_INFO("size %lu\n", (unsigned long)amdgpu_bo_size(abo)); 275 - DRM_INFO("fb depth is %d\n", fb->depth); 275 + DRM_INFO("fb depth is %d\n", fb->format->depth); 276 276 DRM_INFO(" pitch is %d\n", fb->pitches[0]); 277 277 278 278 vga_switcheroo_client_fb_set(adev->ddev->pdev, info);
+2 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
··· 61 61 struct drm_connector *connector; 62 62 63 63 mutex_lock(&mode_config->mutex); 64 - if (mode_config->num_connector) { 65 - list_for_each_entry(connector, &mode_config->connector_list, head) 66 - amdgpu_connector_hotplug(connector); 67 - } 64 + list_for_each_entry(connector, &mode_config->connector_list, head) 65 + amdgpu_connector_hotplug(connector); 68 66 mutex_unlock(&mode_config->mutex); 69 67 /* Just fire off a uevent and let userspace tell us what to do */ 70 68 drm_helper_hpd_irq_event(dev);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
··· 32 32 33 33 #include <drm/drm_crtc.h> 34 34 #include <drm/drm_edid.h> 35 + #include <drm/drm_encoder.h> 35 36 #include <drm/drm_dp_helper.h> 36 37 #include <drm/drm_fixed.h> 37 38 #include <drm/drm_crtc_helper.h>
+3 -3
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
··· 2072 2072 2073 2073 pipe_config = AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG); 2074 2074 2075 - switch (target_fb->pixel_format) { 2075 + switch (target_fb->format->format) { 2076 2076 case DRM_FORMAT_C8: 2077 2077 fb_format = REG_SET_FIELD(0, GRPH_CONTROL, GRPH_DEPTH, 0); 2078 2078 fb_format = REG_SET_FIELD(fb_format, GRPH_CONTROL, GRPH_FORMAT, 0); ··· 2145 2145 break; 2146 2146 default: 2147 2147 DRM_ERROR("Unsupported screen format %s\n", 2148 - drm_get_format_name(target_fb->pixel_format, &format_name)); 2148 + drm_get_format_name(target_fb->format->format, &format_name)); 2149 2149 return -EINVAL; 2150 2150 } 2151 2151 ··· 2220 2220 WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width); 2221 2221 WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height); 2222 2222 2223 - fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8); 2223 + fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0]; 2224 2224 WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels); 2225 2225 2226 2226 dce_v10_0_grph_enable(crtc, true);
+3 -3
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
··· 2053 2053 2054 2054 pipe_config = AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG); 2055 2055 2056 - switch (target_fb->pixel_format) { 2056 + switch (target_fb->format->format) { 2057 2057 case DRM_FORMAT_C8: 2058 2058 fb_format = REG_SET_FIELD(0, GRPH_CONTROL, GRPH_DEPTH, 0); 2059 2059 fb_format = REG_SET_FIELD(fb_format, GRPH_CONTROL, GRPH_FORMAT, 0); ··· 2126 2126 break; 2127 2127 default: 2128 2128 DRM_ERROR("Unsupported screen format %s\n", 2129 - drm_get_format_name(target_fb->pixel_format, &format_name)); 2129 + drm_get_format_name(target_fb->format->format, &format_name)); 2130 2130 return -EINVAL; 2131 2131 } 2132 2132 ··· 2201 2201 WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width); 2202 2202 WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height); 2203 2203 2204 - fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8); 2204 + fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0]; 2205 2205 WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels); 2206 2206 2207 2207 dce_v11_0_grph_enable(crtc, true);
+3 -3
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
··· 1501 1501 amdgpu_bo_get_tiling_flags(abo, &tiling_flags); 1502 1502 amdgpu_bo_unreserve(abo); 1503 1503 1504 - switch (target_fb->pixel_format) { 1504 + switch (target_fb->format->format) { 1505 1505 case DRM_FORMAT_C8: 1506 1506 fb_format = (GRPH_DEPTH(GRPH_DEPTH_8BPP) | 1507 1507 GRPH_FORMAT(GRPH_FORMAT_INDEXED)); ··· 1567 1567 break; 1568 1568 default: 1569 1569 DRM_ERROR("Unsupported screen format %s\n", 1570 - drm_get_format_name(target_fb->pixel_format, &format_name)); 1570 + drm_get_format_name(target_fb->format->format, &format_name)); 1571 1571 return -EINVAL; 1572 1572 } 1573 1573 ··· 1630 1630 WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width); 1631 1631 WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height); 1632 1632 1633 - fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8); 1633 + fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0]; 1634 1634 WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels); 1635 1635 1636 1636 dce_v6_0_grph_enable(crtc, true);
+3 -3
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
··· 1950 1950 1951 1951 pipe_config = AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG); 1952 1952 1953 - switch (target_fb->pixel_format) { 1953 + switch (target_fb->format->format) { 1954 1954 case DRM_FORMAT_C8: 1955 1955 fb_format = ((GRPH_DEPTH_8BPP << GRPH_CONTROL__GRPH_DEPTH__SHIFT) | 1956 1956 (GRPH_FORMAT_INDEXED << GRPH_CONTROL__GRPH_FORMAT__SHIFT)); ··· 2016 2016 break; 2017 2017 default: 2018 2018 DRM_ERROR("Unsupported screen format %s\n", 2019 - drm_get_format_name(target_fb->pixel_format, &format_name)); 2019 + drm_get_format_name(target_fb->format->format, &format_name)); 2020 2020 return -EINVAL; 2021 2021 } 2022 2022 ··· 2079 2079 WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width); 2080 2080 WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height); 2081 2081 2082 - fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8); 2082 + fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0]; 2083 2083 WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels); 2084 2084 2085 2085 dce_v8_0_grph_enable(crtc, true);
+2 -1
drivers/gpu/drm/arc/arcpgu_crtc.c
··· 35 35 static void arc_pgu_set_pxl_fmt(struct drm_crtc *crtc) 36 36 { 37 37 struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc); 38 - uint32_t pixel_format = crtc->primary->state->fb->pixel_format; 38 + const struct drm_framebuffer *fb = crtc->primary->state->fb; 39 + uint32_t pixel_format = fb->format->format; 39 40 struct simplefb_format *format = NULL; 40 41 int i; 41 42
+1 -4
drivers/gpu/drm/arc/arcpgu_hdmi.c
··· 47 47 return ret; 48 48 49 49 /* Link drm_bridge to encoder */ 50 - bridge->encoder = encoder; 51 - encoder->bridge = bridge; 52 - 53 - ret = drm_bridge_attach(drm, bridge); 50 + ret = drm_bridge_attach(encoder, bridge, NULL); 54 51 if (ret) 55 52 drm_encoder_cleanup(encoder); 56 53
+10 -8
drivers/gpu/drm/arm/hdlcd_crtc.c
··· 60 60 { 61 61 unsigned int btpp; 62 62 struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc); 63 + const struct drm_framebuffer *fb = crtc->primary->state->fb; 63 64 uint32_t pixel_format; 64 65 struct simplefb_format *format = NULL; 65 66 int i; 66 67 67 - pixel_format = crtc->primary->state->fb->pixel_format; 68 + pixel_format = fb->format->format; 68 69 69 70 for (i = 0; i < ARRAY_SIZE(supported_formats); i++) { 70 71 if (supported_formats[i].fourcc == pixel_format) ··· 221 220 static void hdlcd_plane_atomic_update(struct drm_plane *plane, 222 221 struct drm_plane_state *state) 223 222 { 223 + struct drm_framebuffer *fb = plane->state->fb; 224 224 struct hdlcd_drm_private *hdlcd; 225 225 struct drm_gem_cma_object *gem; 226 226 u32 src_w, src_h, dest_w, dest_h; 227 227 dma_addr_t scanout_start; 228 228 229 - if (!plane->state->fb) 229 + if (!fb) 230 230 return; 231 231 232 232 src_w = plane->state->src_w >> 16; 233 233 src_h = plane->state->src_h >> 16; 234 234 dest_w = plane->state->crtc_w; 235 235 dest_h = plane->state->crtc_h; 236 - gem = drm_fb_cma_get_gem_obj(plane->state->fb, 0); 237 - scanout_start = gem->paddr + plane->state->fb->offsets[0] + 238 - plane->state->crtc_y * plane->state->fb->pitches[0] + 236 + gem = drm_fb_cma_get_gem_obj(fb, 0); 237 + scanout_start = gem->paddr + fb->offsets[0] + 238 + plane->state->crtc_y * fb->pitches[0] + 239 239 plane->state->crtc_x * 240 - drm_format_plane_cpp(plane->state->fb->pixel_format, 0); 240 + fb->format->cpp[0]; 241 241 242 242 hdlcd = plane->dev->dev_private; 243 - hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_LENGTH, plane->state->fb->pitches[0]); 244 - hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_PITCH, plane->state->fb->pitches[0]); 243 + hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_LENGTH, fb->pitches[0]); 244 + hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_PITCH, fb->pitches[0]); 245 245 hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_COUNT, dest_h - 1); 246 246 hdlcd_write(hdlcd, HDLCD_REG_FB_BASE, scanout_start); 247 247 }
+5 -5
drivers/gpu/drm/arm/malidp_planes.c
··· 112 112 fb = state->fb; 113 113 114 114 ms->format = malidp_hw_get_format_id(&mp->hwdev->map, mp->layer->id, 115 - fb->pixel_format); 115 + fb->format->format); 116 116 if (ms->format == MALIDP_INVALID_FORMAT_ID) 117 117 return -EINVAL; 118 118 119 - ms->n_planes = drm_format_num_planes(fb->pixel_format); 119 + ms->n_planes = fb->format->num_planes; 120 120 for (i = 0; i < ms->n_planes; i++) { 121 121 if (!malidp_hw_pitch_valid(mp->hwdev, fb->pitches[i])) { 122 122 DRM_DEBUG_KMS("Invalid pitch %u for plane %d\n", ··· 137 137 138 138 /* packed RGB888 / BGR888 can't be rotated or flipped */ 139 139 if (state->rotation != DRM_ROTATE_0 && 140 - (state->fb->pixel_format == DRM_FORMAT_RGB888 || 141 - state->fb->pixel_format == DRM_FORMAT_BGR888)) 140 + (fb->format->format == DRM_FORMAT_RGB888 || 141 + fb->format->format == DRM_FORMAT_BGR888)) 142 142 return -EINVAL; 143 143 144 144 ms->rotmem_size = 0; ··· 147 147 148 148 val = mp->hwdev->rotmem_required(mp->hwdev, state->crtc_h, 149 149 state->crtc_w, 150 - state->fb->pixel_format); 150 + fb->format->format); 151 151 if (val < 0) 152 152 return val; 153 153
+4 -5
drivers/gpu/drm/armada/armada_crtc.c
··· 169 169 int x, int y) 170 170 { 171 171 u32 addr = drm_fb_obj(fb)->dev_addr; 172 - u32 pixel_format = fb->pixel_format; 173 - int num_planes = drm_format_num_planes(pixel_format); 172 + int num_planes = fb->format->num_planes; 174 173 int i; 175 174 176 175 if (num_planes > 3) ··· 177 178 178 179 for (i = 0; i < num_planes; i++) 179 180 addrs[i] = addr + fb->offsets[i] + y * fb->pitches[i] + 180 - x * drm_format_plane_cpp(pixel_format, i); 181 + x * fb->format->cpp[i]; 181 182 for (; i < 3; i++) 182 183 addrs[i] = 0; 183 184 } ··· 190 191 unsigned i = 0; 191 192 192 193 DRM_DEBUG_DRIVER("pitch %u x %d y %d bpp %d\n", 193 - pitch, x, y, fb->bits_per_pixel); 194 + pitch, x, y, fb->format->cpp[0] * 8); 194 195 195 196 armada_drm_plane_calc_addrs(addrs, fb, x, y); 196 197 ··· 1035 1036 int ret; 1036 1037 1037 1038 /* We don't support changing the pixel format */ 1038 - if (fb->pixel_format != crtc->primary->fb->pixel_format) 1039 + if (fb->format != crtc->primary->fb->format) 1039 1040 return -EINVAL; 1040 1041 1041 1042 work = kmalloc(sizeof(*work), GFP_KERNEL);
+1 -1
drivers/gpu/drm/armada/armada_fb.c
··· 81 81 dfb->mod = config; 82 82 dfb->obj = obj; 83 83 84 - drm_helper_mode_fill_fb_struct(&dfb->fb, mode); 84 + drm_helper_mode_fill_fb_struct(dev, &dfb->fb, mode); 85 85 86 86 ret = drm_framebuffer_init(dev, &dfb->fb, &armada_fb_funcs); 87 87 if (ret) {
+3 -2
drivers/gpu/drm/armada/armada_fbdev.c
··· 89 89 info->screen_base = ptr; 90 90 fbh->fb = &dfb->fb; 91 91 92 - drm_fb_helper_fill_fix(info, dfb->fb.pitches[0], dfb->fb.depth); 92 + drm_fb_helper_fill_fix(info, dfb->fb.pitches[0], 93 + dfb->fb.format->depth); 93 94 drm_fb_helper_fill_var(info, fbh, sizes->fb_width, sizes->fb_height); 94 95 95 96 DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08llx\n", 96 - dfb->fb.width, dfb->fb.height, dfb->fb.bits_per_pixel, 97 + dfb->fb.width, dfb->fb.height, dfb->fb.format->cpp[0] * 8, 97 98 (unsigned long long)obj->phys_addr); 98 99 99 100 return 0;
+2 -2
drivers/gpu/drm/armada/armada_overlay.c
··· 186 186 187 187 armada_drm_plane_calc_addrs(addrs, fb, src_x, src_y); 188 188 189 - pixel_format = fb->pixel_format; 189 + pixel_format = fb->format->format; 190 190 hsub = drm_format_horz_chroma_subsampling(pixel_format); 191 - num_planes = drm_format_num_planes(pixel_format); 191 + num_planes = fb->format->num_planes; 192 192 193 193 /* 194 194 * Annoyingly, shifting a YUYV-format image by one pixel
+1
drivers/gpu/drm/ast/ast_drv.h
··· 28 28 #ifndef __AST_DRV_H__ 29 29 #define __AST_DRV_H__ 30 30 31 + #include <drm/drm_encoder.h> 31 32 #include <drm/drm_fb_helper.h> 32 33 33 34 #include <drm/ttm/ttm_bo_api.h>
+2 -2
drivers/gpu/drm/ast/ast_fb.c
··· 49 49 struct drm_gem_object *obj; 50 50 struct ast_bo *bo; 51 51 int src_offset, dst_offset; 52 - int bpp = (afbdev->afb.base.bits_per_pixel + 7)/8; 52 + int bpp = afbdev->afb.base.format->cpp[0]; 53 53 int ret = -EBUSY; 54 54 bool unmap = false; 55 55 bool store_for_later = false; ··· 237 237 info->apertures->ranges[0].base = pci_resource_start(dev->pdev, 0); 238 238 info->apertures->ranges[0].size = pci_resource_len(dev->pdev, 0); 239 239 240 - drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); 240 + drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth); 241 241 drm_fb_helper_fill_var(info, &afbdev->helper, sizes->fb_width, sizes->fb_height); 242 242 243 243 info->screen_base = sysram;
+1 -1
drivers/gpu/drm/ast/ast_main.c
··· 314 314 { 315 315 int ret; 316 316 317 - drm_helper_mode_fill_fb_struct(&ast_fb->base, mode_cmd); 317 + drm_helper_mode_fill_fb_struct(dev, &ast_fb->base, mode_cmd); 318 318 ast_fb->obj = obj; 319 319 ret = drm_framebuffer_init(dev, &ast_fb->base, &ast_fb_funcs); 320 320 if (ret) {
+11 -5
drivers/gpu/drm/ast/ast_mode.c
···
 				    struct ast_vbios_mode_info *vbios_mode)
 {
 	struct ast_private *ast = crtc->dev->dev_private;
+	const struct drm_framebuffer *fb = crtc->primary->fb;
 	u32 refresh_rate_index = 0, mode_id, color_index, refresh_rate;
 	u32 hborder, vborder;
 	bool check_sync;
 	struct ast_vbios_enhtable *best = NULL;

-	switch (crtc->primary->fb->bits_per_pixel) {
+	switch (fb->format->cpp[0] * 8) {
 	case 8:
 		vbios_mode->std_table = &vbios_stdtable[VGAModeIndex];
 		color_index = VGAModeIndex - 1;
···
 	ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x91, 0x00);
 	if (vbios_mode->enh_table->flags & NewModeInfo) {
 		ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x91, 0xa8);
-		ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x92, crtc->primary->fb->bits_per_pixel);
+		ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x92,
+				  fb->format->cpp[0] * 8);
 		ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x93, adjusted_mode->clock / 1000);
 		ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x94, adjusted_mode->crtc_hdisplay);
 		ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x95, adjusted_mode->crtc_hdisplay >> 8);
···
 static void ast_set_offset_reg(struct drm_crtc *crtc)
 {
 	struct ast_private *ast = crtc->dev->dev_private;
+	const struct drm_framebuffer *fb = crtc->primary->fb;

 	u16 offset;

-	offset = crtc->primary->fb->pitches[0] >> 3;
+	offset = fb->pitches[0] >> 3;
 	ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x13, (offset & 0xff));
 	ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xb0, (offset >> 8) & 0x3f);
 }
···
 			       struct ast_vbios_mode_info *vbios_mode)
 {
 	struct ast_private *ast = crtc->dev->dev_private;
+	const struct drm_framebuffer *fb = crtc->primary->fb;
 	u8 jregA0 = 0, jregA3 = 0, jregA8 = 0;

-	switch (crtc->primary->fb->bits_per_pixel) {
+	switch (fb->format->cpp[0] * 8) {
 	case 8:
 		jregA0 = 0x70;
 		jregA3 = 0x01;
···
 static bool ast_set_dac_reg(struct drm_crtc *crtc, struct drm_display_mode *mode,
 			    struct ast_vbios_mode_info *vbios_mode)
 {
-	switch (crtc->primary->fb->bits_per_pixel) {
+	const struct drm_framebuffer *fb = crtc->primary->fb;
+
+	switch (fb->format->cpp[0] * 8) {
 	case 8:
 		break;
 	default:
+1 -1
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_layer.c
···
 		return;

 	if (fb)
-		nplanes = drm_format_num_planes(fb->pixel_format);
+		nplanes = fb->format->num_planes;

 	if (nplanes > layer->max_planes)
 		return;
+1 -3
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_output.c
···
 	of_node_put(np);

 	if (bridge) {
-		output->encoder.bridge = bridge;
-		bridge->encoder = &output->encoder;
-		ret = drm_bridge_attach(dev, bridge);
+		ret = drm_bridge_attach(&output->encoder, bridge, NULL);
 		if (!ret)
 			return 0;
 	}
+11 -11
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
···
 	cfg |= ATMEL_HLCDC_LAYER_OVR | ATMEL_HLCDC_LAYER_ITER2BL |
 	       ATMEL_HLCDC_LAYER_ITER;

-	if (atmel_hlcdc_format_embeds_alpha(state->base.fb->pixel_format))
+	if (atmel_hlcdc_format_embeds_alpha(state->base.fb->format->format))
 		cfg |= ATMEL_HLCDC_LAYER_LAEN;
 	else
 		cfg |= ATMEL_HLCDC_LAYER_GAEN |
···
 	u32 cfg;
 	int ret;

-	ret = atmel_hlcdc_format_to_plane_mode(state->base.fb->pixel_format,
+	ret = atmel_hlcdc_format_to_plane_mode(state->base.fb->format->format,
 					       &cfg);
 	if (ret)
 		return;

-	if ((state->base.fb->pixel_format == DRM_FORMAT_YUV422 ||
-	     state->base.fb->pixel_format == DRM_FORMAT_NV61) &&
+	if ((state->base.fb->format->format == DRM_FORMAT_YUV422 ||
+	     state->base.fb->format->format == DRM_FORMAT_NV61) &&
 	    drm_rotation_90_or_270(state->base.rotation))
 		cfg |= ATMEL_HLCDC_YUV422ROT;
···
 	 * Rotation optimization is not working on RGB888 (rotation is still
 	 * working but without any optimization).
 	 */
-	if (state->base.fb->pixel_format == DRM_FORMAT_RGB888)
+	if (state->base.fb->format->format == DRM_FORMAT_RGB888)
 		cfg = ATMEL_HLCDC_LAYER_DMA_ROTDIS;
 	else
 		cfg = 0;
···
 	ovl_state = drm_plane_state_to_atmel_hlcdc_plane_state(ovl_s);

 	if (!ovl_s->fb ||
-	    atmel_hlcdc_format_embeds_alpha(ovl_s->fb->pixel_format) ||
+	    atmel_hlcdc_format_embeds_alpha(ovl_s->fb->format->format) ||
 	    ovl_state->alpha != 255)
 		continue;
···
 	state->src_w >>= 16;
 	state->src_h >>= 16;

-	state->nplanes = drm_format_num_planes(fb->pixel_format);
+	state->nplanes = fb->format->num_planes;
 	if (state->nplanes > ATMEL_HLCDC_MAX_PLANES)
 		return -EINVAL;
···
 	patched_src_h = DIV_ROUND_CLOSEST(patched_crtc_h * state->src_h,
 					  state->crtc_h);

-	hsub = drm_format_horz_chroma_subsampling(fb->pixel_format);
-	vsub = drm_format_vert_chroma_subsampling(fb->pixel_format);
+	hsub = drm_format_horz_chroma_subsampling(fb->format->format);
+	vsub = drm_format_vert_chroma_subsampling(fb->format->format);

 	for (i = 0; i < state->nplanes; i++) {
 		unsigned int offset = 0;
 		int xdiv = i ? hsub : 1;
 		int ydiv = i ? vsub : 1;

-		state->bpp[i] = drm_format_plane_cpp(fb->pixel_format, i);
+		state->bpp[i] = fb->format->cpp[i];
 		if (!state->bpp[i])
 			return -EINVAL;
···
 	if ((state->crtc_h != state->src_h || state->crtc_w != state->src_w) &&
 	    (!layout->memsize ||
-	     atmel_hlcdc_format_embeds_alpha(state->base.fb->pixel_format)))
+	     atmel_hlcdc_format_embeds_alpha(state->base.fb->format->format)))
 		return -EINVAL;

 	if (state->crtc_x < 0 || state->crtc_y < 0)
+1
drivers/gpu/drm/bochs/bochs.h
···
 #include <drm/drmP.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_crtc_helper.h>
+#include <drm/drm_encoder.h>
 #include <drm/drm_fb_helper.h>

 #include <drm/drm_gem.h>
+1 -1
drivers/gpu/drm/bochs/bochs_fbdev.c
···
 	info->flags = FBINFO_DEFAULT;
 	info->fbops = &bochsfb_ops;

-	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
+	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
 	drm_fb_helper_fill_var(info, &bochs->fb.helper, sizes->fb_width,
 			       sizes->fb_height);
+1 -1
drivers/gpu/drm/bochs/bochs_mm.c
···
 {
 	int ret;

-	drm_helper_mode_fill_fb_struct(&gfb->base, mode_cmd);
+	drm_helper_mode_fill_fb_struct(dev, &gfb->base, mode_cmd);
 	gfb->obj = obj;
 	ret = drm_framebuffer_init(dev, &gfb->base, &bochs_fb_funcs);
 	if (ret) {
+6 -3
drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
···
 {
 	struct analogix_dp_device *dp = dev_get_drvdata(dev);
 	struct edp_vsc_psr psr_vsc;
+	int ret;

 	if (!dp->psr_support)
 		return 0;
···
 	psr_vsc.DB0 = 0;
 	psr_vsc.DB1 = 0;
+
+	ret = drm_dp_dpcd_writeb(&dp->aux, DP_SET_POWER, DP_SET_POWER_D0);
+	if (ret != 1)
+		dev_err(dp->dev, "Failed to set DP Power0 %d\n", ret);

 	analogix_dp_send_psr_spd(dp, &psr_vsc);
 	return 0;
···
 	dp->bridge = bridge;

-	dp->encoder->bridge = bridge;
 	bridge->driver_private = dp;
-	bridge->encoder = dp->encoder;
 	bridge->funcs = &analogix_dp_bridge_funcs;

-	ret = drm_bridge_attach(drm_dev, bridge);
+	ret = drm_bridge_attach(dp->encoder, bridge, NULL);
 	if (ret) {
 		DRM_ERROR("failed to attach drm bridge\n");
 		return -EINVAL;
+1
drivers/gpu/drm/bridge/dumb-vga-dac.c
···
 static const struct of_device_id dumb_vga_match[] = {
 	{ .compatible = "dumb-vga-dac" },
+	{ .compatible = "ti,ths8135" },
 	{},
 };
 MODULE_DEVICE_TABLE(of, dumb_vga_match);
+1 -2
drivers/gpu/drm/bridge/dw-hdmi.c
···
 	hdmi->bridge = bridge;
 	bridge->driver_private = hdmi;
 	bridge->funcs = &dw_hdmi_bridge_funcs;
-	ret = drm_bridge_attach(drm, bridge);
+	ret = drm_bridge_attach(encoder, bridge, NULL);
 	if (ret) {
 		DRM_ERROR("Failed to initialize bridge with drm\n");
 		return -EINVAL;
 	}

-	encoder->bridge = bridge;
 	hdmi->connector.polled = DRM_CONNECTOR_POLL_HPD;

 	drm_connector_helper_add(&hdmi->connector,
+1
drivers/gpu/drm/cirrus/cirrus_drv.h
···
 #include <video/vga.h>

+#include <drm/drm_encoder.h>
 #include <drm/drm_fb_helper.h>

 #include <drm/ttm/ttm_bo_api.h>
+3 -3
drivers/gpu/drm/cirrus/cirrus_fbdev.c
···
 	struct drm_gem_object *obj;
 	struct cirrus_bo *bo;
 	int src_offset, dst_offset;
-	int bpp = (afbdev->gfb.base.bits_per_pixel + 7)/8;
+	int bpp = afbdev->gfb.base.format->cpp[0];
 	int ret = -EBUSY;
 	bool unmap = false;
 	bool store_for_later = false;
···
 	info->flags = FBINFO_DEFAULT;
 	info->fbops = &cirrusfb_ops;

-	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
+	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
 	drm_fb_helper_fill_var(info, &gfbdev->helper, sizes->fb_width,
 			       sizes->fb_height);
···
 	DRM_INFO("fb mappable at 0x%lX\n", info->fix.smem_start);
 	DRM_INFO("vram aper at 0x%lX\n", (unsigned long)info->fix.smem_start);
 	DRM_INFO("size %lu\n", (unsigned long)info->fix.smem_len);
-	DRM_INFO("fb depth is %d\n", fb->depth);
+	DRM_INFO("fb depth is %d\n", fb->format->depth);
 	DRM_INFO("   pitch is %d\n", fb->pitches[0]);

 	return 0;
+1 -1
drivers/gpu/drm/cirrus/cirrus_main.c
···
 {
 	int ret;

-	drm_helper_mode_fill_fb_struct(&gfb->base, mode_cmd);
+	drm_helper_mode_fill_fb_struct(dev, &gfb->base, mode_cmd);
 	gfb->obj = obj;
 	ret = drm_framebuffer_init(dev, &gfb->base, &cirrus_fb_funcs);
 	if (ret) {
+5 -4
drivers/gpu/drm/cirrus/cirrus_mode.c
···
 {
 	struct drm_device *dev = crtc->dev;
 	struct cirrus_device *cdev = dev->dev_private;
+	const struct drm_framebuffer *fb = crtc->primary->fb;
 	int hsyncstart, hsyncend, htotal, hdispend;
 	int vtotal, vdispend;
 	int tmp;
···
 	sr07 = RREG8(SEQ_DATA);
 	sr07 &= 0xe0;
 	hdr = 0;
-	switch (crtc->primary->fb->bits_per_pixel) {
+	switch (fb->format->cpp[0] * 8) {
 	case 8:
 		sr07 |= 0x11;
 		break;
···
 	WREG_SEQ(0x7, sr07);

 	/* Program the pitch */
-	tmp = crtc->primary->fb->pitches[0] / 8;
+	tmp = fb->pitches[0] / 8;
 	WREG_CRT(VGA_CRTC_OFFSET, tmp);

 	/* Enable extended blanking and pitch bits, and enable full memory */
 	tmp = 0x22;
-	tmp |= (crtc->primary->fb->pitches[0] >> 7) & 0x10;
-	tmp |= (crtc->primary->fb->pitches[0] >> 6) & 0x40;
+	tmp |= (fb->pitches[0] >> 7) & 0x10;
+	tmp |= (fb->pitches[0] >> 6) & 0x40;
 	WREG_CRT(0x1b, tmp);

 	/* Enable high-colour modes */
+15 -11
drivers/gpu/drm/drm_atomic.c
···
 	}

 	/* Check whether this plane supports the fb pixel format. */
-	ret = drm_plane_check_pixel_format(plane, state->fb->pixel_format);
+	ret = drm_plane_check_pixel_format(plane, state->fb->format->format);
 	if (ret) {
 		struct drm_format_name_buf format_name;
 		DRM_DEBUG_ATOMIC("Invalid pixel format %s\n",
-				 drm_get_format_name(state->fb->pixel_format,
+				 drm_get_format_name(state->fb->format->format,
 						     &format_name));
 		return ret;
 	}
···
 	drm_printf(p, "\tfb=%u\n", state->fb ? state->fb->base.id : 0);
 	if (state->fb) {
 		struct drm_framebuffer *fb = state->fb;
-		int i, n = drm_format_num_planes(fb->pixel_format);
+		int i, n = fb->format->num_planes;
 		struct drm_format_name_buf format_name;

 		drm_printf(p, "\t\tformat=%s\n",
-			   drm_get_format_name(fb->pixel_format, &format_name));
+			   drm_get_format_name(fb->format->format, &format_name));
 		drm_printf(p, "\t\t\tmodifier=0x%llx\n", fb->modifier);
 		drm_printf(p, "\t\tsize=%dx%d\n", fb->width, fb->height);
 		drm_printf(p, "\t\tlayers:\n");
···
 	struct drm_mode_config *config = &state->dev->mode_config;
 	struct drm_connector *connector;
 	struct drm_connector_state *conn_state;
+	struct drm_connector_list_iter conn_iter;
 	int ret;

 	ret = drm_modeset_lock(&config->connection_mutex, state->acquire_ctx);
···
 	 * Changed connectors are already in @state, so only need to look at the
 	 * current configuration.
 	 */
-	drm_for_each_connector(connector, state->dev) {
+	drm_connector_list_iter_get(state->dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter) {
 		if (connector->state->crtc != crtc)
 			continue;

 		conn_state = drm_atomic_get_connector_state(state, connector);
-		if (IS_ERR(conn_state))
+		if (IS_ERR(conn_state)) {
+			drm_connector_list_iter_put(&conn_iter);
 			return PTR_ERR(conn_state);
+		}
 	}
+	drm_connector_list_iter_put(&conn_iter);

 	return 0;
 }
···
 	struct drm_plane *plane;
 	struct drm_crtc *crtc;
 	struct drm_connector *connector;
+	struct drm_connector_list_iter conn_iter;

 	if (!drm_core_check_feature(dev, DRIVER_ATOMIC))
 		return;
···
 	list_for_each_entry(crtc, &config->crtc_list, head)
 		drm_atomic_crtc_print_state(p, crtc->state);

-	list_for_each_entry(connector, &config->connector_list, head)
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter)
 		drm_atomic_connector_print_state(p, connector->state);
+	drm_connector_list_iter_put(&conn_iter);
 }
 EXPORT_SYMBOL(drm_state_dump);
···
 		goto out;

 	if (arg->flags & DRM_MODE_ATOMIC_TEST_ONLY) {
-		/*
-		 * Unlike commit, check_only does not clean up state.
-		 * Below we call drm_atomic_state_put for it.
-		 */
 		ret = drm_atomic_check_only(state);
 	} else if (arg->flags & DRM_MODE_ATOMIC_NONBLOCK) {
 		ret = drm_atomic_nonblocking_commit(state);
+42 -73
drivers/gpu/drm/drm_atomic_helper.c
···
 {
 	struct drm_connector_state *conn_state;
 	struct drm_connector *connector;
+	struct drm_connector_list_iter conn_iter;
 	struct drm_encoder *encoder;
 	unsigned encoder_mask = 0;
-	int i, ret;
+	int i, ret = 0;

 	/*
 	 * First loop, find all newly assigned encoders from the connectors
···
 	 * and the crtc is disabled if no encoder is left. This preserves
 	 * compatibility with the legacy set_config behavior.
 	 */
-	drm_for_each_connector(connector, state->dev) {
+	drm_connector_list_iter_get(state->dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter) {
 		struct drm_crtc_state *crtc_state;

 		if (drm_atomic_get_existing_connector_state(state, connector))
···
 				 connector->state->crtc->base.id,
 				 connector->state->crtc->name,
 				 connector->base.id, connector->name);
-			return -EINVAL;
+			ret = -EINVAL;
+			goto out;
 		}

 		conn_state = drm_atomic_get_connector_state(state, connector);
-		if (IS_ERR(conn_state))
-			return PTR_ERR(conn_state);
+		if (IS_ERR(conn_state)) {
+			ret = PTR_ERR(conn_state);
+			goto out;
+		}

 		DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] in use on [CRTC:%d:%s], disabling [CONNECTOR:%d:%s]\n",
 				 encoder->base.id, encoder->name,
···
 		ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
 		if (ret)
-			return ret;
+			goto out;

 		if (!crtc_state->connector_mask) {
 			ret = drm_atomic_set_mode_prop_for_crtc(crtc_state,
 								NULL);
 			if (ret < 0)
-				return ret;
+				goto out;

 			crtc_state->active = false;
 		}
 	}
+out:
+	drm_connector_list_iter_put(&conn_iter);

-	return 0;
+	return ret;
 }

 static void
···
 EXPORT_SYMBOL(drm_atomic_helper_wait_for_fences);

 /**
- * drm_atomic_helper_framebuffer_changed - check if framebuffer has changed
- * @dev: DRM device
- * @old_state: atomic state object with old state structures
- * @crtc: DRM crtc
- *
- * Checks whether the framebuffer used for this CRTC changes as a result of
- * the atomic update. This is useful for drivers which cannot use
- * drm_atomic_helper_wait_for_vblanks() and need to reimplement its
- * functionality.
- *
- * Returns:
- * true if the framebuffer changed.
- */
-bool drm_atomic_helper_framebuffer_changed(struct drm_device *dev,
-					   struct drm_atomic_state *old_state,
-					   struct drm_crtc *crtc)
-{
-	struct drm_plane *plane;
-	struct drm_plane_state *old_plane_state;
-	int i;
-
-	for_each_plane_in_state(old_state, plane, old_plane_state, i) {
-		if (plane->state->crtc != crtc &&
-		    old_plane_state->crtc != crtc)
-			continue;
-
-		if (plane->state->fb != old_plane_state->fb)
-			return true;
-	}
-
-	return false;
-}
-EXPORT_SYMBOL(drm_atomic_helper_framebuffer_changed);
-
-/**
  * drm_atomic_helper_wait_for_vblanks - wait for vblank on crtcs
  * @dev: DRM device
  * @old_state: atomic state object with old state structures
···
 	struct drm_crtc *crtc;
 	struct drm_crtc_state *old_crtc_state;
 	int i, ret;
+	unsigned crtc_mask = 0;
+
+	/*
+	 * Legacy cursor ioctls are completely unsynced, and userspace
+	 * relies on that (by doing tons of cursor updates).
+	 */
+	if (old_state->legacy_cursor_update)
+		return;

 	for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
-		/* No one cares about the old state, so abuse it for tracking
-		 * and store whether we hold a vblank reference (and should do a
-		 * vblank wait) in the ->enable boolean. */
-		old_crtc_state->enable = false;
+		struct drm_crtc_state *new_crtc_state = crtc->state;

-		if (!crtc->state->enable)
-			continue;
-
-		/* Legacy cursor ioctls are completely unsynced, and userspace
-		 * relies on that (by doing tons of cursor updates). */
-		if (old_state->legacy_cursor_update)
-			continue;
-
-		if (!drm_atomic_helper_framebuffer_changed(dev,
-				old_state, crtc))
+		if (!new_crtc_state->active || !new_crtc_state->planes_changed)
 			continue;

 		ret = drm_crtc_vblank_get(crtc);
 		if (ret != 0)
 			continue;

-		old_crtc_state->enable = true;
-		old_crtc_state->last_vblank_count = drm_crtc_vblank_count(crtc);
+		crtc_mask |= drm_crtc_mask(crtc);
+		old_state->crtcs[i].last_vblank_count = drm_crtc_vblank_count(crtc);
 	}

 	for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
-		if (!old_crtc_state->enable)
+		if (!(crtc_mask & drm_crtc_mask(crtc)))
 			continue;

 		ret = wait_event_timeout(dev->vblank[i].queue,
-				old_crtc_state->last_vblank_count !=
+				old_state->crtcs[i].last_vblank_count !=
 					drm_crtc_vblank_count(crtc),
 				msecs_to_jiffies(50));
···
 		funcs = plane->helper_private;

-		if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc))
-			continue;
-
 		if (funcs->prepare_fb) {
 			ret = funcs->prepare_fb(plane, plane_state);
 			if (ret)
···
 		const struct drm_plane_helper_funcs *funcs;

 		if (j >= i)
 			continue;
-
-		if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc))
-			continue;

 		funcs = plane->helper_private;
···
 	for_each_plane_in_state(old_state, plane, plane_state, i) {
 		const struct drm_plane_helper_funcs *funcs;
-
-		if (!drm_atomic_helper_framebuffer_changed(dev, old_state, plane_state->crtc))
-			continue;

 		funcs = plane->helper_private;
···
 {
 	struct drm_atomic_state *state;
 	struct drm_connector *conn;
+	struct drm_connector_list_iter conn_iter;
 	int err;

 	state = drm_atomic_state_alloc(dev);
···
 	state->acquire_ctx = ctx;

-	drm_for_each_connector(conn, dev) {
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(conn, &conn_iter) {
 		struct drm_crtc *crtc = conn->state->crtc;
 		struct drm_crtc_state *crtc_state;
···
 	err = drm_atomic_commit(state);
 free:
+	drm_connector_list_iter_put(&conn_iter);
 	drm_atomic_state_put(state);
 	return err;
 }
···
 	struct drm_crtc_state *crtc_state;
 	struct drm_crtc *crtc;
 	struct drm_connector *tmp_connector;
+	struct drm_connector_list_iter conn_iter;
 	int ret;
 	bool active = false;
 	int old_mode = connector->dpms;
···
 	WARN_ON(!drm_modeset_is_locked(&config->connection_mutex));

-	drm_for_each_connector(tmp_connector, connector->dev) {
+	drm_connector_list_iter_get(connector->dev, &conn_iter);
+	drm_for_each_connector_iter(tmp_connector, &conn_iter) {
 		if (tmp_connector->state->crtc != crtc)
 			continue;
···
 			break;
 		}
 	}
+	drm_connector_list_iter_put(&conn_iter);
 	crtc_state->active = active;

 	ret = drm_atomic_commit(state);
···
 {
 	struct drm_atomic_state *state;
 	struct drm_connector *conn;
+	struct drm_connector_list_iter conn_iter;
 	struct drm_plane *plane;
 	struct drm_crtc *crtc;
 	int err = 0;
···
 		}
 	}

-	drm_for_each_connector(conn, dev) {
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(conn, &conn_iter) {
 		struct drm_connector_state *conn_state;

 		conn_state = drm_atomic_get_connector_state(state, conn);
 		if (IS_ERR(conn_state)) {
 			err = PTR_ERR(conn_state);
+			drm_connector_list_iter_put(&conn_iter);
 			goto free;
 		}
 	}
+	drm_connector_list_iter_put(&conn_iter);

 	/* clear the acquire context so that it isn't accidentally reused */
 	state->acquire_ctx = NULL;
+36 -23
drivers/gpu/drm/drm_bridge.c
···
 #include <linux/mutex.h>

 #include <drm/drm_bridge.h>
+#include <drm/drm_encoder.h>
+
+#include "drm_crtc_internal.h"

 /**
  * DOC: overview
···
 EXPORT_SYMBOL(drm_bridge_remove);

 /**
- * drm_bridge_attach - associate given bridge to our DRM device
+ * drm_bridge_attach - attach the bridge to an encoder's chain
  *
- * @dev: DRM device
- * @bridge: bridge control structure
+ * @encoder: DRM encoder
+ * @bridge: bridge to attach
+ * @previous: previous bridge in the chain (optional)
  *
- * Called by a kms driver to link one of our encoder/bridge to the given
- * bridge.
+ * Called by a kms driver to link the bridge to an encoder's chain. The previous
+ * argument specifies the previous bridge in the chain. If NULL, the bridge is
+ * linked directly at the encoder's output. Otherwise it is linked at the
+ * previous bridge's output.
  *
- * Note that setting up links between the bridge and our encoder/bridge
- * objects needs to be handled by the kms driver itself.
+ * If non-NULL the previous bridge must be already attached by a call to this
+ * function.
  *
  * RETURNS:
  * Zero on success, error code on failure
  */
-int drm_bridge_attach(struct drm_device *dev, struct drm_bridge *bridge)
+int drm_bridge_attach(struct drm_encoder *encoder, struct drm_bridge *bridge,
+		      struct drm_bridge *previous)
 {
-	if (!dev || !bridge)
+	int ret;
+
+	if (!encoder || !bridge)
+		return -EINVAL;
+
+	if (previous && (!previous->dev || previous->encoder != encoder))
 		return -EINVAL;

 	if (bridge->dev)
 		return -EBUSY;

-	bridge->dev = dev;
+	bridge->dev = encoder->dev;
+	bridge->encoder = encoder;

-	if (bridge->funcs->attach)
-		return bridge->funcs->attach(bridge);
+	if (bridge->funcs->attach) {
+		ret = bridge->funcs->attach(bridge);
+		if (ret < 0) {
+			bridge->dev = NULL;
+			bridge->encoder = NULL;
+			return ret;
+		}
+	}
+
+	if (previous)
+		previous->next = bridge;
+	else
+		encoder->bridge = bridge;

 	return 0;
 }
 EXPORT_SYMBOL(drm_bridge_attach);

-/**
- * drm_bridge_detach - deassociate given bridge from its DRM device
- *
- * @bridge: bridge control structure
- *
- * Called by a kms driver to unlink the given bridge from its DRM device.
- *
- * Note that tearing down links between the bridge and our encoder/bridge
- * objects needs to be handled by the kms driver itself.
- */
 void drm_bridge_detach(struct drm_bridge *bridge)
 {
 	if (WARN_ON(!bridge))
···
 	bridge->dev = NULL;
 }
-EXPORT_SYMBOL(drm_bridge_detach);

 /**
  * DOC: bridge callbacks
+167 -80
drivers/gpu/drm/drm_connector.c
···
 #include <drm/drmP.h>
 #include <drm/drm_connector.h>
 #include <drm/drm_edid.h>
+#include <drm/drm_encoder.h>

 #include "drm_crtc_internal.h"
 #include "drm_internal.h"
···
 	struct ida *connector_ida =
 		&drm_connector_enum_list[connector_type].ida;

-	drm_modeset_lock_all(dev);
-
 	ret = drm_mode_object_get_reg(dev, &connector->base,
 				      DRM_MODE_OBJECT_CONNECTOR,
 				      false, drm_connector_free);
 	if (ret)
-		goto out_unlock;
+		return ret;

 	connector->base.properties = &connector->properties;
 	connector->dev = dev;
···
 	INIT_LIST_HEAD(&connector->probed_modes);
 	INIT_LIST_HEAD(&connector->modes);
+	mutex_init(&connector->mutex);
 	connector->edid_blob_ptr = NULL;
 	connector->status = connector_status_unknown;
···
 	/* We should add connectors at the end to avoid upsetting the connector
 	 * index too much. */
+	spin_lock_irq(&config->connector_list_lock);
 	list_add_tail(&connector->head, &config->connector_list);
 	config->num_connector++;
+	spin_unlock_irq(&config->connector_list_lock);

 	if (connector_type != DRM_MODE_CONNECTOR_VIRTUAL)
 		drm_object_attach_property(&connector->base,
···
 out_put:
 	if (ret)
 		drm_mode_object_unregister(dev, &connector->base);
-
-out_unlock:
-	drm_modeset_unlock_all(dev);

 	return ret;
 }
···
 	drm_mode_object_unregister(dev, &connector->base);
 	kfree(connector->name);
 	connector->name = NULL;
+	spin_lock_irq(&dev->mode_config.connector_list_lock);
 	list_del(&connector->head);
 	dev->mode_config.num_connector--;
+	spin_unlock_irq(&dev->mode_config.connector_list_lock);

 	WARN_ON(connector->state && !connector->funcs->atomic_destroy_state);
 	if (connector->state && connector->funcs->atomic_destroy_state)
 		connector->funcs->atomic_destroy_state(connector,
 						       connector->state);
+
+	mutex_destroy(&connector->mutex);

 	memset(connector, 0, sizeof(*connector));
 }
···
 */
 int drm_connector_register(struct drm_connector *connector)
 {
-	int ret;
+	int ret = 0;

+	mutex_lock(&connector->mutex);
 	if (connector->registered)
-		return 0;
+		goto unlock;

 	ret = drm_sysfs_connector_add(connector);
 	if (ret)
-		return ret;
+		goto unlock;

 	ret = drm_debugfs_connector_add(connector);
 	if (ret) {
···
 	drm_mode_object_register(connector->dev, &connector->base);

 	connector->registered = true;
-	return 0;
+	goto unlock;

 err_debugfs:
 	drm_debugfs_connector_remove(connector);
 err_sysfs:
 	drm_sysfs_connector_remove(connector);
+unlock:
+	mutex_unlock(&connector->mutex);
 	return ret;
 }
 EXPORT_SYMBOL(drm_connector_register);
···
 */
 void drm_connector_unregister(struct drm_connector *connector)
 {
-	if (!connector->registered)
+	mutex_lock(&connector->mutex);
+	if (!connector->registered) {
+		mutex_unlock(&connector->mutex);
 		return;
+	}

 	if (connector->funcs->early_unregister)
 		connector->funcs->early_unregister(connector);
···
 	drm_debugfs_connector_remove(connector);

 	connector->registered = false;
+	mutex_unlock(&connector->mutex);
 }
 EXPORT_SYMBOL(drm_connector_unregister);

 void drm_connector_unregister_all(struct drm_device *dev)
 {
 	struct drm_connector *connector;
+	struct drm_connector_list_iter conn_iter;

-	/* FIXME: taking the mode config mutex ends up in a clash with sysfs */
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head)
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter)
 		drm_connector_unregister(connector);
+	drm_connector_list_iter_put(&conn_iter);
 }

 int drm_connector_register_all(struct drm_device *dev)
 {
 	struct drm_connector *connector;
-	int ret;
+	struct drm_connector_list_iter conn_iter;
+	int ret = 0;

-	/* FIXME: taking the mode config mutex ends up in a clash with
-	 * fbcon/backlight registration */
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter) {
 		ret = drm_connector_register(connector);
 		if (ret)
-			goto err;
+			break;
 	}
+	drm_connector_list_iter_put(&conn_iter);

-	return 0;
-
-err:
-	mutex_unlock(&dev->mode_config.mutex);
-	drm_connector_unregister_all(dev);
+	if (ret)
+		drm_connector_unregister_all(dev);
 	return ret;
 }
···
 	return "unknown";
 }
 EXPORT_SYMBOL(drm_get_connector_status_name);
+
+#ifdef CONFIG_LOCKDEP
+static struct lockdep_map connector_list_iter_dep_map = {
+	.name = "drm_connector_list_iter"
+};
+#endif
+
+/**
+ * drm_connector_list_iter_get - initialize a connector_list iterator
+ * @dev: DRM device
+ * @iter: connector_list iterator
+ *
+ * Sets @iter up to walk the connector list in &drm_mode_config of @dev. @iter
+ * must always be cleaned up again by calling drm_connector_list_iter_put().
+ * Iteration itself happens using drm_connector_list_iter_next() or
+ * drm_for_each_connector_iter().
+ */
+void drm_connector_list_iter_get(struct drm_device *dev,
+				 struct drm_connector_list_iter *iter)
+{
+	iter->dev = dev;
+	iter->conn = NULL;
+	lock_acquire_shared_recursive(&connector_list_iter_dep_map, 0, 1, NULL, _RET_IP_);
+}
+EXPORT_SYMBOL(drm_connector_list_iter_get);
+
+/**
+ * drm_connector_list_iter_next - return next connector
+ * @iter: connector_list iterator
+ *
+ * Returns the next connector for @iter, or NULL when the list walk has
+ * completed.
+ */
+struct drm_connector *
+drm_connector_list_iter_next(struct drm_connector_list_iter *iter)
+{
+	struct drm_connector *old_conn = iter->conn;
+	struct drm_mode_config *config = &iter->dev->mode_config;
+	struct list_head *lhead;
+	unsigned long flags;
+
+	spin_lock_irqsave(&config->connector_list_lock, flags);
+	lhead = old_conn ? &old_conn->head : &config->connector_list;
+
+	do {
+		if (lhead->next == &config->connector_list) {
+			iter->conn = NULL;
+			break;
+		}
+
+		lhead = lhead->next;
+		iter->conn = list_entry(lhead, struct drm_connector, head);
+
+		/* loop until it's not a zombie connector */
+	} while (!kref_get_unless_zero(&iter->conn->base.refcount));
+	spin_unlock_irqrestore(&config->connector_list_lock, flags);
+
+	if (old_conn)
+		drm_connector_unreference(old_conn);
+
+	return iter->conn;
+}
+EXPORT_SYMBOL(drm_connector_list_iter_next);
+
+/**
+ * drm_connector_list_iter_put - tear down a connector_list iterator
+ * @iter: connector_list iterator
+ *
+ * Tears down @iter and releases any resources (like &drm_connector references)
+ * acquired while walking the list. This must always be called, both when the
+ * iteration completes fully or when it was aborted without walking the entire
+ * list.
+ */
+void drm_connector_list_iter_put(struct drm_connector_list_iter *iter)
+{
+	iter->dev = NULL;
+	if (iter->conn)
+		drm_connector_unreference(iter->conn);
+	lock_release(&connector_list_iter_dep_map, 0, _RET_IP_);
+}
+EXPORT_SYMBOL(drm_connector_list_iter_put);

 static const struct drm_prop_enum_list drm_subpixel_enum_list[] = {
 	{ SubPixelUnknown, "Unknown" },
···
 	memset(&u_mode, 0, sizeof(struct drm_mode_modeinfo));

-	mutex_lock(&dev->mode_config.mutex);
-
 	connector = drm_connector_lookup(dev, out_resp->connector_id);
-	if (!connector) {
-		ret = -ENOENT;
-		goto out_unlock;
-	}
-
-	for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++)
-		if (connector->encoder_ids[i] != 0)
-			encoders_count++;
-
-	if (out_resp->count_modes == 0) {
-		connector->funcs->fill_modes(connector,
-					     dev->mode_config.max_width,
-					     dev->mode_config.max_height);
-	}
-
-	/* delayed so we get modes regardless of pre-fill_modes state */
-	list_for_each_entry(mode, &connector->modes, head)
-		if (drm_mode_expose_to_userspace(mode, file_priv))
-			mode_count++;
-
-	out_resp->connector_id = connector->base.id;
-	out_resp->connector_type = connector->connector_type;
-	out_resp->connector_type_id = connector->connector_type_id;
-	out_resp->mm_width = connector->display_info.width_mm;
-	out_resp->mm_height = connector->display_info.height_mm;
-	out_resp->subpixel = connector->display_info.subpixel_order;
-	out_resp->connection = connector->status;
+	if (!connector)
+		return -ENOENT;

 	drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
 	encoder = drm_connector_get_encoder(connector);
···
 		out_resp->encoder_id = encoder->base.id;
 	else
 		out_resp->encoder_id = 0;
+
+	ret = drm_mode_object_get_properties(&connector->base, file_priv->atomic,
+			(uint32_t __user *)(unsigned long)(out_resp->props_ptr),
+			(uint64_t __user *)(unsigned long)(out_resp->prop_values_ptr),
+			&out_resp->count_props);
+	drm_modeset_unlock(&dev->mode_config.connection_mutex);
+	if (ret)
+		goto out_unref;
+
+	for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++)
+		if (connector->encoder_ids[i] != 0)
+			encoders_count++;
+
+	if ((out_resp->count_encoders >= encoders_count) && encoders_count) {
+		copied = 0;
+		encoder_ptr = (uint32_t __user *)(unsigned long)(out_resp->encoders_ptr);
+		for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) {
+			if (connector->encoder_ids[i] != 0) {
+				if (put_user(connector->encoder_ids[i],
+					     encoder_ptr + copied)) {
+					ret = -EFAULT;
+					goto out_unref;
+				}
+				copied++;
+			}
+		}
+	}
+	out_resp->count_encoders = encoders_count;
+
+	out_resp->connector_id = connector->base.id;
+	out_resp->connector_type = connector->connector_type;
+	out_resp->connector_type_id = connector->connector_type_id;
+
+	mutex_lock(&dev->mode_config.mutex);
+	if (out_resp->count_modes == 0) {
+		connector->funcs->fill_modes(connector,
+					     dev->mode_config.max_width,
+					     dev->mode_config.max_height);
+	}
+
+	out_resp->mm_width = connector->display_info.width_mm;
+	out_resp->mm_height = connector->display_info.height_mm;
+	out_resp->subpixel = connector->display_info.subpixel_order;
+	out_resp->connection = connector->status;
+
+	/* delayed so we get modes regardless of pre-fill_modes state */
+	list_for_each_entry(mode, &connector->modes, head)
+		if (drm_mode_expose_to_userspace(mode, file_priv))
+			mode_count++;

 	/*
 	 * This ioctl is called twice, once to determine how much space is
···
 		}
 	}
out_resp->count_modes = mode_count; 1247 - 1248 - ret = drm_mode_object_get_properties(&connector->base, file_priv->atomic, 1249 - (uint32_t __user *)(unsigned long)(out_resp->props_ptr), 1250 - (uint64_t __user *)(unsigned long)(out_resp->prop_values_ptr), 1251 - &out_resp->count_props); 1252 - if (ret) 1253 - goto out; 1254 - 1255 - if ((out_resp->count_encoders >= encoders_count) && encoders_count) { 1256 - copied = 0; 1257 - encoder_ptr = (uint32_t __user *)(unsigned long)(out_resp->encoders_ptr); 1258 - for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) { 1259 - if (connector->encoder_ids[i] != 0) { 1260 - if (put_user(connector->encoder_ids[i], 1261 - encoder_ptr + copied)) { 1262 - ret = -EFAULT; 1263 - goto out; 1264 - } 1265 - copied++; 1266 - } 1267 - } 1268 - } 1269 - out_resp->count_encoders = encoders_count; 1270 - 1271 1134 out: 1272 - drm_modeset_unlock(&dev->mode_config.connection_mutex); 1273 - 1274 - drm_connector_unreference(connector); 1275 - out_unlock: 1276 1135 mutex_unlock(&dev->mode_config.mutex); 1136 + out_unref: 1137 + drm_connector_unreference(connector); 1277 1138 1278 1139 return ret; 1279 1140 }
+6 -3
drivers/gpu/drm/drm_crtc.c
···

 	drm_modeset_lock_crtc(crtc, crtc->primary);
 	crtc_resp->gamma_size = crtc->gamma_size;
-	if (crtc->primary->fb)
+
+	if (crtc->primary->state && crtc->primary->state->fb)
+		crtc_resp->fb_id = crtc->primary->state->fb->base.id;
+	else if (!crtc->primary->state && crtc->primary->fb)
 		crtc_resp->fb_id = crtc->primary->fb->base.id;
 	else
 		crtc_resp->fb_id = 0;
···
 	 */
 	if (!crtc->primary->format_default) {
 		ret = drm_plane_check_pixel_format(crtc->primary,
-						   fb->pixel_format);
+						   fb->format->format);
 		if (ret) {
 			struct drm_format_name_buf format_name;
 			DRM_DEBUG_KMS("Invalid pixel format %s\n",
-				      drm_get_format_name(fb->pixel_format,
+				      drm_get_format_name(fb->format->format,
 							  &format_name));
 			goto out;
 		}
+40 -13
drivers/gpu/drm/drm_crtc_helper.c
···
 #include <drm/drmP.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_crtc.h>
+#include <drm/drm_encoder.h>
 #include <drm/drm_fourcc.h>
 #include <drm/drm_crtc_helper.h>
 #include <drm/drm_fb_helper.h>
···
 bool drm_helper_encoder_in_use(struct drm_encoder *encoder)
 {
 	struct drm_connector *connector;
+	struct drm_connector_list_iter conn_iter;
 	struct drm_device *dev = encoder->dev;

 	/*
···
 		WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
 	}

-	drm_for_each_connector(connector, dev)
-		if (connector->encoder == encoder)
+
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter) {
+		if (connector->encoder == encoder) {
+			drm_connector_list_iter_put(&conn_iter);
 			return true;
+		}
+	}
+	drm_connector_list_iter_put(&conn_iter);
 	return false;
 }
 EXPORT_SYMBOL(drm_helper_encoder_in_use);
···

 	/* Decouple all encoders and their attached connectors from this crtc */
 	drm_for_each_encoder(encoder, dev) {
+		struct drm_connector_list_iter conn_iter;
+
 		if (encoder->crtc != crtc)
 			continue;

-		drm_for_each_connector(connector, dev) {
+		drm_connector_list_iter_get(dev, &conn_iter);
+		drm_for_each_connector_iter(connector, &conn_iter) {
 			if (connector->encoder != encoder)
 				continue;

···
 			/* we keep a reference while the encoder is bound */
 			drm_connector_unreference(connector);
 		}
+		drm_connector_list_iter_put(&conn_iter);
 	}

 	__drm_helper_disable_unused_functions(dev);
···
 	bool mode_changed = false; /* if true do a full mode set */
 	bool fb_changed = false; /* if true and !mode_changed just do a flip */
 	struct drm_connector *connector;
+	struct drm_connector_list_iter conn_iter;
 	int count = 0, ro, fail = 0;
 	const struct drm_crtc_helper_funcs *crtc_funcs;
 	struct drm_mode_set save_set;
···
 	}

 	count = 0;
-	drm_for_each_connector(connector, dev) {
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter)
 		save_connector_encoders[count++] = connector->encoder;
-	}
+	drm_connector_list_iter_put(&conn_iter);

 	save_set.crtc = set->crtc;
 	save_set.mode = &set->crtc->mode;
···
 		if (set->crtc->primary->fb == NULL) {
 			DRM_DEBUG_KMS("crtc has no fb, full mode set\n");
 			mode_changed = true;
-		} else if (set->fb->pixel_format !=
-			   set->crtc->primary->fb->pixel_format) {
+		} else if (set->fb->format != set->crtc->primary->fb->format) {
 			mode_changed = true;
 		} else
 			fb_changed = true;
···

 	/* a) traverse passed in connector list and get encoders for them */
 	count = 0;
-	drm_for_each_connector(connector, dev) {
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter) {
 		const struct drm_connector_helper_funcs *connector_funcs =
 			connector->helper_private;
 		new_encoder = connector->encoder;
···
 			connector->encoder = new_encoder;
 		}
 	}
+	drm_connector_list_iter_put(&conn_iter);

 	if (fail) {
 		ret = -EINVAL;
···
 	}

 	count = 0;
-	drm_for_each_connector(connector, dev) {
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter) {
 		if (!connector->encoder)
 			continue;

···
 		if (new_crtc &&
 		    !drm_encoder_crtc_ok(connector->encoder, new_crtc)) {
 			ret = -EINVAL;
+			drm_connector_list_iter_put(&conn_iter);
 			goto fail;
 		}
 		if (new_crtc != connector->encoder->crtc) {
···
 				      connector->base.id, connector->name);
 		}
 	}
+	drm_connector_list_iter_put(&conn_iter);

 	/* mode_set_base is not a required function */
 	if (fb_changed && !crtc_funcs->mode_set_base)
···
 	}

 	count = 0;
-	drm_for_each_connector(connector, dev) {
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter)
 		connector->encoder = save_connector_encoders[count++];
-	}
+	drm_connector_list_iter_put(&conn_iter);

 	/* after fail drop reference on all unbound connectors in set, let
 	 * bound connectors keep their reference
···
 {
 	int dpms = DRM_MODE_DPMS_OFF;
 	struct drm_connector *connector;
+	struct drm_connector_list_iter conn_iter;
 	struct drm_device *dev = encoder->dev;

-	drm_for_each_connector(connector, dev)
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter)
 		if (connector->encoder == encoder)
 			if (connector->dpms < dpms)
 				dpms = connector->dpms;
+	drm_connector_list_iter_put(&conn_iter);
+
 	return dpms;
 }
···
 {
 	int dpms = DRM_MODE_DPMS_OFF;
 	struct drm_connector *connector;
+	struct drm_connector_list_iter conn_iter;
 	struct drm_device *dev = crtc->dev;

-	drm_for_each_connector(connector, dev)
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter)
 		if (connector->encoder && connector->encoder->crtc == crtc)
 			if (connector->dpms < dpms)
 				dpms = connector->dpms;
+	drm_connector_list_iter_put(&conn_iter);
+
 	return dpms;
 }

+9
drivers/gpu/drm/drm_crtc_internal.h
···
 			 void *data, struct drm_file *file_priv);

 /* drm_atomic.c */
+#ifdef CONFIG_DEBUG_FS
+struct drm_minor;
+int drm_atomic_debugfs_init(struct drm_minor *minor);
+int drm_atomic_debugfs_cleanup(struct drm_minor *minor);
+#endif
+
 int drm_atomic_get_property(struct drm_mode_object *obj,
 			    struct drm_property *property, uint64_t *val);
 int drm_mode_atomic_ioctl(struct drm_device *dev,
···
 void drm_plane_unregister_all(struct drm_device *dev);
 int drm_plane_check_pixel_format(const struct drm_plane *plane,
 				 u32 format);
+
+/* drm_bridge.c */
+void drm_bridge_detach(struct drm_bridge *bridge);

 /* IOCTL */
 int drm_mode_getplane_res(struct drm_device *dev, void *data,
+1
drivers/gpu/drm/drm_debugfs.c
···
 #include <drm/drm_edid.h>
 #include <drm/drm_atomic.h>
 #include "drm_internal.h"
+#include "drm_crtc_internal.h"

 #if defined(CONFIG_DEBUG_FS)

+7 -4
drivers/gpu/drm/drm_drv.c
···
  * historical baggage. Hence use the reference counting provided by
  * drm_dev_ref() and drm_dev_unref() only carefully.
  *
- * Also note that embedding of &drm_device is currently not (yet) supported (but
- * it would be easy to add). Drivers can store driver-private data in the
- * dev_priv field of &drm_device.
+ * It is recommended that drivers embed struct &drm_device into their own device
+ * structure, which is supported through drm_dev_init().
  */

 /**
···
  * Note that for purely virtual devices @parent can be NULL.
  *
  * Drivers that do not want to allocate their own device struct
- * embedding struct &drm_device can call drm_dev_alloc() instead.
+ * embedding struct &drm_device can call drm_dev_alloc() instead. For drivers
+ * that do embed struct &drm_device it must be placed first in the overall
+ * structure, and the overall structure must be allocated using kmalloc(): The
+ * drm core's release function unconditionally calls kfree() on the @dev pointer
+ * when the final reference is released.
  *
  * RETURNS:
  * 0 on success, or error code on failure.
+1
drivers/gpu/drm/drm_edid.c
···
 #include <linux/vga_switcheroo.h>
 #include <drm/drmP.h>
 #include <drm/drm_edid.h>
+#include <drm/drm_encoder.h>
 #include <drm/drm_displayid.h>

 #define version_greater(edid, maj, min) \
+16 -1
drivers/gpu/drm/drm_encoder.c
···
 	 * the indices on the drm_encoder after us in the encoder_list.
 	 */

+	if (encoder->bridge) {
+		struct drm_bridge *bridge = encoder->bridge;
+		struct drm_bridge *next;
+
+		while (bridge) {
+			next = bridge->next;
+			drm_bridge_detach(bridge);
+			bridge = next;
+		}
+	}
+
 	drm_mode_object_unregister(dev, &encoder->base);
 	kfree(encoder->name);
 	list_del(&encoder->head);
···
 	struct drm_connector *connector;
 	struct drm_device *dev = encoder->dev;
 	bool uses_atomic = false;
+	struct drm_connector_list_iter conn_iter;

 	/* For atomic drivers only state objects are synchronously updated and
 	 * protected by modeset locks, so check those first. */
-	drm_for_each_connector(connector, dev) {
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter) {
 		if (!connector->state)
 			continue;

···
 		if (connector->state->best_encoder != encoder)
 			continue;

+		drm_connector_list_iter_put(&conn_iter);
 		return connector->state->crtc;
 	}
+	drm_connector_list_iter_put(&conn_iter);

 	/* Don't return stale data (e.g. pending async disable). */
 	if (uses_atomic)
+4 -7
drivers/gpu/drm/drm_fb_cma_helper.c
···
 	if (!fb_cma)
 		return ERR_PTR(-ENOMEM);

-	drm_helper_mode_fill_fb_struct(&fb_cma->fb, mode_cmd);
+	drm_helper_mode_fill_fb_struct(dev, &fb_cma->fb, mode_cmd);

 	for (i = 0; i < num_planes; i++)
 		fb_cma->obj[i] = obj[i];
···
 static void drm_fb_cma_describe(struct drm_framebuffer *fb, struct seq_file *m)
 {
 	struct drm_fb_cma *fb_cma = to_fb_cma(fb);
-	const struct drm_format_info *info;
 	int i;

 	seq_printf(m, "fb: %dx%d@%4.4s\n", fb->width, fb->height,
-			(char *)&fb->pixel_format);
+			(char *)&fb->format->format);

-	info = drm_format_info(fb->pixel_format);
-
-	for (i = 0; i < info->num_planes; i++) {
+	for (i = 0; i < fb->format->num_planes; i++) {
 		seq_printf(m, "   %d: offset=%d pitch=%d, obj: ",
 				i, fb->offsets[i], fb->pitches[i]);
 		drm_gem_cma_describe(fb_cma->obj[i], m);
···
 	fbi->flags = FBINFO_FLAG_DEFAULT;
 	fbi->fbops = &drm_fbdev_cma_ops;

-	drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth);
+	drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->format->depth);
 	drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);

 	offset = fbi->var.xoffset * bytes_per_pixel;
+16 -12
drivers/gpu/drm/drm_fb_helper.c
···
 {
 	struct drm_device *dev = fb_helper->dev;
 	struct drm_connector *connector;
-	int i, ret;
+	struct drm_connector_list_iter conn_iter;
+	int i, ret = 0;

 	if (!drm_fbdev_emulation)
 		return 0;

 	mutex_lock(&dev->mode_config.mutex);
-	drm_for_each_connector(connector, dev) {
+	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter) {
 		ret = drm_fb_helper_add_one_connector(fb_helper, connector);

 		if (ret)
 			goto fail;
 	}
-	mutex_unlock(&dev->mode_config.mutex);
-	return 0;
+	goto out;
+
 fail:
 	drm_fb_helper_for_each_connector(fb_helper, i) {
 		struct drm_fb_helper_connector *fb_helper_connector =
···
 		fb_helper->connector_info[i] = NULL;
 	}
 	fb_helper->connector_count = 0;
+out:
+	drm_connector_list_iter_put(&conn_iter);
 	mutex_unlock(&dev->mode_config.mutex);

 	return ret;
···

 	drm_warn_on_modeset_not_all_locked(dev);

-	if (dev->mode_config.funcs->atomic_commit)
+	if (drm_drv_uses_atomic_modeset(dev))
 		return restore_fbdev_mode_atomic(fb_helper);

 	drm_for_each_plane(plane, dev) {
···
 		    !fb_helper->funcs->gamma_get))
 		return -EINVAL;

-	WARN_ON(fb->bits_per_pixel != 8);
+	WARN_ON(fb->format->cpp[0] != 1);

 	fb_helper->funcs->gamma_set(crtc, red, green, blue, regno);

···
 	 * Changes struct fb_var_screeninfo are currently not pushed back
 	 * to KMS, hence fail if different settings are requested.
 	 */
-	if (var->bits_per_pixel != fb->bits_per_pixel ||
+	if (var->bits_per_pixel != fb->format->cpp[0] * 8 ||
 	    var->xres != fb->width || var->yres != fb->height ||
 	    var->xres_virtual != fb->width || var->yres_virtual != fb->height) {
 		DRM_DEBUG("fb userspace requested width/height/bpp different than current fb "
 			  "request %dx%d-%d (virtual %dx%d) > %dx%d-%d\n",
 			  var->xres, var->yres, var->bits_per_pixel,
 			  var->xres_virtual, var->yres_virtual,
-			  fb->width, fb->height, fb->bits_per_pixel);
+			  fb->width, fb->height, fb->format->cpp[0] * 8);
 		return -EINVAL;
 	}
···
 		return -EBUSY;
 	}

-	if (dev->mode_config.funcs->atomic_commit) {
+	if (drm_drv_uses_atomic_modeset(dev)) {
 		ret = pan_display_atomic(var, info);
 		goto unlock;
 	}
···
 	info->pseudo_palette = fb_helper->pseudo_palette;
 	info->var.xres_virtual = fb->width;
 	info->var.yres_virtual = fb->height;
-	info->var.bits_per_pixel = fb->bits_per_pixel;
+	info->var.bits_per_pixel = fb->format->cpp[0] * 8;
 	info->var.accel_flags = FB_ACCELF_TEXT;
 	info->var.xoffset = 0;
 	info->var.yoffset = 0;
···
 	info->var.height = -1;
 	info->var.width = -1;

-	switch (fb->depth) {
+	switch (fb->format->depth) {
 	case 8:
 		info->var.red.offset = 0;
 		info->var.green.offset = 0;
···
 	 * NULL we fallback to the default drm_atomic_helper_best_encoder()
 	 * helper.
 	 */
-	if (fb_helper->dev->mode_config.funcs->atomic_commit &&
+	if (drm_drv_uses_atomic_modeset(fb_helper->dev) &&
 	    !connector_funcs->best_encoder)
 		encoder = drm_atomic_helper_best_encoder(connector);
 	else
+1 -1
drivers/gpu/drm/drm_fops.c
···
  * kmalloc and @p must be the first member element.
  *
  * Callers which already hold dev->event_lock should use
- * drm_event_reserve_init() instead.
+ * drm_event_reserve_init_locked() instead.
  *
  * RETURNS:
  *
+50 -3
drivers/gpu/drm/drm_framebuffer.c
···

 	r->height = fb->height;
 	r->width = fb->width;
-	r->depth = fb->depth;
-	r->bpp = fb->bits_per_pixel;
+	r->depth = fb->format->depth;
+	r->bpp = fb->format->cpp[0] * 8;
 	r->pitch = fb->pitches[0];
 	if (fb->funcs->create_handle) {
 		if (drm_is_current_master(file_priv) || capable(CAP_SYS_ADMIN) ||
···
 {
 	int ret;

+	if (WARN_ON_ONCE(fb->dev != dev || !fb->format))
+		return -EINVAL;
+
 	INIT_LIST_HEAD(&fb->filp_head);
-	fb->dev = dev;
+
 	fb->funcs = funcs;

 	ret = drm_mode_object_get_reg(dev, &fb->base, DRM_MODE_OBJECT_FB,
···
 	drm_framebuffer_unreference(fb);
 }
 EXPORT_SYMBOL(drm_framebuffer_remove);
+
+/**
+ * drm_framebuffer_plane_width - width of the plane given the first plane
+ * @width: width of the first plane
+ * @fb: the framebuffer
+ * @plane: plane index
+ *
+ * Returns:
+ * The width of @plane, given that the width of the first plane is @width.
+ */
+int drm_framebuffer_plane_width(int width,
+				const struct drm_framebuffer *fb, int plane)
+{
+	if (plane >= fb->format->num_planes)
+		return 0;
+
+	if (plane == 0)
+		return width;
+
+	return width / fb->format->hsub;
+}
+EXPORT_SYMBOL(drm_framebuffer_plane_width);
+
+/**
+ * drm_framebuffer_plane_height - height of the plane given the first plane
+ * @height: height of the first plane
+ * @fb: the framebuffer
+ * @plane: plane index
+ *
+ * Returns:
+ * The height of @plane, given that the height of the first plane is @height.
+ */
+int drm_framebuffer_plane_height(int height,
+				 const struct drm_framebuffer *fb, int plane)
+{
+	if (plane >= fb->format->num_planes)
+		return 0;
+
+	if (plane == 0)
+		return height;
+
+	return height / fb->format->vsub;
+}
+EXPORT_SYMBOL(drm_framebuffer_plane_height);
+4 -4
drivers/gpu/drm/drm_internal.h
···
 /* IOCTLS */
 int drm_wait_vblank(struct drm_device *dev, void *data,
 		    struct drm_file *filp);
-int drm_control(struct drm_device *dev, void *data,
-		struct drm_file *file_priv);
-int drm_modeset_ctl(struct drm_device *dev, void *data,
-		    struct drm_file *file_priv);
+int drm_legacy_irq_control(struct drm_device *dev, void *data,
+			   struct drm_file *file_priv);
+int drm_legacy_modeset_ctl(struct drm_device *dev, void *data,
+			   struct drm_file *file_priv);

 /* drm_auth.c */
 int drm_getmagic(struct drm_device *dev, void *data,
+14 -9
drivers/gpu/drm/drm_ioctl.c
···
 	struct drm_unique *u = data;
 	struct drm_master *master = file_priv->master;

+	mutex_lock(&master->dev->master_mutex);
 	if (u->unique_len >= master->unique_len) {
-		if (copy_to_user(u->unique, master->unique, master->unique_len))
+		if (copy_to_user(u->unique, master->unique, master->unique_len)) {
+			mutex_unlock(&master->dev->master_mutex);
 			return -EFAULT;
+		}
 	}
 	u->unique_len = master->unique_len;
+	mutex_unlock(&master->dev->master_mutex);

 	return 0;
 }
···
 	struct drm_set_version *sv = data;
 	int if_version, retcode = 0;

+	mutex_lock(&dev->master_mutex);
 	if (sv->drm_di_major != -1) {
 		if (sv->drm_di_major != DRM_IF_MAJOR ||
 		    sv->drm_di_minor < 0 || sv->drm_di_minor > DRM_IF_MINOR) {
···
 	sv->drm_di_minor = DRM_IF_MINOR;
 	sv->drm_dd_major = dev->driver->major;
 	sv->drm_dd_minor = dev->driver->minor;
+	mutex_unlock(&dev->master_mutex);

 	return retcode;
 }
···
 static const struct drm_ioctl_desc drm_ioctls[] = {
 	DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version,
 		      DRM_UNLOCKED|DRM_RENDER_ALLOW|DRM_CONTROL_ALLOW),
-	DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, 0),
+	DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, DRM_UNLOCKED),
 	DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, drm_getmagic, DRM_UNLOCKED),
 	DRM_IOCTL_DEF(DRM_IOCTL_IRQ_BUSID, drm_irq_by_busid, DRM_MASTER|DRM_ROOT_ONLY),
 	DRM_IOCTL_DEF(DRM_IOCTL_GET_MAP, drm_legacy_getmap_ioctl, DRM_UNLOCKED),
 	DRM_IOCTL_DEF(DRM_IOCTL_GET_CLIENT, drm_getclient, DRM_UNLOCKED),
 	DRM_IOCTL_DEF(DRM_IOCTL_GET_STATS, drm_getstats, DRM_UNLOCKED),
 	DRM_IOCTL_DEF(DRM_IOCTL_GET_CAP, drm_getcap, DRM_UNLOCKED|DRM_RENDER_ALLOW),
-	DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_CAP, drm_setclientcap, 0),
-	DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_setversion, DRM_MASTER),
+	DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_CAP, drm_setclientcap, DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_setversion, DRM_UNLOCKED | DRM_MASTER),

 	DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_invalid_op, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
 	DRM_IOCTL_DEF(DRM_IOCTL_BLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
···
 	DRM_IOCTL_DEF(DRM_IOCTL_FREE_BUFS, drm_legacy_freebufs, DRM_AUTH),
 	DRM_IOCTL_DEF(DRM_IOCTL_DMA, drm_legacy_dma_ioctl, DRM_AUTH),

-	DRM_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_control, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_legacy_irq_control, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),

 #if IS_ENABLED(CONFIG_AGP)
 	DRM_IOCTL_DEF(DRM_IOCTL_AGP_ACQUIRE, drm_agp_acquire_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
···

 	DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, DRM_UNLOCKED),

-	DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_modeset_ctl, 0),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_legacy_modeset_ctl, 0),

 	DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),

···
 	if (ksize > in_size)
 		memset(kdata + in_size, 0, ksize - in_size);

-	/* Enforce sane locking for modern driver ioctls. Core ioctls are
-	 * too messy still. */
-	if ((!drm_core_check_feature(dev, DRIVER_LEGACY) && is_driver_ioctl) ||
+	/* Enforce sane locking for modern driver ioctls. */
+	if (!drm_core_check_feature(dev, DRIVER_LEGACY) ||
 	    (ioctl->flags & DRM_UNLOCKED))
 		retcode = func(dev, kdata, file_priv);
 	else {
+4 -26
drivers/gpu/drm/drm_irq.c
···
 }
 EXPORT_SYMBOL(drm_irq_uninstall);

-/*
- * IRQ control ioctl.
- *
- * \param inode device inode.
- * \param file_priv DRM file private.
- * \param cmd command.
- * \param arg user argument, pointing to a drm_control structure.
- * \return zero on success or a negative number on failure.
- *
- * Calls irq_install() or irq_uninstall() according to \p arg.
- */
-int drm_control(struct drm_device *dev, void *data,
-		struct drm_file *file_priv)
+int drm_legacy_irq_control(struct drm_device *dev, void *data,
+			   struct drm_file *file_priv)
 {
 	struct drm_control *ctl = data;
 	int ret = 0, irq;
···
 	}
 }

-/*
- * drm_modeset_ctl - handle vblank event counter changes across mode switch
- * @DRM_IOCTL_ARGS: standard ioctl arguments
- *
- * Applications should call the %_DRM_PRE_MODESET and %_DRM_POST_MODESET
- * ioctls around modesetting so that any lost vblank events are accounted for.
- *
- * Generally the counter will reset across mode sets. If interrupts are
- * enabled around this call, we don't have to do anything since the counter
- * will have already been incremented.
- */
-int drm_modeset_ctl(struct drm_device *dev, void *data,
-		    struct drm_file *file_priv)
+int drm_legacy_modeset_ctl(struct drm_device *dev, void *data,
+			   struct drm_file *file_priv)
 {
 	struct drm_modeset_ctl *modeset = data;
 	unsigned int pipe;
+230 -335
drivers/gpu/drm/drm_mm.c
···
 /**************************************************************************
  *
  * Copyright 2006 Tungsten Graphics, Inc., Bismarck, ND., USA.
+ * Copyright 2016 Intel Corporation
  * All Rights Reserved.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
···
  * class implementation for more advanced memory managers.
  *
  * Note that the algorithm used is quite simple and there might be substantial
- * performance gains if a smarter free list is implemented. Currently it is just an
- * unordered stack of free regions. This could easily be improved if an RB-tree
- * is used instead. At least if we expect heavy fragmentation.
+ * performance gains if a smarter free list is implemented. Currently it is
+ * just an unordered stack of free regions. This could easily be improved if
+ * an RB-tree is used instead. At least if we expect heavy fragmentation.
  *
  * Aligned allocations can also see improvement.
  *
···
  * where an object needs to be created which exactly matches the firmware's
  * scanout target. As long as the range is still free it can be inserted anytime
  * after the allocator is initialized, which helps with avoiding looped
- * depencies in the driver load sequence.
+ * dependencies in the driver load sequence.
  *
  * drm_mm maintains a stack of most recently freed holes, which of all
  * simplistic datastructures seems to be a fairly decent approach to clustering
···
  *
  * drm_mm supports a few features: Alignment and range restrictions can be
  * supplied. Further more every &drm_mm_node has a color value (which is just an
- * opaqua unsigned long) which in conjunction with a driver callback can be used
+ * opaque unsigned long) which in conjunction with a driver callback can be used
  * to implement sophisticated placement restrictions. The i915 DRM driver uses
  * this to implement guard pages between incompatible caching domains in the
  * graphics TT.
  *
- * Two behaviors are supported for searching and allocating: bottom-up and top-down.
- * The default is bottom-up. Top-down allocation can be used if the memory area
- * has different restrictions, or just to reduce fragmentation.
+ * Two behaviors are supported for searching and allocating: bottom-up and
+ * top-down. The default is bottom-up. Top-down allocation can be used if the
+ * memory area has different restrictions, or just to reduce fragmentation.
  *
  * Finally iteration helpers to walk all nodes and all holes are provided as are
  * some basic allocator dumpers for debugging.
+ *
+ * Note that this range allocator is not thread-safe, drivers need to protect
+ * modifications with their own locking. The idea behind this is that for a full
+ * memory manager additional data needs to be protected anyway, hence internal
+ * locking would be fully redundant.
  */

-static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
-						      u64 size,
-						      unsigned alignment,
-						      unsigned long color,
-						      enum drm_mm_search_flags flags);
 static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
 						u64 size,
-						unsigned alignment,
+						u64 alignment,
 						unsigned long color,
 						u64 start,
 						u64 end,
···
 	if (!buf)
 		return;

-	list_for_each_entry(node, &mm->head_node.node_list, node_list) {
+	list_for_each_entry(node, drm_mm_nodes(mm), node_list) {
 		struct stack_trace trace = {
 			.entries = entries,
 			.max_entries = STACKDEPTH
···
 		     START, LAST, static inline, drm_mm_interval_tree)

 struct drm_mm_node *
-__drm_mm_interval_first(struct drm_mm *mm, u64 start, u64 last)
+__drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last)
 {
-	return drm_mm_interval_tree_iter_first(&mm->interval_tree,
+	return drm_mm_interval_tree_iter_first((struct rb_root *)&mm->interval_tree,
 					       start, last);
 }
 EXPORT_SYMBOL(__drm_mm_interval_first);
···

 static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
 				 struct drm_mm_node *node,
-				 u64 size, unsigned alignment,
+				 u64 size, u64 alignment,
 				 unsigned long color,
+				 u64 range_start, u64 range_end,
 				 enum drm_mm_allocator_flags flags)
 {
 	struct drm_mm *mm = hole_node->mm;
···
 	u64 adj_start = hole_start;
 	u64 adj_end = hole_end;

-	BUG_ON(node->allocated);
+	DRM_MM_BUG_ON(!drm_mm_hole_follows(hole_node) || node->allocated);

 	if (mm->color_adjust)
 		mm->color_adjust(hole_node, color, &adj_start, &adj_end);
+
+	adj_start = max(adj_start, range_start);
+	adj_end = min(adj_end, range_end);

 	if (flags & DRM_MM_CREATE_TOP)
 		adj_start = adj_end - size;

 	if (alignment) {
-		u64 tmp = adj_start;
-		unsigned rem;
+		u64 rem;

-		rem = do_div(tmp, alignment);
+		div64_u64_rem(adj_start, alignment, &rem);
 		if (rem) {
 			if (flags & DRM_MM_CREATE_TOP)
 				adj_start -= rem;
···
 				adj_start += alignment - rem;
 		}
 	}
-
-	BUG_ON(adj_start < hole_start);
-	BUG_ON(adj_end > hole_end);

 	if (adj_start == hole_start) {
 		hole_node->hole_follows = 0;
···

 	drm_mm_interval_tree_add_node(hole_node, node);

-	BUG_ON(node->start + node->size > adj_end);
+	DRM_MM_BUG_ON(node->start < range_start);
+	DRM_MM_BUG_ON(node->start < adj_start);
+	DRM_MM_BUG_ON(node->start + node->size > adj_end);
+	DRM_MM_BUG_ON(node->start + node->size > range_end);

 	node->hole_follows = 0;
 	if (__drm_mm_hole_node_start(node) < hole_end) {
···
 	u64 hole_start, hole_end;
 	u64 adj_start, adj_end;

-	if (WARN_ON(node->size == 0))
-		return -EINVAL;
-
 	end = node->start + node->size;
+	if (unlikely(end <= node->start))
+		return -ENOSPC;

 	/* Find the relevant hole to add our node to */
 	hole = drm_mm_interval_tree_iter_first(&mm->interval_tree,
···
 		if (hole->start < end)
 			return -ENOSPC;
 	} else {
-		hole = list_entry(&mm->head_node.node_list,
-				  typeof(*hole), node_list);
+		hole = list_entry(drm_mm_nodes(mm), typeof(*hole), node_list);
 	}

 	hole = list_last_entry(&hole->node_list, typeof(*hole), node_list);
-	if (!hole->hole_follows)
+	if (!drm_mm_hole_follows(hole))
 		return -ENOSPC;

 	adj_start = hole_start = __drm_mm_hole_node_start(hole);
···
 EXPORT_SYMBOL(drm_mm_reserve_node);

 /**
- * drm_mm_insert_node_generic - search for space and insert @node
- * @mm: drm_mm to allocate from
- * @node:
preallocate node to insert 370 - * @size: size of the allocation 371 - * @alignment: alignment of the allocation 372 - * @color: opaque tag value to use for this node 373 - * @sflags: flags to fine-tune the allocation search 374 - * @aflags: flags to fine-tune the allocation behavior 375 - * 376 - * The preallocated node must be cleared to 0. 377 - * 378 - * Returns: 379 - * 0 on success, -ENOSPC if there's no suitable hole. 380 - */ 381 - int drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node, 382 - u64 size, unsigned alignment, 383 - unsigned long color, 384 - enum drm_mm_search_flags sflags, 385 - enum drm_mm_allocator_flags aflags) 386 - { 387 - struct drm_mm_node *hole_node; 388 - 389 - if (WARN_ON(size == 0)) 390 - return -EINVAL; 391 - 392 - hole_node = drm_mm_search_free_generic(mm, size, alignment, 393 - color, sflags); 394 - if (!hole_node) 395 - return -ENOSPC; 396 - 397 - drm_mm_insert_helper(hole_node, node, size, alignment, color, aflags); 398 - return 0; 399 - } 400 - EXPORT_SYMBOL(drm_mm_insert_node_generic); 401 - 402 - static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node, 403 - struct drm_mm_node *node, 404 - u64 size, unsigned alignment, 405 - unsigned long color, 406 - u64 start, u64 end, 407 - enum drm_mm_allocator_flags flags) 408 - { 409 - struct drm_mm *mm = hole_node->mm; 410 - u64 hole_start = drm_mm_hole_node_start(hole_node); 411 - u64 hole_end = drm_mm_hole_node_end(hole_node); 412 - u64 adj_start = hole_start; 413 - u64 adj_end = hole_end; 414 - 415 - BUG_ON(!hole_node->hole_follows || node->allocated); 416 - 417 - if (adj_start < start) 418 - adj_start = start; 419 - if (adj_end > end) 420 - adj_end = end; 421 - 422 - if (mm->color_adjust) 423 - mm->color_adjust(hole_node, color, &adj_start, &adj_end); 424 - 425 - if (flags & DRM_MM_CREATE_TOP) 426 - adj_start = adj_end - size; 427 - 428 - if (alignment) { 429 - u64 tmp = adj_start; 430 - unsigned rem; 431 - 432 - rem = do_div(tmp, alignment); 433 - 
if (rem) { 434 - if (flags & DRM_MM_CREATE_TOP) 435 - adj_start -= rem; 436 - else 437 - adj_start += alignment - rem; 438 - } 439 - } 440 - 441 - if (adj_start == hole_start) { 442 - hole_node->hole_follows = 0; 443 - list_del(&hole_node->hole_stack); 444 - } 445 - 446 - node->start = adj_start; 447 - node->size = size; 448 - node->mm = mm; 449 - node->color = color; 450 - node->allocated = 1; 451 - 452 - list_add(&node->node_list, &hole_node->node_list); 453 - 454 - drm_mm_interval_tree_add_node(hole_node, node); 455 - 456 - BUG_ON(node->start < start); 457 - BUG_ON(node->start < adj_start); 458 - BUG_ON(node->start + node->size > adj_end); 459 - BUG_ON(node->start + node->size > end); 460 - 461 - node->hole_follows = 0; 462 - if (__drm_mm_hole_node_start(node) < hole_end) { 463 - list_add(&node->hole_stack, &mm->hole_stack); 464 - node->hole_follows = 1; 465 - } 466 - 467 - save_stack(node); 468 - } 469 - 470 - /** 471 365 * drm_mm_insert_node_in_range_generic - ranged search for space and insert @node 472 366 * @mm: drm_mm to allocate from 473 367 * @node: preallocate node to insert ··· 381 483 * 0 on success, -ENOSPC if there's no suitable hole. 382 484 */ 383 485 int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *node, 384 - u64 size, unsigned alignment, 486 + u64 size, u64 alignment, 385 487 unsigned long color, 386 488 u64 start, u64 end, 387 489 enum drm_mm_search_flags sflags, ··· 398 500 if (!hole_node) 399 501 return -ENOSPC; 400 502 401 - drm_mm_insert_helper_range(hole_node, node, 402 - size, alignment, color, 403 - start, end, aflags); 503 + drm_mm_insert_helper(hole_node, node, 504 + size, alignment, color, 505 + start, end, aflags); 404 506 return 0; 405 507 } 406 508 EXPORT_SYMBOL(drm_mm_insert_node_in_range_generic); ··· 411 513 * 412 514 * This just removes a node from its drm_mm allocator. 
The node does not need to 413 515 * be cleared again before it can be re-inserted into this or any other drm_mm 414 - * allocator. It is a bug to call this function on a un-allocated node. 516 + * allocator. It is a bug to call this function on an unallocated node. 415 517 */ 416 518 void drm_mm_remove_node(struct drm_mm_node *node) 417 519 { 418 520 struct drm_mm *mm = node->mm; 419 521 struct drm_mm_node *prev_node; 420 522 421 - if (WARN_ON(!node->allocated)) 422 - return; 423 - 424 - BUG_ON(node->scanned_block || node->scanned_prev_free 425 - || node->scanned_next_free); 523 + DRM_MM_BUG_ON(!node->allocated); 524 + DRM_MM_BUG_ON(node->scanned_block); 426 525 427 526 prev_node = 428 527 list_entry(node->node_list.prev, struct drm_mm_node, node_list); 429 528 430 - if (node->hole_follows) { 431 - BUG_ON(__drm_mm_hole_node_start(node) == 432 - __drm_mm_hole_node_end(node)); 529 + if (drm_mm_hole_follows(node)) { 530 + DRM_MM_BUG_ON(__drm_mm_hole_node_start(node) == 531 + __drm_mm_hole_node_end(node)); 433 532 list_del(&node->hole_stack); 434 - } else 435 - BUG_ON(__drm_mm_hole_node_start(node) != 436 - __drm_mm_hole_node_end(node)); 533 + } else { 534 + DRM_MM_BUG_ON(__drm_mm_hole_node_start(node) != 535 + __drm_mm_hole_node_end(node)); 536 + } 437 537 438 - 439 - if (!prev_node->hole_follows) { 538 + if (!drm_mm_hole_follows(prev_node)) { 440 539 prev_node->hole_follows = 1; 441 540 list_add(&prev_node->hole_stack, &mm->hole_stack); 442 541 } else ··· 445 550 } 446 551 EXPORT_SYMBOL(drm_mm_remove_node); 447 552 448 - static int check_free_hole(u64 start, u64 end, u64 size, unsigned alignment) 553 + static int check_free_hole(u64 start, u64 end, u64 size, u64 alignment) 449 554 { 450 555 if (end - start < size) 451 556 return 0; 452 557 453 558 if (alignment) { 454 - u64 tmp = start; 455 - unsigned rem; 559 + u64 rem; 456 560 457 - rem = do_div(tmp, alignment); 561 + div64_u64_rem(start, alignment, &rem); 458 562 if (rem) 459 563 start += alignment - rem; 460 564 }
··· 461 567 return end >= start + size; 462 568 } 463 569 464 - static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm, 465 - u64 size, 466 - unsigned alignment, 467 - unsigned long color, 468 - enum drm_mm_search_flags flags) 469 - { 470 - struct drm_mm_node *entry; 471 - struct drm_mm_node *best; 472 - u64 adj_start; 473 - u64 adj_end; 474 - u64 best_size; 475 - 476 - BUG_ON(mm->scanned_blocks); 477 - 478 - best = NULL; 479 - best_size = ~0UL; 480 - 481 - __drm_mm_for_each_hole(entry, mm, adj_start, adj_end, 482 - flags & DRM_MM_SEARCH_BELOW) { 483 - u64 hole_size = adj_end - adj_start; 484 - 485 - if (mm->color_adjust) { 486 - mm->color_adjust(entry, color, &adj_start, &adj_end); 487 - if (adj_end <= adj_start) 488 - continue; 489 - } 490 - 491 - if (!check_free_hole(adj_start, adj_end, size, alignment)) 492 - continue; 493 - 494 - if (!(flags & DRM_MM_SEARCH_BEST)) 495 - return entry; 496 - 497 - if (hole_size < best_size) { 498 - best = entry; 499 - best_size = hole_size; 500 - } 501 - } 502 - 503 - return best; 504 - } 505 - 506 570 static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm, 507 571 u64 size, 508 - unsigned alignment, 572 + u64 alignment, 509 573 unsigned long color, 510 574 u64 start, 511 575 u64 end, ··· 475 623 u64 adj_end; 476 624 u64 best_size; 477 625 478 - BUG_ON(mm->scanned_blocks); 626 + DRM_MM_BUG_ON(mm->scan_active); 479 627 480 628 best = NULL; 481 629 best_size = ~0UL; ··· 484 632 flags & DRM_MM_SEARCH_BELOW) { 485 633 u64 hole_size = adj_end - adj_start; 486 634 487 - if (adj_start < start) 488 - adj_start = start; 489 - if (adj_end > end) 490 - adj_end = end; 491 - 492 635 if (mm->color_adjust) { 493 636 mm->color_adjust(entry, color, &adj_start, &adj_end); 494 637 if (adj_end <= adj_start) 495 638 continue; 496 639 } 640 + 641 + adj_start = max(adj_start, start); 642 + adj_end = min(adj_end, end); 497 643 498 644 if (!check_free_hole(adj_start, adj_end, size, alignment)) 499 
645 continue; ··· 519 669 */ 520 670 void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new) 521 671 { 672 + DRM_MM_BUG_ON(!old->allocated); 673 + 522 674 list_replace(&old->node_list, &new->node_list); 523 675 list_replace(&old->hole_stack, &new->hole_stack); 524 676 rb_replace_node(&old->rb, &new->rb, &old->mm->interval_tree); ··· 544 692 * efficient when we simply start to select all objects from the tail of an LRU 545 693 * until there's a suitable hole: Especially for big objects or nodes that 546 694 * otherwise have special allocation constraints there's a good chance we evict 547 - * lots of (smaller) objects unecessarily. 695 + * lots of (smaller) objects unnecessarily. 548 696 * 549 697 * The DRM range allocator supports this use-case through the scanning 550 698 * interfaces. First a scan operation needs to be initialized with 551 - * drm_mm_init_scan() or drm_mm_init_scan_with_range(). The the driver adds 552 - * objects to the roaster (probably by walking an LRU list, but this can be 553 - * freely implemented) until a suitable hole is found or there's no further 554 - * evitable object. 699 + * drm_mm_scan_init() or drm_mm_scan_init_with_range(). The driver adds 700 + * objects to the roster (probably by walking an LRU list, but this can be 701 + * freely implemented) using drm_mm_scan_add_block(), until a suitable hole 702 + * is found or there are no further evictable objects. 555 703 * 556 - * The the driver must walk through all objects again in exactly the reverse 704 + * The driver must walk through all objects again in exactly the reverse 557 705 * order to restore the allocator state. Note that while the allocator is used 558 706 * in the scan mode no other operation is allowed. 559 707 * 560 - * Finally the driver evicts all objects selected in the scan. Adding and 561 - * removing an object is O(1), and since freeing a node is also O(1) the overall 562 - * complexity is O(scanned_objects). 
So like the free stack which needs to be 563 - * walked before a scan operation even begins this is linear in the number of 564 - * objects. It doesn't seem to hurt badly. 708 + * Finally the driver evicts all objects selected (drm_mm_scan_remove_block() 709 + * reported true) in the scan, and any overlapping nodes after color adjustment 710 + * (drm_mm_scan_evict_color()). Adding and removing an object is O(1), and 711 + * since freeing a node is also O(1) the overall complexity is 712 + * O(scanned_objects). So like the free stack which needs to be walked before a 713 + * scan operation even begins this is linear in the number of objects. It 714 + * doesn't seem to hurt too badly. 565 715 */ 566 716 567 717 /** 568 - * drm_mm_init_scan - initialize lru scanning 569 - * @mm: drm_mm to scan 570 - * @size: size of the allocation 571 - * @alignment: alignment of the allocation 572 - * @color: opaque tag value to use for the allocation 573 - * 574 - * This simply sets up the scanning routines with the parameters for the desired 575 - * hole. Note that there's no need to specify allocation flags, since they only 576 - * change the place a node is allocated from within a suitable hole. 577 - * 578 - * Warning: 579 - * As long as the scan list is non-empty, no other operations than 580 - * adding/removing nodes to/from the scan list are allowed. 
581 - */ 582 - void drm_mm_init_scan(struct drm_mm *mm, 583 - u64 size, 584 - unsigned alignment, 585 - unsigned long color) 586 - { 587 - mm->scan_color = color; 588 - mm->scan_alignment = alignment; 589 - mm->scan_size = size; 590 - mm->scanned_blocks = 0; 591 - mm->scan_hit_start = 0; 592 - mm->scan_hit_end = 0; 593 - mm->scan_check_range = 0; 594 - mm->prev_scanned_node = NULL; 595 - } 596 - EXPORT_SYMBOL(drm_mm_init_scan); 597 - 598 - /** 599 - * drm_mm_init_scan - initialize range-restricted lru scanning 718 + * drm_mm_scan_init_with_range - initialize range-restricted lru scanning 719 + * @scan: scan state 600 720 * @mm: drm_mm to scan 601 721 * @size: size of the allocation 602 722 * @alignment: alignment of the allocation 603 723 * @color: opaque tag value to use for the allocation 604 724 * @start: start of the allowed range for the allocation 605 725 * @end: end of the allowed range for the allocation 726 + * @flags: flags to specify how the allocation will be performed afterwards 606 727 * 607 728 * This simply sets up the scanning routines with the parameters for the desired 608 - * hole. Note that there's no need to specify allocation flags, since they only 609 - * change the place a node is allocated from within a suitable hole. 729 + * hole. 610 730 * 611 731 * Warning: 612 732 * As long as the scan list is non-empty, no other operations than 613 733 * adding/removing nodes to/from the scan list are allowed. 
614 734 */ 615 - void drm_mm_init_scan_with_range(struct drm_mm *mm, 735 + void drm_mm_scan_init_with_range(struct drm_mm_scan *scan, 736 + struct drm_mm *mm, 616 737 u64 size, 617 - unsigned alignment, 738 + u64 alignment, 618 739 unsigned long color, 619 740 u64 start, 620 - u64 end) 741 + u64 end, 742 + unsigned int flags) 621 743 { 622 - mm->scan_color = color; 623 - mm->scan_alignment = alignment; 624 - mm->scan_size = size; 625 - mm->scanned_blocks = 0; 626 - mm->scan_hit_start = 0; 627 - mm->scan_hit_end = 0; 628 - mm->scan_start = start; 629 - mm->scan_end = end; 630 - mm->scan_check_range = 1; 631 - mm->prev_scanned_node = NULL; 744 + DRM_MM_BUG_ON(start >= end); 745 + DRM_MM_BUG_ON(!size || size > end - start); 746 + DRM_MM_BUG_ON(mm->scan_active); 747 + 748 + scan->mm = mm; 749 + 750 + if (alignment <= 1) 751 + alignment = 0; 752 + 753 + scan->color = color; 754 + scan->alignment = alignment; 755 + scan->remainder_mask = is_power_of_2(alignment) ? alignment - 1 : 0; 756 + scan->size = size; 757 + scan->flags = flags; 758 + 759 + DRM_MM_BUG_ON(end <= start); 760 + scan->range_start = start; 761 + scan->range_end = end; 762 + 763 + scan->hit_start = U64_MAX; 764 + scan->hit_end = 0; 632 765 } 633 - EXPORT_SYMBOL(drm_mm_init_scan_with_range); 766 + EXPORT_SYMBOL(drm_mm_scan_init_with_range); 634 767 635 768 /** 636 769 * drm_mm_scan_add_block - add a node to the scan list 770 + * @scan: the active drm_mm scanner 637 771 * @node: drm_mm_node to add 638 772 * 639 773 * Add a node to the scan list that might be freed to make space for the desired ··· 628 790 * Returns: 629 791 * True if a hole has been found, false otherwise. 
630 792 */ 631 - bool drm_mm_scan_add_block(struct drm_mm_node *node) 793 + bool drm_mm_scan_add_block(struct drm_mm_scan *scan, 794 + struct drm_mm_node *node) 632 795 { 633 - struct drm_mm *mm = node->mm; 634 - struct drm_mm_node *prev_node; 796 + struct drm_mm *mm = scan->mm; 797 + struct drm_mm_node *hole; 635 798 u64 hole_start, hole_end; 799 + u64 col_start, col_end; 636 800 u64 adj_start, adj_end; 637 801 638 - mm->scanned_blocks++; 802 + DRM_MM_BUG_ON(node->mm != mm); 803 + DRM_MM_BUG_ON(!node->allocated); 804 + DRM_MM_BUG_ON(node->scanned_block); 805 + node->scanned_block = true; 806 + mm->scan_active++; 639 807 640 - BUG_ON(node->scanned_block); 641 - node->scanned_block = 1; 808 + /* Remove this block from the node_list so that we enlarge the hole 809 + * (distance between the end of our previous node and the start of 810 + * our next), without poisoning the link so that we can restore it 811 + * later in drm_mm_scan_remove_block(). 812 + */ 813 + hole = list_prev_entry(node, node_list); 814 + DRM_MM_BUG_ON(list_next_entry(hole, node_list) != node); 815 + __list_del_entry(&node->node_list); 642 816 643 - prev_node = list_entry(node->node_list.prev, struct drm_mm_node, 644 - node_list); 817 + hole_start = __drm_mm_hole_node_start(hole); 818 + hole_end = __drm_mm_hole_node_end(hole); 645 819 646 - node->scanned_preceeds_hole = prev_node->hole_follows; 647 - prev_node->hole_follows = 1; 648 - list_del(&node->node_list); 649 - node->node_list.prev = &prev_node->node_list; 650 - node->node_list.next = &mm->prev_scanned_node->node_list; 651 - mm->prev_scanned_node = node; 652 - 653 - adj_start = hole_start = drm_mm_hole_node_start(prev_node); 654 - adj_end = hole_end = drm_mm_hole_node_end(prev_node); 655 - 656 - if (mm->scan_check_range) { 657 - if (adj_start < mm->scan_start) 658 - adj_start = mm->scan_start; 659 - if (adj_end > mm->scan_end) 660 - adj_end = mm->scan_end; 661 - } 662 - 820 + col_start = hole_start; 821 + col_end = hole_end; 663 822 if
(mm->color_adjust) 664 - mm->color_adjust(prev_node, mm->scan_color, 665 - &adj_start, &adj_end); 823 + mm->color_adjust(hole, scan->color, &col_start, &col_end); 666 824 667 - if (check_free_hole(adj_start, adj_end, 668 - mm->scan_size, mm->scan_alignment)) { 669 - mm->scan_hit_start = hole_start; 670 - mm->scan_hit_end = hole_end; 671 - return true; 825 + adj_start = max(col_start, scan->range_start); 826 + adj_end = min(col_end, scan->range_end); 827 + if (adj_end <= adj_start || adj_end - adj_start < scan->size) 828 + return false; 829 + 830 + if (scan->flags == DRM_MM_CREATE_TOP) 831 + adj_start = adj_end - scan->size; 832 + 833 + if (scan->alignment) { 834 + u64 rem; 835 + 836 + if (likely(scan->remainder_mask)) 837 + rem = adj_start & scan->remainder_mask; 838 + else 839 + div64_u64_rem(adj_start, scan->alignment, &rem); 840 + if (rem) { 841 + adj_start -= rem; 842 + if (scan->flags != DRM_MM_CREATE_TOP) 843 + adj_start += scan->alignment; 844 + if (adj_start < max(col_start, scan->range_start) || 845 + min(col_end, scan->range_end) - adj_start < scan->size) 846 + return false; 847 + 848 + if (adj_end <= adj_start || 849 + adj_end - adj_start < scan->size) 850 + return false; 851 + } 672 852 } 673 853 674 - return false; 854 + scan->hit_start = adj_start; 855 + scan->hit_end = adj_start + scan->size; 856 + 857 + DRM_MM_BUG_ON(scan->hit_start >= scan->hit_end); 858 + DRM_MM_BUG_ON(scan->hit_start < hole_start); 859 + DRM_MM_BUG_ON(scan->hit_end > hole_end); 860 + 861 + return true; 675 862 } 676 863 EXPORT_SYMBOL(drm_mm_scan_add_block); 677 864 678 865 /** 679 866 * drm_mm_scan_remove_block - remove a node from the scan list 867 + * @scan: the active drm_mm scanner 680 868 * @node: drm_mm_node to remove 681 869 * 682 - * Nodes _must_ be removed in the exact same order from the scan list as they 683 - * have been added, otherwise the internal state of the memory manager will be 684 - * corrupted. 
870 + * Nodes _must_ be removed in exactly the reverse order from the scan list as 871 + * they have been added (e.g. using list_add as they are added and then 872 + * list_for_each over that eviction list to remove), otherwise the internal 873 + * state of the memory manager will be corrupted. 685 874 * 686 875 * When the scan list is empty, the selected memory nodes can be freed. An 687 876 * immediately following drm_mm_search_free with !DRM_MM_SEARCH_BEST will then ··· 718 853 * True if this block should be evicted, false otherwise. Will always 719 854 * return false when no hole has been found. 720 855 */ 721 - bool drm_mm_scan_remove_block(struct drm_mm_node *node) 856 + bool drm_mm_scan_remove_block(struct drm_mm_scan *scan, 857 + struct drm_mm_node *node) 722 858 { 723 - struct drm_mm *mm = node->mm; 724 859 struct drm_mm_node *prev_node; 725 860 726 - mm->scanned_blocks--; 861 + DRM_MM_BUG_ON(node->mm != scan->mm); 862 + DRM_MM_BUG_ON(!node->scanned_block); 863 + node->scanned_block = false; 727 864 728 - BUG_ON(!node->scanned_block); 729 - node->scanned_block = 0; 865 + DRM_MM_BUG_ON(!node->mm->scan_active); 866 + node->mm->scan_active--; 730 867 731 - prev_node = list_entry(node->node_list.prev, struct drm_mm_node, 732 - node_list); 733 - 734 - prev_node->hole_follows = node->scanned_preceeds_hole; 868 + /* During drm_mm_scan_add_block() we decoupled this node leaving 869 + * its pointers intact. Now that the caller is walking back along 870 + * the eviction list we can restore this block into its rightful 871 + * place on the full node_list. To confirm that the caller is walking 872 + * backwards correctly we check that prev_node->next == node->next, 873 + * i.e. both believe the same node should be on the other side of the 874 + * hole. 
875 + */ 876 + prev_node = list_prev_entry(node, node_list); 877 + DRM_MM_BUG_ON(list_next_entry(prev_node, node_list) != 878 + list_next_entry(node, node_list)); 735 879 list_add(&node->node_list, &prev_node->node_list); 736 880 737 - return (drm_mm_hole_node_end(node) > mm->scan_hit_start && 738 - node->start < mm->scan_hit_end); 881 + return (node->start + node->size > scan->hit_start && 882 + node->start < scan->hit_end); 739 883 } 740 884 EXPORT_SYMBOL(drm_mm_scan_remove_block); 741 885 742 886 /** 743 - * drm_mm_clean - checks whether an allocator is clean 744 - * @mm: drm_mm allocator to check 887 + * drm_mm_scan_color_evict - evict overlapping nodes on either side of hole 888 + * @scan: drm_mm scan with target hole 889 + * 890 + * After completing an eviction scan and removing the selected nodes, we may 891 + * need to remove a few more nodes from either side of the target hole if 892 + * mm.color_adjust is being used. 745 893 * 746 894 * Returns: 747 - * True if the allocator is completely free, false if there's still a node 748 - * allocated in it. 895 + * A node to evict, or NULL if there are no overlapping nodes. 
749 896 */ 750 - bool drm_mm_clean(struct drm_mm * mm) 897 + struct drm_mm_node *drm_mm_scan_color_evict(struct drm_mm_scan *scan) 751 898 { 752 - struct list_head *head = &mm->head_node.node_list; 899 + struct drm_mm *mm = scan->mm; 900 + struct drm_mm_node *hole; 901 + u64 hole_start, hole_end; 753 902 754 - return (head->next->next == head); 903 + DRM_MM_BUG_ON(list_empty(&mm->hole_stack)); 904 + 905 + if (!mm->color_adjust) 906 + return NULL; 907 + 908 + hole = list_first_entry(&mm->hole_stack, typeof(*hole), hole_stack); 909 + hole_start = __drm_mm_hole_node_start(hole); 910 + hole_end = __drm_mm_hole_node_end(hole); 911 + 912 + DRM_MM_BUG_ON(hole_start > scan->hit_start); 913 + DRM_MM_BUG_ON(hole_end < scan->hit_end); 914 + 915 + mm->color_adjust(hole, scan->color, &hole_start, &hole_end); 916 + if (hole_start > scan->hit_start) 917 + return hole; 918 + if (hole_end < scan->hit_end) 919 + return list_next_entry(hole, node_list); 920 + 921 + return NULL; 755 922 } 756 - EXPORT_SYMBOL(drm_mm_clean); 923 + EXPORT_SYMBOL(drm_mm_scan_color_evict); 757 924 758 925 /** 759 926 * drm_mm_init - initialize a drm-mm allocator ··· 795 898 * 796 899 * Note that @mm must be cleared to 0 before calling this function. 797 900 */ 798 - void drm_mm_init(struct drm_mm * mm, u64 start, u64 size) 901 + void drm_mm_init(struct drm_mm *mm, u64 start, u64 size) 799 902 { 903 + DRM_MM_BUG_ON(start + size <= start); 904 + 800 905 INIT_LIST_HEAD(&mm->hole_stack); 801 - mm->scanned_blocks = 0; 906 + mm->scan_active = 0; 802 907 803 908 /* Clever trick to avoid a special case in the free hole tracking. 
*/ 804 909 INIT_LIST_HEAD(&mm->head_node.node_list); 805 910 mm->head_node.allocated = 0; 806 911 mm->head_node.hole_follows = 1; 807 - mm->head_node.scanned_block = 0; 808 - mm->head_node.scanned_prev_free = 0; 809 - mm->head_node.scanned_next_free = 0; 810 912 mm->head_node.mm = mm; 811 913 mm->head_node.start = start + size; 812 914 mm->head_node.size = start - mm->head_node.start; ··· 826 930 */ 827 931 void drm_mm_takedown(struct drm_mm *mm) 828 932 { 829 - if (WARN(!list_empty(&mm->head_node.node_list), 933 + if (WARN(!drm_mm_clean(mm), 830 934 "Memory manager not clean during takedown.\n")) 831 935 show_leaks(mm); 832 - 833 936 } 834 937 EXPORT_SYMBOL(drm_mm_takedown); 835 938 836 - static u64 drm_mm_debug_hole(struct drm_mm_node *entry, 837 - const char *prefix) 939 + static u64 drm_mm_debug_hole(const struct drm_mm_node *entry, 940 + const char *prefix) 838 941 { 839 942 u64 hole_start, hole_end, hole_size; 840 943 ··· 854 959 * @mm: drm_mm allocator to dump 855 960 * @prefix: prefix to use for dumping to dmesg 856 961 */ 857 - void drm_mm_debug_table(struct drm_mm *mm, const char *prefix) 962 + void drm_mm_debug_table(const struct drm_mm *mm, const char *prefix) 858 963 { 859 - struct drm_mm_node *entry; 964 + const struct drm_mm_node *entry; 860 965 u64 total_used = 0, total_free = 0, total = 0; 861 966 862 967 total_free += drm_mm_debug_hole(&mm->head_node, prefix); ··· 875 980 EXPORT_SYMBOL(drm_mm_debug_table); 876 981 877 982 #if defined(CONFIG_DEBUG_FS) 878 - static u64 drm_mm_dump_hole(struct seq_file *m, struct drm_mm_node *entry) 983 + static u64 drm_mm_dump_hole(struct seq_file *m, const struct drm_mm_node *entry) 879 984 { 880 985 u64 hole_start, hole_end, hole_size; 881 986 ··· 896 1001 * @m: seq_file to dump to 897 1002 * @mm: drm_mm allocator to dump 898 1003 */ 899 - int drm_mm_dump_table(struct seq_file *m, struct drm_mm *mm) 1004 + int drm_mm_dump_table(struct seq_file *m, const struct drm_mm *mm) 900 1005 { 901 - struct drm_mm_node 
*entry; 1006 + const struct drm_mm_node *entry; 902 1007 u64 total_used = 0, total_free = 0, total = 0; 903 1008 904 1009 total_free += drm_mm_dump_hole(m, &mm->head_node);
+55 -86
drivers/gpu/drm/drm_mode_config.c
···
     * OF THIS SOFTWARE.
     */

+   #include <drm/drm_encoder.h>
    #include <drm/drm_mode_config.h>
    #include <drm/drmP.h>
···
                struct drm_file *file_priv)
    {
        struct drm_mode_card_res *card_res = data;
-       struct list_head *lh;
        struct drm_framebuffer *fb;
        struct drm_connector *connector;
        struct drm_crtc *crtc;
        struct drm_encoder *encoder;
-       int ret = 0;
-       int connector_count = 0;
-       int crtc_count = 0;
-       int fb_count = 0;
-       int encoder_count = 0;
-       int copied = 0;
+       int count, ret = 0;
        uint32_t __user *fb_id;
        uint32_t __user *crtc_id;
        uint32_t __user *connector_id;
        uint32_t __user *encoder_id;
+       struct drm_connector_list_iter conn_iter;

        if (!drm_core_check_feature(dev, DRIVER_MODESET))
            return -EINVAL;


        mutex_lock(&file_priv->fbs_lock);
-       /*
-        * For the non-control nodes we need to limit the list of resources
-        * by IDs in the group list for this node
-        */
-       list_for_each(lh, &file_priv->fbs)
-           fb_count++;
-
-       /* handle this in 4 parts */
-       /* FBs */
-       if (card_res->count_fbs >= fb_count) {
-           copied = 0;
-           fb_id = (uint32_t __user *)(unsigned long)card_res->fb_id_ptr;
-           list_for_each_entry(fb, &file_priv->fbs, filp_head) {
-               if (put_user(fb->base.id, fb_id + copied)) {
-                   mutex_unlock(&file_priv->fbs_lock);
-                   return -EFAULT;
-               }
-               copied++;
+       count = 0;
+       fb_id = u64_to_user_ptr(card_res->fb_id_ptr);
+       list_for_each_entry(fb, &file_priv->fbs, filp_head) {
+           if (count < card_res->count_fbs &&
+               put_user(fb->base.id, fb_id + count)) {
+               mutex_unlock(&file_priv->fbs_lock);
+               return -EFAULT;
            }
+           count++;
        }
-       card_res->count_fbs = fb_count;
+       card_res->count_fbs = count;
        mutex_unlock(&file_priv->fbs_lock);

-       /* mode_config.mutex protects the connector list against e.g. DP MST
-        * connector hot-adding. CRTC/Plane lists are invariant. */
-       mutex_lock(&dev->mode_config.mutex);
-       drm_for_each_crtc(crtc, dev)
-           crtc_count++;
-
-       drm_for_each_connector(connector, dev)
-           connector_count++;
-
-       drm_for_each_encoder(encoder, dev)
-           encoder_count++;

        card_res->max_height = dev->mode_config.max_height;
        card_res->min_height = dev->mode_config.min_height;
        card_res->max_width = dev->mode_config.max_width;
        card_res->min_width = dev->mode_config.min_width;

-       /* CRTCs */
-       if (card_res->count_crtcs >= crtc_count) {
-           copied = 0;
-           crtc_id = (uint32_t __user *)(unsigned long)card_res->crtc_id_ptr;
-           drm_for_each_crtc(crtc, dev) {
-               if (put_user(crtc->base.id, crtc_id + copied)) {
-                   ret = -EFAULT;
-                   goto out;
-               }
-               copied++;
-           }
+       count = 0;
+       crtc_id = u64_to_user_ptr(card_res->crtc_id_ptr);
+       drm_for_each_crtc(crtc, dev) {
+           if (count < card_res->count_crtcs &&
+               put_user(crtc->base.id, crtc_id + count))
+               return -EFAULT;
+           count++;
        }
-       card_res->count_crtcs = crtc_count;
+       card_res->count_crtcs = count;

-       /* Encoders */
-       if (card_res->count_encoders >= encoder_count) {
-           copied = 0;
-           encoder_id = (uint32_t __user *)(unsigned long)card_res->encoder_id_ptr;
-           drm_for_each_encoder(encoder, dev) {
-               if (put_user(encoder->base.id, encoder_id +
-                        copied)) {
-                   ret = -EFAULT;
-                   goto out;
-               }
-               copied++;
-           }
+       count = 0;
+       encoder_id = u64_to_user_ptr(card_res->encoder_id_ptr);
+       drm_for_each_encoder(encoder, dev) {
+           if (count < card_res->count_encoders &&
+               put_user(encoder->base.id, encoder_id + count))
+               return -EFAULT;
+           count++;
        }
-       card_res->count_encoders = encoder_count;
+       card_res->count_encoders = count;

-       /* Connectors */
-       if (card_res->count_connectors >= connector_count) {
-           copied = 0;
-           connector_id = (uint32_t __user *)(unsigned long)card_res->connector_id_ptr;
-           drm_for_each_connector(connector, dev) {
-               if (put_user(connector->base.id,
-                        connector_id + copied)) {
-                   ret = -EFAULT;
-                   goto out;
-               }
-               copied++;
+       drm_connector_list_iter_get(dev, &conn_iter);
+       count = 0;
+       connector_id = u64_to_user_ptr(card_res->connector_id_ptr);
+       drm_for_each_connector_iter(connector, &conn_iter) {
+           if (count < card_res->count_connectors &&
+               put_user(connector->base.id, connector_id + count)) {
+               drm_connector_list_iter_put(&conn_iter);
+               return -EFAULT;
            }
+           count++;
        }
-       card_res->count_connectors = connector_count;
+       card_res->count_connectors = count;
+       drm_connector_list_iter_put(&conn_iter);

-   out:
-       mutex_unlock(&dev->mode_config.mutex);
        return ret;
    }
···
    struct drm_plane *plane;
    struct drm_encoder *encoder;
    struct drm_connector *connector;
+   struct drm_connector_list_iter conn_iter;

    drm_for_each_plane(plane, dev)
        if (plane->funcs->reset)
···
        if (encoder->funcs->reset)
            encoder->funcs->reset(encoder);

-   mutex_lock(&dev->mode_config.mutex);
-   drm_for_each_connector(connector, dev)
+   drm_connector_list_iter_get(dev, &conn_iter);
+   drm_for_each_connector_iter(connector, &conn_iter)
        if (connector->funcs->reset)
            connector->funcs->reset(connector);
-   mutex_unlock(&dev->mode_config.mutex);
+   drm_connector_list_iter_put(&conn_iter);
   }
   EXPORT_SYMBOL(drm_mode_config_reset);
···
    idr_init(&dev->mode_config.crtc_idr);
    idr_init(&dev->mode_config.tile_idr);
    ida_init(&dev->mode_config.connector_ida);
+   spin_lock_init(&dev->mode_config.connector_list_lock);

-   drm_modeset_lock_all(dev);
    drm_mode_create_standard_properties(dev);
-   drm_modeset_unlock_all(dev);

    /* Just to be sure */
    dev->mode_config.num_fb = 0;
···
    */
   void drm_mode_config_cleanup(struct drm_device *dev)
   {
-   struct drm_connector *connector, *ot;
+   struct drm_connector *connector;
+   struct drm_connector_list_iter conn_iter;
    struct drm_crtc *crtc, *ct;
    struct drm_encoder *encoder, *enct;
    struct drm_framebuffer *fb, *fbt;
···
        encoder->funcs->destroy(encoder);
    }

-   list_for_each_entry_safe(connector, ot,
-                &dev->mode_config.connector_list, head) {
-       connector->funcs->destroy(connector);
+   drm_connector_list_iter_get(dev, &conn_iter);
+   drm_for_each_connector_iter(connector, &conn_iter) {
+       /* drm_connector_list_iter holds a full reference to the
+        * current connector itself, which means it is inherently safe
+        * against unreferencing the current connector - but not against
+        * deleting it right away. */
+       drm_connector_unreference(connector);
    }
+   drm_connector_list_iter_put(&conn_iter);
+   WARN_ON(!list_empty(&dev->mode_config.connector_list));

    list_for_each_entry_safe(property, pt, &dev->mode_config.property_list,
                 head) {
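The rewritten drm_mode_getresources above drops the separate count-then-copy passes: each list is walked once, an ID is copied only while the caller-supplied array still has room, but counting continues so userspace can see that its buffer was too small and retry. A minimal userspace sketch of that single-pass idiom (the names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of the copy-and-keep-counting idiom: copy an entry
 * only while the caller's buffer has room, but always return the total. */
static size_t copy_ids(const uint32_t *ids, size_t n_ids,
		       uint32_t *out, size_t out_cap)
{
	size_t count = 0;
	size_t i;

	for (i = 0; i < n_ids; i++) {
		if (count < out_cap)
			out[count] = ids[i];
		count++;
	}
	/* The caller compares the returned total against out_cap: if the
	 * total exceeds the buffer, it reallocates and asks again. */
	return count;
}
```

This is also why the `count_fbs >= fb_count` pre-checks could go away: a too-small buffer is no longer an early-out, just a partial copy plus an honest total.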
+2 -1
drivers/gpu/drm/drm_mode_object.c
···
    #include <linux/export.h>
    #include <drm/drmP.h>
    #include <drm/drm_mode_object.h>
+   #include <drm/drm_atomic.h>

    #include "drm_crtc_internal.h"
···
     * their value in obj->properties->values[].. mostly to avoid
     * having to deal w/ EDID and similar props in atomic paths:
     */
-   if (drm_core_check_feature(property->dev, DRIVER_ATOMIC) &&
+   if (drm_drv_uses_atomic_modeset(property->dev) &&
        !(property->flags & DRM_MODE_PROP_IMMUTABLE))
        return drm_atomic_get_property(obj, property, val);
+7 -18
drivers/gpu/drm/drm_modeset_helper.c
···

    INIT_LIST_HEAD(&panel_list);

+   spin_lock_irq(&dev->mode_config.connector_list_lock);
    list_for_each_entry_safe(connector, tmp,
                 &dev->mode_config.connector_list, head) {
        if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS ||
···
    }

    list_splice(&panel_list, &dev->mode_config.connector_list);
+   spin_unlock_irq(&dev->mode_config.connector_list_lock);
   }
   EXPORT_SYMBOL(drm_helper_move_panel_connectors_to_head);

   /**
    * drm_helper_mode_fill_fb_struct - fill out framebuffer metadata
+   * @dev: DRM device
    * @fb: drm_framebuffer object to fill out
    * @mode_cmd: metadata from the userspace fb creation request
    *
    * This helper can be used in a drivers fb_create callback to pre-fill the fb's
    * metadata fields.
    */
-  void drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb,
+  void drm_helper_mode_fill_fb_struct(struct drm_device *dev,
+                   struct drm_framebuffer *fb,
                    const struct drm_mode_fb_cmd2 *mode_cmd)
   {
-   const struct drm_format_info *info;
    int i;

-   info = drm_format_info(mode_cmd->pixel_format);
-   if (!info || !info->depth) {
-       struct drm_format_name_buf format_name;
-
-       DRM_DEBUG_KMS("non-RGB pixel format %s\n",
-                 drm_get_format_name(mode_cmd->pixel_format,
-                         &format_name));
-
-       fb->depth = 0;
-       fb->bits_per_pixel = 0;
-   } else {
-       fb->depth = info->depth;
-       fb->bits_per_pixel = info->cpp[0] * 8;
-   }
-
+   fb->dev = dev;
+   fb->format = drm_format_info(mode_cmd->pixel_format);
    fb->width = mode_cmd->width;
    fb->height = mode_cmd->height;
    for (i = 0; i < 4; i++) {
···
        fb->offsets[i] = mode_cmd->offsets[i];
    }
    fb->modifier = mode_cmd->modifier[0];
-   fb->pixel_format = mode_cmd->pixel_format;
    fb->flags = mode_cmd->flags;
   }
   EXPORT_SYMBOL(drm_helper_mode_fill_fb_struct);
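After this change drm_helper_mode_fill_fb_struct only stores the drm_format_info pointer; derived values such as bits-per-pixel are computed on demand as `cpp[0] * 8`, which is exactly the substitution the driver conversions later in this pull make. A toy lookup table showing the derivation, using invented fourcc values rather than the real DRM_FORMAT_* codes:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for struct drm_format_info; the fourcc values below are
 * invented for this sketch, not the real DRM_FORMAT_* codes. */
struct fmt_info {
	uint32_t fourcc;
	uint8_t cpp;   /* bytes (chars) per pixel, plane 0 */
	uint8_t depth; /* legacy depth, 0 for non-RGB formats */
};

static const struct fmt_info fmt_table[] = {
	{ 1, 4, 24 }, /* an XRGB8888-like format */
	{ 2, 2, 16 }, /* an RGB565-like format */
	{ 3, 1, 0 },  /* a YUV-like format with no legacy depth */
};

static const struct fmt_info *fmt_lookup(uint32_t fourcc)
{
	size_t i;

	for (i = 0; i < sizeof(fmt_table) / sizeof(fmt_table[0]); i++)
		if (fmt_table[i].fourcc == fourcc)
			return &fmt_table[i];
	return NULL; /* unknown format */
}

/* What the converted drivers compute instead of fb->bits_per_pixel. */
static int fmt_bpp(const struct fmt_info *f)
{
	return f->cpp * 8;
}
```

Keeping one canonical info struct per format avoids the stale duplicated `depth`/`bits_per_pixel` fields the old code had to fill in.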
+1
drivers/gpu/drm/drm_of.c
···
    #include <linux/of_graph.h>
    #include <drm/drmP.h>
    #include <drm/drm_crtc.h>
+   #include <drm/drm_encoder.h>
    #include <drm/drm_of.h>

    static void drm_release_of(struct device *dev, void *data)
+9 -5
drivers/gpu/drm/drm_plane.c
···
        return -ENOENT;

    drm_modeset_lock(&plane->mutex, NULL);
-   if (plane->crtc)
+   if (plane->state && plane->state->crtc)
+       plane_resp->crtc_id = plane->state->crtc->base.id;
+   else if (!plane->state && plane->crtc)
        plane_resp->crtc_id = plane->crtc->base.id;
    else
        plane_resp->crtc_id = 0;

-   if (plane->fb)
+   if (plane->state && plane->state->fb)
+       plane_resp->fb_id = plane->state->fb->base.id;
+   else if (!plane->state && plane->fb)
        plane_resp->fb_id = plane->fb->base.id;
    else
        plane_resp->fb_id = 0;
···
    }

    /* Check whether this plane supports the fb pixel format. */
-   ret = drm_plane_check_pixel_format(plane, fb->pixel_format);
+   ret = drm_plane_check_pixel_format(plane, fb->format->format);
    if (ret) {
        struct drm_format_name_buf format_name;
        DRM_DEBUG_KMS("Invalid pixel format %s\n",
                  drm_get_format_name(fb->format->format,
                          &format_name));
        goto out;
    }
···
    if (ret)
        goto out;

-   if (crtc->primary->fb->pixel_format != fb->pixel_format) {
+   if (crtc->primary->fb->format != fb->format) {
        DRM_DEBUG_KMS("Page flip is not allowed to change frame buffer format.\n");
        ret = -EINVAL;
        goto out;
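The getplane change above reads the CRTC and FB from `plane->state` when the driver is atomic, and only falls back to the legacy `plane->crtc`/`plane->fb` fields when there is no state. The selection logic, reduced to a standalone sketch with hypothetical struct names:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, heavily simplified mirrors of the DRM objects involved. */
struct obj { uint32_t id; };
struct plane_state { struct obj *crtc; };
struct plane {
	struct plane_state *state; /* non-NULL on atomic drivers */
	struct obj *crtc;          /* legacy pointer, may be stale */
};

/* Prefer the atomic state when the driver has one, else the legacy field. */
static uint32_t plane_crtc_id(const struct plane *p)
{
	if (p->state && p->state->crtc)
		return p->state->crtc->id;
	else if (!p->state && p->crtc)
		return p->crtc->id;
	return 0; /* plane disabled */
}
```

The point of the `!p->state` guard is that on an atomic driver a set legacy pointer must never win over a NULL state pointer, since only the state is authoritative there.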
+5 -1
drivers/gpu/drm/drm_plane_helper.c
···
    #include <drm/drm_rect.h>
    #include <drm/drm_atomic.h>
    #include <drm/drm_crtc_helper.h>
+   #include <drm/drm_encoder.h>
    #include <drm/drm_atomic_helper.h>

    #define SUBPIXEL_MASK 0xffff
···
   {
    struct drm_device *dev = crtc->dev;
    struct drm_connector *connector;
+   struct drm_connector_list_iter conn_iter;
    int count = 0;

    /*
···
     */
    WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));

-   drm_for_each_connector(connector, dev) {
+   drm_connector_list_iter_get(dev, &conn_iter);
+   drm_for_each_connector_iter(connector, &conn_iter) {
        if (connector->encoder && connector->encoder->crtc == crtc) {
            if (connector_list != NULL && count < num_connectors)
                *(connector_list++) = connector;
···
            count++;
        }
    }
+   drm_connector_list_iter_put(&conn_iter);

    return count;
   }
+12 -6
drivers/gpu/drm/drm_probe_helper.c
···
   {
    bool poll = false;
    struct drm_connector *connector;
+   struct drm_connector_list_iter conn_iter;
    unsigned long delay = DRM_OUTPUT_POLL_PERIOD;

    WARN_ON(!mutex_is_locked(&dev->mode_config.mutex));
···
    if (!dev->mode_config.poll_enabled || !drm_kms_helper_poll)
        return;

-   drm_for_each_connector(connector, dev) {
+   drm_connector_list_iter_get(dev, &conn_iter);
+   drm_for_each_connector_iter(connector, &conn_iter) {
        if (connector->polled & (DRM_CONNECTOR_POLL_CONNECT |
                     DRM_CONNECTOR_POLL_DISCONNECT))
            poll = true;
    }
+   drm_connector_list_iter_put(&conn_iter);

    if (dev->mode_config.delayed_event) {
        poll = true;
···
    struct delayed_work *delayed_work = to_delayed_work(work);
    struct drm_device *dev = container_of(delayed_work, struct drm_device, mode_config.output_poll_work);
    struct drm_connector *connector;
+   struct drm_connector_list_iter conn_iter;
    enum drm_connector_status old_status;
    bool repoll = false, changed;
···
        goto out;
    }

-   drm_for_each_connector(connector, dev) {
-
+   drm_connector_list_iter_get(dev, &conn_iter);
+   drm_for_each_connector_iter(connector, &conn_iter) {
        /* Ignore forced connectors. */
        if (connector->force)
            continue;
···
            changed = true;
        }
    }
+   drm_connector_list_iter_put(&conn_iter);

    mutex_unlock(&dev->mode_config.mutex);
···
   bool drm_helper_hpd_irq_event(struct drm_device *dev)
   {
    struct drm_connector *connector;
+   struct drm_connector_list_iter conn_iter;
    enum drm_connector_status old_status;
    bool changed = false;
···
        return false;

    mutex_lock(&dev->mode_config.mutex);
-   drm_for_each_connector(connector, dev) {
-
+   drm_connector_list_iter_get(dev, &conn_iter);
+   drm_for_each_connector_iter(connector, &conn_iter) {
        /* Only handle HPD capable connectors. */
        if (!(connector->polled & DRM_CONNECTOR_POLL_HPD))
            continue;
···
        if (old_status != connector->status)
            changed = true;
    }
-
+   drm_connector_list_iter_put(&conn_iter);
    mutex_unlock(&dev->mode_config.mutex);

    if (changed)
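Every loop in these helpers moves to the same get/iterate/put triple. The idea behind the new iterator is that it pins (takes a reference on) the element it currently points at, so walking can proceed without holding the connector list lock, and `put` drops the pin on early exit. A very loose userspace model of just that pinning discipline; the array-backed "list" and every name here are inventions of this sketch, not the kernel implementation:

```c
#include <assert.h>
#include <stddef.h>

struct connector { int refcount; };

struct conn_iter {
	struct connector *conns;   /* stand-in for the connector list */
	size_t n, pos;
	struct connector *cur;     /* currently pinned element, if any */
};

static void iter_get(struct conn_iter *it, struct connector *conns, size_t n)
{
	it->conns = conns;
	it->n = n;
	it->pos = 0;
	it->cur = NULL;
}

static struct connector *iter_next(struct conn_iter *it)
{
	if (it->cur)
		it->cur->refcount--;       /* unpin the previous element */
	if (it->pos == it->n) {
		it->cur = NULL;
		return NULL;
	}
	it->cur = &it->conns[it->pos++];
	it->cur->refcount++;               /* pin the current element */
	return it->cur;
}

static void iter_put(struct conn_iter *it)
{
	if (it->cur) {
		it->cur->refcount--;       /* drop the pin on early break */
		it->cur = NULL;
	}
}
```

Pairing every `get` with a `put` matters precisely because a `break` out of the loop would otherwise leak the reference on the element the iterator was parked on.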
+1 -20
drivers/gpu/drm/drm_simple_kms_helper.c
···
   int drm_simple_display_pipe_attach_bridge(struct drm_simple_display_pipe *pipe,
                      struct drm_bridge *bridge)
   {
-   bridge->encoder = &pipe->encoder;
-   pipe->encoder.bridge = bridge;
-   return drm_bridge_attach(pipe->encoder.dev, bridge);
+   return drm_bridge_attach(&pipe->encoder, bridge, NULL);
   }
   EXPORT_SYMBOL(drm_simple_display_pipe_attach_bridge);
-
-  /**
-   * drm_simple_display_pipe_detach_bridge - Detach the bridge from the display pipe
-   * @pipe: simple display pipe object
-   *
-   * Detaches the drm bridge previously attached with
-   * drm_simple_display_pipe_attach_bridge()
-   */
-  void drm_simple_display_pipe_detach_bridge(struct drm_simple_display_pipe *pipe)
-  {
-   if (WARN_ON(!pipe->encoder.bridge))
-       return;
-
-   drm_bridge_detach(pipe->encoder.bridge);
-   pipe->encoder.bridge = NULL;
-  }
-  EXPORT_SYMBOL(drm_simple_display_pipe_detach_bridge);

   /**
    * drm_simple_display_pipe_init - Initialize a simple display pipeline
+1 -1
drivers/gpu/drm/etnaviv/etnaviv_drv.c
···
    drm->dev_private = NULL;
    kfree(priv);

-   drm_put_dev(drm);
+   drm_dev_unref(drm);
   }

   static const struct component_master_ops etnaviv_master_ops = {
+5 -4
drivers/gpu/drm/etnaviv/etnaviv_mmu.c
···

    while (1) {
        struct etnaviv_vram_mapping *m, *n;
+       struct drm_mm_scan scan;
        struct list_head list;
        bool found;
···
        }

        /* Try to retire some entries */
-       drm_mm_init_scan(&mmu->mm, size, 0, 0);
+       drm_mm_scan_init(&scan, &mmu->mm, size, 0, 0, 0);

        found = 0;
        INIT_LIST_HEAD(&list);
···
                continue;

            list_add(&free->scan_node, &list);
-           if (drm_mm_scan_add_block(&free->vram_node)) {
+           if (drm_mm_scan_add_block(&scan, &free->vram_node)) {
                found = true;
                break;
            }
···
        if (!found) {
            /* Nothing found, clean up and fail */
            list_for_each_entry_safe(m, n, &list, scan_node)
-               BUG_ON(drm_mm_scan_remove_block(&m->vram_node));
+               BUG_ON(drm_mm_scan_remove_block(&scan, &m->vram_node));
            break;
        }
···
         * can leave the block pinned.
         */
        list_for_each_entry_safe(m, n, &list, scan_node)
-           if (!drm_mm_scan_remove_block(&m->vram_node))
+           if (!drm_mm_scan_remove_block(&scan, &m->vram_node))
                list_del_init(&m->scan_node);

        /*
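The etnaviv conversion shows the drm_mm_scan usage pattern this pull reworks: feed eviction candidates to a scan one at a time until it reports enough room, and if the walk ends without a fit, roll everything back so nothing is evicted. A stripped-down model of just the accounting; a real drm_mm scan tracks holes, alignment and color, whereas this sketch only sums sizes, and every name here is invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct scan {
	size_t need;   /* size the new mapping requires */
	size_t found;  /* size covered by candidates so far */
};

static void scan_init(struct scan *s, size_t need)
{
	s->need = need;
	s->found = 0;
}

/* Analogous role to drm_mm_scan_add_block(): true once the request fits. */
static bool scan_add_block(struct scan *s, size_t size)
{
	s->found += size;
	return s->found >= s->need;
}

/* Walk candidates; on success n_picked says how many must be evicted, on
 * failure everything is rolled back and no mapping gets torn down. */
static bool try_fit(struct scan *s, const size_t *blocks, size_t n,
		    size_t *n_picked)
{
	size_t i;

	*n_picked = 0;
	for (i = 0; i < n; i++) {
		(*n_picked)++;
		if (scan_add_block(s, blocks[i]))
			return true;
	}
	*n_picked = 0; /* not enough room: keep all mappings intact */
	return false;
}
```

Making the scan state an explicit on-stack `struct drm_mm_scan` (instead of hiding it inside the drm_mm) is what lets the new API keep this bookkeeping per-caller.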
+3 -3
drivers/gpu/drm/exynos/exynos5433_drm_decon.c
···
    val = readl(ctx->addr + DECON_WINCONx(win));
    val &= ~WINCONx_BPPMODE_MASK;

-   switch (fb->pixel_format) {
+   switch (fb->format->format) {
    case DRM_FORMAT_XRGB1555:
        val |= WINCONx_BPPMODE_16BPP_I1555;
        val |= WINCONx_HAWSWP_F;
···
        return;
    }

-   DRM_DEBUG_KMS("bpp = %u\n", fb->bits_per_pixel);
+   DRM_DEBUG_KMS("bpp = %u\n", fb->format->cpp[0] * 8);

    /*
     * In case of exynos, setting dma-burst to 16Word causes permanent
···
    struct decon_context *ctx = crtc->ctx;
    struct drm_framebuffer *fb = state->base.fb;
    unsigned int win = plane->index;
-   unsigned int bpp = fb->bits_per_pixel >> 3;
+   unsigned int bpp = fb->format->cpp[0];
    unsigned int pitch = fb->pitches[0];
    dma_addr_t dma_addr = exynos_drm_fb_dma_addr(fb, 0);
    u32 val;
+4 -4
drivers/gpu/drm/exynos/exynos7_drm_decon.c
···
    val = readl(ctx->regs + WINCON(win));
    val &= ~WINCONx_BPPMODE_MASK;

-   switch (fb->pixel_format) {
+   switch (fb->format->format) {
    case DRM_FORMAT_RGB565:
        val |= WINCONx_BPPMODE_16BPP_565;
        val |= WINCONx_BURSTLEN_16WORD;
···
        break;
    }

-   DRM_DEBUG_KMS("bpp = %d\n", fb->bits_per_pixel);
+   DRM_DEBUG_KMS("bpp = %d\n", fb->format->cpp[0] * 8);

    /*
     * In case of exynos, setting dma-burst to 16Word causes permanent
···
     * movement causes unstable DMA which results into iommu crash/tear.
     */

-   padding = (fb->pitches[0] / (fb->bits_per_pixel >> 3)) - fb->width;
+   padding = (fb->pitches[0] / fb->format->cpp[0]) - fb->width;
    if (fb->width + padding < MIN_FB_WIDTH_FOR_16WORD_BURST) {
        val &= ~WINCONx_BURSTLEN_MASK;
        val |= WINCONx_BURSTLEN_8WORD;
···
    unsigned int last_x;
    unsigned int last_y;
    unsigned int win = plane->index;
-   unsigned int bpp = fb->bits_per_pixel >> 3;
+   unsigned int bpp = fb->format->cpp[0];
    unsigned int pitch = fb->pitches[0];

    if (ctx->suspended)
+1 -4
drivers/gpu/drm/exynos/exynos_dp.c
···
                  struct drm_connector *connector)
   {
    struct exynos_dp_device *dp = to_dp(plat_data);
-   struct drm_encoder *encoder = &dp->encoder;
    int ret;

    drm_connector_register(connector);
···

    /* Pre-empt DP connector creation if there's a bridge */
    if (dp->ptn_bridge) {
-       bridge->next = dp->ptn_bridge;
-       dp->ptn_bridge->encoder = encoder;
-       ret = drm_bridge_attach(encoder->dev, dp->ptn_bridge);
+       ret = drm_bridge_attach(&dp->encoder, dp->ptn_bridge, bridge);
        if (ret) {
            DRM_ERROR("Failed to attach bridge to drm\n");
            bridge->next = NULL;
+2 -4
drivers/gpu/drm/exynos/exynos_drm_dsi.c
···
    }

    bridge = of_drm_find_bridge(dsi->bridge_node);
-   if (bridge) {
-       encoder->bridge = bridge;
-       drm_bridge_attach(drm_dev, bridge);
-   }
+   if (bridge)
+       drm_bridge_attach(encoder, bridge, NULL);

    return mipi_dsi_host_register(&dsi->dsi_host);
   }
+1 -1
drivers/gpu/drm/exynos/exynos_drm_fb.c
···
                        + mode_cmd->offsets[i];
    }

-   drm_helper_mode_fill_fb_struct(&exynos_fb->fb, mode_cmd);
+   drm_helper_mode_fill_fb_struct(dev, &exynos_fb->fb, mode_cmd);

    ret = drm_framebuffer_init(dev, &exynos_fb->fb, &exynos_drm_fb_funcs);
    if (ret < 0) {
+3 -3
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
···
   {
    struct fb_info *fbi;
    struct drm_framebuffer *fb = helper->fb;
-   unsigned int size = fb->width * fb->height * (fb->bits_per_pixel >> 3);
+   unsigned int size = fb->width * fb->height * fb->format->cpp[0];
    unsigned int nr_pages;
    unsigned long offset;
···
    fbi->flags = FBINFO_FLAG_DEFAULT;
    fbi->fbops = &exynos_drm_fb_ops;

-   drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth);
+   drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->format->depth);
    drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);

    nr_pages = exynos_gem->size >> PAGE_SHIFT;
···
        return -EIO;
    }

-   offset = fbi->var.xoffset * (fb->bits_per_pixel >> 3);
+   offset = fbi->var.xoffset * fb->format->cpp[0];
    offset += fbi->var.yoffset * fb->pitches[0];

    fbi->screen_base = exynos_gem->kvaddr + offset;
+2 -2
drivers/gpu/drm/exynos/exynos_drm_fimd.c
···
    unsigned long val, size, offset;
    unsigned int last_x, last_y, buf_offsize, line_size;
    unsigned int win = plane->index;
-   unsigned int bpp = fb->bits_per_pixel >> 3;
+   unsigned int bpp = fb->format->cpp[0];
    unsigned int pitch = fb->pitches[0];

    if (ctx->suspended)
···
        DRM_DEBUG_KMS("osd size = 0x%x\n", (unsigned int)val);
    }

-   fimd_win_set_pixfmt(ctx, win, fb->pixel_format, state->src.w);
+   fimd_win_set_pixfmt(ctx, win, fb->format->format, state->src.w);

    /* hardware window 0 doesn't support color key. */
    if (win != 0)
+6 -6
drivers/gpu/drm/exynos/exynos_mixer.c
···
    bool crcb_mode = false;
    u32 val;

-   switch (fb->pixel_format) {
+   switch (fb->format->format) {
    case DRM_FORMAT_NV12:
        crcb_mode = false;
        break;
···
        break;
    default:
        DRM_ERROR("pixel format for vp is wrong [%d].\n",
-               fb->pixel_format);
+               fb->format->format);
        return;
    }
···
    unsigned int fmt;
    u32 val;

-   switch (fb->pixel_format) {
+   switch (fb->format->format) {
    case DRM_FORMAT_XRGB4444:
    case DRM_FORMAT_ARGB4444:
        fmt = MXR_FORMAT_ARGB4444;
···

    /* converting dma address base and source offset */
    dma_addr = exynos_drm_fb_dma_addr(fb, 0)
-       + (state->src.x * fb->bits_per_pixel >> 3)
+       + (state->src.x * fb->format->cpp[0])
        + (state->src.y * fb->pitches[0]);
    src_x_offset = 0;
    src_y_offset = 0;
···

    /* setup geometry */
    mixer_reg_write(res, MXR_GRAPHIC_SPAN(win),
-           fb->pitches[0] / (fb->bits_per_pixel >> 3));
+           fb->pitches[0] / fb->format->cpp[0]);

    /* setup display size */
    if (ctx->mxr_ver == MXR_VER_128_0_0_184 &&
···
    mixer_cfg_scan(ctx, mode->vdisplay);
    mixer_cfg_rgb_fmt(ctx, mode->vdisplay);
    mixer_cfg_layer(ctx, win, priority, true);
-   mixer_cfg_gfx_blend(ctx, win, is_alpha_format(fb->pixel_format));
+   mixer_cfg_gfx_blend(ctx, win, is_alpha_format(fb->format->format));

    /* layer update mandatory for mixer 16.0.33.0 */
    if (ctx->mxr_ver == MXR_VER_16_0_33_0 ||
+2 -1
drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
···
   {
    struct fsl_dcu_drm_device *fsl_dev = platform_get_drvdata(pdev);

-   drm_put_dev(fsl_dev->drm);
+   drm_dev_unregister(fsl_dev->drm);
+   drm_dev_unref(fsl_dev->drm);
    clk_disable_unprepare(fsl_dev->clk);
    clk_unregister(fsl_dev->pix_clk);
+2
drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
···
   #ifndef __FSL_DCU_DRM_DRV_H__
   #define __FSL_DCU_DRM_DRV_H__

+  #include <drm/drm_encoder.h>
+
   #include "fsl_dcu_drm_crtc.h"
   #include "fsl_dcu_drm_output.h"
   #include "fsl_dcu_drm_plane.h"
+2 -2
drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_plane.c
···
    if (!state->fb || !state->crtc)
        return 0;

-   switch (fb->pixel_format) {
+   switch (fb->format->format) {
    case DRM_FORMAT_RGB565:
    case DRM_FORMAT_RGB888:
    case DRM_FORMAT_XRGB8888:
···

    gem = drm_fb_cma_get_gem_obj(fb, 0);

-   switch (fb->pixel_format) {
+   switch (fb->format->format) {
    case DRM_FORMAT_RGB565:
        bpp = FSL_DCU_RGB565;
        break;
+1 -4
drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
···
    if (!bridge)
        return -ENODEV;

-   fsl_dev->encoder.bridge = bridge;
-   bridge->encoder = &fsl_dev->encoder;
-
-   return drm_bridge_attach(fsl_dev->drm, bridge);
+   return drm_bridge_attach(&fsl_dev->encoder, bridge, NULL);
   }

   int fsl_dcu_create_outputs(struct fsl_dcu_drm_device *fsl_dev)
+1 -1
drivers/gpu/drm/gma500/accel_2d.c
···
    offset = psbfb->gtt->offset;
    stride = fb->pitches[0];

-   switch (fb->depth) {
+   switch (fb->format->depth) {
    case 8:
        src_format = PSB_2D_SRC_332RGB;
        dst_format = PSB_2D_DST_332RGB;
+3 -3
drivers/gpu/drm/gma500/framebuffer.c
···
          (transp << info->var.transp.offset);

    if (regno < 16) {
-       switch (fb->bits_per_pixel) {
+       switch (fb->format->cpp[0] * 8) {
        case 16:
            ((uint32_t *) info->pseudo_palette)[regno] = v;
            break;
···
    if (mode_cmd->pitches[0] & 63)
        return -EINVAL;

-   drm_helper_mode_fill_fb_struct(&fb->base, mode_cmd);
+   drm_helper_mode_fill_fb_struct(dev, &fb->base, mode_cmd);
    fb->gtt = gt;
    ret = drm_framebuffer_init(dev, &fb->base, &psb_fb_funcs);
    if (ret) {
···

    fbdev->psb_fb_helper.fb = fb;

-   drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
+   drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
    strcpy(info->fix.id, "psbdrmfb");

    info->flags = FBINFO_DEFAULT;
+7 -6
drivers/gpu/drm/gma500/gma_display.c
···
    struct drm_device *dev = crtc->dev;
    struct drm_psb_private *dev_priv = dev->dev_private;
    struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
-   struct psb_framebuffer *psbfb = to_psb_fb(crtc->primary->fb);
+   struct drm_framebuffer *fb = crtc->primary->fb;
+   struct psb_framebuffer *psbfb = to_psb_fb(fb);
    int pipe = gma_crtc->pipe;
    const struct psb_offset *map = &dev_priv->regmap[pipe];
    unsigned long start, offset;
···
        return 0;

    /* no fb bound */
-   if (!crtc->primary->fb) {
+   if (!fb) {
        dev_err(dev->dev, "No FB bound\n");
        goto gma_pipe_cleaner;
    }
···
    if (ret < 0)
        goto gma_pipe_set_base_exit;
    start = psbfb->gtt->offset;
-   offset = y * crtc->primary->fb->pitches[0] + x * (crtc->primary->fb->bits_per_pixel / 8);
+   offset = y * fb->pitches[0] + x * fb->format->cpp[0];

-   REG_WRITE(map->stride, crtc->primary->fb->pitches[0]);
+   REG_WRITE(map->stride, fb->pitches[0]);

    dspcntr = REG_READ(map->cntr);
    dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;

-   switch (crtc->primary->fb->bits_per_pixel) {
+   switch (fb->format->cpp[0] * 8) {
    case 8:
        dspcntr |= DISPPLANE_8BPP;
        break;
    case 16:
-       if (crtc->primary->fb->depth == 15)
+       if (fb->format->depth == 15)
            dspcntr |= DISPPLANE_15_16BPP;
        else
            dspcntr |= DISPPLANE_16BPP;
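The gma500 conversions replace `x * (bits_per_pixel / 8)` with `x * cpp[0]`: the same byte offset, now derived from the format info. The linear-framebuffer address math these set_base paths compute, as a standalone helper with a worked check:

```c
#include <assert.h>
#include <stddef.h>

/* Byte offset of pixel (x, y) in a linear framebuffer: one full scanline
 * is `pitch` bytes, and each pixel takes `cpp` bytes (fb->format->cpp[0]
 * in the DRM code above). */
static size_t fb_offset(unsigned int x, unsigned int y,
			unsigned int pitch, unsigned int cpp)
{
	return (size_t)y * pitch + (size_t)x * cpp;
}
```

Widening to `size_t` before multiplying is a precaution against overflow on tall framebuffers; the kernel code uses `unsigned long` for the same reason.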
+9 -8
drivers/gpu/drm/gma500/mdfld_intel_display.c
···
    if (!fb)
        return 0;

-   switch (fb->bits_per_pixel) {
+   switch (fb->format->cpp[0] * 8) {
    case 8:
    case 16:
    case 24:
···
   {
    struct drm_device *dev = crtc->dev;
    struct drm_psb_private *dev_priv = dev->dev_private;
+   struct drm_framebuffer *fb = crtc->primary->fb;
    struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
-   struct psb_framebuffer *psbfb = to_psb_fb(crtc->primary->fb);
+   struct psb_framebuffer *psbfb = to_psb_fb(fb);
    int pipe = gma_crtc->pipe;
    const struct psb_offset *map = &dev_priv->regmap[pipe];
    unsigned long start, offset;
···
    dev_dbg(dev->dev, "pipe = 0x%x.\n", pipe);

    /* no fb bound */
-   if (!crtc->primary->fb) {
+   if (!fb) {
        dev_dbg(dev->dev, "No FB bound\n");
        return 0;
    }

-   ret = check_fb(crtc->primary->fb);
+   ret = check_fb(fb);
    if (ret)
        return ret;
···
        return 0;

    start = psbfb->gtt->offset;
-   offset = y * crtc->primary->fb->pitches[0] + x * (crtc->primary->fb->bits_per_pixel / 8);
+   offset = y * fb->pitches[0] + x * fb->format->cpp[0];

-   REG_WRITE(map->stride, crtc->primary->fb->pitches[0]);
+   REG_WRITE(map->stride, fb->pitches[0]);
    dspcntr = REG_READ(map->cntr);
    dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;

-   switch (crtc->primary->fb->bits_per_pixel) {
+   switch (fb->format->cpp[0] * 8) {
    case 8:
        dspcntr |= DISPPLANE_8BPP;
        break;
    case 16:
-       if (crtc->primary->fb->depth == 15)
+       if (fb->format->depth == 15)
            dspcntr |= DISPPLANE_15_16BPP;
        else
            dspcntr |= DISPPLANE_16BPP;
+7 -6
drivers/gpu/drm/gma500/oaktrail_crtc.c
···
    struct drm_device *dev = crtc->dev;
    struct drm_psb_private *dev_priv = dev->dev_private;
    struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
-   struct psb_framebuffer *psbfb = to_psb_fb(crtc->primary->fb);
+   struct drm_framebuffer *fb = crtc->primary->fb;
+   struct psb_framebuffer *psbfb = to_psb_fb(fb);
    int pipe = gma_crtc->pipe;
    const struct psb_offset *map = &dev_priv->regmap[pipe];
    unsigned long start, offset;
···
    int ret = 0;

    /* no fb bound */
-   if (!crtc->primary->fb) {
+   if (!fb) {
        dev_dbg(dev->dev, "No FB bound\n");
        return 0;
    }
···
        return 0;

    start = psbfb->gtt->offset;
-   offset = y * crtc->primary->fb->pitches[0] + x * (crtc->primary->fb->bits_per_pixel / 8);
+   offset = y * fb->pitches[0] + x * fb->format->cpp[0];

-   REG_WRITE(map->stride, crtc->primary->fb->pitches[0]);
+   REG_WRITE(map->stride, fb->pitches[0]);

    dspcntr = REG_READ(map->cntr);
    dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;

-   switch (crtc->primary->fb->bits_per_pixel) {
+   switch (fb->format->cpp[0] * 8) {
    case 8:
        dspcntr |= DISPPLANE_8BPP;
        break;
    case 16:
-       if (crtc->primary->fb->depth == 15)
+       if (fb->format->depth == 15)
            dspcntr |= DISPPLANE_15_16BPP;
        else
            dspcntr |= DISPPLANE_16BPP;
+1
drivers/gpu/drm/gma500/psb_intel_drv.h
···
   #include <linux/i2c-algo-bit.h>
   #include <drm/drm_crtc.h>
   #include <drm/drm_crtc_helper.h>
+  #include <drm/drm_encoder.h>
   #include <linux/gpio.h>
   #include "gma_display.h"
+3 -3
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c
···

    writel(gpu_addr, priv->mmio + HIBMC_CRT_FB_ADDRESS);

-   reg = state->fb->width * (state->fb->bits_per_pixel / 8);
+   reg = state->fb->width * (state->fb->format->cpp[0]);
    /* now line_pad is 16 */
    reg = PADDING(16, reg);

-   line_l = state->fb->width * state->fb->bits_per_pixel / 8;
+   line_l = state->fb->width * state->fb->format->cpp[0];
    line_l = PADDING(16, line_l);
    writel(HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_WIDTH, reg) |
           HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_OFFS, line_l),
···
    reg = readl(priv->mmio + HIBMC_CRT_DISP_CTL);
    reg &= ~HIBMC_CRT_DISP_CTL_FORMAT_MASK;
    reg |= HIBMC_FIELD(HIBMC_CRT_DISP_CTL_FORMAT,
-              state->fb->bits_per_pixel / 16);
+              state->fb->format->cpp[0] * 8 / 16);
    writel(reg, priv->mmio + HIBMC_CRT_DISP_CTL);
   }
+1 -1
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c
···
    info->fbops = &hibmc_drm_fb_ops;

    drm_fb_helper_fill_fix(info, hi_fbdev->fb->fb.pitches[0],
-              hi_fbdev->fb->fb.depth);
+              hi_fbdev->fb->fb.format->depth);
    drm_fb_helper_fill_var(info, &priv->fbdev->helper, sizes->fb_width,
               sizes->fb_height);
+1 -1
drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
···
        return ERR_PTR(-ENOMEM);
    }

-   drm_helper_mode_fill_fb_struct(&hibmc_fb->fb, mode_cmd);
+   drm_helper_mode_fill_fb_struct(dev, &hibmc_fb->fb, mode_cmd);
    hibmc_fb->obj = obj;
    ret = drm_framebuffer_init(dev, &hibmc_fb->fb, &hibmc_fb_funcs);
    if (ret) {
+1 -4
drivers/gpu/drm/hisilicon/kirin/dw_drm_dsi.c
···
    int ret;

    /* associate the bridge to dsi encoder */
-   encoder->bridge = bridge;
-   bridge->encoder = encoder;
-
-   ret = drm_bridge_attach(dev, bridge);
+   ret = drm_bridge_attach(encoder, bridge, NULL);
    if (ret) {
        DRM_ERROR("failed to attach external bridge\n");
        return ret;
+6 -11
drivers/gpu/drm/hisilicon/kirin/kirin_drm_ade.c
···
         ch + 1, y, in_h, stride, (u32)obj->paddr);
    DRM_DEBUG_DRIVER("addr=0x%x, fb:%dx%d, pixel_format=%d(%s)\n",
             addr, fb->width, fb->height, fmt,
-            drm_get_format_name(fb->pixel_format, &format_name));
+            drm_get_format_name(fb->format->format, &format_name));

    /* get reg offset */
    reg_ctrl = RD_CH_CTRL(ch);
···
   {
    struct ade_hw_ctx *ctx = aplane->ctx;
    void __iomem *base = ctx->base;
-   u32 fmt = ade_get_format(fb->pixel_format);
+   u32 fmt = ade_get_format(fb->format->format);
    u32 ch = aplane->ch;
    u32 in_w;
    u32 in_h;
···
    if (!crtc || !fb)
        return 0;

-   fmt = ade_get_format(fb->pixel_format);
+   fmt = ade_get_format(fb->format->format);
    if (fmt == ADE_FORMAT_UNSUPPORT)
        return -EINVAL;
···
    return 0;
   }

-  static int ade_drm_init(struct drm_device *dev)
+  static int ade_drm_init(struct platform_device *pdev)
   {
-   struct platform_device *pdev = dev->platformdev;
+   struct drm_device *dev = platform_get_drvdata(pdev);
    struct ade_data *ade;
    struct ade_hw_ctx *ctx;
    struct ade_crtc *acrtc;
···
    return 0;
   }

-  static void ade_drm_cleanup(struct drm_device *dev)
+  static void ade_drm_cleanup(struct platform_device *pdev)
   {
-   struct platform_device *pdev = dev->platformdev;
-   struct ade_data *ade = platform_get_drvdata(pdev);
-   struct drm_crtc *crtc = &ade->acrtc.base;
-
-   drm_crtc_cleanup(crtc);
   }

   const struct kirin_dc_ops ade_dc_ops = {
+3 -5
drivers/gpu/drm/hisilicon/kirin/kirin_drm_drv.c
···
   #endif
    drm_kms_helper_poll_fini(dev);
    drm_vblank_cleanup(dev);
-   dc_ops->cleanup(dev);
+   dc_ops->cleanup(to_platform_device(dev->dev));
    drm_mode_config_cleanup(dev);
    devm_kfree(dev->dev, priv);
    dev->dev_private = NULL;
···
    kirin_drm_mode_config_init(dev);

    /* display controller init */
-   ret = dc_ops->init(dev);
+   ret = dc_ops->init(to_platform_device(dev->dev));
    if (ret)
        goto err_mode_config_cleanup;
···
   err_unbind_all:
    component_unbind_all(dev->dev, dev);
   err_dc_cleanup:
-   dc_ops->cleanup(dev);
+   dc_ops->cleanup(to_platform_device(dev->dev));
   err_mode_config_cleanup:
    drm_mode_config_cleanup(dev);
    devm_kfree(dev->dev, priv);
···
    drm_dev = drm_dev_alloc(driver, dev);
    if (IS_ERR(drm_dev))
        return PTR_ERR(drm_dev);
-
-   drm_dev->platformdev = to_platform_device(dev);

    ret = kirin_drm_kms_init(drm_dev);
    if (ret)
+2 -2
drivers/gpu/drm/hisilicon/kirin/kirin_drm_drv.h
··· 15 15 16 16 /* display controller init/cleanup ops */ 17 17 struct kirin_dc_ops { 18 - int (*init)(struct drm_device *dev); 19 - void (*cleanup)(struct drm_device *dev); 18 + int (*init)(struct platform_device *pdev); 19 + void (*cleanup)(struct platform_device *pdev); 20 20 }; 21 21 22 22 struct kirin_drm_private {
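The hisilicon/kirin changes above drop the cached `drm_dev->platformdev` back-pointer: the `kirin_dc_ops` callbacks now take a `struct platform_device *` directly, which callers recover with `to_platform_device(dev->dev)`. That helper is just `container_of()` on the `struct device` embedded in the platform device. A minimal sketch of the pattern, using toy types rather than the real kernel structs:

```c
#include <stddef.h>

/* Toy stand-ins for the kernel types; only the embedding matters here. */
struct device {
	const char *name;
};

struct platform_device {
	int id;
	struct device dev;	/* embedded, as in the kernel */
};

/* Recover the containing structure from a pointer to its member. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define to_platform_device(d) container_of(d, struct platform_device, dev)
```

Because the platform device can always be recomputed this way from `dev->dev`, caching it in `struct drm_device` was redundant, which is why the series can delete the `platformdev` field.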
+6 -5
drivers/gpu/drm/i915/i915_debugfs.c
··· 1873 1873 seq_printf(m, "fbcon size: %d x %d, depth %d, %d bpp, modifier 0x%llx, refcount %d, obj ", 1874 1874 fbdev_fb->base.width, 1875 1875 fbdev_fb->base.height, 1876 - fbdev_fb->base.depth, 1877 - fbdev_fb->base.bits_per_pixel, 1876 + fbdev_fb->base.format->depth, 1877 + fbdev_fb->base.format->cpp[0] * 8, 1878 1878 fbdev_fb->base.modifier, 1879 1879 drm_framebuffer_read_refcount(&fbdev_fb->base)); 1880 1880 describe_obj(m, fbdev_fb->obj); ··· 1891 1891 seq_printf(m, "user size: %d x %d, depth %d, %d bpp, modifier 0x%llx, refcount %d, obj ", 1892 1892 fb->base.width, 1893 1893 fb->base.height, 1894 - fb->base.depth, 1895 - fb->base.bits_per_pixel, 1894 + fb->base.format->depth, 1895 + fb->base.format->cpp[0] * 8, 1896 1896 fb->base.modifier, 1897 1897 drm_framebuffer_read_refcount(&fb->base)); 1898 1898 describe_obj(m, fb->obj); ··· 3021 3021 state = plane->state; 3022 3022 3023 3023 if (state->fb) { 3024 - drm_get_format_name(state->fb->pixel_format, &format_name); 3024 + drm_get_format_name(state->fb->format->format, 3025 + &format_name); 3025 3026 } else { 3026 3027 sprintf(format_name.str, "N/A"); 3027 3028 }
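The debugfs hunks above show the core conversion running through this whole pull: the scalar `fb->pixel_format`, `fb->depth`, and `fb->bits_per_pixel` fields are replaced by a single `fb->format` pointer into a const format-info table, with bpp derived as `cpp[0] * 8`. A simplified sketch of that lookup (real fourcc codes, but a cut-down table that is not the actual `drm_format_info` layout):

```c
#include <stddef.h>
#include <stdint.h>

#define fourcc_code(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

#define DRM_FORMAT_RGB565   fourcc_code('R', 'G', '1', '6')
#define DRM_FORMAT_XRGB8888 fourcc_code('X', 'R', '2', '4')

struct drm_format_info {
	uint32_t format;	/* fourcc */
	uint8_t depth;
	uint8_t num_planes;
	uint8_t cpp[3];		/* bytes ("chars") per pixel, per plane */
};

static const struct drm_format_info formats[] = {
	{ DRM_FORMAT_RGB565,   16, 1, { 2 } },
	{ DRM_FORMAT_XRGB8888, 24, 1, { 4 } },
};

static const struct drm_format_info *drm_format_info(uint32_t fourcc)
{
	for (int i = 0; i < (int)(sizeof(formats) / sizeof(formats[0])); i++)
		if (formats[i].format == fourcc)
			return &formats[i];
	return NULL;
}

/* bits-per-pixel, the way the converted debugfs code computes it */
static int fb_bpp(const struct drm_format_info *info)
{
	return info->cpp[0] * 8;
}
```

Keeping one shared pointer instead of three duplicated scalars means the per-format metadata lives in one table and cannot drift out of sync on a framebuffer.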
+2 -2
drivers/gpu/drm/i915/i915_drv.h
··· 1026 1026 1027 1027 struct { 1028 1028 u64 ilk_ggtt_offset; 1029 - uint32_t pixel_format; 1029 + const struct drm_format_info *format; 1030 1030 unsigned int stride; 1031 1031 int fence_reg; 1032 1032 unsigned int tiling_mode; ··· 1042 1042 1043 1043 struct { 1044 1044 u64 ggtt_offset; 1045 - uint32_t pixel_format; 1045 + const struct drm_format_info *format; 1046 1046 unsigned int stride; 1047 1047 int fence_reg; 1048 1048 } fb;
+20 -11
drivers/gpu/drm/i915/i915_gem_evict.c
··· 51 51 } 52 52 53 53 static bool 54 - mark_free(struct i915_vma *vma, unsigned int flags, struct list_head *unwind) 54 + mark_free(struct drm_mm_scan *scan, 55 + struct i915_vma *vma, 56 + unsigned int flags, 57 + struct list_head *unwind) 55 58 { 56 59 if (i915_vma_is_pinned(vma)) 57 60 return false; ··· 66 63 return false; 67 64 68 65 list_add(&vma->exec_list, unwind); 69 - return drm_mm_scan_add_block(&vma->node); 66 + return drm_mm_scan_add_block(scan, &vma->node); 70 67 } 71 68 72 69 /** ··· 100 97 unsigned flags) 101 98 { 102 99 struct drm_i915_private *dev_priv = to_i915(vm->dev); 100 + struct drm_mm_scan scan; 103 101 struct list_head eviction_list; 104 102 struct list_head *phases[] = { 105 103 &vm->inactive_list, ··· 108 104 NULL, 109 105 }, **phase; 110 106 struct i915_vma *vma, *next; 107 + struct drm_mm_node *node; 111 108 int ret; 112 109 113 110 lockdep_assert_held(&vm->dev->struct_mutex); ··· 127 122 * On each list, the oldest objects lie at the HEAD with the freshest 128 123 * object on the TAIL. 129 124 */ 130 - if (start != 0 || end != vm->total) { 131 - drm_mm_init_scan_with_range(&vm->mm, min_size, 132 - alignment, cache_level, 133 - start, end); 134 - } else 135 - drm_mm_init_scan(&vm->mm, min_size, alignment, cache_level); 125 + drm_mm_scan_init_with_range(&scan, &vm->mm, 126 + min_size, alignment, cache_level, 127 + start, end, 128 + flags & PIN_HIGH ? DRM_MM_CREATE_TOP : 0); 136 129 137 130 if (flags & PIN_NONBLOCK) 138 131 phases[1] = NULL; ··· 140 137 phase = phases; 141 138 do { 142 139 list_for_each_entry(vma, *phase, vm_link) 143 - if (mark_free(vma, flags, &eviction_list)) 140 + if (mark_free(&scan, vma, flags, &eviction_list)) 144 141 goto found; 145 142 } while (*++phase); 146 143 147 144 /* Nothing found, clean up and bail out! 
*/ 148 145 list_for_each_entry_safe(vma, next, &eviction_list, exec_list) { 149 - ret = drm_mm_scan_remove_block(&vma->node); 146 + ret = drm_mm_scan_remove_block(&scan, &vma->node); 150 147 BUG_ON(ret); 151 148 152 149 INIT_LIST_HEAD(&vma->exec_list); ··· 195 192 * of any of our objects, thus corrupting the list). 196 193 */ 197 194 list_for_each_entry_safe(vma, next, &eviction_list, exec_list) { 198 - if (drm_mm_scan_remove_block(&vma->node)) 195 + if (drm_mm_scan_remove_block(&scan, &vma->node)) 199 196 __i915_vma_pin(vma); 200 197 else 201 198 list_del_init(&vma->exec_list); ··· 212 209 if (ret == 0) 213 210 ret = i915_vma_unbind(vma); 214 211 } 212 + 213 + while (ret == 0 && (node = drm_mm_scan_color_evict(&scan))) { 214 + vma = container_of(node, struct i915_vma, node); 215 + ret = i915_vma_unbind(vma); 216 + } 217 + 215 218 return ret; 216 219 } 217 220
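The i915_gem_evict hunks reflect the drm_mm rework in this pull: the roving eviction-scan state moves out of `struct drm_mm` into a caller-owned `struct drm_mm_scan` that is passed explicitly to `drm_mm_scan_add_block()` / `drm_mm_scan_remove_block()`. A toy sketch of that API shape (plain size accounting; the real scan walks hole boundaries and honors alignment/color):

```c
/* Caller-owned scan state, mirroring the new struct drm_mm_scan. */
struct toy_node { int size; int scanned; };

struct toy_scan {
	int needed;	/* bytes still to free */
	int found;	/* bytes marked so far */
};

static void toy_scan_init(struct toy_scan *scan, int needed)
{
	scan->needed = needed;
	scan->found = 0;
}

/* Returns nonzero once enough blocks are marked, like drm_mm_scan_add_block. */
static int toy_scan_add_block(struct toy_scan *scan, struct toy_node *node)
{
	node->scanned = 1;
	scan->found += node->size;
	return scan->found >= scan->needed;
}

static void toy_scan_remove_block(struct toy_scan *scan, struct toy_node *node)
{
	node->scanned = 0;
	scan->found -= node->size;
}
```

Making the state explicit is what lets the eviction loop above simply declare `struct drm_mm_scan scan;` on the stack and hand it to every call, instead of relying on hidden per-allocator globals that forbade concurrent scans.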
+3 -5
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 2721 2721 dma_unmap_sg(kdev, pages->sgl, pages->nents, PCI_DMA_BIDIRECTIONAL); 2722 2722 } 2723 2723 2724 - static void i915_gtt_color_adjust(struct drm_mm_node *node, 2724 + static void i915_gtt_color_adjust(const struct drm_mm_node *node, 2725 2725 unsigned long color, 2726 2726 u64 *start, 2727 2727 u64 *end) ··· 2729 2729 if (node->color != color) 2730 2730 *start += 4096; 2731 2731 2732 - node = list_first_entry_or_null(&node->node_list, 2733 - struct drm_mm_node, 2734 - node_list); 2735 - if (node && node->allocated && node->color != color) 2732 + node = list_next_entry(node, node_list); 2733 + if (node->allocated && node->color != color) 2736 2734 *end -= 4096; 2737 2735 } 2738 2736
+2 -2
drivers/gpu/drm/i915/i915_vma.c
··· 320 320 return true; 321 321 322 322 other = list_entry(gtt_space->node_list.prev, struct drm_mm_node, node_list); 323 - if (other->allocated && !other->hole_follows && other->color != cache_level) 323 + if (other->allocated && !drm_mm_hole_follows(other) && other->color != cache_level) 324 324 return false; 325 325 326 326 other = list_entry(gtt_space->node_list.next, struct drm_mm_node, node_list); 327 - if (other->allocated && !gtt_space->hole_follows && other->color != cache_level) 327 + if (other->allocated && !drm_mm_hole_follows(gtt_space) && other->color != cache_level) 328 328 return false; 329 329 330 330 return true;
+32 -19
drivers/gpu/drm/i915/intel_atomic_plane.c
··· 103 103 drm_atomic_helper_plane_destroy_state(plane, state); 104 104 } 105 105 106 - static int intel_plane_atomic_check(struct drm_plane *plane, 107 - struct drm_plane_state *state) 106 + int intel_plane_atomic_check_with_state(struct intel_crtc_state *crtc_state, 107 + struct intel_plane_state *intel_state) 108 108 { 109 + struct drm_plane *plane = intel_state->base.plane; 109 110 struct drm_i915_private *dev_priv = to_i915(plane->dev); 110 - struct drm_crtc *crtc = state->crtc; 111 - struct intel_crtc *intel_crtc; 112 - struct intel_crtc_state *crtc_state; 111 + struct drm_plane_state *state = &intel_state->base; 113 112 struct intel_plane *intel_plane = to_intel_plane(plane); 114 - struct intel_plane_state *intel_state = to_intel_plane_state(state); 115 - struct drm_crtc_state *drm_crtc_state; 116 113 int ret; 117 - 118 - crtc = crtc ? crtc : plane->state->crtc; 119 - intel_crtc = to_intel_crtc(crtc); 120 114 121 115 /* 122 116 * Both crtc and plane->crtc could be NULL if we're updating a ··· 118 124 * anything driver-specific we need to test in that case, so 119 125 * just return success. 120 126 */ 121 - if (!crtc) 127 + if (!intel_state->base.crtc && !plane->state->crtc) 122 128 return 0; 123 - 124 - drm_crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc); 125 - if (WARN_ON(!drm_crtc_state)) 126 - return -EINVAL; 127 - 128 - crtc_state = to_intel_crtc_state(drm_crtc_state); 129 129 130 130 /* Clip all planes to CRTC size, or 0x0 if CRTC is disabled */ 131 131 intel_state->clip.x1 = 0; ··· 143 155 * RGB 16-bit 5:6:5, and Indexed 8-bit. 144 156 * TBD: Add RGB64 case once its added in supported format list. 
145 157 */ 146 - switch (state->fb->pixel_format) { 158 + switch (state->fb->format->format) { 147 159 case DRM_FORMAT_C8: 148 160 case DRM_FORMAT_RGB565: 149 161 DRM_DEBUG_KMS("Unsupported pixel format %s for 90/270!\n", 150 - drm_get_format_name(state->fb->pixel_format, 162 + drm_get_format_name(state->fb->format->format, 151 163 &format_name)); 152 164 return -EINVAL; 153 165 ··· 170 182 return ret; 171 183 172 184 return intel_plane_atomic_calc_changes(&crtc_state->base, state); 185 + } 186 + 187 + static int intel_plane_atomic_check(struct drm_plane *plane, 188 + struct drm_plane_state *state) 189 + { 190 + struct drm_crtc *crtc = state->crtc; 191 + struct drm_crtc_state *drm_crtc_state; 192 + 193 + crtc = crtc ? crtc : plane->state->crtc; 194 + 195 + /* 196 + * Both crtc and plane->crtc could be NULL if we're updating a 197 + * property while the plane is disabled. We don't actually have 198 + * anything driver-specific we need to test in that case, so 199 + * just return success. 200 + */ 201 + if (!crtc) 202 + return 0; 203 + 204 + drm_crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc); 205 + if (WARN_ON(!drm_crtc_state)) 206 + return -EINVAL; 207 + 208 + return intel_plane_atomic_check_with_state(to_intel_crtc_state(drm_crtc_state), 209 + to_intel_plane_state(state)); 173 210 } 174 211 175 212 static void intel_plane_atomic_update(struct drm_plane *plane,
+171 -39
drivers/gpu/drm/i915/intel_display.c
··· 2275 2275 int plane) 2276 2276 { 2277 2277 const struct drm_framebuffer *fb = state->base.fb; 2278 - unsigned int cpp = drm_format_plane_cpp(fb->pixel_format, plane); 2278 + unsigned int cpp = fb->format->cpp[plane]; 2279 2279 unsigned int pitch = fb->pitches[plane]; 2280 2280 2281 2281 return y * pitch + x * cpp; ··· 2344 2344 { 2345 2345 const struct drm_i915_private *dev_priv = to_i915(state->base.plane->dev); 2346 2346 const struct drm_framebuffer *fb = state->base.fb; 2347 - unsigned int cpp = drm_format_plane_cpp(fb->pixel_format, plane); 2347 + unsigned int cpp = fb->format->cpp[plane]; 2348 2348 unsigned int rotation = state->base.rotation; 2349 2349 unsigned int pitch = intel_fb_pitch(fb, plane, rotation); 2350 2350 ··· 2400 2400 u32 alignment) 2401 2401 { 2402 2402 uint64_t fb_modifier = fb->modifier; 2403 - unsigned int cpp = drm_format_plane_cpp(fb->pixel_format, plane); 2403 + unsigned int cpp = fb->format->cpp[plane]; 2404 2404 u32 offset, offset_aligned; 2405 2405 2406 2406 if (alignment) ··· 2455 2455 u32 alignment; 2456 2456 2457 2457 /* AUX_DIST needs only 4K alignment */ 2458 - if (fb->pixel_format == DRM_FORMAT_NV12 && plane == 1) 2458 + if (fb->format->format == DRM_FORMAT_NV12 && plane == 1) 2459 2459 alignment = 4096; 2460 2460 else 2461 2461 alignment = intel_surf_alignment(dev_priv, fb->modifier); ··· 2468 2468 static void intel_fb_offset_to_xy(int *x, int *y, 2469 2469 const struct drm_framebuffer *fb, int plane) 2470 2470 { 2471 - unsigned int cpp = drm_format_plane_cpp(fb->pixel_format, plane); 2471 + unsigned int cpp = fb->format->cpp[plane]; 2472 2472 unsigned int pitch = fb->pitches[plane]; 2473 2473 u32 linear_offset = fb->offsets[plane]; 2474 2474 ··· 2496 2496 struct intel_rotation_info *rot_info = &intel_fb->rot_info; 2497 2497 u32 gtt_offset_rotated = 0; 2498 2498 unsigned int max_size = 0; 2499 - uint32_t format = fb->pixel_format; 2500 - int i, num_planes = drm_format_num_planes(format); 2499 + int i, num_planes = 
fb->format->num_planes; 2501 2500 unsigned int tile_size = intel_tile_size(dev_priv); 2502 2501 2503 2502 for (i = 0; i < num_planes; i++) { ··· 2505 2506 u32 offset; 2506 2507 int x, y; 2507 2508 2508 - cpp = drm_format_plane_cpp(format, i); 2509 - width = drm_format_plane_width(fb->width, format, i); 2510 - height = drm_format_plane_height(fb->height, format, i); 2509 + cpp = fb->format->cpp[i]; 2510 + width = drm_framebuffer_plane_width(fb->width, fb, i); 2511 + height = drm_framebuffer_plane_height(fb->height, fb, i); 2511 2512 2512 2513 intel_fb_offset_to_xy(&x, &y, fb, i); 2513 2514 ··· 2700 2701 if (plane_config->tiling == I915_TILING_X) 2701 2702 obj->tiling_and_stride = fb->pitches[0] | I915_TILING_X; 2702 2703 2703 - mode_cmd.pixel_format = fb->pixel_format; 2704 + mode_cmd.pixel_format = fb->format->format; 2704 2705 mode_cmd.width = fb->width; 2705 2706 mode_cmd.height = fb->height; 2706 2707 mode_cmd.pitches[0] = fb->pitches[0]; ··· 2832 2833 static int skl_max_plane_width(const struct drm_framebuffer *fb, int plane, 2833 2834 unsigned int rotation) 2834 2835 { 2835 - int cpp = drm_format_plane_cpp(fb->pixel_format, plane); 2836 + int cpp = fb->format->cpp[plane]; 2836 2837 2837 2838 switch (fb->modifier) { 2838 2839 case DRM_FORMAT_MOD_NONE: ··· 2911 2912 * TODO: linear and Y-tiled seem fine, Yf untested, 2912 2913 */ 2913 2914 if (fb->modifier == I915_FORMAT_MOD_X_TILED) { 2914 - int cpp = drm_format_plane_cpp(fb->pixel_format, 0); 2915 + int cpp = fb->format->cpp[0]; 2915 2916 2916 2917 while ((x + w) * cpp > fb->pitches[0]) { 2917 2918 if (offset == 0) { ··· 2976 2977 * Handle the AUX surface first since 2977 2978 * the main surface setup depends on it. 
2978 2979 */ 2979 - if (fb->pixel_format == DRM_FORMAT_NV12) { 2980 + if (fb->format->format == DRM_FORMAT_NV12) { 2980 2981 ret = skl_check_nv12_aux_surface(plane_state); 2981 2982 if (ret) 2982 2983 return ret; ··· 3031 3032 I915_WRITE(PRIMCNSTALPHA(plane), 0); 3032 3033 } 3033 3034 3034 - switch (fb->pixel_format) { 3035 + switch (fb->format->format) { 3035 3036 case DRM_FORMAT_C8: 3036 3037 dspcntr |= DISPPLANE_8BPP; 3037 3038 break; ··· 3146 3147 if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) 3147 3148 dspcntr |= DISPPLANE_PIPE_CSC_ENABLE; 3148 3149 3149 - switch (fb->pixel_format) { 3150 + switch (fb->format->format) { 3150 3151 case DRM_FORMAT_C8: 3151 3152 dspcntr |= DISPPLANE_8BPP; 3152 3153 break; ··· 3277 3278 * linear buffers or in number of tiles for tiled buffers. 3278 3279 */ 3279 3280 if (drm_rotation_90_or_270(rotation)) { 3280 - int cpp = drm_format_plane_cpp(fb->pixel_format, plane); 3281 + int cpp = fb->format->cpp[plane]; 3281 3282 3282 3283 stride /= intel_tile_height(dev_priv, fb->modifier, cpp); 3283 3284 } else { 3284 3285 stride /= intel_fb_stride_alignment(dev_priv, fb->modifier, 3285 - fb->pixel_format); 3286 + fb->format->format); 3286 3287 } 3287 3288 3288 3289 return stride; ··· 3396 3397 PLANE_CTL_PIPE_GAMMA_ENABLE | 3397 3398 PLANE_CTL_PIPE_CSC_ENABLE; 3398 3399 3399 - plane_ctl |= skl_plane_ctl_format(fb->pixel_format); 3400 + plane_ctl |= skl_plane_ctl_format(fb->format->format); 3400 3401 plane_ctl |= skl_plane_ctl_tiling(fb->modifier); 3401 3402 plane_ctl |= PLANE_CTL_PLANE_GAMMA_DISABLE; 3402 3403 plane_ctl |= skl_plane_ctl_rotation(rotation); ··· 4768 4769 } 4769 4770 4770 4771 /* Check src format */ 4771 - switch (fb->pixel_format) { 4772 + switch (fb->format->format) { 4772 4773 case DRM_FORMAT_RGB565: 4773 4774 case DRM_FORMAT_XBGR8888: 4774 4775 case DRM_FORMAT_XRGB8888: ··· 4784 4785 default: 4785 4786 DRM_DEBUG_KMS("[PLANE:%d:%s] FB:%d unsupported scaling format 0x%x\n", 4786 4787 intel_plane->base.base.id, 
intel_plane->base.name, 4787 - fb->base.id, fb->pixel_format); 4788 + fb->base.id, fb->format->format); 4788 4789 return -EINVAL; 4789 4790 } 4790 4791 ··· 8692 8693 8693 8694 fb = &intel_fb->base; 8694 8695 8696 + fb->dev = dev; 8697 + 8695 8698 if (INTEL_GEN(dev_priv) >= 4) { 8696 8699 if (val & DISPPLANE_TILED) { 8697 8700 plane_config->tiling = I915_TILING_X; ··· 8703 8702 8704 8703 pixel_format = val & DISPPLANE_PIXFORMAT_MASK; 8705 8704 fourcc = i9xx_format_to_fourcc(pixel_format); 8706 - fb->pixel_format = fourcc; 8707 - fb->bits_per_pixel = drm_format_plane_cpp(fourcc, 0) * 8; 8705 + fb->format = drm_format_info(fourcc); 8708 8706 8709 8707 if (INTEL_GEN(dev_priv) >= 4) { 8710 8708 if (plane_config->tiling) ··· 8724 8724 fb->pitches[0] = val & 0xffffffc0; 8725 8725 8726 8726 aligned_height = intel_fb_align_height(dev, fb->height, 8727 - fb->pixel_format, 8727 + fb->format->format, 8728 8728 fb->modifier); 8729 8729 8730 8730 plane_config->size = fb->pitches[0] * aligned_height; 8731 8731 8732 8732 DRM_DEBUG_KMS("pipe/plane %c/%d with fb: size=%dx%d@%d, offset=%x, pitch %d, size 0x%x\n", 8733 8733 pipe_name(pipe), plane, fb->width, fb->height, 8734 - fb->bits_per_pixel, base, fb->pitches[0], 8734 + fb->format->cpp[0] * 8, base, fb->pitches[0], 8735 8735 plane_config->size); 8736 8736 8737 8737 plane_config->fb = intel_fb; ··· 9723 9723 9724 9724 fb = &intel_fb->base; 9725 9725 9726 + fb->dev = dev; 9727 + 9726 9728 val = I915_READ(PLANE_CTL(pipe, 0)); 9727 9729 if (!(val & PLANE_CTL_ENABLE)) 9728 9730 goto error; ··· 9733 9731 fourcc = skl_format_to_fourcc(pixel_format, 9734 9732 val & PLANE_CTL_ORDER_RGBX, 9735 9733 val & PLANE_CTL_ALPHA_MASK); 9736 - fb->pixel_format = fourcc; 9737 - fb->bits_per_pixel = drm_format_plane_cpp(fourcc, 0) * 8; 9734 + fb->format = drm_format_info(fourcc); 9738 9735 9739 9736 tiling = val & PLANE_CTL_TILED_MASK; 9740 9737 switch (tiling) { ··· 9766 9765 9767 9766 val = I915_READ(PLANE_STRIDE(pipe, 0)); 9768 9767 stride_mult = 
intel_fb_stride_alignment(dev_priv, fb->modifier, 9769 - fb->pixel_format); 9768 + fb->format->format); 9770 9769 fb->pitches[0] = (val & 0x3ff) * stride_mult; 9771 9770 9772 9771 aligned_height = intel_fb_align_height(dev, fb->height, 9773 - fb->pixel_format, 9772 + fb->format->format, 9774 9773 fb->modifier); 9775 9774 9776 9775 plane_config->size = fb->pitches[0] * aligned_height; 9777 9776 9778 9777 DRM_DEBUG_KMS("pipe %c with fb: size=%dx%d@%d, offset=%x, pitch %d, size 0x%x\n", 9779 9778 pipe_name(pipe), fb->width, fb->height, 9780 - fb->bits_per_pixel, base, fb->pitches[0], 9779 + fb->format->cpp[0] * 8, base, fb->pitches[0], 9781 9780 plane_config->size); 9782 9781 9783 9782 plane_config->fb = intel_fb; ··· 9836 9835 9837 9836 fb = &intel_fb->base; 9838 9837 9838 + fb->dev = dev; 9839 + 9839 9840 if (INTEL_GEN(dev_priv) >= 4) { 9840 9841 if (val & DISPPLANE_TILED) { 9841 9842 plane_config->tiling = I915_TILING_X; ··· 9847 9844 9848 9845 pixel_format = val & DISPPLANE_PIXFORMAT_MASK; 9849 9846 fourcc = i9xx_format_to_fourcc(pixel_format); 9850 - fb->pixel_format = fourcc; 9851 - fb->bits_per_pixel = drm_format_plane_cpp(fourcc, 0) * 8; 9847 + fb->format = drm_format_info(fourcc); 9852 9848 9853 9849 base = I915_READ(DSPSURF(pipe)) & 0xfffff000; 9854 9850 if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) { ··· 9868 9866 fb->pitches[0] = val & 0xffffffc0; 9869 9867 9870 9868 aligned_height = intel_fb_align_height(dev, fb->height, 9871 - fb->pixel_format, 9869 + fb->format->format, 9872 9870 fb->modifier); 9873 9871 9874 9872 plane_config->size = fb->pitches[0] * aligned_height; 9875 9873 9876 9874 DRM_DEBUG_KMS("pipe %c with fb: size=%dx%d@%d, offset=%x, pitch %d, size 0x%x\n", 9877 9875 pipe_name(pipe), fb->width, fb->height, 9878 - fb->bits_per_pixel, base, fb->pitches[0], 9876 + fb->format->cpp[0] * 8, base, fb->pitches[0], 9879 9877 plane_config->size); 9880 9878 9881 9879 plane_config->fb = intel_fb; ··· 11034 11032 11035 11033 fb = 
&dev_priv->fbdev->fb->base; 11036 11034 if (fb->pitches[0] < intel_framebuffer_pitch_for_width(mode->hdisplay, 11037 - fb->bits_per_pixel)) 11035 + fb->format->cpp[0] * 8)) 11038 11036 return NULL; 11039 11037 11040 11038 if (obj->base.size < mode->vdisplay * fb->pitches[0]) ··· 12136 12134 return -EBUSY; 12137 12135 12138 12136 /* Can't change pixel format via MI display flips. */ 12139 - if (fb->pixel_format != crtc->primary->fb->pixel_format) 12137 + if (fb->format != crtc->primary->fb->format) 12140 12138 return -EINVAL; 12141 12139 12142 12140 /* ··· 12833 12831 DRM_DEBUG_KMS("[PLANE:%d:%s] FB:%d, fb = %ux%u format = %s\n", 12834 12832 plane->base.id, plane->name, 12835 12833 fb->base.id, fb->width, fb->height, 12836 - drm_get_format_name(fb->pixel_format, &format_name)); 12834 + drm_get_format_name(fb->format->format, &format_name)); 12837 12835 if (INTEL_GEN(dev_priv) >= 9) 12838 12836 DRM_DEBUG_KMS("\tscaler:%d src %dx%d+%d+%d dst %dx%d+%d+%d\n", 12839 12837 state->scaler_id, ··· 14943 14941 .atomic_destroy_state = intel_plane_destroy_state, 14944 14942 }; 14945 14943 14944 + static int 14945 + intel_legacy_cursor_update(struct drm_plane *plane, 14946 + struct drm_crtc *crtc, 14947 + struct drm_framebuffer *fb, 14948 + int crtc_x, int crtc_y, 14949 + unsigned int crtc_w, unsigned int crtc_h, 14950 + uint32_t src_x, uint32_t src_y, 14951 + uint32_t src_w, uint32_t src_h) 14952 + { 14953 + struct drm_i915_private *dev_priv = to_i915(crtc->dev); 14954 + int ret; 14955 + struct drm_plane_state *old_plane_state, *new_plane_state; 14956 + struct intel_plane *intel_plane = to_intel_plane(plane); 14957 + struct drm_framebuffer *old_fb; 14958 + struct drm_crtc_state *crtc_state = crtc->state; 14959 + 14960 + /* 14961 + * When crtc is inactive or there is a modeset pending, 14962 + * wait for it to complete in the slowpath 14963 + */ 14964 + if (!crtc_state->active || needs_modeset(crtc_state) || 14965 + to_intel_crtc_state(crtc_state)->update_pipe) 14966 + goto 
slow; 14967 + 14968 + old_plane_state = plane->state; 14969 + 14970 + /* 14971 + * If any parameters change that may affect watermarks, 14972 + * take the slowpath. Only changing fb or position should be 14973 + * in the fastpath. 14974 + */ 14975 + if (old_plane_state->crtc != crtc || 14976 + old_plane_state->src_w != src_w || 14977 + old_plane_state->src_h != src_h || 14978 + old_plane_state->crtc_w != crtc_w || 14979 + old_plane_state->crtc_h != crtc_h || 14980 + !old_plane_state->visible || 14981 + old_plane_state->fb->modifier != fb->modifier) 14982 + goto slow; 14983 + 14984 + new_plane_state = intel_plane_duplicate_state(plane); 14985 + if (!new_plane_state) 14986 + return -ENOMEM; 14987 + 14988 + drm_atomic_set_fb_for_plane(new_plane_state, fb); 14989 + 14990 + new_plane_state->src_x = src_x; 14991 + new_plane_state->src_y = src_y; 14992 + new_plane_state->src_w = src_w; 14993 + new_plane_state->src_h = src_h; 14994 + new_plane_state->crtc_x = crtc_x; 14995 + new_plane_state->crtc_y = crtc_y; 14996 + new_plane_state->crtc_w = crtc_w; 14997 + new_plane_state->crtc_h = crtc_h; 14998 + 14999 + ret = intel_plane_atomic_check_with_state(to_intel_crtc_state(crtc->state), 15000 + to_intel_plane_state(new_plane_state)); 15001 + if (ret) 15002 + goto out_free; 15003 + 15004 + /* Visibility changed, must take slowpath. */ 15005 + if (!new_plane_state->visible) 15006 + goto slow_free; 15007 + 15008 + ret = mutex_lock_interruptible(&dev_priv->drm.struct_mutex); 15009 + if (ret) 15010 + goto out_free; 15011 + 15012 + if (INTEL_INFO(dev_priv)->cursor_needs_physical) { 15013 + int align = IS_I830(dev_priv) ? 
16 * 1024 : 256; 15014 + 15015 + ret = i915_gem_object_attach_phys(intel_fb_obj(fb), align); 15016 + if (ret) { 15017 + DRM_DEBUG_KMS("failed to attach phys object\n"); 15018 + goto out_unlock; 15019 + } 15020 + } else { 15021 + struct i915_vma *vma; 15022 + 15023 + vma = intel_pin_and_fence_fb_obj(fb, new_plane_state->rotation); 15024 + if (IS_ERR(vma)) { 15025 + DRM_DEBUG_KMS("failed to pin object\n"); 15026 + 15027 + ret = PTR_ERR(vma); 15028 + goto out_unlock; 15029 + } 15030 + } 15031 + 15032 + old_fb = old_plane_state->fb; 15033 + 15034 + i915_gem_track_fb(intel_fb_obj(old_fb), intel_fb_obj(fb), 15035 + intel_plane->frontbuffer_bit); 15036 + 15037 + /* Swap plane state */ 15038 + new_plane_state->fence = old_plane_state->fence; 15039 + *to_intel_plane_state(old_plane_state) = *to_intel_plane_state(new_plane_state); 15040 + new_plane_state->fence = NULL; 15041 + new_plane_state->fb = old_fb; 15042 + 15043 + intel_plane->update_plane(plane, 15044 + to_intel_crtc_state(crtc->state), 15045 + to_intel_plane_state(plane->state)); 15046 + 15047 + intel_cleanup_plane_fb(plane, new_plane_state); 15048 + 15049 + out_unlock: 15050 + mutex_unlock(&dev_priv->drm.struct_mutex); 15051 + out_free: 15052 + intel_plane_destroy_state(plane, new_plane_state); 15053 + return ret; 15054 + 15055 + slow_free: 15056 + intel_plane_destroy_state(plane, new_plane_state); 15057 + slow: 15058 + return drm_atomic_helper_update_plane(plane, crtc, fb, 15059 + crtc_x, crtc_y, crtc_w, crtc_h, 15060 + src_x, src_y, src_w, src_h); 15061 + } 15062 + 15063 + static const struct drm_plane_funcs intel_cursor_plane_funcs = { 15064 + .update_plane = intel_legacy_cursor_update, 15065 + .disable_plane = drm_atomic_helper_disable_plane, 15066 + .destroy = intel_plane_destroy, 15067 + .set_property = drm_atomic_helper_plane_set_property, 15068 + .atomic_get_property = intel_plane_atomic_get_property, 15069 + .atomic_set_property = intel_plane_atomic_set_property, 15070 + .atomic_duplicate_state = 
intel_plane_duplicate_state, 15071 + .atomic_destroy_state = intel_plane_destroy_state, 15072 + }; 15073 + 14946 15074 static struct intel_plane * 14947 15075 intel_primary_plane_create(struct drm_i915_private *dev_priv, enum pipe pipe) 14948 15076 { ··· 15317 15185 cursor->disable_plane = intel_disable_cursor_plane; 15318 15186 15319 15187 ret = drm_universal_plane_init(&dev_priv->drm, &cursor->base, 15320 - 0, &intel_plane_funcs, 15188 + 0, &intel_cursor_plane_funcs, 15321 15189 intel_cursor_formats, 15322 15190 ARRAY_SIZE(intel_cursor_formats), 15323 15191 DRM_PLANE_TYPE_CURSOR, ··· 15998 15866 if (mode_cmd->offsets[0] != 0) 15999 15867 return -EINVAL; 16000 15868 16001 - drm_helper_mode_fill_fb_struct(&intel_fb->base, mode_cmd); 15869 + drm_helper_mode_fill_fb_struct(dev, &intel_fb->base, mode_cmd); 16002 15870 intel_fb->obj = obj; 16003 15871 16004 15872 ret = intel_fill_fb_info(dev_priv, &intel_fb->base);
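The new `intel_legacy_cursor_update()` above gates a fastpath: only updates that cannot affect watermarks (same crtc, same source and destination size, same tiling modifier, plane already visible) skip the full atomic commit; everything else jumps to `drm_atomic_helper_update_plane()`. A hedged sketch of that gate, with simplified stand-in fields rather than the real `drm_plane_state`:

```c
#include <stdbool.h>

struct cursor_state {
	int crtc_id;
	unsigned int src_w, src_h, crtc_w, crtc_h;
	unsigned long long modifier;
	bool visible;
};

/*
 * True when only fb/position changed, i.e. nothing that feeds into
 * watermark computation; mirrors the slowpath checks in the patch.
 */
static bool cursor_update_is_fastpath(const struct cursor_state *old_st,
				      const struct cursor_state *new_st)
{
	return old_st->crtc_id == new_st->crtc_id &&
	       old_st->src_w == new_st->src_w &&
	       old_st->src_h == new_st->src_h &&
	       old_st->crtc_w == new_st->crtc_w &&
	       old_st->crtc_h == new_st->crtc_h &&
	       old_st->modifier == new_st->modifier &&
	       old_st->visible;
}
```

The point of the split-out `intel_plane_atomic_check_with_state()` earlier in this series is exactly to let this fastpath reuse the plane checks without building a full atomic state.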
+3
drivers/gpu/drm/i915/intel_drv.h
··· 32 32 #include "i915_drv.h" 33 33 #include <drm/drm_crtc.h> 34 34 #include <drm/drm_crtc_helper.h> 35 + #include <drm/drm_encoder.h> 35 36 #include <drm/drm_fb_helper.h> 36 37 #include <drm/drm_dp_dual_mode_helper.h> 37 38 #include <drm/drm_dp_mst_helper.h> ··· 1815 1814 void intel_plane_destroy_state(struct drm_plane *plane, 1816 1815 struct drm_plane_state *state); 1817 1816 extern const struct drm_plane_helper_funcs intel_plane_helper_funcs; 1817 + int intel_plane_atomic_check_with_state(struct intel_crtc_state *crtc_state, 1818 + struct intel_plane_state *intel_state); 1818 1819 1819 1820 /* intel_color.c */ 1820 1821 void intel_color_init(struct drm_crtc *crtc);
+7 -7
drivers/gpu/drm/i915/intel_fbc.c
··· 188 188 u32 dpfc_ctl; 189 189 190 190 dpfc_ctl = DPFC_CTL_PLANE(params->crtc.plane) | DPFC_SR_EN; 191 - if (drm_format_plane_cpp(params->fb.pixel_format, 0) == 2) 191 + if (params->fb.format->cpp[0] == 2) 192 192 dpfc_ctl |= DPFC_CTL_LIMIT_2X; 193 193 else 194 194 dpfc_ctl |= DPFC_CTL_LIMIT_1X; ··· 235 235 int threshold = dev_priv->fbc.threshold; 236 236 237 237 dpfc_ctl = DPFC_CTL_PLANE(params->crtc.plane); 238 - if (drm_format_plane_cpp(params->fb.pixel_format, 0) == 2) 238 + if (params->fb.format->cpp[0] == 2) 239 239 threshold++; 240 240 241 241 switch (threshold) { ··· 303 303 if (IS_IVYBRIDGE(dev_priv)) 304 304 dpfc_ctl |= IVB_DPFC_CTL_PLANE(params->crtc.plane); 305 305 306 - if (drm_format_plane_cpp(params->fb.pixel_format, 0) == 2) 306 + if (params->fb.format->cpp[0] == 2) 307 307 threshold++; 308 308 309 309 switch (threshold) { ··· 581 581 WARN_ON(drm_mm_node_allocated(&fbc->compressed_fb)); 582 582 583 583 size = intel_fbc_calculate_cfb_size(dev_priv, &fbc->state_cache); 584 - fb_cpp = drm_format_plane_cpp(fbc->state_cache.fb.pixel_format, 0); 584 + fb_cpp = fbc->state_cache.fb.format->cpp[0]; 585 585 586 586 ret = find_compression_threshold(dev_priv, &fbc->compressed_fb, 587 587 size, fb_cpp); ··· 764 764 * platforms that need. 
*/ 765 765 if (IS_GEN(dev_priv, 5, 6)) 766 766 cache->fb.ilk_ggtt_offset = i915_gem_object_ggtt_offset(obj, NULL); 767 - cache->fb.pixel_format = fb->pixel_format; 767 + cache->fb.format = fb->format; 768 768 cache->fb.stride = fb->pitches[0]; 769 769 cache->fb.fence_reg = get_fence_id(fb); 770 770 cache->fb.tiling_mode = i915_gem_object_get_tiling(obj); ··· 823 823 return false; 824 824 } 825 825 826 - if (!pixel_format_is_valid(dev_priv, cache->fb.pixel_format)) { 826 + if (!pixel_format_is_valid(dev_priv, cache->fb.format->format)) { 827 827 fbc->no_fbc_reason = "pixel format is invalid"; 828 828 return false; 829 829 } ··· 892 892 params->crtc.plane = crtc->plane; 893 893 params->crtc.fence_y_offset = get_crtc_fence_y_offset(crtc); 894 894 895 - params->fb.pixel_format = cache->fb.pixel_format; 895 + params->fb.format = cache->fb.format; 896 896 params->fb.stride = cache->fb.stride; 897 897 params->fb.fence_reg = cache->fb.fence_reg; 898 898
+5 -5
drivers/gpu/drm/i915/intel_fbdev.c
··· 261 261 /* This driver doesn't need a VT switch to restore the mode on resume */ 262 262 info->skip_vt_switch = true; 263 263 264 - drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); 264 + drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth); 265 265 drm_fb_helper_fill_var(info, &ifbdev->helper, sizes->fb_width, sizes->fb_height); 266 266 267 267 /* If the object is shmemfs backed, it will have given us zeroed pages. ··· 621 621 * rather than the current pipe's, since they differ. 622 622 */ 623 623 cur_size = intel_crtc->config->base.adjusted_mode.crtc_hdisplay; 624 - cur_size = cur_size * fb->base.bits_per_pixel / 8; 624 + cur_size = cur_size * fb->base.format->cpp[0]; 625 625 if (fb->base.pitches[0] < cur_size) { 626 626 DRM_DEBUG_KMS("fb not wide enough for plane %c (%d vs %d)\n", 627 627 pipe_name(intel_crtc->pipe), ··· 632 632 633 633 cur_size = intel_crtc->config->base.adjusted_mode.crtc_vdisplay; 634 634 cur_size = intel_fb_align_height(dev, cur_size, 635 - fb->base.pixel_format, 635 + fb->base.format->format, 636 636 fb->base.modifier); 637 637 cur_size *= fb->base.pitches[0]; 638 638 DRM_DEBUG_KMS("pipe %c area: %dx%d, bpp: %d, size: %d\n", 639 639 pipe_name(intel_crtc->pipe), 640 640 intel_crtc->config->base.adjusted_mode.crtc_hdisplay, 641 641 intel_crtc->config->base.adjusted_mode.crtc_vdisplay, 642 - fb->base.bits_per_pixel, 642 + fb->base.format->cpp[0] * 8, 643 643 cur_size); 644 644 645 645 if (cur_size > max_size) { ··· 660 660 goto out; 661 661 } 662 662 663 - ifbdev->preferred_bpp = fb->base.bits_per_pixel; 663 + ifbdev->preferred_bpp = fb->base.format->cpp[0] * 8; 664 664 ifbdev->fb = fb; 665 665 666 666 drm_framebuffer_reference(&ifbdev->fb->base);
+12 -14
drivers/gpu/drm/i915/intel_overlay.c
··· 659 659 static void update_colorkey(struct intel_overlay *overlay, 660 660 struct overlay_registers __iomem *regs) 661 661 { 662 + const struct drm_framebuffer *fb = 663 + overlay->crtc->base.primary->fb; 662 664 u32 key = overlay->color_key; 663 665 u32 flags; 664 666 ··· 668 666 if (overlay->color_key_enabled) 669 667 flags |= DST_KEY_ENABLE; 670 668 671 - switch (overlay->crtc->base.primary->fb->bits_per_pixel) { 672 - case 8: 669 + switch (fb->format->format) { 670 + case DRM_FORMAT_C8: 673 671 key = 0; 674 672 flags |= CLK_RGB8I_MASK; 675 673 break; 676 - 677 - case 16: 678 - if (overlay->crtc->base.primary->fb->depth == 15) { 679 - key = RGB15_TO_COLORKEY(key); 680 - flags |= CLK_RGB15_MASK; 681 - } else { 682 - key = RGB16_TO_COLORKEY(key); 683 - flags |= CLK_RGB16_MASK; 684 - } 674 + case DRM_FORMAT_XRGB1555: 675 + key = RGB15_TO_COLORKEY(key); 676 + flags |= CLK_RGB15_MASK; 685 677 break; 686 - 687 - case 24: 688 - case 32: 678 + case DRM_FORMAT_RGB565: 679 + key = RGB16_TO_COLORKEY(key); 680 + flags |= CLK_RGB16_MASK; 681 + break; 682 + default: 689 683 flags |= CLK_RGB24_MASK; 690 684 break; 691 685 }
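The overlay colorkey hunk shows why switching on the fourcc beats switching on `bits_per_pixel`: at 16 bpp the old code needed a second `fb->depth == 15` check to tell XRGB1555 from RGB565, while the fourcc names the layout outright. A sketch of the converted dispatch (real fourcc codes; the `CLK_*` masks are toy values, not the actual register bits):

```c
#include <stdint.h>

#define fourcc_code(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

#define DRM_FORMAT_C8       fourcc_code('C', '8', ' ', ' ')
#define DRM_FORMAT_XRGB1555 fourcc_code('X', 'R', '1', '5')
#define DRM_FORMAT_RGB565   fourcc_code('R', 'G', '1', '6')

/* Toy stand-ins for the CLK_* colorkey register flags. */
#define CLK_RGB8I_MASK 0x1u
#define CLK_RGB15_MASK 0x2u
#define CLK_RGB16_MASK 0x4u
#define CLK_RGB24_MASK 0x8u

static uint32_t colorkey_flags(uint32_t fourcc)
{
	switch (fourcc) {
	case DRM_FORMAT_C8:
		return CLK_RGB8I_MASK;	/* indexed: key value is ignored */
	case DRM_FORMAT_XRGB1555:
		return CLK_RGB15_MASK;	/* 5:5:5, was ambiguous at "16 bpp" */
	case DRM_FORMAT_RGB565:
		return CLK_RGB16_MASK;	/* 5:6:5 */
	default:
		return CLK_RGB24_MASK;	/* 24/32 bpp formats */
	}
}
```

One switch on `fb->format->format` replaces the nested bpp-plus-depth test, and it is also why the `fb->depth` field can eventually go away for drivers.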
+37 -30
drivers/gpu/drm/i915/intel_pm.c
···
 		&crtc->config->base.adjusted_mode;
 	const struct drm_framebuffer *fb =
 		crtc->base.primary->state->fb;
-	int cpp = drm_format_plane_cpp(fb->pixel_format, 0);
+	int cpp = fb->format->cpp[0];
 	int clock = adjusted_mode->crtc_clock;

 	/* Display SR */
···
 	clock = adjusted_mode->crtc_clock;
 	htotal = adjusted_mode->crtc_htotal;
 	hdisplay = crtc->config->pipe_src_w;
-	cpp = drm_format_plane_cpp(fb->pixel_format, 0);
+	cpp = fb->format->cpp[0];

 	/* Use the small buffer method to calculate plane watermark */
 	entries = ((clock * cpp / 1000) * display_latency_ns) / 1000;
···
 	clock = adjusted_mode->crtc_clock;
 	htotal = adjusted_mode->crtc_htotal;
 	hdisplay = crtc->config->pipe_src_w;
-	cpp = drm_format_plane_cpp(fb->pixel_format, 0);
+	cpp = fb->format->cpp[0];

 	line_time_us = max(htotal * 1000 / clock, 1);
 	line_count = (latency_ns / line_time_us + 1000) / 1000;
···
 	if (!state->base.visible)
 		return 0;

-	cpp = drm_format_plane_cpp(state->base.fb->pixel_format, 0);
+	cpp = state->base.fb->format->cpp[0];
 	clock = crtc->config->base.adjusted_mode.crtc_clock;
 	htotal = crtc->config->base.adjusted_mode.crtc_htotal;
 	width = crtc->config->pipe_src_w;
···
 	if (state->base.visible) {
 		wm_state->num_active_planes++;
-		total_rate += drm_format_plane_cpp(state->base.fb->pixel_format, 0);
+		total_rate += state->base.fb->format->cpp[0];
 	}
···
 		continue;
 	}

-	rate = drm_format_plane_cpp(state->base.fb->pixel_format, 0);
+	rate = state->base.fb->format->cpp[0];
 	plane->wm.fifo_size = fifo_size * rate / total_rate;
 	fifo_left -= plane->wm.fifo_size;
···
 	int clock = adjusted_mode->crtc_clock;
 	int htotal = adjusted_mode->crtc_htotal;
 	int hdisplay = crtc->config->pipe_src_w;
-	int cpp = drm_format_plane_cpp(fb->pixel_format, 0);
+	int cpp = fb->format->cpp[0];
 	unsigned long line_time_us;
 	int entries;
···
 	if (IS_GEN2(dev_priv))
 		cpp = 4;
 	else
-		cpp = drm_format_plane_cpp(fb->pixel_format, 0);
+		cpp = fb->format->cpp[0];

 	planea_wm = intel_calculate_wm(adjusted_mode->crtc_clock,
 				       wm_info, fifo_size, cpp,
···
 	if (IS_GEN2(dev_priv))
 		cpp = 4;
 	else
-		cpp = drm_format_plane_cpp(fb->pixel_format, 0);
+		cpp = fb->format->cpp[0];

 	planeb_wm = intel_calculate_wm(adjusted_mode->crtc_clock,
 				       wm_info, fifo_size, cpp,
···
 	if (IS_I915GM(dev_priv) || IS_I945GM(dev_priv))
 		cpp = 4;
 	else
-		cpp = drm_format_plane_cpp(fb->pixel_format, 0);
+		cpp = fb->format->cpp[0];

 	line_time_us = max(htotal * 1000 / clock, 1);
···
 				  uint32_t mem_value,
 				  bool is_lp)
 {
-	int cpp = pstate->base.fb ?
-		drm_format_plane_cpp(pstate->base.fb->pixel_format, 0) : 0;
 	uint32_t method1, method2;
+	int cpp;

 	if (!cstate->base.active || !pstate->base.visible)
 		return 0;
+
+	cpp = pstate->base.fb->format->cpp[0];

 	method1 = ilk_wm_method1(ilk_pipe_pixel_rate(cstate), cpp, mem_value);
···
 				  const struct intel_plane_state *pstate,
 				  uint32_t mem_value)
 {
-	int cpp = pstate->base.fb ?
-		drm_format_plane_cpp(pstate->base.fb->pixel_format, 0) : 0;
 	uint32_t method1, method2;
+	int cpp;

 	if (!cstate->base.active || !pstate->base.visible)
 		return 0;
+
+	cpp = pstate->base.fb->format->cpp[0];

 	method1 = ilk_wm_method1(ilk_pipe_pixel_rate(cstate), cpp, mem_value);
 	method2 = ilk_wm_method2(ilk_pipe_pixel_rate(cstate),
···
 			      const struct intel_plane_state *pstate,
 			      uint32_t pri_val)
 {
-	int cpp = pstate->base.fb ?
-		drm_format_plane_cpp(pstate->base.fb->pixel_format, 0) : 0;
+	int cpp;

 	if (!cstate->base.active || !pstate->base.visible)
 		return 0;
+
+	cpp = pstate->base.fb->format->cpp[0];

 	return ilk_wm_fbc(pri_val, drm_rect_width(&pstate->base.dst), cpp);
 }
···
 			     int y)
 {
 	struct intel_plane_state *intel_pstate = to_intel_plane_state(pstate);
-	struct drm_framebuffer *fb = pstate->fb;
 	uint32_t down_scale_amount, data_rate;
 	uint32_t width = 0, height = 0;
-	unsigned format = fb ? fb->pixel_format : DRM_FORMAT_XRGB8888;
+	struct drm_framebuffer *fb;
+	u32 format;

 	if (!intel_pstate->base.visible)
 		return 0;
+
+	fb = pstate->fb;
+	format = fb->format->format;
+
 	if (pstate->plane->type == DRM_PLANE_TYPE_CURSOR)
 		return 0;
 	if (y && format != DRM_FORMAT_NV12)
···
 	if (format == DRM_FORMAT_NV12) {
 		if (y)	/* y-plane data rate */
 			data_rate = width * height *
-				drm_format_plane_cpp(format, 0);
+				fb->format->cpp[0];
 		else	/* uv-plane data rate */
 			data_rate = (width / 2) * (height / 2) *
-				drm_format_plane_cpp(format, 1);
+				fb->format->cpp[1];
 	} else {
 		/* for packed formats */
-		data_rate = width * height * drm_format_plane_cpp(format, 0);
+		data_rate = width * height * fb->format->cpp[0];
 	}

 	down_scale_amount = skl_plane_downscale_amount(intel_pstate);
···
 		return 0;

 	/* For packed formats, no y-plane, return 0 */
-	if (y && fb->pixel_format != DRM_FORMAT_NV12)
+	if (y && fb->format->format != DRM_FORMAT_NV12)
 		return 0;

 	/* For Non Y-tile return 8-blocks */
···
 		swap(src_w, src_h);

 	/* Halve UV plane width and height for NV12 */
-	if (fb->pixel_format == DRM_FORMAT_NV12 && !y) {
+	if (fb->format->format == DRM_FORMAT_NV12 && !y) {
 		src_w /= 2;
 		src_h /= 2;
 	}

-	if (fb->pixel_format == DRM_FORMAT_NV12 && !y)
-		plane_bpp = drm_format_plane_cpp(fb->pixel_format, 1);
+	if (fb->format->format == DRM_FORMAT_NV12 && !y)
+		plane_bpp = fb->format->cpp[1];
 	else
-		plane_bpp = drm_format_plane_cpp(fb->pixel_format, 0);
+		plane_bpp = fb->format->cpp[0];

 	if (drm_rotation_90_or_270(pstate->rotation)) {
 		switch (plane_bpp) {
···
 	if (drm_rotation_90_or_270(pstate->rotation))
 		swap(width, height);

-	cpp = drm_format_plane_cpp(fb->pixel_format, 0);
+	cpp = fb->format->cpp[0];
 	plane_pixel_rate = skl_adjusted_plane_pixel_rate(cstate, intel_pstate);

 	if (drm_rotation_90_or_270(pstate->rotation)) {
-		int cpp = (fb->pixel_format == DRM_FORMAT_NV12) ?
-			drm_format_plane_cpp(fb->pixel_format, 1) :
-			drm_format_plane_cpp(fb->pixel_format, 0);
+		int cpp = (fb->format->format == DRM_FORMAT_NV12) ?
+			fb->format->cpp[1] :
+			fb->format->cpp[0];

 		switch (cpp) {
 		case 1:
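The common thread in the intel_pm.c hunks is Ville's format-handling rework: per-call lookups such as `drm_format_plane_cpp(fb->pixel_format, 0)` are replaced by a `struct drm_format_info` pointer cached on the framebuffer, so `cpp` becomes a plain array access. A minimal userspace sketch of the idea, assuming a hypothetical, abbreviated format table (the struct fields mirror the shape of the kernel's `drm_format_info`, but this table and its entries are illustrative only):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mirrors the shape of the kernel's struct drm_format_info (abbreviated). */
struct drm_format_info {
	uint32_t format;     /* fourcc code */
	uint8_t depth;       /* legacy color depth, 0 for YUV */
	uint8_t num_planes;
	uint8_t cpp[3];      /* bytes per pixel, per plane */
};

#define FMT_XRGB8888 0x34325258u  /* fourcc 'XR24' */
#define FMT_NV12     0x3231564eu  /* fourcc 'NV12' */

/* Hypothetical, abbreviated format table for illustration. */
static const struct drm_format_info format_table[] = {
	{ FMT_XRGB8888, 24, 1, { 4, 0, 0 } },
	{ FMT_NV12,      0, 2, { 1, 2, 0 } },
};

/* Old style: look the info up on every query, keyed by fourcc. */
static const struct drm_format_info *drm_format_info(uint32_t format)
{
	for (size_t i = 0; i < sizeof(format_table) / sizeof(format_table[0]); i++)
		if (format_table[i].format == format)
			return &format_table[i];
	return NULL;
}

static int drm_format_plane_cpp(uint32_t format, int plane)
{
	const struct drm_format_info *info = drm_format_info(format);

	if (!info || plane >= info->num_planes)
		return 0;
	return info->cpp[plane];
}

/* New style: the framebuffer caches the info pointer once at creation. */
struct drm_framebuffer {
	const struct drm_format_info *format;
};
```

With the pointer filled in once at framebuffer creation, every `drm_format_plane_cpp(fb->pixel_format, 0)` call site collapses to `fb->format->cpp[0]`, and format equality can be tested by comparing the info pointers directly, as the imx `fb->format != old_fb->format` check below does.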
+7 -7
drivers/gpu/drm/i915/intel_sprite.c
···
 		    PLANE_CTL_PIPE_GAMMA_ENABLE |
 		    PLANE_CTL_PIPE_CSC_ENABLE;

-	plane_ctl |= skl_plane_ctl_format(fb->pixel_format);
+	plane_ctl |= skl_plane_ctl_format(fb->format->format);
 	plane_ctl |= skl_plane_ctl_tiling(fb->modifier);

 	plane_ctl |= skl_plane_ctl_rotation(rotation);
···
 	sprctl = SP_ENABLE;

-	switch (fb->pixel_format) {
+	switch (fb->format->format) {
 	case DRM_FORMAT_YUYV:
 		sprctl |= SP_FORMAT_YUV422 | SP_YUV_ORDER_YUYV;
 		break;
···
 		sprctl |= SP_SOURCE_KEY;

 	if (IS_CHERRYVIEW(dev_priv) && pipe == PIPE_B)
-		chv_update_csc(intel_plane, fb->pixel_format);
+		chv_update_csc(intel_plane, fb->format->format);

 	I915_WRITE(SPSTRIDE(pipe, plane), fb->pitches[0]);
 	I915_WRITE(SPPOS(pipe, plane), (crtc_y << 16) | crtc_x);
···
 	sprctl = SPRITE_ENABLE;

-	switch (fb->pixel_format) {
+	switch (fb->format->format) {
 	case DRM_FORMAT_XBGR8888:
 		sprctl |= SPRITE_FORMAT_RGBX888 | SPRITE_RGB_ORDER_RGBX;
 		break;
···
 	dvscntr = DVS_ENABLE;

-	switch (fb->pixel_format) {
+	switch (fb->format->format) {
 	case DRM_FORMAT_XBGR8888:
 		dvscntr |= DVS_FORMAT_RGBX888 | DVS_RGB_ORDER_XBGR;
 		break;
···
 	src_y = src->y1 >> 16;
 	src_h = drm_rect_height(src) >> 16;

-	if (format_is_yuv(fb->pixel_format)) {
+	if (format_is_yuv(fb->format->format)) {
 		src_x &= ~1;
 		src_w &= ~1;
···
 	/* Check size restrictions when scaling */
 	if (state->base.visible && (src_w != crtc_w || src_h != crtc_h)) {
 		unsigned int width_bytes;
-		int cpp = drm_format_plane_cpp(fb->pixel_format, 0);
+		int cpp = fb->format->cpp[0];

 		WARN_ON(!can_scale);
+2 -6
drivers/gpu/drm/imx/imx-ldb.c
···
 				 DRM_MODE_ENCODER_LVDS, NULL);

 	if (imx_ldb_ch->bridge) {
-		imx_ldb_ch->bridge->encoder = encoder;
-
-		imx_ldb_ch->encoder.bridge = imx_ldb_ch->bridge;
-		ret = drm_bridge_attach(drm, imx_ldb_ch->bridge);
+		ret = drm_bridge_attach(&imx_ldb_ch->encoder,
+					imx_ldb_ch->bridge, NULL);
 		if (ret) {
 			DRM_ERROR("Failed to initialize bridge with drm\n");
 			return ret;
···
 	for (i = 0; i < 2; i++) {
 		struct imx_ldb_channel *channel = &imx_ldb->channel[i];

-		if (channel->bridge)
-			drm_bridge_detach(channel->bridge);
 		if (channel->panel)
 			drm_panel_detach(channel->panel);
+20 -20
drivers/gpu/drm/imx/ipuv3-plane.c
···
 	BUG_ON(!cma_obj);

 	return cma_obj->paddr + fb->offsets[0] + fb->pitches[0] * y +
-	       drm_format_plane_cpp(fb->pixel_format, 0) * x;
+	       fb->format->cpp[0] * x;
 }

 static inline unsigned long
···
 	cma_obj = drm_fb_cma_get_gem_obj(fb, 1);
 	BUG_ON(!cma_obj);

-	x /= drm_format_horz_chroma_subsampling(fb->pixel_format);
-	y /= drm_format_vert_chroma_subsampling(fb->pixel_format);
+	x /= drm_format_horz_chroma_subsampling(fb->format->format);
+	y /= drm_format_vert_chroma_subsampling(fb->format->format);

 	return cma_obj->paddr + fb->offsets[1] + fb->pitches[1] * y +
-	       drm_format_plane_cpp(fb->pixel_format, 1) * x - eba;
+	       fb->format->cpp[1] * x - eba;
 }

 static inline unsigned long
···
 	cma_obj = drm_fb_cma_get_gem_obj(fb, 2);
 	BUG_ON(!cma_obj);

-	x /= drm_format_horz_chroma_subsampling(fb->pixel_format);
-	y /= drm_format_vert_chroma_subsampling(fb->pixel_format);
+	x /= drm_format_horz_chroma_subsampling(fb->format->format);
+	y /= drm_format_vert_chroma_subsampling(fb->format->format);

 	return cma_obj->paddr + fb->offsets[2] + fb->pitches[2] * y +
-	       drm_format_plane_cpp(fb->pixel_format, 2) * x - eba;
+	       fb->format->cpp[2] * x - eba;
 }

 void ipu_plane_put_resources(struct ipu_plane *ipu_plane)
···
 	 */
 	if (old_fb && (state->src_w != old_state->src_w ||
 		       state->src_h != old_state->src_h ||
-		       fb->pixel_format != old_fb->pixel_format))
+		       fb->format != old_fb->format))
 		crtc_state->mode_changed = true;

 	eba = drm_plane_state_to_eba(state);
···
 	if (old_fb && fb->pitches[0] != old_fb->pitches[0])
 		crtc_state->mode_changed = true;

-	switch (fb->pixel_format) {
+	switch (fb->format->format) {
 	case DRM_FORMAT_YUV420:
 	case DRM_FORMAT_YVU420:
 	case DRM_FORMAT_YUV422:
···
 	if (vbo & 0x7 || vbo > 0xfffff8)
 		return -EINVAL;

-	if (old_fb && (fb->pixel_format == old_fb->pixel_format)) {
+	if (old_fb && (fb->format == old_fb->format)) {
 		old_vbo = drm_plane_state_to_vbo(old_state);
 		if (vbo != old_vbo)
 			crtc_state->mode_changed = true;
···
 	if (ubo & 0x7 || ubo > 0xfffff8)
 		return -EINVAL;

-	if (old_fb && (fb->pixel_format == old_fb->pixel_format)) {
+	if (old_fb && (fb->format == old_fb->format)) {
 		old_ubo = drm_plane_state_to_ubo(old_state);
 		if (ubo != old_ubo)
 			crtc_state->mode_changed = true;
···
 	 * The x/y offsets must be even in case of horizontal/vertical
 	 * chroma subsampling.
 	 */
-	hsub = drm_format_horz_chroma_subsampling(fb->pixel_format);
-	vsub = drm_format_vert_chroma_subsampling(fb->pixel_format);
+	hsub = drm_format_horz_chroma_subsampling(fb->format->format);
+	vsub = drm_format_vert_chroma_subsampling(fb->format->format);
 	if (((state->src_x >> 16) & (hsub - 1)) ||
 	    ((state->src_y >> 16) & (vsub - 1)))
 		return -EINVAL;
···
 		ipu_dp_set_global_alpha(ipu_plane->dp, true, 0, true);
 		break;
 	case IPU_DP_FLOW_SYNC_FG:
-		ics = ipu_drm_fourcc_to_colorspace(state->fb->pixel_format);
+		ics = ipu_drm_fourcc_to_colorspace(state->fb->format->format);
 		ipu_dp_setup_channel(ipu_plane->dp, ics,
 				     IPUV3_COLORSPACE_UNKNOWN);
 		ipu_dp_set_window_pos(ipu_plane->dp, state->crtc_x,
 				      state->crtc_y);
 		/* Enable local alpha on partial plane */
-		switch (state->fb->pixel_format) {
+		switch (state->fb->format->format) {
 		case DRM_FORMAT_ARGB1555:
 		case DRM_FORMAT_ABGR1555:
 		case DRM_FORMAT_RGBA5551:
···
 	ipu_cpmem_zero(ipu_plane->ipu_ch);
 	ipu_cpmem_set_resolution(ipu_plane->ipu_ch, state->src_w >> 16,
 				 state->src_h >> 16);
-	ipu_cpmem_set_fmt(ipu_plane->ipu_ch, state->fb->pixel_format);
+	ipu_cpmem_set_fmt(ipu_plane->ipu_ch, state->fb->format->format);
 	ipu_cpmem_set_high_priority(ipu_plane->ipu_ch);
 	ipu_idmac_set_double_buffer(ipu_plane->ipu_ch, 1);
 	ipu_cpmem_set_stride(ipu_plane->ipu_ch, state->fb->pitches[0]);
-	switch (fb->pixel_format) {
+	switch (fb->format->format) {
 	case DRM_FORMAT_YUV420:
 	case DRM_FORMAT_YVU420:
 	case DRM_FORMAT_YUV422:
···
 	case DRM_FORMAT_YVU444:
 		ubo = drm_plane_state_to_ubo(state);
 		vbo = drm_plane_state_to_vbo(state);
-		if (fb->pixel_format == DRM_FORMAT_YVU420 ||
-		    fb->pixel_format == DRM_FORMAT_YVU422 ||
-		    fb->pixel_format == DRM_FORMAT_YVU444)
+		if (fb->format->format == DRM_FORMAT_YVU420 ||
+		    fb->format->format == DRM_FORMAT_YVU422 ||
+		    fb->format->format == DRM_FORMAT_YVU444)
 			swap(ubo, vbo);

 		ipu_cpmem_set_yuv_planar_full(ipu_plane->ipu_ch,
+1 -5
drivers/gpu/drm/imx/parallel-display.c
···
 	drm_panel_attach(imxpd->panel, &imxpd->connector);

 	if (imxpd->bridge) {
-		imxpd->bridge->encoder = encoder;
-		encoder->bridge = imxpd->bridge;
-		ret = drm_bridge_attach(drm, imxpd->bridge);
+		ret = drm_bridge_attach(encoder, imxpd->bridge, NULL);
 		if (ret < 0) {
 			dev_err(imxpd->dev, "failed to attach bridge: %d\n",
 				ret);
···
 {
 	struct imx_parallel_display *imxpd = dev_get_drvdata(dev);

-	if (imxpd->bridge)
-		drm_bridge_detach(imxpd->bridge);
 	if (imxpd->panel)
 		drm_panel_detach(imxpd->panel);
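The bridge hunks in the imx, mediatek, and msm drivers all follow Laurent's bridge cleanup: instead of each driver hand-wiring `bridge->encoder` and `encoder->bridge` before calling `drm_bridge_attach(drm, bridge)`, the attach call now takes the encoder plus an optional previous bridge and performs the linking itself. A hedged userspace sketch of that ownership transfer, with simplified stand-in structs rather than the kernel's real definitions:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct drm_encoder;

/* Simplified stand-ins for the kernel structs, for illustration only. */
struct drm_bridge {
	struct drm_encoder *encoder;
	struct drm_bridge *next;
};

struct drm_encoder {
	struct drm_bridge *bridge;  /* head of the bridge chain */
};

/*
 * New-style attach: the core links the bridge either directly to the
 * encoder or after a previous bridge, so callers stop poking at the
 * encoder/bridge pointers themselves.
 */
static int drm_bridge_attach(struct drm_encoder *encoder,
			     struct drm_bridge *bridge,
			     struct drm_bridge *previous)
{
	if (!encoder || !bridge)
		return -EINVAL;
	if (previous && previous->encoder != encoder)
		return -EINVAL;

	bridge->encoder = encoder;
	if (previous)
		previous->next = bridge;
	else
		encoder->bridge = bridge;
	return 0;
}
```

This is why the mtk_hdmi hunk below passes a bridge as the third argument (chaining the external bridge after the internal one) while simple encoders such as imx pass `NULL`.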
+41
drivers/gpu/drm/lib/drm_random.c
···
+#include <linux/bitops.h>
+#include <linux/kernel.h>
+#include <linux/random.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "drm_random.h"
+
+static inline u32 drm_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
+{
+	return upper_32_bits((u64)prandom_u32_state(state) * ep_ro);
+}
+
+void drm_random_reorder(unsigned int *order, unsigned int count,
+			struct rnd_state *state)
+{
+	unsigned int i, j;
+
+	for (i = 0; i < count; ++i) {
+		BUILD_BUG_ON(sizeof(unsigned int) > sizeof(u32));
+		j = drm_prandom_u32_max_state(count, state);
+		swap(order[i], order[j]);
+	}
+}
+EXPORT_SYMBOL(drm_random_reorder);
+
+unsigned int *drm_random_order(unsigned int count, struct rnd_state *state)
+{
+	unsigned int *order, i;
+
+	order = kmalloc_array(count, sizeof(*order), GFP_TEMPORARY);
+	if (!order)
+		return order;
+
+	for (i = 0; i < count; i++)
+		order[i] = i;
+
+	drm_random_reorder(order, count, state);
+	return order;
+}
+EXPORT_SYMBOL(drm_random_order);
+25
drivers/gpu/drm/lib/drm_random.h
···
+#ifndef __DRM_RANDOM_H__
+#define __DRM_RANDOM_H__
+
+/* This is a temporary home for a couple of utility functions that should
+ * be transposed to lib/ at the earliest convenience.
+ */
+
+#include <linux/random.h>
+
+#define DRM_RND_STATE_INITIALIZER(seed__) ({				\
+	struct rnd_state state__;					\
+	prandom_seed_state(&state__, (seed__));				\
+	state__;							\
+})
+
+#define DRM_RND_STATE(name__, seed__) \
+	struct rnd_state name__ = DRM_RND_STATE_INITIALIZER(seed__)
+
+unsigned int *drm_random_order(unsigned int count,
+			       struct rnd_state *state);
+void drm_random_reorder(unsigned int *order,
+			unsigned int count,
+			struct rnd_state *state);
+
+#endif /* !__DRM_RANDOM_H__ */
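The new `drm_random` helpers give the drm_mm selftests a seeded, reproducible shuffle: build the identity permutation, then swap each slot with a pseudo-randomly chosen index. The same logic in portable userspace C, with a tiny xorshift32 generator standing in for the kernel's `prandom_u32_state()` (note the kernel routine swaps with any index `j`, not the classic Fisher-Yates `j >= i`, which still yields a valid permutation):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Tiny xorshift32 PRNG, a stand-in for prandom_u32_state(). State must be nonzero. */
static uint32_t xorshift32(uint32_t *state)
{
	uint32_t x = *state;

	x ^= x << 13;
	x ^= x >> 17;
	x ^= x << 5;
	return *state = x;
}

/* Map a 32-bit random value into [0, ep_ro) via the high half of a
 * 32x32 multiply, mirroring drm_prandom_u32_max_state(). */
static uint32_t u32_below(uint32_t ep_ro, uint32_t *state)
{
	return (uint32_t)(((uint64_t)xorshift32(state) * ep_ro) >> 32);
}

/* Shuffle in place, mirroring drm_random_reorder(). */
static void reorder(unsigned int *order, unsigned int count, uint32_t *state)
{
	for (unsigned int i = 0; i < count; i++) {
		unsigned int j = u32_below(count, state);
		unsigned int tmp = order[i];

		order[i] = order[j];
		order[j] = tmp;
	}
}

/* Build [0 .. count-1] and shuffle it, mirroring drm_random_order(). */
static unsigned int *random_order(unsigned int count, uint32_t *state)
{
	unsigned int *order = malloc(count * sizeof(*order));

	if (!order)
		return NULL;
	for (unsigned int i = 0; i < count; i++)
		order[i] = i;
	reorder(order, count, state);
	return order;
}
```

Because the state is seeded explicitly (in the kernel via `DRM_RND_STATE(name, seed)`), a failing selftest run can be replayed exactly by reusing the same seed.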
+4 -4
drivers/gpu/drm/mediatek/mtk_dpi.c
···
 struct mtk_dpi {
 	struct mtk_ddp_comp ddp_comp;
 	struct drm_encoder encoder;
+	struct drm_bridge *bridge;
 	void __iomem *regs;
 	struct device *dev;
 	struct clk *engine_clk;
···
 	/* Currently DPI0 is fixed to be driven by OVL1 */
 	dpi->encoder.possible_crtcs = BIT(1);

-	dpi->encoder.bridge->encoder = &dpi->encoder;
-	ret = drm_bridge_attach(dpi->encoder.dev, dpi->encoder.bridge);
+	ret = drm_bridge_attach(&dpi->encoder, dpi->bridge, NULL);
 	if (ret) {
 		dev_err(dev, "Failed to attach bridge: %d\n", ret);
 		goto err_cleanup;
···
 	dev_info(dev, "Found bridge node: %s\n", bridge_node->full_name);

-	dpi->encoder.bridge = of_drm_find_bridge(bridge_node);
+	dpi->bridge = of_drm_find_bridge(bridge_node);
 	of_node_put(bridge_node);
-	if (!dpi->encoder.bridge)
+	if (!dpi->bridge)
 		return -EPROBE_DEFER;

 	comp_id = mtk_ddp_comp_get_id(dev->of_node, MTK_DPI);
+2 -1
drivers/gpu/drm/mediatek/mtk_drm_drv.c
···
 {
 	struct mtk_drm_private *private = dev_get_drvdata(dev);

-	drm_put_dev(private->drm);
+	drm_dev_unregister(private->drm);
+	drm_dev_unref(private->drm);
 	private->drm = NULL;
 }
+1 -1
drivers/gpu/drm/mediatek/mtk_drm_fb.c
···
 	if (!mtk_fb)
 		return ERR_PTR(-ENOMEM);

-	drm_helper_mode_fill_fb_struct(&mtk_fb->base, mode);
+	drm_helper_mode_fill_fb_struct(dev, &mtk_fb->base, mode);

 	mtk_fb->gem_obj = obj;
+2 -2
drivers/gpu/drm/mediatek/mtk_drm_plane.c
···
 	mtk_gem = to_mtk_gem_obj(gem);
 	addr = mtk_gem->dma_addr;
 	pitch = fb->pitches[0];
-	format = fb->pixel_format;
+	format = fb->format->format;

-	addr += (plane->state->src.x1 >> 16) * drm_format_plane_cpp(format, 0);
+	addr += (plane->state->src.x1 >> 16) * fb->format->cpp[0];
 	addr += (plane->state->src.y1 >> 16) * pitch;

 	state->pending.enable = true;
+3 -21
drivers/gpu/drm/mediatek/mtk_dsi.c
···
 	.get_modes = mtk_dsi_connector_get_modes,
 };

-static int mtk_drm_attach_bridge(struct drm_bridge *bridge,
-				 struct drm_encoder *encoder)
-{
-	int ret;
-
-	if (!bridge)
-		return -ENOENT;
-
-	encoder->bridge = bridge;
-	bridge->encoder = encoder;
-	ret = drm_bridge_attach(encoder->dev, bridge);
-	if (ret) {
-		DRM_ERROR("Failed to attach bridge to drm\n");
-		encoder->bridge = NULL;
-		bridge->encoder = NULL;
-	}
-
-	return ret;
-}
-
 static int mtk_dsi_create_connector(struct drm_device *drm, struct mtk_dsi *dsi)
 {
 	int ret;
···
 	dsi->encoder.possible_crtcs = 1;

 	/* If there's a bridge, attach to it and let it create the connector */
-	ret = mtk_drm_attach_bridge(dsi->bridge, &dsi->encoder);
+	ret = drm_bridge_attach(&dsi->encoder, dsi->bridge, NULL);
 	if (ret) {
+		DRM_ERROR("Failed to attach bridge to drm\n");
+
 		/* Otherwise create our own connector and attach to a panel */
 		ret = mtk_dsi_create_connector(drm, dsi);
 		if (ret)
+6 -5
drivers/gpu/drm/mediatek/mtk_hdmi.c
···
 struct mtk_hdmi {
 	struct drm_bridge bridge;
+	struct drm_bridge *next_bridge;
 	struct drm_connector conn;
 	struct device *dev;
 	struct phy *phy;
···
 		return ret;
 	}

-	if (bridge->next) {
-		bridge->next->encoder = bridge->encoder;
-		ret = drm_bridge_attach(bridge->encoder->dev, bridge->next);
+	if (hdmi->next_bridge) {
+		ret = drm_bridge_attach(bridge->encoder, hdmi->next_bridge,
+					bridge);
 		if (ret) {
 			dev_err(hdmi->dev,
 				"Failed to attach external bridge: %d\n", ret);
···
 	of_node_put(ep);

 	if (!of_device_is_compatible(remote, "hdmi-connector")) {
-		hdmi->bridge.next = of_drm_find_bridge(remote);
-		if (!hdmi->bridge.next) {
+		hdmi->next_bridge = of_drm_find_bridge(remote);
+		if (!hdmi->next_bridge) {
 			dev_err(dev, "Waiting for external bridge\n");
 			of_node_put(remote);
 			return -EPROBE_DEFER;
+1 -1
drivers/gpu/drm/meson/meson_plane.c
···
 	if (meson_vpu_is_compatible(priv, "amlogic,meson-gxbb-vpu"))
 		priv->viu.osd1_blk0_cfg[0] |= OSD_OUTPUT_COLOR_RGB;

-	switch (fb->pixel_format) {
+	switch (fb->format->format) {
 	case DRM_FORMAT_XRGB8888:
 		/* For XRGB, replace the pixel's alpha by 0xFF */
 		writel_bits_relaxed(OSD_REPLACE_EN, OSD_REPLACE_EN,
+1
drivers/gpu/drm/mgag200/mgag200_drv.h
···
 #include <video/vga.h>

+#include <drm/drm_encoder.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/ttm/ttm_bo_api.h>
 #include <drm/ttm/ttm_bo_driver.h>
+2 -2
drivers/gpu/drm/mgag200/mgag200_fb.c
···
 	struct drm_gem_object *obj;
 	struct mgag200_bo *bo;
 	int src_offset, dst_offset;
-	int bpp = (mfbdev->mfb.base.bits_per_pixel + 7)/8;
+	int bpp = mfbdev->mfb.base.format->cpp[0];
 	int ret = -EBUSY;
 	bool unmap = false;
 	bool store_for_later = false;
···
 	info->apertures->ranges[0].base = mdev->dev->mode_config.fb_base;
 	info->apertures->ranges[0].size = mdev->mc.vram_size;

-	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
+	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
 	drm_fb_helper_fill_var(info, &mfbdev->helper, sizes->fb_width,
 			       sizes->fb_height);
+1 -1
drivers/gpu/drm/mgag200/mgag200_main.c
···
 {
 	int ret;

-	drm_helper_mode_fill_fb_struct(&gfb->base, mode_cmd);
+	drm_helper_mode_fill_fb_struct(dev, &gfb->base, mode_cmd);
 	gfb->obj = obj;
 	ret = drm_framebuffer_init(dev, &gfb->base, &mga_fb_funcs);
 	if (ret) {
+12 -11
drivers/gpu/drm/mgag200/mgag200_mode.c
···
 	WREG8(DAC_INDEX + MGA1064_INDEX, 0);

-	if (fb && fb->bits_per_pixel == 16) {
-		int inc = (fb->depth == 15) ? 8 : 4;
+	if (fb && fb->format->cpp[0] * 8 == 16) {
+		int inc = (fb->format->depth == 15) ? 8 : 4;
 		u8 r, b;
 		for (i = 0; i < MGAG200_LUT_SIZE; i += inc) {
-			if (fb->depth == 16) {
+			if (fb->format->depth == 16) {
 				if (i > (MGAG200_LUT_SIZE >> 1)) {
 					r = b = 0;
 				} else {
···
 {
 	struct drm_device *dev = crtc->dev;
 	struct mga_device *mdev = dev->dev_private;
+	const struct drm_framebuffer *fb = crtc->primary->fb;
 	int hdisplay, hsyncstart, hsyncend, htotal;
 	int vdisplay, vsyncstart, vsyncend, vtotal;
 	int pitch;
···
 		/* 0x48: */	0, 0, 0, 0, 0, 0, 0, 0
 	};

-	bppshift = mdev->bpp_shifts[(crtc->primary->fb->bits_per_pixel >> 3) - 1];
+	bppshift = mdev->bpp_shifts[fb->format->cpp[0] - 1];

 	switch (mdev->type) {
 	case G200_SE_A:
···
 		break;
 	}

-	switch (crtc->primary->fb->bits_per_pixel) {
+	switch (fb->format->cpp[0] * 8) {
 	case 8:
 		dacvalue[MGA1064_MUL_CTL] = MGA1064_MUL_CTL_8bits;
 		break;
 	case 16:
-		if (crtc->primary->fb->depth == 15)
+		if (fb->format->depth == 15)
 			dacvalue[MGA1064_MUL_CTL] = MGA1064_MUL_CTL_15bits;
 		else
 			dacvalue[MGA1064_MUL_CTL] = MGA1064_MUL_CTL_16bits;
···
 	WREG_SEQ(3, 0);
 	WREG_SEQ(4, 0xe);

-	pitch = crtc->primary->fb->pitches[0] / (crtc->primary->fb->bits_per_pixel / 8);
-	if (crtc->primary->fb->bits_per_pixel == 24)
+	pitch = fb->pitches[0] / fb->format->cpp[0];
+	if (fb->format->cpp[0] * 8 == 24)
 		pitch = (pitch * 3) >> (4 - bppshift);
 	else
 		pitch = pitch >> (4 - bppshift);
···
 		  ((vdisplay & 0xc00) >> 7) |
 		  ((vsyncstart & 0xc00) >> 5) |
 		  ((vdisplay & 0x400) >> 3);
-	if (crtc->primary->fb->bits_per_pixel == 24)
+	if (fb->format->cpp[0] * 8 == 24)
 		ext_vga[3] = (((1 << bppshift) * 3) - 1) | 0x80;
 	else
 		ext_vga[3] = ((1 << bppshift) - 1) | 0x80;
···
 	u32 bpp;
 	u32 mb;

-	if (crtc->primary->fb->bits_per_pixel > 16)
+	if (fb->format->cpp[0] * 8 > 16)
 		bpp = 32;
-	else if (crtc->primary->fb->bits_per_pixel > 8)
+	else if (fb->format->cpp[0] * 8 > 8)
 		bpp = 16;
 	else
 		bpp = 8;
+11 -6
drivers/gpu/drm/msm/dsi/dsi_manager.c
···
 	struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
 	struct drm_bridge *bridge = NULL;
 	struct dsi_bridge *dsi_bridge;
+	struct drm_encoder *encoder;
 	int ret;

 	dsi_bridge = devm_kzalloc(msm_dsi->dev->dev,
···
 	dsi_bridge->id = id;

+	/*
+	 * HACK: we may not know the external DSI bridge device's mode
+	 * flags here. We'll get to know them only when the device
+	 * attaches to the dsi host. For now, assume the bridge supports
+	 * DSI video mode
+	 */
+	encoder = msm_dsi->encoders[MSM_DSI_VIDEO_ENCODER_ID];
+
 	bridge = &dsi_bridge->base;
 	bridge->funcs = &dsi_mgr_bridge_funcs;

-	ret = drm_bridge_attach(msm_dsi->dev, bridge);
+	ret = drm_bridge_attach(encoder, bridge, NULL);
 	if (ret)
 		goto fail;
···
 	encoder = msm_dsi->encoders[MSM_DSI_VIDEO_ENCODER_ID];

 	/* link the internal dsi bridge to the external bridge */
-	int_bridge->next = ext_bridge;
-	/* set the external bridge's encoder as dsi's encoder */
-	ext_bridge->encoder = encoder;
-
-	drm_bridge_attach(dev, ext_bridge);
+	drm_bridge_attach(encoder, ext_bridge, int_bridge);

 	/*
 	 * we need the drm_connector created by the external bridge
+1 -1
drivers/gpu/drm/msm/edp/edp_bridge.c
···
 	bridge = &edp_bridge->base;
 	bridge->funcs = &edp_bridge_funcs;

-	ret = drm_bridge_attach(edp->dev, bridge);
+	ret = drm_bridge_attach(edp->encoder, bridge, NULL);
 	if (ret)
 		goto fail;
+1 -1
drivers/gpu/drm/msm/hdmi/hdmi_bridge.c
···
 	bridge = &hdmi_bridge->base;
 	bridge->funcs = &msm_hdmi_bridge_funcs;

-	ret = drm_bridge_attach(hdmi->dev, bridge);
+	ret = drm_bridge_attach(hdmi->encoder, bridge, NULL);
 	if (ret)
 		goto fail;
+1 -1
drivers/gpu/drm/msm/mdp/mdp4/mdp4_plane.c
···
 	if (fb->modifier == DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
 		is_tile = true;

-	if (fb->pixel_format == DRM_FORMAT_NV12 && is_tile)
+	if (fb->format->format == DRM_FORMAT_NV12 && is_tile)
 		return FRAME_TILE_YCBCR_420;

 	return FRAME_LINEAR;
+1 -1
drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c
···
 	unsigned long flags;
 	int ret;

-	nplanes = drm_format_num_planes(fb->pixel_format);
+	nplanes = fb->format->num_planes;

 	/* bad formats should already be rejected: */
 	if (WARN_ON(nplanes > pipe2nclients(pipe)))
+6 -6
drivers/gpu/drm/msm/msm_fb.c
···
 static void msm_framebuffer_destroy(struct drm_framebuffer *fb)
 {
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
-	int i, n = drm_format_num_planes(fb->pixel_format);
+	int i, n = fb->format->num_planes;

 	DBG("destroy: FB ID: %d (%p)", fb->base.id, fb);
···
 void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 {
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
-	int i, n = drm_format_num_planes(fb->pixel_format);
+	int i, n = fb->format->num_planes;

 	seq_printf(m, "fb: %dx%d@%4.4s (%2d, ID:%d)\n",
-			fb->width, fb->height, (char *)&fb->pixel_format,
+			fb->width, fb->height, (char *)&fb->format->format,
 			drm_framebuffer_read_refcount(fb), fb->base.id);

 	for (i = 0; i < n; i++) {
···
 int msm_framebuffer_prepare(struct drm_framebuffer *fb, int id)
 {
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
-	int ret, i, n = drm_format_num_planes(fb->pixel_format);
+	int ret, i, n = fb->format->num_planes;
 	uint64_t iova;

 	for (i = 0; i < n; i++) {
···
 void msm_framebuffer_cleanup(struct drm_framebuffer *fb, int id)
 {
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
-	int i, n = drm_format_num_planes(fb->pixel_format);
+	int i, n = fb->format->num_planes;

 	for (i = 0; i < n; i++)
 		msm_gem_put_iova(msm_fb->planes[i], id);
···
 		msm_fb->planes[i] = bos[i];
 	}

-	drm_helper_mode_fill_fb_struct(fb, mode_cmd);
+	drm_helper_mode_fill_fb_struct(dev, fb, mode_cmd);

 	ret = drm_framebuffer_init(dev, fb, &msm_framebuffer_funcs);
 	if (ret) {
+1 -1
drivers/gpu/drm/msm/msm_fbdev.c
···
 	strcpy(fbi->fix.id, "msm");

-	drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth);
+	drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->format->depth);
 	drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);

 	dev->mode_config.fb_base = paddr;
+1 -1
drivers/gpu/drm/mxsfb/mxsfb_crtc.c
···
 {
 	struct drm_crtc *crtc = &mxsfb->pipe.crtc;
 	struct drm_device *drm = crtc->dev;
-	const u32 format = crtc->primary->state->fb->pixel_format;
+	const u32 format = crtc->primary->state->fb->format->format;
 	u32 ctrl, ctrl1;

 	ctrl = CTRL_BYPASS_COUNT | CTRL_MASTER;
+2 -2
drivers/gpu/drm/mxsfb/mxsfb_drv.c
···
 	pdev->id_entry = of_id->data;

 	drm = drm_dev_alloc(&mxsfb_driver, &pdev->dev);
-	if (!drm)
-		return -ENOMEM;
+	if (IS_ERR(drm))
+		return PTR_ERR(drm);

 	ret = mxsfb_load(drm, 0);
 	if (ret)
+9 -8
drivers/gpu/drm/nouveau/dispnv04/crtc.c
···
 	struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
 	struct nv04_crtc_reg *regp = &nv04_display(dev)->mode_reg.crtc_reg[nv_crtc->index];
 	struct nv04_crtc_reg *savep = &nv04_display(dev)->saved_reg.crtc_reg[nv_crtc->index];
+	const struct drm_framebuffer *fb = crtc->primary->fb;
 	struct drm_encoder *encoder;
 	bool lvds_output = false, tmds_output = false, tv_output = false,
 		off_chip_digital = false;
···
 		regp->CRTC[NV_CIO_CRE_86] = 0x1;
 	}

-	regp->CRTC[NV_CIO_CRE_PIXEL_INDEX] = (crtc->primary->fb->depth + 1) / 8;
+	regp->CRTC[NV_CIO_CRE_PIXEL_INDEX] = (fb->format->depth + 1) / 8;
 	/* Enable slaved mode (called MODE_TV in nv4ref.h) */
 	if (lvds_output || tmds_output || tv_output)
 		regp->CRTC[NV_CIO_CRE_PIXEL_INDEX] |= (1 << 7);
···
 	regp->ramdac_gen_ctrl = NV_PRAMDAC_GENERAL_CONTROL_BPC_8BITS |
 				NV_PRAMDAC_GENERAL_CONTROL_VGA_STATE_SEL |
 				NV_PRAMDAC_GENERAL_CONTROL_PIXMIX_ON;
-	if (crtc->primary->fb->depth == 16)
+	if (fb->format->depth == 16)
 		regp->ramdac_gen_ctrl |= NV_PRAMDAC_GENERAL_CONTROL_ALT_MODE_SEL;
 	if (drm->device.info.chipset >= 0x11)
 		regp->ramdac_gen_ctrl |= NV_PRAMDAC_GENERAL_CONTROL_PIPE_LONG;
···
 	nv_crtc->fb.offset = fb->nvbo->bo.offset;

-	if (nv_crtc->lut.depth != drm_fb->depth) {
-		nv_crtc->lut.depth = drm_fb->depth;
+	if (nv_crtc->lut.depth != drm_fb->format->depth) {
+		nv_crtc->lut.depth = drm_fb->format->depth;
 		nv_crtc_gamma_load(crtc);
 	}

 	/* Update the framebuffer format. */
 	regp->CRTC[NV_CIO_CRE_PIXEL_INDEX] &= ~3;
-	regp->CRTC[NV_CIO_CRE_PIXEL_INDEX] |= (crtc->primary->fb->depth + 1) / 8;
+	regp->CRTC[NV_CIO_CRE_PIXEL_INDEX] |= (drm_fb->format->depth + 1) / 8;
 	regp->ramdac_gen_ctrl &= ~NV_PRAMDAC_GENERAL_CONTROL_ALT_MODE_SEL;
-	if (crtc->primary->fb->depth == 16)
+	if (drm_fb->format->depth == 16)
 		regp->ramdac_gen_ctrl |= NV_PRAMDAC_GENERAL_CONTROL_ALT_MODE_SEL;
 	crtc_wr_cio_state(crtc, regp, NV_CIO_CRE_PIXEL_INDEX);
 	NVWriteRAMDAC(dev, nv_crtc->index, NV_PRAMDAC_GENERAL_CONTROL,
···
 	/* Update the framebuffer location. */
 	regp->fb_start = nv_crtc->fb.offset & ~3;
-	regp->fb_start += (y * drm_fb->pitches[0]) + (x * drm_fb->bits_per_pixel / 8);
+	regp->fb_start += (y * drm_fb->pitches[0]) + (x * drm_fb->format->cpp[0]);
 	nv_set_crtc_base(dev, nv_crtc->index, regp->fb_start);

 	/* Update the arbitration parameters. */
-	nouveau_calc_arb(dev, crtc->mode.clock, drm_fb->bits_per_pixel,
+	nouveau_calc_arb(dev, crtc->mode.clock, drm_fb->format->cpp[0] * 8,
 			 &arb_burst, &arb_lwm);

 	regp->CRTC[NV_CIO_CRE_FF_INDEX] = arb_burst;
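The nouveau hunk computes the scanout start address as base + y * pitch + x * cpp; the conversion only swaps `bits_per_pixel / 8` for the precomputed `cpp[0]`. The arithmetic itself, as a small checked sketch (the helper name and the values in the comments are illustrative, not from the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Byte offset of pixel (x, y) inside a linear framebuffer:
 * one full row (pitch bytes) per y step, one pixel (cpp bytes) per x step. */
static uint32_t fb_scanout_offset(uint32_t base, uint32_t pitch,
				  uint32_t cpp, uint32_t x, uint32_t y)
{
	return base + y * pitch + x * cpp;
}
```

The pitch is a property of the framebuffer allocation (it may include padding beyond width * cpp), which is why the driver reads `drm_fb->pitches[0]` rather than recomputing it from the width.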
+2 -1
drivers/gpu/drm/nouveau/dispnv04/dfp.c
···
 	struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
 	struct drm_display_mode *output_mode = &nv_encoder->mode;
 	struct drm_connector *connector = &nv_connector->base;
+	const struct drm_framebuffer *fb = encoder->crtc->primary->fb;
 	uint32_t mode_ratio, panel_ratio;
 
 	NV_DEBUG(drm, "Output mode on CRTC %d:\n", nv_crtc->index);
···
 	/* Output property. */
 	if ((nv_connector->dithering_mode == DITHERING_MODE_ON) ||
 	    (nv_connector->dithering_mode == DITHERING_MODE_AUTO &&
-	     encoder->crtc->primary->fb->depth > connector->display_info.bpc * 3)) {
+	     fb->format->depth > connector->display_info.bpc * 3)) {
 		if (drm->device.info.chipset == 0x11)
 			regp->dither = savep->dither | 0x00010000;
 		else {
+4 -4
drivers/gpu/drm/nouveau/dispnv04/overlay.c
···
 	nvif_wr32(dev, NV_PVIDEO_POINT_OUT(flip), crtc_y << 16 | crtc_x);
 	nvif_wr32(dev, NV_PVIDEO_SIZE_OUT(flip), crtc_h << 16 | crtc_w);
 
-	if (fb->pixel_format != DRM_FORMAT_UYVY)
+	if (fb->format->format != DRM_FORMAT_UYVY)
 		format |= NV_PVIDEO_FORMAT_COLOR_LE_CR8YB8CB8YA8;
-	if (fb->pixel_format == DRM_FORMAT_NV12)
+	if (fb->format->format == DRM_FORMAT_NV12)
 		format |= NV_PVIDEO_FORMAT_PLANAR;
 	if (nv_plane->iturbt_709)
 		format |= NV_PVIDEO_FORMAT_MATRIX_ITURBT709;
 	if (nv_plane->colorkey & (1 << 24))
 		format |= NV_PVIDEO_FORMAT_DISPLAY_COLOR_KEY;
 
-	if (fb->pixel_format == DRM_FORMAT_NV12) {
+	if (fb->format->format == DRM_FORMAT_NV12) {
 		nvif_wr32(dev, NV_PVIDEO_UVPLANE_BASE(flip), 0);
 		nvif_wr32(dev, NV_PVIDEO_UVPLANE_OFFSET_BUFF(flip),
 			  nv_fb->nvbo->bo.offset + fb->offsets[1]);
···
 
 	if (nv_plane->colorkey & (1 << 24))
 		overlay |= 0x10;
-	if (fb->pixel_format == DRM_FORMAT_YUYV)
+	if (fb->format->format == DRM_FORMAT_YUYV)
 		overlay |= 0x100;
 
 	nvif_wr32(dev, NV_PVIDEO_OVERLAY, overlay);
+3 -2
drivers/gpu/drm/nouveau/nouveau_connector.c
···
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_edid.h>
 #include <drm/drm_crtc_helper.h>
+#include <drm/drm_atomic.h>
 
 #include "nouveau_reg.h"
 #include "nouveau_drv.h"
···
 	struct drm_encoder *encoder = to_drm_encoder(nv_encoder);
 	int ret;
 
-	if (connector->dev->mode_config.funcs->atomic_commit)
+	if (drm_drv_uses_atomic_modeset(connector->dev))
 		return drm_atomic_helper_connector_set_property(connector, property, value);
 
 	ret = connector->funcs->atomic_set_property(&nv_connector->base,
···
 static int
 nouveau_connector_dpms(struct drm_connector *connector, int mode)
 {
-	if (connector->dev->mode_config.funcs->atomic_commit)
+	if (drm_drv_uses_atomic_modeset(connector->dev))
 		return drm_atomic_helper_connector_dpms(connector, mode);
 	return drm_helper_connector_dpms(connector, mode);
 }
+1
drivers/gpu/drm/nouveau/nouveau_connector.h
···
 #include <nvif/notify.h>
 
 #include <drm/drm_edid.h>
+#include <drm/drm_encoder.h>
 #include <drm/drm_dp_helper.h>
 #include "nouveau_crtc.h"
 
+5 -5
drivers/gpu/drm/nouveau/nouveau_display.c
···
 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
 		if (nouveau_crtc(crtc)->index == pipe) {
 			struct drm_display_mode *mode;
-			if (dev->mode_config.funcs->atomic_commit)
+			if (drm_drv_uses_atomic_modeset(dev))
 				mode = &crtc->state->adjusted_mode;
 			else
 				mode = &crtc->hwmode;
···
 	if (!(fb = *pfb = kzalloc(sizeof(*fb), GFP_KERNEL)))
 		return -ENOMEM;
 
-	drm_helper_mode_fill_fb_struct(&fb->base, mode_cmd);
+	drm_helper_mode_fill_fb_struct(dev, &fb->base, mode_cmd);
 	fb->nvbo = nvbo;
 
 	ret = drm_framebuffer_init(dev, &fb->base, &nouveau_framebuffer_funcs);
···
 	struct nouveau_display *disp = nouveau_display(dev);
 	struct drm_crtc *crtc;
 
-	if (dev->mode_config.funcs->atomic_commit) {
+	if (drm_drv_uses_atomic_modeset(dev)) {
 		if (!runtime) {
 			disp->suspend = nouveau_atomic_suspend(dev);
 			if (IS_ERR(disp->suspend)) {
···
 	struct drm_crtc *crtc;
 	int ret;
 
-	if (dev->mode_config.funcs->atomic_commit) {
+	if (drm_drv_uses_atomic_modeset(dev)) {
 		nouveau_display_init(dev);
 		if (disp->suspend) {
 			drm_atomic_helper_resume(dev, disp->suspend);
···
 
 	/* Initialize a page flip struct */
 	*s = (struct nouveau_page_flip_state)
-		{ { }, event, crtc, fb->bits_per_pixel, fb->pitches[0],
+		{ { }, event, crtc, fb->format->cpp[0] * 8, fb->pitches[0],
 		  new_bo->bo.offset };
 
 	/* Keep vblanks on during flip, for the target crtc of this flip */
+4 -2
drivers/gpu/drm/nouveau/nouveau_fbcon.c
···
 #include <drm/drm_crtc.h>
 #include <drm/drm_crtc_helper.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_atomic.h>
 
 #include "nouveau_drv.h"
 #include "nouveau_gem.h"
···
 	info->screen_base = nvbo_kmap_obj_iovirtual(fb->nvbo);
 	info->screen_size = fb->nvbo->bo.mem.num_pages << PAGE_SHIFT;
 
-	drm_fb_helper_fill_fix(info, fb->base.pitches[0], fb->base.depth);
+	drm_fb_helper_fill_fix(info, fb->base.pitches[0],
+			       fb->base.format->depth);
 	drm_fb_helper_fill_var(info, &fbcon->helper, sizes->fb_width, sizes->fb_height);
 
 	/* Use default scratch pixmap (info->pixmap.flags = FB_PIXMAP_SYSTEM) */
···
 		preferred_bpp = 32;
 
 	/* disable all the possible outputs/crtcs before entering KMS mode */
-	if (!dev->mode_config.funcs->atomic_commit)
+	if (!drm_drv_uses_atomic_modeset(dev))
 		drm_helper_disable_unused_functions(dev);
 
 	ret = drm_fb_helper_initial_config(&fbcon->helper, preferred_bpp);
+14 -14
drivers/gpu/drm/nouveau/nouveau_ttm.c
···
 }
 
 const struct ttm_mem_type_manager_func nouveau_vram_manager = {
-	nouveau_vram_manager_init,
-	nouveau_vram_manager_fini,
-	nouveau_vram_manager_new,
-	nouveau_vram_manager_del,
+	.init = nouveau_vram_manager_init,
+	.takedown = nouveau_vram_manager_fini,
+	.get_node = nouveau_vram_manager_new,
+	.put_node = nouveau_vram_manager_del,
 };
 
 static int
···
 }
 
 const struct ttm_mem_type_manager_func nouveau_gart_manager = {
-	nouveau_gart_manager_init,
-	nouveau_gart_manager_fini,
-	nouveau_gart_manager_new,
-	nouveau_gart_manager_del,
-	nouveau_gart_manager_debug
+	.init = nouveau_gart_manager_init,
+	.takedown = nouveau_gart_manager_fini,
+	.get_node = nouveau_gart_manager_new,
+	.put_node = nouveau_gart_manager_del,
+	.debug = nouveau_gart_manager_debug
 };
 
 /*XXX*/
···
 }
 
 const struct ttm_mem_type_manager_func nv04_gart_manager = {
-	nv04_gart_manager_init,
-	nv04_gart_manager_fini,
-	nv04_gart_manager_new,
-	nv04_gart_manager_del,
-	nv04_gart_manager_debug
+	.init = nv04_gart_manager_init,
+	.takedown = nv04_gart_manager_fini,
+	.get_node = nv04_gart_manager_new,
+	.put_node = nv04_gart_manager_del,
+	.debug = nv04_gart_manager_debug
 };
 
 int
+6 -8
drivers/gpu/drm/nouveau/nv50_display.c
···
 	if (asyw->state.fb->width != asyw->state.fb->height)
 		return -EINVAL;
 
-	switch (asyw->state.fb->pixel_format) {
+	switch (asyw->state.fb->format->format) {
 	case DRM_FORMAT_ARGB8888: asyh->curs.format = 1; break;
 	default:
 		WARN_ON(1);
···
 nv50_base_acquire(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw,
 		  struct nv50_head_atom *asyh)
 {
-	const u32 format = asyw->state.fb->pixel_format;
-	const struct drm_format_info *info;
+	const struct drm_framebuffer *fb = asyw->state.fb;
 	int ret;
 
-	info = drm_format_info(format);
-	if (!info || !info->depth)
+	if (!fb->format->depth)
 		return -EINVAL;
 
 	ret = drm_plane_helper_check_state(&asyw->state, &asyw->clip,
···
 	if (ret)
 		return ret;
 
-	asyh->base.depth = info->depth;
-	asyh->base.cpp = info->cpp[0];
+	asyh->base.depth = fb->format->depth;
+	asyh->base.cpp = fb->format->cpp[0];
 	asyh->base.x = asyw->state.src.x1 >> 16;
 	asyh->base.y = asyw->state.src.y1 >> 16;
 	asyh->base.w = asyw->state.fb->width;
 	asyh->base.h = asyw->state.fb->height;
 
-	switch (format) {
+	switch (fb->format->format) {
 	case DRM_FORMAT_C8      : asyw->image.format = 0x1e; break;
 	case DRM_FORMAT_RGB565  : asyw->image.format = 0xe8; break;
 	case DRM_FORMAT_XRGB1555 :
+6 -6
drivers/gpu/drm/omapdrm/omap_fb.c
···
 static void omap_framebuffer_destroy(struct drm_framebuffer *fb)
 {
 	struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
-	int i, n = drm_format_num_planes(fb->pixel_format);
+	int i, n = fb->format->num_planes;
 
 	DBG("destroy: FB ID: %d (%p)", fb->base.id, fb);
 
···
 int omap_framebuffer_pin(struct drm_framebuffer *fb)
 {
 	struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
-	int ret, i, n = drm_format_num_planes(fb->pixel_format);
+	int ret, i, n = fb->format->num_planes;
 
 	mutex_lock(&omap_fb->lock);
 
···
 void omap_framebuffer_unpin(struct drm_framebuffer *fb)
 {
 	struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
-	int i, n = drm_format_num_planes(fb->pixel_format);
+	int i, n = fb->format->num_planes;
 
 	mutex_lock(&omap_fb->lock);
 
···
 void omap_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 {
 	struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
-	int i, n = drm_format_num_planes(fb->pixel_format);
+	int i, n = fb->format->num_planes;
 
 	seq_printf(m, "fb: %dx%d@%4.4s\n", fb->width, fb->height,
-			(char *)&fb->pixel_format);
+			(char *)&fb->format->format);
 
 	for (i = 0; i < n; i++) {
 		struct plane *plane = &omap_fb->planes[i];
···
 		plane->paddr = 0;
 	}
 
-	drm_helper_mode_fill_fb_struct(fb, mode_cmd);
+	drm_helper_mode_fill_fb_struct(dev, fb, mode_cmd);
 
 	ret = drm_framebuffer_init(dev, fb, &omap_framebuffer_funcs);
 	if (ret) {
+1 -1
drivers/gpu/drm/omapdrm/omap_fbdev.c
···
 
 	strcpy(fbi->fix.id, MODULE_NAME);
 
-	drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth);
+	drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->format->depth);
 	drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);
 
 	dev->mode_config.fb_base = paddr;
+1 -1
drivers/gpu/drm/qxl/qxl_display.c
···
 	int ret;
 
 	qfb->obj = obj;
+	drm_helper_mode_fill_fb_struct(dev, &qfb->base, mode_cmd);
 	ret = drm_framebuffer_init(dev, &qfb->base, funcs);
 	if (ret) {
 		qfb->obj = NULL;
 		return ret;
 	}
-	drm_helper_mode_fill_fb_struct(&qfb->base, mode_cmd);
 	return 0;
 }
 
+1 -1
drivers/gpu/drm/qxl/qxl_draw.c
···
 	struct qxl_rect *rects;
 	int stride = qxl_fb->base.pitches[0];
 	/* depth is not actually interesting, we don't mask with it */
-	int depth = qxl_fb->base.bits_per_pixel;
+	int depth = qxl_fb->base.format->cpp[0] * 8;
 	uint8_t *surface_base;
 	struct qxl_release *release;
 	struct qxl_bo *clips_bo;
+1
drivers/gpu/drm/qxl/qxl_drv.h
···
 #include <ttm/ttm_placement.h>
 #include <ttm/ttm_module.h>
 
+#include <drm/drm_encoder.h>
 #include <drm/drm_gem.h>
 
 /* just for ttm_validate_buffer */
+3 -2
drivers/gpu/drm/qxl/qxl_fb.c
···
 	qfbdev->shadow = shadow;
 	strcpy(info->fix.id, "qxldrmfb");
 
-	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
+	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
 
 	info->flags = FBINFO_DEFAULT | FBINFO_HWACCEL_COPYAREA | FBINFO_HWACCEL_FILLRECT;
 	info->fbops = &qxlfb_ops;
···
 	qdev->fbdev_info = info;
 	qdev->fbdev_qfb = &qfbdev->qfb;
 	DRM_INFO("fb mappable at 0x%lX, size %lu\n", info->fix.smem_start, (unsigned long)info->screen_size);
-	DRM_INFO("fb: depth %d, pitch %d, width %d, height %d\n", fb->depth, fb->pitches[0], fb->width, fb->height);
+	DRM_INFO("fb: depth %d, pitch %d, width %d, height %d\n",
+		 fb->format->depth, fb->pitches[0], fb->width, fb->height);
 	return 0;
 
 out_destroy_fbi:
+10 -9
drivers/gpu/drm/radeon/atombios_crtc.c
···
 	radeon_bo_get_tiling_flags(rbo, &tiling_flags, NULL);
 	radeon_bo_unreserve(rbo);
 
-	switch (target_fb->pixel_format) {
+	switch (target_fb->format->format) {
 	case DRM_FORMAT_C8:
 		fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_8BPP) |
 			     EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_INDEXED));
···
 		break;
 	default:
 		DRM_ERROR("Unsupported screen format %s\n",
-			  drm_get_format_name(target_fb->pixel_format, &format_name));
+			  drm_get_format_name(target_fb->format->format, &format_name));
 		return -EINVAL;
 	}
 
···
 
 		/* Calculate the macrotile mode index. */
 		tile_split_bytes = 64 << tile_split;
-		tileb = 8 * 8 * target_fb->bits_per_pixel / 8;
+		tileb = 8 * 8 * target_fb->format->cpp[0];
 		tileb = min(tile_split_bytes, tileb);
 
 		for (index = 0; tileb > 64; index++)
···
 
 		if (index >= 16) {
 			DRM_ERROR("Wrong screen bpp (%u) or tile split (%u)\n",
-				  target_fb->bits_per_pixel, tile_split);
+				  target_fb->format->cpp[0] * 8,
+				  tile_split);
 			return -EINVAL;
 		}
 
 		num_banks = (rdev->config.cik.macrotile_mode_array[index] >> 6) & 0x3;
 	} else {
-		switch (target_fb->bits_per_pixel) {
+		switch (target_fb->format->cpp[0] * 8) {
 		case 8:
 			index = 10;
 			break;
···
 	WREG32(EVERGREEN_GRPH_X_END + radeon_crtc->crtc_offset, target_fb->width);
 	WREG32(EVERGREEN_GRPH_Y_END + radeon_crtc->crtc_offset, target_fb->height);
 
-	fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8);
+	fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0];
 	WREG32(EVERGREEN_GRPH_PITCH + radeon_crtc->crtc_offset, fb_pitch_pixels);
 	WREG32(EVERGREEN_GRPH_ENABLE + radeon_crtc->crtc_offset, 1);
 
···
 	radeon_bo_get_tiling_flags(rbo, &tiling_flags, NULL);
 	radeon_bo_unreserve(rbo);
 
-	switch (target_fb->pixel_format) {
+	switch (target_fb->format->format) {
 	case DRM_FORMAT_C8:
 		fb_format =
 		    AVIVO_D1GRPH_CONTROL_DEPTH_8BPP |
···
 		break;
 	default:
 		DRM_ERROR("Unsupported screen format %s\n",
-			  drm_get_format_name(target_fb->pixel_format, &format_name));
+			  drm_get_format_name(target_fb->format->format, &format_name));
 		return -EINVAL;
 	}
 
···
 	WREG32(AVIVO_D1GRPH_X_END + radeon_crtc->crtc_offset, target_fb->width);
 	WREG32(AVIVO_D1GRPH_Y_END + radeon_crtc->crtc_offset, target_fb->height);
 
-	fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8);
+	fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0];
 	WREG32(AVIVO_D1GRPH_PITCH + radeon_crtc->crtc_offset, fb_pitch_pixels);
 	WREG32(AVIVO_D1GRPH_ENABLE + radeon_crtc->crtc_offset, 1);
 
+8 -2
drivers/gpu/drm/radeon/r100.c
···
 	radeon_update_display_priority(rdev);
 
 	if (rdev->mode_info.crtcs[0]->base.enabled) {
+		const struct drm_framebuffer *fb =
+			rdev->mode_info.crtcs[0]->base.primary->fb;
+
 		mode1 = &rdev->mode_info.crtcs[0]->base.mode;
-		pixel_bytes1 = rdev->mode_info.crtcs[0]->base.primary->fb->bits_per_pixel / 8;
+		pixel_bytes1 = fb->format->cpp[0];
 	}
 	if (!(rdev->flags & RADEON_SINGLE_CRTC)) {
 		if (rdev->mode_info.crtcs[1]->base.enabled) {
+			const struct drm_framebuffer *fb =
+				rdev->mode_info.crtcs[1]->base.primary->fb;
+
 			mode2 = &rdev->mode_info.crtcs[1]->base.mode;
-			pixel_bytes2 = rdev->mode_info.crtcs[1]->base.primary->fb->bits_per_pixel / 8;
+			pixel_bytes2 = fb->format->cpp[0];
 		}
 	}
 
+4 -4
drivers/gpu/drm/radeon/radeon_display.c
···
 	if (!ASIC_IS_AVIVO(rdev)) {
 		/* crtc offset is from display base addr not FB location */
 		base -= radeon_crtc->legacy_display_base_addr;
-		pitch_pixels = fb->pitches[0] / (fb->bits_per_pixel / 8);
+		pitch_pixels = fb->pitches[0] / fb->format->cpp[0];
 
 		if (tiling_flags & RADEON_TILING_MACRO) {
 			if (ASIC_IS_R300(rdev)) {
 				base &= ~0x7ff;
 			} else {
-				int byteshift = fb->bits_per_pixel >> 4;
+				int byteshift = fb->format->cpp[0] * 8 >> 4;
 				int tile_addr = (((crtc->y >> 3) * pitch_pixels + crtc->x) >> (8 - byteshift)) << 11;
 				base += tile_addr + ((crtc->x << byteshift) % 256) + ((crtc->y % 8) << 8);
 			}
 		} else {
 			int offset = crtc->y * pitch_pixels + crtc->x;
-			switch (fb->bits_per_pixel) {
+			switch (fb->format->cpp[0] * 8) {
 			case 8:
 			default:
 				offset *= 1;
···
 {
 	int ret;
 	rfb->obj = obj;
-	drm_helper_mode_fill_fb_struct(&rfb->base, mode_cmd);
+	drm_helper_mode_fill_fb_struct(dev, &rfb->base, mode_cmd);
 	ret = drm_framebuffer_init(dev, &rfb->base, &radeon_fb_funcs);
 	if (ret) {
 		rfb->obj = NULL;
+2 -2
drivers/gpu/drm/radeon/radeon_fb.c
···
 
 	strcpy(info->fix.id, "radeondrmfb");
 
-	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
+	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
 
 	info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT;
 	info->fbops = &radeonfb_ops;
···
 	DRM_INFO("fb mappable at 0x%lX\n", info->fix.smem_start);
 	DRM_INFO("vram apper at 0x%lX\n", (unsigned long)rdev->mc.aper_base);
 	DRM_INFO("size %lu\n", (unsigned long)radeon_bo_size(rbo));
-	DRM_INFO("fb depth is %d\n", fb->depth);
+	DRM_INFO("fb depth is %d\n", fb->format->depth);
 	DRM_INFO("   pitch is %d\n", fb->pitches[0]);
 
 	vga_switcheroo_client_fb_set(rdev->ddev->pdev, info);
+4 -8
drivers/gpu/drm/radeon/radeon_irq_kms.c
···
 		return;
 
 	mutex_lock(&mode_config->mutex);
-	if (mode_config->num_connector) {
-		list_for_each_entry(connector, &mode_config->connector_list, head)
-			radeon_connector_hotplug(connector);
-	}
+	list_for_each_entry(connector, &mode_config->connector_list, head)
+		radeon_connector_hotplug(connector);
 	mutex_unlock(&mode_config->mutex);
 	/* Just fire off a uevent and let userspace tell us what to do */
 	drm_helper_hpd_irq_event(dev);
···
 	struct drm_connector *connector;
 
 	/* this should take a mutex */
-	if (mode_config->num_connector) {
-		list_for_each_entry(connector, &mode_config->connector_list, head)
-			radeon_connector_hotplug(connector);
-	}
+	list_for_each_entry(connector, &mode_config->connector_list, head)
+		radeon_connector_hotplug(connector);
 }
 
 /**
  * radeon_driver_irq_preinstall_kms - drm irq preinstall callback
+8 -8
drivers/gpu/drm/radeon/radeon_legacy_crtc.c
···
 		target_fb = crtc->primary->fb;
 	}
 
-	switch (target_fb->bits_per_pixel) {
+	switch (target_fb->format->cpp[0] * 8) {
 	case 8:
 		format = 2;
 		break;
···
 
 	crtc_offset_cntl = 0;
 
-	pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8);
-	crtc_pitch = (((pitch_pixels * target_fb->bits_per_pixel) +
-		       ((target_fb->bits_per_pixel * 8) - 1)) /
-		      (target_fb->bits_per_pixel * 8));
+	pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0];
+	crtc_pitch = DIV_ROUND_UP(pitch_pixels * target_fb->format->cpp[0] * 8,
+				  target_fb->format->cpp[0] * 8 * 8);
 	crtc_pitch |= crtc_pitch << 16;
 
 	crtc_offset_cntl |= RADEON_CRTC_GUI_TRIG_OFFSET_LEFT_EN;
···
 			crtc_tile_x0_y0 = x | (y << 16);
 			base &= ~0x7ff;
 		} else {
-			int byteshift = target_fb->bits_per_pixel >> 4;
+			int byteshift = target_fb->format->cpp[0] * 8 >> 4;
 			int tile_addr = (((y >> 3) * pitch_pixels + x) >> (8 - byteshift)) << 11;
 			base += tile_addr + ((x << byteshift) % 256) + ((y % 8) << 8);
 			crtc_offset_cntl |= (y % 16);
 		}
 	} else {
 		int offset = y * pitch_pixels + x;
-		switch (target_fb->bits_per_pixel) {
+		switch (target_fb->format->cpp[0] * 8) {
 		case 8:
 			offset *= 1;
 			break;
···
 	struct drm_device *dev = crtc->dev;
 	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+	const struct drm_framebuffer *fb = crtc->primary->fb;
 	struct drm_encoder *encoder;
 	int format;
 	int hsync_start;
···
 		}
 	}
 
-	switch (crtc->primary->fb->bits_per_pixel) {
+	switch (fb->format->cpp[0] * 8) {
 	case 8:
 		format = 2;
 		break;
+1
drivers/gpu/drm/radeon/radeon_mode.h
···
 
 #include <drm/drm_crtc.h>
 #include <drm/drm_edid.h>
+#include <drm/drm_encoder.h>
 #include <drm/drm_dp_helper.h>
 #include <drm/drm_dp_mst_helper.h>
 #include <drm/drm_fixed.h>
+1
drivers/gpu/drm/rcar-du/rcar_du_encoder.h
···
 #define __RCAR_DU_ENCODER_H__
 
 #include <drm/drm_crtc.h>
+#include <drm/drm_encoder.h>
 
 struct rcar_du_device;
 struct rcar_du_hdmienc;
+1 -4
drivers/gpu/drm/rcar-du/rcar_du_hdmienc.c
···
 	hdmienc->renc = renc;
 
 	/* Link the bridge to the encoder. */
-	bridge->encoder = encoder;
-	encoder->bridge = bridge;
-
-	ret = drm_bridge_attach(rcdu->ddev, bridge);
+	ret = drm_bridge_attach(encoder, bridge, NULL);
 	if (ret) {
 		drm_encoder_cleanup(encoder);
 		return ret;
+2 -2
drivers/gpu/drm/rcar-du/rcar_du_plane.c
···
 		return -EINVAL;
 	}
 
-	rstate->format = rcar_du_format_info(state->fb->pixel_format);
+	rstate->format = rcar_du_format_info(state->fb->format->format);
 	if (rstate->format == NULL) {
 		dev_dbg(rcdu->dev, "%s: unsupported format %08x\n", __func__,
-			state->fb->pixel_format);
+			state->fb->format->format);
 		return -EINVAL;
 	}
 
+2 -2
drivers/gpu/drm/rcar-du/rcar_du_vsp.c
···
 		return -EINVAL;
 	}
 
-	rstate->format = rcar_du_format_info(state->fb->pixel_format);
+	rstate->format = rcar_du_format_info(state->fb->format->format);
 	if (rstate->format == NULL) {
 		dev_dbg(rcdu->dev, "%s: unsupported format %08x\n", __func__,
-			state->fb->pixel_format);
+			state->fb->format->format);
 		return -EINVAL;
 	}
 
+1 -1
drivers/gpu/drm/rockchip/rockchip_drm_fb.c
···
 	if (!rockchip_fb)
 		return ERR_PTR(-ENOMEM);
 
-	drm_helper_mode_fill_fb_struct(&rockchip_fb->fb, mode_cmd);
+	drm_helper_mode_fill_fb_struct(dev, &rockchip_fb->fb, mode_cmd);
 
 	for (i = 0; i < num_planes; i++)
 		rockchip_fb->obj[i] = obj[i];
+3 -2
drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c
···
 	fbi->fbops = &rockchip_drm_fbdev_ops;
 
 	fb = helper->fb;
-	drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth);
+	drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->format->depth);
 	drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);
 
 	offset = fbi->var.xoffset * bytes_per_pixel;
···
 	fbi->fix.smem_len = rk_obj->base.size;
 
 	DRM_DEBUG_KMS("FB [%dx%d]-%d kvaddr=%p offset=%ld size=%zu\n",
-		      fb->width, fb->height, fb->depth, rk_obj->kvaddr,
+		      fb->width, fb->height, fb->format->depth,
+		      rk_obj->kvaddr,
 		      offset, size);
 
 	fbi->skip_vt_switch = true;
+11 -11
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
···
 	if (!state->visible)
 		return 0;
 
-	ret = vop_convert_format(fb->pixel_format);
+	ret = vop_convert_format(fb->format->format);
 	if (ret < 0)
 		return ret;
 
···
 	 * Src.x1 can be odd when do clip, but yuv plane start point
 	 * need align with 2 pixel.
 	 */
-	if (is_yuv_support(fb->pixel_format) && ((state->src.x1 >> 16) % 2))
+	if (is_yuv_support(fb->format->format) && ((state->src.x1 >> 16) % 2))
 		return -EINVAL;
 
 	return 0;
···
 	dsp_sty = dest->y1 + crtc->mode.vtotal - crtc->mode.vsync_start;
 	dsp_st = dsp_sty << 16 | (dsp_stx & 0xffff);
 
-	offset = (src->x1 >> 16) * drm_format_plane_cpp(fb->pixel_format, 0);
+	offset = (src->x1 >> 16) * fb->format->cpp[0];
 	offset += (src->y1 >> 16) * fb->pitches[0];
 	dma_addr = rk_obj->dma_addr + offset + fb->offsets[0];
 
-	format = vop_convert_format(fb->pixel_format);
+	format = vop_convert_format(fb->format->format);
 
 	spin_lock(&vop->reg_lock);
 
 	VOP_WIN_SET(vop, win, format, format);
 	VOP_WIN_SET(vop, win, yrgb_vir, fb->pitches[0] >> 2);
 	VOP_WIN_SET(vop, win, yrgb_mst, dma_addr);
-	if (is_yuv_support(fb->pixel_format)) {
-		int hsub = drm_format_horz_chroma_subsampling(fb->pixel_format);
-		int vsub = drm_format_vert_chroma_subsampling(fb->pixel_format);
-		int bpp = drm_format_plane_cpp(fb->pixel_format, 1);
+	if (is_yuv_support(fb->format->format)) {
+		int hsub = drm_format_horz_chroma_subsampling(fb->format->format);
+		int vsub = drm_format_vert_chroma_subsampling(fb->format->format);
+		int bpp = fb->format->cpp[1];
 
 		uv_obj = rockchip_fb_get_gem_obj(fb, 1);
 		rk_uv_obj = to_rockchip_obj(uv_obj);
···
 	if (win->phy->scl)
 		scl_vop_cal_scl_fac(vop, win, actual_w, actual_h,
 				    drm_rect_width(dest), drm_rect_height(dest),
-				    fb->pixel_format);
+				    fb->format->format);
 
 	VOP_WIN_SET(vop, win, act_info, act_info);
 	VOP_WIN_SET(vop, win, dsp_info, dsp_info);
 	VOP_WIN_SET(vop, win, dsp_st, dsp_st);
 
-	rb_swap = has_rb_swapped(fb->pixel_format);
+	rb_swap = has_rb_swapped(fb->format->format);
 	VOP_WIN_SET(vop, win, rb_swap, rb_swap);
 
-	if (is_alpha_support(fb->pixel_format)) {
+	if (is_alpha_support(fb->format->format)) {
 		VOP_WIN_SET(vop, win, dst_alpha_ctl,
 			    DST_FACTOR_M0(ALPHA_SRC_INVERSE));
 		val = SRC_ALPHA_EN(1) | SRC_COLOR_M0(ALPHA_SRC_PRE_MUL) |
+1
drivers/gpu/drm/selftests/Makefile
···
+obj-$(CONFIG_DRM_DEBUG_MM_SELFTEST) += test-drm_mm.o
+23
drivers/gpu/drm/selftests/drm_mm_selftests.h
···
+/* List each unit test as selftest(name, function)
+ *
+ * The name is used as both an enum and expanded as igt__name to create
+ * a module parameter. It must be unique and legal for a C identifier.
+ *
+ * Tests are executed in order by igt/drm_mm
+ */
+selftest(sanitycheck, igt_sanitycheck) /* keep first (selfcheck for igt) */
+selftest(init, igt_init)
+selftest(debug, igt_debug)
+selftest(reserve, igt_reserve)
+selftest(insert, igt_insert)
+selftest(replace, igt_replace)
+selftest(insert_range, igt_insert_range)
+selftest(align, igt_align)
+selftest(align32, igt_align32)
+selftest(align64, igt_align64)
+selftest(evict, igt_evict)
+selftest(evict_range, igt_evict_range)
+selftest(topdown, igt_topdown)
+selftest(color, igt_color)
+selftest(color_evict, igt_color_evict)
+selftest(color_evict_range, igt_color_evict_range)
+109
drivers/gpu/drm/selftests/drm_selftest.c
···
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <linux/compiler.h>
+
+#define selftest(name, func) __idx_##name,
+enum {
+#include TESTS
+};
+#undef selftest
+
+#define selftest(n, f) [__idx_##n] = { .name = #n, .func = f },
+static struct drm_selftest {
+	bool enabled;
+	const char *name;
+	int (*func)(void *);
+} selftests[] = {
+#include TESTS
+};
+#undef selftest
+
+/* Embed the line number into the parameter name so that we can order tests */
+#define param(n) __PASTE(igt__, __PASTE(__PASTE(__LINE__, __), n))
+#define selftest_0(n, func, id) \
+module_param_named(id, selftests[__idx_##n].enabled, bool, 0400);
+#define selftest(n, func) selftest_0(n, func, param(n))
+#include TESTS
+#undef selftest
+
+static void set_default_test_all(struct drm_selftest *st, unsigned long count)
+{
+	unsigned long i;
+
+	for (i = 0; i < count; i++)
+		if (st[i].enabled)
+			return;
+
+	for (i = 0; i < count; i++)
+		st[i].enabled = true;
+}
+
+static int run_selftests(struct drm_selftest *st,
+			 unsigned long count,
+			 void *data)
+{
+	int err = 0;
+
+	set_default_test_all(st, count);
+
+	/* Tests are listed in natural order in drm_*_selftests.h */
+	for (; count--; st++) {
+		if (!st->enabled)
+			continue;
+
+		pr_debug("drm: Running %s\n", st->name);
+		err = st->func(data);
+		if (err)
+			break;
+	}
+
+	if (WARN(err > 0 || err == -ENOTTY,
+		 "%s returned %d, conflicting with selftest's magic values!\n",
+		 st->name, err))
+		err = -1;
+
+	rcu_barrier();
+	return err;
+}
+
+static int __maybe_unused
+__drm_subtests(const char *caller,
+	       const struct drm_subtest *st,
+	       int count,
+	       void *data)
+{
+	int err;
+
+	for (; count--; st++) {
+		pr_debug("Running %s/%s\n", caller, st->name);
+		err = st->func(data);
+		if (err) {
+			pr_err("%s: %s failed with error %d\n",
+			       caller, st->name, err);
+			return err;
+		}
+	}
+
+	return 0;
+}
+41
drivers/gpu/drm/selftests/drm_selftest.h
···
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#ifndef __DRM_SELFTEST_H__
+#define __DRM_SELFTEST_H__
+
+struct drm_subtest {
+	int (*func)(void *data);
+	const char *name;
+};
+
+static int __drm_subtests(const char *caller,
+			  const struct drm_subtest *st,
+			  int count,
+			  void *data);
+#define drm_subtests(T, data) \
+	__drm_subtests(__func__, T, ARRAY_SIZE(T), data)
+
+#define SUBTEST(x) { x, #x }
+
+#endif /* __DRM_SELFTEST_H__ */
+2172
drivers/gpu/drm/selftests/test-drm_mm.c
··· 1 + /* 2 + * Test cases for the drm_mm range manager 3 + */ 4 + 5 + #define pr_fmt(fmt) "drm_mm: " fmt 6 + 7 + #include <linux/module.h> 8 + #include <linux/prime_numbers.h> 9 + #include <linux/slab.h> 10 + #include <linux/random.h> 11 + #include <linux/vmalloc.h> 12 + 13 + #include <drm/drm_mm.h> 14 + 15 + #include "../lib/drm_random.h" 16 + 17 + #define TESTS "drm_mm_selftests.h" 18 + #include "drm_selftest.h" 19 + 20 + static unsigned int random_seed; 21 + static unsigned int max_iterations = 8192; 22 + static unsigned int max_prime = 128; 23 + 24 + enum { 25 + DEFAULT, 26 + TOPDOWN, 27 + BEST, 28 + }; 29 + 30 + static const struct insert_mode { 31 + const char *name; 32 + unsigned int search_flags; 33 + unsigned int create_flags; 34 + } insert_modes[] = { 35 + [DEFAULT] = { "default", DRM_MM_SEARCH_DEFAULT, DRM_MM_CREATE_DEFAULT }, 36 + [TOPDOWN] = { "top-down", DRM_MM_SEARCH_BELOW, DRM_MM_CREATE_TOP }, 37 + [BEST] = { "best", DRM_MM_SEARCH_BEST, DRM_MM_CREATE_DEFAULT }, 38 + {} 39 + }, evict_modes[] = { 40 + { "default", DRM_MM_SEARCH_DEFAULT, DRM_MM_CREATE_DEFAULT }, 41 + { "top-down", DRM_MM_SEARCH_BELOW, DRM_MM_CREATE_TOP }, 42 + {} 43 + }; 44 + 45 + static int igt_sanitycheck(void *ignored) 46 + { 47 + pr_info("%s - ok!\n", __func__); 48 + return 0; 49 + } 50 + 51 + static bool assert_no_holes(const struct drm_mm *mm) 52 + { 53 + struct drm_mm_node *hole; 54 + u64 hole_start, hole_end; 55 + unsigned long count; 56 + 57 + count = 0; 58 + drm_mm_for_each_hole(hole, mm, hole_start, hole_end) 59 + count++; 60 + if (count) { 61 + pr_err("Expected to find no holes (after reserve), found %lu instead\n", count); 62 + return false; 63 + } 64 + 65 + drm_mm_for_each_node(hole, mm) { 66 + if (drm_mm_hole_follows(hole)) { 67 + pr_err("Hole follows node, expected none!\n"); 68 + return false; 69 + } 70 + } 71 + 72 + return true; 73 + } 74 + 75 + static bool assert_one_hole(const struct drm_mm *mm, u64 start, u64 end) 76 + { 77 + struct drm_mm_node *hole; 78 + u64 
hole_start, hole_end; 79 + unsigned long count; 80 + bool ok = true; 81 + 82 + if (end <= start) 83 + return true; 84 + 85 + count = 0; 86 + drm_mm_for_each_hole(hole, mm, hole_start, hole_end) { 87 + if (start != hole_start || end != hole_end) { 88 + if (ok) 89 + pr_err("empty mm has incorrect hole, found (%llx, %llx), expect (%llx, %llx)\n", 90 + hole_start, hole_end, 91 + start, end); 92 + ok = false; 93 + } 94 + count++; 95 + } 96 + if (count != 1) { 97 + pr_err("Expected to find one hole, found %lu instead\n", count); 98 + ok = false; 99 + } 100 + 101 + return ok; 102 + } 103 + 104 + static bool assert_continuous(const struct drm_mm *mm, u64 size) 105 + { 106 + struct drm_mm_node *node, *check, *found; 107 + unsigned long n; 108 + u64 addr; 109 + 110 + if (!assert_no_holes(mm)) 111 + return false; 112 + 113 + n = 0; 114 + addr = 0; 115 + drm_mm_for_each_node(node, mm) { 116 + if (node->start != addr) { 117 + pr_err("node[%ld] list out of order, expected %llx found %llx\n", 118 + n, addr, node->start); 119 + return false; 120 + } 121 + 122 + if (node->size != size) { 123 + pr_err("node[%ld].size incorrect, expected %llx, found %llx\n", 124 + n, size, node->size); 125 + return false; 126 + } 127 + 128 + if (drm_mm_hole_follows(node)) { 129 + pr_err("node[%ld] is followed by a hole!\n", n); 130 + return false; 131 + } 132 + 133 + found = NULL; 134 + drm_mm_for_each_node_in_range(check, mm, addr, addr + size) { 135 + if (node != check) { 136 + pr_err("lookup return wrong node, expected start %llx, found %llx\n", 137 + node->start, check->start); 138 + return false; 139 + } 140 + found = check; 141 + } 142 + if (!found) { 143 + pr_err("lookup failed for node %llx + %llx\n", 144 + addr, size); 145 + return false; 146 + } 147 + 148 + addr += size; 149 + n++; 150 + } 151 + 152 + return true; 153 + } 154 + 155 + static u64 misalignment(struct drm_mm_node *node, u64 alignment) 156 + { 157 + u64 rem; 158 + 159 + if (!alignment) 160 + return 0; 161 + 162 + 
	div64_u64_rem(node->start, alignment, &rem);
	return rem;
}

static bool assert_node(struct drm_mm_node *node, struct drm_mm *mm,
			u64 size, u64 alignment, unsigned long color)
{
	bool ok = true;

	if (!drm_mm_node_allocated(node) || node->mm != mm) {
		pr_err("node not allocated\n");
		ok = false;
	}

	if (node->size != size) {
		pr_err("node has wrong size, found %llu, expected %llu\n",
		       node->size, size);
		ok = false;
	}

	if (misalignment(node, alignment)) {
		pr_err("node is misaligned, start %llx rem %llu, expected alignment %llu\n",
		       node->start, misalignment(node, alignment), alignment);
		ok = false;
	}

	if (node->color != color) {
		pr_err("node has wrong color, found %lu, expected %lu\n",
		       node->color, color);
		ok = false;
	}

	return ok;
}

static int igt_init(void *ignored)
{
	const unsigned int size = 4096;
	struct drm_mm mm;
	struct drm_mm_node tmp;
	int ret = -EINVAL;

	/* Start with some simple checks on initialising the struct drm_mm */
	memset(&mm, 0, sizeof(mm));
	if (drm_mm_initialized(&mm)) {
		pr_err("zeroed mm claims to be initialized\n");
		return ret;
	}

	memset(&mm, 0xff, sizeof(mm));
	drm_mm_init(&mm, 0, size);
	if (!drm_mm_initialized(&mm)) {
		pr_err("mm claims not to be initialized\n");
		goto out;
	}

	if (!drm_mm_clean(&mm)) {
		pr_err("mm not empty on creation\n");
		goto out;
	}

	/* After creation, it should all be one massive hole */
	if (!assert_one_hole(&mm, 0, size)) {
		ret = -EINVAL;
		goto out;
	}

	memset(&tmp, 0, sizeof(tmp));
	tmp.start = 0;
	tmp.size = size;
	ret = drm_mm_reserve_node(&mm, &tmp);
	if (ret) {
		pr_err("failed to reserve whole drm_mm\n");
		goto out;
+ } 237 + 238 + /* After filling the range entirely, there should be no holes */ 239 + if (!assert_no_holes(&mm)) { 240 + ret = -EINVAL; 241 + goto out; 242 + } 243 + 244 + /* And then after emptying it again, the massive hole should be back */ 245 + drm_mm_remove_node(&tmp); 246 + if (!assert_one_hole(&mm, 0, size)) { 247 + ret = -EINVAL; 248 + goto out; 249 + } 250 + 251 + out: 252 + if (ret) 253 + drm_mm_debug_table(&mm, __func__); 254 + drm_mm_takedown(&mm); 255 + return ret; 256 + } 257 + 258 + static int igt_debug(void *ignored) 259 + { 260 + struct drm_mm mm; 261 + struct drm_mm_node nodes[2]; 262 + int ret; 263 + 264 + /* Create a small drm_mm with a couple of nodes and a few holes, and 265 + * check that the debug iterator doesn't explode over a trivial drm_mm. 266 + */ 267 + 268 + drm_mm_init(&mm, 0, 4096); 269 + 270 + memset(nodes, 0, sizeof(nodes)); 271 + nodes[0].start = 512; 272 + nodes[0].size = 1024; 273 + ret = drm_mm_reserve_node(&mm, &nodes[0]); 274 + if (ret) { 275 + pr_err("failed to reserve node[0] {start=%lld, size=%lld)\n", 276 + nodes[0].start, nodes[0].size); 277 + return ret; 278 + } 279 + 280 + nodes[1].size = 1024; 281 + nodes[1].start = 4096 - 512 - nodes[1].size; 282 + ret = drm_mm_reserve_node(&mm, &nodes[1]); 283 + if (ret) { 284 + pr_err("failed to reserve node[1] {start=%lld, size=%lld)\n", 285 + nodes[1].start, nodes[1].size); 286 + return ret; 287 + } 288 + 289 + drm_mm_debug_table(&mm, __func__); 290 + return 0; 291 + } 292 + 293 + static struct drm_mm_node *set_node(struct drm_mm_node *node, 294 + u64 start, u64 size) 295 + { 296 + node->start = start; 297 + node->size = size; 298 + return node; 299 + } 300 + 301 + static bool expect_reserve_fail(struct drm_mm *mm, struct drm_mm_node *node) 302 + { 303 + int err; 304 + 305 + err = drm_mm_reserve_node(mm, node); 306 + if (likely(err == -ENOSPC)) 307 + return true; 308 + 309 + if (!err) { 310 + pr_err("impossible reserve succeeded, node %llu + %llu\n", 311 + node->start, 
		       node->size);
		drm_mm_remove_node(node);
	} else {
		pr_err("impossible reserve failed with wrong error %d [expected %d], node %llu + %llu\n",
		       err, -ENOSPC, node->start, node->size);
	}
	return false;
}

static bool check_reserve_boundaries(struct drm_mm *mm,
				     unsigned int count,
				     u64 size)
{
	const struct boundary {
		u64 start, size;
		const char *name;
	} boundaries[] = {
#define B(st, sz) { (st), (sz), "{ " #st ", " #sz "}" }
		B(0, 0),
		B(-size, 0),
		B(size, 0),
		B(size * count, 0),
		B(-size, size),
		B(-size, -size),
		B(-size, 2*size),
		B(0, -size),
		B(size, -size),
		B(count*size, size),
		B(count*size, -size),
		B(count*size, count*size),
		B(count*size, -count*size),
		B(count*size, -(count+1)*size),
		B((count+1)*size, size),
		B((count+1)*size, -size),
		B((count+1)*size, -2*size),
#undef B
	};
	struct drm_mm_node tmp = {};
	int n;

	for (n = 0; n < ARRAY_SIZE(boundaries); n++) {
		if (!expect_reserve_fail(mm,
					 set_node(&tmp,
						  boundaries[n].start,
						  boundaries[n].size))) {
			pr_err("boundary[%d:%s] failed, count=%u, size=%lld\n",
			       n, boundaries[n].name, count, size);
			return false;
		}
	}

	return true;
}

static int __igt_reserve(unsigned int count, u64 size)
{
	DRM_RND_STATE(prng, random_seed);
	struct drm_mm mm;
	struct drm_mm_node tmp, *nodes, *node, *next;
	unsigned int *order, n, m, o = 0;
	int ret, err;

	/* For exercising drm_mm_reserve_node(), we want to check that
	 * reservations outside of the drm_mm range are rejected, and that
	 * overlapping and otherwise already occupied ranges are rejected as
	 * well. Afterwards, the tree and nodes should be intact.
377 + */ 378 + 379 + DRM_MM_BUG_ON(!count); 380 + DRM_MM_BUG_ON(!size); 381 + 382 + ret = -ENOMEM; 383 + order = drm_random_order(count, &prng); 384 + if (!order) 385 + goto err; 386 + 387 + nodes = vzalloc(sizeof(*nodes) * count); 388 + if (!nodes) 389 + goto err_order; 390 + 391 + ret = -EINVAL; 392 + drm_mm_init(&mm, 0, count * size); 393 + 394 + if (!check_reserve_boundaries(&mm, count, size)) 395 + goto out; 396 + 397 + for (n = 0; n < count; n++) { 398 + nodes[n].start = order[n] * size; 399 + nodes[n].size = size; 400 + 401 + err = drm_mm_reserve_node(&mm, &nodes[n]); 402 + if (err) { 403 + pr_err("reserve failed, step %d, start %llu\n", 404 + n, nodes[n].start); 405 + ret = err; 406 + goto out; 407 + } 408 + 409 + if (!drm_mm_node_allocated(&nodes[n])) { 410 + pr_err("reserved node not allocated! step %d, start %llu\n", 411 + n, nodes[n].start); 412 + goto out; 413 + } 414 + 415 + if (!expect_reserve_fail(&mm, &nodes[n])) 416 + goto out; 417 + } 418 + 419 + /* After random insertion the nodes should be in order */ 420 + if (!assert_continuous(&mm, size)) 421 + goto out; 422 + 423 + /* Repeated use should then fail */ 424 + drm_random_reorder(order, count, &prng); 425 + for (n = 0; n < count; n++) { 426 + if (!expect_reserve_fail(&mm, 427 + set_node(&tmp, order[n] * size, 1))) 428 + goto out; 429 + 430 + /* Remove and reinsert should work */ 431 + drm_mm_remove_node(&nodes[order[n]]); 432 + err = drm_mm_reserve_node(&mm, &nodes[order[n]]); 433 + if (err) { 434 + pr_err("reserve failed, step %d, start %llu\n", 435 + n, nodes[n].start); 436 + ret = err; 437 + goto out; 438 + } 439 + } 440 + 441 + if (!assert_continuous(&mm, size)) 442 + goto out; 443 + 444 + /* Overlapping use should then fail */ 445 + for (n = 0; n < count; n++) { 446 + if (!expect_reserve_fail(&mm, set_node(&tmp, 0, size*count))) 447 + goto out; 448 + } 449 + for (n = 0; n < count; n++) { 450 + if (!expect_reserve_fail(&mm, 451 + set_node(&tmp, 452 + size * n, 453 + size * (count - n)))) 454 
+ goto out; 455 + } 456 + 457 + /* Remove several, reinsert, check full */ 458 + for_each_prime_number(n, min(max_prime, count)) { 459 + for (m = 0; m < n; m++) { 460 + node = &nodes[order[(o + m) % count]]; 461 + drm_mm_remove_node(node); 462 + } 463 + 464 + for (m = 0; m < n; m++) { 465 + node = &nodes[order[(o + m) % count]]; 466 + err = drm_mm_reserve_node(&mm, node); 467 + if (err) { 468 + pr_err("reserve failed, step %d/%d, start %llu\n", 469 + m, n, node->start); 470 + ret = err; 471 + goto out; 472 + } 473 + } 474 + 475 + o += n; 476 + 477 + if (!assert_continuous(&mm, size)) 478 + goto out; 479 + } 480 + 481 + ret = 0; 482 + out: 483 + drm_mm_for_each_node_safe(node, next, &mm) 484 + drm_mm_remove_node(node); 485 + drm_mm_takedown(&mm); 486 + vfree(nodes); 487 + err_order: 488 + kfree(order); 489 + err: 490 + return ret; 491 + } 492 + 493 + static int igt_reserve(void *ignored) 494 + { 495 + const unsigned int count = min_t(unsigned int, BIT(10), max_iterations); 496 + int n, ret; 497 + 498 + for_each_prime_number_from(n, 1, 54) { 499 + u64 size = BIT_ULL(n); 500 + 501 + ret = __igt_reserve(count, size - 1); 502 + if (ret) 503 + return ret; 504 + 505 + ret = __igt_reserve(count, size); 506 + if (ret) 507 + return ret; 508 + 509 + ret = __igt_reserve(count, size + 1); 510 + if (ret) 511 + return ret; 512 + } 513 + 514 + return 0; 515 + } 516 + 517 + static bool expect_insert(struct drm_mm *mm, struct drm_mm_node *node, 518 + u64 size, u64 alignment, unsigned long color, 519 + const struct insert_mode *mode) 520 + { 521 + int err; 522 + 523 + err = drm_mm_insert_node_generic(mm, node, 524 + size, alignment, color, 525 + mode->search_flags, 526 + mode->create_flags); 527 + if (err) { 528 + pr_err("insert (size=%llu, alignment=%llu, color=%lu, mode=%s) failed with err=%d\n", 529 + size, alignment, color, mode->name, err); 530 + return false; 531 + } 532 + 533 + if (!assert_node(node, mm, size, alignment, color)) { 534 + drm_mm_remove_node(node); 535 + return 
false; 536 + } 537 + 538 + return true; 539 + } 540 + 541 + static bool expect_insert_fail(struct drm_mm *mm, u64 size) 542 + { 543 + struct drm_mm_node tmp = {}; 544 + int err; 545 + 546 + err = drm_mm_insert_node(mm, &tmp, size, 0, DRM_MM_SEARCH_DEFAULT); 547 + if (likely(err == -ENOSPC)) 548 + return true; 549 + 550 + if (!err) { 551 + pr_err("impossible insert succeeded, node %llu + %llu\n", 552 + tmp.start, tmp.size); 553 + drm_mm_remove_node(&tmp); 554 + } else { 555 + pr_err("impossible insert failed with wrong error %d [expected %d], size %llu\n", 556 + err, -ENOSPC, size); 557 + } 558 + return false; 559 + } 560 + 561 + static int __igt_insert(unsigned int count, u64 size, bool replace) 562 + { 563 + DRM_RND_STATE(prng, random_seed); 564 + const struct insert_mode *mode; 565 + struct drm_mm mm; 566 + struct drm_mm_node *nodes, *node, *next; 567 + unsigned int *order, n, m, o = 0; 568 + int ret; 569 + 570 + /* Fill a range with lots of nodes, check it doesn't fail too early */ 571 + 572 + DRM_MM_BUG_ON(!count); 573 + DRM_MM_BUG_ON(!size); 574 + 575 + ret = -ENOMEM; 576 + nodes = vmalloc(count * sizeof(*nodes)); 577 + if (!nodes) 578 + goto err; 579 + 580 + order = drm_random_order(count, &prng); 581 + if (!order) 582 + goto err_nodes; 583 + 584 + ret = -EINVAL; 585 + drm_mm_init(&mm, 0, count * size); 586 + 587 + for (mode = insert_modes; mode->name; mode++) { 588 + for (n = 0; n < count; n++) { 589 + struct drm_mm_node tmp; 590 + 591 + node = replace ? &tmp : &nodes[n]; 592 + memset(node, 0, sizeof(*node)); 593 + if (!expect_insert(&mm, node, size, 0, n, mode)) { 594 + pr_err("%s insert failed, size %llu step %d\n", 595 + mode->name, size, n); 596 + goto out; 597 + } 598 + 599 + if (replace) { 600 + drm_mm_replace_node(&tmp, &nodes[n]); 601 + if (drm_mm_node_allocated(&tmp)) { 602 + pr_err("replaced old-node still allocated! 
step %d\n", 603 + n); 604 + goto out; 605 + } 606 + 607 + if (!assert_node(&nodes[n], &mm, size, 0, n)) { 608 + pr_err("replaced node did not inherit parameters, size %llu step %d\n", 609 + size, n); 610 + goto out; 611 + } 612 + 613 + if (tmp.start != nodes[n].start) { 614 + pr_err("replaced node mismatch location expected [%llx + %llx], found [%llx + %llx]\n", 615 + tmp.start, size, 616 + nodes[n].start, nodes[n].size); 617 + goto out; 618 + } 619 + } 620 + } 621 + 622 + /* After random insertion the nodes should be in order */ 623 + if (!assert_continuous(&mm, size)) 624 + goto out; 625 + 626 + /* Repeated use should then fail */ 627 + if (!expect_insert_fail(&mm, size)) 628 + goto out; 629 + 630 + /* Remove one and reinsert, as the only hole it should refill itself */ 631 + for (n = 0; n < count; n++) { 632 + u64 addr = nodes[n].start; 633 + 634 + drm_mm_remove_node(&nodes[n]); 635 + if (!expect_insert(&mm, &nodes[n], size, 0, n, mode)) { 636 + pr_err("%s reinsert failed, size %llu step %d\n", 637 + mode->name, size, n); 638 + goto out; 639 + } 640 + 641 + if (nodes[n].start != addr) { 642 + pr_err("%s reinsert node moved, step %d, expected %llx, found %llx\n", 643 + mode->name, n, addr, nodes[n].start); 644 + goto out; 645 + } 646 + 647 + if (!assert_continuous(&mm, size)) 648 + goto out; 649 + } 650 + 651 + /* Remove several, reinsert, check full */ 652 + for_each_prime_number(n, min(max_prime, count)) { 653 + for (m = 0; m < n; m++) { 654 + node = &nodes[order[(o + m) % count]]; 655 + drm_mm_remove_node(node); 656 + } 657 + 658 + for (m = 0; m < n; m++) { 659 + node = &nodes[order[(o + m) % count]]; 660 + if (!expect_insert(&mm, node, size, 0, n, mode)) { 661 + pr_err("%s multiple reinsert failed, size %llu step %d\n", 662 + mode->name, size, n); 663 + goto out; 664 + } 665 + } 666 + 667 + o += n; 668 + 669 + if (!assert_continuous(&mm, size)) 670 + goto out; 671 + 672 + if (!expect_insert_fail(&mm, size)) 673 + goto out; 674 + } 675 + 676 + 
		drm_mm_for_each_node_safe(node, next, &mm)
			drm_mm_remove_node(node);
		DRM_MM_BUG_ON(!drm_mm_clean(&mm));
	}

	ret = 0;
out:
	drm_mm_for_each_node_safe(node, next, &mm)
		drm_mm_remove_node(node);
	drm_mm_takedown(&mm);
	kfree(order);
err_nodes:
	vfree(nodes);
err:
	return ret;
}

static int igt_insert(void *ignored)
{
	const unsigned int count = min_t(unsigned int, BIT(10), max_iterations);
	unsigned int n;
	int ret;

	for_each_prime_number_from(n, 1, 54) {
		u64 size = BIT_ULL(n);

		ret = __igt_insert(count, size - 1, false);
		if (ret)
			return ret;

		ret = __igt_insert(count, size, false);
		if (ret)
			return ret;

		ret = __igt_insert(count, size + 1, false);
		if (ret)
			return ret;
	}

	return 0;
}

static int igt_replace(void *ignored)
{
	const unsigned int count = min_t(unsigned int, BIT(10), max_iterations);
	unsigned int n;
	int ret;

	/* Reuse igt_insert to exercise replacement by inserting a dummy node,
	 * then replacing it with the intended node. We want to check that
	 * the tree is intact and all the information we need is carried
	 * across to the target node.
	 */

	for_each_prime_number_from(n, 1, 54) {
		u64 size = BIT_ULL(n);

		ret = __igt_insert(count, size - 1, true);
		if (ret)
			return ret;

		ret = __igt_insert(count, size, true);
		if (ret)
			return ret;

		ret = __igt_insert(count, size + 1, true);
		if (ret)
			return ret;
	}

	return 0;
}

static bool expect_insert_in_range(struct drm_mm *mm, struct drm_mm_node *node,
				   u64 size, u64 alignment, unsigned long color,
				   u64 range_start, u64 range_end,
				   const struct insert_mode *mode)
{
	int err;

	err = drm_mm_insert_node_in_range_generic(mm, node,
						  size, alignment, color,
						  range_start, range_end,
						  mode->search_flags,
						  mode->create_flags);
	if (err) {
		pr_err("insert (size=%llu, alignment=%llu, color=%lu, mode=%s) into range [%llx, %llx] failed with err=%d\n",
		       size, alignment, color, mode->name,
		       range_start, range_end, err);
		return false;
	}

	if (!assert_node(node, mm, size, alignment, color)) {
		drm_mm_remove_node(node);
		return false;
	}

	return true;
}

static bool expect_insert_in_range_fail(struct drm_mm *mm,
					u64 size,
					u64 range_start,
					u64 range_end)
{
	struct drm_mm_node tmp = {};
	int err;

	err = drm_mm_insert_node_in_range_generic(mm, &tmp,
						  size, 0, 0,
						  range_start, range_end,
						  DRM_MM_SEARCH_DEFAULT,
						  DRM_MM_CREATE_DEFAULT);
	if (likely(err == -ENOSPC))
		return true;

	if (!err) {
		pr_err("impossible insert succeeded, node %llx + %llu, range [%llx, %llx]\n",
		       tmp.start, tmp.size, range_start, range_end);
		drm_mm_remove_node(&tmp);
	} else {
		pr_err("impossible insert failed with wrong error %d [expected %d], size %llu, range [%llx, %llx]\n",
		       err, -ENOSPC, size, range_start, range_end);
	}

	return false;
}
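The range tests that follow locate the first node expected inside `[start, end]` with `div64_u64(start + size - 1, size)`: a ceiling division giving the smallest n such that `n * size >= start`. A userspace sketch of the same arithmetic (the function name is ours, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Index of the first size-sized slot whose start (n * size) lies at or
 * after `start`, i.e. ceil(start / size). Assumes size != 0 and that
 * start + size - 1 does not overflow u64.
 */
static uint64_t first_slot_index(uint64_t start, uint64_t size)
{
	return (start + size - 1) / size;
}
```

The kernel uses `div64_u64()` rather than `/` because 64-bit division is not available as a plain operator on all 32-bit architectures the kernel supports.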
800 + static bool assert_contiguous_in_range(struct drm_mm *mm, 801 + u64 size, 802 + u64 start, 803 + u64 end) 804 + { 805 + struct drm_mm_node *node; 806 + unsigned int n; 807 + 808 + if (!expect_insert_in_range_fail(mm, size, start, end)) 809 + return false; 810 + 811 + n = div64_u64(start + size - 1, size); 812 + drm_mm_for_each_node(node, mm) { 813 + if (node->start < start || node->start + node->size > end) { 814 + pr_err("node %d out of range, address [%llx + %llu], range [%llx, %llx]\n", 815 + n, node->start, node->start + node->size, start, end); 816 + return false; 817 + } 818 + 819 + if (node->start != n * size) { 820 + pr_err("node %d out of order, expected start %llx, found %llx\n", 821 + n, n * size, node->start); 822 + return false; 823 + } 824 + 825 + if (node->size != size) { 826 + pr_err("node %d has wrong size, expected size %llx, found %llx\n", 827 + n, size, node->size); 828 + return false; 829 + } 830 + 831 + if (drm_mm_hole_follows(node) && 832 + drm_mm_hole_node_end(node) < end) { 833 + pr_err("node %d is followed by a hole!\n", n); 834 + return false; 835 + } 836 + 837 + n++; 838 + } 839 + 840 + drm_mm_for_each_node_in_range(node, mm, 0, start) { 841 + if (node) { 842 + pr_err("node before start: node=%llx+%llu, start=%llx\n", 843 + node->start, node->size, start); 844 + return false; 845 + } 846 + } 847 + 848 + drm_mm_for_each_node_in_range(node, mm, end, U64_MAX) { 849 + if (node) { 850 + pr_err("node after end: node=%llx+%llu, end=%llx\n", 851 + node->start, node->size, end); 852 + return false; 853 + } 854 + } 855 + 856 + return true; 857 + } 858 + 859 + static int __igt_insert_range(unsigned int count, u64 size, u64 start, u64 end) 860 + { 861 + const struct insert_mode *mode; 862 + struct drm_mm mm; 863 + struct drm_mm_node *nodes, *node, *next; 864 + unsigned int n, start_n, end_n; 865 + int ret; 866 + 867 + DRM_MM_BUG_ON(!count); 868 + DRM_MM_BUG_ON(!size); 869 + DRM_MM_BUG_ON(end <= start); 870 + 871 + /* Very similar to 
__igt_insert(), but now instead of populating the 872 + * full range of the drm_mm, we try to fill a small portion of it. 873 + */ 874 + 875 + ret = -ENOMEM; 876 + nodes = vzalloc(count * sizeof(*nodes)); 877 + if (!nodes) 878 + goto err; 879 + 880 + ret = -EINVAL; 881 + drm_mm_init(&mm, 0, count * size); 882 + 883 + start_n = div64_u64(start + size - 1, size); 884 + end_n = div64_u64(end - size, size); 885 + 886 + for (mode = insert_modes; mode->name; mode++) { 887 + for (n = start_n; n <= end_n; n++) { 888 + if (!expect_insert_in_range(&mm, &nodes[n], 889 + size, size, n, 890 + start, end, mode)) { 891 + pr_err("%s insert failed, size %llu, step %d [%d, %d], range [%llx, %llx]\n", 892 + mode->name, size, n, 893 + start_n, end_n, 894 + start, end); 895 + goto out; 896 + } 897 + } 898 + 899 + if (!assert_contiguous_in_range(&mm, size, start, end)) { 900 + pr_err("%s: range [%llx, %llx] not full after initialisation, size=%llu\n", 901 + mode->name, start, end, size); 902 + goto out; 903 + } 904 + 905 + /* Remove one and reinsert, it should refill itself */ 906 + for (n = start_n; n <= end_n; n++) { 907 + u64 addr = nodes[n].start; 908 + 909 + drm_mm_remove_node(&nodes[n]); 910 + if (!expect_insert_in_range(&mm, &nodes[n], 911 + size, size, n, 912 + start, end, mode)) { 913 + pr_err("%s reinsert failed, step %d\n", mode->name, n); 914 + goto out; 915 + } 916 + 917 + if (nodes[n].start != addr) { 918 + pr_err("%s reinsert node moved, step %d, expected %llx, found %llx\n", 919 + mode->name, n, addr, nodes[n].start); 920 + goto out; 921 + } 922 + } 923 + 924 + if (!assert_contiguous_in_range(&mm, size, start, end)) { 925 + pr_err("%s: range [%llx, %llx] not full after reinsertion, size=%llu\n", 926 + mode->name, start, end, size); 927 + goto out; 928 + } 929 + 930 + drm_mm_for_each_node_safe(node, next, &mm) 931 + drm_mm_remove_node(node); 932 + DRM_MM_BUG_ON(!drm_mm_clean(&mm)); 933 + } 934 + 935 + ret = 0; 936 + out: 937 + drm_mm_for_each_node_safe(node, next, &mm) 
938 + drm_mm_remove_node(node); 939 + drm_mm_takedown(&mm); 940 + vfree(nodes); 941 + err: 942 + return ret; 943 + } 944 + 945 + static int insert_outside_range(void) 946 + { 947 + struct drm_mm mm; 948 + const unsigned int start = 1024; 949 + const unsigned int end = 2048; 950 + const unsigned int size = end - start; 951 + 952 + drm_mm_init(&mm, start, size); 953 + 954 + if (!expect_insert_in_range_fail(&mm, 1, 0, start)) 955 + return -EINVAL; 956 + 957 + if (!expect_insert_in_range_fail(&mm, size, 958 + start - size/2, start + (size+1)/2)) 959 + return -EINVAL; 960 + 961 + if (!expect_insert_in_range_fail(&mm, size, 962 + end - (size+1)/2, end + size/2)) 963 + return -EINVAL; 964 + 965 + if (!expect_insert_in_range_fail(&mm, 1, end, end + size)) 966 + return -EINVAL; 967 + 968 + drm_mm_takedown(&mm); 969 + return 0; 970 + } 971 + 972 + static int igt_insert_range(void *ignored) 973 + { 974 + const unsigned int count = min_t(unsigned int, BIT(13), max_iterations); 975 + unsigned int n; 976 + int ret; 977 + 978 + /* Check that requests outside the bounds of drm_mm are rejected. 
*/ 979 + ret = insert_outside_range(); 980 + if (ret) 981 + return ret; 982 + 983 + for_each_prime_number_from(n, 1, 50) { 984 + const u64 size = BIT_ULL(n); 985 + const u64 max = count * size; 986 + 987 + ret = __igt_insert_range(count, size, 0, max); 988 + if (ret) 989 + return ret; 990 + 991 + ret = __igt_insert_range(count, size, 1, max); 992 + if (ret) 993 + return ret; 994 + 995 + ret = __igt_insert_range(count, size, 0, max - 1); 996 + if (ret) 997 + return ret; 998 + 999 + ret = __igt_insert_range(count, size, 0, max/2); 1000 + if (ret) 1001 + return ret; 1002 + 1003 + ret = __igt_insert_range(count, size, max/2, max); 1004 + if (ret) 1005 + return ret; 1006 + 1007 + ret = __igt_insert_range(count, size, max/4+1, 3*max/4-1); 1008 + if (ret) 1009 + return ret; 1010 + } 1011 + 1012 + return 0; 1013 + } 1014 + 1015 + static int igt_align(void *ignored) 1016 + { 1017 + const struct insert_mode *mode; 1018 + const unsigned int max_count = min(8192u, max_prime); 1019 + struct drm_mm mm; 1020 + struct drm_mm_node *nodes, *node, *next; 1021 + unsigned int prime; 1022 + int ret = -EINVAL; 1023 + 1024 + /* For each of the possible insertion modes, we pick a few 1025 + * arbitrary alignments and check that the inserted node 1026 + * meets our requirements. 
1027 + */ 1028 + 1029 + nodes = vzalloc(max_count * sizeof(*nodes)); 1030 + if (!nodes) 1031 + goto err; 1032 + 1033 + drm_mm_init(&mm, 1, U64_MAX - 2); 1034 + 1035 + for (mode = insert_modes; mode->name; mode++) { 1036 + unsigned int i = 0; 1037 + 1038 + for_each_prime_number_from(prime, 1, max_count) { 1039 + u64 size = next_prime_number(prime); 1040 + 1041 + if (!expect_insert(&mm, &nodes[i], 1042 + size, prime, i, 1043 + mode)) { 1044 + pr_err("%s insert failed with alignment=%d", 1045 + mode->name, prime); 1046 + goto out; 1047 + } 1048 + 1049 + i++; 1050 + } 1051 + 1052 + drm_mm_for_each_node_safe(node, next, &mm) 1053 + drm_mm_remove_node(node); 1054 + DRM_MM_BUG_ON(!drm_mm_clean(&mm)); 1055 + } 1056 + 1057 + ret = 0; 1058 + out: 1059 + drm_mm_for_each_node_safe(node, next, &mm) 1060 + drm_mm_remove_node(node); 1061 + drm_mm_takedown(&mm); 1062 + vfree(nodes); 1063 + err: 1064 + return ret; 1065 + } 1066 + 1067 + static int igt_align_pot(int max) 1068 + { 1069 + struct drm_mm mm; 1070 + struct drm_mm_node *node, *next; 1071 + int bit; 1072 + int ret = -EINVAL; 1073 + 1074 + /* Check that we can align to the full u64 address space */ 1075 + 1076 + drm_mm_init(&mm, 1, U64_MAX - 2); 1077 + 1078 + for (bit = max - 1; bit; bit--) { 1079 + u64 align, size; 1080 + 1081 + node = kzalloc(sizeof(*node), GFP_KERNEL); 1082 + if (!node) { 1083 + ret = -ENOMEM; 1084 + goto out; 1085 + } 1086 + 1087 + align = BIT_ULL(bit); 1088 + size = BIT_ULL(bit-1) + 1; 1089 + if (!expect_insert(&mm, node, 1090 + size, align, bit, 1091 + &insert_modes[0])) { 1092 + pr_err("insert failed with alignment=%llx [%d]", 1093 + align, bit); 1094 + goto out; 1095 + } 1096 + } 1097 + 1098 + ret = 0; 1099 + out: 1100 + drm_mm_for_each_node_safe(node, next, &mm) { 1101 + drm_mm_remove_node(node); 1102 + kfree(node); 1103 + } 1104 + drm_mm_takedown(&mm); 1105 + return ret; 1106 + } 1107 + 1108 + static int igt_align32(void *ignored) 1109 + { 1110 + return igt_align_pot(32); 1111 + } 1112 + 1113 + 
static int igt_align64(void *ignored)
{
	return igt_align_pot(64);
}

static void show_scan(const struct drm_mm_scan *scan)
{
	pr_info("scan: hit [%llx, %llx], size=%lld, align=%lld, color=%ld\n",
		scan->hit_start, scan->hit_end,
		scan->size, scan->alignment, scan->color);
}

static void show_holes(const struct drm_mm *mm, int count)
{
	u64 hole_start, hole_end;
	struct drm_mm_node *hole;

	drm_mm_for_each_hole(hole, mm, hole_start, hole_end) {
		struct drm_mm_node *next = list_next_entry(hole, node_list);
		const char *node1 = NULL, *node2 = NULL;

		if (hole->allocated)
			node1 = kasprintf(GFP_KERNEL,
					  "[%llx + %lld, color=%ld], ",
					  hole->start, hole->size, hole->color);

		if (next->allocated)
			node2 = kasprintf(GFP_KERNEL,
					  ", [%llx + %lld, color=%ld]",
					  next->start, next->size, next->color);

		pr_info("%sHole [%llx - %llx, size %lld]%s\n",
			node1,
			hole_start, hole_end, hole_end - hole_start,
			node2);

		kfree(node2);
		kfree(node1);

		if (!--count)
			break;
	}
}

struct evict_node {
	struct drm_mm_node node;
	struct list_head link;
};

static bool evict_nodes(struct drm_mm_scan *scan,
			struct evict_node *nodes,
			unsigned int *order,
			unsigned int count,
			bool use_color,
			struct list_head *evict_list)
{
	struct evict_node *e, *en;
	unsigned int i;

	for (i = 0; i < count; i++) {
		e = &nodes[order ? order[i] : i];
		list_add(&e->link, evict_list);
		if (drm_mm_scan_add_block(scan, &e->node))
			break;
	}
	list_for_each_entry_safe(e, en, evict_list, link) {
		if (!drm_mm_scan_remove_block(scan, &e->node))
			list_del(&e->link);
	}
	if (list_empty(evict_list)) {
		pr_err("Failed to find eviction: size=%lld [avail=%d], align=%lld (color=%lu)\n",
		       scan->size, count, scan->alignment, scan->color);
		return false;
	}

	list_for_each_entry(e, evict_list, link)
		drm_mm_remove_node(&e->node);

	if (use_color) {
		struct drm_mm_node *node;

		while ((node = drm_mm_scan_color_evict(scan))) {
			e = container_of(node, typeof(*e), node);
			drm_mm_remove_node(&e->node);
			list_add(&e->link, evict_list);
		}
	} else {
		if (drm_mm_scan_color_evict(scan)) {
			pr_err("drm_mm_scan_color_evict unexpectedly reported overlapping nodes!\n");
			return false;
		}
	}

	return true;
}

static bool evict_nothing(struct drm_mm *mm,
			  unsigned int total_size,
			  struct evict_node *nodes)
{
	struct drm_mm_scan scan;
	LIST_HEAD(evict_list);
	struct evict_node *e;
	struct drm_mm_node *node;
	unsigned int n;

	drm_mm_scan_init(&scan, mm, 1, 0, 0, 0);
	for (n = 0; n < total_size; n++) {
		e = &nodes[n];
		list_add(&e->link, &evict_list);
		drm_mm_scan_add_block(&scan, &e->node);
	}
	list_for_each_entry(e, &evict_list, link)
		drm_mm_scan_remove_block(&scan, &e->node);

	for (n = 0; n < total_size; n++) {
		e = &nodes[n];

		if (!drm_mm_node_allocated(&e->node)) {
			pr_err("node[%d] no longer allocated!\n", n);
			return false;
		}

		e->link.next = NULL;
	}

	drm_mm_for_each_node(node, mm) {
		e = container_of(node, typeof(*e), node);
		e->link.next = &e->link;
	}

	for (n = 0; n < total_size; n++) {
		e = &nodes[n];

		if (!e->link.next) {
			pr_err("node[%d] no longer connected!\n", n);
			return false;
		}
	}

	return assert_continuous(mm, nodes[0].node.size);
}

static bool evict_everything(struct drm_mm *mm,
			     unsigned int total_size,
			     struct evict_node *nodes)
{
	struct drm_mm_scan scan;
	LIST_HEAD(evict_list);
	struct evict_node *e;
	unsigned int n;
	int err;

	drm_mm_scan_init(&scan, mm, total_size, 0, 0, 0);
	for (n = 0; n < total_size; n++) {
		e = &nodes[n];
		list_add(&e->link, &evict_list);
		if (drm_mm_scan_add_block(&scan, &e->node))
			break;
	}
	list_for_each_entry(e, &evict_list, link) {
		if (!drm_mm_scan_remove_block(&scan, &e->node)) {
			pr_err("Node %lld not marked for eviction!\n",
			       e->node.start);
			list_del(&e->link);
		}
	}

	list_for_each_entry(e, &evict_list, link)
		drm_mm_remove_node(&e->node);

	if (!assert_one_hole(mm, 0, total_size))
		return false;

	list_for_each_entry(e, &evict_list, link) {
		err = drm_mm_reserve_node(mm, &e->node);
		if (err) {
			pr_err("Failed to reinsert node after eviction: start=%llx\n",
			       e->node.start);
			return false;
		}
	}

	return assert_continuous(mm, nodes[0].node.size);
}

static int evict_something(struct drm_mm *mm,
			   u64 range_start, u64 range_end,
			   struct evict_node *nodes,
			   unsigned int *order,
			   unsigned int count,
			   unsigned int size,
			   unsigned int alignment,
			   const struct insert_mode *mode)
{
	struct drm_mm_scan scan;
	LIST_HEAD(evict_list);
	struct evict_node *e;
	struct drm_mm_node tmp;
	int err;

	drm_mm_scan_init_with_range(&scan, mm,
				    size, alignment, 0,
				    range_start, range_end,
				    mode->create_flags);
	if (!evict_nodes(&scan,
			 nodes, order, count, false,
			 &evict_list))
		return -EINVAL;

	memset(&tmp, 0, sizeof(tmp));
	err = drm_mm_insert_node_generic(mm, &tmp, size, alignment, 0,
					 mode->search_flags,
					 mode->create_flags);
	if (err) {
		pr_err("Failed to insert into eviction hole: size=%d, align=%d\n",
		       size, alignment);
		show_scan(&scan);
		show_holes(mm, 3);
		return err;
	}

	if (tmp.start < range_start || tmp.start + tmp.size > range_end) {
		pr_err("Inserted [address=%llu + %llu] did not fit into the request range [%llu, %llu]\n",
		       tmp.start, tmp.size, range_start, range_end);
		err = -EINVAL;
	}

	if (!assert_node(&tmp, mm, size, alignment, 0) ||
	    drm_mm_hole_follows(&tmp)) {
		pr_err("Inserted did not fill the eviction hole: size=%lld [%d], align=%d [rem=%lld], start=%llx, hole-follows?=%d\n",
		       tmp.size, size,
		       alignment, misalignment(&tmp, alignment),
		       tmp.start, drm_mm_hole_follows(&tmp));
		err = -EINVAL;
	}

	drm_mm_remove_node(&tmp);
	if (err)
		return err;

	list_for_each_entry(e, &evict_list, link) {
		err = drm_mm_reserve_node(mm, &e->node);
		if (err) {
			pr_err("Failed to reinsert node after eviction: start=%llx\n",
			       e->node.start);
			return err;
		}
	}

	if (!assert_continuous(mm, nodes[0].node.size)) {
		pr_err("range is no longer continuous\n");
		return -EINVAL;
	}

	return 0;
}

static int igt_evict(void *ignored)
{
	DRM_RND_STATE(prng, random_seed);
	const unsigned int size = 8192;
	const struct insert_mode *mode;
	struct drm_mm mm;
	struct evict_node *nodes;
	struct drm_mm_node *node, *next;
	unsigned int *order, n;
	int ret, err;

	/* Here we populate a full drm_mm and then try and insert a new node
	 * by evicting other nodes in a random order. The drm_mm_scan should
	 * pick the first matching hole it finds from the random list. We
	 * repeat that for different allocation strategies, alignments and
	 * sizes to try and stress the hole finder.
	 */

	ret = -ENOMEM;
	nodes = vzalloc(size * sizeof(*nodes));
	if (!nodes)
		goto err;

	order = drm_random_order(size, &prng);
	if (!order)
		goto err_nodes;

	ret = -EINVAL;
	drm_mm_init(&mm, 0, size);
	for (n = 0; n < size; n++) {
		err = drm_mm_insert_node(&mm, &nodes[n].node, 1, 0,
					 DRM_MM_SEARCH_DEFAULT);
		if (err) {
			pr_err("insert failed, step %d\n", n);
			ret = err;
			goto out;
		}
	}

	/* First check that using the scanner doesn't break the mm */
	if (!evict_nothing(&mm, size, nodes)) {
		pr_err("evict_nothing() failed\n");
		goto out;
	}
	if (!evict_everything(&mm, size, nodes)) {
		pr_err("evict_everything() failed\n");
		goto out;
	}

	for (mode = evict_modes; mode->name; mode++) {
		for (n = 1; n <= size; n <<= 1) {
			drm_random_reorder(order, size, &prng);
			err = evict_something(&mm, 0, U64_MAX,
					      nodes, order, size,
					      n, 1,
					      mode);
			if (err) {
				pr_err("%s evict_something(size=%u) failed\n",
				       mode->name, n);
				ret = err;
				goto out;
			}
		}

		for (n = 1; n < size; n <<= 1) {
			drm_random_reorder(order, size, &prng);
			err = evict_something(&mm, 0, U64_MAX,
					      nodes, order, size,
					      size/2, n,
					      mode);
			if (err) {
				pr_err("%s evict_something(size=%u, alignment=%u) failed\n",
				       mode->name, size/2, n);
				ret = err;
				goto out;
			}
		}

		for_each_prime_number_from(n, 1, min(size, max_prime)) {
			unsigned int nsize = (size - n + 1) / 2;

			DRM_MM_BUG_ON(!nsize);

			drm_random_reorder(order, size, &prng);
			err = evict_something(&mm, 0, U64_MAX,
					      nodes, order, size,
					      nsize, n,
					      mode);
			if (err) {
				pr_err("%s evict_something(size=%u, alignment=%u) failed\n",
				       mode->name, nsize, n);
				ret = err;
				goto out;
			}
		}
	}

	ret = 0;
out:
	drm_mm_for_each_node_safe(node, next, &mm)
		drm_mm_remove_node(node);
	drm_mm_takedown(&mm);
	kfree(order);
err_nodes:
	vfree(nodes);
err:
	return ret;
}

static int igt_evict_range(void *ignored)
{
	DRM_RND_STATE(prng, random_seed);
	const unsigned int size = 8192;
	const unsigned int range_size = size / 2;
	const unsigned int range_start = size / 4;
	const unsigned int range_end = range_start + range_size;
	const struct insert_mode *mode;
	struct drm_mm mm;
	struct evict_node *nodes;
	struct drm_mm_node *node, *next;
	unsigned int *order, n;
	int ret, err;

	/* Like igt_evict() but now we are limiting the search to a
	 * small portion of the full drm_mm.
	 */

	ret = -ENOMEM;
	nodes = vzalloc(size * sizeof(*nodes));
	if (!nodes)
		goto err;

	order = drm_random_order(size, &prng);
	if (!order)
		goto err_nodes;

	ret = -EINVAL;
	drm_mm_init(&mm, 0, size);
	for (n = 0; n < size; n++) {
		err = drm_mm_insert_node(&mm, &nodes[n].node, 1, 0,
					 DRM_MM_SEARCH_DEFAULT);
		if (err) {
			pr_err("insert failed, step %d\n", n);
			ret = err;
			goto out;
		}
	}

	for (mode = evict_modes; mode->name; mode++) {
		for (n = 1; n <= range_size; n <<= 1) {
			drm_random_reorder(order, size, &prng);
			err = evict_something(&mm, range_start, range_end,
					      nodes, order, size,
					      n, 1,
					      mode);
			if (err) {
				pr_err("%s evict_something(size=%u) failed with range [%u, %u]\n",
				       mode->name, n, range_start, range_end);
				goto out;
			}
		}

		for (n = 1; n <= range_size; n <<= 1) {
			drm_random_reorder(order, size, &prng);
			err = evict_something(&mm, range_start, range_end,
					      nodes, order, size,
					      range_size/2, n,
					      mode);
			if (err) {
				pr_err("%s evict_something(size=%u, alignment=%u) failed with range [%u, %u]\n",
				       mode->name, range_size/2, n, range_start, range_end);
				goto out;
			}
		}

		for_each_prime_number_from(n, 1, min(range_size, max_prime)) {
			unsigned int nsize = (range_size - n + 1) / 2;

			DRM_MM_BUG_ON(!nsize);

			drm_random_reorder(order, size, &prng);
			err = evict_something(&mm, range_start, range_end,
					      nodes, order, size,
					      nsize, n,
					      mode);
			if (err) {
				pr_err("%s evict_something(size=%u, alignment=%u) failed with range [%u, %u]\n",
				       mode->name, nsize, n, range_start, range_end);
				goto out;
			}
		}
	}

	ret = 0;
out:
	drm_mm_for_each_node_safe(node, next, &mm)
		drm_mm_remove_node(node);
	drm_mm_takedown(&mm);
	kfree(order);
err_nodes:
	vfree(nodes);
err:
	return ret;
}

static unsigned int node_index(const struct drm_mm_node *node)
{
	return div64_u64(node->start, node->size);
}

static int igt_topdown(void *ignored)
{
	const struct insert_mode *topdown = &insert_modes[TOPDOWN];
	DRM_RND_STATE(prng, random_seed);
	const unsigned int count = 8192;
	unsigned int size;
	unsigned long *bitmap = NULL;
	struct drm_mm mm;
	struct drm_mm_node *nodes, *node, *next;
	unsigned int *order, n, m, o = 0;
	int ret;

	/* When allocating top-down, we expect to be returned a node
	 * from a suitable hole at the top of the drm_mm. We check that
	 * the returned node does match the highest available slot.
	 */

	ret = -ENOMEM;
	nodes = vzalloc(count * sizeof(*nodes));
	if (!nodes)
		goto err;

	bitmap = kzalloc(count / BITS_PER_LONG * sizeof(unsigned long),
			 GFP_TEMPORARY);
	if (!bitmap)
		goto err_nodes;

	order = drm_random_order(count, &prng);
	if (!order)
		goto err_bitmap;

	ret = -EINVAL;
	for (size = 1; size <= 64; size <<= 1) {
		drm_mm_init(&mm, 0, size*count);
		for (n = 0; n < count; n++) {
			if (!expect_insert(&mm, &nodes[n],
					   size, 0, n,
					   topdown)) {
				pr_err("insert failed, size %u step %d\n", size, n);
				goto out;
			}

			if (drm_mm_hole_follows(&nodes[n])) {
				pr_err("hole after topdown insert %d, start=%llx\n, size=%u",
				       n, nodes[n].start, size);
				goto out;
			}

			if (!assert_one_hole(&mm, 0, size*(count - n - 1)))
				goto out;
		}

		if (!assert_continuous(&mm, size))
			goto out;

		drm_random_reorder(order, count, &prng);
		for_each_prime_number_from(n, 1, min(count, max_prime)) {
			for (m = 0; m < n; m++) {
				node = &nodes[order[(o + m) % count]];
				drm_mm_remove_node(node);
				__set_bit(node_index(node), bitmap);
			}

			for (m = 0; m < n; m++) {
				unsigned int last;

				node = &nodes[order[(o + m) % count]];
				if (!expect_insert(&mm, node,
						   size, 0, 0,
						   topdown)) {
					pr_err("insert failed, step %d/%d\n", m, n);
					goto out;
				}

				if (drm_mm_hole_follows(node)) {
					pr_err("hole after topdown insert %d/%d, start=%llx\n",
					       m, n, node->start);
					goto out;
				}

				last = find_last_bit(bitmap, count);
				if (node_index(node) != last) {
					pr_err("node %d/%d, size %d, not inserted into upmost hole, expected %d, found %d\n",
					       m, n, size, last, node_index(node));
					goto out;
				}

				__clear_bit(last, bitmap);
			}

			DRM_MM_BUG_ON(find_first_bit(bitmap, count) != count);

			o += n;
		}

		drm_mm_for_each_node_safe(node, next, &mm)
			drm_mm_remove_node(node);
		DRM_MM_BUG_ON(!drm_mm_clean(&mm));
	}

	ret = 0;
out:
	drm_mm_for_each_node_safe(node, next, &mm)
		drm_mm_remove_node(node);
	drm_mm_takedown(&mm);
	kfree(order);
err_bitmap:
	kfree(bitmap);
err_nodes:
	vfree(nodes);
err:
	return ret;
}

static void separate_adjacent_colors(const struct drm_mm_node *node,
				     unsigned long color,
				     u64 *start,
				     u64 *end)
{
	if (node->allocated && node->color != color)
		++*start;

	node = list_next_entry(node, node_list);
	if (node->allocated && node->color != color)
		--*end;
}

static bool colors_abutt(const struct drm_mm_node *node)
{
	if (!drm_mm_hole_follows(node) &&
	    list_next_entry(node, node_list)->allocated) {
		pr_err("colors abutt; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n",
		       node->color, node->start, node->size,
		       list_next_entry(node, node_list)->color,
		       list_next_entry(node, node_list)->start,
		       list_next_entry(node, node_list)->size);
		return true;
	}

	return false;
}

static int igt_color(void *ignored)
{
	const unsigned int count = min(4096u, max_iterations);
	const struct insert_mode *mode;
	struct drm_mm mm;
	struct drm_mm_node *node, *nn;
	unsigned int n;
	int ret = -EINVAL, err;

	/* Color adjustment complicates everything. First we just check
	 * that when we insert a node we apply any color_adjustment callback.
	 * The callback we use should ensure that there is a gap between
	 * any two nodes, and so after each insertion we check that those
	 * holes are inserted and that they are preserved.
	 */

	drm_mm_init(&mm, 0, U64_MAX);

	for (n = 1; n <= count; n++) {
		node = kzalloc(sizeof(*node), GFP_KERNEL);
		if (!node) {
			ret = -ENOMEM;
			goto out;
		}

		if (!expect_insert(&mm, node,
				   n, 0, n,
				   &insert_modes[0])) {
			pr_err("insert failed, step %d\n", n);
			kfree(node);
			goto out;
		}
	}

	drm_mm_for_each_node_safe(node, nn, &mm) {
		if (node->color != node->size) {
			pr_err("invalid color stored: expected %lld, found %ld\n",
			       node->size, node->color);

			goto out;
		}

		drm_mm_remove_node(node);
		kfree(node);
	}

	/* Now, let's start experimenting with applying a color callback */
	mm.color_adjust = separate_adjacent_colors;
	for (mode = insert_modes; mode->name; mode++) {
		u64 last;

		node = kzalloc(sizeof(*node), GFP_KERNEL);
		if (!node) {
			ret = -ENOMEM;
			goto out;
		}

		node->size = 1 + 2*count;
		node->color = node->size;

		err = drm_mm_reserve_node(&mm, node);
		if (err) {
			pr_err("initial reserve failed!\n");
			ret = err;
			goto out;
		}

		last = node->start + node->size;

		for (n = 1; n <= count; n++) {
			int rem;

			node = kzalloc(sizeof(*node), GFP_KERNEL);
			if (!node) {
				ret = -ENOMEM;
				goto out;
			}

			node->start = last;
			node->size = n + count;
			node->color = node->size;

			err = drm_mm_reserve_node(&mm, node);
			if (err != -ENOSPC) {
				pr_err("reserve %d did not report color overlap! err=%d\n",
				       n, err);
				goto out;
			}

			node->start += n + 1;
			rem = misalignment(node, n + count);
			node->start += n + count - rem;

			err = drm_mm_reserve_node(&mm, node);
			if (err) {
				pr_err("reserve %d failed, err=%d\n", n, err);
				ret = err;
				goto out;
			}

			last = node->start + node->size;
		}

		for (n = 1; n <= count; n++) {
			node = kzalloc(sizeof(*node), GFP_KERNEL);
			if (!node) {
				ret = -ENOMEM;
				goto out;
			}

			if (!expect_insert(&mm, node,
					   n, n, n,
					   mode)) {
				pr_err("%s insert failed, step %d\n",
				       mode->name, n);
				kfree(node);
				goto out;
			}
		}

		drm_mm_for_each_node_safe(node, nn, &mm) {
			u64 rem;

			if (node->color != node->size) {
				pr_err("%s invalid color stored: expected %lld, found %ld\n",
				       mode->name, node->size, node->color);

				goto out;
			}

			if (colors_abutt(node))
				goto out;

			div64_u64_rem(node->start, node->size, &rem);
			if (rem) {
				pr_err("%s colored node misaligned, start=%llx expected alignment=%lld [rem=%lld]\n",
				       mode->name, node->start, node->size, rem);
				goto out;
			}

			drm_mm_remove_node(node);
			kfree(node);
		}
	}

	ret = 0;
out:
	drm_mm_for_each_node_safe(node, nn, &mm) {
		drm_mm_remove_node(node);
		kfree(node);
	}
	drm_mm_takedown(&mm);
	return ret;
}

static int evict_color(struct drm_mm *mm,
		       u64 range_start, u64 range_end,
		       struct evict_node *nodes,
		       unsigned int *order,
		       unsigned int count,
		       unsigned int size,
		       unsigned int alignment,
		       unsigned long color,
		       const struct insert_mode *mode)
{
	struct drm_mm_scan scan;
	LIST_HEAD(evict_list);
	struct evict_node *e;
	struct drm_mm_node tmp;
	int err;

	drm_mm_scan_init_with_range(&scan, mm,
				    size, alignment, color,
				    range_start, range_end,
				    mode->create_flags);
	if (!evict_nodes(&scan,
			 nodes, order, count, true,
			 &evict_list))
		return -EINVAL;

	memset(&tmp, 0, sizeof(tmp));
	err = drm_mm_insert_node_generic(mm, &tmp, size, alignment, color,
					 mode->search_flags,
					 mode->create_flags);
	if (err) {
		pr_err("Failed to insert into eviction hole: size=%d, align=%d, color=%lu, err=%d\n",
		       size, alignment, color, err);
		show_scan(&scan);
		show_holes(mm, 3);
		return err;
	}

	if (tmp.start < range_start || tmp.start + tmp.size > range_end) {
		pr_err("Inserted [address=%llu + %llu] did not fit into the request range [%llu, %llu]\n",
		       tmp.start, tmp.size, range_start, range_end);
		err = -EINVAL;
	}

	if (colors_abutt(&tmp))
		err = -EINVAL;

	if (!assert_node(&tmp, mm, size, alignment, color)) {
		pr_err("Inserted did not fit the eviction hole: size=%lld [%d], align=%d [rem=%lld], start=%llx\n",
		       tmp.size, size,
		       alignment, misalignment(&tmp, alignment), tmp.start);
		err = -EINVAL;
	}

	drm_mm_remove_node(&tmp);
	if (err)
		return err;

	list_for_each_entry(e, &evict_list, link) {
		err = drm_mm_reserve_node(mm, &e->node);
		if (err) {
			pr_err("Failed to reinsert node after eviction: start=%llx\n",
			       e->node.start);
			return err;
		}
	}

	return 0;
}

static int igt_color_evict(void *ignored)
{
	DRM_RND_STATE(prng, random_seed);
	const unsigned int total_size = min(8192u, max_iterations);
	const struct insert_mode *mode;
	unsigned long color = 0;
	struct drm_mm mm;
	struct evict_node *nodes;
	struct drm_mm_node *node, *next;
	unsigned int *order, n;
	int ret, err;

	/* Check that the drm_mm_scan also honours color adjustment when
	 * choosing its victims to create a hole. Our color_adjust does not
	 * allow two nodes to be placed together without an intervening hole
	 * enlarging the set of victims that must be evicted.
	 */

	ret = -ENOMEM;
	nodes = vzalloc(total_size * sizeof(*nodes));
	if (!nodes)
		goto err;

	order = drm_random_order(total_size, &prng);
	if (!order)
		goto err_nodes;

	ret = -EINVAL;
	drm_mm_init(&mm, 0, 2*total_size - 1);
	mm.color_adjust = separate_adjacent_colors;
	for (n = 0; n < total_size; n++) {
		if (!expect_insert(&mm, &nodes[n].node,
				   1, 0, color++,
				   &insert_modes[0])) {
			pr_err("insert failed, step %d\n", n);
			goto out;
		}
	}

	for (mode = evict_modes; mode->name; mode++) {
		for (n = 1; n <= total_size; n <<= 1) {
			drm_random_reorder(order, total_size, &prng);
			err = evict_color(&mm, 0, U64_MAX,
					  nodes, order, total_size,
					  n, 1, color++,
					  mode);
			if (err) {
				pr_err("%s evict_color(size=%u) failed\n",
				       mode->name, n);
				goto out;
			}
		}

		for (n = 1; n < total_size; n <<= 1) {
			drm_random_reorder(order, total_size, &prng);
			err = evict_color(&mm, 0, U64_MAX,
					  nodes, order, total_size,
					  total_size/2, n, color++,
					  mode);
			if (err) {
				pr_err("%s evict_color(size=%u, alignment=%u) failed\n",
				       mode->name, total_size/2, n);
				goto out;
			}
		}

		for_each_prime_number_from(n, 1, min(total_size, max_prime)) {
			unsigned int nsize = (total_size - n + 1) / 2;

			DRM_MM_BUG_ON(!nsize);

			drm_random_reorder(order, total_size, &prng);
			err = evict_color(&mm, 0, U64_MAX,
					  nodes, order, total_size,
					  nsize, n, color++,
					  mode);
			if (err) {
				pr_err("%s evict_color(size=%u, alignment=%u) failed\n",
				       mode->name, nsize, n);
				goto out;
			}
		}
	}

	ret = 0;
out:
	if (ret)
		drm_mm_debug_table(&mm, __func__);
	drm_mm_for_each_node_safe(node, next, &mm)
		drm_mm_remove_node(node);
	drm_mm_takedown(&mm);
	kfree(order);
err_nodes:
	vfree(nodes);
err:
	return ret;
}

static int igt_color_evict_range(void *ignored)
{
	DRM_RND_STATE(prng, random_seed);
	const unsigned int total_size = 8192;
	const unsigned int range_size = total_size / 2;
	const unsigned int range_start = total_size / 4;
	const unsigned int range_end = range_start + range_size;
	const struct insert_mode *mode;
	unsigned long color = 0;
	struct drm_mm mm;
	struct evict_node *nodes;
	struct drm_mm_node *node, *next;
	unsigned int *order, n;
	int ret, err;

	/* Like igt_color_evict(), but limited to small portion of the full
	 * drm_mm range.
	 */

	ret = -ENOMEM;
	nodes = vzalloc(total_size * sizeof(*nodes));
	if (!nodes)
		goto err;

	order = drm_random_order(total_size, &prng);
	if (!order)
		goto err_nodes;

	ret = -EINVAL;
	drm_mm_init(&mm, 0, 2*total_size - 1);
	mm.color_adjust = separate_adjacent_colors;
	for (n = 0; n < total_size; n++) {
		if (!expect_insert(&mm, &nodes[n].node,
				   1, 0, color++,
				   &insert_modes[0])) {
			pr_err("insert failed, step %d\n", n);
			goto out;
		}
	}

	for (mode = evict_modes; mode->name; mode++) {
		for (n = 1; n <= range_size; n <<= 1) {
			drm_random_reorder(order, range_size, &prng);
			err = evict_color(&mm, range_start, range_end,
					  nodes, order, total_size,
					  n, 1, color++,
					  mode);
			if (err) {
				pr_err("%s evict_color(size=%u) failed for range [%x, %x]\n",
				       mode->name, n, range_start, range_end);
				goto out;
			}
		}

		for (n = 1; n < range_size; n <<= 1) {
			drm_random_reorder(order, total_size, &prng);
			err = evict_color(&mm, range_start, range_end,
					  nodes, order, total_size,
					  range_size/2, n, color++,
					  mode);
			if (err) {
				pr_err("%s evict_color(size=%u, alignment=%u) failed for range [%x, %x]\n",
				       mode->name, total_size/2, n, range_start, range_end);
				goto out;
			}
		}

		for_each_prime_number_from(n, 1, min(range_size, max_prime)) {
			unsigned int nsize = (range_size - n + 1) / 2;

			DRM_MM_BUG_ON(!nsize);

			drm_random_reorder(order, total_size, &prng);
			err = evict_color(&mm, range_start, range_end,
					  nodes, order, total_size,
					  nsize, n, color++,
					  mode);
			if (err) {
				pr_err("%s evict_color(size=%u, alignment=%u) failed for range [%x, %x]\n",
				       mode->name, nsize, n, range_start, range_end);
				goto out;
			}
		}
	}

	ret = 0;
out:
	if (ret)
		drm_mm_debug_table(&mm, __func__);
	drm_mm_for_each_node_safe(node, next, &mm)
		drm_mm_remove_node(node);
	drm_mm_takedown(&mm);
	kfree(order);
err_nodes:
	vfree(nodes);
err:
	return ret;
}

#include "drm_selftest.c"

static int __init test_drm_mm_init(void)
{
	int err;

	while (!random_seed)
		random_seed = get_random_int();

	pr_info("Testing DRM range manger (struct drm_mm), with random_seed=0x%x max_iterations=%u max_prime=%u\n",
		random_seed, max_iterations, max_prime);
	err = run_selftests(selftests, ARRAY_SIZE(selftests), NULL);

	return err > 0 ? 0 : err;
}

static void __exit test_drm_mm_exit(void)
{
}

module_init(test_drm_mm_init);
module_exit(test_drm_mm_exit);

module_param(random_seed, uint, 0400);
module_param(max_iterations, uint, 0400);
module_param(max_prime, uint, 0400);

MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL");
drivers/gpu/drm/shmobile/shmob_drm_crtc.c (+3 -3)
@@
 	if (scrtc->started)
 		return;
 
-	format = shmob_drm_format_info(crtc->primary->fb->pixel_format);
+	format = shmob_drm_format_info(crtc->primary->fb->format->format);
 	if (WARN_ON(format == NULL))
 		return;
@@
 	const struct shmob_drm_format_info *format;
 	void *cache;
 
-	format = shmob_drm_format_info(crtc->primary->fb->pixel_format);
+	format = shmob_drm_format_info(crtc->primary->fb->format->format);
 	if (format == NULL) {
 		dev_dbg(sdev->dev, "mode_set: unsupported format %08x\n",
-			crtc->primary->fb->pixel_format);
+			crtc->primary->fb->format->format);
 		return -EINVAL;
 	}
drivers/gpu/drm/shmobile/shmob_drm_crtc.h (+1)
@@
 
 #include <drm/drmP.h>
 #include <drm/drm_crtc.h>
+#include <drm/drm_encoder.h>
 
 struct backlight_device;
 struct shmob_drm_device;
drivers/gpu/drm/shmobile/shmob_drm_plane.c (+2 -2)
@@
 	struct shmob_drm_device *sdev = plane->dev->dev_private;
 	const struct shmob_drm_format_info *format;
 
-	format = shmob_drm_format_info(fb->pixel_format);
+	format = shmob_drm_format_info(fb->format->format);
 	if (format == NULL) {
 		dev_dbg(sdev->dev, "update_plane: unsupported format %08x\n",
-			fb->pixel_format);
+			fb->format->format);
 		return -EINVAL;
 	}
drivers/gpu/drm/sti/sti_dvo.c (+1 -2)
@@
 		return err;
 	}
 
-	err = drm_bridge_attach(drm_dev, bridge);
+	err = drm_bridge_attach(encoder, bridge, NULL);
 	if (err) {
 		DRM_ERROR("Failed to attach bridge\n");
 		return err;
 	}
 
 	dvo->bridge = bridge;
-	encoder->bridge = bridge;
 	connector->encoder = encoder;
 	dvo->encoder = encoder;
drivers/gpu/drm/sti/sti_gdp.c (+5 -5)
@@
 	src_w = clamp_val(state->src_w >> 16, 0, GAM_GDP_SIZE_MAX);
 	src_h = clamp_val(state->src_h >> 16, 0, GAM_GDP_SIZE_MAX);
 
-	format = sti_gdp_fourcc2format(fb->pixel_format);
+	format = sti_gdp_fourcc2format(fb->format->format);
 	if (format == -1) {
 		DRM_ERROR("Format not supported by GDP %.4s\n",
-			  (char *)&fb->pixel_format);
+			  (char *)&fb->format->format);
 		return -EINVAL;
 	}
@@
 	/* build the top field */
 	top_field->gam_gdp_agc = GAM_GDP_AGC_FULL_RANGE;
 	top_field->gam_gdp_ctl = WAIT_NEXT_VSYNC;
-	format = sti_gdp_fourcc2format(fb->pixel_format);
+	format = sti_gdp_fourcc2format(fb->format->format);
 	top_field->gam_gdp_ctl |= format;
 	top_field->gam_gdp_ctl |= sti_gdp_get_alpharange(format);
 	top_field->gam_gdp_ppt &= ~GAM_GDP_PPT_IGNORE;
@@
 	cma_obj = drm_fb_cma_get_gem_obj(fb, 0);
 
 	DRM_DEBUG_DRIVER("drm FB:%d format:%.4s phys@:0x%lx\n", fb->base.id,
-			 (char *)&fb->pixel_format,
+			 (char *)&fb->format->format,
 			 (unsigned long)cma_obj->paddr);
 
 	/* pixel memory location */
-	bpp = drm_format_plane_cpp(fb->pixel_format, 0);
+	bpp = fb->format->cpp[0];
 	top_field->gam_gdp_pml = (u32)cma_obj->paddr + fb->offsets[0];
 	top_field->gam_gdp_pml += src_x * bpp;
 	top_field->gam_gdp_pml += src_y * fb->pitches[0];
drivers/gpu/drm/sti/sti_hda.c (+1 -2)
@@
 
 	bridge->driver_private = hda;
 	bridge->funcs = &sti_hda_bridge_funcs;
-	drm_bridge_attach(drm_dev, bridge);
+	drm_bridge_attach(encoder, bridge, NULL);
 
-	encoder->bridge = bridge;
 	connector->encoder = encoder;
 
 	drm_connector = (struct drm_connector *)connector;
drivers/gpu/drm/sti/sti_hdmi.c (+1 -2)
@@
 
 	bridge->driver_private = hdmi;
 	bridge->funcs = &sti_hdmi_bridge_funcs;
-	drm_bridge_attach(drm_dev, bridge);
+	drm_bridge_attach(encoder, bridge, NULL);
 
-	encoder->bridge = bridge;
 	connector->encoder = encoder;
 
 	drm_connector = (struct drm_connector *)connector;
drivers/gpu/drm/sti/sti_hqvdp.c (+1 -1)
@@
 	cma_obj = drm_fb_cma_get_gem_obj(fb, 0);
 
 	DRM_DEBUG_DRIVER("drm FB:%d format:%.4s phys@:0x%lx\n", fb->base.id,
-			 (char *)&fb->pixel_format,
+			 (char *)&fb->format->format,
 			 (unsigned long)cma_obj->paddr);
 
 	/* Buffer planes address */
drivers/gpu/drm/sun4i/sun4i_backend.c (+3 -2)
@@
 	DRM_DEBUG_DRIVER("Switching display backend interlaced mode %s\n",
 			 interlaced ? "on" : "off");
 
-	ret = sun4i_backend_drm_format_to_layer(plane, fb->pixel_format, &val);
+	ret = sun4i_backend_drm_format_to_layer(plane, fb->format->format,
+						&val);
 	if (ret) {
 		DRM_DEBUG_DRIVER("Invalid format\n");
 		return val;
@@
 	DRM_DEBUG_DRIVER("Using GEM @ %pad\n", &gem->paddr);
 
 	/* Compute the start of the displayed memory */
-	bpp = drm_format_plane_cpp(fb->pixel_format, 0);
+	bpp = fb->format->cpp[0];
 	paddr = gem->paddr + fb->offsets[0];
 	paddr += (state->src_x >> 16) * bpp;
 	paddr += (state->src_y >> 16) * fb->pitches[0];
drivers/gpu/drm/sun4i/sun4i_rgb.c (+5 -8)
@@
 	struct sun4i_drv *drv = drm->dev_private;
 	struct sun4i_tcon *tcon = drv->tcon;
 	struct drm_encoder *encoder;
+	struct drm_bridge *bridge;
 	struct sun4i_rgb *rgb;
 	int ret;
@@
 	encoder = &rgb->encoder;
 
 	tcon->panel = sun4i_tcon_find_panel(tcon->dev->of_node);
-	encoder->bridge = sun4i_tcon_find_bridge(tcon->dev->of_node);
-	if (IS_ERR(tcon->panel) && IS_ERR(encoder->bridge)) {
+	bridge = sun4i_tcon_find_bridge(tcon->dev->of_node);
+	if (IS_ERR(tcon->panel) && IS_ERR(bridge)) {
 		dev_info(drm->dev, "No panel or bridge found... RGB output disabled\n");
 		return 0;
 	}
@@
 		}
 	}
 
-	if (!IS_ERR(encoder->bridge)) {
-		encoder->bridge->encoder = &rgb->encoder;
-
-		ret = drm_bridge_attach(drm, encoder->bridge);
+	if (!IS_ERR(bridge)) {
+		ret = drm_bridge_attach(encoder, bridge, NULL);
 		if (ret) {
 			dev_err(drm->dev, "Couldn't attach our bridge\n");
 			goto err_cleanup_connector;
 		}
-	} else {
-		encoder->bridge = NULL;
 	}
 
 	return 0;
+4 -4
drivers/gpu/drm/tegra/dc.c
··· 511 511 if (!state->crtc) 512 512 return 0; 513 513 514 - err = tegra_dc_format(state->fb->pixel_format, &plane_state->format, 514 + err = tegra_dc_format(state->fb->format->format, &plane_state->format, 515 515 &plane_state->swap); 516 516 if (err < 0) 517 517 return err; ··· 531 531 * error out if the user tries to display a framebuffer with such a 532 532 * configuration. 533 533 */ 534 - if (drm_format_num_planes(state->fb->pixel_format) > 2) { 534 + if (state->fb->format->num_planes > 2) { 535 535 if (state->fb->pitches[2] != state->fb->pitches[1]) { 536 536 DRM_ERROR("unsupported UV-plane configuration\n"); 537 537 return -EINVAL; ··· 568 568 window.dst.y = plane->state->crtc_y; 569 569 window.dst.w = plane->state->crtc_w; 570 570 window.dst.h = plane->state->crtc_h; 571 - window.bits_per_pixel = fb->bits_per_pixel; 571 + window.bits_per_pixel = fb->format->cpp[0] * 8; 572 572 window.bottom_up = tegra_fb_is_bottom_up(fb); 573 573 574 574 /* copy from state */ ··· 576 576 window.format = state->format; 577 577 window.swap = state->swap; 578 578 579 - for (i = 0; i < drm_format_num_planes(fb->pixel_format); i++) { 579 + for (i = 0; i < fb->format->num_planes; i++) { 580 580 struct tegra_bo *bo = tegra_fb_get_plane(fb, i); 581 581 582 582 window.base[i] = bo->paddr + fb->offsets[i];
+3 -2
drivers/gpu/drm/tegra/drm.c
··· 875 875 876 876 list_for_each_entry(fb, &drm->mode_config.fb_list, head) { 877 877 seq_printf(s, "%3d: user size: %d x %d, depth %d, %d bpp, refcount %d\n", 878 - fb->base.id, fb->width, fb->height, fb->depth, 879 - fb->bits_per_pixel, 878 + fb->base.id, fb->width, fb->height, 879 + fb->format->depth, 880 + fb->format->cpp[0] * 8, 880 881 drm_framebuffer_read_refcount(fb)); 881 882 } 882 883
+1
drivers/gpu/drm/tegra/drm.h
··· 17 17 #include <drm/drmP.h> 18 18 #include <drm/drm_crtc_helper.h> 19 19 #include <drm/drm_edid.h> 20 + #include <drm/drm_encoder.h> 20 21 #include <drm/drm_fb_helper.h> 21 22 #include <drm/drm_fixed.h> 22 23
+3 -3
drivers/gpu/drm/tegra/fb.c
··· 32 32 { 33 33 struct tegra_fb *fb = to_tegra_fb(framebuffer); 34 34 35 - if (index >= drm_format_num_planes(framebuffer->pixel_format)) 35 + if (index >= framebuffer->format->num_planes) 36 36 return NULL; 37 37 38 38 return fb->planes[index]; ··· 114 114 115 115 fb->num_planes = num_planes; 116 116 117 - drm_helper_mode_fill_fb_struct(&fb->base, mode_cmd); 117 + drm_helper_mode_fill_fb_struct(drm, &fb->base, mode_cmd); 118 118 119 119 for (i = 0; i < fb->num_planes; i++) 120 120 fb->planes[i] = planes[i]; ··· 246 246 info->flags = FBINFO_FLAG_DEFAULT; 247 247 info->fbops = &tegra_fb_ops; 248 248 249 - drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); 249 + drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth); 250 250 drm_fb_helper_fill_var(info, helper, fb->width, fb->height); 251 251 252 252 offset = info->var.xoffset * bytes_per_pixel +
+2 -2
drivers/gpu/drm/tilcdc/tilcdc_crtc.c
··· 91 91 92 92 start = gem->paddr + fb->offsets[0] + 93 93 crtc->y * fb->pitches[0] + 94 - crtc->x * drm_format_plane_cpp(fb->pixel_format, 0); 94 + crtc->x * fb->format->cpp[0]; 95 95 96 96 end = start + (crtc->mode.vdisplay * fb->pitches[0]); 97 97 ··· 399 399 if (info->tft_alt_mode) 400 400 reg |= LCDC_TFT_ALT_ENABLE; 401 401 if (priv->rev == 2) { 402 - switch (fb->pixel_format) { 402 + switch (fb->format->format) { 403 403 case DRM_FORMAT_BGR565: 404 404 case DRM_FORMAT_RGB565: 405 405 break;
+1 -3
drivers/gpu/drm/tilcdc/tilcdc_external.c
··· 167 167 int ret; 168 168 169 169 priv->external_encoder->possible_crtcs = BIT(0); 170 - priv->external_encoder->bridge = bridge; 171 - bridge->encoder = priv->external_encoder; 172 170 173 - ret = drm_bridge_attach(ddev, bridge); 171 + ret = drm_bridge_attach(priv->external_encoder, bridge, NULL); 174 172 if (ret) { 175 173 dev_err(ddev->dev, "drm_bridge_attach() failed %d\n", ret); 176 174 return ret;
+2 -2
drivers/gpu/drm/tilcdc/tilcdc_plane.c
··· 69 69 } 70 70 71 71 pitch = crtc_state->mode.hdisplay * 72 - drm_format_plane_cpp(state->fb->pixel_format, 0); 72 + state->fb->format->cpp[0]; 73 73 if (state->fb->pitches[0] != pitch) { 74 74 dev_err(plane->dev->dev, 75 75 "Invalid pitch: fb and crtc widths must be the same"); ··· 77 77 } 78 78 79 79 if (state->fb && old_state->fb && 80 - state->fb->pixel_format != old_state->fb->pixel_format) { 80 + state->fb->format != old_state->fb->format) { 81 81 dev_dbg(plane->dev->dev, 82 82 "%s(): pixel format change requires mode_change\n", 83 83 __func__);
+5 -5
drivers/gpu/drm/ttm/ttm_bo_manager.c
··· 148 148 } 149 149 150 150 const struct ttm_mem_type_manager_func ttm_bo_manager_func = { 151 - ttm_bo_man_init, 152 - ttm_bo_man_takedown, 153 - ttm_bo_man_get_node, 154 - ttm_bo_man_put_node, 155 - ttm_bo_man_debug 151 + .init = ttm_bo_man_init, 152 + .takedown = ttm_bo_man_takedown, 153 + .get_node = ttm_bo_man_get_node, 154 + .put_node = ttm_bo_man_put_node, 155 + .debug = ttm_bo_man_debug 156 156 }; 157 157 EXPORT_SYMBOL(ttm_bo_manager_func);
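The hunk above (repeated for virtio and vmwgfx below) switches the ops table from positional to designated initializers. A small self-contained illustration of why that matters, with a hypothetical two-entry table rather than the real `ttm_mem_type_manager_func`: binding by field name keeps the table correct even if fields are later added to or reordered in the struct, where positional initialization would silently assign callbacks to the wrong slots.

```c
#include <stdio.h>

/* A function table in the style of ttm_mem_type_manager_func,
 * reduced to two hypothetical ops. */
struct mgr_funcs {
	int (*init)(int priv);
	void (*debug)(const char *prefix);
};

static int my_init(int priv) { return priv * 2; }
static void my_debug(const char *prefix) { printf("%s: ok\n", prefix); }

/* Designated initializers bind by field name, so inserting a new
 * member between .init and .debug cannot misassign the callbacks. */
static const struct mgr_funcs funcs = {
	.init = my_init,
	.debug = my_debug,
};
```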
+3 -3
drivers/gpu/drm/udl/udl_fb.c
··· 89 89 int bytes_identical = 0; 90 90 struct urb *urb; 91 91 int aligned_x; 92 - int bpp = (fb->base.bits_per_pixel / 8); 92 + int bpp = fb->base.format->cpp[0]; 93 93 94 94 if (!fb->active_16) 95 95 return 0; ··· 330 330 int ret; 331 331 332 332 ufb->obj = obj; 333 - drm_helper_mode_fill_fb_struct(&ufb->base, mode_cmd); 333 + drm_helper_mode_fill_fb_struct(dev, &ufb->base, mode_cmd); 334 334 ret = drm_framebuffer_init(dev, &ufb->base, &udlfb_funcs); 335 335 return ret; 336 336 } ··· 395 395 396 396 info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT; 397 397 info->fbops = &udlfb_ops; 398 - drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); 398 + drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth); 399 399 drm_fb_helper_fill_var(info, &ufbdev->helper, sizes->fb_width, sizes->fb_height); 400 400 401 401 DRM_DEBUG_KMS("allocated %dx%d vmal %p\n",
+2
drivers/gpu/drm/vc4/vc4_drv.h
··· 9 9 #include "drmP.h" 10 10 #include "drm_gem_cma_helper.h" 11 11 12 + #include <drm/drm_encoder.h> 13 + 12 14 struct vc4_dev { 13 15 struct drm_device *dev; 14 16
+4 -4
drivers/gpu/drm/vc4/vc4_plane.c
··· 295 295 struct drm_framebuffer *fb = state->fb; 296 296 struct drm_gem_cma_object *bo = drm_fb_cma_get_gem_obj(fb, 0); 297 297 u32 subpixel_src_mask = (1 << 16) - 1; 298 - u32 format = fb->pixel_format; 299 - int num_planes = drm_format_num_planes(format); 298 + u32 format = fb->format->format; 299 + int num_planes = fb->format->num_planes; 300 300 u32 h_subsample = 1; 301 301 u32 v_subsample = 1; 302 302 int i; ··· 369 369 */ 370 370 if (vc4_state->crtc_x < 0) { 371 371 for (i = 0; i < num_planes; i++) { 372 - u32 cpp = drm_format_plane_cpp(fb->pixel_format, i); 372 + u32 cpp = fb->format->cpp[i]; 373 373 u32 subs = ((i == 0) ? 1 : h_subsample); 374 374 375 375 vc4_state->offsets[i] += (cpp * ··· 496 496 struct vc4_plane_state *vc4_state = to_vc4_plane_state(state); 497 497 struct drm_framebuffer *fb = state->fb; 498 498 u32 ctl0_offset = vc4_state->dlist_count; 499 - const struct hvs_format *format = vc4_get_hvs_format(fb->pixel_format); 499 + const struct hvs_format *format = vc4_get_hvs_format(fb->format->format); 500 500 int num_planes = drm_format_num_planes(format->drm); 501 501 u32 scl0, scl1; 502 502 u32 lbm_size;
+2 -1
drivers/gpu/drm/virtio/virtgpu_display.c
··· 88 88 89 89 bo = gem_to_virtio_gpu_obj(obj); 90 90 91 + drm_helper_mode_fill_fb_struct(dev, &vgfb->base, mode_cmd); 92 + 91 93 ret = drm_framebuffer_init(dev, &vgfb->base, &virtio_gpu_fb_funcs); 92 94 if (ret) { 93 95 vgfb->obj = NULL; 94 96 return ret; 95 97 } 96 - drm_helper_mode_fill_fb_struct(&vgfb->base, mode_cmd); 97 98 98 99 spin_lock_init(&vgfb->dirty_lock); 99 100 vgfb->x1 = vgfb->y1 = INT_MAX;
+1
drivers/gpu/drm/virtio/virtgpu_drv.h
··· 35 35 #include <drm/drm_gem.h> 36 36 #include <drm/drm_atomic.h> 37 37 #include <drm/drm_crtc_helper.h> 38 + #include <drm/drm_encoder.h> 38 39 #include <ttm/ttm_bo_api.h> 39 40 #include <ttm/ttm_bo_driver.h> 40 41 #include <ttm/ttm_placement.h>
+2 -2
drivers/gpu/drm/virtio/virtgpu_fb.c
··· 43 43 struct drm_device *dev = fb->base.dev; 44 44 struct virtio_gpu_device *vgdev = dev->dev_private; 45 45 bool store_for_later = false; 46 - int bpp = fb->base.bits_per_pixel / 8; 46 + int bpp = fb->base.format->cpp[0]; 47 47 int x2, y2; 48 48 unsigned long flags; 49 49 struct virtio_gpu_object *obj = gem_to_virtio_gpu_obj(fb->obj); ··· 333 333 334 334 info->screen_base = obj->vmap; 335 335 info->screen_size = obj->gem_base.size; 336 - drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); 336 + drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth); 337 337 drm_fb_helper_fill_var(info, &vfbdev->helper, 338 338 sizes->fb_width, sizes->fb_height); 339 339
+5 -5
drivers/gpu/drm/virtio/virtgpu_ttm.c
··· 198 198 } 199 199 200 200 static const struct ttm_mem_type_manager_func virtio_gpu_bo_manager_func = { 201 - ttm_bo_man_init, 202 - ttm_bo_man_takedown, 203 - ttm_bo_man_get_node, 204 - ttm_bo_man_put_node, 205 - ttm_bo_man_debug 201 + .init = ttm_bo_man_init, 202 + .takedown = ttm_bo_man_takedown, 203 + .get_node = ttm_bo_man_get_node, 204 + .put_node = ttm_bo_man_put_node, 205 + .debug = ttm_bo_man_debug 206 206 }; 207 207 208 208 static int virtio_gpu_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
+6 -5
drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
··· 83 83 return 1; 84 84 } 85 85 86 - switch (par->set_fb->depth) { 86 + switch (par->set_fb->format->depth) { 87 87 case 24: 88 88 case 32: 89 89 pal[regno] = ((red & 0xff00) << 8) | ··· 91 91 ((blue & 0xff00) >> 8); 92 92 break; 93 93 default: 94 - DRM_ERROR("Bad depth %u, bpp %u.\n", par->set_fb->depth, 95 - par->set_fb->bits_per_pixel); 94 + DRM_ERROR("Bad depth %u, bpp %u.\n", 95 + par->set_fb->format->depth, 96 + par->set_fb->format->cpp[0] * 8); 96 97 return 1; 97 98 } 98 99 ··· 198 197 * Handle panning when copying from vmalloc to framebuffer. 199 198 * Clip dirty area to framebuffer. 200 199 */ 201 - cpp = (cur_fb->bits_per_pixel + 7) / 8; 200 + cpp = cur_fb->format->cpp[0]; 202 201 max_x = par->fb_x + cur_fb->width; 203 202 max_y = par->fb_y + cur_fb->height; 204 203 ··· 488 487 cur_fb = par->set_fb; 489 488 if (cur_fb && cur_fb->width == mode_cmd.width && 490 489 cur_fb->height == mode_cmd.height && 491 - cur_fb->pixel_format == mode_cmd.pixel_format && 490 + cur_fb->format->format == mode_cmd.pixel_format && 492 491 cur_fb->pitches[0] == mode_cmd.pitches[0]) 493 492 return 0; 494 493
+5 -5
drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
··· 164 164 } 165 165 166 166 const struct ttm_mem_type_manager_func vmw_gmrid_manager_func = { 167 - vmw_gmrid_man_init, 168 - vmw_gmrid_man_takedown, 169 - vmw_gmrid_man_get_node, 170 - vmw_gmrid_man_put_node, 171 - vmw_gmrid_man_debug 167 + .init = vmw_gmrid_man_init, 168 + .takedown = vmw_gmrid_man_takedown, 169 + .get_node = vmw_gmrid_man_get_node, 170 + .put_node = vmw_gmrid_man_put_node, 171 + .debug = vmw_gmrid_man_debug 172 172 };
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 583 583 goto out_err1; 584 584 } 585 585 586 - drm_helper_mode_fill_fb_struct(&vfbs->base.base, mode_cmd); 586 + drm_helper_mode_fill_fb_struct(dev, &vfbs->base.base, mode_cmd); 587 587 vfbs->surface = vmw_surface_reference(surface); 588 588 vfbs->base.user_handle = mode_cmd->handles[0]; 589 589 vfbs->is_dmabuf_proxy = is_dmabuf_proxy; ··· 864 864 goto out_err1; 865 865 } 866 866 867 - drm_helper_mode_fill_fb_struct(&vfbd->base.base, mode_cmd); 867 + drm_helper_mode_fill_fb_struct(dev, &vfbd->base.base, mode_cmd); 868 868 vfbd->base.dmabuf = true; 869 869 vfbd->buffer = vmw_dmabuf_reference(dmabuf); 870 870 vfbd->base.user_handle = mode_cmd->handles[0];
+1
drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
··· 30 30 31 31 #include <drm/drmP.h> 32 32 #include <drm/drm_crtc_helper.h> 33 + #include <drm/drm_encoder.h> 33 34 #include "vmwgfx_drv.h" 34 35 35 36 /**
+3 -2
drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
··· 97 97 fb = entry->base.crtc.primary->fb; 98 98 99 99 return vmw_kms_write_svga(dev_priv, w, h, fb->pitches[0], 100 - fb->bits_per_pixel, fb->depth); 100 + fb->format->cpp[0] * 8, 101 + fb->format->depth); 101 102 } 102 103 103 104 if (!list_empty(&lds->active)) { ··· 106 105 fb = entry->base.crtc.primary->fb; 107 106 108 107 vmw_kms_write_svga(dev_priv, fb->width, fb->height, fb->pitches[0], 109 - fb->bits_per_pixel, fb->depth); 108 + fb->format->cpp[0] * 8, fb->format->depth); 110 109 } 111 110 112 111 /* Make sure we always show something. */
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
··· 598 598 struct vmw_dma_buffer *buf = 599 599 container_of(framebuffer, struct vmw_framebuffer_dmabuf, 600 600 base)->buffer; 601 - int depth = framebuffer->base.depth; 601 + int depth = framebuffer->base.format->depth; 602 602 struct { 603 603 uint32_t header; 604 604 SVGAFifoCmdDefineGMRFB body; ··· 618 618 } 619 619 620 620 cmd->header = SVGA_CMD_DEFINE_GMRFB; 621 - cmd->body.format.bitsPerPixel = framebuffer->base.bits_per_pixel; 621 + cmd->body.format.bitsPerPixel = framebuffer->base.format->cpp[0] * 8; 622 622 cmd->body.format.colorDepth = depth; 623 623 cmd->body.format.reserved = 0; 624 624 cmd->body.bytesPerLine = framebuffer->base.pitches[0];
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
··· 424 424 */ 425 425 if (new_content_type == SEPARATE_DMA) { 426 426 427 - switch (new_fb->bits_per_pixel) { 427 + switch (new_fb->format->cpp[0] * 8) { 428 428 case 32: 429 429 content_srf.format = SVGA3D_X8R8G8B8; 430 430 break;
+2 -2
drivers/gpu/drm/zte/zx_plane.c
··· 146 146 if (!fb) 147 147 return; 148 148 149 - format = fb->pixel_format; 149 + format = fb->format->format; 150 150 stride = fb->pitches[0]; 151 151 152 152 src_x = plane->state->src_x >> 16; ··· 159 159 dst_w = plane->state->crtc_w; 160 160 dst_h = plane->state->crtc_h; 161 161 162 - bpp = drm_format_plane_cpp(format, 0); 162 + bpp = fb->format->cpp[0]; 163 163 164 164 cma_obj = drm_fb_cma_get_gem_obj(fb, 0); 165 165 paddr = cma_obj->paddr + fb->offsets[0];
+13
include/drm/drmP.h
··· 634 634 int switch_power_state; 635 635 }; 636 636 637 + /** 638 + * drm_drv_uses_atomic_modeset - check if the driver implements 639 + * atomic_commit() 640 + * @dev: DRM device 641 + * 642 + * This check is useful if drivers do not have DRIVER_ATOMIC set but 643 + * have atomic modesetting internally implemented. 644 + */ 645 + static inline bool drm_drv_uses_atomic_modeset(struct drm_device *dev) 646 + { 647 + return dev->mode_config.funcs->atomic_commit != NULL; 648 + } 649 + 637 650 #include <drm/drm_irq.h> 638 651 639 652 #define DRM_SWITCH_POWER_ON 0
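The new `drm_drv_uses_atomic_modeset()` helper above infers a capability from a non-NULL vtable entry instead of a feature flag. A standalone sketch of that idiom, using hypothetical `fake_*` types in place of `drm_device`/`drm_mode_config_funcs`:

```c
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical stand-ins for drm_mode_config_funcs/drm_device: a
 * driver advertises atomic support simply by filling in the hook. */
struct fake_mode_config_funcs {
	int (*atomic_commit)(void *state);
};

struct fake_device {
	const struct fake_mode_config_funcs *funcs;
};

static int noop_commit(void *state) { (void)state; return 0; }

static const struct fake_mode_config_funcs atomic_funcs = {
	.atomic_commit = noop_commit,
};
static const struct fake_mode_config_funcs legacy_funcs = {
	.atomic_commit = NULL,
};

/* Same shape as drm_drv_uses_atomic_modeset(): capability is the
 * presence of the function pointer, not a separate DRIVER_ATOMIC flag. */
static bool uses_atomic_modeset(const struct fake_device *dev)
{
	return dev->funcs->atomic_commit != NULL;
}
```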
+1 -7
include/drm/drm_atomic.h
··· 145 145 struct drm_crtc_state *state; 146 146 struct drm_crtc_commit *commit; 147 147 s64 __user *out_fence_ptr; 148 + unsigned last_vblank_count; 148 149 }; 149 150 150 151 struct __drm_connnectors_state { ··· 370 369 371 370 void drm_state_dump(struct drm_device *dev, struct drm_printer *p); 372 371 373 - #ifdef CONFIG_DEBUG_FS 374 - struct drm_minor; 375 - int drm_atomic_debugfs_init(struct drm_minor *minor); 376 - int drm_atomic_debugfs_cleanup(struct drm_minor *minor); 377 - #endif 378 - 379 372 #define for_each_connector_in_state(__state, connector, connector_state, __i) \ 380 373 for ((__i) = 0; \ 381 374 (__i) < (__state)->num_connector && \ ··· 418 423 return state->mode_changed || state->active_changed || 419 424 state->connectors_changed; 420 425 } 421 - 422 426 423 427 #endif /* DRM_ATOMIC_H_ */
-3
include/drm/drm_atomic_helper.h
··· 48 48 int drm_atomic_helper_wait_for_fences(struct drm_device *dev, 49 49 struct drm_atomic_state *state, 50 50 bool pre_swap); 51 - bool drm_atomic_helper_framebuffer_changed(struct drm_device *dev, 52 - struct drm_atomic_state *old_state, 53 - struct drm_crtc *crtc); 54 51 55 52 void drm_atomic_helper_wait_for_vblanks(struct drm_device *dev, 56 53 struct drm_atomic_state *old_state);
+13 -4
include/drm/drm_auth.h
··· 33 33 * 34 34 * @refcount: Refcount for this master object. 35 35 * @dev: Link back to the DRM device 36 - * @unique: Unique identifier: e.g. busid. Protected by drm_global_mutex. 37 - * @unique_len: Length of unique field. Protected by drm_global_mutex. 38 - * @magic_map: Map of used authentication tokens. Protected by struct_mutex. 39 - * @lock: DRI lock information. 36 + * @lock: DRI1 lock information. 40 37 * @driver_priv: Pointer to driver-private information. 41 38 * 42 39 * Note that master structures are only relevant for the legacy/primary device ··· 42 45 struct drm_master { 43 46 struct kref refcount; 44 47 struct drm_device *dev; 48 + /** 49 + * @unique: Unique identifier: e.g. busid. Protected by struct 50 + * &drm_device master_mutex. 51 + */ 45 52 char *unique; 53 + /** 54 + * @unique_len: Length of unique field. Protected by struct &drm_device 55 + * master_mutex. 56 + */ 46 57 int unique_len; 58 + /** 59 + * @magic_map: Map of used authentication tokens. Protected by struct 60 + * &drm_device master_mutex. 61 + */ 47 62 struct idr magic_map; 48 63 struct drm_lock_data lock; 49 64 void *driver_priv;
+2 -2
include/drm/drm_bridge.h
··· 201 201 int drm_bridge_add(struct drm_bridge *bridge); 202 202 void drm_bridge_remove(struct drm_bridge *bridge); 203 203 struct drm_bridge *of_drm_find_bridge(struct device_node *np); 204 - int drm_bridge_attach(struct drm_device *dev, struct drm_bridge *bridge); 205 - void drm_bridge_detach(struct drm_bridge *bridge); 204 + int drm_bridge_attach(struct drm_encoder *encoder, struct drm_bridge *bridge, 205 + struct drm_bridge *previous); 206 206 207 207 bool drm_bridge_mode_fixup(struct drm_bridge *bridge, 208 208 const struct drm_display_mode *mode,
+73 -6
include/drm/drm_connector.h
··· 117 117 118 118 /** 119 119 * @pixel_clock: Maximum pixel clock supported by the sink, in units of 120 - * 100Hz. This mismatches the clok in &drm_display_mode (which is in 120 + * 100Hz. This mismatches the clock in &drm_display_mode (which is in 121 121 * kHZ), because that's what the EDID uses as base unit. 122 122 */ 123 123 unsigned int pixel_clock; ··· 381 381 * core drm connector interfaces. Everything added from this callback 382 382 * should be unregistered in the early_unregister callback. 383 383 * 384 + * This is called while holding drm_connector->mutex. 385 + * 384 386 * Returns: 385 387 * 386 388 * 0 on success, or a negative error code on failure. ··· 397 395 * late_register(). It is called from drm_connector_unregister(), 398 396 * early in the driver unload sequence to disable userspace access 399 397 * before data structures are torndown. 398 + * 399 + * This is called while holding drm_connector->mutex. 400 400 */ 401 401 void (*early_unregister)(struct drm_connector *connector); 402 402 ··· 563 559 * @interlace_allowed: can this connector handle interlaced modes? 564 560 * @doublescan_allowed: can this connector handle doublescan? 565 561 * @stereo_allowed: can this connector handle stereo modes? 566 - * @registered: is this connector exposed (registered) with userspace? 567 - * @modes: modes available on this connector (from fill_modes() + user) 568 - * @status: one of the drm_connector_status enums (connected, not, or unknown) 569 - * @probed_modes: list of modes derived directly from the display 570 562 * @funcs: connector control functions 571 563 * @edid_blob_ptr: DRM property containing EDID if present 572 564 * @properties: property tracking for this connector ··· 608 608 char *name; 609 609 610 610 /** 611 + * @mutex: Lock for general connector state, but currently only protects 612 + * @registered. Most of the connector state is still protected by the 613 + * mutex in &drm_mode_config.
614 + */ 615 + struct mutex mutex; 616 + 617 + /** 611 618 * @index: Compacted connector index, which matches the position inside 612 619 * the mode_config.list for drivers not supporting hot-add/removing. Can 613 620 * be used as an array index. It is invariant over the lifetime of the ··· 627 620 bool interlace_allowed; 628 621 bool doublescan_allowed; 629 622 bool stereo_allowed; 623 + /** 624 + * @registered: Is this connector exposed (registered) with userspace? 625 + * Protected by @mutex. 626 + */ 630 627 bool registered; 628 + 629 + /** 630 + * @modes: 631 + * Modes available on this connector (from fill_modes() + user). 632 + * Protected by dev->mode_config.mutex. 633 + */ 631 634 struct list_head modes; /* list of modes on this connector */ 632 635 636 + /** 637 + * @status: 638 + * One of the drm_connector_status enums (connected, not, or unknown). 639 + * Protected by dev->mode_config.mutex. 640 + */ 633 641 enum drm_connector_status status; 634 642 635 - /* these are modes added by probing with DDC or the BIOS */ 643 + /** 644 + * @probed_modes: 645 + * These are modes added by probing with DDC or the BIOS, before 646 + * filtering is applied. Used by the probe helpers.Protected by 647 + * dev->mode_config.mutex. 648 + */ 636 649 struct list_head probed_modes; 637 650 638 651 /** ··· 661 634 * flat panels in embedded systems, the driver should initialize the 662 635 * display_info.width_mm and display_info.height_mm fields with the 663 636 * physical size of the display. 637 + * 638 + * Protected by dev->mode_config.mutex. 664 639 */ 665 640 struct drm_display_info display_info; 666 641 const struct drm_connector_funcs *funcs; ··· 868 839 * @dev: the DRM device 869 840 * 870 841 * Iterate over all connectors of @dev. 842 + * 843 + * WARNING: 844 + * 845 + * This iterator is not safe against hotadd/removal of connectors and is 846 + * deprecated. Use drm_for_each_connector_iter() instead.
871 847 */ 872 848 #define drm_for_each_connector(connector, dev) \ 873 849 for (assert_drm_connector_list_read_locked(&(dev)->mode_config), \ ··· 880 846 struct drm_connector, head); \ 881 847 &connector->head != (&(dev)->mode_config.connector_list); \ 882 848 connector = list_next_entry(connector, head)) 849 + 850 + /** 851 + * struct drm_connector_list_iter - connector_list iterator 852 + * 853 + * This iterator tracks state needed to be able to walk the connector_list 854 + * within struct drm_mode_config. Only use together with 855 + * drm_connector_list_iter_get(), drm_connector_list_iter_put() and 856 + * drm_connector_list_iter_next() respectively the convenience macro 857 + * drm_for_each_connector_iter(). 858 + */ 859 + struct drm_connector_list_iter { 860 + /* private: */ 861 + struct drm_device *dev; 862 + struct drm_connector *conn; 863 + }; 864 + 865 + void drm_connector_list_iter_get(struct drm_device *dev, 866 + struct drm_connector_list_iter *iter); 867 + struct drm_connector * 868 + drm_connector_list_iter_next(struct drm_connector_list_iter *iter); 869 + void drm_connector_list_iter_put(struct drm_connector_list_iter *iter); 870 + 871 + /** 872 + * drm_for_each_connector_iter - connector_list iterator macro 873 + * @connector: struct &drm_connector pointer used as cursor 874 + * @iter: struct &drm_connector_list_iter 875 + * 876 + * Note that @connector is only valid within the list body, if you want to use 877 + * @connector after calling drm_connector_list_iter_put() then you need to grab 878 + * your own reference first using drm_connector_reference(). 879 + */ 880 + #define drm_for_each_connector_iter(connector, iter) \ 881 + while ((connector = drm_connector_list_iter_next(iter))) 883 882 884 883 #endif
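The new `drm_connector_list_iter` above replaces open-coded list walks with a get/next/put triple plus a `while()`-based macro, so the list can be walked safely while connectors are hotplugged. A minimal userspace model of that pattern (hypothetical `node`/`list_iter` types; the kernel version additionally takes locks and juggles references in next/put):

```c
#include <stddef.h>

struct node {
	int id;
	struct node *next;
};

/* The iterator object carries the cursor, mirroring
 * drm_connector_list_iter's dev/conn pair. */
struct list_iter {
	struct node *head;
	struct node *cur;
};

static void list_iter_get(struct node *head, struct list_iter *iter)
{
	iter->head = head;
	iter->cur = NULL;	/* first next() starts at head */
}

static struct node *list_iter_next(struct list_iter *iter)
{
	iter->cur = iter->cur ? iter->cur->next : iter->head;
	return iter->cur;	/* NULL terminates the walk */
}

static void list_iter_put(struct list_iter *iter)
{
	iter->cur = NULL;	/* the kernel version drops a reference here */
}

/* while()-style macro, like drm_for_each_connector_iter(). */
#define for_each_node_iter(n, iter) \
	while (((n) = list_iter_next(iter)))
```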
-8
include/drm/drm_crtc.h
··· 39 39 #include <drm/drm_framebuffer.h> 40 40 #include <drm/drm_modes.h> 41 41 #include <drm/drm_connector.h> 42 - #include <drm/drm_encoder.h> 43 42 #include <drm/drm_property.h> 44 43 #include <drm/drm_bridge.h> 45 44 #include <drm/drm_edid.h> ··· 67 68 } 68 69 69 70 struct drm_crtc; 70 - struct drm_encoder; 71 71 struct drm_pending_vblank_event; 72 72 struct drm_plane; 73 73 struct drm_bridge; 74 74 struct drm_atomic_state; 75 75 76 76 struct drm_crtc_helper_funcs; 77 - struct drm_encoder_helper_funcs; 78 77 struct drm_plane_helper_funcs; 79 78 80 79 /** ··· 90 93 * @plane_mask: bitmask of (1 << drm_plane_index(plane)) of attached planes 91 94 * @connector_mask: bitmask of (1 << drm_connector_index(connector)) of attached connectors 92 95 * @encoder_mask: bitmask of (1 << drm_encoder_index(encoder)) of attached encoders 93 - * @last_vblank_count: for helpers and drivers to capture the vblank of the 94 - * update to ensure framebuffer cleanup isn't done too early 95 96 * @adjusted_mode: for use by helpers and drivers to compute adjusted mode timings 96 97 * @mode: current mode timings 97 98 * @mode_blob: &drm_property_blob for @mode ··· 134 139 135 140 u32 connector_mask; 136 141 u32 encoder_mask; 137 - 138 - /* last_vblank_count: for vblank waits before cleanup */ 139 - u32 last_vblank_count; 140 142 141 143 /* adjusted_mode: for use by helpers and drivers */ 142 144 struct drm_display_mode adjusted_mode;
+4 -3
include/drm/drm_encoder.h
··· 25 25 26 26 #include <linux/list.h> 27 27 #include <linux/ctype.h> 28 + #include <drm/drm_crtc.h> 29 + #include <drm/drm_mode.h> 28 30 #include <drm/drm_mode_object.h> 31 + 32 + struct drm_encoder; 29 33 30 34 /** 31 35 * struct drm_encoder_funcs - encoder controls ··· 191 187 { 192 188 return encoder->index; 193 189 } 194 - 195 - /* FIXME: We have an include file mess still, drm_crtc.h needs untangling. */ 196 - static inline uint32_t drm_crtc_mask(const struct drm_crtc *crtc); 197 190 198 191 /** 199 192 * drm_encoder_crtc_ok - can a given crtc drive a given encoder?
+1
include/drm/drm_encoder_slave.h
··· 29 29 30 30 #include <drm/drmP.h> 31 31 #include <drm/drm_crtc.h> 32 + #include <drm/drm_encoder.h> 32 33 33 34 /** 34 35 * struct drm_encoder_slave_funcs - Entry points exposed by a slave encoder driver
+10 -17
include/drm/drm_framebuffer.h
··· 122 122 */ 123 123 struct drm_mode_object base; 124 124 /** 125 + * @format: framebuffer format information 126 + */ 127 + const struct drm_format_info *format; 128 + /** 125 129 * @funcs: framebuffer vfunc table 126 130 */ 127 131 const struct drm_framebuffer_funcs *funcs; ··· 170 166 */ 171 167 unsigned int height; 172 168 /** 173 - * @depth: Depth in bits per pixel for RGB formats. 0 for everything 174 - * else. Legacy information derived from @pixel_format, it's suggested to use 175 - * the DRM FOURCC codes and helper functions directly instead. 176 - */ 177 - unsigned int depth; 178 - /** 179 - * @bits_per_pixel: Storage used bits per pixel for RGB formats. 0 for 180 - * everything else. Legacy information derived from @pixel_format, it's 181 - * suggested to use the DRM FOURCC codes and helper functions directly 182 - * instead. 183 - */ 184 - int bits_per_pixel; 185 - /** 186 169 * @flags: Framebuffer flags like DRM_MODE_FB_INTERLACED or 187 170 * DRM_MODE_FB_MODIFIERS. 188 171 */ 189 172 int flags; 190 - /** 191 - * @pixel_format: DRM FOURCC code describing the pixel format. 192 - */ 193 - uint32_t pixel_format; /* fourcc format */ 194 173 /** 195 174 * @hot_x: X coordinate of the cursor hotspot. Used by the legacy cursor 196 175 * IOCTL when the driver supports cursor through a DRM_PLANE_TYPE_CURSOR ··· 269 282 struct drm_framebuffer, head); \ 270 283 &fb->head != (&(dev)->mode_config.fb_list); \ 271 284 fb = list_next_entry(fb, head)) 285 + 286 + int drm_framebuffer_plane_width(int width, 287 + const struct drm_framebuffer *fb, int plane); 288 + int drm_framebuffer_plane_height(int height, 289 + const struct drm_framebuffer *fb, int plane); 290 + 272 291 #endif
+209 -89
include/drm/drm_mm.h
··· 1 1 /************************************************************************** 2 2 * 3 3 * Copyright 2006-2008 Tungsten Graphics, Inc., Cedar Park, TX. USA. 4 + * Copyright 2016 Intel Corporation 4 5 * All Rights Reserved. 5 6 * 6 7 * Permission is hereby granted, free of charge, to any person obtaining a ··· 49 48 #include <linux/stackdepot.h> 50 49 #endif 51 50 51 + #ifdef CONFIG_DRM_DEBUG_MM 52 + #define DRM_MM_BUG_ON(expr) BUG_ON(expr) 53 + #else 54 + #define DRM_MM_BUG_ON(expr) BUILD_BUG_ON_INVALID(expr) 55 + #endif 56 + 52 57 enum drm_mm_search_flags { 53 58 DRM_MM_SEARCH_DEFAULT = 0, 54 59 DRM_MM_SEARCH_BEST = 1 << 0, ··· 74 67 struct list_head hole_stack; 75 68 struct rb_node rb; 76 69 unsigned hole_follows : 1; 77 - unsigned scanned_block : 1; 78 - unsigned scanned_prev_free : 1; 79 - unsigned scanned_next_free : 1; 80 - unsigned scanned_preceeds_hole : 1; 81 70 unsigned allocated : 1; 71 + bool scanned_block : 1; 82 72 unsigned long color; 83 73 u64 start; 84 74 u64 size; ··· 95 91 /* Keep an interval_tree for fast lookup of drm_mm_nodes by address.
*/ 96 92 struct rb_root interval_tree; 97 93 98 94 - unsigned int scan_check_range : 1; 99 - unsigned scan_alignment; 100 - unsigned long scan_color; 101 - u64 scan_size; 102 - u64 scan_hit_start; 103 - u64 scan_hit_end; 104 - unsigned scanned_blocks; 105 - u64 scan_start; 106 - u64 scan_end; 107 - struct drm_mm_node *prev_scanned_node; 108 - 109 - void (*color_adjust)(struct drm_mm_node *node, unsigned long color, 94 + void (*color_adjust)(const struct drm_mm_node *node, 95 + unsigned long color, 110 96 u64 *start, u64 *end); 97 + 98 + unsigned long scan_active; 99 + }; 100 + 101 + struct drm_mm_scan { 102 + struct drm_mm *mm; 103 + 104 + u64 size; 105 + u64 alignment; 106 + u64 remainder_mask; 107 + 108 + u64 range_start; 109 + u64 range_end; 110 + 111 + u64 hit_start; 112 + u64 hit_end; 113 + 114 + unsigned long color; 115 + unsigned int flags; 111 116 }; 112 117 113 118 /** 114 119 * drm_mm_node_allocated - checks whether a node is allocated 115 120 * @node: drm_mm_node to check 116 121 * 117 - * Drivers should use this helpers for proper encapusulation of drm_mm 122 + * Drivers are required to clear a node prior to using it with the 123 + * drm_mm range manager. 124 + * 125 + * Drivers should use this helper for proper encapsulation of drm_mm 118 126 * internals. 119 127 * 120 128 * Returns: 121 129 * True if the @node is allocated. 122 130 */ 123 - static inline bool drm_mm_node_allocated(struct drm_mm_node *node) 131 + static inline bool drm_mm_node_allocated(const struct drm_mm_node *node) 124 132 { 125 133 return node->allocated; 126 134 } ··· 141 125 * drm_mm_initialized - checks whether an allocator is initialized 142 126 * @mm: drm_mm to check 143 127 * 144 - * Drivers should use this helpers for proper encapusulation of drm_mm 128 + * Drivers should clear the struct drm_mm prior to initialisation if they 129 + * want to use this function. 130 + * 131 + * Drivers should use this helper for proper encapsulation of drm_mm 145 132 * internals.
146 133 * 147 134 * Returns: 148 135 * True if the @mm is initialized. 149 136 */ 150 - static inline bool drm_mm_initialized(struct drm_mm *mm) 137 + static inline bool drm_mm_initialized(const struct drm_mm *mm) 151 138 { 152 139 return mm->hole_stack.next; 153 140 } 154 141 155 - static inline u64 __drm_mm_hole_node_start(struct drm_mm_node *hole_node) 142 + /** 143 + * drm_mm_hole_follows - checks whether a hole follows this node 144 + * @node: drm_mm_node to check 145 + * 146 + * Holes are embedded into the drm_mm using the tail of a drm_mm_node. 147 + * If you wish to know whether a hole follows this particular node, 148 + * query this function. 149 + * 150 + * Returns: 151 + * True if a hole follows the @node. 152 + */ 153 + static inline bool drm_mm_hole_follows(const struct drm_mm_node *node) 154 + { 155 + return node->hole_follows; 156 + } 157 + 158 + static inline u64 __drm_mm_hole_node_start(const struct drm_mm_node *hole_node) 156 159 { 157 160 return hole_node->start + hole_node->size; 158 161 } ··· 180 145 * drm_mm_hole_node_start - computes the start of the hole following @node 181 146 * @hole_node: drm_mm_node which implicitly tracks the following hole 182 147 * 183 - * This is useful for driver-sepific debug dumpers. Otherwise drivers should not 184 - * inspect holes themselves. Drivers must check first whether a hole indeed 185 - * follows by looking at node->hole_follows. 148 + * This is useful for driver-specific debug dumpers. Otherwise drivers should 149 + * not inspect holes themselves. Drivers must check first whether a hole indeed 150 + * follows by looking at drm_mm_hole_follows() 186 151 * 187 152 * Returns: 188 153 * Start of the subsequent hole.
189 154 */ 190 - static inline u64 drm_mm_hole_node_start(struct drm_mm_node *hole_node) 155 + static inline u64 drm_mm_hole_node_start(const struct drm_mm_node *hole_node) 191 156 { 192 - BUG_ON(!hole_node->hole_follows); 157 + DRM_MM_BUG_ON(!drm_mm_hole_follows(hole_node)); 193 158 return __drm_mm_hole_node_start(hole_node); 194 159 } 195 160 196 - static inline u64 __drm_mm_hole_node_end(struct drm_mm_node *hole_node) 161 + static inline u64 __drm_mm_hole_node_end(const struct drm_mm_node *hole_node) 197 162 { 198 163 return list_next_entry(hole_node, node_list)->start; 199 164 } ··· 202 167 * drm_mm_hole_node_end - computes the end of the hole following @node 203 168 * @hole_node: drm_mm_node which implicitly tracks the following hole 204 169 * 205 - * This is useful for driver-sepific debug dumpers. Otherwise drivers should not 206 - * inspect holes themselves. Drivers must check first whether a hole indeed 207 - * follows by looking at node->hole_follows. 170 + * This is useful for driver-specific debug dumpers. Otherwise drivers should 171 + * not inspect holes themselves. Drivers must check first whether a hole indeed 172 + * follows by looking at drm_mm_hole_follows(). 208 173 * 209 174 * Returns: 210 175 * End of the subsequent hole. 211 176 */ 212 - static inline u64 drm_mm_hole_node_end(struct drm_mm_node *hole_node) 177 + static inline u64 drm_mm_hole_node_end(const struct drm_mm_node *hole_node) 213 178 { 214 179 return __drm_mm_hole_node_end(hole_node); 215 180 } 181 + 182 + /** 183 + * drm_mm_nodes - list of nodes under the drm_mm range manager 184 + * @mm: the struct drm_mm range manager 185 + * 186 + * As the drm_mm range manager hides its node_list deep within its 187 + * structure, extracting it looks painful and repetitive. This is 188 + * not expected to be used outside of the drm_mm_for_each_node() 189 + * macros and similar internal functions. 190 + * 191 + * Returns: 192 + * The node list, may be empty. 
193 + */ 194 + #define drm_mm_nodes(mm) (&(mm)->head_node.node_list) 216 195 217 196 /** 218 197 * drm_mm_for_each_node - iterator to walk over all allocated nodes ··· 236 187 * This iterator walks over all nodes in the range allocator. It is implemented 237 188 * with list_for_each, so not safe against removal of elements. 238 189 */ 239 - #define drm_mm_for_each_node(entry, mm) list_for_each_entry(entry, \ 240 - &(mm)->head_node.node_list, \ 241 - node_list) 190 + #define drm_mm_for_each_node(entry, mm) \ 191 + list_for_each_entry(entry, drm_mm_nodes(mm), node_list) 192 + 193 + /** 194 + * drm_mm_for_each_node_safe - iterator to walk over all allocated nodes 195 + * @entry: drm_mm_node structure to assign to in each iteration step 196 + * @next: drm_mm_node structure to store the next step 197 + * @mm: drm_mm allocator to walk 198 + * 199 + * This iterator walks over all nodes in the range allocator. It is implemented 200 + * with list_for_each_safe, so safe against removal of elements. 201 + */ 202 + #define drm_mm_for_each_node_safe(entry, next, mm) \ 203 + list_for_each_entry_safe(entry, next, drm_mm_nodes(mm), node_list) 242 204 243 205 #define __drm_mm_for_each_hole(entry, mm, hole_start, hole_end, backwards) \ 244 206 for (entry = list_entry((backwards) ? 
(mm)->hole_stack.prev : (mm)->hole_stack.next, struct drm_mm_node, hole_stack); \ ··· 285 225 * Basic range manager support (drm_mm.c) 286 226 */ 287 227 int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node); 288 - 289 - int drm_mm_insert_node_generic(struct drm_mm *mm, 290 - struct drm_mm_node *node, 291 - u64 size, 292 - unsigned alignment, 293 - unsigned long color, 294 - enum drm_mm_search_flags sflags, 295 - enum drm_mm_allocator_flags aflags); 296 - /** 297 - * drm_mm_insert_node - search for space and insert @node 298 - * @mm: drm_mm to allocate from 299 - * @node: preallocate node to insert 300 - * @size: size of the allocation 301 - * @alignment: alignment of the allocation 302 - * @flags: flags to fine-tune the allocation 303 - * 304 - * This is a simplified version of drm_mm_insert_node_generic() with @color set 305 - * to 0. 306 - * 307 - * The preallocated node must be cleared to 0. 308 - * 309 - * Returns: 310 - * 0 on success, -ENOSPC if there's no suitable hole. 
311 - */ 312 - static inline int drm_mm_insert_node(struct drm_mm *mm, 313 - struct drm_mm_node *node, 314 - u64 size, 315 - unsigned alignment, 316 - enum drm_mm_search_flags flags) 317 - { 318 - return drm_mm_insert_node_generic(mm, node, size, alignment, 0, flags, 319 - DRM_MM_CREATE_DEFAULT); 320 - } 321 - 322 228 int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, 323 229 struct drm_mm_node *node, 324 230 u64 size, 325 - unsigned alignment, 231 + u64 alignment, 326 232 unsigned long color, 327 233 u64 start, 328 234 u64 end, 329 235 enum drm_mm_search_flags sflags, 330 236 enum drm_mm_allocator_flags aflags); 237 + 331 238 /** 332 239 * drm_mm_insert_node_in_range - ranged search for space and insert @node 333 240 * @mm: drm_mm to allocate from ··· 316 289 static inline int drm_mm_insert_node_in_range(struct drm_mm *mm, 317 290 struct drm_mm_node *node, 318 291 u64 size, 319 - unsigned alignment, 292 + u64 alignment, 320 293 u64 start, 321 294 u64 end, 322 295 enum drm_mm_search_flags flags) ··· 326 299 DRM_MM_CREATE_DEFAULT); 327 300 } 328 301 302 + /** 303 + * drm_mm_insert_node_generic - search for space and insert @node 304 + * @mm: drm_mm to allocate from 305 + * @node: preallocate node to insert 306 + * @size: size of the allocation 307 + * @alignment: alignment of the allocation 308 + * @color: opaque tag value to use for this node 309 + * @sflags: flags to fine-tune the allocation search 310 + * @aflags: flags to fine-tune the allocation behavior 311 + * 312 + * The preallocated node must be cleared to 0. 313 + * 314 + * Returns: 315 + * 0 on success, -ENOSPC if there's no suitable hole. 
316 + */ 317 + static inline int 318 + drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node, 319 + u64 size, u64 alignment, 320 + unsigned long color, 321 + enum drm_mm_search_flags sflags, 322 + enum drm_mm_allocator_flags aflags) 323 + { 324 + return drm_mm_insert_node_in_range_generic(mm, node, 325 + size, alignment, 0, 326 + 0, U64_MAX, 327 + sflags, aflags); 328 + } 329 + 330 + /** 331 + * drm_mm_insert_node - search for space and insert @node 332 + * @mm: drm_mm to allocate from 333 + * @node: preallocate node to insert 334 + * @size: size of the allocation 335 + * @alignment: alignment of the allocation 336 + * @flags: flags to fine-tune the allocation 337 + * 338 + * This is a simplified version of drm_mm_insert_node_generic() with @color set 339 + * to 0. 340 + * 341 + * The preallocated node must be cleared to 0. 342 + * 343 + * Returns: 344 + * 0 on success, -ENOSPC if there's no suitable hole. 345 + */ 346 + static inline int drm_mm_insert_node(struct drm_mm *mm, 347 + struct drm_mm_node *node, 348 + u64 size, 349 + u64 alignment, 350 + enum drm_mm_search_flags flags) 351 + { 352 + return drm_mm_insert_node_generic(mm, node, 353 + size, alignment, 0, 354 + flags, DRM_MM_CREATE_DEFAULT); 355 + } 356 + 329 357 void drm_mm_remove_node(struct drm_mm_node *node); 330 358 void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new); 331 - void drm_mm_init(struct drm_mm *mm, 332 - u64 start, 333 - u64 size); 359 + void drm_mm_init(struct drm_mm *mm, u64 start, u64 size); 334 360 void drm_mm_takedown(struct drm_mm *mm); 335 - bool drm_mm_clean(struct drm_mm *mm); 361 + 362 + /** 363 + * drm_mm_clean - checks whether an allocator is clean 364 + * @mm: drm_mm allocator to check 365 + * 366 + * Returns: 367 + * True if the allocator is completely free, false if there's still a node 368 + * allocated in it. 
369 + */ 370 + static inline bool drm_mm_clean(const struct drm_mm *mm) 371 + { 372 + return list_empty(drm_mm_nodes(mm)); 373 + } 336 374 337 375 struct drm_mm_node * 338 - __drm_mm_interval_first(struct drm_mm *mm, u64 start, u64 last); 376 + __drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last); 339 377 340 378 /** 341 379 * drm_mm_for_each_node_in_range - iterator to walk over a range of ··· 421 329 node__ && node__->start < (end__); \ 422 330 node__ = list_next_entry(node__, node_list)) 423 331 424 - void drm_mm_init_scan(struct drm_mm *mm, 425 - u64 size, 426 - unsigned alignment, 427 - unsigned long color); 428 - void drm_mm_init_scan_with_range(struct drm_mm *mm, 429 - u64 size, 430 - unsigned alignment, 431 - unsigned long color, 432 - u64 start, 433 - u64 end); 434 - bool drm_mm_scan_add_block(struct drm_mm_node *node); 435 - bool drm_mm_scan_remove_block(struct drm_mm_node *node); 332 + void drm_mm_scan_init_with_range(struct drm_mm_scan *scan, 333 + struct drm_mm *mm, 334 + u64 size, u64 alignment, unsigned long color, 335 + u64 start, u64 end, 336 + unsigned int flags); 436 337 437 - void drm_mm_debug_table(struct drm_mm *mm, const char *prefix); 338 + /** 339 + * drm_mm_scan_init - initialize lru scanning 340 + * @scan: scan state 341 + * @mm: drm_mm to scan 342 + * @size: size of the allocation 343 + * @alignment: alignment of the allocation 344 + * @color: opaque tag value to use for the allocation 345 + * @flags: flags to specify how the allocation will be performed afterwards 346 + * 347 + * This simply sets up the scanning routines with the parameters for the desired 348 + * hole. 349 + * 350 + * Warning: 351 + * As long as the scan list is non-empty, no other operations than 352 + * adding/removing nodes to/from the scan list are allowed. 
353 + */ 354 + static inline void drm_mm_scan_init(struct drm_mm_scan *scan, 355 + struct drm_mm *mm, 356 + u64 size, 357 + u64 alignment, 358 + unsigned long color, 359 + unsigned int flags) 360 + { 361 + drm_mm_scan_init_with_range(scan, mm, 362 + size, alignment, color, 363 + 0, U64_MAX, 364 + flags); 365 + } 366 + 367 + bool drm_mm_scan_add_block(struct drm_mm_scan *scan, 368 + struct drm_mm_node *node); 369 + bool drm_mm_scan_remove_block(struct drm_mm_scan *scan, 370 + struct drm_mm_node *node); 371 + struct drm_mm_node *drm_mm_scan_color_evict(struct drm_mm_scan *scan); 372 + 373 + void drm_mm_debug_table(const struct drm_mm *mm, const char *prefix); 438 374 #ifdef CONFIG_DEBUG_FS 439 - int drm_mm_dump_table(struct seq_file *m, struct drm_mm *mm); 375 + int drm_mm_dump_table(struct seq_file *m, const struct drm_mm *mm); 440 376 #endif 441 377 442 378 #endif
+10 -2
include/drm/drm_mode_config.h
··· 365 365 struct list_head fb_list; 366 366 367 367 /** 368 - * @num_connector: Number of connectors on this device. 368 + * @connector_list_lock: Protects @num_connector and 369 + * @connector_list. 370 + */ 371 + spinlock_t connector_list_lock; 372 + /** 373 + * @num_connector: Number of connectors on this device. Protected by 374 + * @connector_list_lock. 369 375 */ 370 376 int num_connector; 371 377 /** ··· 379 373 */ 380 374 struct ida connector_ida; 381 375 /** 382 - * @connector_list: List of connector objects. 376 + * @connector_list: List of connector objects. Protected by 377 + * @connector_list_lock. Only use drm_for_each_connector_iter() and 378 + * struct &drm_connector_list_iter to walk this list. 383 379 */ 384 380 struct list_head connector_list; 385 381 int num_encoder;
+2 -1
include/drm/drm_modeset_helper.h
··· 27 27 28 28 void drm_helper_move_panel_connectors_to_head(struct drm_device *); 29 29 30 - void drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb, 30 + void drm_helper_mode_fill_fb_struct(struct drm_device *dev, 31 + struct drm_framebuffer *fb, 31 32 const struct drm_mode_fb_cmd2 *mode_cmd); 32 33 33 34 int drm_crtc_init(struct drm_device *dev, struct drm_crtc *crtc,
+1
include/drm/drm_modeset_helper_vtables.h
··· 30 30 #define __DRM_MODESET_HELPER_VTABLES_H__ 31 31 32 32 #include <drm/drm_crtc.h> 33 + #include <drm/drm_encoder.h> 33 34 34 35 /** 35 36 * DOC: overview
-2
include/drm/drm_simple_kms_helper.h
··· 114 114 int drm_simple_display_pipe_attach_bridge(struct drm_simple_display_pipe *pipe, 115 115 struct drm_bridge *bridge); 116 116 117 - void drm_simple_display_pipe_detach_bridge(struct drm_simple_display_pipe *pipe); 118 - 119 117 int drm_simple_display_pipe_init(struct drm_device *dev, 120 118 struct drm_simple_display_pipe *pipe, 121 119 const struct drm_simple_display_pipe_funcs *funcs,
+193 -31
include/linux/dma-buf.h
··· 39 39 40 40 /** 41 41 * struct dma_buf_ops - operations possible on struct dma_buf 42 - * @attach: [optional] allows different devices to 'attach' themselves to the 43 - * given buffer. It might return -EBUSY to signal that backing storage 44 - * is already allocated and incompatible with the requirements 45 - * of requesting device. 46 - * @detach: [optional] detach a given device from this buffer. 47 - * @map_dma_buf: returns list of scatter pages allocated, increases usecount 48 - * of the buffer. Requires atleast one attach to be called 49 - * before. Returned sg list should already be mapped into 50 - * _device_ address space. This call may sleep. May also return 51 - * -EINTR. Should return -EINVAL if attach hasn't been called yet. 52 - * @unmap_dma_buf: decreases usecount of buffer, might deallocate scatter 53 - * pages. 54 - * @release: release this buffer; to be called after the last dma_buf_put. 55 - * @begin_cpu_access: [optional] called before cpu access to invalidate cpu 56 - * caches and allocate backing storage (if not yet done) 57 - * respectively pin the object into memory. 58 - * @end_cpu_access: [optional] called after cpu access to flush caches. 59 42 * @kmap_atomic: maps a page from the buffer into kernel address 60 43 * space, users may not block until the subsequent unmap call. 61 44 * This callback must not sleep. ··· 46 63 * This Callback must not sleep. 47 64 * @kmap: maps a page from the buffer into kernel address space. 48 65 * @kunmap: [optional] unmaps a page from the buffer. 49 - * @mmap: used to expose the backing storage to userspace. Note that the 50 - * mapping needs to be coherent - if the exporter doesn't directly 51 - * support this, it needs to fake coherency by shooting down any ptes 52 - * when transitioning away from the cpu domain. 53 66 * @vmap: [optional] creates a virtual mapping for the buffer into kernel 54 67 * address space. Same restrictions as for vmap and friends apply. 
55 68 * @vunmap: [optional] unmaps a vmap from the buffer 56 69 */ 57 70 struct dma_buf_ops { 71 + /** 72 + * @attach: 73 + * 74 + * This is called from dma_buf_attach() to make sure that a given 75 + * &device can access the provided &dma_buf. Exporters which support 76 + * buffer objects in special locations like VRAM or device-specific 77 + * carveout areas should check whether the buffer could be moved to 78 + * system memory (or directly accessed by the provided device), and 79 + * otherwise need to fail the attach operation. 80 + * 81 + * The exporter should also in general check whether the current 82 + * allocation fulfills the DMA constraints of the new device. If this 83 + * is not the case, and the allocation cannot be moved, it should also 84 + * fail the attach operation. 85 + * 86 + * Any exporter-private housekeeping data can be stored in the priv 87 + * pointer of &dma_buf_attachment structure. 88 + * 89 + * This callback is optional. 90 + * 91 + * Returns: 92 + * 93 + * 0 on success, negative error code on failure. It might return -EBUSY 94 + * to signal that backing storage is already allocated and incompatible 95 + * with the requirements of requesting device. 96 + */ 58 97 int (*attach)(struct dma_buf *, struct device *, 59 - struct dma_buf_attachment *); 98 + struct dma_buf_attachment *); 60 99 100 + /** 101 + * @detach: 102 + * 103 + * This is called by dma_buf_detach() to release a &dma_buf_attachment. 104 + * Provided so that exporters can clean up any housekeeping for an 105 + * &dma_buf_attachment. 106 + * 107 + * This callback is optional. 108 + */ 61 109 void (*detach)(struct dma_buf *, struct dma_buf_attachment *); 62 110 63 - /* For {map,unmap}_dma_buf below, any specific buffer attributes 64 - * required should get added to device_dma_parameters accessible 65 - * via dev->dma_params. 
111 + /** 112 + * @map_dma_buf: 113 + * 114 + * This is called by dma_buf_map_attachment() and is used to map a 115 + * shared &dma_buf into device address space, and it is mandatory. It 116 + * can only be called if @attach has been called successfully. This 117 + * essentially pins the DMA buffer into place, and it cannot be moved 118 + * any more. 119 + * 120 + * This call may sleep, e.g. when the backing storage first needs to be 121 + * allocated, or moved to a location suitable for all currently attached 122 + * devices. 123 + * 124 + * Note that any specific buffer attributes required for this function 125 + * should get added to device_dma_parameters accessible via 126 + * device->dma_params from the &dma_buf_attachment. The @attach callback 127 + * should also check these constraints. 128 + * 129 + * If this is being called for the first time, the exporter can now 130 + * choose to scan through the list of attachments for this buffer, 131 + * collate the requirements of the attached devices, and choose an 132 + * appropriate backing storage for the buffer. 133 + * 134 + * Based on enum dma_data_direction, it might be possible to have 135 + * multiple users accessing at the same time (for reading, maybe), or 136 + * any other kind of sharing that the exporter might wish to make 137 + * available to buffer-users. 138 + * 139 + * Returns: 140 + * 141 + * A &sg_table scatter list of the backing storage of the DMA buffer, 142 + * already mapped into the device address space of the &device attached 143 + * with the provided &dma_buf_attachment. 144 + * 145 + * On failure, returns a negative error value wrapped into a pointer. 146 + * May also return -EINTR when a signal was received while being 147 + * blocked. 
66 148 */ 67 149 struct sg_table * (*map_dma_buf)(struct dma_buf_attachment *, 68 - enum dma_data_direction); 150 + enum dma_data_direction); 151 + /** 152 + * @unmap_dma_buf: 153 + * 154 + * This is called by dma_buf_unmap_attachment() and should unmap and 155 + * release the &sg_table allocated in @map_dma_buf, and it is mandatory. 156 + * It should also unpin the backing storage if this is the last mapping 157 + * of the DMA buffer, if the exporter supports backing storage 158 + * migration. 159 + */ 69 160 void (*unmap_dma_buf)(struct dma_buf_attachment *, 70 - struct sg_table *, 71 - enum dma_data_direction); 161 + struct sg_table *, 162 + enum dma_data_direction); 163 + 72 164 /* TODO: Add try_map_dma_buf version, to return immed with -EBUSY 73 165 * if the call would block. 74 166 */ 75 167 76 - /* after final dma_buf_put() */ 168 + /** 169 + * @release: 170 + * 171 + * Called after the last dma_buf_put to release the &dma_buf, and 172 + * mandatory. 173 + */ 77 174 void (*release)(struct dma_buf *); 78 175 176 + /** 177 + * @begin_cpu_access: 178 + * 179 + * This is called from dma_buf_begin_cpu_access() and allows the 180 + * exporter to ensure that the memory is actually available for cpu 181 + * access - the exporter might need to allocate or swap-in and pin the 182 + * backing storage. The exporter also needs to ensure that cpu access is 183 + * coherent for the access direction. The direction can be used by the 184 + * exporter to optimize the cache flushing, i.e. access with a different 185 + * direction (read instead of write) might return stale or even bogus 186 + * data (e.g. when the exporter needs to copy the data to temporary 187 + * storage). 188 + * 189 + * This callback is optional. 
190 + * 191 + * FIXME: This is both called through the DMA_BUF_IOCTL_SYNC command 192 + * from userspace (where storage shouldn't be pinned to avoid handing 193 + * de-facto mlock rights to userspace) and for the kernel-internal 194 + * users of the various kmap interfaces, where the backing storage must 195 + * be pinned to guarantee that the atomic kmap calls can succeed. Since 196 + * there are no in-kernel users of the kmap interfaces yet this isn't a 197 + * real problem. 198 + * 199 + * Returns: 200 + * 201 + * 0 on success or a negative error code on failure. This can for 202 + * example fail when the backing storage can't be allocated. Can also 203 + * return -ERESTARTSYS or -EINTR when the call has been interrupted and 204 + * needs to be restarted. 205 + */ 79 206 int (*begin_cpu_access)(struct dma_buf *, enum dma_data_direction); 207 + 208 + /** 209 + * @end_cpu_access: 210 + * 211 + * This is called from dma_buf_end_cpu_access() when the importer is 212 + * done accessing the CPU. The exporter can use this to flush caches and 213 + * unpin any resources pinned in @begin_cpu_access. 214 + * The result of any dma_buf kmap calls after end_cpu_access is 215 + * undefined. 216 + * 217 + * This callback is optional. 218 + * 219 + * Returns: 220 + * 221 + * 0 on success or a negative error code on failure. Can return 222 + * -ERESTARTSYS or -EINTR when the call has been interrupted and needs 223 + * to be restarted. 
224 + */ 80 225 int (*end_cpu_access)(struct dma_buf *, enum dma_data_direction); 81 226 void *(*kmap_atomic)(struct dma_buf *, unsigned long); 82 227 void (*kunmap_atomic)(struct dma_buf *, unsigned long, void *); 83 228 void *(*kmap)(struct dma_buf *, unsigned long); 84 229 void (*kunmap)(struct dma_buf *, unsigned long, void *); 85 230 231 + /** 232 + * @mmap: 233 + * 234 + * This callback is used by the dma_buf_mmap() function. 235 + * 236 + * Note that the mapping needs to be incoherent; userspace is expected 237 + * to bracket CPU access using the DMA_BUF_IOCTL_SYNC interface. 238 + * 239 + * Because dma-buf buffers have invariant size over their lifetime, the 240 + * dma-buf core checks whether a vma is too large and rejects such 241 + * mappings. The exporter hence does not need to duplicate this check. 242 + * Drivers do not need to check this themselves. 243 + * 244 + * If an exporter needs to manually flush caches and hence needs to fake 245 + * coherency for mmap support, it needs to be able to zap all the ptes 246 + * pointing at the backing storage. Now linux mm needs a struct 247 + * address_space associated with the struct file stored in vma->vm_file 248 + * to do that with the function unmap_mapping_range. But the dma_buf 249 + * framework only backs every dma_buf fd with the anon_file struct file, 250 + * i.e. all dma_bufs share the same file. 251 + * 252 + * Hence exporters need to setup their own file (and address_space) 253 + * association by setting vma->vm_file and adjusting vma->vm_pgoff in 254 + * the dma_buf mmap callback. In the specific case of a gem driver the 255 + * exporter could use the shmem file already provided by gem (and set 256 + * vm_pgoff = 0). Exporters can then zap ptes by unmapping the 257 + * corresponding range of the struct address_space associated with their 258 + * own file. 259 + * 260 + * This callback is optional. 261 + * 262 + * Returns: 263 + * 264 + * 0 on success or a negative error code on failure. 
265 + */ 86 266 int (*mmap)(struct dma_buf *, struct vm_area_struct *vma); 87 267 88 268 void *(*vmap)(struct dma_buf *); ··· 270 124 * @poll: for userspace poll support 271 125 * @cb_excl: for userspace poll support 272 126 * @cb_shared: for userspace poll support 127 + * 128 + * This represents a shared buffer, created by calling dma_buf_export(). The 129 + * userspace representation is a normal file descriptor, which can be created by 130 + * calling dma_buf_fd(). 131 + * 132 + * Shared dma buffers are reference counted using dma_buf_put() and 133 + * get_dma_buf(). 134 + * 135 + * Device DMA access is handled by the separate struct &dma_buf_attachment. 273 136 */ 274 137 struct dma_buf { 275 138 size_t size; ··· 315 160 * This structure holds the attachment information between the dma_buf buffer 316 161 * and its user device(s). The list contains one attachment struct per device 317 162 * attached to the buffer. 163 + * 164 + * An attachment is created by calling dma_buf_attach(), and released again by 165 + * calling dma_buf_detach(). The DMA mapping itself needed to initiate a 166 + * transfer is created by dma_buf_map_attachment() and freed again by calling 167 + * dma_buf_unmap_attachment(). 318 168 */ 319 169 struct dma_buf_attachment { 320 170 struct dma_buf *dmabuf; ··· 352 192 }; 353 193 354 194 /** 355 - * helper macro for exporters; zeros and fills in most common values 356 - * 195 + * DEFINE_DMA_BUF_EXPORT_INFO - helper macro for exporters 357 196 * @name: export-info name 197 + * 198 + * DEFINE_DMA_BUF_EXPORT_INFO macro defines the struct &dma_buf_export_info, 199 + * zeroes it out and pre-populates exp_name in it. 358 200 */ 359 201 #define DEFINE_DMA_BUF_EXPORT_INFO(name) \ 360 202 struct dma_buf_export_info name = { .exp_name = KBUILD_MODNAME, \
+1 -1
include/linux/kref.h
··· 133 133 */ 134 134 static inline int __must_check kref_get_unless_zero(struct kref *kref) 135 135 { 136 - return atomic_add_unless(&kref->refcount, 1, 0); 136 + return atomic_inc_not_zero(&kref->refcount); 137 137 } 138 138 #endif /* _KREF_H_ */
+37
include/linux/prime_numbers.h
··· 1 + #ifndef __LINUX_PRIME_NUMBERS_H 2 + #define __LINUX_PRIME_NUMBERS_H 3 + 4 + #include <linux/types.h> 5 + 6 + bool is_prime_number(unsigned long x); 7 + unsigned long next_prime_number(unsigned long x); 8 + 9 + /** 10 + * for_each_prime_number - iterate over each prime up to a value 11 + * @prime: the current prime number in this iteration 12 + * @max: the upper limit 13 + * 14 + * Starting from the first prime number 2, iterate over each prime number up to 15 + * the @max value. On each iteration, @prime is set to the current prime number. 16 + * @max should be less than ULONG_MAX to ensure termination. To begin with 17 + * @prime set to 1 on the first iteration, use for_each_prime_number_from() 18 + * instead. 19 + */ 20 + #define for_each_prime_number(prime, max) \ 21 + for_each_prime_number_from((prime), 2, (max)) 22 + 23 + /** 24 + * for_each_prime_number_from - iterate over each prime up to a value 25 + * @prime: the current prime number in this iteration 26 + * @from: the initial value 27 + * @max: the upper limit 28 + * 29 + * Starting from @from, iterate over each successive prime number up to the 30 + * @max value. On each iteration, @prime is set to the current prime number. 31 + * @max should be less than ULONG_MAX, and @from less than @max, to ensure 32 + * termination. 33 + */ 34 + #define for_each_prime_number_from(prime, from, max) \ 35 + for (prime = (from); prime <= (max); prime = next_prime_number(prime)) 36 + 37 + #endif /* !__LINUX_PRIME_NUMBERS_H */
+34
include/linux/reservation.h
··· 145 145 } 146 146 147 147 /** 148 + * reservation_object_lock - lock the reservation object 149 + * @obj: the reservation object 150 + * @ctx: the locking context 151 + * 152 + * Locks the reservation object for exclusive access and modification. Note 153 + * that the lock is only against other writers; readers will run concurrently 154 + * with a writer under RCU. The seqlock is used to notify readers if they 155 + * overlap with a writer. 156 + * 157 + * As the reservation object may be locked by multiple parties in an 158 + * undefined order, a #ww_acquire_ctx is passed to unwind if a cycle 159 + * is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation 160 + * object may be locked by itself by passing NULL as @ctx. 161 + */ 162 + static inline int 163 + reservation_object_lock(struct reservation_object *obj, 164 + struct ww_acquire_ctx *ctx) 165 + { 166 + return ww_mutex_lock(&obj->lock, ctx); 167 + } 168 + 169 + /** 170 + * reservation_object_unlock - unlock the reservation object 171 + * @obj: the reservation object 172 + * 173 + * Unlocks the reservation object following exclusive access. 174 + */ 175 + static inline void 176 + reservation_object_unlock(struct reservation_object *obj) 177 + { 178 + ww_mutex_unlock(&obj->lock); 179 + } 180 + 181 + /** 148 182 * reservation_object_get_excl - get the reservation object's 149 183 * exclusive fence, with update-side lock held 150 184 * @obj: the reservation object
+11
include/uapi/drm/drm_fourcc.h
··· 154 154 155 155 /* Vendor Ids: */ 156 156 #define DRM_FORMAT_MOD_NONE 0 157 + #define DRM_FORMAT_MOD_VENDOR_NONE 0 157 158 #define DRM_FORMAT_MOD_VENDOR_INTEL 0x01 158 159 #define DRM_FORMAT_MOD_VENDOR_AMD 0x02 159 160 #define DRM_FORMAT_MOD_VENDOR_NV 0x03 ··· 172 171 * similar to the fourcc codes above. drm_fourcc.h is considered the 173 172 * authoritative source for all of these. 174 173 */ 174 + 175 + /* 176 + * Linear Layout 177 + * 178 + * Just plain linear layout. Note that this is different from not specifying any 179 + * modifier (e.g. not setting DRM_MODE_FB_MODIFIERS in the DRM_ADDFB2 ioctl), 180 + * which tells the driver to also take driver-internal information into account 181 + * and so might actually result in a tiled framebuffer. 182 + */ 183 + #define DRM_FORMAT_MOD_LINEAR fourcc_mod_code(NONE, 0) 175 184 176 185 /* Intel framebuffer modifiers */ 177 186
+7
lib/Kconfig
··· 550 550 config SBITMAP 551 551 bool 552 552 553 + config PRIME_NUMBERS 554 + tristate "Prime number generator" 555 + default n 556 + help 557 + Provides a helper module to generate prime numbers. Useful for writing 558 + test code, especially when checking multiplication and division. 559 + 553 560 endmenu
+2
lib/Makefile
··· 196 196 197 197 obj-$(CONFIG_FONT_SUPPORT) += fonts/ 198 198 199 + obj-$(CONFIG_PRIME_NUMBERS) += prime_numbers.o 200 + 199 201 hostprogs-y := gen_crc32table 200 202 clean-files := crc32table.h 201 203
+314
lib/prime_numbers.c
··· 1 + #define pr_fmt(fmt) "prime numbers: " fmt "\n" 2 + 3 + #include <linux/module.h> 4 + #include <linux/mutex.h> 5 + #include <linux/prime_numbers.h> 6 + #include <linux/slab.h> 7 + 8 + #define bitmap_size(nbits) (BITS_TO_LONGS(nbits) * sizeof(unsigned long)) 9 + 10 + struct primes { 11 + struct rcu_head rcu; 12 + unsigned long last, sz; 13 + unsigned long primes[]; 14 + }; 15 + 16 + #if BITS_PER_LONG == 64 17 + static const struct primes small_primes = { 18 + .last = 61, 19 + .sz = 64, 20 + .primes = { 21 + BIT(2) | 22 + BIT(3) | 23 + BIT(5) | 24 + BIT(7) | 25 + BIT(11) | 26 + BIT(13) | 27 + BIT(17) | 28 + BIT(19) | 29 + BIT(23) | 30 + BIT(29) | 31 + BIT(31) | 32 + BIT(37) | 33 + BIT(41) | 34 + BIT(43) | 35 + BIT(47) | 36 + BIT(53) | 37 + BIT(59) | 38 + BIT(61) 39 + } 40 + }; 41 + #elif BITS_PER_LONG == 32 42 + static const struct primes small_primes = { 43 + .last = 31, 44 + .sz = 32, 45 + .primes = { 46 + BIT(2) | 47 + BIT(3) | 48 + BIT(5) | 49 + BIT(7) | 50 + BIT(11) | 51 + BIT(13) | 52 + BIT(17) | 53 + BIT(19) | 54 + BIT(23) | 55 + BIT(29) | 56 + BIT(31) 57 + } 58 + }; 59 + #else 60 + #error "unhandled BITS_PER_LONG" 61 + #endif 62 + 63 + static DEFINE_MUTEX(lock); 64 + static const struct primes __rcu *primes = RCU_INITIALIZER(&small_primes); 65 + 66 + static unsigned long selftest_max; 67 + 68 + static bool slow_is_prime_number(unsigned long x) 69 + { 70 + unsigned long y = int_sqrt(x); 71 + 72 + while (y > 1) { 73 + if ((x % y) == 0) 74 + break; 75 + y--; 76 + } 77 + 78 + return y == 1; 79 + } 80 + 81 + static unsigned long slow_next_prime_number(unsigned long x) 82 + { 83 + while (x < ULONG_MAX && !slow_is_prime_number(++x)) 84 + ; 85 + 86 + return x; 87 + } 88 + 89 + static unsigned long clear_multiples(unsigned long x, 90 + unsigned long *p, 91 + unsigned long start, 92 + unsigned long end) 93 + { 94 + unsigned long m; 95 + 96 + m = 2 * x; 97 + if (m < start) 98 + m = roundup(start, x); 99 + 100 + while (m < end) { 101 + __clear_bit(m, p); 102 + m 
+= x; 103 + } 104 + 105 + return x; 106 + } 107 + 108 + static bool expand_to_next_prime(unsigned long x) 109 + { 110 + const struct primes *p; 111 + struct primes *new; 112 + unsigned long sz, y; 113 + 114 + /* Bertrand's Postulate (or Chebyshev's theorem) states that if n > 3, 115 + * there is always at least one prime p between n and 2n - 2. 116 + * Equivalently, if n > 1, then there is always at least one prime p 117 + * such that n < p < 2n. 118 + * 119 + * http://mathworld.wolfram.com/BertrandsPostulate.html 120 + * https://en.wikipedia.org/wiki/Bertrand's_postulate 121 + */ 122 + sz = 2 * x; 123 + if (sz < x) 124 + return false; 125 + 126 + sz = round_up(sz, BITS_PER_LONG); 127 + new = kmalloc(sizeof(*new) + bitmap_size(sz), GFP_KERNEL); 128 + if (!new) 129 + return false; 130 + 131 + mutex_lock(&lock); 132 + p = rcu_dereference_protected(primes, lockdep_is_held(&lock)); 133 + if (x < p->last) { 134 + kfree(new); 135 + goto unlock; 136 + } 137 + 138 + /* Where memory permits, track the primes using the 139 + * Sieve of Eratosthenes. The sieve is to remove all multiples of known 140 + * primes from the set; what remains in the set is therefore prime. 
141 + */ 142 + bitmap_fill(new->primes, sz); 143 + bitmap_copy(new->primes, p->primes, p->sz); 144 + for (y = 2UL; y < sz; y = find_next_bit(new->primes, sz, y + 1)) 145 + new->last = clear_multiples(y, new->primes, p->sz, sz); 146 + new->sz = sz; 147 + 148 + BUG_ON(new->last <= x); 149 + 150 + rcu_assign_pointer(primes, new); 151 + if (p != &small_primes) 152 + kfree_rcu((struct primes *)p, rcu); 153 + 154 + unlock: 155 + mutex_unlock(&lock); 156 + return true; 157 + } 158 + 159 + static void free_primes(void) 160 + { 161 + const struct primes *p; 162 + 163 + mutex_lock(&lock); 164 + p = rcu_dereference_protected(primes, lockdep_is_held(&lock)); 165 + if (p != &small_primes) { 166 + rcu_assign_pointer(primes, &small_primes); 167 + kfree_rcu((struct primes *)p, rcu); 168 + } 169 + mutex_unlock(&lock); 170 + } 171 + 172 + /** 173 + * next_prime_number - return the next prime number 174 + * @x: the starting point for searching to test 175 + * 176 + * A prime number is an integer greater than 1 that is only divisible by 177 + * itself and 1. The set of prime numbers is computed using the Sieve of 178 + * Eratoshenes (on finding a prime, all multiples of that prime are removed 179 + * from the set) enabling a fast lookup of the next prime number larger than 180 + * @x. If the sieve fails (memory limitation), the search falls back to using 181 + * slow trial-divison, up to the value of ULONG_MAX (which is reported as the 182 + * final prime as a sentinel). 
183 + * 184 + * Returns: the next prime number larger than @x 185 + */ 186 + unsigned long next_prime_number(unsigned long x) 187 + { 188 + const struct primes *p; 189 + 190 + rcu_read_lock(); 191 + p = rcu_dereference(primes); 192 + while (x >= p->last) { 193 + rcu_read_unlock(); 194 + 195 + if (!expand_to_next_prime(x)) 196 + return slow_next_prime_number(x); 197 + 198 + rcu_read_lock(); 199 + p = rcu_dereference(primes); 200 + } 201 + x = find_next_bit(p->primes, p->last, x + 1); 202 + rcu_read_unlock(); 203 + 204 + return x; 205 + } 206 + EXPORT_SYMBOL(next_prime_number); 207 + 208 + /** 209 + * is_prime_number - test whether the given number is prime 210 + * @x: the number to test 211 + * 212 + * A prime number is an integer greater than 1 that is only divisible by 213 + * itself and 1. Internally a cache of prime numbers is kept (to speed up 214 + * searching for sequential primes, see next_prime_number()), but if the number 215 + * falls outside of that cache, its primality is tested using trial-divison. 216 + * 217 + * Returns: true if @x is prime, false for composite numbers. 
218 + */ 219 + bool is_prime_number(unsigned long x) 220 + { 221 + const struct primes *p; 222 + bool result; 223 + 224 + rcu_read_lock(); 225 + p = rcu_dereference(primes); 226 + while (x >= p->sz) { 227 + rcu_read_unlock(); 228 + 229 + if (!expand_to_next_prime(x)) 230 + return slow_is_prime_number(x); 231 + 232 + rcu_read_lock(); 233 + p = rcu_dereference(primes); 234 + } 235 + result = test_bit(x, p->primes); 236 + rcu_read_unlock(); 237 + 238 + return result; 239 + } 240 + EXPORT_SYMBOL(is_prime_number); 241 + 242 + static void dump_primes(void) 243 + { 244 + const struct primes *p; 245 + char *buf; 246 + 247 + buf = kmalloc(PAGE_SIZE, GFP_KERNEL); 248 + 249 + rcu_read_lock(); 250 + p = rcu_dereference(primes); 251 + 252 + if (buf) 253 + bitmap_print_to_pagebuf(true, buf, p->primes, p->sz); 254 + pr_info("primes.{last=%lu, .sz=%lu, .primes[]=...x%lx} = %s", 255 + p->last, p->sz, p->primes[BITS_TO_LONGS(p->sz) - 1], buf); 256 + 257 + rcu_read_unlock(); 258 + 259 + kfree(buf); 260 + } 261 + 262 + static int selftest(unsigned long max) 263 + { 264 + unsigned long x, last; 265 + 266 + if (!max) 267 + return 0; 268 + 269 + for (last = 0, x = 2; x < max; x++) { 270 + bool slow = slow_is_prime_number(x); 271 + bool fast = is_prime_number(x); 272 + 273 + if (slow != fast) { 274 + pr_err("inconsistent result for is-prime(%lu): slow=%s, fast=%s!", 275 + x, slow ? "yes" : "no", fast ? 
"yes" : "no"); 276 + goto err; 277 + } 278 + 279 + if (!slow) 280 + continue; 281 + 282 + if (next_prime_number(last) != x) { 283 + pr_err("incorrect result for next-prime(%lu): expected %lu, got %lu", 284 + last, x, next_prime_number(last)); 285 + goto err; 286 + } 287 + last = x; 288 + } 289 + 290 + pr_info("selftest(%lu) passed, last prime was %lu", x, last); 291 + return 0; 292 + 293 + err: 294 + dump_primes(); 295 + return -EINVAL; 296 + } 297 + 298 + static int __init primes_init(void) 299 + { 300 + return selftest(selftest_max); 301 + } 302 + 303 + static void __exit primes_exit(void) 304 + { 305 + free_primes(); 306 + } 307 + 308 + module_init(primes_init); 309 + module_exit(primes_exit); 310 + 311 + module_param_named(selftest, selftest_max, ulong, 0400); 312 + 313 + MODULE_AUTHOR("Intel Corporation"); 314 + MODULE_LICENSE("GPL");
tools/testing/selftests/drivers/gpu/drm_mm.sh (+15 lines)
#!/bin/sh
# Runs API tests for struct drm_mm (DRM range manager)

if ! /sbin/modprobe -n -q test-drm_mm; then
	echo "drivers/gpu/drm_mm: [skip]"
	exit 77
fi

if /sbin/modprobe -q test-drm_mm; then
	/sbin/modprobe -q -r test-drm_mm
	echo "drivers/gpu/drm_mm: ok"
else
	echo "drivers/gpu/drm_mm: [FAIL]"
	exit 1
fi
tools/testing/selftests/lib/prime_numbers.sh (+15 lines)
#!/bin/sh
# Checks fast/slow prime_number generation for inconsistencies

if ! /sbin/modprobe -q -r prime_numbers; then
	echo "prime_numbers: [SKIP]"
	exit 77
fi

if /sbin/modprobe -q prime_numbers selftest=65536; then
	/sbin/modprobe -q -r prime_numbers
	echo "prime_numbers: ok"
else
	echo "prime_numbers: [FAIL]"
	exit 1
fi