Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-intel-next-2013-05-20-merged' of git://people.freedesktop.org/~danvet/drm-intel into drm-next

Daniel writes:
Highlights (copy-pasted from my testing cycle mails):
- fbc support for Haswell (Rodrigo)
- streamlined workaround comments, including an igt tool to grep for
them (Damien)
- sdvo and TV out cleanups, including a fixup for sdvo multifunction devices
- refactor our eDP mess a bit (Imre)
- don't register the hdmi connector on haswell when desktop eDP is present
- vlv support is no longer preliminary!
- more vlv fixes from Jesse for stolen and dpll handling
- more flexible power well checking infrastructure from Paulo
- a few gtt patches from Ben
- a bit of OCD cleanups for transcoder #defines and an assorted pile
of smaller things.
- fixes for the gmch modeset sequence
- a bit of OCD around plane/pipe usage (Ville)
- vlv turbo support (Jesse)
- tons of vlv modeset fixes (Jesse et al.)
- vlv pte write fixes (Kenneth Graunke)
- hpd filtering to avoid costly probes on unaffected outputs (Egbert Eich)
- intel dev_info cleanups and refactorings (Damien)
- vlv rc6 support (Jesse)
- random pile of fixes around non-24bpp modes handling
- asle/opregion cleanups and locking fixes (Jani)
- dp dpll refactoring
- improvements for reduced_clock computation on g4x/ilk+
- pfit state refactored to use pipe_config (Jesse)
- lots more computed modeset state moved to pipe_config, including readout
and cross-check support
- fdi auto-dithering for ivb B/C links, using the neat pipe_config
improvements
- drm_rect helpers plus sprite clipping fixes (Ville)
- hw context refcounting (Mika + Ben)

* tag 'drm-intel-next-2013-05-20-merged' of git://people.freedesktop.org/~danvet/drm-intel: (155 commits)
drm/i915: add support for dvo Chrontel 7010B
drm/i915: Use pipe config state to control gmch pfit enable/disable
drm/i915: Use pipe_config state to disable ilk+ pfit
drm/i915: panel fitter hw state readout&check support
drm/i915: implement WADPOClockGatingDisable for LPT
drm/i915: Add missing platform tags to FBC workaround comments
drm/i915: rip out an unused lvds_reg variable
drm/i915: Compute WR PLL dividers dynamically
drm/i915: HSW FBC WaFbcDisableDpfcClockGating
drm/i915: HSW FBC WaFbcAsynchFlipDisableFbcQueue
drm/i915: Enable FBC at Haswell.
drm/i915: IVB FBC WaFbcDisableDpfcClockGating
drm/i915: IVB FBC WaFbcAsynchFlipDisableFbcQueue
drm/i915: Add support for FBC on Ivybridge.
drm/i915: Organize VBT stuff inside drm_i915_private
drm/i915: make SDVO TV-out work for multifunction devices
drm/i915: rip out now unused is_foo tracking from crtc code
drm/i915: rip out TV-out lore ...
drm/i915: drop TVclock special casing on ilk+
drm/i915: move sdvo TV clock computation to intel_sdvo.c
...

+4397 -2158
+2
Documentation/DocBook/drm.tmpl
@@ -1653,5 +1653,7 @@
 <sect2>
 <title>KMS API Functions</title>
 !Edrivers/gpu/drm/drm_crtc.c
+!Edrivers/gpu/drm/drm_rect.c
+!Finclude/drm/drm_rect.h
 </sect2>
 </sect1>
+2 -1
drivers/gpu/drm/Makefile
@@ -12,7 +12,8 @@
 	drm_platform.o drm_sysfs.o drm_hashtab.o drm_mm.o \
 	drm_crtc.o drm_modes.o drm_edid.o \
 	drm_info.o drm_debugfs.o drm_encoder_slave.o \
-	drm_trace_points.o drm_global.o drm_prime.o
+	drm_trace_points.o drm_global.o drm_prime.o \
+	drm_rect.o
 
 drm-$(CONFIG_COMPAT) += drm_ioc32.o
 drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
+295
drivers/gpu/drm/drm_rect.c
··· 1 + /* 2 + * Copyright (C) 2011-2013 Intel Corporation 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice (including the next 12 + * paragraph) shall be included in all copies or substantial portions of the 13 + * Software. 14 + * 15 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 18 + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 + * SOFTWARE. 22 + */ 23 + 24 + #include <linux/errno.h> 25 + #include <linux/export.h> 26 + #include <linux/kernel.h> 27 + #include <drm/drmP.h> 28 + #include <drm/drm_rect.h> 29 + 30 + /** 31 + * drm_rect_intersect - intersect two rectangles 32 + * @r1: first rectangle 33 + * @r2: second rectangle 34 + * 35 + * Calculate the intersection of rectangles @r1 and @r2. 36 + * @r1 will be overwritten with the intersection. 37 + * 38 + * RETURNS: 39 + * %true if rectangle @r1 is still visible after the operation, 40 + * %false otherwise. 
41 + */ 42 + bool drm_rect_intersect(struct drm_rect *r1, const struct drm_rect *r2) 43 + { 44 + r1->x1 = max(r1->x1, r2->x1); 45 + r1->y1 = max(r1->y1, r2->y1); 46 + r1->x2 = min(r1->x2, r2->x2); 47 + r1->y2 = min(r1->y2, r2->y2); 48 + 49 + return drm_rect_visible(r1); 50 + } 51 + EXPORT_SYMBOL(drm_rect_intersect); 52 + 53 + /** 54 + * drm_rect_clip_scaled - perform a scaled clip operation 55 + * @src: source window rectangle 56 + * @dst: destination window rectangle 57 + * @clip: clip rectangle 58 + * @hscale: horizontal scaling factor 59 + * @vscale: vertical scaling factor 60 + * 61 + * Clip rectangle @dst by rectangle @clip. Clip rectangle @src by the 62 + * same amounts multiplied by @hscale and @vscale. 63 + * 64 + * RETURNS: 65 + * %true if rectangle @dst is still visible after being clipped, 66 + * %false otherwise 67 + */ 68 + bool drm_rect_clip_scaled(struct drm_rect *src, struct drm_rect *dst, 69 + const struct drm_rect *clip, 70 + int hscale, int vscale) 71 + { 72 + int diff; 73 + 74 + diff = clip->x1 - dst->x1; 75 + if (diff > 0) { 76 + int64_t tmp = src->x1 + (int64_t) diff * hscale; 77 + src->x1 = clamp_t(int64_t, tmp, INT_MIN, INT_MAX); 78 + } 79 + diff = clip->y1 - dst->y1; 80 + if (diff > 0) { 81 + int64_t tmp = src->y1 + (int64_t) diff * vscale; 82 + src->y1 = clamp_t(int64_t, tmp, INT_MIN, INT_MAX); 83 + } 84 + diff = dst->x2 - clip->x2; 85 + if (diff > 0) { 86 + int64_t tmp = src->x2 - (int64_t) diff * hscale; 87 + src->x2 = clamp_t(int64_t, tmp, INT_MIN, INT_MAX); 88 + } 89 + diff = dst->y2 - clip->y2; 90 + if (diff > 0) { 91 + int64_t tmp = src->y2 - (int64_t) diff * vscale; 92 + src->y2 = clamp_t(int64_t, tmp, INT_MIN, INT_MAX); 93 + } 94 + 95 + return drm_rect_intersect(dst, clip); 96 + } 97 + EXPORT_SYMBOL(drm_rect_clip_scaled); 98 + 99 + static int drm_calc_scale(int src, int dst) 100 + { 101 + int scale = 0; 102 + 103 + if (src < 0 || dst < 0) 104 + return -EINVAL; 105 + 106 + if (dst == 0) 107 + return 0; 108 + 109 + scale = src / dst; 
110 + 111 + return scale; 112 + } 113 + 114 + /** 115 + * drm_rect_calc_hscale - calculate the horizontal scaling factor 116 + * @src: source window rectangle 117 + * @dst: destination window rectangle 118 + * @min_hscale: minimum allowed horizontal scaling factor 119 + * @max_hscale: maximum allowed horizontal scaling factor 120 + * 121 + * Calculate the horizontal scaling factor as 122 + * (@src width) / (@dst width). 123 + * 124 + * RETURNS: 125 + * The horizontal scaling factor, or errno if out of limits. 126 + */ 127 + int drm_rect_calc_hscale(const struct drm_rect *src, 128 + const struct drm_rect *dst, 129 + int min_hscale, int max_hscale) 130 + { 131 + int src_w = drm_rect_width(src); 132 + int dst_w = drm_rect_width(dst); 133 + int hscale = drm_calc_scale(src_w, dst_w); 134 + 135 + if (hscale < 0 || dst_w == 0) 136 + return hscale; 137 + 138 + if (hscale < min_hscale || hscale > max_hscale) 139 + return -ERANGE; 140 + 141 + return hscale; 142 + } 143 + EXPORT_SYMBOL(drm_rect_calc_hscale); 144 + 145 + /** 146 + * drm_rect_calc_vscale - calculate the vertical scaling factor 147 + * @src: source window rectangle 148 + * @dst: destination window rectangle 149 + * @min_vscale: minimum allowed vertical scaling factor 150 + * @max_vscale: maximum allowed vertical scaling factor 151 + * 152 + * Calculate the vertical scaling factor as 153 + * (@src height) / (@dst height). 154 + * 155 + * RETURNS: 156 + * The vertical scaling factor, or errno if out of limits. 
157 + */ 158 + int drm_rect_calc_vscale(const struct drm_rect *src, 159 + const struct drm_rect *dst, 160 + int min_vscale, int max_vscale) 161 + { 162 + int src_h = drm_rect_height(src); 163 + int dst_h = drm_rect_height(dst); 164 + int vscale = drm_calc_scale(src_h, dst_h); 165 + 166 + if (vscale < 0 || dst_h == 0) 167 + return vscale; 168 + 169 + if (vscale < min_vscale || vscale > max_vscale) 170 + return -ERANGE; 171 + 172 + return vscale; 173 + } 174 + EXPORT_SYMBOL(drm_rect_calc_vscale); 175 + 176 + /** 177 + * drm_rect_calc_hscale_relaxed - calculate the horizontal scaling factor 178 + * @src: source window rectangle 179 + * @dst: destination window rectangle 180 + * @min_hscale: minimum allowed horizontal scaling factor 181 + * @max_hscale: maximum allowed horizontal scaling factor 182 + * 183 + * Calculate the horizontal scaling factor as 184 + * (@src width) / (@dst width). 185 + * 186 + * If the calculated scaling factor is below @min_hscale, 187 + * increase the width of rectangle @dst to compensate. 188 + * 189 + * If the calculated scaling factor is above @max_hscale, 190 + * decrease the width of rectangle @src to compensate. 191 + * 192 + * RETURNS: 193 + * The horizontal scaling factor. 
194 + */ 195 + int drm_rect_calc_hscale_relaxed(struct drm_rect *src, 196 + struct drm_rect *dst, 197 + int min_hscale, int max_hscale) 198 + { 199 + int src_w = drm_rect_width(src); 200 + int dst_w = drm_rect_width(dst); 201 + int hscale = drm_calc_scale(src_w, dst_w); 202 + 203 + if (hscale < 0 || dst_w == 0) 204 + return hscale; 205 + 206 + if (hscale < min_hscale) { 207 + int max_dst_w = src_w / min_hscale; 208 + 209 + drm_rect_adjust_size(dst, max_dst_w - dst_w, 0); 210 + 211 + return min_hscale; 212 + } 213 + 214 + if (hscale > max_hscale) { 215 + int max_src_w = dst_w * max_hscale; 216 + 217 + drm_rect_adjust_size(src, max_src_w - src_w, 0); 218 + 219 + return max_hscale; 220 + } 221 + 222 + return hscale; 223 + } 224 + EXPORT_SYMBOL(drm_rect_calc_hscale_relaxed); 225 + 226 + /** 227 + * drm_rect_calc_vscale_relaxed - calculate the vertical scaling factor 228 + * @src: source window rectangle 229 + * @dst: destination window rectangle 230 + * @min_vscale: minimum allowed vertical scaling factor 231 + * @max_vscale: maximum allowed vertical scaling factor 232 + * 233 + * Calculate the vertical scaling factor as 234 + * (@src height) / (@dst height). 235 + * 236 + * If the calculated scaling factor is below @min_vscale, 237 + * decrease the height of rectangle @dst to compensate. 238 + * 239 + * If the calculated scaling factor is above @max_vscale, 240 + * decrease the height of rectangle @src to compensate. 241 + * 242 + * RETURNS: 243 + * The vertical scaling factor. 
244 + */ 245 + int drm_rect_calc_vscale_relaxed(struct drm_rect *src, 246 + struct drm_rect *dst, 247 + int min_vscale, int max_vscale) 248 + { 249 + int src_h = drm_rect_height(src); 250 + int dst_h = drm_rect_height(dst); 251 + int vscale = drm_calc_scale(src_h, dst_h); 252 + 253 + if (vscale < 0 || dst_h == 0) 254 + return vscale; 255 + 256 + if (vscale < min_vscale) { 257 + int max_dst_h = src_h / min_vscale; 258 + 259 + drm_rect_adjust_size(dst, 0, max_dst_h - dst_h); 260 + 261 + return min_vscale; 262 + } 263 + 264 + if (vscale > max_vscale) { 265 + int max_src_h = dst_h * max_vscale; 266 + 267 + drm_rect_adjust_size(src, 0, max_src_h - src_h); 268 + 269 + return max_vscale; 270 + } 271 + 272 + return vscale; 273 + } 274 + EXPORT_SYMBOL(drm_rect_calc_vscale_relaxed); 275 + 276 + /** 277 + * drm_rect_debug_print - print the rectangle information 278 + * @r: rectangle to print 279 + * @fixed_point: rectangle is in 16.16 fixed point format 280 + */ 281 + void drm_rect_debug_print(const struct drm_rect *r, bool fixed_point) 282 + { 283 + int w = drm_rect_width(r); 284 + int h = drm_rect_height(r); 285 + 286 + if (fixed_point) 287 + DRM_DEBUG_KMS("%d.%06ux%d.%06u%+d.%06u%+d.%06u\n", 288 + w >> 16, ((w & 0xffff) * 15625) >> 10, 289 + h >> 16, ((h & 0xffff) * 15625) >> 10, 290 + r->x1 >> 16, ((r->x1 & 0xffff) * 15625) >> 10, 291 + r->y1 >> 16, ((r->y1 & 0xffff) * 15625) >> 10); 292 + else 293 + DRM_DEBUG_KMS("%dx%d%+d%+d\n", w, h, r->x1, r->y1); 294 + } 295 + EXPORT_SYMBOL(drm_rect_debug_print);
+26 -2
drivers/gpu/drm/i915/dvo_ch7xxx.c
··· 32 32 #define CH7xxx_REG_DID 0x4b 33 33 34 34 #define CH7011_VID 0x83 /* 7010 as well */ 35 + #define CH7010B_VID 0x05 35 36 #define CH7009A_VID 0x84 36 37 #define CH7009B_VID 0x85 37 38 #define CH7301_VID 0x95 38 39 39 40 #define CH7xxx_VID 0x84 40 41 #define CH7xxx_DID 0x17 42 + #define CH7010_DID 0x16 41 43 42 44 #define CH7xxx_NUM_REGS 0x4c 43 45 ··· 89 87 char *name; 90 88 } ch7xxx_ids[] = { 91 89 { CH7011_VID, "CH7011" }, 90 + { CH7010B_VID, "CH7010B" }, 92 91 { CH7009A_VID, "CH7009A" }, 93 92 { CH7009B_VID, "CH7009B" }, 94 93 { CH7301_VID, "CH7301" }, 94 + }; 95 + 96 + static struct ch7xxx_did_struct { 97 + uint8_t did; 98 + char *name; 99 + } ch7xxx_dids[] = { 100 + { CH7xxx_DID, "CH7XXX" }, 101 + { CH7010_DID, "CH7010B" }, 95 102 }; 96 103 97 104 struct ch7xxx_priv { ··· 114 103 for (i = 0; i < ARRAY_SIZE(ch7xxx_ids); i++) { 115 104 if (ch7xxx_ids[i].vid == vid) 116 105 return ch7xxx_ids[i].name; 106 + } 107 + 108 + return NULL; 109 + } 110 + 111 + static char *ch7xxx_get_did(uint8_t did) 112 + { 113 + int i; 114 + 115 + for (i = 0; i < ARRAY_SIZE(ch7xxx_dids); i++) { 116 + if (ch7xxx_dids[i].did == did) 117 + return ch7xxx_dids[i].name; 117 118 } 118 119 119 120 return NULL; ··· 202 179 /* this will detect the CH7xxx chip on the specified i2c bus */ 203 180 struct ch7xxx_priv *ch7xxx; 204 181 uint8_t vendor, device; 205 - char *name; 182 + char *name, *devid; 206 183 207 184 ch7xxx = kzalloc(sizeof(struct ch7xxx_priv), GFP_KERNEL); 208 185 if (ch7xxx == NULL) ··· 227 204 if (!ch7xxx_readb(dvo, CH7xxx_REG_DID, &device)) 228 205 goto out; 229 206 230 - if (device != CH7xxx_DID) { 207 + devid = ch7xxx_get_did(device); 208 + if (!devid) { 231 209 DRM_DEBUG_KMS("ch7xxx not detected; got 0x%02x from %s " 232 210 "slave %d.\n", 233 211 vendor, adapter->name, dvo->slave_addr);
+56 -14
drivers/gpu/drm/i915/i915_debugfs.c
··· 61 61 62 62 seq_printf(m, "gen: %d\n", info->gen); 63 63 seq_printf(m, "pch: %d\n", INTEL_PCH_TYPE(dev)); 64 - #define DEV_INFO_FLAG(x) seq_printf(m, #x ": %s\n", yesno(info->x)) 65 - #define DEV_INFO_SEP ; 66 - DEV_INFO_FLAGS; 67 - #undef DEV_INFO_FLAG 68 - #undef DEV_INFO_SEP 64 + #define PRINT_FLAG(x) seq_printf(m, #x ": %s\n", yesno(info->x)) 65 + #define SEP_SEMICOLON ; 66 + DEV_INFO_FOR_EACH_FLAG(PRINT_FLAG, SEP_SEMICOLON); 67 + #undef PRINT_FLAG 68 + #undef SEP_SEMICOLON 69 69 70 70 return 0; 71 71 } ··· 941 941 MEMSTAT_VID_SHIFT); 942 942 seq_printf(m, "Current P-state: %d\n", 943 943 (rgvstat & MEMSTAT_PSTATE_MASK) >> MEMSTAT_PSTATE_SHIFT); 944 - } else if (IS_GEN6(dev) || IS_GEN7(dev)) { 944 + } else if ((IS_GEN6(dev) || IS_GEN7(dev)) && !IS_VALLEYVIEW(dev)) { 945 945 u32 gt_perf_status = I915_READ(GEN6_GT_PERF_STATUS); 946 946 u32 rp_state_limits = I915_READ(GEN6_RP_STATE_LIMITS); 947 947 u32 rp_state_cap = I915_READ(GEN6_RP_STATE_CAP); ··· 1009 1009 1010 1010 seq_printf(m, "Max overclocked frequency: %dMHz\n", 1011 1011 dev_priv->rps.hw_max * GT_FREQUENCY_MULTIPLIER); 1012 + } else if (IS_VALLEYVIEW(dev)) { 1013 + u32 freq_sts, val; 1014 + 1015 + mutex_lock(&dev_priv->rps.hw_lock); 1016 + valleyview_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS, 1017 + &freq_sts); 1018 + seq_printf(m, "PUNIT_REG_GPU_FREQ_STS: 0x%08x\n", freq_sts); 1019 + seq_printf(m, "DDR freq: %d MHz\n", dev_priv->mem_freq); 1020 + 1021 + valleyview_punit_read(dev_priv, PUNIT_FUSE_BUS1, &val); 1022 + seq_printf(m, "max GPU freq: %d MHz\n", 1023 + vlv_gpu_freq(dev_priv->mem_freq, val)); 1024 + 1025 + valleyview_punit_read(dev_priv, PUNIT_REG_GPU_LFM, &val); 1026 + seq_printf(m, "min GPU freq: %d MHz\n", 1027 + vlv_gpu_freq(dev_priv->mem_freq, val)); 1028 + 1029 + seq_printf(m, "current GPU freq: %d MHz\n", 1030 + vlv_gpu_freq(dev_priv->mem_freq, 1031 + (freq_sts >> 8) & 0xff)); 1032 + mutex_unlock(&dev_priv->rps.hw_lock); 1012 1033 } else { 1013 1034 seq_printf(m, "no P-state info 
available\n"); 1014 1035 } ··· 1833 1812 if (ret) 1834 1813 return ret; 1835 1814 1836 - *val = dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER; 1815 + if (IS_VALLEYVIEW(dev)) 1816 + *val = vlv_gpu_freq(dev_priv->mem_freq, 1817 + dev_priv->rps.max_delay); 1818 + else 1819 + *val = dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER; 1837 1820 mutex_unlock(&dev_priv->rps.hw_lock); 1838 1821 1839 1822 return 0; ··· 1862 1837 /* 1863 1838 * Turbo will still be enabled, but won't go above the set value. 1864 1839 */ 1865 - do_div(val, GT_FREQUENCY_MULTIPLIER); 1866 - dev_priv->rps.max_delay = val; 1867 - gen6_set_rps(dev, val); 1840 + if (IS_VALLEYVIEW(dev)) { 1841 + val = vlv_freq_opcode(dev_priv->mem_freq, val); 1842 + dev_priv->rps.max_delay = val; 1843 + gen6_set_rps(dev, val); 1844 + } else { 1845 + do_div(val, GT_FREQUENCY_MULTIPLIER); 1846 + dev_priv->rps.max_delay = val; 1847 + gen6_set_rps(dev, val); 1848 + } 1849 + 1868 1850 mutex_unlock(&dev_priv->rps.hw_lock); 1869 1851 1870 1852 return 0; ··· 1895 1863 if (ret) 1896 1864 return ret; 1897 1865 1898 - *val = dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER; 1866 + if (IS_VALLEYVIEW(dev)) 1867 + *val = vlv_gpu_freq(dev_priv->mem_freq, 1868 + dev_priv->rps.min_delay); 1869 + else 1870 + *val = dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER; 1899 1871 mutex_unlock(&dev_priv->rps.hw_lock); 1900 1872 1901 1873 return 0; ··· 1924 1888 /* 1925 1889 * Turbo will still be enabled, but won't go below the set value. 1926 1890 */ 1927 - do_div(val, GT_FREQUENCY_MULTIPLIER); 1928 - dev_priv->rps.min_delay = val; 1929 - gen6_set_rps(dev, val); 1891 + if (IS_VALLEYVIEW(dev)) { 1892 + val = vlv_freq_opcode(dev_priv->mem_freq, val); 1893 + dev_priv->rps.min_delay = val; 1894 + valleyview_set_rps(dev, val); 1895 + } else { 1896 + do_div(val, GT_FREQUENCY_MULTIPLIER); 1897 + dev_priv->rps.min_delay = val; 1898 + gen6_set_rps(dev, val); 1899 + } 1930 1900 mutex_unlock(&dev_priv->rps.hw_lock); 1931 1901 1932 1902 return 0;
+16 -11
drivers/gpu/drm/i915/i915_dma.c
··· 1445 1445 { 1446 1446 const struct intel_device_info *info = dev_priv->info; 1447 1447 1448 - #define DEV_INFO_FLAG(name) info->name ? #name "," : "" 1449 - #define DEV_INFO_SEP , 1448 + #define PRINT_S(name) "%s" 1449 + #define SEP_EMPTY 1450 + #define PRINT_FLAG(name) info->name ? #name "," : "" 1451 + #define SEP_COMMA , 1450 1452 DRM_DEBUG_DRIVER("i915 device info: gen=%i, pciid=0x%04x flags=" 1451 - "%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s", 1453 + DEV_INFO_FOR_EACH_FLAG(PRINT_S, SEP_EMPTY), 1452 1454 info->gen, 1453 1455 dev_priv->dev->pdev->device, 1454 - DEV_INFO_FLAGS); 1455 - #undef DEV_INFO_FLAG 1456 - #undef DEV_INFO_SEP 1456 + DEV_INFO_FOR_EACH_FLAG(PRINT_FLAG, SEP_COMMA)); 1457 + #undef PRINT_S 1458 + #undef SEP_EMPTY 1459 + #undef PRINT_FLAG 1460 + #undef SEP_COMMA 1457 1461 } 1458 1462 1459 1463 /** ··· 1472 1468 { 1473 1469 struct drm_i915_private *dev_priv = dev->dev_private; 1474 1470 1475 - if (IS_HASWELL(dev)) 1471 + if (HAS_FPGA_DBG_UNCLAIMED(dev)) 1476 1472 I915_WRITE_NOTRACE(FPGA_DBG, FPGA_DBG_RM_NOCLAIM); 1477 1473 } 1478 1474 ··· 1633 1629 spin_lock_init(&dev_priv->irq_lock); 1634 1630 spin_lock_init(&dev_priv->gpu_error.lock); 1635 1631 spin_lock_init(&dev_priv->rps.lock); 1632 + spin_lock_init(&dev_priv->backlight.lock); 1636 1633 mutex_init(&dev_priv->dpio_lock); 1637 1634 1638 1635 mutex_init(&dev_priv->rps.hw_lock); ··· 1742 1737 * free the memory space allocated for the child device 1743 1738 * config parsed from VBT 1744 1739 */ 1745 - if (dev_priv->child_dev && dev_priv->child_dev_num) { 1746 - kfree(dev_priv->child_dev); 1747 - dev_priv->child_dev = NULL; 1748 - dev_priv->child_dev_num = 0; 1740 + if (dev_priv->vbt.child_dev && dev_priv->vbt.child_dev_num) { 1741 + kfree(dev_priv->vbt.child_dev); 1742 + dev_priv->vbt.child_dev = NULL; 1743 + dev_priv->vbt.child_dev_num = 0; 1749 1744 } 1750 1745 1751 1746 vga_switcheroo_unregister_client(dev->pdev);
+13 -11
drivers/gpu/drm/i915/i915_drv.c
··· 280 280 GEN7_FEATURES, 281 281 .is_ivybridge = 1, 282 282 .is_mobile = 1, 283 + .has_fbc = 1, 283 284 }; 284 285 285 286 static const struct intel_device_info intel_ivybridge_q_info = { ··· 309 308 static const struct intel_device_info intel_haswell_d_info = { 310 309 GEN7_FEATURES, 311 310 .is_haswell = 1, 311 + .has_ddi = 1, 312 + .has_fpga_dbg = 1, 312 313 }; 313 314 314 315 static const struct intel_device_info intel_haswell_m_info = { 315 316 GEN7_FEATURES, 316 317 .is_haswell = 1, 317 318 .is_mobile = 1, 319 + .has_ddi = 1, 320 + .has_fpga_dbg = 1, 321 + .has_fbc = 1, 318 322 }; 319 323 320 324 static const struct pci_device_id pciidlist[] = { /* aka */ ··· 555 549 */ 556 550 list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) 557 551 dev_priv->display.crtc_disable(crtc); 552 + 553 + intel_modeset_suspend_hw(dev); 558 554 } 559 555 560 556 i915_save_state(dev); ··· 992 984 struct intel_device_info *intel_info = 993 985 (struct intel_device_info *) ent->driver_data; 994 986 995 - if (intel_info->is_valleyview) 996 - if(!i915_preliminary_hw_support) { 997 - DRM_ERROR("Preliminary hardware support disabled\n"); 998 - return -ENODEV; 999 - } 1000 - 1001 987 /* Only bind to function 0 of the device. Early generations 1002 988 * used function 1 as a placeholder for multi-head. This causes 1003 989 * us confusion instead, especially on the systems where both ··· 1220 1218 static void 1221 1219 ilk_dummy_write(struct drm_i915_private *dev_priv) 1222 1220 { 1223 - /* WaIssueDummyWriteToWakeupFromRC6: Issue a dummy write to wake up the 1224 - * chip from rc6 before touching it for real. MI_MODE is masked, hence 1225 - * harmless to write 0 into. */ 1221 + /* WaIssueDummyWriteToWakeupFromRC6:ilk Issue a dummy write to wake up 1222 + * the chip from rc6 before touching it for real. MI_MODE is masked, 1223 + * hence harmless to write 0 into. 
*/ 1226 1224 I915_WRITE_NOTRACE(MI_MODE, 0); 1227 1225 } 1228 1226 1229 1227 static void 1230 1228 hsw_unclaimed_reg_clear(struct drm_i915_private *dev_priv, u32 reg) 1231 1229 { 1232 - if (IS_HASWELL(dev_priv->dev) && 1230 + if (HAS_FPGA_DBG_UNCLAIMED(dev_priv->dev) && 1233 1231 (I915_READ_NOTRACE(FPGA_DBG) & FPGA_DBG_RM_NOCLAIM)) { 1234 1232 DRM_ERROR("Unknown unclaimed register before writing to %x\n", 1235 1233 reg); ··· 1240 1238 static void 1241 1239 hsw_unclaimed_reg_check(struct drm_i915_private *dev_priv, u32 reg) 1242 1240 { 1243 - if (IS_HASWELL(dev_priv->dev) && 1241 + if (HAS_FPGA_DBG_UNCLAIMED(dev_priv->dev) && 1244 1242 (I915_READ_NOTRACE(FPGA_DBG) & FPGA_DBG_RM_NOCLAIM)) { 1245 1243 DRM_ERROR("Unclaimed write to %x\n", reg); 1246 1244 I915_WRITE_NOTRACE(FPGA_DBG, FPGA_DBG_RM_NOCLAIM);
+127 -87
drivers/gpu/drm/i915/i915_drv.h
··· 76 76 }; 77 77 #define plane_name(p) ((p) + 'A') 78 78 79 + #define sprite_name(p, s) ((p) * dev_priv->num_plane + (s) + 'A') 80 + 79 81 enum port { 80 82 PORT_A = 0, 81 83 PORT_B, ··· 87 85 I915_MAX_PORTS 88 86 }; 89 87 #define port_name(p) ((p) + 'A') 88 + 89 + enum intel_display_power_domain { 90 + POWER_DOMAIN_PIPE_A, 91 + POWER_DOMAIN_PIPE_B, 92 + POWER_DOMAIN_PIPE_C, 93 + POWER_DOMAIN_PIPE_A_PANEL_FITTER, 94 + POWER_DOMAIN_PIPE_B_PANEL_FITTER, 95 + POWER_DOMAIN_PIPE_C_PANEL_FITTER, 96 + POWER_DOMAIN_TRANSCODER_A, 97 + POWER_DOMAIN_TRANSCODER_B, 98 + POWER_DOMAIN_TRANSCODER_C, 99 + POWER_DOMAIN_TRANSCODER_EDP = POWER_DOMAIN_TRANSCODER_A + 0xF, 100 + }; 101 + 102 + #define POWER_DOMAIN_PIPE(pipe) ((pipe) + POWER_DOMAIN_PIPE_A) 103 + #define POWER_DOMAIN_PIPE_PANEL_FITTER(pipe) \ 104 + ((pipe) + POWER_DOMAIN_PIPE_A_PANEL_FITTER) 105 + #define POWER_DOMAIN_TRANSCODER(tran) ((tran) + POWER_DOMAIN_TRANSCODER_A) 90 106 91 107 enum hpd_pin { 92 108 HPD_NONE = 0, ··· 351 331 void (*force_wake_put)(struct drm_i915_private *dev_priv); 352 332 }; 353 333 354 - #define DEV_INFO_FLAGS \ 355 - DEV_INFO_FLAG(is_mobile) DEV_INFO_SEP \ 356 - DEV_INFO_FLAG(is_i85x) DEV_INFO_SEP \ 357 - DEV_INFO_FLAG(is_i915g) DEV_INFO_SEP \ 358 - DEV_INFO_FLAG(is_i945gm) DEV_INFO_SEP \ 359 - DEV_INFO_FLAG(is_g33) DEV_INFO_SEP \ 360 - DEV_INFO_FLAG(need_gfx_hws) DEV_INFO_SEP \ 361 - DEV_INFO_FLAG(is_g4x) DEV_INFO_SEP \ 362 - DEV_INFO_FLAG(is_pineview) DEV_INFO_SEP \ 363 - DEV_INFO_FLAG(is_broadwater) DEV_INFO_SEP \ 364 - DEV_INFO_FLAG(is_crestline) DEV_INFO_SEP \ 365 - DEV_INFO_FLAG(is_ivybridge) DEV_INFO_SEP \ 366 - DEV_INFO_FLAG(is_valleyview) DEV_INFO_SEP \ 367 - DEV_INFO_FLAG(is_haswell) DEV_INFO_SEP \ 368 - DEV_INFO_FLAG(has_force_wake) DEV_INFO_SEP \ 369 - DEV_INFO_FLAG(has_fbc) DEV_INFO_SEP \ 370 - DEV_INFO_FLAG(has_pipe_cxsr) DEV_INFO_SEP \ 371 - DEV_INFO_FLAG(has_hotplug) DEV_INFO_SEP \ 372 - DEV_INFO_FLAG(cursor_needs_physical) DEV_INFO_SEP \ 373 - DEV_INFO_FLAG(has_overlay) 
DEV_INFO_SEP \ 374 - DEV_INFO_FLAG(overlay_needs_physical) DEV_INFO_SEP \ 375 - DEV_INFO_FLAG(supports_tv) DEV_INFO_SEP \ 376 - DEV_INFO_FLAG(has_bsd_ring) DEV_INFO_SEP \ 377 - DEV_INFO_FLAG(has_blt_ring) DEV_INFO_SEP \ 378 - DEV_INFO_FLAG(has_llc) 334 + #define DEV_INFO_FOR_EACH_FLAG(func, sep) \ 335 + func(is_mobile) sep \ 336 + func(is_i85x) sep \ 337 + func(is_i915g) sep \ 338 + func(is_i945gm) sep \ 339 + func(is_g33) sep \ 340 + func(need_gfx_hws) sep \ 341 + func(is_g4x) sep \ 342 + func(is_pineview) sep \ 343 + func(is_broadwater) sep \ 344 + func(is_crestline) sep \ 345 + func(is_ivybridge) sep \ 346 + func(is_valleyview) sep \ 347 + func(is_haswell) sep \ 348 + func(has_force_wake) sep \ 349 + func(has_fbc) sep \ 350 + func(has_pipe_cxsr) sep \ 351 + func(has_hotplug) sep \ 352 + func(cursor_needs_physical) sep \ 353 + func(has_overlay) sep \ 354 + func(overlay_needs_physical) sep \ 355 + func(supports_tv) sep \ 356 + func(has_bsd_ring) sep \ 357 + func(has_blt_ring) sep \ 358 + func(has_llc) sep \ 359 + func(has_ddi) sep \ 360 + func(has_fpga_dbg) 361 + 362 + #define DEFINE_FLAG(name) u8 name:1 363 + #define SEP_SEMICOLON ; 379 364 380 365 struct intel_device_info { 381 366 u32 display_mmio_offset; 382 367 u8 num_pipes:3; 383 368 u8 gen; 384 - u8 is_mobile:1; 385 - u8 is_i85x:1; 386 - u8 is_i915g:1; 387 - u8 is_i945gm:1; 388 - u8 is_g33:1; 389 - u8 need_gfx_hws:1; 390 - u8 is_g4x:1; 391 - u8 is_pineview:1; 392 - u8 is_broadwater:1; 393 - u8 is_crestline:1; 394 - u8 is_ivybridge:1; 395 - u8 is_valleyview:1; 396 - u8 has_force_wake:1; 397 - u8 is_haswell:1; 398 - u8 has_fbc:1; 399 - u8 has_pipe_cxsr:1; 400 - u8 has_hotplug:1; 401 - u8 cursor_needs_physical:1; 402 - u8 has_overlay:1; 403 - u8 overlay_needs_physical:1; 404 - u8 supports_tv:1; 405 - u8 has_bsd_ring:1; 406 - u8 has_blt_ring:1; 407 - u8 has_llc:1; 369 + DEV_INFO_FOR_EACH_FLAG(DEFINE_FLAG, SEP_SEMICOLON); 408 370 }; 371 + 372 + #undef DEFINE_FLAG 373 + #undef SEP_SEMICOLON 409 374 410 375 enum 
i915_cache_level { 411 376 I915_CACHE_NONE = 0, 412 377 I915_CACHE_LLC, 413 378 I915_CACHE_LLC_MLC, /* gen6+, in docs at least! */ 414 379 }; 380 + 381 + typedef uint32_t gen6_gtt_pte_t; 415 382 416 383 /* The Graphics Translation Table is the way in which GEN hardware translates a 417 384 * Graphics Virtual Address into a Physical Address. In addition to the normal ··· 435 428 struct sg_table *st, 436 429 unsigned int pg_start, 437 430 enum i915_cache_level cache_level); 431 + gen6_gtt_pte_t (*pte_encode)(struct drm_device *dev, 432 + dma_addr_t addr, 433 + enum i915_cache_level level); 438 434 }; 439 435 #define gtt_total_entries(gtt) ((gtt).total >> PAGE_SHIFT) 440 436 ··· 459 449 struct sg_table *st, 460 450 unsigned int pg_start, 461 451 enum i915_cache_level cache_level); 452 + gen6_gtt_pte_t (*pte_encode)(struct drm_device *dev, 453 + dma_addr_t addr, 454 + enum i915_cache_level level); 462 455 int (*enable)(struct drm_device *dev); 463 456 void (*cleanup)(struct i915_hw_ppgtt *ppgtt); 464 457 }; ··· 470 457 /* This must match up with the value previously used for execbuf2.rsvd1. */ 471 458 #define DEFAULT_CONTEXT_ID 0 472 459 struct i915_hw_context { 460 + struct kref ref; 473 461 int id; 474 462 bool is_initialized; 475 463 struct drm_i915_file_private *file_priv; ··· 672 658 673 659 struct intel_gen6_power_mgmt { 674 660 struct work_struct work; 661 + struct delayed_work vlv_work; 675 662 u32 pm_iir; 676 663 /* lock - irqsave spinlock that protectects the work_struct and 677 664 * pm_iir. 
*/ ··· 683 668 u8 cur_delay; 684 669 u8 min_delay; 685 670 u8 max_delay; 671 + u8 rpe_delay; 686 672 u8 hw_max; 687 673 688 674 struct delayed_work delayed_resume_work; ··· 891 875 MODESET_SUSPENDED, 892 876 }; 893 877 878 + struct intel_vbt_data { 879 + struct drm_display_mode *lfp_lvds_vbt_mode; /* if any */ 880 + struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */ 881 + 882 + /* Feature bits */ 883 + unsigned int int_tv_support:1; 884 + unsigned int lvds_dither:1; 885 + unsigned int lvds_vbt:1; 886 + unsigned int int_crt_support:1; 887 + unsigned int lvds_use_ssc:1; 888 + unsigned int display_clock_mode:1; 889 + unsigned int fdi_rx_polarity_inverted:1; 890 + int lvds_ssc_freq; 891 + unsigned int bios_lvds_val; /* initial [PCH_]LVDS reg val in VBIOS */ 892 + 893 + /* eDP */ 894 + int edp_rate; 895 + int edp_lanes; 896 + int edp_preemphasis; 897 + int edp_vswing; 898 + bool edp_initialized; 899 + bool edp_support; 900 + int edp_bpp; 901 + struct edp_power_seq edp_pps; 902 + 903 + int crt_ddc_pin; 904 + 905 + int child_dev_num; 906 + struct child_device_config *child_dev; 907 + }; 908 + 894 909 typedef struct drm_i915_private { 895 910 struct drm_device *dev; 896 911 struct kmem_cache *slab; ··· 988 941 HPD_MARK_DISABLED = 2 989 942 } hpd_mark; 990 943 } hpd_stats[HPD_NUM_PINS]; 944 + u32 hpd_event_bits; 991 945 struct timer_list hotplug_reenable_timer; 992 946 993 947 int num_pch_pll; ··· 1001 953 struct intel_fbc_work *fbc_work; 1002 954 1003 955 struct intel_opregion opregion; 956 + struct intel_vbt_data vbt; 1004 957 1005 958 /* overlay */ 1006 959 struct intel_overlay *overlay; ··· 1011 962 struct { 1012 963 int level; 1013 964 bool enabled; 965 + spinlock_t lock; /* bl registers and the above bl fields */ 1014 966 struct backlight_device *device; 1015 967 } backlight; 1016 968 1017 969 /* LVDS info */ 1018 970 struct drm_display_mode *lfp_lvds_vbt_mode; /* if any */ 1019 971 struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */ 1020 - 1021 - /* 
Feature bits from the VBIOS */ 1022 - unsigned int int_tv_support:1; 1023 - unsigned int lvds_dither:1; 1024 - unsigned int lvds_vbt:1; 1025 - unsigned int int_crt_support:1; 1026 - unsigned int lvds_use_ssc:1; 1027 - unsigned int display_clock_mode:1; 1028 - unsigned int fdi_rx_polarity_inverted:1; 1029 - int lvds_ssc_freq; 1030 - unsigned int bios_lvds_val; /* initial [PCH_]LVDS reg val in VBIOS */ 1031 - struct { 1032 - int rate; 1033 - int lanes; 1034 - int preemphasis; 1035 - int vswing; 1036 - 1037 - bool initialized; 1038 - bool support; 1039 - int bpp; 1040 - struct edp_power_seq pps; 1041 - } edp; 1042 972 bool no_aux_handshake; 1043 973 1044 - int crt_ddc_pin; 1045 974 struct drm_i915_fence_reg fence_regs[I915_MAX_NUM_FENCES]; /* assume 965 */ 1046 975 int fence_reg_start; /* 4 if userland hasn't ioctl'd us yet */ 1047 976 int num_fence_regs; /* 8 on pre-965, 16 otherwise */ ··· 1047 1020 /* Kernel Modesetting */ 1048 1021 1049 1022 struct sdvo_device_mapping sdvo_mappings[2]; 1050 - /* indicate whether the LVDS_BORDER should be enabled or not */ 1051 - unsigned int lvds_border_bits; 1052 - /* Panel fitter placement and size for Ironlake+ */ 1053 - u32 pch_pf_pos, pch_pf_size; 1054 1023 1055 1024 struct drm_crtc *plane_to_crtc_mapping[3]; 1056 1025 struct drm_crtc *pipe_to_crtc_mapping[3]; ··· 1061 1038 /* indicates the reduced downclock for LVDS*/ 1062 1039 int lvds_downclock; 1063 1040 u16 orig_clock; 1064 - int child_dev_num; 1065 - struct child_device_config *child_dev; 1066 1041 1067 1042 bool mchbar_need_disable; 1068 1043 ··· 1079 1058 struct drm_mm_node *compressed_llb; 1080 1059 1081 1060 struct i915_gpu_error gpu_error; 1061 + 1062 + struct drm_i915_gem_object *vlv_pctx; 1082 1063 1083 1064 /* list of fbdev register on this device */ 1084 1065 struct intel_fbdev *fbdev; ··· 1297 1274 /** Postion in the ringbuffer of the end of the request */ 1298 1275 u32 tail; 1299 1276 1277 + /** Context related to this request */ 1278 + struct i915_hw_context 
*ctx; 1279 + 1300 1280 /** Time at which this request was emitted, in jiffies. */ 1301 1281 unsigned long emitted_jiffies; 1302 1282 ··· 1399 1373 1400 1374 #define HAS_PIPE_CONTROL(dev) (INTEL_INFO(dev)->gen >= 5) 1401 1375 1402 - #define HAS_DDI(dev) (IS_HASWELL(dev)) 1376 + #define HAS_DDI(dev) (INTEL_INFO(dev)->has_ddi) 1403 1377 #define HAS_POWER_WELL(dev) (IS_HASWELL(dev)) 1378 + #define HAS_FPGA_DBG_UNCLAIMED(dev) (INTEL_INFO(dev)->has_fpga_dbg) 1404 1379 1405 1380 #define INTEL_PCH_DEVICE_ID_MASK 0xff00 1406 1381 #define INTEL_PCH_IBX_DEVICE_ID_TYPE 0x3b00 ··· 1512 1485 1513 1486 void 1514 1487 i915_disable_pipestat(drm_i915_private_t *dev_priv, int pipe, u32 mask); 1515 - 1516 - void intel_enable_asle(struct drm_device *dev); 1517 1488 1518 1489 #ifdef CONFIG_DEBUG_FS 1519 1490 extern void i915_destroy_error_state(struct drm_device *dev); ··· 1728 1703 void i915_gem_context_close(struct drm_device *dev, struct drm_file *file); 1729 1704 int i915_switch_context(struct intel_ring_buffer *ring, 1730 1705 struct drm_file *file, int to_id); 1706 + void i915_gem_context_free(struct kref *ctx_ref); 1707 + static inline void i915_gem_context_reference(struct i915_hw_context *ctx) 1708 + { 1709 + kref_get(&ctx->ref); 1710 + } 1711 + 1712 + static inline void i915_gem_context_unreference(struct i915_hw_context *ctx) 1713 + { 1714 + kref_put(&ctx->ref, i915_gem_context_free); 1715 + } 1716 + 1731 1717 int i915_gem_context_create_ioctl(struct drm_device *dev, void *data, 1732 1718 struct drm_file *file); 1733 1719 int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data, ··· 1836 1800 /* intel_i2c.c */ 1837 1801 extern int intel_setup_gmbus(struct drm_device *dev); 1838 1802 extern void intel_teardown_gmbus(struct drm_device *dev); 1839 - extern inline bool intel_gmbus_is_port_valid(unsigned port) 1803 + static inline bool intel_gmbus_is_port_valid(unsigned port) 1840 1804 { 1841 1805 return (port >= GMBUS_PORT_SSC && port <= GMBUS_PORT_DPD); 1842 1806 } 
··· 1845 1809 struct drm_i915_private *dev_priv, unsigned port); 1846 1810 extern void intel_gmbus_set_speed(struct i2c_adapter *adapter, int speed); 1847 1811 extern void intel_gmbus_force_bit(struct i2c_adapter *adapter, bool force_bit); 1848 - extern inline bool intel_gmbus_is_forced_bit(struct i2c_adapter *adapter) 1812 + static inline bool intel_gmbus_is_forced_bit(struct i2c_adapter *adapter) 1849 1813 { 1850 1814 return container_of(adapter, struct intel_gmbus, adapter)->force_bit; 1851 1815 } ··· 1857 1821 extern void intel_opregion_init(struct drm_device *dev); 1858 1822 extern void intel_opregion_fini(struct drm_device *dev); 1859 1823 extern void intel_opregion_asle_intr(struct drm_device *dev); 1860 - extern void intel_opregion_gse_intr(struct drm_device *dev); 1861 - extern void intel_opregion_enable_asle(struct drm_device *dev); 1862 1824 #else 1863 1825 static inline void intel_opregion_init(struct drm_device *dev) { return; } 1864 1826 static inline void intel_opregion_fini(struct drm_device *dev) { return; } 1865 1827 static inline void intel_opregion_asle_intr(struct drm_device *dev) { return; } 1866 - static inline void intel_opregion_gse_intr(struct drm_device *dev) { return; } 1867 - static inline void intel_opregion_enable_asle(struct drm_device *dev) { return; } 1868 1828 #endif 1869 1829 1870 1830 /* intel_acpi.c */ ··· 1874 1842 1875 1843 /* modesetting */ 1876 1844 extern void intel_modeset_init_hw(struct drm_device *dev); 1845 + extern void intel_modeset_suspend_hw(struct drm_device *dev); 1877 1846 extern void intel_modeset_init(struct drm_device *dev); 1878 1847 extern void intel_modeset_gem_init(struct drm_device *dev); 1879 1848 extern void intel_modeset_cleanup(struct drm_device *dev); ··· 1887 1854 extern bool ironlake_set_drps(struct drm_device *dev, u8 val); 1888 1855 extern void intel_init_pch_refclk(struct drm_device *dev); 1889 1856 extern void gen6_set_rps(struct drm_device *dev, u8 val); 1857 + extern void 
valleyview_set_rps(struct drm_device *dev, u8 val); 1858 + extern int valleyview_rps_max_freq(struct drm_i915_private *dev_priv); 1859 + extern int valleyview_rps_min_freq(struct drm_i915_private *dev_priv); 1890 1860 extern void intel_detect_pch(struct drm_device *dev); 1891 1861 extern int intel_trans_dp_port_sel(struct drm_crtc *crtc); 1892 1862 extern int intel_enable_rc6(const struct drm_device *dev); ··· 1921 1885 int sandybridge_pcode_write(struct drm_i915_private *dev_priv, u8 mbox, u32 val); 1922 1886 int valleyview_punit_read(struct drm_i915_private *dev_priv, u8 addr, u32 *val); 1923 1887 int valleyview_punit_write(struct drm_i915_private *dev_priv, u8 addr, u32 val); 1888 + int valleyview_nc_read(struct drm_i915_private *dev_priv, u8 addr, u32 *val); 1889 + 1890 + int vlv_gpu_freq(int ddr_freq, int val); 1891 + int vlv_freq_opcode(int ddr_freq, int val); 1924 1892 1925 1893 #define __i915_read(x, y) \ 1926 1894 u##x i915_read##x(struct drm_i915_private *dev_priv, u32 reg);
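The header changes above replace the old open-coded context destroy with kref-based reference counting: `i915_gem_context_free()` becomes the kref release callback, and the inline `i915_gem_context_reference()`/`i915_gem_context_unreference()` helpers wrap `kref_get()`/`kref_put()`. Here is a minimal userspace sketch of that pattern; the `kref` reimplementation and all struct names are illustrative stand-ins for the kernel's API, not the kernel code itself:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy single-threaded stand-in for the kernel's struct kref. */
struct kref { int refcount; };

static void kref_init(struct kref *k) { k->refcount = 1; }
static void kref_get(struct kref *k)  { k->refcount++; }

/* Returns 1 if the release callback ran, 0 otherwise (like kref_put()). */
static int kref_put(struct kref *k, void (*release)(struct kref *k))
{
	if (--k->refcount == 0) {
		release(k);
		return 1;
	}
	return 0;
}

struct hw_context {
	struct kref ref;	/* first member, so the cast below is safe */
	int id;
};

static int contexts_freed;	/* test hook: counts release() invocations */

static void context_free(struct kref *ref)
{
	/* the kernel uses container_of(); here ref is the first member */
	struct hw_context *ctx = (struct hw_context *)ref;

	contexts_freed++;
	free(ctx);
}

static void context_reference(struct hw_context *ctx)
{
	kref_get(&ctx->ref);
}

static int context_unreference(struct hw_context *ctx)
{
	return kref_put(&ctx->ref, context_free);
}
```

The object is freed exactly once, from the release callback, when the last holder drops its reference; every other destroy site in the diff turns into an unreference.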
+18 -6
drivers/gpu/drm/i915/i915_gem.c
··· 2042 2042 request->seqno = intel_ring_get_seqno(ring); 2043 2043 request->ring = ring; 2044 2044 request->tail = request_ring_position; 2045 + request->ctx = ring->last_context; 2046 + 2047 + if (request->ctx) 2048 + i915_gem_context_reference(request->ctx); 2049 + 2045 2050 request->emitted_jiffies = jiffies; 2046 2051 was_empty = list_empty(&ring->request_list); 2047 2052 list_add_tail(&request->list, &ring->request_list); ··· 2099 2094 spin_unlock(&file_priv->mm.lock); 2100 2095 } 2101 2096 2097 + static void i915_gem_free_request(struct drm_i915_gem_request *request) 2098 + { 2099 + list_del(&request->list); 2100 + i915_gem_request_remove_from_client(request); 2101 + 2102 + if (request->ctx) 2103 + i915_gem_context_unreference(request->ctx); 2104 + 2105 + kfree(request); 2106 + } 2107 + 2102 2108 static void i915_gem_reset_ring_lists(struct drm_i915_private *dev_priv, 2103 2109 struct intel_ring_buffer *ring) 2104 2110 { ··· 2120 2104 struct drm_i915_gem_request, 2121 2105 list); 2122 2106 2123 - list_del(&request->list); 2124 - i915_gem_request_remove_from_client(request); 2125 - kfree(request); 2107 + i915_gem_free_request(request); 2126 2108 } 2127 2109 2128 2110 while (!list_empty(&ring->active_list)) { ··· 2212 2198 */ 2213 2199 ring->last_retired_head = request->tail; 2214 2200 2215 - list_del(&request->list); 2216 - i915_gem_request_remove_from_client(request); 2217 - kfree(request); 2201 + i915_gem_free_request(request); 2218 2202 } 2219 2203 2220 2204 /* Move any buffers on the active list that are no longer referenced
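The i915_gem.c hunk folds two identical teardown sequences (reset path and retire path) into a single `i915_gem_free_request()`, which now also drops the context reference a request takes at emit time. A self-contained sketch of that ownership rule, with invented names and plain malloc/free standing in for GEM objects and list handling:

```c
#include <assert.h>
#include <stdlib.h>

struct context { int refcount; };

struct request {
	struct context *ctx;	/* NULL until hw contexts are in use */
};

static void context_unreference(struct context *ctx)
{
	if (--ctx->refcount == 0)
		free(ctx);
}

/* Emitting a request pins the ring's current context with a reference... */
static struct request *add_request(struct context *current_ctx)
{
	struct request *req = calloc(1, sizeof(*req));

	req->ctx = current_ctx;
	if (req->ctx)
		req->ctx->refcount++;
	return req;
}

/* ...and the one teardown helper drops it again, so the reset and
 * retire paths can no longer get out of balance with each other. */
static void free_request(struct request *req)
{
	/* list_del() + client removal happen here in the kernel */
	if (req->ctx)
		context_unreference(req->ctx);
	free(req);
}
```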
+48 -26
drivers/gpu/drm/i915/i915_gem_context.c
··· 124 124 return ret; 125 125 } 126 126 127 - static void do_destroy(struct i915_hw_context *ctx) 127 + void i915_gem_context_free(struct kref *ctx_ref) 128 128 { 129 - if (ctx->file_priv) 130 - idr_remove(&ctx->file_priv->context_idr, ctx->id); 129 + struct i915_hw_context *ctx = container_of(ctx_ref, 130 + typeof(*ctx), ref); 131 131 132 132 drm_gem_object_unreference(&ctx->obj->base); 133 133 kfree(ctx); ··· 145 145 if (ctx == NULL) 146 146 return ERR_PTR(-ENOMEM); 147 147 148 + kref_init(&ctx->ref); 148 149 ctx->obj = i915_gem_alloc_object(dev, dev_priv->hw_context_size); 149 150 if (ctx->obj == NULL) { 150 151 kfree(ctx); ··· 170 169 if (file_priv == NULL) 171 170 return ctx; 172 171 173 - ctx->file_priv = file_priv; 174 - 175 172 ret = idr_alloc(&file_priv->context_idr, ctx, DEFAULT_CONTEXT_ID + 1, 0, 176 173 GFP_KERNEL); 177 174 if (ret < 0) 178 175 goto err_out; 176 + 177 + ctx->file_priv = file_priv; 179 178 ctx->id = ret; 180 179 181 180 return ctx; 182 181 183 182 err_out: 184 - do_destroy(ctx); 183 + i915_gem_context_unreference(ctx); 185 184 return ERR_PTR(ret); 186 185 } 187 186 ··· 227 226 err_unpin: 228 227 i915_gem_object_unpin(ctx->obj); 229 228 err_destroy: 230 - do_destroy(ctx); 229 + i915_gem_context_unreference(ctx); 231 230 return ret; 232 231 } 233 232 ··· 263 262 void i915_gem_context_fini(struct drm_device *dev) 264 263 { 265 264 struct drm_i915_private *dev_priv = dev->dev_private; 265 + struct i915_hw_context *dctx = dev_priv->ring[RCS].default_context; 266 266 267 267 if (dev_priv->hw_contexts_disabled) 268 268 return; ··· 273 271 * other code, leading to spurious errors. */ 274 272 intel_gpu_reset(dev); 275 273 276 - i915_gem_object_unpin(dev_priv->ring[RCS].default_context->obj); 274 + i915_gem_object_unpin(dctx->obj); 277 275 278 - do_destroy(dev_priv->ring[RCS].default_context); 276 + /* When default context is created and switched to, base object refcount 277 + * will be 2 (+1 from object creation and +1 from do_switch()). 
278 + * i915_gem_context_fini() will be called after gpu_idle() has switched 279 + * to default context. So we need to unreference the base object once 280 + * to offset the do_switch part, so that i915_gem_context_unreference() 281 + * can then free the base object correctly. */ 282 + drm_gem_object_unreference(&dctx->obj->base); 283 + i915_gem_context_unreference(dctx); 279 284 } 280 285 281 286 static int context_idr_cleanup(int id, void *p, void *data) ··· 291 282 292 283 BUG_ON(id == DEFAULT_CONTEXT_ID); 293 284 294 - do_destroy(ctx); 295 - 285 + i915_gem_context_unreference(ctx); 296 286 return 0; 297 287 } 298 288 ··· 333 325 if (ret) 334 326 return ret; 335 327 328 + /* WaProgramMiArbOnOffAroundMiSetContext:ivb,vlv,hsw */ 336 329 if (IS_GEN7(ring->dev)) 337 330 intel_ring_emit(ring, MI_ARB_ON_OFF | MI_ARB_DISABLE); 338 331 else ··· 362 353 static int do_switch(struct i915_hw_context *to) 363 354 { 364 355 struct intel_ring_buffer *ring = to->ring; 365 - struct drm_i915_gem_object *from_obj = ring->last_context_obj; 356 + struct i915_hw_context *from = ring->last_context; 366 357 u32 hw_flags = 0; 367 358 int ret; 368 359 369 - BUG_ON(from_obj != NULL && from_obj->pin_count == 0); 360 + BUG_ON(from != NULL && from->obj != NULL && from->obj->pin_count == 0); 370 361 371 - if (from_obj == to->obj) 362 + if (from == to) 372 363 return 0; 373 364 374 365 ret = i915_gem_object_pin(to->obj, CONTEXT_ALIGN, false, false); ··· 391 382 392 383 if (!to->is_initialized || is_default_context(to)) 393 384 hw_flags |= MI_RESTORE_INHIBIT; 394 - else if (WARN_ON_ONCE(from_obj == to->obj)) /* not yet expected */ 385 + else if (WARN_ON_ONCE(from == to)) /* not yet expected */ 395 386 hw_flags |= MI_FORCE_RESTORE; 396 387 397 388 ret = mi_set_context(ring, to, hw_flags); ··· 406 397 * is a bit suboptimal because the retiring can occur simply after the 407 398 * MI_SET_CONTEXT instead of when the next seqno has completed. 
408 399 */ 409 - if (from_obj != NULL) { 410 - from_obj->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION; 411 - i915_gem_object_move_to_active(from_obj, ring); 400 + if (from != NULL) { 401 + from->obj->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION; 402 + i915_gem_object_move_to_active(from->obj, ring); 412 403 /* As long as MI_SET_CONTEXT is serializing, ie. it flushes the 413 404 * whole damn pipeline, we don't need to explicitly mark the 414 405 * object dirty. The only exception is that the context must be ··· 416 407 * able to defer doing this until we know the object would be 417 408 * swapped, but there is no way to do that yet. 418 409 */ 419 - from_obj->dirty = 1; 420 - BUG_ON(from_obj->ring != ring); 421 - i915_gem_object_unpin(from_obj); 410 + from->obj->dirty = 1; 411 + BUG_ON(from->obj->ring != ring); 422 412 423 - drm_gem_object_unreference(&from_obj->base); 413 + ret = i915_add_request(ring, NULL, NULL); 414 + if (ret) { 415 + /* Too late, we've already scheduled a context switch. 416 + * Try to undo the change so that the hw state is 417 + * consistent with out tracking. In case of emergency, 418 + * scream. 
419 + */ 420 + WARN_ON(mi_set_context(ring, from, MI_RESTORE_INHIBIT)); 421 + return ret; 422 + } 423 + 424 + i915_gem_object_unpin(from->obj); 425 + i915_gem_context_unreference(from); 424 426 } 425 427 426 - drm_gem_object_reference(&to->obj->base); 427 - ring->last_context_obj = to->obj; 428 + i915_gem_context_reference(to); 429 + ring->last_context = to; 428 430 to->is_initialized = true; 429 431 430 432 return 0; ··· 463 443 464 444 if (dev_priv->hw_contexts_disabled) 465 445 return 0; 446 + 447 + WARN_ON(!mutex_is_locked(&dev_priv->dev->struct_mutex)); 466 448 467 449 if (ring != &dev_priv->ring[RCS]) 468 450 return 0; ··· 534 512 return -ENOENT; 535 513 } 536 514 537 - do_destroy(ctx); 538 - 515 + idr_remove(&ctx->file_priv->context_idr, ctx->id); 516 + i915_gem_context_unreference(ctx); 539 517 mutex_unlock(&dev->struct_mutex); 540 518 541 519 DRM_DEBUG_DRIVER("HW context %d destroyed\n", args->ctx_id);
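In `do_switch()` the tracking moves from raw objects (`last_context_obj`) to whole contexts (`last_context`): the outgoing context is unreferenced only after its state has been saved, and the incoming context is referenced before it becomes current. A sketch of that handoff, assuming the same toy refcount as above (names invented, the early-return on `from == to` mirrors the patch):

```c
#include <assert.h>
#include <stddef.h>

struct context { int refcount; };

static void context_reference(struct context *ctx)   { ctx->refcount++; }
static void context_unreference(struct context *ctx) { ctx->refcount--; }

static struct context *last_context;	/* what the hw currently runs */

static void switch_context(struct context *to)
{
	struct context *from = last_context;

	if (from == to)
		return;		/* already current, nothing to emit */

	/* In the kernel, MI_SET_CONTEXT and i915_add_request() sit
	 * between these two steps; only then is 'from' safe to drop. */
	if (from)
		context_unreference(from);

	context_reference(to);
	last_context = to;
}
```

The invariant: the context the hardware is running always holds at least one reference, so it can never be freed out from under an in-flight switch.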
+81 -29
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 28 28 #include "i915_trace.h" 29 29 #include "intel_drv.h" 30 30 31 - typedef uint32_t gen6_gtt_pte_t; 32 - 33 31 /* PPGTT stuff */ 34 32 #define GEN6_GTT_ADDR_ENCODE(addr) ((addr) | (((addr) >> 28) & 0xff0)) 35 33 ··· 42 44 #define GEN6_PTE_CACHE_LLC_MLC (3 << 1) 43 45 #define GEN6_PTE_ADDR_ENCODE(addr) GEN6_GTT_ADDR_ENCODE(addr) 44 46 45 - static inline gen6_gtt_pte_t gen6_pte_encode(struct drm_device *dev, 46 - dma_addr_t addr, 47 - enum i915_cache_level level) 47 + static gen6_gtt_pte_t gen6_pte_encode(struct drm_device *dev, 48 + dma_addr_t addr, 49 + enum i915_cache_level level) 48 50 { 49 51 gen6_gtt_pte_t pte = GEN6_PTE_VALID; 50 52 pte |= GEN6_PTE_ADDR_ENCODE(addr); 51 53 52 54 switch (level) { 53 55 case I915_CACHE_LLC_MLC: 54 - /* Haswell doesn't set L3 this way */ 55 - if (IS_HASWELL(dev)) 56 - pte |= GEN6_PTE_CACHE_LLC; 57 - else 58 - pte |= GEN6_PTE_CACHE_LLC_MLC; 56 + pte |= GEN6_PTE_CACHE_LLC_MLC; 59 57 break; 60 58 case I915_CACHE_LLC: 61 59 pte |= GEN6_PTE_CACHE_LLC; 62 60 break; 63 61 case I915_CACHE_NONE: 64 - if (IS_HASWELL(dev)) 65 - pte |= HSW_PTE_UNCACHED; 66 - else 67 - pte |= GEN6_PTE_UNCACHED; 62 + pte |= GEN6_PTE_UNCACHED; 68 63 break; 69 64 default: 70 65 BUG(); ··· 66 75 return pte; 67 76 } 68 77 69 - static int gen6_ppgtt_enable(struct drm_device *dev) 78 + #define BYT_PTE_WRITEABLE (1 << 1) 79 + #define BYT_PTE_SNOOPED_BY_CPU_CACHES (1 << 2) 80 + 81 + static gen6_gtt_pte_t byt_pte_encode(struct drm_device *dev, 82 + dma_addr_t addr, 83 + enum i915_cache_level level) 70 84 { 71 - drm_i915_private_t *dev_priv = dev->dev_private; 72 - uint32_t pd_offset; 73 - struct intel_ring_buffer *ring; 74 - struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt; 85 + gen6_gtt_pte_t pte = GEN6_PTE_VALID; 86 + pte |= GEN6_PTE_ADDR_ENCODE(addr); 87 + 88 + /* Mark the page as writeable. Other platforms don't have a 89 + * setting for read-only/writable, so this matches that behavior. 
90 + */ 91 + pte |= BYT_PTE_WRITEABLE; 92 + 93 + if (level != I915_CACHE_NONE) 94 + pte |= BYT_PTE_SNOOPED_BY_CPU_CACHES; 95 + 96 + return pte; 97 + } 98 + 99 + static gen6_gtt_pte_t hsw_pte_encode(struct drm_device *dev, 100 + dma_addr_t addr, 101 + enum i915_cache_level level) 102 + { 103 + gen6_gtt_pte_t pte = GEN6_PTE_VALID; 104 + pte |= GEN6_PTE_ADDR_ENCODE(addr); 105 + 106 + if (level != I915_CACHE_NONE) 107 + pte |= GEN6_PTE_CACHE_LLC; 108 + 109 + return pte; 110 + } 111 + 112 + static void gen6_write_pdes(struct i915_hw_ppgtt *ppgtt) 113 + { 114 + struct drm_i915_private *dev_priv = ppgtt->dev->dev_private; 75 115 gen6_gtt_pte_t __iomem *pd_addr; 76 116 uint32_t pd_entry; 77 117 int i; 78 118 119 + WARN_ON(ppgtt->pd_offset & 0x3f); 79 120 pd_addr = (gen6_gtt_pte_t __iomem*)dev_priv->gtt.gsm + 80 121 ppgtt->pd_offset / sizeof(gen6_gtt_pte_t); 81 122 for (i = 0; i < ppgtt->num_pd_entries; i++) { ··· 120 97 writel(pd_entry, pd_addr + i); 121 98 } 122 99 readl(pd_addr); 100 + } 101 + 102 + static int gen6_ppgtt_enable(struct drm_device *dev) 103 + { 104 + drm_i915_private_t *dev_priv = dev->dev_private; 105 + uint32_t pd_offset; 106 + struct intel_ring_buffer *ring; 107 + struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt; 108 + int i; 109 + 110 + BUG_ON(ppgtt->pd_offset & 0x3f); 111 + 112 + gen6_write_pdes(ppgtt); 123 113 124 114 pd_offset = ppgtt->pd_offset; 125 115 pd_offset /= 64; /* in cachelines, */ ··· 190 154 unsigned first_pte = first_entry % I915_PPGTT_PT_ENTRIES; 191 155 unsigned last_pte, i; 192 156 193 - scratch_pte = gen6_pte_encode(ppgtt->dev, 194 - ppgtt->scratch_page_dma_addr, 195 - I915_CACHE_LLC); 157 + scratch_pte = ppgtt->pte_encode(ppgtt->dev, 158 + ppgtt->scratch_page_dma_addr, 159 + I915_CACHE_LLC); 196 160 197 161 while (num_entries) { 198 162 last_pte = first_pte + num_entries; ··· 227 191 dma_addr_t page_addr; 228 192 229 193 page_addr = sg_page_iter_dma_address(&sg_iter); 230 - pt_vaddr[act_pte] = gen6_pte_encode(ppgtt->dev, 
page_addr, 231 - cache_level); 194 + pt_vaddr[act_pte] = ppgtt->pte_encode(ppgtt->dev, page_addr, 195 + cache_level); 232 196 if (++act_pte == I915_PPGTT_PT_ENTRIES) { 233 197 kunmap_atomic(pt_vaddr); 234 198 act_pt++; ··· 269 233 /* ppgtt PDEs reside in the global gtt pagetable, which has 512*1024 270 234 * entries. For aliasing ppgtt support we just steal them at the end for 271 235 * now. */ 272 - first_pd_entry_in_global_pt = gtt_total_entries(dev_priv->gtt); 236 + first_pd_entry_in_global_pt = gtt_total_entries(dev_priv->gtt); 273 237 238 + if (IS_HASWELL(dev)) { 239 + ppgtt->pte_encode = hsw_pte_encode; 240 + } else if (IS_VALLEYVIEW(dev)) { 241 + ppgtt->pte_encode = byt_pte_encode; 242 + } else { 243 + ppgtt->pte_encode = gen6_pte_encode; 244 + } 274 245 ppgtt->num_pd_entries = I915_PPGTT_PD_ENTRIES; 275 246 ppgtt->enable = gen6_ppgtt_enable; 276 247 ppgtt->clear_range = gen6_ppgtt_clear_range; ··· 480 437 481 438 for_each_sg_page(st->sgl, &sg_iter, st->nents, 0) { 482 439 addr = sg_page_iter_dma_address(&sg_iter); 483 - iowrite32(gen6_pte_encode(dev, addr, level), &gtt_entries[i]); 440 + iowrite32(dev_priv->gtt.pte_encode(dev, addr, level), 441 + &gtt_entries[i]); 484 442 i++; 485 443 } 486 444 ··· 493 449 */ 494 450 if (i != 0) 495 451 WARN_ON(readl(&gtt_entries[i-1]) 496 - != gen6_pte_encode(dev, addr, level)); 452 + != dev_priv->gtt.pte_encode(dev, addr, level)); 497 453 498 454 /* This next bit makes the above posting read even more important. 
We 499 455 * want to flush the TLBs only after we're certain all the PTE updates ··· 518 474 first_entry, num_entries, max_entries)) 519 475 num_entries = max_entries; 520 476 521 - scratch_pte = gen6_pte_encode(dev, dev_priv->gtt.scratch_page_dma, 522 - I915_CACHE_LLC); 477 + scratch_pte = dev_priv->gtt.pte_encode(dev, 478 + dev_priv->gtt.scratch_page_dma, 479 + I915_CACHE_LLC); 523 480 for (i = 0; i < num_entries; i++) 524 481 iowrite32(scratch_pte, &gtt_base[i]); 525 482 readl(gtt_base); ··· 854 809 } else { 855 810 dev_priv->gtt.gtt_probe = gen6_gmch_probe; 856 811 dev_priv->gtt.gtt_remove = gen6_gmch_remove; 812 + if (IS_HASWELL(dev)) { 813 + dev_priv->gtt.pte_encode = hsw_pte_encode; 814 + } else if (IS_VALLEYVIEW(dev)) { 815 + dev_priv->gtt.pte_encode = byt_pte_encode; 816 + } else { 817 + dev_priv->gtt.pte_encode = gen6_pte_encode; 818 + } 857 819 } 858 820 859 821 ret = dev_priv->gtt.gtt_probe(dev, &dev_priv->gtt.total,
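The GTT rework above stops branching on `IS_HASWELL()`/`IS_VALLEYVIEW()` inside a single `gen6_pte_encode()` and instead installs a per-platform encoder in a `pte_encode` function pointer at init time. The Valleyview bits below (valid, `BYT_PTE_WRITEABLE`, snooped) follow the patch; the gen6 encoder and the init scaffolding are simplified illustrations:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t gtt_pte_t;

#define PTE_VALID		(1 << 0)
#define BYT_PTE_WRITEABLE	(1 << 1)	/* as in the patch */
#define BYT_PTE_SNOOPED		(1 << 2)	/* BYT_PTE_SNOOPED_BY_CPU_CACHES */

enum cache_level { CACHE_NONE, CACHE_LLC };

static gtt_pte_t byt_pte_encode(uint64_t addr, enum cache_level level)
{
	gtt_pte_t pte = PTE_VALID | (gtt_pte_t)addr;

	/* Mark the page writeable: other platforms have no
	 * read-only setting, so this matches their behaviour. */
	pte |= BYT_PTE_WRITEABLE;
	if (level != CACHE_NONE)
		pte |= BYT_PTE_SNOOPED;
	return pte;
}

static gtt_pte_t gen6_pte_encode(uint64_t addr, enum cache_level level)
{
	gtt_pte_t pte = PTE_VALID | (gtt_pte_t)addr;

	/* real gen6 cacheability bits elided; sketch only */
	(void)level;
	return pte;
}

/* Installed once at probe time, then used by every PTE writer
 * (clear_range, insert_entries, ...) without further platform checks. */
static gtt_pte_t (*pte_encode)(uint64_t addr, enum cache_level level);

static void gtt_init(int is_valleyview)
{
	pte_encode = is_valleyview ? byt_pte_encode : gen6_pte_encode;
}
```

The design choice is the usual one for per-platform quirks: pay one indirect call per PTE instead of sprinkling platform conditionals through every hot path.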
+15 -3
drivers/gpu/drm/i915/i915_gem_stolen.c
··· 62 62 * its value of TOLUD. 63 63 */ 64 64 base = 0; 65 - if (INTEL_INFO(dev)->gen >= 6) { 65 + if (IS_VALLEYVIEW(dev)) { 66 + pci_read_config_dword(dev->pdev, 0x5c, &base); 67 + base &= ~((1<<20) - 1); 68 + } else if (INTEL_INFO(dev)->gen >= 6) { 66 69 /* Read Base Data of Stolen Memory Register (BDSM) directly. 67 70 * Note that there is also a MCHBAR miror at 0x1080c0 or 68 71 * we could use device 2:0x5c instead. ··· 139 136 err_fb: 140 137 drm_mm_put_block(compressed_fb); 141 138 err: 139 + pr_info_once("drm: not enough stolen space for compressed buffer (need %d more bytes), disabling. Hint: you may be able to increase stolen memory size in the BIOS to avoid this.\n", size); 142 140 return -ENOSPC; 143 141 } 144 142 ··· 186 182 int i915_gem_init_stolen(struct drm_device *dev) 187 183 { 188 184 struct drm_i915_private *dev_priv = dev->dev_private; 185 + int bios_reserved = 0; 189 186 190 187 dev_priv->mm.stolen_base = i915_stolen_to_physical(dev); 191 188 if (dev_priv->mm.stolen_base == 0) ··· 195 190 DRM_DEBUG_KMS("found %zd bytes of stolen memory at %08lx\n", 196 191 dev_priv->gtt.stolen_size, dev_priv->mm.stolen_base); 197 192 193 + if (IS_VALLEYVIEW(dev)) 194 + bios_reserved = 1024*1024; /* top 1M on VLV/BYT */ 195 + 198 196 /* Basic memrange allocator for stolen space */ 199 - drm_mm_init(&dev_priv->mm.stolen, 0, dev_priv->gtt.stolen_size); 197 + drm_mm_init(&dev_priv->mm.stolen, 0, dev_priv->gtt.stolen_size - 198 + bios_reserved); 200 199 201 200 return 0; 202 201 } ··· 339 330 340 331 /* KISS and expect everything to be page-aligned */ 341 332 BUG_ON(stolen_offset & 4095); 342 - BUG_ON(gtt_offset & 4095); 343 333 BUG_ON(size & 4095); 344 334 345 335 if (WARN_ON(size == 0)) ··· 358 350 drm_mm_put_block(stolen); 359 351 return NULL; 360 352 } 353 + 354 + /* Some objects just need physical mem from stolen space */ 355 + if (gtt_offset == -1) 356 + return obj; 361 357 362 358 /* To simplify the initialisation sequence between KMS and GTT, 363 359 * we 
allow construction of the stolen object prior to
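The stolen-memory hunk above reserves the top 1MiB on VLV/BYT because the BIOS keeps it for itself: `drm_mm_init()` is sized to `stolen_size - bios_reserved` so the allocator never hands those pages out. A trivial sketch of that sizing rule (the helper name is invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

#define SZ_1M (1024 * 1024)

/* On Valleyview/Baytrail the top 1MiB of stolen memory belongs to the
 * BIOS, so the usable range for the drm_mm allocator stops short of it. */
static size_t stolen_usable_size(size_t stolen_size, int is_valleyview)
{
	size_t bios_reserved = is_valleyview ? SZ_1M : 0;

	return stolen_size - bios_reserved;
}
```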
+393 -48
drivers/gpu/drm/i915/i915_irq.c
··· 112 112 } 113 113 } 114 114 115 + static bool ivb_can_enable_err_int(struct drm_device *dev) 116 + { 117 + struct drm_i915_private *dev_priv = dev->dev_private; 118 + struct intel_crtc *crtc; 119 + enum pipe pipe; 120 + 121 + for_each_pipe(pipe) { 122 + crtc = to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]); 123 + 124 + if (crtc->cpu_fifo_underrun_disabled) 125 + return false; 126 + } 127 + 128 + return true; 129 + } 130 + 131 + static bool cpt_can_enable_serr_int(struct drm_device *dev) 132 + { 133 + struct drm_i915_private *dev_priv = dev->dev_private; 134 + enum pipe pipe; 135 + struct intel_crtc *crtc; 136 + 137 + for_each_pipe(pipe) { 138 + crtc = to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]); 139 + 140 + if (crtc->pch_fifo_underrun_disabled) 141 + return false; 142 + } 143 + 144 + return true; 145 + } 146 + 147 + static void ironlake_set_fifo_underrun_reporting(struct drm_device *dev, 148 + enum pipe pipe, bool enable) 149 + { 150 + struct drm_i915_private *dev_priv = dev->dev_private; 151 + uint32_t bit = (pipe == PIPE_A) ? 
DE_PIPEA_FIFO_UNDERRUN : 152 + DE_PIPEB_FIFO_UNDERRUN; 153 + 154 + if (enable) 155 + ironlake_enable_display_irq(dev_priv, bit); 156 + else 157 + ironlake_disable_display_irq(dev_priv, bit); 158 + } 159 + 160 + static void ivybridge_set_fifo_underrun_reporting(struct drm_device *dev, 161 + bool enable) 162 + { 163 + struct drm_i915_private *dev_priv = dev->dev_private; 164 + 165 + if (enable) { 166 + if (!ivb_can_enable_err_int(dev)) 167 + return; 168 + 169 + I915_WRITE(GEN7_ERR_INT, ERR_INT_FIFO_UNDERRUN_A | 170 + ERR_INT_FIFO_UNDERRUN_B | 171 + ERR_INT_FIFO_UNDERRUN_C); 172 + 173 + ironlake_enable_display_irq(dev_priv, DE_ERR_INT_IVB); 174 + } else { 175 + ironlake_disable_display_irq(dev_priv, DE_ERR_INT_IVB); 176 + } 177 + } 178 + 179 + static void ibx_set_fifo_underrun_reporting(struct intel_crtc *crtc, 180 + bool enable) 181 + { 182 + struct drm_device *dev = crtc->base.dev; 183 + struct drm_i915_private *dev_priv = dev->dev_private; 184 + uint32_t bit = (crtc->pipe == PIPE_A) ? SDE_TRANSA_FIFO_UNDER : 185 + SDE_TRANSB_FIFO_UNDER; 186 + 187 + if (enable) 188 + I915_WRITE(SDEIMR, I915_READ(SDEIMR) & ~bit); 189 + else 190 + I915_WRITE(SDEIMR, I915_READ(SDEIMR) | bit); 191 + 192 + POSTING_READ(SDEIMR); 193 + } 194 + 195 + static void cpt_set_fifo_underrun_reporting(struct drm_device *dev, 196 + enum transcoder pch_transcoder, 197 + bool enable) 198 + { 199 + struct drm_i915_private *dev_priv = dev->dev_private; 200 + 201 + if (enable) { 202 + if (!cpt_can_enable_serr_int(dev)) 203 + return; 204 + 205 + I915_WRITE(SERR_INT, SERR_INT_TRANS_A_FIFO_UNDERRUN | 206 + SERR_INT_TRANS_B_FIFO_UNDERRUN | 207 + SERR_INT_TRANS_C_FIFO_UNDERRUN); 208 + 209 + I915_WRITE(SDEIMR, I915_READ(SDEIMR) & ~SDE_ERROR_CPT); 210 + } else { 211 + I915_WRITE(SDEIMR, I915_READ(SDEIMR) | SDE_ERROR_CPT); 212 + } 213 + 214 + POSTING_READ(SDEIMR); 215 + } 216 + 217 + /** 218 + * intel_set_cpu_fifo_underrun_reporting - enable/disable FIFO underrun messages 219 + * @dev: drm device 220 + * @pipe: 
pipe 221 + * @enable: true if we want to report FIFO underrun errors, false otherwise 222 + * 223 + * This function makes us disable or enable CPU fifo underruns for a specific 224 + * pipe. Notice that on some Gens (e.g. IVB, HSW), disabling FIFO underrun 225 + * reporting for one pipe may also disable all the other CPU error interruts for 226 + * the other pipes, due to the fact that there's just one interrupt mask/enable 227 + * bit for all the pipes. 228 + * 229 + * Returns the previous state of underrun reporting. 230 + */ 231 + bool intel_set_cpu_fifo_underrun_reporting(struct drm_device *dev, 232 + enum pipe pipe, bool enable) 233 + { 234 + struct drm_i915_private *dev_priv = dev->dev_private; 235 + struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe]; 236 + struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 237 + unsigned long flags; 238 + bool ret; 239 + 240 + spin_lock_irqsave(&dev_priv->irq_lock, flags); 241 + 242 + ret = !intel_crtc->cpu_fifo_underrun_disabled; 243 + 244 + if (enable == ret) 245 + goto done; 246 + 247 + intel_crtc->cpu_fifo_underrun_disabled = !enable; 248 + 249 + if (IS_GEN5(dev) || IS_GEN6(dev)) 250 + ironlake_set_fifo_underrun_reporting(dev, pipe, enable); 251 + else if (IS_GEN7(dev)) 252 + ivybridge_set_fifo_underrun_reporting(dev, enable); 253 + 254 + done: 255 + spin_unlock_irqrestore(&dev_priv->irq_lock, flags); 256 + return ret; 257 + } 258 + 259 + /** 260 + * intel_set_pch_fifo_underrun_reporting - enable/disable FIFO underrun messages 261 + * @dev: drm device 262 + * @pch_transcoder: the PCH transcoder (same as pipe on IVB and older) 263 + * @enable: true if we want to report FIFO underrun errors, false otherwise 264 + * 265 + * This function makes us disable or enable PCH fifo underruns for a specific 266 + * PCH transcoder. Notice that on some PCHs (e.g. 
CPT/PPT), disabling FIFO 267 + * underrun reporting for one transcoder may also disable all the other PCH 268 + * error interruts for the other transcoders, due to the fact that there's just 269 + * one interrupt mask/enable bit for all the transcoders. 270 + * 271 + * Returns the previous state of underrun reporting. 272 + */ 273 + bool intel_set_pch_fifo_underrun_reporting(struct drm_device *dev, 274 + enum transcoder pch_transcoder, 275 + bool enable) 276 + { 277 + struct drm_i915_private *dev_priv = dev->dev_private; 278 + enum pipe p; 279 + struct drm_crtc *crtc; 280 + struct intel_crtc *intel_crtc; 281 + unsigned long flags; 282 + bool ret; 283 + 284 + if (HAS_PCH_LPT(dev)) { 285 + crtc = NULL; 286 + for_each_pipe(p) { 287 + struct drm_crtc *c = dev_priv->pipe_to_crtc_mapping[p]; 288 + if (intel_pipe_has_type(c, INTEL_OUTPUT_ANALOG)) { 289 + crtc = c; 290 + break; 291 + } 292 + } 293 + if (!crtc) { 294 + DRM_ERROR("PCH FIFO underrun, but no CRTC using the PCH found\n"); 295 + return false; 296 + } 297 + } else { 298 + crtc = dev_priv->pipe_to_crtc_mapping[pch_transcoder]; 299 + } 300 + intel_crtc = to_intel_crtc(crtc); 301 + 302 + spin_lock_irqsave(&dev_priv->irq_lock, flags); 303 + 304 + ret = !intel_crtc->pch_fifo_underrun_disabled; 305 + 306 + if (enable == ret) 307 + goto done; 308 + 309 + intel_crtc->pch_fifo_underrun_disabled = !enable; 310 + 311 + if (HAS_PCH_IBX(dev)) 312 + ibx_set_fifo_underrun_reporting(intel_crtc, enable); 313 + else 314 + cpt_set_fifo_underrun_reporting(dev, pch_transcoder, enable); 315 + 316 + done: 317 + spin_unlock_irqrestore(&dev_priv->irq_lock, flags); 318 + return ret; 319 + } 320 + 321 + 115 322 void 116 323 i915_enable_pipestat(drm_i915_private_t *dev_priv, int pipe, u32 mask) 117 324 { ··· 349 142 } 350 143 351 144 /** 352 - * intel_enable_asle - enable ASLE interrupt for OpRegion 145 + * i915_enable_asle_pipestat - enable ASLE pipestat for OpRegion 353 146 */ 354 - void intel_enable_asle(struct drm_device *dev) 147 + 
drivers/gpu/drm/i915/i915_irq.c

 static void i915_enable_asle_pipestat(struct drm_device *dev)
 {
	drm_i915_private_t *dev_priv = dev->dev_private;
	unsigned long irqflags;

-	/* FIXME: opregion/asle for VLV */
-	if (IS_VALLEYVIEW(dev))
+	if (!dev_priv->opregion.asle || !IS_MOBILE(dev))
		return;

	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);

-	if (HAS_PCH_SPLIT(dev))
-		ironlake_enable_display_irq(dev_priv, DE_GSE);
-	else {
-		i915_enable_pipestat(dev_priv, 1,
-				     PIPE_LEGACY_BLC_EVENT_ENABLE);
-		if (INTEL_INFO(dev)->gen >= 4)
-			i915_enable_pipestat(dev_priv, 0,
-					     PIPE_LEGACY_BLC_EVENT_ENABLE);
-	}
+	i915_enable_pipestat(dev_priv, 1, PIPE_LEGACY_BLC_EVENT_ENABLE);
+	if (INTEL_INFO(dev)->gen >= 4)
+		i915_enable_pipestat(dev_priv, 0, PIPE_LEGACY_BLC_EVENT_ENABLE);

	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
 }
···
	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
	enum transcoder cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv,
								      pipe);
+
+	if (!intel_display_power_enabled(dev,
+					 POWER_DOMAIN_TRANSCODER(cpu_transcoder)))
+		return false;

	return I915_READ(PIPECONF(cpu_transcoder)) & PIPECONF_ENABLE;
 }
···
						      crtc);
 }

+static int intel_hpd_irq_event(struct drm_device *dev, struct drm_connector *connector)
+{
+	enum drm_connector_status old_status;
+
+	WARN_ON(!mutex_is_locked(&dev->mode_config.mutex));
+	old_status = connector->status;
+
+	connector->status = connector->funcs->detect(connector, false);
+	DRM_DEBUG_KMS("[CONNECTOR:%d:%s] status updated from %d to %d\n",
+		      connector->base.id,
+		      drm_get_connector_name(connector),
+		      old_status, connector->status);
+	return (old_status != connector->status);
+}
+
 /*
  * Handle hotplug events outside the interrupt handler proper.
  */
···
	struct drm_connector *connector;
	unsigned long irqflags;
	bool hpd_disabled = false;
+	bool changed = false;
+	u32 hpd_event_bits;

	/* HPD irq before everything is fully set up. */
	if (!dev_priv->enable_hotplug_processing)
···
	DRM_DEBUG_KMS("running encoder hotplug functions\n");

	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
+
+	hpd_event_bits = dev_priv->hpd_event_bits;
+	dev_priv->hpd_event_bits = 0;
	list_for_each_entry(connector, &mode_config->connector_list, head) {
		intel_connector = to_intel_connector(connector);
		intel_encoder = intel_connector->encoder;
···
				| DRM_CONNECTOR_POLL_DISCONNECT;
			hpd_disabled = true;
		}
+		if (hpd_event_bits & (1 << intel_encoder->hpd_pin)) {
+			DRM_DEBUG_KMS("Connector %s (pin %i) received hotplug event.\n",
+				      drm_get_connector_name(connector), intel_encoder->hpd_pin);
+		}
	}
	/* if there were no outputs to poll, poll was disabled,
	 * therefore make sure it's enabled when disabling HPD on
···

	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);

-	list_for_each_entry(intel_encoder, &mode_config->encoder_list, base.head)
-		if (intel_encoder->hot_plug)
-			intel_encoder->hot_plug(intel_encoder);
-
+	list_for_each_entry(connector, &mode_config->connector_list, head) {
+		intel_connector = to_intel_connector(connector);
+		intel_encoder = intel_connector->encoder;
+		if (hpd_event_bits & (1 << intel_encoder->hpd_pin)) {
+			if (intel_encoder->hot_plug)
+				intel_encoder->hot_plug(intel_encoder);
+			if (intel_hpd_irq_event(dev, connector))
+				changed = true;
+		}
+	}
	mutex_unlock(&mode_config->mutex);

-	/* Just fire off a uevent and let userspace tell us what to do */
-	drm_helper_hpd_irq_event(dev);
+	if (changed)
+		drm_kms_helper_hotplug_event(dev);
 }

 static void ironlake_handle_rps_change(struct drm_device *dev)
···
	 */
	if (!(new_delay > dev_priv->rps.max_delay ||
	      new_delay < dev_priv->rps.min_delay)) {
-		gen6_set_rps(dev_priv->dev, new_delay);
+		if (IS_VALLEYVIEW(dev_priv->dev))
+			valleyview_set_rps(dev_priv->dev, new_delay);
+		else
+			gen6_set_rps(dev_priv->dev, new_delay);
+	}
+
+	if (IS_VALLEYVIEW(dev_priv->dev)) {
+		/*
+		 * On VLV, when we enter RC6 we may not be at the minimum
+		 * voltage level, so arm a timer to check.  It should only
+		 * fire when there's activity or once after we've entered
+		 * RC6, and then won't be re-armed until the next RPS interrupt.
+		 */
+		mod_delayed_work(dev_priv->wq, &dev_priv->rps.vlv_work,
+				 msecs_to_jiffies(100));
	}

	mutex_unlock(&dev_priv->rps.hw_lock);
···
	    dev_priv->hpd_stats[i].hpd_mark != HPD_ENABLED)
		continue;

+		dev_priv->hpd_event_bits |= (1 << i);
		if (!time_in_range(jiffies, dev_priv->hpd_stats[i].hpd_last_jiffies,
				   dev_priv->hpd_stats[i].hpd_last_jiffies
				   + msecs_to_jiffies(HPD_STORM_DETECT_PERIOD))) {
···
			dev_priv->hpd_stats[i].hpd_cnt = 0;
		} else if (dev_priv->hpd_stats[i].hpd_cnt > HPD_STORM_THRESHOLD) {
			dev_priv->hpd_stats[i].hpd_mark = HPD_MARK_DISABLED;
+			dev_priv->hpd_event_bits &= ~(1 << i);
			DRM_DEBUG_KMS("HPD interrupt storm detected on PIN %d\n", i);
			ret = true;
		} else {
···
		ibx_hpd_irq_setup(dev);
		queue_work(dev_priv->wq, &dev_priv->hotplug_work);
	}
-	if (pch_iir & SDE_AUDIO_POWER_MASK)
+	if (pch_iir & SDE_AUDIO_POWER_MASK) {
+		int port = ffs((pch_iir & SDE_AUDIO_POWER_MASK) >>
+			       SDE_AUDIO_POWER_SHIFT);
		DRM_DEBUG_DRIVER("PCH audio power change on port %d\n",
-				 (pch_iir & SDE_AUDIO_POWER_MASK) >>
-				 SDE_AUDIO_POWER_SHIFT);
+				 port_name(port));
+	}

	if (pch_iir & SDE_AUX_MASK)
		dp_aux_irq_handler(dev);
···
	if (pch_iir & (SDE_TRANSB_CRC_ERR | SDE_TRANSA_CRC_ERR))
		DRM_DEBUG_DRIVER("PCH transcoder CRC error interrupt\n");

-	if (pch_iir & SDE_TRANSB_FIFO_UNDER)
-		DRM_DEBUG_DRIVER("PCH transcoder B underrun interrupt\n");
	if (pch_iir & SDE_TRANSA_FIFO_UNDER)
-		DRM_DEBUG_DRIVER("PCH transcoder A underrun interrupt\n");
+		if (intel_set_pch_fifo_underrun_reporting(dev, TRANSCODER_A,
+							  false))
+			DRM_DEBUG_DRIVER("PCH transcoder A FIFO underrun\n");
+
+	if (pch_iir & SDE_TRANSB_FIFO_UNDER)
+		if (intel_set_pch_fifo_underrun_reporting(dev, TRANSCODER_B,
+							  false))
+			DRM_DEBUG_DRIVER("PCH transcoder B FIFO underrun\n");
+}
+
+static void ivb_err_int_handler(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	u32 err_int = I915_READ(GEN7_ERR_INT);
+
+	if (err_int & ERR_INT_POISON)
+		DRM_ERROR("Poison interrupt\n");
+
+	if (err_int & ERR_INT_FIFO_UNDERRUN_A)
+		if (intel_set_cpu_fifo_underrun_reporting(dev, PIPE_A, false))
+			DRM_DEBUG_DRIVER("Pipe A FIFO underrun\n");
+
+	if (err_int & ERR_INT_FIFO_UNDERRUN_B)
+		if (intel_set_cpu_fifo_underrun_reporting(dev, PIPE_B, false))
+			DRM_DEBUG_DRIVER("Pipe B FIFO underrun\n");
+
+	if (err_int & ERR_INT_FIFO_UNDERRUN_C)
+		if (intel_set_cpu_fifo_underrun_reporting(dev, PIPE_C, false))
+			DRM_DEBUG_DRIVER("Pipe C FIFO underrun\n");
+
+	I915_WRITE(GEN7_ERR_INT, err_int);
+}
+
+static void cpt_serr_int_handler(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	u32 serr_int = I915_READ(SERR_INT);
+
+	if (serr_int & SERR_INT_POISON)
+		DRM_ERROR("PCH poison interrupt\n");
+
+	if (serr_int & SERR_INT_TRANS_A_FIFO_UNDERRUN)
+		if (intel_set_pch_fifo_underrun_reporting(dev, TRANSCODER_A,
+							  false))
+			DRM_DEBUG_DRIVER("PCH transcoder A FIFO underrun\n");
+
+	if (serr_int & SERR_INT_TRANS_B_FIFO_UNDERRUN)
+		if (intel_set_pch_fifo_underrun_reporting(dev, TRANSCODER_B,
+							  false))
+			DRM_DEBUG_DRIVER("PCH transcoder B FIFO underrun\n");
+
+	if (serr_int & SERR_INT_TRANS_C_FIFO_UNDERRUN)
+		if (intel_set_pch_fifo_underrun_reporting(dev, TRANSCODER_C,
+							  false))
+			DRM_DEBUG_DRIVER("PCH transcoder C FIFO underrun\n");
+
+	I915_WRITE(SERR_INT, serr_int);
 }

 static void cpt_irq_handler(struct drm_device *dev, u32 pch_iir)
···
		ibx_hpd_irq_setup(dev);
		queue_work(dev_priv->wq, &dev_priv->hotplug_work);
	}
-	if (pch_iir & SDE_AUDIO_POWER_MASK_CPT)
-		DRM_DEBUG_DRIVER("PCH audio power change on port %d\n",
-				 (pch_iir & SDE_AUDIO_POWER_MASK_CPT) >>
-				 SDE_AUDIO_POWER_SHIFT_CPT);
+	if (pch_iir & SDE_AUDIO_POWER_MASK_CPT) {
+		int port = ffs((pch_iir & SDE_AUDIO_POWER_MASK_CPT) >>
+			       SDE_AUDIO_POWER_SHIFT_CPT);
+		DRM_DEBUG_DRIVER("PCH audio power change on port %c\n",
+				 port_name(port));
+	}

	if (pch_iir & SDE_AUX_MASK_CPT)
		dp_aux_irq_handler(dev);
···
		DRM_DEBUG_DRIVER("  pipe %c FDI IIR: 0x%08x\n",
				 pipe_name(pipe),
				 I915_READ(FDI_RX_IIR(pipe)));
+
+	if (pch_iir & SDE_ERROR_CPT)
+		cpt_serr_int_handler(dev);
 }

 static irqreturn_t ivybridge_irq_handler(int irq, void *arg)
···
	int i;

	atomic_inc(&dev_priv->irq_received);
+
+	/* We get interrupts on unclaimed registers, so check for this before we
+	 * do any I915_{READ,WRITE}. */
+	if (IS_HASWELL(dev) &&
+	    (I915_READ_NOTRACE(FPGA_DBG) & FPGA_DBG_RM_NOCLAIM)) {
+		DRM_ERROR("Unclaimed register before interrupt\n");
+		I915_WRITE_NOTRACE(FPGA_DBG, FPGA_DBG_RM_NOCLAIM);
+	}

	/* disable master interrupt before clearing iir  */
	de_ier = I915_READ(DEIER);
···
		POSTING_READ(SDEIER);
	}

+	/* On Haswell, also mask ERR_INT because we don't want to risk
+	 * generating "unclaimed register" interrupts from inside the interrupt
+	 * handler. */
+	if (IS_HASWELL(dev))
+		ironlake_disable_display_irq(dev_priv, DE_ERR_INT_IVB);
+
	gt_iir = I915_READ(GTIIR);
	if (gt_iir) {
		snb_gt_irq_handler(dev, dev_priv, gt_iir);
···
	de_iir = I915_READ(DEIIR);
	if (de_iir) {
+		if (de_iir & DE_ERR_INT_IVB)
+			ivb_err_int_handler(dev);
+
		if (de_iir & DE_AUX_CHANNEL_A_IVB)
			dp_aux_irq_handler(dev);

		if (de_iir & DE_GSE_IVB)
-			intel_opregion_gse_intr(dev);
+			intel_opregion_asle_intr(dev);

		for (i = 0; i < 3; i++) {
			if (de_iir & (DE_PIPEA_VBLANK_IVB << (5 * i)))
···
		I915_WRITE(GEN6_PMIIR, pm_iir);
		ret = IRQ_HANDLED;
	}
+
+	if (IS_HASWELL(dev) && ivb_can_enable_err_int(dev))
+		ironlake_enable_display_irq(dev_priv, DE_ERR_INT_IVB);

	I915_WRITE(DEIER, de_ier);
	POSTING_READ(DEIER);
···
		dp_aux_irq_handler(dev);

	if (de_iir & DE_GSE)
-		intel_opregion_gse_intr(dev);
+		intel_opregion_asle_intr(dev);

	if (de_iir & DE_PIPEA_VBLANK)
		drm_handle_vblank(dev, 0);

	if (de_iir & DE_PIPEB_VBLANK)
		drm_handle_vblank(dev, 1);
+
+	if (de_iir & DE_POISON)
+		DRM_ERROR("Poison interrupt\n");
+
+	if (de_iir & DE_PIPEA_FIFO_UNDERRUN)
+		if (intel_set_cpu_fifo_underrun_reporting(dev, PIPE_A, false))
+			DRM_DEBUG_DRIVER("Pipe A FIFO underrun\n");
+
+	if (de_iir & DE_PIPEB_FIFO_UNDERRUN)
+		if (intel_set_cpu_fifo_underrun_reporting(dev, PIPE_B, false))
+			DRM_DEBUG_DRIVER("Pipe B FIFO underrun\n");

	if (de_iir & DE_PLANEA_FLIP_DONE) {
		intel_prepare_page_flip(dev, 0);
···
	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
	u32 mask;

-	if (HAS_PCH_IBX(dev))
-		mask = SDE_GMBUS | SDE_AUX_MASK;
-	else
-		mask = SDE_GMBUS_CPT | SDE_AUX_MASK_CPT;
+	if (HAS_PCH_IBX(dev)) {
+		mask = SDE_GMBUS | SDE_AUX_MASK | SDE_TRANSB_FIFO_UNDER |
+		       SDE_TRANSA_FIFO_UNDER | SDE_POISON;
+	} else {
+		mask = SDE_GMBUS_CPT | SDE_AUX_MASK_CPT | SDE_ERROR_CPT;
+
+		I915_WRITE(SERR_INT, I915_READ(SERR_INT));
+	}

	if (HAS_PCH_NOP(dev))
		return;
···
	/* enable kind of interrupts always enabled */
	u32 display_mask = DE_MASTER_IRQ_CONTROL | DE_GSE | DE_PCH_EVENT |
			   DE_PLANEA_FLIP_DONE | DE_PLANEB_FLIP_DONE |
-			   DE_AUX_CHANNEL_A;
+			   DE_AUX_CHANNEL_A | DE_PIPEB_FIFO_UNDERRUN |
+			   DE_PIPEA_FIFO_UNDERRUN | DE_POISON;
	u32 render_irqs;

	dev_priv->irq_mask = ~display_mask;
···
		DE_PLANEC_FLIP_DONE_IVB |
		DE_PLANEB_FLIP_DONE_IVB |
		DE_PLANEA_FLIP_DONE_IVB |
-		DE_AUX_CHANNEL_A_IVB;
+		DE_AUX_CHANNEL_A_IVB |
+		DE_ERR_INT_IVB;
	u32 render_irqs;

	dev_priv->irq_mask = ~display_mask;

	/* should always can generate irq */
+	I915_WRITE(GEN7_ERR_INT, I915_READ(GEN7_ERR_INT));
	I915_WRITE(DEIIR, I915_READ(DEIIR));
	I915_WRITE(DEIMR, dev_priv->irq_mask);
	I915_WRITE(DEIER,
···
	u32 enable_mask;
	u32 pipestat_enable = PLANE_FLIP_DONE_INT_EN_VLV;
	u32 render_irqs;
-	u16 msid;

	enable_mask = I915_DISPLAY_PORT_INTERRUPT;
	enable_mask |= I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
···
	dev_priv->irq_mask = (~enable_mask) |
		I915_DISPLAY_PIPE_A_VBLANK_INTERRUPT |
		I915_DISPLAY_PIPE_B_VBLANK_INTERRUPT;
-
-	/* Hack for broken MSIs on VLV */
-	pci_write_config_dword(dev_priv->dev->pdev, 0x94, 0xfee00000);
-	pci_read_config_word(dev->pdev, 0x98, &msid);
-	msid &= 0xff; /* mask out delivery bits */
-	msid |= (1<<14);
-	pci_write_config_word(dev_priv->dev->pdev, 0x98, msid);

	I915_WRITE(PORT_HOTPLUG_EN, 0);
	POSTING_READ(PORT_HOTPLUG_EN);
···
	I915_WRITE(DEIMR, 0xffffffff);
	I915_WRITE(DEIER, 0x0);
	I915_WRITE(DEIIR, I915_READ(DEIIR));
+	if (IS_GEN7(dev))
+		I915_WRITE(GEN7_ERR_INT, I915_READ(GEN7_ERR_INT));

	I915_WRITE(GTIMR, 0xffffffff);
	I915_WRITE(GTIER, 0x0);
···
	I915_WRITE(SDEIMR, 0xffffffff);
	I915_WRITE(SDEIER, 0x0);
	I915_WRITE(SDEIIR, I915_READ(SDEIIR));
+	if (HAS_PCH_CPT(dev) || HAS_PCH_LPT(dev))
+		I915_WRITE(SERR_INT, I915_READ(SERR_INT));
 }

 static void i8xx_irq_preinstall(struct drm_device * dev)
···
	I915_WRITE(IER, enable_mask);
	POSTING_READ(IER);

-	intel_opregion_enable_asle(dev);
+	i915_enable_asle_pipestat(dev);

	return 0;
 }
···
	I915_WRITE(PORT_HOTPLUG_EN, 0);
	POSTING_READ(PORT_HOTPLUG_EN);

-	intel_opregion_enable_asle(dev);
+	i915_enable_asle_pipestat(dev);

	return 0;
 }
drivers/gpu/drm/i915/i915_reg.h  +258 -98
···
 * 0x8100: fast clock controls
 *
 * DPIO is VLV only.
+ *
+ * Note: digital port B is DDI0, digital port C is DDI1
 */
#define DPIO_PKT			(VLV_DISPLAY_BASE + 0x2100)
#define  DPIO_RID			(0<<24)
···
#define  DPIO_SFR_BYPASS		(1<<1)
#define  DPIO_RESET			(1<<0)

+#define _DPIO_TX3_SWING_CTL4_A		0x690
+#define _DPIO_TX3_SWING_CTL4_B		0x2a90
+#define DPIO_TX3_SWING_CTL4(pipe) _PIPE(pipe, _DPIO_TX_SWING_CTL4_A, \
+					_DPIO_TX3_SWING_CTL4_B)
+
+/*
+ * Per pipe/PLL DPIO regs
+ */
#define _DPIO_DIV_A			0x800c
#define   DPIO_POST_DIV_SHIFT		(28) /* 3 bits */
+#define   DPIO_POST_DIV_DAC		0
+#define   DPIO_POST_DIV_HDMIDP		1 /* DAC 225-400M rate */
+#define   DPIO_POST_DIV_LVDS1		2
+#define   DPIO_POST_DIV_LVDS2		3
#define   DPIO_K_SHIFT			(24) /* 4 bits */
#define   DPIO_P1_SHIFT			(21) /* 3 bits */
#define   DPIO_P2_SHIFT			(16) /* 5 bits */
···
#define _DPIO_CORE_CLK_B		0x803c
#define DPIO_CORE_CLK(pipe) _PIPE(pipe, _DPIO_CORE_CLK_A, _DPIO_CORE_CLK_B)

+#define _DPIO_IREF_CTL_A		0x8040
+#define _DPIO_IREF_CTL_B		0x8060
+#define DPIO_IREF_CTL(pipe) _PIPE(pipe, _DPIO_IREF_CTL_A, _DPIO_IREF_CTL_B)
+
+#define DPIO_IREF_BCAST			0xc044
+#define _DPIO_IREF_A			0x8044
+#define _DPIO_IREF_B			0x8064
+#define DPIO_IREF(pipe) _PIPE(pipe, _DPIO_IREF_A, _DPIO_IREF_B)
+
+#define _DPIO_PLL_CML_A			0x804c
+#define _DPIO_PLL_CML_B			0x806c
+#define DPIO_PLL_CML(pipe) _PIPE(pipe, _DPIO_PLL_CML_A, _DPIO_PLL_CML_B)
+
#define _DPIO_LFP_COEFF_A		0x8048
#define _DPIO_LFP_COEFF_B		0x8068
#define DPIO_LFP_COEFF(pipe) _PIPE(pipe, _DPIO_LFP_COEFF_A, _DPIO_LFP_COEFF_B)

+#define DPIO_CALIBRATION		0x80ac
+
#define DPIO_FASTCLK_DISABLE		0x8100

-#define DPIO_DATA_CHANNEL1		0x8220
-#define DPIO_DATA_CHANNEL2		0x8420
+/*
+ * Per DDI channel DPIO regs
+ */
+
+#define _DPIO_PCS_TX_0			0x8200
+#define _DPIO_PCS_TX_1			0x8400
+#define   DPIO_PCS_TX_LANE2_RESET	(1<<16)
+#define   DPIO_PCS_TX_LANE1_RESET	(1<<7)
+#define DPIO_PCS_TX(port) _PORT(port, _DPIO_PCS_TX_0, _DPIO_PCS_TX_1)
+
+#define _DPIO_PCS_CLK_0			0x8204
+#define _DPIO_PCS_CLK_1			0x8404
+#define   DPIO_PCS_CLK_CRI_RXEB_EIOS_EN	(1<<22)
+#define   DPIO_PCS_CLK_CRI_RXDIGFILTSG_EN (1<<21)
+#define   DPIO_PCS_CLK_DATAWIDTH_SHIFT	(6)
+#define   DPIO_PCS_CLK_SOFT_RESET	(1<<5)
+#define DPIO_PCS_CLK(port) _PORT(port, _DPIO_PCS_CLK_0, _DPIO_PCS_CLK_1)
+
+#define _DPIO_PCS_CTL_OVR1_A		0x8224
+#define _DPIO_PCS_CTL_OVR1_B		0x8424
+#define DPIO_PCS_CTL_OVER1(port) _PORT(port, _DPIO_PCS_CTL_OVR1_A, \
+				       _DPIO_PCS_CTL_OVR1_B)
+
+#define _DPIO_PCS_STAGGER0_A		0x822c
+#define _DPIO_PCS_STAGGER0_B		0x842c
+#define DPIO_PCS_STAGGER0(port) _PORT(port, _DPIO_PCS_STAGGER0_A, \
+				      _DPIO_PCS_STAGGER0_B)
+
+#define _DPIO_PCS_STAGGER1_A		0x8230
+#define _DPIO_PCS_STAGGER1_B		0x8430
+#define DPIO_PCS_STAGGER1(port) _PORT(port, _DPIO_PCS_STAGGER1_A, \
+				      _DPIO_PCS_STAGGER1_B)
+
+#define _DPIO_PCS_CLOCKBUF0_A		0x8238
+#define _DPIO_PCS_CLOCKBUF0_B		0x8438
+#define DPIO_PCS_CLOCKBUF0(port) _PORT(port, _DPIO_PCS_CLOCKBUF0_A, \
+				       _DPIO_PCS_CLOCKBUF0_B)
+
+#define _DPIO_PCS_CLOCKBUF8_A		0x825c
+#define _DPIO_PCS_CLOCKBUF8_B		0x845c
+#define DPIO_PCS_CLOCKBUF8(port) _PORT(port, _DPIO_PCS_CLOCKBUF8_A, \
+				       _DPIO_PCS_CLOCKBUF8_B)
+
+#define _DPIO_TX_SWING_CTL2_A		0x8288
+#define _DPIO_TX_SWING_CTL2_B		0x8488
+#define DPIO_TX_SWING_CTL2(port) _PORT(port, _DPIO_TX_SWING_CTL2_A, \
+				       _DPIO_TX_SWING_CTL2_B)
+
+#define _DPIO_TX_SWING_CTL3_A		0x828c
+#define _DPIO_TX_SWING_CTL3_B		0x848c
+#define DPIO_TX_SWING_CTL3(port) _PORT(port, _DPIO_TX_SWING_CTL3_A, \
+				       _DPIO_TX_SWING_CTL3_B)
+
+#define _DPIO_TX_SWING_CTL4_A		0x8290
+#define _DPIO_TX_SWING_CTL4_B		0x8490
+#define DPIO_TX_SWING_CTL4(port) _PORT(port, _DPIO_TX_SWING_CTL4_A, \
+				       _DPIO_TX_SWING_CTL4_B)
+
+#define _DPIO_TX_OCALINIT_0		0x8294
+#define _DPIO_TX_OCALINIT_1		0x8494
+#define   DPIO_TX_OCALINIT_EN		(1<<31)
+#define DPIO_TX_OCALINIT(port) _PORT(port, _DPIO_TX_OCALINIT_0, \
+				     _DPIO_TX_OCALINIT_1)
+
+#define _DPIO_TX_CTL_0			0x82ac
+#define _DPIO_TX_CTL_1			0x84ac
+#define DPIO_TX_CTL(port) _PORT(port, _DPIO_TX_CTL_0, _DPIO_TX_CTL_1)
+
+#define _DPIO_TX_LANE_0			0x82b8
+#define _DPIO_TX_LANE_1			0x84b8
+#define DPIO_TX_LANE(port) _PORT(port, _DPIO_TX_LANE_0, _DPIO_TX_LANE_1)
+
+#define _DPIO_DATA_CHANNEL1		0x8220
+#define _DPIO_DATA_CHANNEL2		0x8420
+#define DPIO_DATA_CHANNEL(port) _PORT(port, _DPIO_DATA_CHANNEL1, _DPIO_DATA_CHANNEL2)
+
+#define _DPIO_PORT0_PCS0		0x0220
+#define _DPIO_PORT0_PCS1		0x0420
+#define _DPIO_PORT1_PCS2		0x2620
+#define _DPIO_PORT1_PCS3		0x2820
+#define DPIO_DATA_LANE_A(port) _PORT(port, _DPIO_PORT0_PCS0, _DPIO_PORT1_PCS2)
+#define DPIO_DATA_LANE_B(port) _PORT(port, _DPIO_PORT0_PCS1, _DPIO_PORT1_PCS3)
+#define DPIO_DATA_CHANNEL1		0x8220
+#define DPIO_DATA_CHANNEL2		0x8420

/*
 * Fence registers
···
#define ERROR_GEN6			0x040a0
#define GEN7_ERR_INT			0x44040
-#define   ERR_INT_MMIO_UNCLAIMED	(1<<13)
+#define   ERR_INT_POISON		(1<<31)
+#define   ERR_INT_MMIO_UNCLAIMED	(1<<13)
+#define   ERR_INT_FIFO_UNDERRUN_C	(1<<6)
+#define   ERR_INT_FIFO_UNDERRUN_B	(1<<3)
+#define   ERR_INT_FIFO_UNDERRUN_A	(1<<0)

#define FPGA_DBG			0x42300
#define   FPGA_DBG_RM_NOCLAIM		(1<<31)
···
#define VLV_IIR		(VLV_DISPLAY_BASE + 0x20a4)
#define VLV_IMR		(VLV_DISPLAY_BASE + 0x20a8)
#define VLV_ISR		(VLV_DISPLAY_BASE + 0x20ac)
+#define VLV_PCBR	(VLV_DISPLAY_BASE + 0x2120)
#define I915_PIPE_CONTROL_NOTIFY_INTERRUPT		(1<<18)
#define I915_DISPLAY_PORT_INTERRUPT			(1<<17)
#define I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT	(1<<15)
···
#define   DPFC_CTL_EN			(1<<31)
#define   DPFC_CTL_PLANEA		(0<<30)
#define   DPFC_CTL_PLANEB		(1<<30)
+#define   IVB_DPFC_CTL_PLANE_SHIFT	(29)
#define   DPFC_CTL_FENCE_EN		(1<<29)
+#define   IVB_DPFC_CTL_FENCE_EN		(1<<28)
#define   DPFC_CTL_PERSISTENT_MODE	(1<<25)
#define   DPFC_SR_EN			(1<<10)
#define   DPFC_CTL_LIMIT_1X		(0<<6)
···
#define ILK_DPFC_CHICKEN	0x43224
#define ILK_FBC_RT_BASE		0x2128
#define   ILK_FBC_RT_VALID	(1<<0)
+#define   SNB_FBC_FRONT_BUFFER	(1<<1)

#define ILK_DISPLAY_CHICKEN1	0x42000
#define   ILK_FBCQ_DIS		(1<<22)
···
#define   SNB_CPU_FENCE_ENABLE	(1<<29)
#define DPFC_CPU_FENCE_OFFSET	0x100104

+/* Framebuffer compression for Ivybridge */
+#define IVB_FBC_RT_BASE			0x7020
+
+#define _HSW_PIPE_SLICE_CHICKEN_1_A	0x420B0
+#define _HSW_PIPE_SLICE_CHICKEN_1_B	0x420B4
+#define   HSW_BYPASS_FBC_QUEUE		(1<<22)
+#define HSW_PIPE_SLICE_CHICKEN_1(pipe) _PIPE(pipe, \
+					     _HSW_PIPE_SLICE_CHICKEN_1_A, \
+					     _HSW_PIPE_SLICE_CHICKEN_1_B)
+
+#define HSW_CLKGATE_DISABLE_PART_1	0x46500
+#define   HSW_DPFC_GATING_DISABLE	(1<<23)

/*
 * GPIO regs
···
#define   DPLL_FPA01_P1_POST_DIV_MASK	0x00ff0000 /* i915 */
#define   DPLL_FPA01_P1_POST_DIV_MASK_PINEVIEW	0x00ff8000 /* Pineview */
#define   DPLL_LOCK_VLV			(1<<15)
+#define   DPLL_INTEGRATED_CRI_CLK_VLV	(1<<14)
#define   DPLL_INTEGRATED_CLOCK_VLV	(1<<13)
+#define   DPLL_PORTC_READY_MASK		(0xf << 4)
+#define   DPLL_PORTB_READY_MASK		(0xf)

#define   DPLL_FPA01_P1_POST_DIV_MASK_I830	0x001f0000
/*
···
#define   BLM_PIPE_A			(0 << 29)
#define   BLM_PIPE_B			(1 << 29)
#define   BLM_PIPE_C			(2 << 29) /* ivb + */
+#define   BLM_TRANSCODER_A		BLM_PIPE_A /* hsw */
+#define   BLM_TRANSCODER_B		BLM_PIPE_B
+#define   BLM_TRANSCODER_C		BLM_PIPE_C
+#define   BLM_TRANSCODER_EDP		(3 << 29)
#define   BLM_PIPE(pipe)		((pipe) << 29)
#define   BLM_POLARITY_I965		(1 << 28) /* gen4 only */
#define   BLM_PHASE_IN_INTERUPT_STATUS	(1 << 26)
···
#define  DP_PRE_EMPHASIS_SHIFT		22

/* How many wires to use. I guess 3 was too hard */
-#define   DP_PORT_WIDTH_1		(0 << 19)
-#define   DP_PORT_WIDTH_2		(1 << 19)
-#define   DP_PORT_WIDTH_4		(3 << 19)
+#define   DP_PORT_WIDTH(width)		(((width) - 1) << 19)
#define   DP_PORT_WIDTH_MASK		(7 << 19)

/* Mystic DPCD version 1.1 special mode */
···
 * which is after the LUTs, so we want the bytes for our color format.
 * For our current usage, this is always 3, one byte for R, G and B.
 */
-#define _PIPEA_GMCH_DATA_M		0x70050
-#define _PIPEB_GMCH_DATA_M		0x71050
+#define _PIPEA_DATA_M_G4X		0x70050
+#define _PIPEB_DATA_M_G4X		0x71050

/* Transfer unit size for display port - 1, default is 0x3f (for TU size 64) */
#define  TU_SIZE(x)		(((x)-1) << 25) /* default size 64 */
+#define  TU_SIZE_SHIFT		25
#define  TU_SIZE_MASK		(0x3f << 25)

#define  DATA_LINK_M_N_MASK	(0xffffff)
#define  DATA_LINK_N_MAX	(0x800000)

-#define _PIPEA_GMCH_DATA_N		0x70054
-#define _PIPEB_GMCH_DATA_N		0x71054
+#define _PIPEA_DATA_N_G4X		0x70054
+#define _PIPEB_DATA_N_G4X		0x71054
+#define   PIPE_GMCH_DATA_N_MASK		(0xffffff)

/*
 * Computing Link M and N values for the Display Port link
···
 * Attributes and VB-ID.
 */

-#define _PIPEA_DP_LINK_M		0x70060
-#define _PIPEB_DP_LINK_M		0x71060
+#define _PIPEA_LINK_M_G4X		0x70060
+#define _PIPEB_LINK_M_G4X		0x71060
+#define   PIPEA_DP_LINK_M_MASK		(0xffffff)

-#define _PIPEA_DP_LINK_N		0x70064
-#define _PIPEB_DP_LINK_N		0x71064
+#define _PIPEA_LINK_N_G4X		0x70064
+#define _PIPEB_LINK_N_G4X		0x71064
+#define   PIPEA_DP_LINK_N_MASK		(0xffffff)

-#define PIPE_GMCH_DATA_M(pipe) _PIPE(pipe, _PIPEA_GMCH_DATA_M, _PIPEB_GMCH_DATA_M)
-#define PIPE_GMCH_DATA_N(pipe) _PIPE(pipe, _PIPEA_GMCH_DATA_N, _PIPEB_GMCH_DATA_N)
-#define PIPE_DP_LINK_M(pipe) _PIPE(pipe, _PIPEA_DP_LINK_M, _PIPEB_DP_LINK_M)
-#define PIPE_DP_LINK_N(pipe) _PIPE(pipe, _PIPEA_DP_LINK_N, _PIPEB_DP_LINK_N)
+#define PIPE_DATA_M_G4X(pipe) _PIPE(pipe, _PIPEA_DATA_M_G4X, _PIPEB_DATA_M_G4X)
+#define PIPE_DATA_N_G4X(pipe) _PIPE(pipe, _PIPEA_DATA_N_G4X, _PIPEB_DATA_N_G4X)
+#define PIPE_LINK_M_G4X(pipe) _PIPE(pipe, _PIPEA_LINK_M_G4X, _PIPEB_LINK_M_G4X)
+#define PIPE_LINK_N_G4X(pipe) _PIPE(pipe, _PIPEA_LINK_N_G4X, _PIPEB_LINK_N_G4X)

/* Display & cursor control */
···
#define   PIPECONF_INTERLACED_ILK		(3 << 21)
#define   PIPECONF_INTERLACED_DBL_ILK		(4 << 21) /* ilk/snb only */
#define   PIPECONF_PFIT_PF_INTERLACED_DBL_ILK	(5 << 21) /* ilk/snb only */
+#define   PIPECONF_INTERLACE_MODE_MASK		(7 << 21)
#define   PIPECONF_CXSR_DOWNCLOCK	(1<<16)
#define   PIPECONF_COLOR_RANGE_SELECT	(1 << 13)
#define   PIPECONF_BPC_MASK		(0x7 << 5)
···
#define DE_PIPEA_FIFO_UNDERRUN  (1 << 0)

/* More Ivybridge lolz */
-#define DE_ERR_DEBUG_IVB		(1<<30)
+#define DE_ERR_INT_IVB			(1<<30)
#define DE_GSE_IVB			(1<<29)
#define DE_PCH_EVENT_IVB		(1<<28)
#define DE_DP_A_HOTPLUG_IVB		(1<<27)
···
				 SDE_PORTC_HOTPLUG_CPT |	\
				 SDE_PORTB_HOTPLUG_CPT)
#define SDE_GMBUS_CPT		(1 << 17)
+#define SDE_ERROR_CPT		(1 << 16)
#define SDE_AUDIO_CP_REQ_C_CPT	(1 << 10)
#define SDE_AUDIO_CP_CHG_C_CPT	(1 << 9)
#define SDE_FDI_RXC_CPT		(1 << 8)
···
#define SDEIMR 0xc4004
#define SDEIIR 0xc4008
#define SDEIER 0xc400c
+
+#define SERR_INT			0xc4040
+#define  SERR_INT_POISON		(1<<31)
+#define  SERR_INT_TRANS_C_FIFO_UNDERRUN	(1<<6)
+#define  SERR_INT_TRANS_B_FIFO_UNDERRUN	(1<<3)
+#define  SERR_INT_TRANS_A_FIFO_UNDERRUN	(1<<0)

/* digital port hotplug */
#define PCH_PORT_HOTPLUG	0xc4030	/* SHOTPLUG_CTL */
···

/* transcoder */

-#define _TRANS_HTOTAL_A		0xe0000
-#define  TRANS_HTOTAL_SHIFT	16
-#define  TRANS_HACTIVE_SHIFT	0
-#define _TRANS_HBLANK_A		0xe0004
-#define  TRANS_HBLANK_END_SHIFT	16
-#define  TRANS_HBLANK_START_SHIFT 0
-#define _TRANS_HSYNC_A		0xe0008
-#define  TRANS_HSYNC_END_SHIFT	16
-#define  TRANS_HSYNC_START_SHIFT 0
-#define _TRANS_VTOTAL_A		0xe000c
-#define  TRANS_VTOTAL_SHIFT	16
-#define  TRANS_VACTIVE_SHIFT	0
-#define _TRANS_VBLANK_A		0xe0010
-#define  TRANS_VBLANK_END_SHIFT	16
-#define  TRANS_VBLANK_START_SHIFT 0
-#define _TRANS_VSYNC_A		0xe0014
-#define  TRANS_VSYNC_END_SHIFT	16
-#define  TRANS_VSYNC_START_SHIFT 0
-#define _TRANS_VSYNCSHIFT_A	0xe0028
+#define _PCH_TRANS_HTOTAL_A		0xe0000
+#define  TRANS_HTOTAL_SHIFT		16
+#define  TRANS_HACTIVE_SHIFT		0
+#define _PCH_TRANS_HBLANK_A		0xe0004
+#define  TRANS_HBLANK_END_SHIFT		16
+#define  TRANS_HBLANK_START_SHIFT	0
+#define _PCH_TRANS_HSYNC_A		0xe0008
+#define  TRANS_HSYNC_END_SHIFT		16
+#define  TRANS_HSYNC_START_SHIFT	0
+#define _PCH_TRANS_VTOTAL_A		0xe000c
+#define  TRANS_VTOTAL_SHIFT		16
+#define  TRANS_VACTIVE_SHIFT		0
+#define _PCH_TRANS_VBLANK_A		0xe0010
+#define  TRANS_VBLANK_END_SHIFT		16
+#define  TRANS_VBLANK_START_SHIFT	0
+#define _PCH_TRANS_VSYNC_A		0xe0014
+#define  TRANS_VSYNC_END_SHIFT		16
+#define  TRANS_VSYNC_START_SHIFT	0
+#define _PCH_TRANS_VSYNCSHIFT_A		0xe0028

-#define _TRANSA_DATA_M1		0xe0030
-#define _TRANSA_DATA_N1		0xe0034
-#define _TRANSA_DATA_M2		0xe0038
-#define _TRANSA_DATA_N2		0xe003c
-#define _TRANSA_DP_LINK_M1	0xe0040
-#define _TRANSA_DP_LINK_N1	0xe0044
-#define _TRANSA_DP_LINK_M2	0xe0048
-#define _TRANSA_DP_LINK_N2	0xe004c
+#define _PCH_TRANSA_DATA_M1	0xe0030
+#define _PCH_TRANSA_DATA_N1	0xe0034
+#define _PCH_TRANSA_DATA_M2	0xe0038
+#define _PCH_TRANSA_DATA_N2	0xe003c
+#define _PCH_TRANSA_LINK_M1	0xe0040
+#define _PCH_TRANSA_LINK_N1	0xe0044
+#define _PCH_TRANSA_LINK_M2	0xe0048
+#define _PCH_TRANSA_LINK_N2	0xe004c

/* Per-transcoder DIP controls */
···
#define HSW_TVIDEO_DIP_VSC_DATA(trans) \
	_TRANSCODER(trans, HSW_VIDEO_DIP_VSC_DATA_A, HSW_VIDEO_DIP_VSC_DATA_B)

-#define _TRANS_HTOTAL_B		0xe1000
-#define _TRANS_HBLANK_B		0xe1004
-#define _TRANS_HSYNC_B		0xe1008
-#define _TRANS_VTOTAL_B		0xe100c
-#define _TRANS_VBLANK_B		0xe1010
-#define _TRANS_VSYNC_B		0xe1014
-#define _TRANS_VSYNCSHIFT_B	0xe1028
+#define _PCH_TRANS_HTOTAL_B	0xe1000
+#define _PCH_TRANS_HBLANK_B	0xe1004
+#define _PCH_TRANS_HSYNC_B	0xe1008
+#define _PCH_TRANS_VTOTAL_B	0xe100c
+#define _PCH_TRANS_VBLANK_B	0xe1010
+#define _PCH_TRANS_VSYNC_B	0xe1014
+#define _PCH_TRANS_VSYNCSHIFT_B	0xe1028

-#define TRANS_HTOTAL(pipe) _PIPE(pipe, _TRANS_HTOTAL_A, _TRANS_HTOTAL_B)
-#define TRANS_HBLANK(pipe) _PIPE(pipe, _TRANS_HBLANK_A, _TRANS_HBLANK_B)
-#define TRANS_HSYNC(pipe) _PIPE(pipe, _TRANS_HSYNC_A, _TRANS_HSYNC_B)
-#define TRANS_VTOTAL(pipe) _PIPE(pipe, _TRANS_VTOTAL_A, _TRANS_VTOTAL_B)
-#define TRANS_VBLANK(pipe) _PIPE(pipe, _TRANS_VBLANK_A, _TRANS_VBLANK_B)
-#define TRANS_VSYNC(pipe) _PIPE(pipe, _TRANS_VSYNC_A, _TRANS_VSYNC_B)
-#define TRANS_VSYNCSHIFT(pipe) _PIPE(pipe, _TRANS_VSYNCSHIFT_A, \
-				     _TRANS_VSYNCSHIFT_B)
+#define PCH_TRANS_HTOTAL(pipe) _PIPE(pipe, _PCH_TRANS_HTOTAL_A, _PCH_TRANS_HTOTAL_B)
+#define PCH_TRANS_HBLANK(pipe) _PIPE(pipe, _PCH_TRANS_HBLANK_A, _PCH_TRANS_HBLANK_B)
+#define PCH_TRANS_HSYNC(pipe) _PIPE(pipe, _PCH_TRANS_HSYNC_A, _PCH_TRANS_HSYNC_B)
+#define PCH_TRANS_VTOTAL(pipe) _PIPE(pipe, _PCH_TRANS_VTOTAL_A, _PCH_TRANS_VTOTAL_B)
+#define PCH_TRANS_VBLANK(pipe) _PIPE(pipe, _PCH_TRANS_VBLANK_A, _PCH_TRANS_VBLANK_B)
+#define PCH_TRANS_VSYNC(pipe) _PIPE(pipe, _PCH_TRANS_VSYNC_A, _PCH_TRANS_VSYNC_B)
+#define PCH_TRANS_VSYNCSHIFT(pipe) _PIPE(pipe, _PCH_TRANS_VSYNCSHIFT_A, \
+					 _PCH_TRANS_VSYNCSHIFT_B)

-#define _TRANSB_DATA_M1		0xe1030
-#define _TRANSB_DATA_N1		0xe1034
-#define _TRANSB_DATA_M2		0xe1038
-#define _TRANSB_DATA_N2		0xe103c
-#define _TRANSB_DP_LINK_M1	0xe1040
-#define _TRANSB_DP_LINK_N1	0xe1044
-#define _TRANSB_DP_LINK_M2	0xe1048
-#define _TRANSB_DP_LINK_N2	0xe104c
+#define _PCH_TRANSB_DATA_M1	0xe1030
+#define _PCH_TRANSB_DATA_N1	0xe1034
+#define _PCH_TRANSB_DATA_M2	0xe1038
+#define _PCH_TRANSB_DATA_N2	0xe103c
+#define _PCH_TRANSB_LINK_M1	0xe1040
+#define _PCH_TRANSB_LINK_N1	0xe1044
+#define _PCH_TRANSB_LINK_M2	0xe1048
+#define _PCH_TRANSB_LINK_N2	0xe104c

-#define TRANSDATA_M1(pipe) _PIPE(pipe, _TRANSA_DATA_M1, _TRANSB_DATA_M1)
-#define TRANSDATA_N1(pipe) _PIPE(pipe, _TRANSA_DATA_N1, _TRANSB_DATA_N1)
-#define TRANSDATA_M2(pipe) _PIPE(pipe, _TRANSA_DATA_M2, _TRANSB_DATA_M2)
-#define TRANSDATA_N2(pipe) _PIPE(pipe, _TRANSA_DATA_N2, _TRANSB_DATA_N2)
-#define TRANSDPLINK_M1(pipe) _PIPE(pipe, _TRANSA_DP_LINK_M1, _TRANSB_DP_LINK_M1)
-#define TRANSDPLINK_N1(pipe) _PIPE(pipe, _TRANSA_DP_LINK_N1, _TRANSB_DP_LINK_N1)
-#define TRANSDPLINK_M2(pipe) _PIPE(pipe, _TRANSA_DP_LINK_M2, _TRANSB_DP_LINK_M2)
-#define TRANSDPLINK_N2(pipe) _PIPE(pipe, _TRANSA_DP_LINK_N2, _TRANSB_DP_LINK_N2)
+#define PCH_TRANS_DATA_M1(pipe) _PIPE(pipe, _PCH_TRANSA_DATA_M1, _PCH_TRANSB_DATA_M1)
+#define PCH_TRANS_DATA_N1(pipe) _PIPE(pipe, _PCH_TRANSA_DATA_N1, _PCH_TRANSB_DATA_N1)
+#define PCH_TRANS_DATA_M2(pipe) _PIPE(pipe, _PCH_TRANSA_DATA_M2, _PCH_TRANSB_DATA_M2)
+#define PCH_TRANS_DATA_N2(pipe) _PIPE(pipe, _PCH_TRANSA_DATA_N2, _PCH_TRANSB_DATA_N2)
+#define PCH_TRANS_LINK_M1(pipe) _PIPE(pipe, _PCH_TRANSA_LINK_M1, _PCH_TRANSB_LINK_M1)
+#define PCH_TRANS_LINK_N1(pipe) _PIPE(pipe, _PCH_TRANSA_LINK_N1, _PCH_TRANSB_LINK_N1)
+#define PCH_TRANS_LINK_M2(pipe) _PIPE(pipe, _PCH_TRANSA_LINK_M2, _PCH_TRANSB_LINK_M2)
+#define PCH_TRANS_LINK_N2(pipe) _PIPE(pipe, _PCH_TRANSA_LINK_N2, _PCH_TRANSB_LINK_N2)

-#define _TRANSACONF		0xf0008
-#define _TRANSBCONF		0xf1008
-#define TRANSCONF(plane) _PIPE(plane, _TRANSACONF, _TRANSBCONF)
+#define _PCH_TRANSACONF		0xf0008
+#define _PCH_TRANSBCONF		0xf1008
+#define PCH_TRANSCONF(pipe) _PIPE(pipe, _PCH_TRANSACONF, _PCH_TRANSBCONF)
+#define LPT_TRANSCONF		_PCH_TRANSACONF /* lpt has only one transcoder */
#define  TRANS_DISABLE		(0<<31)
#define  TRANS_ENABLE		(1<<31)
#define  TRANS_STATE_MASK	(1<<30)
···
#define  FDI_LINK_TRAIN_600MV_3_5DB_SNB_B	(0x39<<22)
#define  FDI_LINK_TRAIN_800MV_0DB_SNB_B		(0x38<<22)
#define  FDI_LINK_TRAIN_VOL_EMP_MASK		(0x3f<<22)
-#define  FDI_DP_PORT_WIDTH_X1		(0<<19)
-#define  FDI_DP_PORT_WIDTH_X2		(1<<19)
-#define  FDI_DP_PORT_WIDTH_X3		(2<<19)
-#define  FDI_DP_PORT_WIDTH_X4		(3<<19)
+#define  FDI_DP_PORT_WIDTH_SHIFT		19
+#define  FDI_DP_PORT_WIDTH_MASK			(7 << FDI_DP_PORT_WIDTH_SHIFT)
+#define  FDI_DP_PORT_WIDTH(width)		(((width) - 1) << FDI_DP_PORT_WIDTH_SHIFT)
#define  FDI_TX_ENHANCE_FRAME_ENABLE	(1<<18)
/* Ironlake: hardwired to 1 */
#define  FDI_TX_PLL_ENABLE		(1<<14)
···
/* train, dp width same as FDI_TX */
#define  FDI_FS_ERRC_ENABLE		(1<<27)
#define  FDI_FE_ERRC_ENABLE		(1<<26)
-#define  FDI_DP_PORT_WIDTH_X8		(7<<19)
#define  FDI_RX_POLARITY_REVERSED_LPT	(1<<16)
#define  FDI_8BPC			(0<<16)
#define  FDI_10BPC			(1<<16)
···
#define  FDI_LINK_TRAIN_PATTERN_IDLE_CPT	(2<<8)
#define  FDI_LINK_TRAIN_NORMAL_CPT		(3<<8)
#define  FDI_LINK_TRAIN_PATTERN_MASK_CPT	(3<<8)
-/* LPT */
-#define  FDI_PORT_WIDTH_2X_LPT		(1<<19)
-#define  FDI_PORT_WIDTH_1X_LPT		(0<<19)

#define _FDI_RXA_MISC			0xf0010
#define _FDI_RXB_MISC			0xf1010
···
#define   GEN6_RC_CTL_RC6_ENABLE		(1<<18)
#define   GEN6_RC_CTL_RC1e_ENABLE		(1<<20)
#define   GEN6_RC_CTL_RC7_ENABLE		(1<<22)
+#define   GEN7_RC_CTL_TO_MODE			(1<<28)
#define   GEN6_RC_CTL_EI_MODE(x)		((x)<<27)
#define   GEN6_RC_CTL_HW_ENABLE			(1<<31)
#define GEN6_RP_DOWN_TIMEOUT			0xA010
···
#define   IOSF_BAR_SHIFT			1
#define   IOSF_SB_BUSY				(1<<0)
#define   IOSF_PORT_PUNIT			0x4
+#define   IOSF_PORT_NC				0x11
#define VLV_IOSF_DATA				0x182104
#define VLV_IOSF_ADDR				0x182108

#define PUNIT_OPCODE_REG_READ			6
#define PUNIT_OPCODE_REG_WRITE			7
+
+#define PUNIT_REG_GPU_LFM			0xd3
+#define PUNIT_REG_GPU_FREQ_REQ			0xd4
+#define PUNIT_REG_GPU_FREQ_STS			0xd8
+#define PUNIT_REG_MEDIA_TURBO_FREQ_REQ		0xdc
+
+#define PUNIT_FUSE_BUS2				0xf6 /* bits 47:40 */
+#define PUNIT_FUSE_BUS1				0xf5 /* bits 55:48 */
+
+#define IOSF_NC_FB_GFX_FREQ_FUSE		0x1c
+#define   FB_GFX_MAX_FREQ_FUSE_SHIFT		3
+#define   FB_GFX_MAX_FREQ_FUSE_MASK		0x000007f8
+#define   FB_GFX_FGUARANTEED_FREQ_FUSE_SHIFT	11
+#define   FB_GFX_FGUARANTEED_FREQ_FUSE_MASK	0x0007f800
+#define IOSF_NC_FB_GFX_FMAX_FUSE_HI		0x34
+#define   FB_FMAX_VMIN_FREQ_HI_MASK		0x00000007
+#define IOSF_NC_FB_GFX_FMAX_FUSE_LO		0x30
+#define   FB_FMAX_VMIN_FREQ_LO_SHIFT		27
+#define   FB_FMAX_VMIN_FREQ_LO_MASK		0xf8000000

#define GEN6_GT_CORE_STATUS		0x138060
#define   GEN6_CORE_CPD_STATE_MASK	(7<<4)
···
#define  TRANS_DDI_EDP_INPUT_B_ONOFF	(5<<12)
#define  TRANS_DDI_EDP_INPUT_C_ONOFF	(6<<12)
#define  TRANS_DDI_BFI_ENABLE		(1<<4)
-#define  TRANS_DDI_PORT_WIDTH_X1	(0<<1)
-#define  TRANS_DDI_PORT_WIDTH_X2	(1<<1)
-#define  TRANS_DDI_PORT_WIDTH_X4	(3<<1)

/* DisplayPort Transport Control */
#define DP_TP_CTL_A			0x64040
···
#define  DDI_BUF_PORT_REVERSAL		(1<<16)
#define  DDI_BUF_IS_IDLE		(1<<7)
#define  DDI_A_4_LANES			(1<<4)
-#define  DDI_PORT_WIDTH_X1		(0<<1)
-#define  DDI_PORT_WIDTH_X2		(1<<1)
-#define  DDI_PORT_WIDTH_X4		(3<<1)
+#define  DDI_PORT_WIDTH(width)		(((width) - 1) << 1)
#define  DDI_INIT_DISPLAY_DETECTED	(1<<0)

/* DDI Buffer Translations */
···
#define _PIPE_A_CSC_COEFF_RV_GV	0x49020
#define _PIPE_A_CSC_COEFF_BV	0x49024
#define _PIPE_A_CSC_MODE	0x49028
+#define   CSC_BLACK_SCREEN_OFFSET	(1 << 2)
+#define   CSC_POSITION_BEFORE_GAMMA	(1 << 1)
+#define   CSC_MODE_YUV_TO_RGB		(1 << 0)
#define _PIPE_A_CSC_PREOFF_HI	0x49030
#define _PIPE_A_CSC_PREOFF_ME	0x49034
#define _PIPE_A_CSC_PREOFF_LO	0x49038
···
#define _PIPE_B_CSC_POSTOFF_HI	0x49140
#define _PIPE_B_CSC_POSTOFF_ME	0x49144
#define _PIPE_B_CSC_POSTOFF_LO	0x49148
-
-#define CSC_BLACK_SCREEN_OFFSET		(1 << 2)
-#define CSC_POSITION_BEFORE_GAMMA	(1 << 1)
-#define CSC_MODE_YUV_TO_RGB		(1 << 0)

#define PIPE_CSC_COEFF_RY_GY(pipe) _PIPE(pipe, _PIPE_A_CSC_COEFF_RY_GY, _PIPE_B_CSC_COEFF_RY_GY)
#define PIPE_CSC_COEFF_BY(pipe) _PIPE(pipe, _PIPE_A_CSC_COEFF_BY, _PIPE_B_CSC_COEFF_BY)
+10
drivers/gpu/drm/i915/i915_suspend.c
··· 192 192 static void i915_save_display(struct drm_device *dev) 193 193 { 194 194 struct drm_i915_private *dev_priv = dev->dev_private; 195 + unsigned long flags; 195 196 196 197 /* Display arbitration control */ 197 198 if (INTEL_INFO(dev)->gen <= 4) ··· 202 201 /* Don't regfile.save them in KMS mode */ 203 202 if (!drm_core_check_feature(dev, DRIVER_MODESET)) 204 203 i915_save_display_reg(dev); 204 + 205 + spin_lock_irqsave(&dev_priv->backlight.lock, flags); 205 206 206 207 /* LVDS state */ 207 208 if (HAS_PCH_SPLIT(dev)) { ··· 224 221 if (IS_MOBILE(dev) && !IS_I830(dev)) 225 222 dev_priv->regfile.saveLVDS = I915_READ(LVDS); 226 223 } 224 + 225 + spin_unlock_irqrestore(&dev_priv->backlight.lock, flags); 227 226 228 227 if (!IS_I830(dev) && !IS_845G(dev) && !HAS_PCH_SPLIT(dev)) 229 228 dev_priv->regfile.savePFIT_CONTROL = I915_READ(PFIT_CONTROL); ··· 262 257 { 263 258 struct drm_i915_private *dev_priv = dev->dev_private; 264 259 u32 mask = 0xffffffff; 260 + unsigned long flags; 265 261 266 262 /* Display arbitration */ 267 263 if (INTEL_INFO(dev)->gen <= 4) ··· 270 264 271 265 if (!drm_core_check_feature(dev, DRIVER_MODESET)) 272 266 i915_restore_display_reg(dev); 267 + 268 + spin_lock_irqsave(&dev_priv->backlight.lock, flags); 273 269 274 270 /* LVDS state */ 275 271 if (INTEL_INFO(dev)->gen >= 4 && !HAS_PCH_SPLIT(dev)) ··· 311 303 I915_WRITE(PP_DIVISOR, dev_priv->regfile.savePP_DIVISOR); 312 304 I915_WRITE(PP_CONTROL, dev_priv->regfile.savePP_CONTROL); 313 305 } 306 + 307 + spin_unlock_irqrestore(&dev_priv->backlight.lock, flags); 314 308 315 309 /* only restore FBC info on the platform that supports FBC*/ 316 310 intel_disable_fbc(dev);
+55 -19
drivers/gpu/drm/i915/i915_sysfs.c
··· 212 212 int ret; 213 213 214 214 mutex_lock(&dev_priv->rps.hw_lock); 215 - ret = dev_priv->rps.cur_delay * GT_FREQUENCY_MULTIPLIER; 215 + if (IS_VALLEYVIEW(dev_priv->dev)) { 216 + u32 freq; 217 + valleyview_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS, &freq); 218 + ret = vlv_gpu_freq(dev_priv->mem_freq, (freq >> 8) & 0xff); 219 + } else { 220 + ret = dev_priv->rps.cur_delay * GT_FREQUENCY_MULTIPLIER; 221 + } 216 222 mutex_unlock(&dev_priv->rps.hw_lock); 217 223 218 224 return snprintf(buf, PAGE_SIZE, "%d\n", ret); ··· 232 226 int ret; 233 227 234 228 mutex_lock(&dev_priv->rps.hw_lock); 235 - ret = dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER; 229 + if (IS_VALLEYVIEW(dev_priv->dev)) 230 + ret = vlv_gpu_freq(dev_priv->mem_freq, dev_priv->rps.max_delay); 231 + else 232 + ret = dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER; 236 233 mutex_unlock(&dev_priv->rps.hw_lock); 237 234 238 235 return snprintf(buf, PAGE_SIZE, "%d\n", ret); ··· 255 246 if (ret) 256 247 return ret; 257 248 258 - val /= GT_FREQUENCY_MULTIPLIER; 259 - 260 249 mutex_lock(&dev_priv->rps.hw_lock); 261 250 262 - rp_state_cap = I915_READ(GEN6_RP_STATE_CAP); 263 - hw_max = dev_priv->rps.hw_max; 264 - non_oc_max = (rp_state_cap & 0xff); 265 - hw_min = ((rp_state_cap & 0xff0000) >> 16); 251 + if (IS_VALLEYVIEW(dev_priv->dev)) { 252 + val = vlv_freq_opcode(dev_priv->mem_freq, val); 266 253 267 - if (val < hw_min || val > hw_max || val < dev_priv->rps.min_delay) { 254 + hw_max = valleyview_rps_max_freq(dev_priv); 255 + hw_min = valleyview_rps_min_freq(dev_priv); 256 + non_oc_max = hw_max; 257 + } else { 258 + val /= GT_FREQUENCY_MULTIPLIER; 259 + 260 + rp_state_cap = I915_READ(GEN6_RP_STATE_CAP); 261 + hw_max = dev_priv->rps.hw_max; 262 + non_oc_max = (rp_state_cap & 0xff); 263 + hw_min = ((rp_state_cap & 0xff0000) >> 16); 264 + } 265 + 266 + if (val < hw_min || val > hw_max || 267 + val < dev_priv->rps.min_delay) { 268 268 mutex_unlock(&dev_priv->rps.hw_lock); 269 269 return -EINVAL; 270 270 } 
··· 282 264 DRM_DEBUG("User requested overclocking to %d\n", 283 265 val * GT_FREQUENCY_MULTIPLIER); 284 266 285 - if (dev_priv->rps.cur_delay > val) 286 - gen6_set_rps(dev_priv->dev, val); 267 + if (dev_priv->rps.cur_delay > val) { 268 + if (IS_VALLEYVIEW(dev_priv->dev)) 269 + valleyview_set_rps(dev_priv->dev, val); 270 + else 271 + gen6_set_rps(dev_priv->dev, val); 272 + } 287 273 288 274 dev_priv->rps.max_delay = val; 289 275 ··· 304 282 int ret; 305 283 306 284 mutex_lock(&dev_priv->rps.hw_lock); 307 - ret = dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER; 285 + if (IS_VALLEYVIEW(dev_priv->dev)) 286 + ret = vlv_gpu_freq(dev_priv->mem_freq, dev_priv->rps.min_delay); 287 + else 288 + ret = dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER; 308 289 mutex_unlock(&dev_priv->rps.hw_lock); 309 290 310 291 return snprintf(buf, PAGE_SIZE, "%d\n", ret); ··· 327 302 if (ret) 328 303 return ret; 329 304 330 - val /= GT_FREQUENCY_MULTIPLIER; 331 - 332 305 mutex_lock(&dev_priv->rps.hw_lock); 333 306 334 - rp_state_cap = I915_READ(GEN6_RP_STATE_CAP); 335 - hw_max = dev_priv->rps.hw_max; 336 - hw_min = ((rp_state_cap & 0xff0000) >> 16); 307 + if (IS_VALLEYVIEW(dev)) { 308 + val = vlv_freq_opcode(dev_priv->mem_freq, val); 309 + 310 + hw_max = valleyview_rps_max_freq(dev_priv); 311 + hw_min = valleyview_rps_min_freq(dev_priv); 312 + } else { 313 + val /= GT_FREQUENCY_MULTIPLIER; 314 + 315 + rp_state_cap = I915_READ(GEN6_RP_STATE_CAP); 316 + hw_max = dev_priv->rps.hw_max; 317 + hw_min = ((rp_state_cap & 0xff0000) >> 16); 318 + } 337 319 338 320 if (val < hw_min || val > hw_max || val > dev_priv->rps.max_delay) { 339 321 mutex_unlock(&dev_priv->rps.hw_lock); 340 322 return -EINVAL; 341 323 } 342 324 343 - if (dev_priv->rps.cur_delay < val) 344 - gen6_set_rps(dev_priv->dev, val); 325 + if (dev_priv->rps.cur_delay < val) { 326 + if (IS_VALLEYVIEW(dev)) 327 + valleyview_set_rps(dev, val); 328 + else 329 + gen6_set_rps(dev_priv->dev, val); 330 + } 345 331 346 332 
dev_priv->rps.min_delay = val; 347 333
+44 -44
drivers/gpu/drm/i915/i915_ums.c
··· 148 148 dev_priv->regfile.savePFA_WIN_SZ = I915_READ(_PFA_WIN_SZ); 149 149 dev_priv->regfile.savePFA_WIN_POS = I915_READ(_PFA_WIN_POS); 150 150 151 - dev_priv->regfile.saveTRANSACONF = I915_READ(_TRANSACONF); 152 - dev_priv->regfile.saveTRANS_HTOTAL_A = I915_READ(_TRANS_HTOTAL_A); 153 - dev_priv->regfile.saveTRANS_HBLANK_A = I915_READ(_TRANS_HBLANK_A); 154 - dev_priv->regfile.saveTRANS_HSYNC_A = I915_READ(_TRANS_HSYNC_A); 155 - dev_priv->regfile.saveTRANS_VTOTAL_A = I915_READ(_TRANS_VTOTAL_A); 156 - dev_priv->regfile.saveTRANS_VBLANK_A = I915_READ(_TRANS_VBLANK_A); 157 - dev_priv->regfile.saveTRANS_VSYNC_A = I915_READ(_TRANS_VSYNC_A); 151 + dev_priv->regfile.saveTRANSACONF = I915_READ(_PCH_TRANSACONF); 152 + dev_priv->regfile.saveTRANS_HTOTAL_A = I915_READ(_PCH_TRANS_HTOTAL_A); 153 + dev_priv->regfile.saveTRANS_HBLANK_A = I915_READ(_PCH_TRANS_HBLANK_A); 154 + dev_priv->regfile.saveTRANS_HSYNC_A = I915_READ(_PCH_TRANS_HSYNC_A); 155 + dev_priv->regfile.saveTRANS_VTOTAL_A = I915_READ(_PCH_TRANS_VTOTAL_A); 156 + dev_priv->regfile.saveTRANS_VBLANK_A = I915_READ(_PCH_TRANS_VBLANK_A); 157 + dev_priv->regfile.saveTRANS_VSYNC_A = I915_READ(_PCH_TRANS_VSYNC_A); 158 158 } 159 159 160 160 dev_priv->regfile.saveDSPACNTR = I915_READ(_DSPACNTR); ··· 205 205 dev_priv->regfile.savePFB_WIN_SZ = I915_READ(_PFB_WIN_SZ); 206 206 dev_priv->regfile.savePFB_WIN_POS = I915_READ(_PFB_WIN_POS); 207 207 208 - dev_priv->regfile.saveTRANSBCONF = I915_READ(_TRANSBCONF); 209 - dev_priv->regfile.saveTRANS_HTOTAL_B = I915_READ(_TRANS_HTOTAL_B); 210 - dev_priv->regfile.saveTRANS_HBLANK_B = I915_READ(_TRANS_HBLANK_B); 211 - dev_priv->regfile.saveTRANS_HSYNC_B = I915_READ(_TRANS_HSYNC_B); 212 - dev_priv->regfile.saveTRANS_VTOTAL_B = I915_READ(_TRANS_VTOTAL_B); 213 - dev_priv->regfile.saveTRANS_VBLANK_B = I915_READ(_TRANS_VBLANK_B); 214 - dev_priv->regfile.saveTRANS_VSYNC_B = I915_READ(_TRANS_VSYNC_B); 208 + dev_priv->regfile.saveTRANSBCONF = I915_READ(_PCH_TRANSBCONF); 209 + 
dev_priv->regfile.saveTRANS_HTOTAL_B = I915_READ(_PCH_TRANS_HTOTAL_B); 210 + dev_priv->regfile.saveTRANS_HBLANK_B = I915_READ(_PCH_TRANS_HBLANK_B); 211 + dev_priv->regfile.saveTRANS_HSYNC_B = I915_READ(_PCH_TRANS_HSYNC_B); 212 + dev_priv->regfile.saveTRANS_VTOTAL_B = I915_READ(_PCH_TRANS_VTOTAL_B); 213 + dev_priv->regfile.saveTRANS_VBLANK_B = I915_READ(_PCH_TRANS_VBLANK_B); 214 + dev_priv->regfile.saveTRANS_VSYNC_B = I915_READ(_PCH_TRANS_VSYNC_B); 215 215 } 216 216 217 217 dev_priv->regfile.saveDSPBCNTR = I915_READ(_DSPBCNTR); ··· 259 259 dev_priv->regfile.saveDP_B = I915_READ(DP_B); 260 260 dev_priv->regfile.saveDP_C = I915_READ(DP_C); 261 261 dev_priv->regfile.saveDP_D = I915_READ(DP_D); 262 - dev_priv->regfile.savePIPEA_GMCH_DATA_M = I915_READ(_PIPEA_GMCH_DATA_M); 263 - dev_priv->regfile.savePIPEB_GMCH_DATA_M = I915_READ(_PIPEB_GMCH_DATA_M); 264 - dev_priv->regfile.savePIPEA_GMCH_DATA_N = I915_READ(_PIPEA_GMCH_DATA_N); 265 - dev_priv->regfile.savePIPEB_GMCH_DATA_N = I915_READ(_PIPEB_GMCH_DATA_N); 266 - dev_priv->regfile.savePIPEA_DP_LINK_M = I915_READ(_PIPEA_DP_LINK_M); 267 - dev_priv->regfile.savePIPEB_DP_LINK_M = I915_READ(_PIPEB_DP_LINK_M); 268 - dev_priv->regfile.savePIPEA_DP_LINK_N = I915_READ(_PIPEA_DP_LINK_N); 269 - dev_priv->regfile.savePIPEB_DP_LINK_N = I915_READ(_PIPEB_DP_LINK_N); 262 + dev_priv->regfile.savePIPEA_GMCH_DATA_M = I915_READ(_PIPEA_DATA_M_G4X); 263 + dev_priv->regfile.savePIPEB_GMCH_DATA_M = I915_READ(_PIPEB_DATA_M_G4X); 264 + dev_priv->regfile.savePIPEA_GMCH_DATA_N = I915_READ(_PIPEA_DATA_N_G4X); 265 + dev_priv->regfile.savePIPEB_GMCH_DATA_N = I915_READ(_PIPEB_DATA_N_G4X); 266 + dev_priv->regfile.savePIPEA_DP_LINK_M = I915_READ(_PIPEA_LINK_M_G4X); 267 + dev_priv->regfile.savePIPEB_DP_LINK_M = I915_READ(_PIPEB_LINK_M_G4X); 268 + dev_priv->regfile.savePIPEA_DP_LINK_N = I915_READ(_PIPEA_LINK_N_G4X); 269 + dev_priv->regfile.savePIPEB_DP_LINK_N = I915_READ(_PIPEB_LINK_N_G4X); 270 270 } 271 271 /* FIXME: regfile.save TV & SDVO state */ 272 272 
··· 282 282 283 283 /* Display port ratios (must be done before clock is set) */ 284 284 if (SUPPORTS_INTEGRATED_DP(dev)) { 285 - I915_WRITE(_PIPEA_GMCH_DATA_M, dev_priv->regfile.savePIPEA_GMCH_DATA_M); 286 - I915_WRITE(_PIPEB_GMCH_DATA_M, dev_priv->regfile.savePIPEB_GMCH_DATA_M); 287 - I915_WRITE(_PIPEA_GMCH_DATA_N, dev_priv->regfile.savePIPEA_GMCH_DATA_N); 288 - I915_WRITE(_PIPEB_GMCH_DATA_N, dev_priv->regfile.savePIPEB_GMCH_DATA_N); 289 - I915_WRITE(_PIPEA_DP_LINK_M, dev_priv->regfile.savePIPEA_DP_LINK_M); 290 - I915_WRITE(_PIPEB_DP_LINK_M, dev_priv->regfile.savePIPEB_DP_LINK_M); 291 - I915_WRITE(_PIPEA_DP_LINK_N, dev_priv->regfile.savePIPEA_DP_LINK_N); 292 - I915_WRITE(_PIPEB_DP_LINK_N, dev_priv->regfile.savePIPEB_DP_LINK_N); 285 + I915_WRITE(_PIPEA_DATA_M_G4X, dev_priv->regfile.savePIPEA_GMCH_DATA_M); 286 + I915_WRITE(_PIPEB_DATA_M_G4X, dev_priv->regfile.savePIPEB_GMCH_DATA_M); 287 + I915_WRITE(_PIPEA_DATA_N_G4X, dev_priv->regfile.savePIPEA_GMCH_DATA_N); 288 + I915_WRITE(_PIPEB_DATA_N_G4X, dev_priv->regfile.savePIPEB_GMCH_DATA_N); 289 + I915_WRITE(_PIPEA_LINK_M_G4X, dev_priv->regfile.savePIPEA_DP_LINK_M); 290 + I915_WRITE(_PIPEB_LINK_M_G4X, dev_priv->regfile.savePIPEB_DP_LINK_M); 291 + I915_WRITE(_PIPEA_LINK_N_G4X, dev_priv->regfile.savePIPEA_DP_LINK_N); 292 + I915_WRITE(_PIPEB_LINK_N_G4X, dev_priv->regfile.savePIPEB_DP_LINK_N); 293 293 } 294 294 295 295 /* Fences */ ··· 379 379 I915_WRITE(_PFA_WIN_SZ, dev_priv->regfile.savePFA_WIN_SZ); 380 380 I915_WRITE(_PFA_WIN_POS, dev_priv->regfile.savePFA_WIN_POS); 381 381 382 - I915_WRITE(_TRANSACONF, dev_priv->regfile.saveTRANSACONF); 383 - I915_WRITE(_TRANS_HTOTAL_A, dev_priv->regfile.saveTRANS_HTOTAL_A); 384 - I915_WRITE(_TRANS_HBLANK_A, dev_priv->regfile.saveTRANS_HBLANK_A); 385 - I915_WRITE(_TRANS_HSYNC_A, dev_priv->regfile.saveTRANS_HSYNC_A); 386 - I915_WRITE(_TRANS_VTOTAL_A, dev_priv->regfile.saveTRANS_VTOTAL_A); 387 - I915_WRITE(_TRANS_VBLANK_A, dev_priv->regfile.saveTRANS_VBLANK_A); 388 - 
I915_WRITE(_TRANS_VSYNC_A, dev_priv->regfile.saveTRANS_VSYNC_A); 382 + I915_WRITE(_PCH_TRANSACONF, dev_priv->regfile.saveTRANSACONF); 383 + I915_WRITE(_PCH_TRANS_HTOTAL_A, dev_priv->regfile.saveTRANS_HTOTAL_A); 384 + I915_WRITE(_PCH_TRANS_HBLANK_A, dev_priv->regfile.saveTRANS_HBLANK_A); 385 + I915_WRITE(_PCH_TRANS_HSYNC_A, dev_priv->regfile.saveTRANS_HSYNC_A); 386 + I915_WRITE(_PCH_TRANS_VTOTAL_A, dev_priv->regfile.saveTRANS_VTOTAL_A); 387 + I915_WRITE(_PCH_TRANS_VBLANK_A, dev_priv->regfile.saveTRANS_VBLANK_A); 388 + I915_WRITE(_PCH_TRANS_VSYNC_A, dev_priv->regfile.saveTRANS_VSYNC_A); 389 389 } 390 390 391 391 /* Restore plane info */ ··· 448 448 I915_WRITE(_PFB_WIN_SZ, dev_priv->regfile.savePFB_WIN_SZ); 449 449 I915_WRITE(_PFB_WIN_POS, dev_priv->regfile.savePFB_WIN_POS); 450 450 451 - I915_WRITE(_TRANSBCONF, dev_priv->regfile.saveTRANSBCONF); 452 - I915_WRITE(_TRANS_HTOTAL_B, dev_priv->regfile.saveTRANS_HTOTAL_B); 453 - I915_WRITE(_TRANS_HBLANK_B, dev_priv->regfile.saveTRANS_HBLANK_B); 454 - I915_WRITE(_TRANS_HSYNC_B, dev_priv->regfile.saveTRANS_HSYNC_B); 455 - I915_WRITE(_TRANS_VTOTAL_B, dev_priv->regfile.saveTRANS_VTOTAL_B); 456 - I915_WRITE(_TRANS_VBLANK_B, dev_priv->regfile.saveTRANS_VBLANK_B); 457 - I915_WRITE(_TRANS_VSYNC_B, dev_priv->regfile.saveTRANS_VSYNC_B); 451 + I915_WRITE(_PCH_TRANSBCONF, dev_priv->regfile.saveTRANSBCONF); 452 + I915_WRITE(_PCH_TRANS_HTOTAL_B, dev_priv->regfile.saveTRANS_HTOTAL_B); 453 + I915_WRITE(_PCH_TRANS_HBLANK_B, dev_priv->regfile.saveTRANS_HBLANK_B); 454 + I915_WRITE(_PCH_TRANS_HSYNC_B, dev_priv->regfile.saveTRANS_HSYNC_B); 455 + I915_WRITE(_PCH_TRANS_VTOTAL_B, dev_priv->regfile.saveTRANS_VTOTAL_B); 456 + I915_WRITE(_PCH_TRANS_VBLANK_B, dev_priv->regfile.saveTRANS_VBLANK_B); 457 + I915_WRITE(_PCH_TRANS_VSYNC_B, dev_priv->regfile.saveTRANS_VSYNC_B); 458 458 } 459 459 460 460 /* Restore plane info */
+50 -50
drivers/gpu/drm/i915/intel_bios.c
··· 212 212 if (!lvds_options) 213 213 return; 214 214 215 - dev_priv->lvds_dither = lvds_options->pixel_dither; 215 + dev_priv->vbt.lvds_dither = lvds_options->pixel_dither; 216 216 if (lvds_options->panel_type == 0xff) 217 217 return; 218 218 ··· 226 226 if (!lvds_lfp_data_ptrs) 227 227 return; 228 228 229 - dev_priv->lvds_vbt = 1; 229 + dev_priv->vbt.lvds_vbt = 1; 230 230 231 231 panel_dvo_timing = get_lvds_dvo_timing(lvds_lfp_data, 232 232 lvds_lfp_data_ptrs, ··· 238 238 239 239 fill_detail_timing_data(panel_fixed_mode, panel_dvo_timing); 240 240 241 - dev_priv->lfp_lvds_vbt_mode = panel_fixed_mode; 241 + dev_priv->vbt.lfp_lvds_vbt_mode = panel_fixed_mode; 242 242 243 243 DRM_DEBUG_KMS("Found panel mode in BIOS VBT tables:\n"); 244 244 drm_mode_debug_printmodeline(panel_fixed_mode); ··· 274 274 /* check the resolution, just to be sure */ 275 275 if (fp_timing->x_res == panel_fixed_mode->hdisplay && 276 276 fp_timing->y_res == panel_fixed_mode->vdisplay) { 277 - dev_priv->bios_lvds_val = fp_timing->lvds_reg_val; 277 + dev_priv->vbt.bios_lvds_val = fp_timing->lvds_reg_val; 278 278 DRM_DEBUG_KMS("VBT initial LVDS value %x\n", 279 - dev_priv->bios_lvds_val); 279 + dev_priv->vbt.bios_lvds_val); 280 280 } 281 281 } 282 282 } ··· 316 316 317 317 fill_detail_timing_data(panel_fixed_mode, dvo_timing + index); 318 318 319 - dev_priv->sdvo_lvds_vbt_mode = panel_fixed_mode; 319 + dev_priv->vbt.sdvo_lvds_vbt_mode = panel_fixed_mode; 320 320 321 321 DRM_DEBUG_KMS("Found SDVO panel mode in BIOS VBT tables:\n"); 322 322 drm_mode_debug_printmodeline(panel_fixed_mode); ··· 345 345 346 346 general = find_section(bdb, BDB_GENERAL_FEATURES); 347 347 if (general) { 348 - dev_priv->int_tv_support = general->int_tv_support; 349 - dev_priv->int_crt_support = general->int_crt_support; 350 - dev_priv->lvds_use_ssc = general->enable_ssc; 351 - dev_priv->lvds_ssc_freq = 348 + dev_priv->vbt.int_tv_support = general->int_tv_support; 349 + dev_priv->vbt.int_crt_support = 
general->int_crt_support; 350 + dev_priv->vbt.lvds_use_ssc = general->enable_ssc; 351 + dev_priv->vbt.lvds_ssc_freq = 352 352 intel_bios_ssc_frequency(dev, general->ssc_freq); 353 - dev_priv->display_clock_mode = general->display_clock_mode; 354 - dev_priv->fdi_rx_polarity_inverted = general->fdi_rx_polarity_inverted; 353 + dev_priv->vbt.display_clock_mode = general->display_clock_mode; 354 + dev_priv->vbt.fdi_rx_polarity_inverted = general->fdi_rx_polarity_inverted; 355 355 DRM_DEBUG_KMS("BDB_GENERAL_FEATURES int_tv_support %d int_crt_support %d lvds_use_ssc %d lvds_ssc_freq %d display_clock_mode %d fdi_rx_polarity_inverted %d\n", 356 - dev_priv->int_tv_support, 357 - dev_priv->int_crt_support, 358 - dev_priv->lvds_use_ssc, 359 - dev_priv->lvds_ssc_freq, 360 - dev_priv->display_clock_mode, 361 - dev_priv->fdi_rx_polarity_inverted); 356 + dev_priv->vbt.int_tv_support, 357 + dev_priv->vbt.int_crt_support, 358 + dev_priv->vbt.lvds_use_ssc, 359 + dev_priv->vbt.lvds_ssc_freq, 360 + dev_priv->vbt.display_clock_mode, 361 + dev_priv->vbt.fdi_rx_polarity_inverted); 362 362 } 363 363 } 364 364 ··· 375 375 int bus_pin = general->crt_ddc_gmbus_pin; 376 376 DRM_DEBUG_KMS("crt_ddc_bus_pin: %d\n", bus_pin); 377 377 if (intel_gmbus_is_port_valid(bus_pin)) 378 - dev_priv->crt_ddc_pin = bus_pin; 378 + dev_priv->vbt.crt_ddc_pin = bus_pin; 379 379 } else { 380 380 DRM_DEBUG_KMS("BDB_GD too small (%d). 
Invalid.\n", 381 381 block_size); ··· 486 486 487 487 if (SUPPORTS_EDP(dev) && 488 488 driver->lvds_config == BDB_DRIVER_FEATURE_EDP) 489 - dev_priv->edp.support = 1; 489 + dev_priv->vbt.edp_support = 1; 490 490 491 491 if (driver->dual_frequency) 492 492 dev_priv->render_reclock_avail = true; ··· 501 501 502 502 edp = find_section(bdb, BDB_EDP); 503 503 if (!edp) { 504 - if (SUPPORTS_EDP(dev_priv->dev) && dev_priv->edp.support) 504 + if (SUPPORTS_EDP(dev_priv->dev) && dev_priv->vbt.edp_support) 505 505 DRM_DEBUG_KMS("No eDP BDB found but eDP panel supported.\n"); 506 506 return; 507 507 } 508 508 509 509 switch ((edp->color_depth >> (panel_type * 2)) & 3) { 510 510 case EDP_18BPP: 511 - dev_priv->edp.bpp = 18; 511 + dev_priv->vbt.edp_bpp = 18; 512 512 break; 513 513 case EDP_24BPP: 514 - dev_priv->edp.bpp = 24; 514 + dev_priv->vbt.edp_bpp = 24; 515 515 break; 516 516 case EDP_30BPP: 517 - dev_priv->edp.bpp = 30; 517 + dev_priv->vbt.edp_bpp = 30; 518 518 break; 519 519 } 520 520 ··· 522 522 edp_pps = &edp->power_seqs[panel_type]; 523 523 edp_link_params = &edp->link_params[panel_type]; 524 524 525 - dev_priv->edp.pps = *edp_pps; 525 + dev_priv->vbt.edp_pps = *edp_pps; 526 526 527 - dev_priv->edp.rate = edp_link_params->rate ? DP_LINK_BW_2_7 : 527 + dev_priv->vbt.edp_rate = edp_link_params->rate ? 
DP_LINK_BW_2_7 : 528 528 DP_LINK_BW_1_62; 529 529 switch (edp_link_params->lanes) { 530 530 case 0: 531 - dev_priv->edp.lanes = 1; 531 + dev_priv->vbt.edp_lanes = 1; 532 532 break; 533 533 case 1: 534 - dev_priv->edp.lanes = 2; 534 + dev_priv->vbt.edp_lanes = 2; 535 535 break; 536 536 case 3: 537 537 default: 538 - dev_priv->edp.lanes = 4; 538 + dev_priv->vbt.edp_lanes = 4; 539 539 break; 540 540 } 541 541 switch (edp_link_params->preemphasis) { 542 542 case 0: 543 - dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_0; 543 + dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_0; 544 544 break; 545 545 case 1: 546 - dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_3_5; 546 + dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_3_5; 547 547 break; 548 548 case 2: 549 - dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_6; 549 + dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_6; 550 550 break; 551 551 case 3: 552 - dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_9_5; 552 + dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_9_5; 553 553 break; 554 554 } 555 555 switch (edp_link_params->vswing) { 556 556 case 0: 557 - dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_400; 557 + dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_400; 558 558 break; 559 559 case 1: 560 - dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_600; 560 + dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_600; 561 561 break; 562 562 case 2: 563 - dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_800; 563 + dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_800; 564 564 break; 565 565 case 3: 566 - dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_1200; 566 + dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_1200; 567 567 break; 568 568 } 569 569 } ··· 611 611 DRM_DEBUG_KMS("no child dev is parsed from VBT\n"); 612 612 return; 613 613 } 614 - dev_priv->child_dev = kcalloc(count, sizeof(*p_child), GFP_KERNEL); 615 - if (!dev_priv->child_dev) { 614 + dev_priv->vbt.child_dev = kcalloc(count, 
sizeof(*p_child), GFP_KERNEL); 615 + if (!dev_priv->vbt.child_dev) { 616 616 DRM_DEBUG_KMS("No memory space for child device\n"); 617 617 return; 618 618 } 619 619 620 - dev_priv->child_dev_num = count; 620 + dev_priv->vbt.child_dev_num = count; 621 621 count = 0; 622 622 for (i = 0; i < child_device_num; i++) { 623 623 p_child = &(p_defs->devices[i]); ··· 625 625 /* skip the device block if device type is invalid */ 626 626 continue; 627 627 } 628 - child_dev_ptr = dev_priv->child_dev + count; 628 + child_dev_ptr = dev_priv->vbt.child_dev + count; 629 629 count++; 630 630 memcpy((void *)child_dev_ptr, (void *)p_child, 631 631 sizeof(*p_child)); ··· 638 638 { 639 639 struct drm_device *dev = dev_priv->dev; 640 640 641 - dev_priv->crt_ddc_pin = GMBUS_PORT_VGADDC; 641 + dev_priv->vbt.crt_ddc_pin = GMBUS_PORT_VGADDC; 642 642 643 643 /* LFP panel data */ 644 - dev_priv->lvds_dither = 1; 645 - dev_priv->lvds_vbt = 0; 644 + dev_priv->vbt.lvds_dither = 1; 645 + dev_priv->vbt.lvds_vbt = 0; 646 646 647 647 /* SDVO panel data */ 648 - dev_priv->sdvo_lvds_vbt_mode = NULL; 648 + dev_priv->vbt.sdvo_lvds_vbt_mode = NULL; 649 649 650 650 /* general features */ 651 - dev_priv->int_tv_support = 1; 652 - dev_priv->int_crt_support = 1; 651 + dev_priv->vbt.int_tv_support = 1; 652 + dev_priv->vbt.int_crt_support = 1; 653 653 654 654 /* Default to using SSC */ 655 - dev_priv->lvds_use_ssc = 1; 656 - dev_priv->lvds_ssc_freq = intel_bios_ssc_frequency(dev, 1); 657 - DRM_DEBUG_KMS("Set default to SSC at %dMHz\n", dev_priv->lvds_ssc_freq); 655 + dev_priv->vbt.lvds_use_ssc = 1; 656 + dev_priv->vbt.lvds_ssc_freq = intel_bios_ssc_frequency(dev, 1); 657 + DRM_DEBUG_KMS("Set default to SSC at %dMHz\n", dev_priv->vbt.lvds_ssc_freq); 658 658 } 659 659 660 660 static int __init intel_no_opregion_vbt_callback(const struct dmi_system_id *id)
+6 -2
drivers/gpu/drm/i915/intel_crt.c
··· 207 207 if (HAS_PCH_SPLIT(dev)) 208 208 pipe_config->has_pch_encoder = true; 209 209 210 + /* LPT FDI RX only supports 8bpc. */ 211 + if (HAS_PCH_LPT(dev)) 212 + pipe_config->pipe_bpp = 24; 213 + 210 214 return true; 211 215 } 212 216 ··· 435 431 436 432 BUG_ON(crt->base.type != INTEL_OUTPUT_ANALOG); 437 433 438 - i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->crt_ddc_pin); 434 + i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->vbt.crt_ddc_pin); 439 435 edid = intel_crt_get_edid(connector, i2c); 440 436 441 437 if (edid) { ··· 641 637 int ret; 642 638 struct i2c_adapter *i2c; 643 639 644 - i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->crt_ddc_pin); 640 + i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->vbt.crt_ddc_pin); 645 641 ret = intel_crt_ddc_get_modes(connector, i2c); 646 642 if (ret || !IS_G4X(dev)) 647 643 return ret;
+247 -463
drivers/gpu/drm/i915/intel_ddi.c
··· 174 174 * mode set "sequence for CRT port" document: 175 175 * - TP1 to TP2 time with the default value 176 176 * - FDI delay to 90h 177 + * 178 + * WaFDIAutoLinkSetTimingOverrride:hsw 177 179 */ 178 180 I915_WRITE(_FDI_RXA_MISC, FDI_RX_PWRDN_LANE1_VAL(2) | 179 181 FDI_RX_PWRDN_LANE0_VAL(2) | ··· 183 181 184 182 /* Enable the PCH Receiver FDI PLL */ 185 183 rx_ctl_val = dev_priv->fdi_rx_config | FDI_RX_ENHANCE_FRAME_ENABLE | 186 - FDI_RX_PLL_ENABLE | ((intel_crtc->fdi_lanes - 1) << 19); 184 + FDI_RX_PLL_ENABLE | 185 + FDI_DP_PORT_WIDTH(intel_crtc->config.fdi_lanes); 187 186 I915_WRITE(_FDI_RXA_CTL, rx_ctl_val); 188 187 POSTING_READ(_FDI_RXA_CTL); 189 188 udelay(220); ··· 212 209 * port reversal bit */ 213 210 I915_WRITE(DDI_BUF_CTL(PORT_E), 214 211 DDI_BUF_CTL_ENABLE | 215 - ((intel_crtc->fdi_lanes - 1) << 1) | 212 + ((intel_crtc->config.fdi_lanes - 1) << 1) | 216 213 hsw_ddi_buf_ctl_values[i / 2]); 217 214 POSTING_READ(DDI_BUF_CTL(PORT_E)); 218 215 ··· 281 278 DRM_ERROR("FDI link training failed!\n"); 282 279 } 283 280 284 - /* WRPLL clock dividers */ 285 - struct wrpll_tmds_clock { 286 - u32 clock; 287 - u16 p; /* Post divider */ 288 - u16 n2; /* Feedback divider */ 289 - u16 r2; /* Reference divider */ 290 - }; 291 - 292 - /* Table of matching values for WRPLL clocks programming for each frequency. 293 - * The code assumes this table is sorted. 
*/ 294 - static const struct wrpll_tmds_clock wrpll_tmds_clock_table[] = { 295 - {19750, 38, 25, 18}, 296 - {20000, 48, 32, 18}, 297 - {21000, 36, 21, 15}, 298 - {21912, 42, 29, 17}, 299 - {22000, 36, 22, 15}, 300 - {23000, 36, 23, 15}, 301 - {23500, 40, 40, 23}, 302 - {23750, 26, 16, 14}, 303 - {24000, 36, 24, 15}, 304 - {25000, 36, 25, 15}, 305 - {25175, 26, 40, 33}, 306 - {25200, 30, 21, 15}, 307 - {26000, 36, 26, 15}, 308 - {27000, 30, 21, 14}, 309 - {27027, 18, 100, 111}, 310 - {27500, 30, 29, 19}, 311 - {28000, 34, 30, 17}, 312 - {28320, 26, 30, 22}, 313 - {28322, 32, 42, 25}, 314 - {28750, 24, 23, 18}, 315 - {29000, 30, 29, 18}, 316 - {29750, 32, 30, 17}, 317 - {30000, 30, 25, 15}, 318 - {30750, 30, 41, 24}, 319 - {31000, 30, 31, 18}, 320 - {31500, 30, 28, 16}, 321 - {32000, 30, 32, 18}, 322 - {32500, 28, 32, 19}, 323 - {33000, 24, 22, 15}, 324 - {34000, 28, 30, 17}, 325 - {35000, 26, 32, 19}, 326 - {35500, 24, 30, 19}, 327 - {36000, 26, 26, 15}, 328 - {36750, 26, 46, 26}, 329 - {37000, 24, 23, 14}, 330 - {37762, 22, 40, 26}, 331 - {37800, 20, 21, 15}, 332 - {38000, 24, 27, 16}, 333 - {38250, 24, 34, 20}, 334 - {39000, 24, 26, 15}, 335 - {40000, 24, 32, 18}, 336 - {40500, 20, 21, 14}, 337 - {40541, 22, 147, 89}, 338 - {40750, 18, 19, 14}, 339 - {41000, 16, 17, 14}, 340 - {41500, 22, 44, 26}, 341 - {41540, 22, 44, 26}, 342 - {42000, 18, 21, 15}, 343 - {42500, 22, 45, 26}, 344 - {43000, 20, 43, 27}, 345 - {43163, 20, 24, 15}, 346 - {44000, 18, 22, 15}, 347 - {44900, 20, 108, 65}, 348 - {45000, 20, 25, 15}, 349 - {45250, 20, 52, 31}, 350 - {46000, 18, 23, 15}, 351 - {46750, 20, 45, 26}, 352 - {47000, 20, 40, 23}, 353 - {48000, 18, 24, 15}, 354 - {49000, 18, 49, 30}, 355 - {49500, 16, 22, 15}, 356 - {50000, 18, 25, 15}, 357 - {50500, 18, 32, 19}, 358 - {51000, 18, 34, 20}, 359 - {52000, 18, 26, 15}, 360 - {52406, 14, 34, 25}, 361 - {53000, 16, 22, 14}, 362 - {54000, 16, 24, 15}, 363 - {54054, 16, 173, 108}, 364 - {54500, 14, 24, 17}, 365 - {55000, 12, 22, 18}, 
-	{56000, 14, 45, 31},
-	{56250, 16, 25, 15},
-	{56750, 14, 25, 17},
-	{57000, 16, 27, 16},
-	{58000, 16, 43, 25},
-	{58250, 16, 38, 22},
-	{58750, 16, 40, 23},
-	{59000, 14, 26, 17},
-	{59341, 14, 40, 26},
-	{59400, 16, 44, 25},
-	{60000, 16, 32, 18},
-	{60500, 12, 39, 29},
-	{61000, 14, 49, 31},
-	{62000, 14, 37, 23},
-	{62250, 14, 42, 26},
-	{63000, 12, 21, 15},
-	{63500, 14, 28, 17},
-	{64000, 12, 27, 19},
-	{65000, 14, 32, 19},
-	{65250, 12, 29, 20},
-	{65500, 12, 32, 22},
-	{66000, 12, 22, 15},
-	{66667, 14, 38, 22},
-	{66750, 10, 21, 17},
-	{67000, 14, 33, 19},
-	{67750, 14, 58, 33},
-	{68000, 14, 30, 17},
-	{68179, 14, 46, 26},
-	{68250, 14, 46, 26},
-	{69000, 12, 23, 15},
-	{70000, 12, 28, 18},
-	{71000, 12, 30, 19},
-	{72000, 12, 24, 15},
-	{73000, 10, 23, 17},
-	{74000, 12, 23, 14},
-	{74176, 8, 100, 91},
-	{74250, 10, 22, 16},
-	{74481, 12, 43, 26},
-	{74500, 10, 29, 21},
-	{75000, 12, 25, 15},
-	{75250, 10, 39, 28},
-	{76000, 12, 27, 16},
-	{77000, 12, 53, 31},
-	{78000, 12, 26, 15},
-	{78750, 12, 28, 16},
-	{79000, 10, 38, 26},
-	{79500, 10, 28, 19},
-	{80000, 12, 32, 18},
-	{81000, 10, 21, 14},
-	{81081, 6, 100, 111},
-	{81624, 8, 29, 24},
-	{82000, 8, 17, 14},
-	{83000, 10, 40, 26},
-	{83950, 10, 28, 18},
-	{84000, 10, 28, 18},
-	{84750, 6, 16, 17},
-	{85000, 6, 17, 18},
-	{85250, 10, 30, 19},
-	{85750, 10, 27, 17},
-	{86000, 10, 43, 27},
-	{87000, 10, 29, 18},
-	{88000, 10, 44, 27},
-	{88500, 10, 41, 25},
-	{89000, 10, 28, 17},
-	{89012, 6, 90, 91},
-	{89100, 10, 33, 20},
-	{90000, 10, 25, 15},
-	{91000, 10, 32, 19},
-	{92000, 10, 46, 27},
-	{93000, 10, 31, 18},
-	{94000, 10, 40, 23},
-	{94500, 10, 28, 16},
-	{95000, 10, 44, 25},
-	{95654, 10, 39, 22},
-	{95750, 10, 39, 22},
-	{96000, 10, 32, 18},
-	{97000, 8, 23, 16},
-	{97750, 8, 42, 29},
-	{98000, 8, 45, 31},
-	{99000, 8, 22, 15},
-	{99750, 8, 34, 23},
-	{100000, 6, 20, 18},
-	{100500, 6, 19, 17},
-	{101000, 6, 37, 33},
-	{101250, 8, 21, 14},
-	{102000, 6, 17, 15},
-	{102250, 6, 25, 22},
-	{103000, 8, 29, 19},
-	{104000, 8, 37, 24},
-	{105000, 8, 28, 18},
-	{106000, 8, 22, 14},
-	{107000, 8, 46, 29},
-	{107214, 8, 27, 17},
-	{108000, 8, 24, 15},
-	{108108, 8, 173, 108},
-	{109000, 6, 23, 19},
-	{110000, 6, 22, 18},
-	{110013, 6, 22, 18},
-	{110250, 8, 49, 30},
-	{110500, 8, 36, 22},
-	{111000, 8, 23, 14},
-	{111264, 8, 150, 91},
-	{111375, 8, 33, 20},
-	{112000, 8, 63, 38},
-	{112500, 8, 25, 15},
-	{113100, 8, 57, 34},
-	{113309, 8, 42, 25},
-	{114000, 8, 27, 16},
-	{115000, 6, 23, 18},
-	{116000, 8, 43, 25},
-	{117000, 8, 26, 15},
-	{117500, 8, 40, 23},
-	{118000, 6, 38, 29},
-	{119000, 8, 30, 17},
-	{119500, 8, 46, 26},
-	{119651, 8, 39, 22},
-	{120000, 8, 32, 18},
-	{121000, 6, 39, 29},
-	{121250, 6, 31, 23},
-	{121750, 6, 23, 17},
-	{122000, 6, 42, 31},
-	{122614, 6, 30, 22},
-	{123000, 6, 41, 30},
-	{123379, 6, 37, 27},
-	{124000, 6, 51, 37},
-	{125000, 6, 25, 18},
-	{125250, 4, 13, 14},
-	{125750, 4, 27, 29},
-	{126000, 6, 21, 15},
-	{127000, 6, 24, 17},
-	{127250, 6, 41, 29},
-	{128000, 6, 27, 19},
-	{129000, 6, 43, 30},
-	{129859, 4, 25, 26},
-	{130000, 6, 26, 18},
-	{130250, 6, 42, 29},
-	{131000, 6, 32, 22},
-	{131500, 6, 38, 26},
-	{131850, 6, 41, 28},
-	{132000, 6, 22, 15},
-	{132750, 6, 28, 19},
-	{133000, 6, 34, 23},
-	{133330, 6, 37, 25},
-	{134000, 6, 61, 41},
-	{135000, 6, 21, 14},
-	{135250, 6, 167, 111},
-	{136000, 6, 62, 41},
-	{137000, 6, 35, 23},
-	{138000, 6, 23, 15},
-	{138500, 6, 40, 26},
-	{138750, 6, 37, 24},
-	{139000, 6, 34, 22},
-	{139050, 6, 34, 22},
-	{139054, 6, 34, 22},
-	{140000, 6, 28, 18},
-	{141000, 6, 36, 23},
-	{141500, 6, 22, 14},
-	{142000, 6, 30, 19},
-	{143000, 6, 27, 17},
-	{143472, 4, 17, 16},
-	{144000, 6, 24, 15},
-	{145000, 6, 29, 18},
-	{146000, 6, 47, 29},
-	{146250, 6, 26, 16},
-	{147000, 6, 49, 30},
-	{147891, 6, 23, 14},
-	{148000, 6, 23, 14},
-	{148250, 6, 28, 17},
-	{148352, 4, 100, 91},
-	{148500, 6, 33, 20},
-	{149000, 6, 48, 29},
-	{150000, 6, 25, 15},
-	{151000, 4, 19, 17},
-	{152000, 6, 27, 16},
-	{152280, 6, 44, 26},
-	{153000, 6, 34, 20},
-	{154000, 6, 53, 31},
-	{155000, 6, 31, 18},
-	{155250, 6, 50, 29},
-	{155750, 6, 45, 26},
-	{156000, 6, 26, 15},
-	{157000, 6, 61, 35},
-	{157500, 6, 28, 16},
-	{158000, 6, 65, 37},
-	{158250, 6, 44, 25},
-	{159000, 6, 53, 30},
-	{159500, 6, 39, 22},
-	{160000, 6, 32, 18},
-	{161000, 4, 31, 26},
-	{162000, 4, 18, 15},
-	{162162, 4, 131, 109},
-	{162500, 4, 53, 44},
-	{163000, 4, 29, 24},
-	{164000, 4, 17, 14},
-	{165000, 4, 22, 18},
-	{166000, 4, 32, 26},
-	{167000, 4, 26, 21},
-	{168000, 4, 46, 37},
-	{169000, 4, 104, 83},
-	{169128, 4, 64, 51},
-	{169500, 4, 39, 31},
-	{170000, 4, 34, 27},
-	{171000, 4, 19, 15},
-	{172000, 4, 51, 40},
-	{172750, 4, 32, 25},
-	{172800, 4, 32, 25},
-	{173000, 4, 41, 32},
-	{174000, 4, 49, 38},
-	{174787, 4, 22, 17},
-	{175000, 4, 35, 27},
-	{176000, 4, 30, 23},
-	{177000, 4, 38, 29},
-	{178000, 4, 29, 22},
-	{178500, 4, 37, 28},
-	{179000, 4, 53, 40},
-	{179500, 4, 73, 55},
-	{180000, 4, 20, 15},
-	{181000, 4, 55, 41},
-	{182000, 4, 31, 23},
-	{183000, 4, 42, 31},
-	{184000, 4, 30, 22},
-	{184750, 4, 26, 19},
-	{185000, 4, 37, 27},
-	{186000, 4, 51, 37},
-	{187000, 4, 36, 26},
-	{188000, 4, 32, 23},
-	{189000, 4, 21, 15},
-	{190000, 4, 38, 27},
-	{190960, 4, 41, 29},
-	{191000, 4, 41, 29},
-	{192000, 4, 27, 19},
-	{192250, 4, 37, 26},
-	{193000, 4, 20, 14},
-	{193250, 4, 53, 37},
-	{194000, 4, 23, 16},
-	{194208, 4, 23, 16},
-	{195000, 4, 26, 18},
-	{196000, 4, 45, 31},
-	{197000, 4, 35, 24},
-	{197750, 4, 41, 28},
-	{198000, 4, 22, 15},
-	{198500, 4, 25, 17},
-	{199000, 4, 28, 19},
-	{200000, 4, 37, 25},
-	{201000, 4, 61, 41},
-	{202000, 4, 112, 75},
-	{202500, 4, 21, 14},
-	{203000, 4, 146, 97},
-	{204000, 4, 62, 41},
-	{204750, 4, 44, 29},
-	{205000, 4, 38, 25},
-	{206000, 4, 29, 19},
-	{207000, 4, 23, 15},
-	{207500, 4, 40, 26},
-	{208000, 4, 37, 24},
-	{208900, 4, 48, 31},
-	{209000, 4, 48, 31},
-	{209250, 4, 31, 20},
-	{210000, 4, 28, 18},
-	{211000, 4, 25, 16},
-	{212000, 4, 22, 14},
-	{213000, 4, 30, 19},
-	{213750, 4, 38, 24},
-	{214000, 4, 46, 29},
-	{214750, 4, 35, 22},
-	{215000, 4, 43, 27},
-	{216000, 4, 24, 15},
-	{217000, 4, 37, 23},
-	{218000, 4, 42, 26},
-	{218250, 4, 42, 26},
-	{218750, 4, 34, 21},
-	{219000, 4, 47, 29},
-	{220000, 4, 44, 27},
-	{220640, 4, 49, 30},
-	{220750, 4, 36, 22},
-	{221000, 4, 36, 22},
-	{222000, 4, 23, 14},
-	{222525, 4, 28, 17},
-	{222750, 4, 33, 20},
-	{227000, 4, 37, 22},
-	{230250, 4, 29, 17},
-	{233500, 4, 38, 22},
-	{235000, 4, 40, 23},
-	{238000, 4, 30, 17},
-	{241500, 2, 17, 19},
-	{245250, 2, 20, 22},
-	{247750, 2, 22, 24},
-	{253250, 2, 15, 16},
-	{256250, 2, 18, 19},
-	{262500, 2, 31, 32},
-	{267250, 2, 66, 67},
-	{268500, 2, 94, 95},
-	{270000, 2, 14, 14},
-	{272500, 2, 77, 76},
-	{273750, 2, 57, 56},
-	{280750, 2, 24, 23},
-	{281250, 2, 23, 22},
-	{286000, 2, 17, 16},
-	{291750, 2, 26, 24},
-	{296703, 2, 56, 51},
-	{297000, 2, 22, 20},
-	{298000, 2, 21, 19},
-};
-
 static void intel_ddi_mode_set(struct drm_encoder *encoder,
 			       struct drm_display_mode *mode,
 			       struct drm_display_mode *adjusted_mode)
···
 	int pipe = intel_crtc->pipe;
 	int type = intel_encoder->type;

-	DRM_DEBUG_KMS("Preparing DDI mode for Haswell on port %c, pipe %c\n",
+	DRM_DEBUG_KMS("Preparing DDI mode on port %c, pipe %c\n",
 		      port_name(port), pipe_name(pipe));

 	intel_crtc->eld_vld = false;
···
 	intel_dp->DP = intel_dig_port->port_reversal |
 		       DDI_BUF_CTL_ENABLE | DDI_BUF_EMP_400MV_0DB_HSW;
-	switch (intel_dp->lane_count) {
-	case 1:
-		intel_dp->DP |= DDI_PORT_WIDTH_X1;
-		break;
-	case 2:
-		intel_dp->DP |= DDI_PORT_WIDTH_X2;
-		break;
-	case 4:
-		intel_dp->DP |= DDI_PORT_WIDTH_X4;
-		break;
-	default:
-		intel_dp->DP |= DDI_PORT_WIDTH_X4;
-		WARN(1, "Unexpected DP lane count %d\n",
-		     intel_dp->lane_count);
-		break;
-	}
+	intel_dp->DP |= DDI_PORT_WIDTH(intel_dp->lane_count);

 	if (intel_dp->has_audio) {
 		DRM_DEBUG_DRIVER("DP audio on pipe %c on DDI\n",
···
 	}

 	if (num_encoders != 1)
-		WARN(1, "%d encoders on crtc for pipe %d\n", num_encoders,
-		     intel_crtc->pipe);
+		WARN(1, "%d encoders on crtc for pipe %c\n", num_encoders,
+		     pipe_name(intel_crtc->pipe));

 	BUG_ON(ret == NULL);
 	return ret;
···
 	intel_crtc->ddi_pll_sel = PORT_CLK_SEL_NONE;
 }

-static void intel_ddi_calculate_wrpll(int clock, int *p, int *n2, int *r2)
+#define LC_FREQ 2700
+#define LC_FREQ_2K (LC_FREQ * 2000)
+
+#define P_MIN 2
+#define P_MAX 64
+#define P_INC 2
+
+/* Constraints for PLL good behavior */
+#define REF_MIN 48
+#define REF_MAX 400
+#define VCO_MIN 2400
+#define VCO_MAX 4800
+
+#define ABS_DIFF(a, b) ((a > b) ? (a - b) : (b - a))
+
+struct wrpll_rnp {
+	unsigned p, n2, r2;
+};
+
+static unsigned wrpll_get_budget_for_freq(int clock)
 {
-	u32 i;
+	unsigned budget;

-	for (i = 0; i < ARRAY_SIZE(wrpll_tmds_clock_table); i++)
-		if (clock <= wrpll_tmds_clock_table[i].clock)
-			break;
+	switch (clock) {
+	case 25175000:
+	case 25200000:
+	case 27000000:
+	case 27027000:
+	case 37762500:
+	case 37800000:
+	case 40500000:
+	case 40541000:
+	case 54000000:
+	case 54054000:
+	case 59341000:
+	case 59400000:
+	case 72000000:
+	case 74176000:
+	case 74250000:
+	case 81000000:
+	case 81081000:
+	case 89012000:
+	case 89100000:
+	case 108000000:
+	case 108108000:
+	case 111264000:
+	case 111375000:
+	case 148352000:
+	case 148500000:
+	case 162000000:
+	case 162162000:
+	case 222525000:
+	case 222750000:
+	case 296703000:
+	case 297000000:
+		budget = 0;
+		break;
+	case 233500000:
+	case 245250000:
+	case 247750000:
+	case 253250000:
+	case 298000000:
+		budget = 1500;
+		break;
+	case 169128000:
+	case 169500000:
+	case 179500000:
+	case 202000000:
+		budget = 2000;
+		break;
+	case 256250000:
+	case 262500000:
+	case 270000000:
+	case 272500000:
+	case 273750000:
+	case 280750000:
+	case 281250000:
+	case 286000000:
+	case 291750000:
+		budget = 4000;
+		break;
+	case 267250000:
+	case 268500000:
+		budget = 5000;
+		break;
+	default:
+		budget = 1000;
+		break;
+	}

-	if (i == ARRAY_SIZE(wrpll_tmds_clock_table))
-		i--;
+	return budget;
+}

-	*p = wrpll_tmds_clock_table[i].p;
-	*n2 = wrpll_tmds_clock_table[i].n2;
-	*r2 = wrpll_tmds_clock_table[i].r2;
+static void wrpll_update_rnp(uint64_t freq2k, unsigned budget,
+			     unsigned r2, unsigned n2, unsigned p,
+			     struct wrpll_rnp *best)
+{
+	uint64_t a, b, c, d, diff, diff_best;

-	if (wrpll_tmds_clock_table[i].clock != clock)
-		DRM_INFO("WRPLL: using settings for %dKHz on %dKHz mode\n",
-			 wrpll_tmds_clock_table[i].clock, clock);
+	/* No best (r,n,p) yet */
+	if (best->p == 0) {
+		best->p = p;
+		best->n2 = n2;
+		best->r2 = r2;
+		return;
+	}

-	DRM_DEBUG_KMS("WRPLL: %dKHz refresh rate with p=%d, n2=%d r2=%d\n",
-		      clock, *p, *n2, *r2);
+	/*
+	 * Output clock is (LC_FREQ_2K / 2000) * N / (P * R), which compares to
+	 * freq2k.
+	 *
+	 * delta = 1e6 *
+	 *	   abs(freq2k - (LC_FREQ_2K * n2/(p * r2))) /
+	 *	   freq2k;
+	 *
+	 * and we would like delta <= budget.
+	 *
+	 * If the discrepancy is above the PPM-based budget, always prefer to
+	 * improve upon the previous solution. However, if you're within the
+	 * budget, try to maximize Ref * VCO, that is N / (P * R^2).
+	 */
+	a = freq2k * budget * p * r2;
+	b = freq2k * budget * best->p * best->r2;
+	diff = ABS_DIFF((freq2k * p * r2), (LC_FREQ_2K * n2));
+	diff_best = ABS_DIFF((freq2k * best->p * best->r2),
+			     (LC_FREQ_2K * best->n2));
+	c = 1000000 * diff;
+	d = 1000000 * diff_best;
+
+	if (a < c && b < d) {
+		/* If both are above the budget, pick the closer */
+		if (best->p * best->r2 * diff < p * r2 * diff_best) {
+			best->p = p;
+			best->n2 = n2;
+			best->r2 = r2;
+		}
+	} else if (a >= c && b < d) {
+		/* If A is below the threshold but B is above it?  Update. */
+		best->p = p;
+		best->n2 = n2;
+		best->r2 = r2;
+	} else if (a >= c && b >= d) {
+		/* Both are below the limit, so pick the higher n2/(r2*r2) */
+		if (n2 * best->r2 * best->r2 > best->n2 * r2 * r2) {
+			best->p = p;
+			best->n2 = n2;
+			best->r2 = r2;
+		}
+	}
+	/* Otherwise a < c && b >= d, do nothing */
+}
+
+static void
+intel_ddi_calculate_wrpll(int clock /* in Hz */,
+			  unsigned *r2_out, unsigned *n2_out, unsigned *p_out)
+{
+	uint64_t freq2k;
+	unsigned p, n2, r2;
+	struct wrpll_rnp best = { 0, 0, 0 };
+	unsigned budget;
+
+	freq2k = clock / 100;
+
+	budget = wrpll_get_budget_for_freq(clock);
+
+	/* Special case handling for 540 pixel clock: bypass WR PLL entirely
+	 * and directly pass the LC PLL to it. */
+	if (freq2k == 5400000) {
+		*n2_out = 2;
+		*p_out = 1;
+		*r2_out = 2;
+		return;
+	}
+
+	/*
+	 * Ref = LC_FREQ / R, where Ref is the actual reference input seen by
+	 * the WR PLL.
+	 *
+	 * We want R so that REF_MIN <= Ref <= REF_MAX.
+	 * Injecting R2 = 2 * R gives:
+	 *   REF_MAX * r2 > LC_FREQ * 2 and
+	 *   REF_MIN * r2 < LC_FREQ * 2
+	 *
+	 * Which means the desired boundaries for r2 are:
+	 *   LC_FREQ * 2 / REF_MAX < r2 < LC_FREQ * 2 / REF_MIN
+	 *
+	 */
+	for (r2 = LC_FREQ * 2 / REF_MAX + 1;
+	     r2 <= LC_FREQ * 2 / REF_MIN;
+	     r2++) {
+
+		/*
+		 * VCO = N * Ref, that is: VCO = N * LC_FREQ / R
+		 *
+		 * Once again we want VCO_MIN <= VCO <= VCO_MAX.
+		 * Injecting R2 = 2 * R and N2 = 2 * N, we get:
+		 *   VCO_MAX * r2 > n2 * LC_FREQ and
+		 *   VCO_MIN * r2 < n2 * LC_FREQ)
+		 *
+		 * Which means the desired boundaries for n2 are:
+		 *   VCO_MIN * r2 / LC_FREQ < n2 < VCO_MAX * r2 / LC_FREQ
+		 */
+		for (n2 = VCO_MIN * r2 / LC_FREQ + 1;
+		     n2 <= VCO_MAX * r2 / LC_FREQ;
+		     n2++) {
+
+			for (p = P_MIN; p <= P_MAX; p += P_INC)
+				wrpll_update_rnp(freq2k, budget,
+						 r2, n2, p, &best);
+		}
+	}
+
+	*n2_out = best.n2;
+	*p_out = best.p;
+	*r2_out = best.r2;
+
+	DRM_DEBUG_KMS("WRPLL: %dHz refresh rate with p=%d, n2=%d r2=%d\n",
+		      clock, *p_out, *n2_out, *r2_out);
 }

 bool intel_ddi_pll_mode_set(struct drm_crtc *crtc, int clock)
···
 		return true;

 	} else if (type == INTEL_OUTPUT_HDMI) {
-		int p, n2, r2;
+		unsigned p, n2, r2;

 		if (plls->wrpll1_refcount == 0) {
 			DRM_DEBUG_KMS("Using WRPLL 1 on pipe %c\n",
···
 		WARN(I915_READ(reg) & WRPLL_PLL_ENABLE,
 		     "WRPLL already enabled\n");

-		intel_ddi_calculate_wrpll(clock, &p, &n2, &r2);
+		intel_ddi_calculate_wrpll(clock * 1000, &r2, &n2, &p);

 		val = WRPLL_PLL_ENABLE | WRPLL_PLL_SELECT_LCPLL_2700 |
 		      WRPLL_DIVIDER_REFERENCE(r2) | WRPLL_DIVIDER_FEEDBACK(n2) |
···
 	/* Can only use the always-on power well for eDP when
 	 * not using the panel fitter, and when not using motion
 	 * blur mitigation (which we don't support).
 	 */
-	if (dev_priv->pch_pf_size)
+	if (intel_crtc->config.pch_pfit.size)
 		temp |= TRANS_DDI_EDP_INPUT_A_ONOFF;
 	else
 		temp |= TRANS_DDI_EDP_INPUT_A_ON;
···
 	} else if (type == INTEL_OUTPUT_ANALOG) {
 		temp |= TRANS_DDI_MODE_SELECT_FDI;
-		temp |= (intel_crtc->fdi_lanes - 1) << 1;
+		temp |= (intel_crtc->config.fdi_lanes - 1) << 1;

 	} else if (type == INTEL_OUTPUT_DISPLAYPORT ||
 		   type == INTEL_OUTPUT_EDP) {
···
 		temp |= TRANS_DDI_MODE_SELECT_DP_SST;

-		switch (intel_dp->lane_count) {
-		case 1:
-			temp |= TRANS_DDI_PORT_WIDTH_X1;
-			break;
-		case 2:
-			temp |= TRANS_DDI_PORT_WIDTH_X2;
-			break;
-		case 4:
-			temp |= TRANS_DDI_PORT_WIDTH_X4;
-			break;
-		default:
-			temp |= TRANS_DDI_PORT_WIDTH_X4;
-			WARN(1, "Unsupported lane count %d\n",
-			     intel_dp->lane_count);
-		}
-
+		temp |= DDI_PORT_WIDTH(intel_dp->lane_count);
 	} else {
-		WARN(1, "Invalid encoder type %d for pipe %d\n",
-		     intel_encoder->type, pipe);
+		WARN(1, "Invalid encoder type %d for pipe %c\n",
+		     intel_encoder->type, pipe_name(pipe));
 	}

 	I915_WRITE(TRANS_DDI_FUNC_CTL(cpu_transcoder), temp);
···
 		}
 	}

-	DRM_DEBUG_KMS("No pipe for ddi port %i found\n", port);
+	DRM_DEBUG_KMS("No pipe for ddi port %c found\n", port_name(port));

 	return false;
 }
···
 		ironlake_edp_backlight_on(intel_dp);
 	}

-	if (intel_crtc->eld_vld) {
+	if (intel_crtc->eld_vld && type != INTEL_OUTPUT_EDP) {
 		tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
 		tmp |= ((AUDIO_OUTPUT_ENABLE_A | AUDIO_ELD_VALID_A) << (pipe * 4));
 		I915_WRITE(HSW_AUD_PIN_ELD_CP_VLD, tmp);
···
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	uint32_t tmp;

-	tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
-	tmp &= ~((AUDIO_OUTPUT_ENABLE_A | AUDIO_ELD_VALID_A) << (pipe * 4));
-	I915_WRITE(HSW_AUD_PIN_ELD_CP_VLD, tmp);
+	if (intel_crtc->eld_vld && type != INTEL_OUTPUT_EDP) {
+		tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
+		tmp &= ~((AUDIO_OUTPUT_ENABLE_A | AUDIO_ELD_VALID_A) <<
+			 (pipe * 4));
+		I915_WRITE(HSW_AUD_PIN_ELD_CP_VLD, tmp);
+	}

 	if (type == INTEL_OUTPUT_EDP) {
 		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
···
 		return;
 	}

-	if (port != PORT_A) {
-		hdmi_connector = kzalloc(sizeof(struct intel_connector),
-					 GFP_KERNEL);
-		if (!hdmi_connector) {
-			kfree(dp_connector);
-			kfree(intel_dig_port);
-			return;
-		}
-	}
-
 	intel_encoder = &intel_dig_port->base;
 	encoder = &intel_encoder->base;
···
 	intel_dig_port->port = port;
 	intel_dig_port->port_reversal = I915_READ(DDI_BUF_CTL(port)) &
 					DDI_BUF_PORT_REVERSAL;
-	if (hdmi_connector)
-		intel_dig_port->hdmi.hdmi_reg = DDI_BUF_CTL(port);
 	intel_dig_port->dp.output_reg = DDI_BUF_CTL(port);

 	intel_encoder->type = INTEL_OUTPUT_UNKNOWN;
···
 	intel_encoder->cloneable = false;
 	intel_encoder->hot_plug = intel_ddi_hot_plug;

-	if (hdmi_connector)
-		intel_hdmi_init_connector(intel_dig_port, hdmi_connector);
 	intel_dp_init_connector(intel_dig_port, dp_connector);
+
+	if (intel_encoder->type != INTEL_OUTPUT_EDP) {
+		hdmi_connector = kzalloc(sizeof(struct intel_connector),
+					 GFP_KERNEL);
+		if (!hdmi_connector) {
+			return;
+		}
+
+		intel_dig_port->hdmi.hdmi_reg = DDI_BUF_CTL(port);
+		intel_hdmi_init_connector(intel_dig_port, hdmi_connector);
+	}
 }
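The WRPLL change above replaces the big TMDS lookup table with a brute-force search over the dividers (r2, n2, p), constrained so the reference and VCO frequencies stay in range. A minimal standalone sketch of that search follows; it uses the same LC_FREQ/REF/VCO constants as the patch, but the PPM-budget tie-breaking of wrpll_update_rnp() is deliberately reduced to a plain closest-match comparison, and the `wrpll_pick()` helper name is illustrative, not from the driver:

```c
#include <assert.h>
#include <stdint.h>

#define LC_FREQ    2700                 /* LC PLL reference, in MHz */
#define LC_FREQ_2K (LC_FREQ * 2000ULL)  /* same value in freq2k units */
#define REF_MIN    48
#define REF_MAX    400
#define VCO_MIN    2400
#define VCO_MAX    4800

struct rnp { unsigned p, n2, r2; };

/* Walk the same (r2, n2, p) candidate space as the patch and keep the
 * divider triple whose output clock is closest to freq2k (clock / 100 Hz).
 * The comparison is cross-multiplied so everything stays in integers. */
static struct rnp wrpll_pick(uint64_t freq2k)
{
	struct rnp best = { 0, 0, 0 };
	uint64_t best_diff = 0;
	unsigned r2, n2, p;
	int have = 0;

	/* r2 bounds keep Ref = LC_FREQ / R within [REF_MIN, REF_MAX] */
	for (r2 = LC_FREQ * 2 / REF_MAX + 1; r2 <= LC_FREQ * 2 / REF_MIN; r2++) {
		/* n2 bounds keep VCO = N * LC_FREQ / R within [VCO_MIN, VCO_MAX] */
		for (n2 = VCO_MIN * r2 / LC_FREQ + 1; n2 <= VCO_MAX * r2 / LC_FREQ; n2++) {
			for (p = 2; p <= 64; p += 2) {
				/* |target - output|, both scaled by p * r2 */
				uint64_t want = freq2k * p * r2;
				uint64_t out = LC_FREQ_2K * n2;
				uint64_t diff = want > out ? want - out : out - want;

				/* compare diff/(p*r2) against best_diff/(best.p*best.r2) */
				if (!have ||
				    diff * best.p * best.r2 < best_diff * p * r2) {
					best = (struct rnp){ p, n2, r2 };
					best_diff = diff;
					have = 1;
				}
			}
		}
	}
	return best;
}
```

For 148.5 MHz (freq2k = 1485000, a zero-budget mode in the patch) this search lands on an exact solution, p = 4, n2 = 22, r2 = 20: Ref = 2700 / 10 = 270 MHz and VCO = 11 * 270 = 2970 MHz, both inside the constraint windows.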
+933 -661
drivers/gpu/drm/i915/intel_display.c
···
 static void intel_crtc_update_cursor(struct drm_crtc *crtc, bool on);

 typedef struct {
-	/* given values */
-	int n;
-	int m1, m2;
-	int p1, p2;
-	/* derived values */
-	int dot;
-	int vco;
-	int m;
-	int p;
-} intel_clock_t;
-
-typedef struct {
 	int min, max;
 } intel_range_t;
···
 intel_g4x_find_best_PLL(const intel_limit_t *limit, struct drm_crtc *crtc,
 			int target, int refclk, intel_clock_t *match_clock,
 			intel_clock_t *best_clock);
-
-static bool
-intel_find_pll_g4x_dp(const intel_limit_t *, struct drm_crtc *crtc,
-		      int target, int refclk, intel_clock_t *match_clock,
-		      intel_clock_t *best_clock);
-static bool
-intel_find_pll_ironlake_dp(const intel_limit_t *, struct drm_crtc *crtc,
-			   int target, int refclk, intel_clock_t *match_clock,
-			   intel_clock_t *best_clock);

 static bool
 intel_vlv_find_best_pll(const intel_limit_t *limit, struct drm_crtc *crtc,
···
 	.find_pll = intel_g4x_find_best_PLL,
 };

-static const intel_limit_t intel_limits_g4x_display_port = {
-	.dot = { .min = 161670, .max = 227000 },
-	.vco = { .min = 1750000, .max = 3500000},
-	.n = { .min = 1, .max = 2 },
-	.m = { .min = 97, .max = 108 },
-	.m1 = { .min = 0x10, .max = 0x12 },
-	.m2 = { .min = 0x05, .max = 0x06 },
-	.p = { .min = 10, .max = 20 },
-	.p1 = { .min = 1, .max = 2},
-	.p2 = { .dot_limit = 0,
-		.p2_slow = 10, .p2_fast = 10 },
-	.find_pll = intel_find_pll_g4x_dp,
-};
-
 static const intel_limit_t intel_limits_pineview_sdvo = {
 	.dot = { .min = 20000, .max = 400000},
 	.vco = { .min = 1700000, .max = 3500000 },
···
 	.find_pll = intel_g4x_find_best_PLL,
 };

-static const intel_limit_t intel_limits_ironlake_display_port = {
-	.dot = { .min = 25000, .max = 350000 },
-	.vco = { .min = 1760000, .max = 3510000},
-	.n = { .min = 1, .max = 2 },
-	.m = { .min = 81, .max = 90 },
-	.m1 = { .min = 12, .max = 22 },
-	.m2 = { .min = 5, .max = 9 },
-	.p = { .min = 10, .max = 20 },
-	.p1 = { .min = 1, .max = 2},
-	.p2 = { .dot_limit = 0,
-		.p2_slow = 10, .p2_fast = 10 },
-	.find_pll = intel_find_pll_ironlake_dp,
-};
-
 static const intel_limit_t intel_limits_vlv_dac = {
 	.dot = { .min = 25000, .max = 270000 },
 	.vco = { .min = 4000000, .max = 6000000 },
···
 	.m1 = { .min = 2, .max = 3 },
 	.m2 = { .min = 11, .max = 156 },
 	.p = { .min = 10, .max = 30 },
-	.p1 = { .min = 2, .max = 3 },
+	.p1 = { .min = 1, .max = 3 },
 	.p2 = { .dot_limit = 270000,
 		.p2_slow = 2, .p2_fast = 20 },
 	.find_pll = intel_vlv_find_best_pll,
 };

 static const intel_limit_t intel_limits_vlv_hdmi = {
-	.dot = { .min = 20000, .max = 165000 },
-	.vco = { .min = 4000000, .max = 5994000},
+	.dot = { .min = 25000, .max = 270000 },
+	.vco = { .min = 4000000, .max = 6000000 },
 	.n = { .min = 1, .max = 7 },
 	.m = { .min = 60, .max = 300 }, /* guess */
 	.m1 = { .min = 2, .max = 3 },
···
 	.m1 = { .min = 2, .max = 3 },
 	.m2 = { .min = 11, .max = 156 },
 	.p = { .min = 10, .max = 30 },
-	.p1 = { .min = 2, .max = 3 },
+	.p1 = { .min = 1, .max = 3 },
 	.p2 = { .dot_limit = 270000,
 		.p2_slow = 2, .p2_fast = 20 },
 	.find_pll = intel_vlv_find_best_pll,
···
 	return I915_READ(DPIO_DATA);
 }

-static void intel_dpio_write(struct drm_i915_private *dev_priv, int reg,
-			     u32 val)
+void intel_dpio_write(struct drm_i915_private *dev_priv, int reg, u32 val)
 {
 	WARN_ON(!mutex_is_locked(&dev_priv->dpio_lock));
···
 		   DPIO_BYTE);
 	if (wait_for_atomic_us((I915_READ(DPIO_PKT) & DPIO_BUSY) == 0, 100))
 		DRM_ERROR("DPIO write wait timed out\n");
-}
-
-static void vlv_init_dpio(struct drm_device *dev)
-{
-	struct drm_i915_private *dev_priv = dev->dev_private;
-
-	/* Reset the DPIO config */
-	I915_WRITE(DPIO_CTL, 0);
-	POSTING_READ(DPIO_CTL);
-	I915_WRITE(DPIO_CTL, 1);
-	POSTING_READ(DPIO_CTL);
 }

 static const intel_limit_t *intel_ironlake_limit(struct drm_crtc *crtc,
···
 		else
 			limit = &intel_limits_ironlake_single_lvds;
 		}
-	} else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT) ||
-		   intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP))
-		limit = &intel_limits_ironlake_display_port;
-	else
+	} else
 		limit = &intel_limits_ironlake_dac;

 	return limit;
···
 		limit = &intel_limits_g4x_hdmi;
 	} else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_SDVO)) {
 		limit = &intel_limits_g4x_sdvo;
-	} else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT)) {
-		limit = &intel_limits_g4x_display_port;
 	} else /* The option is for other outputs */
 		limit = &intel_limits_i9xx_sdvo;
···
 	clock->dot = clock->vco / clock->p;
 }

+static uint32_t i9xx_dpll_compute_m(struct dpll *dpll)
+{
+	return 5 * (dpll->m1 + 2) + (dpll->m2 + 2);
+}
+
 static void intel_clock(struct drm_device *dev, int refclk, intel_clock_t *clock)
 {
 	if (IS_PINEVIEW(dev)) {
 		pineview_clock(refclk, clock);
 		return;
 	}
-	clock->m = 5 * (clock->m1 + 2) + (clock->m2 + 2);
+	clock->m = i9xx_dpll_compute_m(clock);
 	clock->p = clock->p1 * clock->p2;
 	clock->vco = refclk * clock->m / (clock->n + 2);
 	clock->dot = clock->vco / clock->p;
···
 	found = false;

 	if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) {
-		int lvds_reg;
-
-		if (HAS_PCH_SPLIT(dev))
-			lvds_reg = PCH_LVDS;
-		else
-			lvds_reg = LVDS;
 		if (intel_is_dual_link_lvds(dev))
 			clock.p2 = limit->p2.p2_fast;
 		else
···
 			if (!intel_PLL_is_valid(dev, limit,
 						&clock))
 				continue;
-			if (match_clock &&
-			    clock.p != match_clock->p)
-				continue;

 			this_err = abs(clock.dot - target);
 			if (this_err < err_most) {
···
 	return found;
 }

-static bool
-intel_find_pll_ironlake_dp(const intel_limit_t *limit, struct drm_crtc *crtc,
-			   int target, int refclk, intel_clock_t *match_clock,
-			   intel_clock_t *best_clock)
-{
-	struct drm_device *dev = crtc->dev;
-	intel_clock_t clock;
-
-	if (target < 200000) {
-		clock.n = 1;
-		clock.p1 = 2;
-		clock.p2 = 10;
-		clock.m1 = 12;
-		clock.m2 = 9;
-	} else {
-		clock.n = 2;
-		clock.p1 = 1;
-		clock.p2 = 10;
-		clock.m1 = 14;
-		clock.m2 = 8;
-	}
-	intel_clock(dev, refclk, &clock);
-	memcpy(best_clock, &clock, sizeof(intel_clock_t));
-	return true;
-}
-
-/* DisplayPort has only two frequencies, 162MHz and 270MHz */
-static bool
-intel_find_pll_g4x_dp(const intel_limit_t *limit, struct drm_crtc *crtc,
-		      int target, int refclk, intel_clock_t *match_clock,
-		      intel_clock_t *best_clock)
-{
-	intel_clock_t clock;
-	if (target < 200000) {
-		clock.p1 = 2;
-		clock.p2 = 10;
-		clock.n = 2;
-		clock.m1 = 23;
-		clock.m2 = 8;
-	} else {
-		clock.p1 = 1;
-		clock.p2 = 10;
-		clock.n = 1;
-		clock.m1 = 14;
-		clock.m2 = 2;
-	}
-	clock.m = 5 * (clock.m1 + 2) + (clock.m2 + 2);
-	clock.p = (clock.p1 * clock.p2);
-	clock.dot = 96000 * clock.m / (clock.n + 2) / clock.p;
-	clock.vco = 0;
-	memcpy(best_clock, &clock, sizeof(intel_clock_t));
-	return true;
-}
 static bool
 intel_vlv_find_best_pll(const intel_limit_t *limit, struct drm_crtc *crtc,
 			int target, int refclk, intel_clock_t *match_clock,
···
 	pch_dpll = I915_READ(PCH_DPLL_SEL);
 	cur_state = pll->pll_reg == _PCH_DPLL_B;
 	if (!WARN(((pch_dpll >> (4 * crtc->pipe)) & 1) != cur_state,
-		  "PLL[%d] not attached to this transcoder %d: %08x\n",
-		  cur_state, crtc->pipe, pch_dpll)) {
+		  "PLL[%d] not attached to this transcoder %c: %08x\n",
+		  cur_state, pipe_name(crtc->pipe), pch_dpll)) {
 		cur_state = !!(val >> (4*crtc->pipe + 3));
 		WARN(cur_state != state,
-		     "PLL[%d] not %s on this transcoder %d: %08x\n",
+		     "PLL[%d] not %s on this transcoder %c: %08x\n",
 		     pll->pll_reg == _PCH_DPLL_B,
 		     state_string(state),
-		     crtc->pipe,
+		     pipe_name(crtc->pipe),
 		     val);
 	}
 }
···
 	if (pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE)
 		state = true;

-	if (!intel_using_power_well(dev_priv->dev) &&
-	    cpu_transcoder != TRANSCODER_EDP) {
+	if (!intel_display_power_enabled(dev_priv->dev,
+				POWER_DOMAIN_TRANSCODER(cpu_transcoder))) {
 		cur_state = false;
 	} else {
 		reg = PIPECONF(cpu_transcoder);
···
 		reg = SPCNTR(pipe, i);
 		val = I915_READ(reg);
 		WARN((val & SP_ENABLE),
-		     "sprite %d assertion failure, should be off on pipe %c but is still active\n",
-		     pipe * 2 + i, pipe_name(pipe));
+		     "sprite %c assertion failure, should be off on pipe %c but is still active\n",
+		     sprite_name(pipe, i), pipe_name(pipe));
 	}
 }
···
 	WARN(!enabled, "PCH refclk assertion failure, should be active but is disabled\n");
 }

-static void assert_transcoder_disabled(struct drm_i915_private *dev_priv,
-				       enum pipe pipe)
+static void assert_pch_transcoder_disabled(struct drm_i915_private *dev_priv,
+					   enum pipe pipe)
 {
 	int reg;
 	u32 val;
 	bool enabled;

-	reg = TRANSCONF(pipe);
+	reg = PCH_TRANSCONF(pipe);
 	val = I915_READ(reg);
 	enabled = !!(val & TRANS_ENABLE);
 	WARN(enabled,
···
 	int reg;
 	u32 val;

+	assert_pipe_disabled(dev_priv, pipe);
+
 	/* No really, not for ILK+ */
 	BUG_ON(!IS_VALLEYVIEW(dev_priv->dev) && dev_priv->info->gen >= 5);
···
 	return I915_READ(SBI_DATA);
 }

+void vlv_wait_port_ready(struct drm_i915_private *dev_priv, int port)
+{
+	u32 port_mask;
+
+	if (!port)
+		port_mask = DPLL_PORTB_READY_MASK;
+	else
+		port_mask = DPLL_PORTC_READY_MASK;
+
+	if (wait_for((I915_READ(DPLL(0)) & port_mask) == 0, 1000))
+		WARN(1, "timed out waiting for port %c ready: 0x%08x\n",
+		     'B' + port, I915_READ(DPLL(0)));
+}
+
 /**
  * ironlake_enable_pch_pll - enable PCH PLL
  * @dev_priv: i915 private structure
···
 	DRM_DEBUG_KMS("disabling PCH PLL %x\n", pll->pll_reg);

 	/* Make sure transcoder isn't still depending on us */
-	assert_transcoder_disabled(dev_priv, intel_crtc->pipe);
+	assert_pch_transcoder_disabled(dev_priv, intel_crtc->pipe);

 	reg = pll->pll_reg;
 	val = I915_READ(reg);
···
 		I915_WRITE(reg, val);
 	}

-	reg = TRANSCONF(pipe);
+	reg = PCH_TRANSCONF(pipe);
 	val = I915_READ(reg);
 	pipeconf_val = I915_READ(PIPECONF(pipe));
···
 	I915_WRITE(reg, val | TRANS_ENABLE);
 	if (wait_for(I915_READ(reg) & TRANS_STATE_ENABLE, 100))
-		DRM_ERROR("failed to enable transcoder %d\n", pipe);
+		DRM_ERROR("failed to enable transcoder %c\n", pipe_name(pipe));
 }

 static void lpt_enable_pch_transcoder(struct drm_i915_private *dev_priv,
···
 	else
 		val |= TRANS_PROGRESSIVE;

-	I915_WRITE(TRANSCONF(TRANSCODER_A), val);
-	if (wait_for(I915_READ(_TRANSACONF) & TRANS_STATE_ENABLE, 100))
+	I915_WRITE(LPT_TRANSCONF, val);
+	if (wait_for(I915_READ(LPT_TRANSCONF) & TRANS_STATE_ENABLE, 100))
 		DRM_ERROR("Failed to enable PCH transcoder\n");
 }
···
 	/* Ports must be off as well */
 	assert_pch_ports_disabled(dev_priv, pipe);

-	reg = TRANSCONF(pipe);
+	reg = PCH_TRANSCONF(pipe);
 	val = I915_READ(reg);
 	val &= ~TRANS_ENABLE;
 	I915_WRITE(reg, val);
 	/* wait for PCH transcoder off, transcoder state */
 	if (wait_for((I915_READ(reg) & TRANS_STATE_ENABLE) == 0, 50))
-		DRM_ERROR("failed to disable transcoder %d\n", pipe);
+		DRM_ERROR("failed to disable transcoder %c\n", pipe_name(pipe));

 	if (!HAS_PCH_IBX(dev)) {
 		/* Workaround: Clear the timing override chicken bit again. */
···
 {
 	u32 val;

-	val = I915_READ(_TRANSACONF);
+	val = I915_READ(LPT_TRANSCONF);
 	val &= ~TRANS_ENABLE;
-	I915_WRITE(_TRANSACONF, val);
+	I915_WRITE(LPT_TRANSCONF, val);
 	/* wait for PCH transcoder off, transcoder state */
-	if (wait_for((I915_READ(_TRANSACONF) & TRANS_STATE_ENABLE) == 0, 50))
+	if (wait_for((I915_READ(LPT_TRANSCONF) & TRANS_STATE_ENABLE) == 0, 50))
 		DRM_ERROR("Failed to disable PCH transcoder\n");

 	/* Workaround: clear timing override bit. */
···
 	enum pipe pch_transcoder;
 	int reg;
 	u32 val;
+
+	assert_planes_disabled(dev_priv, pipe);
+	assert_sprites_disabled(dev_priv, pipe);

 	if (HAS_PCH_LPT(dev_priv->dev))
 		pch_transcoder = TRANSCODER_A;
···
 	case 1:
 		break;
 	default:
-		DRM_ERROR("Can't update plane %d in SAREA\n", plane);
+		DRM_ERROR("Can't update plane %c in SAREA\n", plane_name(plane));
 		return -EINVAL;
 	}
···
 	case 2:
 		break;
 	default:
-		DRM_ERROR("Can't update plane %d in SAREA\n", plane);
+		DRM_ERROR("Can't update plane %c in SAREA\n", plane_name(plane));
 		return -EINVAL;
 	}
···
 	}

 	if (intel_crtc->plane > INTEL_INFO(dev)->num_pipes) {
-		DRM_ERROR("no plane for crtc: plane %d, num_pipes %d\n",
-			  intel_crtc->plane,
-			  INTEL_INFO(dev)->num_pipes);
+		DRM_ERROR("no plane for crtc: plane %c, num_pipes %d\n",
+			  plane_name(intel_crtc->plane),
+			  INTEL_INFO(dev)->num_pipes);
 		return -EINVAL;
 	}
···
 			  FDI_FE_ERRC_ENABLE);
 }

+static bool pipe_has_enabled_pch(struct intel_crtc *intel_crtc)
+{
+	return intel_crtc->base.enabled && intel_crtc->config.has_pch_encoder;
+}
+
 static void ivb_modeset_global_resources(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
···
 		to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_C]);
 	uint32_t temp;

-	/* When everything is off disable fdi C so that we could enable fdi B
-	 * with all lanes. XXX: This misses the case where a pipe is not using
-	 * any pch resources and so doesn't need any fdi lanes. */
-	if (!pipe_B_crtc->base.enabled && !pipe_C_crtc->base.enabled) {
+	/*
+	 * When everything is off disable fdi C so that we could enable fdi B
+	 * with all lanes. Note that we don't care about enabled pipes without
+	 * an enabled pch encoder.
+	 */
+	if (!pipe_has_enabled_pch(pipe_B_crtc) &&
+	    !pipe_has_enabled_pch(pipe_C_crtc)) {
 		WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
 		WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
···
 	/* enable CPU FDI TX and PCH FDI RX */
 	reg = FDI_TX_CTL(pipe);
 	temp = I915_READ(reg);
-	temp &= ~(7 << 19);
-	temp |= (intel_crtc->fdi_lanes - 1) << 19;
+	temp &= ~FDI_DP_PORT_WIDTH_MASK;
+	temp |= FDI_DP_PORT_WIDTH(intel_crtc->config.fdi_lanes);
 	temp &= ~FDI_LINK_TRAIN_NONE;
 	temp |= FDI_LINK_TRAIN_PATTERN_1;
 	I915_WRITE(reg, temp | FDI_TX_ENABLE);
···
 	/* enable CPU FDI TX and PCH FDI RX */
 	reg = FDI_TX_CTL(pipe);
 	temp = I915_READ(reg);
-	temp &= ~(7 << 19);
-	temp |= (intel_crtc->fdi_lanes - 1) << 19;
+	temp &= ~FDI_DP_PORT_WIDTH_MASK;
+	temp |= FDI_DP_PORT_WIDTH(intel_crtc->config.fdi_lanes);
 	temp &= ~FDI_LINK_TRAIN_NONE;
 	temp |= FDI_LINK_TRAIN_PATTERN_1;
 	temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
···
 	/* enable CPU FDI TX and PCH FDI RX */
 	reg = FDI_TX_CTL(pipe);
 	temp = I915_READ(reg);
-	temp &= ~(7 << 19);
-	temp |= (intel_crtc->fdi_lanes - 1) << 19;
+	temp &= ~FDI_DP_PORT_WIDTH_MASK;
+	temp |= FDI_DP_PORT_WIDTH(intel_crtc->config.fdi_lanes);
 	temp &= ~(FDI_LINK_TRAIN_AUTO | FDI_LINK_TRAIN_NONE_IVB);
 	temp |= FDI_LINK_TRAIN_PATTERN_1_IVB;
 	temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
···
 	/* enable PCH FDI RX PLL, wait warmup plus DMI latency */
 	reg = FDI_RX_CTL(pipe);
 	temp = I915_READ(reg);
-	temp
&= ~((0x7 << 19) | (0x7 << 16)); 2760 - temp |= (intel_crtc->fdi_lanes - 1) << 19; 2855 + temp &= ~(FDI_DP_PORT_WIDTH_MASK | (0x7 << 16)); 2856 + temp |= FDI_DP_PORT_WIDTH(intel_crtc->config.fdi_lanes); 2761 2857 temp |= (I915_READ(PIPECONF(pipe)) & PIPECONF_BPC_MASK) << 11; 2762 2858 I915_WRITE(reg, temp | FDI_RX_PLL_ENABLE); 2763 2859 ··· 2989 3085 mutex_unlock(&dev_priv->dpio_lock); 2990 3086 } 2991 3087 3088 + static void ironlake_pch_transcoder_set_timings(struct intel_crtc *crtc, 3089 + enum pipe pch_transcoder) 3090 + { 3091 + struct drm_device *dev = crtc->base.dev; 3092 + struct drm_i915_private *dev_priv = dev->dev_private; 3093 + enum transcoder cpu_transcoder = crtc->config.cpu_transcoder; 3094 + 3095 + I915_WRITE(PCH_TRANS_HTOTAL(pch_transcoder), 3096 + I915_READ(HTOTAL(cpu_transcoder))); 3097 + I915_WRITE(PCH_TRANS_HBLANK(pch_transcoder), 3098 + I915_READ(HBLANK(cpu_transcoder))); 3099 + I915_WRITE(PCH_TRANS_HSYNC(pch_transcoder), 3100 + I915_READ(HSYNC(cpu_transcoder))); 3101 + 3102 + I915_WRITE(PCH_TRANS_VTOTAL(pch_transcoder), 3103 + I915_READ(VTOTAL(cpu_transcoder))); 3104 + I915_WRITE(PCH_TRANS_VBLANK(pch_transcoder), 3105 + I915_READ(VBLANK(cpu_transcoder))); 3106 + I915_WRITE(PCH_TRANS_VSYNC(pch_transcoder), 3107 + I915_READ(VSYNC(cpu_transcoder))); 3108 + I915_WRITE(PCH_TRANS_VSYNCSHIFT(pch_transcoder), 3109 + I915_READ(VSYNCSHIFT(cpu_transcoder))); 3110 + } 3111 + 2992 3112 /* 2993 3113 * Enable PCH resources required for PCH ports: 2994 3114 * - PCH PLLs ··· 3029 3101 int pipe = intel_crtc->pipe; 3030 3102 u32 reg, temp; 3031 3103 3032 - assert_transcoder_disabled(dev_priv, pipe); 3104 + assert_pch_transcoder_disabled(dev_priv, pipe); 3033 3105 3034 3106 /* Write the TU size bits before fdi link training, so that error 3035 3107 * detection works. 
*/ ··· 3076 3148 3077 3149 /* set transcoder timing, panel must allow it */ 3078 3150 assert_panel_unlocked(dev_priv, pipe); 3079 - I915_WRITE(TRANS_HTOTAL(pipe), I915_READ(HTOTAL(pipe))); 3080 - I915_WRITE(TRANS_HBLANK(pipe), I915_READ(HBLANK(pipe))); 3081 - I915_WRITE(TRANS_HSYNC(pipe), I915_READ(HSYNC(pipe))); 3082 - 3083 - I915_WRITE(TRANS_VTOTAL(pipe), I915_READ(VTOTAL(pipe))); 3084 - I915_WRITE(TRANS_VBLANK(pipe), I915_READ(VBLANK(pipe))); 3085 - I915_WRITE(TRANS_VSYNC(pipe), I915_READ(VSYNC(pipe))); 3086 - I915_WRITE(TRANS_VSYNCSHIFT(pipe), I915_READ(VSYNCSHIFT(pipe))); 3151 + ironlake_pch_transcoder_set_timings(intel_crtc, pipe); 3087 3152 3088 3153 intel_fdi_normal_train(crtc); 3089 3154 ··· 3126 3205 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 3127 3206 enum transcoder cpu_transcoder = intel_crtc->config.cpu_transcoder; 3128 3207 3129 - assert_transcoder_disabled(dev_priv, TRANSCODER_A); 3208 + assert_pch_transcoder_disabled(dev_priv, TRANSCODER_A); 3130 3209 3131 3210 lpt_program_iclkip(crtc); 3132 3211 3133 3212 /* Set transcoder timing. 
*/ 3134 - I915_WRITE(_TRANS_HTOTAL_A, I915_READ(HTOTAL(cpu_transcoder))); 3135 - I915_WRITE(_TRANS_HBLANK_A, I915_READ(HBLANK(cpu_transcoder))); 3136 - I915_WRITE(_TRANS_HSYNC_A, I915_READ(HSYNC(cpu_transcoder))); 3137 - 3138 - I915_WRITE(_TRANS_VTOTAL_A, I915_READ(VTOTAL(cpu_transcoder))); 3139 - I915_WRITE(_TRANS_VBLANK_A, I915_READ(VBLANK(cpu_transcoder))); 3140 - I915_WRITE(_TRANS_VSYNC_A, I915_READ(VSYNC(cpu_transcoder))); 3141 - I915_WRITE(_TRANS_VSYNCSHIFT_A, I915_READ(VSYNCSHIFT(cpu_transcoder))); 3213 + ironlake_pch_transcoder_set_timings(intel_crtc, PIPE_A); 3142 3214 3143 3215 lpt_enable_pch_transcoder(dev_priv, cpu_transcoder); 3144 3216 } ··· 3208 3294 found: 3209 3295 intel_crtc->pch_pll = pll; 3210 3296 pll->refcount++; 3211 - DRM_DEBUG_DRIVER("using pll %d for pipe %d\n", i, intel_crtc->pipe); 3297 + DRM_DEBUG_DRIVER("using pll %d for pipe %c\n", i, pipe_name(intel_crtc->pipe)); 3212 3298 prepare: /* separate function? */ 3213 3299 DRM_DEBUG_DRIVER("switching PLL %x off\n", pll->pll_reg); 3214 3300 ··· 3223 3309 return pll; 3224 3310 } 3225 3311 3226 - void intel_cpt_verify_modeset(struct drm_device *dev, int pipe) 3312 + static void cpt_verify_modeset(struct drm_device *dev, int pipe) 3227 3313 { 3228 3314 struct drm_i915_private *dev_priv = dev->dev_private; 3229 3315 int dslreg = PIPEDSL(pipe); ··· 3233 3319 udelay(500); 3234 3320 if (wait_for(I915_READ(dslreg) != temp, 5)) { 3235 3321 if (wait_for(I915_READ(dslreg) != temp, 5)) 3236 - DRM_ERROR("mode set failed: pipe %d stuck\n", pipe); 3322 + DRM_ERROR("mode set failed: pipe %c stuck\n", pipe_name(pipe)); 3323 + } 3324 + } 3325 + 3326 + static void ironlake_pfit_enable(struct intel_crtc *crtc) 3327 + { 3328 + struct drm_device *dev = crtc->base.dev; 3329 + struct drm_i915_private *dev_priv = dev->dev_private; 3330 + int pipe = crtc->pipe; 3331 + 3332 + if (crtc->config.pch_pfit.size) { 3333 + /* Force use of hard-coded filter coefficients 3334 + * as some pre-programmed values are broken, 3335 
+ * e.g. x201. 3336 + */ 3337 + if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) 3338 + I915_WRITE(PF_CTL(pipe), PF_ENABLE | PF_FILTER_MED_3x3 | 3339 + PF_PIPE_SEL_IVB(pipe)); 3340 + else 3341 + I915_WRITE(PF_CTL(pipe), PF_ENABLE | PF_FILTER_MED_3x3); 3342 + I915_WRITE(PF_WIN_POS(pipe), crtc->config.pch_pfit.pos); 3343 + I915_WRITE(PF_WIN_SZ(pipe), crtc->config.pch_pfit.size); 3237 3344 } 3238 3345 } 3239 3346 ··· 3274 3339 return; 3275 3340 3276 3341 intel_crtc->active = true; 3342 + 3343 + intel_set_cpu_fifo_underrun_reporting(dev, pipe, true); 3344 + intel_set_pch_fifo_underrun_reporting(dev, pipe, true); 3345 + 3277 3346 intel_update_watermarks(dev); 3278 3347 3279 3348 if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) { ··· 3302 3363 encoder->pre_enable(encoder); 3303 3364 3304 3365 /* Enable panel fitting for LVDS */ 3305 - if (dev_priv->pch_pf_size && 3306 - (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) || 3307 - intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP))) { 3308 - /* Force use of hard-coded filter coefficients 3309 - * as some pre-programmed values are broken, 3310 - * e.g. x201. 
3311 - */ 3312 - if (IS_IVYBRIDGE(dev)) 3313 - I915_WRITE(PF_CTL(pipe), PF_ENABLE | PF_FILTER_MED_3x3 | 3314 - PF_PIPE_SEL_IVB(pipe)); 3315 - else 3316 - I915_WRITE(PF_CTL(pipe), PF_ENABLE | PF_FILTER_MED_3x3); 3317 - I915_WRITE(PF_WIN_POS(pipe), dev_priv->pch_pf_pos); 3318 - I915_WRITE(PF_WIN_SZ(pipe), dev_priv->pch_pf_size); 3319 - } 3366 + ironlake_pfit_enable(intel_crtc); 3320 3367 3321 3368 /* 3322 3369 * On ILK+ LUT must be loaded before the pipe is running but with ··· 3327 3402 encoder->enable(encoder); 3328 3403 3329 3404 if (HAS_PCH_CPT(dev)) 3330 - intel_cpt_verify_modeset(dev, intel_crtc->pipe); 3405 + cpt_verify_modeset(dev, intel_crtc->pipe); 3331 3406 3332 3407 /* 3333 3408 * There seems to be a race in PCH platform hw (at least on some ··· 3355 3430 return; 3356 3431 3357 3432 intel_crtc->active = true; 3433 + 3434 + intel_set_cpu_fifo_underrun_reporting(dev, pipe, true); 3435 + if (intel_crtc->config.has_pch_encoder) 3436 + intel_set_pch_fifo_underrun_reporting(dev, TRANSCODER_A, true); 3437 + 3358 3438 intel_update_watermarks(dev); 3359 3439 3360 3440 if (intel_crtc->config.has_pch_encoder) ··· 3372 3442 intel_ddi_enable_pipe_clock(intel_crtc); 3373 3443 3374 3444 /* Enable panel fitting for eDP */ 3375 - if (dev_priv->pch_pf_size && 3376 - intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP)) { 3377 - /* Force use of hard-coded filter coefficients 3378 - * as some pre-programmed values are broken, 3379 - * e.g. x201. 
3380 - */ 3381 - I915_WRITE(PF_CTL(pipe), PF_ENABLE | PF_FILTER_MED_3x3 | 3382 - PF_PIPE_SEL_IVB(pipe)); 3383 - I915_WRITE(PF_WIN_POS(pipe), dev_priv->pch_pf_pos); 3384 - I915_WRITE(PF_WIN_SZ(pipe), dev_priv->pch_pf_size); 3385 - } 3445 + ironlake_pfit_enable(intel_crtc); 3386 3446 3387 3447 /* 3388 3448 * On ILK+ LUT must be loaded before the pipe is running but with ··· 3410 3490 intel_wait_for_vblank(dev, intel_crtc->pipe); 3411 3491 } 3412 3492 3493 + static void ironlake_pfit_disable(struct intel_crtc *crtc) 3494 + { 3495 + struct drm_device *dev = crtc->base.dev; 3496 + struct drm_i915_private *dev_priv = dev->dev_private; 3497 + int pipe = crtc->pipe; 3498 + 3499 + /* To avoid upsetting the power well on haswell only disable the pfit if 3500 + * it's in use. The hw state code will make sure we get this right. */ 3501 + if (crtc->config.pch_pfit.size) { 3502 + I915_WRITE(PF_CTL(pipe), 0); 3503 + I915_WRITE(PF_WIN_POS(pipe), 0); 3504 + I915_WRITE(PF_WIN_SZ(pipe), 0); 3505 + } 3506 + } 3507 + 3413 3508 static void ironlake_crtc_disable(struct drm_crtc *crtc) 3414 3509 { 3415 3510 struct drm_device *dev = crtc->dev; ··· 3451 3516 if (dev_priv->cfb_plane == plane) 3452 3517 intel_disable_fbc(dev); 3453 3518 3519 + intel_set_pch_fifo_underrun_reporting(dev, pipe, false); 3454 3520 intel_disable_pipe(dev_priv, pipe); 3455 3521 3456 - /* Disable PF */ 3457 - I915_WRITE(PF_CTL(pipe), 0); 3458 - I915_WRITE(PF_WIN_SZ(pipe), 0); 3522 + ironlake_pfit_disable(intel_crtc); 3459 3523 3460 3524 for_each_encoder_on_crtc(dev, crtc, encoder) 3461 3525 if (encoder->post_disable) ··· 3463 3529 ironlake_fdi_disable(crtc); 3464 3530 3465 3531 ironlake_disable_pch_transcoder(dev_priv, pipe); 3532 + intel_set_pch_fifo_underrun_reporting(dev, pipe, true); 3466 3533 3467 3534 if (HAS_PCH_CPT(dev)) { 3468 3535 /* disable TRANS_DP_CTL */ ··· 3525 3590 drm_vblank_off(dev, pipe); 3526 3591 intel_crtc_update_cursor(crtc, false); 3527 3592 3528 - intel_disable_plane(dev_priv, plane, pipe); 
3529 - 3593 + /* FBC must be disabled before disabling the plane on HSW. */ 3530 3594 if (dev_priv->cfb_plane == plane) 3531 3595 intel_disable_fbc(dev); 3532 3596 3597 + intel_disable_plane(dev_priv, plane, pipe); 3598 + 3599 + if (intel_crtc->config.has_pch_encoder) 3600 + intel_set_pch_fifo_underrun_reporting(dev, TRANSCODER_A, false); 3533 3601 intel_disable_pipe(dev_priv, pipe); 3534 3602 3535 3603 intel_ddi_disable_transcoder_func(dev_priv, cpu_transcoder); 3536 3604 3537 - /* XXX: Once we have proper panel fitter state tracking implemented with 3538 - * hardware state read/check support we should switch to only disable 3539 - * the panel fitter when we know it's used. */ 3540 - if (intel_using_power_well(dev)) { 3541 - I915_WRITE(PF_CTL(pipe), 0); 3542 - I915_WRITE(PF_WIN_SZ(pipe), 0); 3543 - } 3605 + ironlake_pfit_disable(intel_crtc); 3544 3606 3545 3607 intel_ddi_disable_pipe_clock(intel_crtc); 3546 3608 ··· 3547 3615 3548 3616 if (intel_crtc->config.has_pch_encoder) { 3549 3617 lpt_disable_pch_transcoder(dev_priv); 3618 + intel_set_pch_fifo_underrun_reporting(dev, TRANSCODER_A, true); 3550 3619 intel_ddi_fdi_disable(crtc); 3551 3620 } 3552 3621 ··· 3618 3685 } 3619 3686 } 3620 3687 3688 + static void i9xx_pfit_enable(struct intel_crtc *crtc) 3689 + { 3690 + struct drm_device *dev = crtc->base.dev; 3691 + struct drm_i915_private *dev_priv = dev->dev_private; 3692 + struct intel_crtc_config *pipe_config = &crtc->config; 3693 + 3694 + if (!crtc->config.gmch_pfit.control) 3695 + return; 3696 + 3697 + WARN_ON(I915_READ(PFIT_CONTROL) & PFIT_ENABLE); 3698 + assert_pipe_disabled(dev_priv, crtc->pipe); 3699 + 3700 + /* 3701 + * Enable automatic panel scaling so that non-native modes 3702 + * fill the screen. The panel fitter should only be 3703 + * adjusted whilst the pipe is disabled, according to 3704 + * register description and PRM. 
3705 + */ 3706 + DRM_DEBUG_KMS("applying panel-fitter: %x, %x\n", 3707 + pipe_config->gmch_pfit.control, 3708 + pipe_config->gmch_pfit.pgm_ratios); 3709 + 3710 + I915_WRITE(PFIT_PGM_RATIOS, pipe_config->gmch_pfit.pgm_ratios); 3711 + I915_WRITE(PFIT_CONTROL, pipe_config->gmch_pfit.control); 3712 + 3713 + /* Border color in case we don't scale up to the full screen. Black by 3714 + * default, change to something else for debugging. */ 3715 + I915_WRITE(BCLRPAT(crtc->pipe), 0); 3716 + } 3717 + 3718 + static void valleyview_crtc_enable(struct drm_crtc *crtc) 3719 + { 3720 + struct drm_device *dev = crtc->dev; 3721 + struct drm_i915_private *dev_priv = dev->dev_private; 3722 + struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 3723 + struct intel_encoder *encoder; 3724 + int pipe = intel_crtc->pipe; 3725 + int plane = intel_crtc->plane; 3726 + 3727 + WARN_ON(!crtc->enabled); 3728 + 3729 + if (intel_crtc->active) 3730 + return; 3731 + 3732 + intel_crtc->active = true; 3733 + intel_update_watermarks(dev); 3734 + 3735 + mutex_lock(&dev_priv->dpio_lock); 3736 + 3737 + for_each_encoder_on_crtc(dev, crtc, encoder) 3738 + if (encoder->pre_pll_enable) 3739 + encoder->pre_pll_enable(encoder); 3740 + 3741 + intel_enable_pll(dev_priv, pipe); 3742 + 3743 + for_each_encoder_on_crtc(dev, crtc, encoder) 3744 + if (encoder->pre_enable) 3745 + encoder->pre_enable(encoder); 3746 + 3747 + /* VLV wants encoder enabling _before_ the pipe is up. 
*/ 3748 + for_each_encoder_on_crtc(dev, crtc, encoder) 3749 + encoder->enable(encoder); 3750 + 3751 + /* Enable panel fitting for eDP */ 3752 + i9xx_pfit_enable(intel_crtc); 3753 + 3754 + intel_enable_pipe(dev_priv, pipe, false); 3755 + intel_enable_plane(dev_priv, plane, pipe); 3756 + 3757 + intel_crtc_load_lut(crtc); 3758 + intel_update_fbc(dev); 3759 + 3760 + /* Give the overlay scaler a chance to enable if it's on this pipe */ 3761 + intel_crtc_dpms_overlay(intel_crtc, true); 3762 + intel_crtc_update_cursor(crtc, true); 3763 + 3764 + mutex_unlock(&dev_priv->dpio_lock); 3765 + } 3766 + 3621 3767 static void i9xx_crtc_enable(struct drm_crtc *crtc) 3622 3768 { 3623 3769 struct drm_device *dev = crtc->dev; ··· 3720 3708 if (encoder->pre_enable) 3721 3709 encoder->pre_enable(encoder); 3722 3710 3711 + /* Enable panel fitting for LVDS */ 3712 + i9xx_pfit_enable(intel_crtc); 3713 + 3723 3714 intel_enable_pipe(dev_priv, pipe, false); 3724 3715 intel_enable_plane(dev_priv, plane, pipe); 3725 3716 if (IS_G4X(dev)) ··· 3743 3728 { 3744 3729 struct drm_device *dev = crtc->base.dev; 3745 3730 struct drm_i915_private *dev_priv = dev->dev_private; 3746 - enum pipe pipe; 3747 - uint32_t pctl = I915_READ(PFIT_CONTROL); 3731 + 3732 + if (!crtc->config.gmch_pfit.control) 3733 + return; 3748 3734 3749 3735 assert_pipe_disabled(dev_priv, crtc->pipe); 3750 3736 3751 - if (INTEL_INFO(dev)->gen >= 4) 3752 - pipe = (pctl & PFIT_PIPE_MASK) >> PFIT_PIPE_SHIFT; 3753 - else 3754 - pipe = PIPE_B; 3755 - 3756 - if (pipe == crtc->pipe) { 3757 - DRM_DEBUG_DRIVER("disabling pfit, current: 0x%08x\n", pctl); 3758 - I915_WRITE(PFIT_CONTROL, 0); 3759 - } 3737 + DRM_DEBUG_DRIVER("disabling pfit, current: 0x%08x\n", 3738 + I915_READ(PFIT_CONTROL)); 3739 + I915_WRITE(PFIT_CONTROL, 0); 3760 3740 } 3761 3741 3762 3742 static void i9xx_crtc_disable(struct drm_crtc *crtc) ··· 3782 3772 intel_disable_pipe(dev_priv, pipe); 3783 3773 3784 3774 i9xx_pfit_disable(intel_crtc); 3775 + 3776 + 
for_each_encoder_on_crtc(dev, crtc, encoder) 3777 + if (encoder->post_disable) 3778 + encoder->post_disable(encoder); 3785 3779 3786 3780 intel_disable_pll(dev_priv, pipe); 3787 3781 ··· 3859 3845 /* crtc should still be enabled when we disable it. */ 3860 3846 WARN_ON(!crtc->enabled); 3861 3847 3862 - intel_crtc->eld_vld = false; 3863 3848 dev_priv->display.crtc_disable(crtc); 3849 + intel_crtc->eld_vld = false; 3864 3850 intel_crtc_update_sarea(crtc, false); 3865 3851 dev_priv->display.off(crtc); 3866 3852 ··· 3991 3977 return encoder->get_hw_state(encoder, &pipe); 3992 3978 } 3993 3979 3994 - static bool intel_crtc_compute_config(struct drm_crtc *crtc, 3995 - struct intel_crtc_config *pipe_config) 3980 + static bool ironlake_check_fdi_lanes(struct drm_device *dev, enum pipe pipe, 3981 + struct intel_crtc_config *pipe_config) 3982 + { 3983 + struct drm_i915_private *dev_priv = dev->dev_private; 3984 + struct intel_crtc *pipe_B_crtc = 3985 + to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_B]); 3986 + 3987 + DRM_DEBUG_KMS("checking fdi config on pipe %c, lanes %i\n", 3988 + pipe_name(pipe), pipe_config->fdi_lanes); 3989 + if (pipe_config->fdi_lanes > 4) { 3990 + DRM_DEBUG_KMS("invalid fdi lane config on pipe %c: %i lanes\n", 3991 + pipe_name(pipe), pipe_config->fdi_lanes); 3992 + return false; 3993 + } 3994 + 3995 + if (IS_HASWELL(dev)) { 3996 + if (pipe_config->fdi_lanes > 2) { 3997 + DRM_DEBUG_KMS("only 2 lanes on haswell, required: %i lanes\n", 3998 + pipe_config->fdi_lanes); 3999 + return false; 4000 + } else { 4001 + return true; 4002 + } 4003 + } 4004 + 4005 + if (INTEL_INFO(dev)->num_pipes == 2) 4006 + return true; 4007 + 4008 + /* Ivybridge 3 pipe is really complicated */ 4009 + switch (pipe) { 4010 + case PIPE_A: 4011 + return true; 4012 + case PIPE_B: 4013 + if (dev_priv->pipe_to_crtc_mapping[PIPE_C]->enabled && 4014 + pipe_config->fdi_lanes > 2) { 4015 + DRM_DEBUG_KMS("invalid shared fdi lane config on pipe %c: %i lanes\n", 4016 + pipe_name(pipe), 
pipe_config->fdi_lanes); 4017 + return false; 4018 + } 4019 + return true; 4020 + case PIPE_C: 4021 + if (!pipe_has_enabled_pch(pipe_B_crtc) || 4022 + pipe_B_crtc->config.fdi_lanes <= 2) { 4023 + if (pipe_config->fdi_lanes > 2) { 4024 + DRM_DEBUG_KMS("invalid shared fdi lane config on pipe %c: %i lanes\n", 4025 + pipe_name(pipe), pipe_config->fdi_lanes); 4026 + return false; 4027 + } 4028 + } else { 4029 + DRM_DEBUG_KMS("fdi link B uses too many lanes to enable link C\n"); 4030 + return false; 4031 + } 4032 + return true; 4033 + default: 4034 + BUG(); 4035 + } 4036 + } 4037 + 4038 + #define RETRY 1 4039 + static int ironlake_fdi_compute_config(struct intel_crtc *intel_crtc, 4040 + struct intel_crtc_config *pipe_config) 4041 + { 4042 + struct drm_device *dev = intel_crtc->base.dev; 4043 + struct drm_display_mode *adjusted_mode = &pipe_config->adjusted_mode; 4044 + int target_clock, lane, link_bw; 4045 + bool setup_ok, needs_recompute = false; 4046 + 4047 + retry: 4048 + /* FDI is a binary signal running at ~2.7GHz, encoding 4049 + * each output octet as 10 bits. The actual frequency 4050 + * is stored as a divider into a 100MHz clock, and the 4051 + * mode pixel clock is stored in units of 1KHz. 
4052 + * Hence the bw of each lane in terms of the mode signal 4053 + * is: 4054 + */ 4055 + link_bw = intel_fdi_link_freq(dev) * MHz(100)/KHz(1)/10; 4056 + 4057 + if (pipe_config->pixel_target_clock) 4058 + target_clock = pipe_config->pixel_target_clock; 4059 + else 4060 + target_clock = adjusted_mode->clock; 4061 + 4062 + lane = ironlake_get_lanes_required(target_clock, link_bw, 4063 + pipe_config->pipe_bpp); 4064 + 4065 + pipe_config->fdi_lanes = lane; 4066 + 4067 + if (pipe_config->pixel_multiplier > 1) 4068 + link_bw *= pipe_config->pixel_multiplier; 4069 + intel_link_compute_m_n(pipe_config->pipe_bpp, lane, target_clock, 4070 + link_bw, &pipe_config->fdi_m_n); 4071 + 4072 + setup_ok = ironlake_check_fdi_lanes(intel_crtc->base.dev, 4073 + intel_crtc->pipe, pipe_config); 4074 + if (!setup_ok && pipe_config->pipe_bpp > 6*3) { 4075 + pipe_config->pipe_bpp -= 2*3; 4076 + DRM_DEBUG_KMS("fdi link bw constraint, reducing pipe bpp to %i\n", 4077 + pipe_config->pipe_bpp); 4078 + needs_recompute = true; 4079 + pipe_config->bw_constrained = true; 4080 + 4081 + goto retry; 4082 + } 4083 + 4084 + if (needs_recompute) 4085 + return RETRY; 4086 + 4087 + return setup_ok ? 0 : -EINVAL; 4088 + } 4089 + 4090 + static int intel_crtc_compute_config(struct drm_crtc *crtc, 4091 + struct intel_crtc_config *pipe_config) 3996 4092 { 3997 4093 struct drm_device *dev = crtc->dev; 3998 4094 struct drm_display_mode *adjusted_mode = &pipe_config->adjusted_mode; ··· 4111 3987 /* FDI link clock is fixed at 2.7G */ 4112 3988 if (pipe_config->requested_mode.clock * 3 4113 3989 > IRONLAKE_FDI_FREQ * 4) 4114 - return false; 3990 + return -EINVAL; 4115 3991 } 4116 3992 4117 3993 /* All interlaced capable intel hw wants timings in frames. Note though ··· 4120 3996 if (!pipe_config->timings_set) 4121 3997 drm_mode_set_crtcinfo(adjusted_mode, 0); 4122 3998 4123 - /* WaPruneModeWithIncorrectHsyncOffset: Cantiga+ cannot handle modes 4124 - * with a hsync front porch of 0. 
3999 + /* Cantiga+ cannot handle modes with a hsync front porch of 0. 4000 + * WaPruneModeWithIncorrectHsyncOffset:ctg,elk,ilk,snb,ivb,vlv,hsw. 4125 4001 */ 4126 4002 if ((INTEL_INFO(dev)->gen > 4 || IS_G4X(dev)) && 4127 4003 adjusted_mode->hsync_start == adjusted_mode->hdisplay) 4128 - return false; 4004 + return -EINVAL; 4129 4005 4130 4006 if ((IS_G4X(dev) || IS_VALLEYVIEW(dev)) && pipe_config->pipe_bpp > 10*3) { 4131 4007 pipe_config->pipe_bpp = 10*3; /* 12bpc is gen5+ */ ··· 4135 4011 pipe_config->pipe_bpp = 8*3; 4136 4012 } 4137 4013 4138 - return true; 4014 + if (pipe_config->has_pch_encoder) 4015 + return ironlake_fdi_compute_config(to_intel_crtc(crtc), pipe_config); 4016 + 4017 + return 0; 4139 4018 } 4140 4019 4141 4020 static int valleyview_get_display_clock_speed(struct drm_device *dev) ··· 4247 4120 { 4248 4121 if (i915_panel_use_ssc >= 0) 4249 4122 return i915_panel_use_ssc != 0; 4250 - return dev_priv->lvds_use_ssc 4123 + return dev_priv->vbt.lvds_use_ssc 4251 4124 && !(dev_priv->quirks & QUIRK_LVDS_SSC_DISABLE); 4252 4125 } 4253 4126 ··· 4283 4156 refclk = vlv_get_refclk(crtc); 4284 4157 } else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) && 4285 4158 intel_panel_use_ssc(dev_priv) && num_connectors < 2) { 4286 - refclk = dev_priv->lvds_ssc_freq * 1000; 4159 + refclk = dev_priv->vbt.lvds_ssc_freq * 1000; 4287 4160 DRM_DEBUG_KMS("using SSC reference clock of %d MHz\n", 4288 4161 refclk / 1000); 4289 4162 } else if (!IS_GEN2(dev)) { ··· 4295 4168 return refclk; 4296 4169 } 4297 4170 4298 - static void i9xx_adjust_sdvo_tv_clock(struct intel_crtc *crtc) 4171 + static uint32_t pnv_dpll_compute_fp(struct dpll *dpll) 4299 4172 { 4300 - unsigned dotclock = crtc->config.adjusted_mode.clock; 4301 - struct dpll *clock = &crtc->config.dpll; 4173 + return (1 << dpll->n) << 16 | dpll->m1 << 8 | dpll->m2; 4174 + } 4302 4175 4303 - /* SDVO TV has fixed PLL values depend on its clock range, 4304 - this mirrors vbios setting. 
*/ 4305 - if (dotclock >= 100000 && dotclock < 140500) { 4306 - clock->p1 = 2; 4307 - clock->p2 = 10; 4308 - clock->n = 3; 4309 - clock->m1 = 16; 4310 - clock->m2 = 8; 4311 - } else if (dotclock >= 140500 && dotclock <= 200000) { 4312 - clock->p1 = 1; 4313 - clock->p2 = 10; 4314 - clock->n = 6; 4315 - clock->m1 = 12; 4316 - clock->m2 = 8; 4317 - } 4318 - 4319 - crtc->config.clock_set = true; 4176 + static uint32_t i9xx_dpll_compute_fp(struct dpll *dpll) 4177 + { 4178 + return dpll->n << 16 | dpll->m1 << 8 | dpll->m2; 4320 4179 } 4321 4180 4322 4181 static void i9xx_update_pll_dividers(struct intel_crtc *crtc, ··· 4312 4199 struct drm_i915_private *dev_priv = dev->dev_private; 4313 4200 int pipe = crtc->pipe; 4314 4201 u32 fp, fp2 = 0; 4315 - struct dpll *clock = &crtc->config.dpll; 4316 4202 4317 4203 if (IS_PINEVIEW(dev)) { 4318 - fp = (1 << clock->n) << 16 | clock->m1 << 8 | clock->m2; 4204 + fp = pnv_dpll_compute_fp(&crtc->config.dpll); 4319 4205 if (reduced_clock) 4320 - fp2 = (1 << reduced_clock->n) << 16 | 4321 - reduced_clock->m1 << 8 | reduced_clock->m2; 4206 + fp2 = pnv_dpll_compute_fp(reduced_clock); 4322 4207 } else { 4323 - fp = clock->n << 16 | clock->m1 << 8 | clock->m2; 4208 + fp = i9xx_dpll_compute_fp(&crtc->config.dpll); 4324 4209 if (reduced_clock) 4325 - fp2 = reduced_clock->n << 16 | reduced_clock->m1 << 8 | 4326 - reduced_clock->m2; 4210 + fp2 = i9xx_dpll_compute_fp(reduced_clock); 4327 4211 } 4328 4212 4329 4213 I915_WRITE(FP0(pipe), fp); ··· 4332 4222 crtc->lowfreq_avail = true; 4333 4223 } else { 4334 4224 I915_WRITE(FP1(pipe), fp); 4225 + } 4226 + } 4227 + 4228 + static void vlv_pllb_recal_opamp(struct drm_i915_private *dev_priv) 4229 + { 4230 + u32 reg_val; 4231 + 4232 + /* 4233 + * PLLB opamp always calibrates to max value of 0x3f, force enable it 4234 + * and set it to a reasonable value instead. 
4235 + */ 4236 + reg_val = intel_dpio_read(dev_priv, DPIO_IREF(1)); 4237 + reg_val &= 0xffffff00; 4238 + reg_val |= 0x00000030; 4239 + intel_dpio_write(dev_priv, DPIO_IREF(1), reg_val); 4240 + 4241 + reg_val = intel_dpio_read(dev_priv, DPIO_CALIBRATION); 4242 + reg_val &= 0x8cffffff; 4243 + reg_val = 0x8c000000; 4244 + intel_dpio_write(dev_priv, DPIO_CALIBRATION, reg_val); 4245 + 4246 + reg_val = intel_dpio_read(dev_priv, DPIO_IREF(1)); 4247 + reg_val &= 0xffffff00; 4248 + intel_dpio_write(dev_priv, DPIO_IREF(1), reg_val); 4249 + 4250 + reg_val = intel_dpio_read(dev_priv, DPIO_CALIBRATION); 4251 + reg_val &= 0x00ffffff; 4252 + reg_val |= 0xb0000000; 4253 + intel_dpio_write(dev_priv, DPIO_CALIBRATION, reg_val); 4254 + } 4255 + 4256 + static void intel_pch_transcoder_set_m_n(struct intel_crtc *crtc, 4257 + struct intel_link_m_n *m_n) 4258 + { 4259 + struct drm_device *dev = crtc->base.dev; 4260 + struct drm_i915_private *dev_priv = dev->dev_private; 4261 + int pipe = crtc->pipe; 4262 + 4263 + I915_WRITE(PCH_TRANS_DATA_M1(pipe), TU_SIZE(m_n->tu) | m_n->gmch_m); 4264 + I915_WRITE(PCH_TRANS_DATA_N1(pipe), m_n->gmch_n); 4265 + I915_WRITE(PCH_TRANS_LINK_M1(pipe), m_n->link_m); 4266 + I915_WRITE(PCH_TRANS_LINK_N1(pipe), m_n->link_n); 4267 + } 4268 + 4269 + static void intel_cpu_transcoder_set_m_n(struct intel_crtc *crtc, 4270 + struct intel_link_m_n *m_n) 4271 + { 4272 + struct drm_device *dev = crtc->base.dev; 4273 + struct drm_i915_private *dev_priv = dev->dev_private; 4274 + int pipe = crtc->pipe; 4275 + enum transcoder transcoder = crtc->config.cpu_transcoder; 4276 + 4277 + if (INTEL_INFO(dev)->gen >= 5) { 4278 + I915_WRITE(PIPE_DATA_M1(transcoder), TU_SIZE(m_n->tu) | m_n->gmch_m); 4279 + I915_WRITE(PIPE_DATA_N1(transcoder), m_n->gmch_n); 4280 + I915_WRITE(PIPE_LINK_M1(transcoder), m_n->link_m); 4281 + I915_WRITE(PIPE_LINK_N1(transcoder), m_n->link_n); 4282 + } else { 4283 + I915_WRITE(PIPE_DATA_M_G4X(pipe), TU_SIZE(m_n->tu) | m_n->gmch_m); 4284 + 
I915_WRITE(PIPE_DATA_N_G4X(pipe), m_n->gmch_n); 4285 + I915_WRITE(PIPE_LINK_M_G4X(pipe), m_n->link_m); 4286 + I915_WRITE(PIPE_LINK_N_G4X(pipe), m_n->link_n); 4335 4287 } 4336 4288 } 4337 4289 ··· 4409 4237 { 4410 4238 struct drm_device *dev = crtc->base.dev; 4411 4239 struct drm_i915_private *dev_priv = dev->dev_private; 4240 + struct drm_display_mode *adjusted_mode = 4241 + &crtc->config.adjusted_mode; 4242 + struct intel_encoder *encoder; 4412 4243 int pipe = crtc->pipe; 4413 - u32 dpll, mdiv, pdiv; 4244 + u32 dpll, mdiv; 4414 4245 u32 bestn, bestm1, bestm2, bestp1, bestp2; 4415 - bool is_sdvo; 4416 - u32 temp; 4246 + bool is_hdmi; 4247 + u32 coreclk, reg_val, dpll_md; 4417 4248 4418 4249 mutex_lock(&dev_priv->dpio_lock); 4419 4250 4420 - is_sdvo = intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_SDVO) || 4421 - intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_HDMI); 4422 - 4423 - dpll = DPLL_VGA_MODE_DIS; 4424 - dpll |= DPLL_EXT_BUFFER_ENABLE_VLV; 4425 - dpll |= DPLL_REFA_CLK_ENABLE_VLV; 4426 - dpll |= DPLL_INTEGRATED_CLOCK_VLV; 4427 - 4428 - I915_WRITE(DPLL(pipe), dpll); 4429 - POSTING_READ(DPLL(pipe)); 4251 + is_hdmi = intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_HDMI); 4430 4252 4431 4253 bestn = crtc->config.dpll.n; 4432 4254 bestm1 = crtc->config.dpll.m1; ··· 4428 4262 bestp1 = crtc->config.dpll.p1; 4429 4263 bestp2 = crtc->config.dpll.p2; 4430 4264 4431 - /* 4432 - * In Valleyview PLL and program lane counter registers are exposed 4433 - * through DPIO interface 4434 - */ 4265 + /* See eDP HDMI DPIO driver vbios notes doc */ 4266 + 4267 + /* PLL B needs special handling */ 4268 + if (pipe) 4269 + vlv_pllb_recal_opamp(dev_priv); 4270 + 4271 + /* Set up Tx target for periodic Rcomp update */ 4272 + intel_dpio_write(dev_priv, DPIO_IREF_BCAST, 0x0100000f); 4273 + 4274 + /* Disable target IRef on PLL */ 4275 + reg_val = intel_dpio_read(dev_priv, DPIO_IREF_CTL(pipe)); 4276 + reg_val &= 0x00ffffff; 4277 + intel_dpio_write(dev_priv, DPIO_IREF_CTL(pipe), reg_val); 4278 
+	/* Disable fast lock */
+	intel_dpio_write(dev_priv, DPIO_FASTCLK_DISABLE, 0x610);
+
+	/* Set idtafcrecal before PLL is enabled */
 	mdiv = ((bestm1 << DPIO_M1DIV_SHIFT) | (bestm2 & DPIO_M2DIV_MASK));
 	mdiv |= ((bestp1 << DPIO_P1_SHIFT) | (bestp2 << DPIO_P2_SHIFT));
 	mdiv |= ((bestn << DPIO_N_SHIFT));
-	mdiv |= (1 << DPIO_POST_DIV_SHIFT);
 	mdiv |= (1 << DPIO_K_SHIFT);
+
+	/*
+	 * Post divider depends on pixel clock rate, DAC vs digital (and LVDS,
+	 * but we don't support that).
+	 * Note: don't use the DAC post divider as it seems unstable.
+	 */
+	mdiv |= (DPIO_POST_DIV_HDMIDP << DPIO_POST_DIV_SHIFT);
+	intel_dpio_write(dev_priv, DPIO_DIV(pipe), mdiv);
+
 	mdiv |= DPIO_ENABLE_CALIBRATION;
 	intel_dpio_write(dev_priv, DPIO_DIV(pipe), mdiv);
 
-	intel_dpio_write(dev_priv, DPIO_CORE_CLK(pipe), 0x01000000);
+	/* Set HBR and RBR LPF coefficients */
+	if (adjusted_mode->clock == 162000 ||
+	    intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_HDMI))
+		intel_dpio_write(dev_priv, DPIO_LFP_COEFF(pipe),
+				 0x005f0021);
+	else
+		intel_dpio_write(dev_priv, DPIO_LFP_COEFF(pipe),
+				 0x00d0000f);
 
-	pdiv = (1 << DPIO_REFSEL_OVERRIDE) | (5 << DPIO_PLL_MODESEL_SHIFT) |
-	       (3 << DPIO_BIAS_CURRENT_CTL_SHIFT) | (1<<20) |
-	       (7 << DPIO_PLL_REFCLK_SEL_SHIFT) | (8 << DPIO_DRIVER_CTL_SHIFT) |
-	       (5 << DPIO_CLK_BIAS_CTL_SHIFT);
-	intel_dpio_write(dev_priv, DPIO_REFSFR(pipe), pdiv);
+	if (intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_EDP) ||
+	    intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_DISPLAYPORT)) {
+		/* Use SSC source */
+		if (!pipe)
+			intel_dpio_write(dev_priv, DPIO_REFSFR(pipe),
+					 0x0df40000);
+		else
+			intel_dpio_write(dev_priv, DPIO_REFSFR(pipe),
+					 0x0df70000);
+	} else { /* HDMI or VGA */
+		/* Use bend source */
+		if (!pipe)
+			intel_dpio_write(dev_priv, DPIO_REFSFR(pipe),
+					 0x0df70000);
+		else
+			intel_dpio_write(dev_priv, DPIO_REFSFR(pipe),
+					 0x0df40000);
+	}
 
-	intel_dpio_write(dev_priv, DPIO_LFP_COEFF(pipe), 0x005f003b);
+	coreclk = intel_dpio_read(dev_priv, DPIO_CORE_CLK(pipe));
+	coreclk = (coreclk & 0x0000ff00) | 0x01c00000;
+	if (intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_DISPLAYPORT) ||
+	    intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_EDP))
+		coreclk |= 0x01000000;
+	intel_dpio_write(dev_priv, DPIO_CORE_CLK(pipe), coreclk);
+
+	intel_dpio_write(dev_priv, DPIO_PLL_CML(pipe), 0x87871000);
+
+	for_each_encoder_on_crtc(dev, &crtc->base, encoder)
+		if (encoder->pre_pll_enable)
+			encoder->pre_pll_enable(encoder);
+
+	/* Enable DPIO clock input */
+	dpll = DPLL_EXT_BUFFER_ENABLE_VLV | DPLL_REFA_CLK_ENABLE_VLV |
+	       DPLL_VGA_MODE_DIS | DPLL_INTEGRATED_CLOCK_VLV;
+	if (pipe)
+		dpll |= DPLL_INTEGRATED_CRI_CLK_VLV;
 
 	dpll |= DPLL_VCO_ENABLE;
 	I915_WRITE(DPLL(pipe), dpll);
 	POSTING_READ(DPLL(pipe));
+	udelay(150);
+
 	if (wait_for(((I915_READ(DPLL(pipe)) & DPLL_LOCK_VLV) == DPLL_LOCK_VLV), 1))
 		DRM_ERROR("DPLL %d failed to lock\n", pipe);
 
-	intel_dpio_write(dev_priv, DPIO_FASTCLK_DISABLE, 0x620);
+	dpll_md = 0;
+	if (crtc->config.pixel_multiplier > 1) {
+		dpll_md = (crtc->config.pixel_multiplier - 1)
+			<< DPLL_MD_UDI_MULTIPLIER_SHIFT;
+	}
+	I915_WRITE(DPLL_MD(pipe), dpll_md);
+	POSTING_READ(DPLL_MD(pipe));
 
 	if (crtc->config.has_dp_encoder)
 		intel_dp_set_m_n(crtc);
-
-	I915_WRITE(DPLL(pipe), dpll);
-
-	/* Wait for the clocks to stabilize. */
-	POSTING_READ(DPLL(pipe));
-	udelay(150);
-
-	temp = 0;
-	if (is_sdvo) {
-		temp = 0;
-		if (crtc->config.pixel_multiplier > 1) {
-			temp = (crtc->config.pixel_multiplier - 1)
-				<< DPLL_MD_UDI_MULTIPLIER_SHIFT;
-		}
-	}
-	I915_WRITE(DPLL_MD(pipe), temp);
-	POSTING_READ(DPLL_MD(pipe));
-
-	/* Now program lane control registers */
-	if(intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_DISPLAYPORT)
-	   || intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_HDMI)) {
-		temp = 0x1000C4;
-		if(pipe == 1)
-			temp |= (1 << 21);
-		intel_dpio_write(dev_priv, DPIO_DATA_CHANNEL1, temp);
-	}
-
-	if(intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_EDP)) {
-		temp = 0x1000C4;
-		if(pipe == 1)
-			temp |= (1 << 21);
-		intel_dpio_write(dev_priv, DPIO_DATA_CHANNEL2, temp);
-	}
 
 	mutex_unlock(&dev_priv->dpio_lock);
 }
···
 	else
 		dpll |= DPLLB_MODE_DAC_SERIAL;
 
-	if (is_sdvo) {
-		if ((crtc->config.pixel_multiplier > 1) &&
-		    (IS_I945G(dev) || IS_I945GM(dev) || IS_G33(dev))) {
-			dpll |= (crtc->config.pixel_multiplier - 1)
-				<< SDVO_MULTIPLIER_SHIFT_HIRES;
-		}
-		dpll |= DPLL_DVO_HIGH_SPEED;
+	if ((crtc->config.pixel_multiplier > 1) &&
+	    (IS_I945G(dev) || IS_I945GM(dev) || IS_G33(dev))) {
+		dpll |= (crtc->config.pixel_multiplier - 1)
+			<< SDVO_MULTIPLIER_SHIFT_HIRES;
 	}
+
+	if (is_sdvo)
+		dpll |= DPLL_DVO_HIGH_SPEED;
+
 	if (intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_DISPLAYPORT))
 		dpll |= DPLL_DVO_HIGH_SPEED;
···
 	if (INTEL_INFO(dev)->gen >= 4)
 		dpll |= (6 << PLL_LOAD_PULSE_PHASE_SHIFT);
 
-	if (is_sdvo && intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_TVOUT))
+	if (crtc->config.sdvo_tv_clock)
 		dpll |= PLL_REF_INPUT_TVCLKINBC;
-	else if (intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_TVOUT))
-		/* XXX: just matching BIOS for now */
-		/* dpll |= PLL_REF_INPUT_TVCLKINBC; */
-		dpll |= 3;
 	else if (intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_LVDS) &&
 		 intel_panel_use_ssc(dev_priv) && num_connectors < 2)
 		dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
···
 	udelay(150);
 
 	if (INTEL_INFO(dev)->gen >= 4) {
-		u32 temp = 0;
-		if (is_sdvo) {
-			temp = 0;
-			if (crtc->config.pixel_multiplier > 1) {
-				temp = (crtc->config.pixel_multiplier - 1)
-					<< DPLL_MD_UDI_MULTIPLIER_SHIFT;
-			}
+		u32 dpll_md = 0;
+		if (crtc->config.pixel_multiplier > 1) {
+			dpll_md = (crtc->config.pixel_multiplier - 1)
+				<< DPLL_MD_UDI_MULTIPLIER_SHIFT;
 		}
-		I915_WRITE(DPLL_MD(pipe), temp);
+		I915_WRITE(DPLL_MD(pipe), dpll_md);
 	} else {
 		/* The pixel multiplier can only be updated once the
 		 * DPLL is enabled and the clocks are stable.
···
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	enum pipe pipe = intel_crtc->pipe;
 	enum transcoder cpu_transcoder = intel_crtc->config.cpu_transcoder;
-	uint32_t vsyncshift;
+	uint32_t vsyncshift, crtc_vtotal, crtc_vblank_end;
+
+	/* We need to be careful not to changed the adjusted mode, for otherwise
+	 * the hw state checker will get angry at the mismatch. */
+	crtc_vtotal = adjusted_mode->crtc_vtotal;
+	crtc_vblank_end = adjusted_mode->crtc_vblank_end;
 
 	if (!IS_GEN2(dev) && adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) {
 		/* the chip adds 2 halflines automatically */
-		adjusted_mode->crtc_vtotal -= 1;
-		adjusted_mode->crtc_vblank_end -= 1;
+		crtc_vtotal -= 1;
+		crtc_vblank_end -= 1;
 		vsyncshift = adjusted_mode->crtc_hsync_start
 			- adjusted_mode->crtc_htotal / 2;
 	} else {
···
 	I915_WRITE(VTOTAL(cpu_transcoder),
 		   (adjusted_mode->crtc_vdisplay - 1) |
-		   ((adjusted_mode->crtc_vtotal - 1) << 16));
+		   ((crtc_vtotal - 1) << 16));
 	I915_WRITE(VBLANK(cpu_transcoder),
 		   (adjusted_mode->crtc_vblank_start - 1) |
-		   ((adjusted_mode->crtc_vblank_end - 1) << 16));
+		   ((crtc_vblank_end - 1) << 16));
 	I915_WRITE(VSYNC(cpu_transcoder),
 		   (adjusted_mode->crtc_vsync_start - 1) |
 		   ((adjusted_mode->crtc_vsync_end - 1) << 16));
···
 	 */
 	I915_WRITE(PIPESRC(pipe),
 		   ((mode->hdisplay - 1) << 16) | (mode->vdisplay - 1));
+}
+
+static void intel_get_pipe_timings(struct intel_crtc *crtc,
+				   struct intel_crtc_config *pipe_config)
+{
+	struct drm_device *dev = crtc->base.dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	enum transcoder cpu_transcoder = pipe_config->cpu_transcoder;
+	uint32_t tmp;
+
+	tmp = I915_READ(HTOTAL(cpu_transcoder));
+	pipe_config->adjusted_mode.crtc_hdisplay = (tmp & 0xffff) + 1;
+	pipe_config->adjusted_mode.crtc_htotal = ((tmp >> 16) & 0xffff) + 1;
+	tmp = I915_READ(HBLANK(cpu_transcoder));
+	pipe_config->adjusted_mode.crtc_hblank_start = (tmp & 0xffff) + 1;
+	pipe_config->adjusted_mode.crtc_hblank_end = ((tmp >> 16) & 0xffff) + 1;
+	tmp = I915_READ(HSYNC(cpu_transcoder));
+	pipe_config->adjusted_mode.crtc_hsync_start = (tmp & 0xffff) + 1;
+	pipe_config->adjusted_mode.crtc_hsync_end = ((tmp >> 16) & 0xffff) + 1;
+
+	tmp = I915_READ(VTOTAL(cpu_transcoder));
+	pipe_config->adjusted_mode.crtc_vdisplay = (tmp & 0xffff) + 1;
+	pipe_config->adjusted_mode.crtc_vtotal = ((tmp >> 16) & 0xffff) + 1;
+	tmp = I915_READ(VBLANK(cpu_transcoder));
+	pipe_config->adjusted_mode.crtc_vblank_start = (tmp & 0xffff) + 1;
+	pipe_config->adjusted_mode.crtc_vblank_end = ((tmp >> 16) & 0xffff) + 1;
+	tmp = I915_READ(VSYNC(cpu_transcoder));
+	pipe_config->adjusted_mode.crtc_vsync_start = (tmp & 0xffff) + 1;
+	pipe_config->adjusted_mode.crtc_vsync_end = ((tmp >> 16) & 0xffff) + 1;
+
+	if (I915_READ(PIPECONF(cpu_transcoder)) & PIPECONF_INTERLACE_MASK) {
+		pipe_config->adjusted_mode.flags |= DRM_MODE_FLAG_INTERLACE;
+		pipe_config->adjusted_mode.crtc_vtotal += 1;
+		pipe_config->adjusted_mode.crtc_vblank_end += 1;
+	}
+
+	tmp = I915_READ(PIPESRC(crtc->pipe));
+	pipe_config->requested_mode.vdisplay = (tmp & 0xffff) + 1;
+	pipe_config->requested_mode.hdisplay = ((tmp >> 16) & 0xffff) + 1;
 }
 
 static void i9xx_set_pipeconf(struct intel_crtc *intel_crtc)
···
 		pipeconf &= ~PIPECONF_DOUBLE_WIDE;
 	}
 
-	/* default to 8bpc */
-	pipeconf &= ~(PIPECONF_BPC_MASK | PIPECONF_DITHER_EN);
-	if (intel_crtc->config.has_dp_encoder) {
-		if (intel_crtc->config.dither) {
-			pipeconf |= PIPECONF_6BPC |
-				    PIPECONF_DITHER_EN |
-				    PIPECONF_DITHER_TYPE_SP;
-		}
-	}
+	/* only g4x and later have fancy bpc/dither controls */
+	if (IS_G4X(dev) || IS_VALLEYVIEW(dev)) {
+		pipeconf &= ~(PIPECONF_BPC_MASK |
+			      PIPECONF_DITHER_EN | PIPECONF_DITHER_TYPE_MASK);
 
-	if (IS_VALLEYVIEW(dev) && intel_pipe_has_type(&intel_crtc->base,
-						      INTEL_OUTPUT_EDP)) {
-		if (intel_crtc->config.dither) {
-			pipeconf |= PIPECONF_6BPC |
-				    PIPECONF_ENABLE |
-				    I965_PIPECONF_ACTIVE;
+		/* Bspec claims that we can't use dithering for 30bpp pipes. */
+		if (intel_crtc->config.dither && intel_crtc->config.pipe_bpp != 30)
+			pipeconf |= PIPECONF_DITHER_EN |
+				    PIPECONF_DITHER_TYPE_SP;
+
+		switch (intel_crtc->config.pipe_bpp) {
+		case 18:
+			pipeconf |= PIPECONF_6BPC;
+			break;
+		case 24:
+			pipeconf |= PIPECONF_8BPC;
+			break;
+		case 30:
+			pipeconf |= PIPECONF_10BPC;
+			break;
+		default:
+			/* Case prevented by intel_choose_pipe_bpp_dither. */
+			BUG();
 		}
 	}
···
 	int refclk, num_connectors = 0;
 	intel_clock_t clock, reduced_clock;
 	u32 dspcntr;
-	bool ok, has_reduced_clock = false, is_sdvo = false;
-	bool is_lvds = false, is_tv = false;
+	bool ok, has_reduced_clock = false;
+	bool is_lvds = false;
 	struct intel_encoder *encoder;
 	const intel_limit_t *limit;
 	int ret;
···
 		switch (encoder->type) {
 		case INTEL_OUTPUT_LVDS:
 			is_lvds = true;
 			break;
-		case INTEL_OUTPUT_SDVO:
-		case INTEL_OUTPUT_HDMI:
-			is_sdvo = true;
-			if (encoder->needs_tv_clock)
-				is_tv = true;
-			break;
-		case INTEL_OUTPUT_TVOUT:
-			is_tv = true;
-			break;
 		}
···
 		intel_crtc->config.dpll.p2 = clock.p2;
 	}
 
-	if (is_sdvo && is_tv)
-		i9xx_adjust_sdvo_tv_clock(intel_crtc);
-
 	if (IS_GEN2(dev))
 		i8xx_update_pll(intel_crtc, adjusted_mode,
 				has_reduced_clock ? &reduced_clock : NULL,
···
 	else
 		i9xx_update_pll(intel_crtc,
 				has_reduced_clock ? &reduced_clock : NULL,
-			      num_connectors);
+				num_connectors);
 
 	/* Set up the display plane register */
 	dspcntr = DISPPLANE_GAMMA_ENABLE;
···
 		dspcntr |= DISPPLANE_SEL_PIPE_B;
 	}
 
-	DRM_DEBUG_KMS("Mode for pipe %c:\n", pipe == 0 ? 'A' : 'B');
+	DRM_DEBUG_KMS("Mode for pipe %c:\n", pipe_name(pipe));
 	drm_mode_debug_printmodeline(mode);
 
 	intel_set_pipe_timings(intel_crtc, mode, adjusted_mode);
···
 	i9xx_set_pipeconf(intel_crtc);
 
-	intel_enable_pipe(dev_priv, pipe, false);
-
-	intel_wait_for_vblank(dev, pipe);
-
 	I915_WRITE(DSPCNTR(plane), dspcntr);
 	POSTING_READ(DSPCNTR(plane));
···
 	intel_update_watermarks(dev);
 
 	return ret;
+}
+
+static void i9xx_get_pfit_config(struct intel_crtc *crtc,
+				 struct intel_crtc_config *pipe_config)
+{
+	struct drm_device *dev = crtc->base.dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	uint32_t tmp;
+
+	tmp = I915_READ(PFIT_CONTROL);
+
+	if (INTEL_INFO(dev)->gen < 4) {
+		if (crtc->pipe != PIPE_B)
+			return;
+
+		/* gen2/3 store dither state in pfit control, needs to match */
+		pipe_config->gmch_pfit.control = tmp & PANEL_8TO6_DITHER_ENABLE;
+	} else {
+		if ((tmp & PFIT_PIPE_MASK) != (crtc->pipe << PFIT_PIPE_SHIFT))
+			return;
+	}
+
+	if (!(tmp & PFIT_ENABLE))
+		return;
+
+	pipe_config->gmch_pfit.control = I915_READ(PFIT_CONTROL);
+	pipe_config->gmch_pfit.pgm_ratios = I915_READ(PFIT_PGM_RATIOS);
+	if (INTEL_INFO(dev)->gen < 5)
+		pipe_config->gmch_pfit.lvds_border_bits =
+			I915_READ(LVDS) & LVDS_BORDER_ENABLE;
 }
 
 static bool i9xx_get_pipe_config(struct intel_crtc *crtc,
···
 	if (!(tmp & PIPECONF_ENABLE))
 		return false;
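As an aside on the readout code above: every pipe timing register packs two zero-based 16-bit fields, which is why intel_get_pipe_timings() applies the same `(reg & 0xffff) + 1` / `((reg >> 16) & 0xffff) + 1` decode everywhere. A standalone sketch of that encoding (helper names are illustrative, not from the driver):

```c
#include <assert.h>
#include <stdint.h>

/* HTOTAL-style layout: bits 15:0 hold (active - 1), bits 31:16 hold
 * (total - 1). Writing stores the biased values; reading adds 1 back. */
static uint32_t pack_timings(int active, int total)
{
	return (uint32_t)(active - 1) | ((uint32_t)(total - 1) << 16);
}

static int unpack_active(uint32_t reg)
{
	return (int)(reg & 0xffff) + 1;
}

static int unpack_total(uint32_t reg)
{
	return (int)((reg >> 16) & 0xffff) + 1;
}
```

The bias by one means a register value of 0 is a legal (1-pixel) field, so a round trip through pack/unpack is lossless for any mode the hardware accepts.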
 
+	intel_get_pipe_timings(crtc, pipe_config);
+
+	i9xx_get_pfit_config(crtc, pipe_config);
+
 	return true;
 }
···
 	u32 val, final;
 	bool has_lvds = false;
 	bool has_cpu_edp = false;
-	bool has_pch_edp = false;
 	bool has_panel = false;
 	bool has_ck505 = false;
 	bool can_ssc = false;
···
 			break;
 		case INTEL_OUTPUT_EDP:
 			has_panel = true;
-			if (intel_encoder_is_pch_edp(&encoder->base))
-				has_pch_edp = true;
-			else
+			if (enc_to_dig_port(&encoder->base)->port == PORT_A)
 				has_cpu_edp = true;
 			break;
 		}
 
 	if (HAS_PCH_IBX(dev)) {
-		has_ck505 = dev_priv->display_clock_mode;
+		has_ck505 = dev_priv->vbt.display_clock_mode;
 		can_ssc = has_ck505;
 	} else {
 		has_ck505 = false;
 		can_ssc = true;
 	}
 
-	DRM_DEBUG_KMS("has_panel %d has_lvds %d has_pch_edp %d has_cpu_edp %d has_ck505 %d\n",
-		      has_panel, has_lvds, has_pch_edp, has_cpu_edp,
-		      has_ck505);
+	DRM_DEBUG_KMS("has_panel %d has_lvds %d has_ck505 %d\n",
+		      has_panel, has_lvds, has_ck505);
 
 	/* Ironlake: try to setup display ref clock before DPLL
 	 * enabling. This is only under driver's control after
···
 	struct drm_device *dev = crtc->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_encoder *encoder;
-	struct intel_encoder *edp_encoder = NULL;
 	int num_connectors = 0;
 	bool is_lvds = false;
···
 		case INTEL_OUTPUT_LVDS:
 			is_lvds = true;
 			break;
-		case INTEL_OUTPUT_EDP:
-			edp_encoder = encoder;
-			break;
 		}
 		num_connectors++;
 	}
 
 	if (is_lvds && intel_panel_use_ssc(dev_priv) && num_connectors < 2) {
 		DRM_DEBUG_KMS("using SSC reference clock of %d MHz\n",
-			      dev_priv->lvds_ssc_freq);
-		return dev_priv->lvds_ssc_freq * 1000;
+			      dev_priv->vbt.lvds_ssc_freq);
+		return dev_priv->vbt.lvds_ssc_freq * 1000;
 	}
 
 	return 120000;
 }
 
-static void ironlake_set_pipeconf(struct drm_crtc *crtc,
-				  struct drm_display_mode *adjusted_mode,
-				  bool dither)
+static void ironlake_set_pipeconf(struct drm_crtc *crtc)
 {
 	struct drm_i915_private *dev_priv = crtc->dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
···
 	}
 
 	val &= ~(PIPECONF_DITHER_EN | PIPECONF_DITHER_TYPE_MASK);
-	if (dither)
+	if (intel_crtc->config.dither)
 		val |= (PIPECONF_DITHER_EN | PIPECONF_DITHER_TYPE_SP);
 
 	val &= ~PIPECONF_INTERLACE_MASK;
-	if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE)
+	if (intel_crtc->config.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE)
 		val |= PIPECONF_INTERLACED_ILK;
 	else
 		val |= PIPECONF_PROGRESSIVE;
···
 	}
 }
 
-static void haswell_set_pipeconf(struct drm_crtc *crtc,
-				 struct drm_display_mode *adjusted_mode,
-				 bool dither)
+static void haswell_set_pipeconf(struct drm_crtc *crtc)
 {
 	struct drm_i915_private *dev_priv = crtc->dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
···
 	val = I915_READ(PIPECONF(cpu_transcoder));
 
 	val &= ~(PIPECONF_DITHER_EN | PIPECONF_DITHER_TYPE_MASK);
-	if (dither)
+	if (intel_crtc->config.dither)
 		val |= (PIPECONF_DITHER_EN | PIPECONF_DITHER_TYPE_SP);
 
 	val &= ~PIPECONF_INTERLACE_MASK_HSW;
-	if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE)
+	if (intel_crtc->config.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE)
 		val |= PIPECONF_INTERLACED_ILK;
 	else
 		val |= PIPECONF_PROGRESSIVE;
···
 	struct intel_encoder *intel_encoder;
 	int refclk;
 	const intel_limit_t *limit;
-	bool ret, is_sdvo = false, is_tv = false, is_lvds = false;
+	bool ret, is_lvds = false;
 
 	for_each_encoder_on_crtc(dev, crtc, intel_encoder) {
 		switch (intel_encoder->type) {
 		case INTEL_OUTPUT_LVDS:
 			is_lvds = true;
 			break;
-		case INTEL_OUTPUT_SDVO:
-		case INTEL_OUTPUT_HDMI:
-			is_sdvo = true;
-			if (intel_encoder->needs_tv_clock)
-				is_tv = true;
-			break;
-		case INTEL_OUTPUT_TVOUT:
-			is_tv = true;
-			break;
 		}
 	}
···
 				    reduced_clock);
 	}
 
-	if (is_sdvo && is_tv)
-		i9xx_adjust_sdvo_tv_clock(to_intel_crtc(crtc));
-
 	return true;
 }
···
 	POSTING_READ(SOUTH_CHICKEN1);
 }
 
-static bool ironlake_check_fdi_lanes(struct intel_crtc *intel_crtc)
+static void ivybridge_update_fdi_bc_bifurcation(struct intel_crtc *intel_crtc)
 {
 	struct drm_device *dev = intel_crtc->base.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_crtc *pipe_B_crtc =
-		to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_B]);
-
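The ironlake_get_refclk() hunk above boils down to one decision: a lone LVDS panel with SSC enabled uses the VBT-provided SSC clock, everything else uses the fixed 120 MHz reference. A standalone sketch of that selection (function and parameter names are illustrative; the VBT stores the SSC frequency in MHz, the result is in kHz):

```c
#include <assert.h>

/* Mirror of the refclk choice: SSC only for a single LVDS output. */
static int get_refclk_khz(int is_lvds, int use_ssc, int num_connectors,
			  int lvds_ssc_freq_mhz)
{
	if (is_lvds && use_ssc && num_connectors < 2)
		return lvds_ssc_freq_mhz * 1000;

	return 120000;	/* fixed reference otherwise */
}
```

With two or more connectors sharing the pipe, SSC is skipped even for LVDS, which is exactly the `num_connectors < 2` guard in the diff.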
-	DRM_DEBUG_KMS("checking fdi config on pipe %i, lanes %i\n",
-		      intel_crtc->pipe, intel_crtc->fdi_lanes);
-	if (intel_crtc->fdi_lanes > 4) {
-		DRM_DEBUG_KMS("invalid fdi lane config on pipe %i: %i lanes\n",
-			      intel_crtc->pipe, intel_crtc->fdi_lanes);
-		/* Clamp lanes to avoid programming the hw with bogus values. */
-		intel_crtc->fdi_lanes = 4;
-
-		return false;
-	}
-
-	if (INTEL_INFO(dev)->num_pipes == 2)
-		return true;
 
 	switch (intel_crtc->pipe) {
 	case PIPE_A:
-		return true;
+		break;
 	case PIPE_B:
-		if (dev_priv->pipe_to_crtc_mapping[PIPE_C]->enabled &&
-		    intel_crtc->fdi_lanes > 2) {
-			DRM_DEBUG_KMS("invalid shared fdi lane config on pipe %i: %i lanes\n",
-				      intel_crtc->pipe, intel_crtc->fdi_lanes);
-			/* Clamp lanes to avoid programming the hw with bogus values. */
-			intel_crtc->fdi_lanes = 2;
-
-			return false;
-		}
-
-		if (intel_crtc->fdi_lanes > 2)
+		if (intel_crtc->config.fdi_lanes > 2)
 			WARN_ON(I915_READ(SOUTH_CHICKEN1) & FDI_BC_BIFURCATION_SELECT);
 		else
 			cpt_enable_fdi_bc_bifurcation(dev);
 
-		return true;
+		break;
 	case PIPE_C:
-		if (!pipe_B_crtc->base.enabled || pipe_B_crtc->fdi_lanes <= 2) {
-			if (intel_crtc->fdi_lanes > 2) {
-				DRM_DEBUG_KMS("invalid shared fdi lane config on pipe %i: %i lanes\n",
-					      intel_crtc->pipe, intel_crtc->fdi_lanes);
-				/* Clamp lanes to avoid programming the hw with bogus values. */
-				intel_crtc->fdi_lanes = 2;
-
-				return false;
-			}
-		} else {
-			DRM_DEBUG_KMS("fdi link B uses too many lanes to enable link C\n");
-			return false;
-		}
-
 		cpt_enable_fdi_bc_bifurcation(dev);
 
-		return true;
+		break;
 	default:
 		BUG();
 	}
···
 	return bps / (link_bw * 8) + 1;
 }
 
-void intel_pch_transcoder_set_m_n(struct intel_crtc *crtc,
-				  struct intel_link_m_n *m_n)
+static bool ironlake_needs_fb_cb_tune(struct dpll *dpll, int factor)
 {
-	struct drm_device *dev = crtc->base.dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	int pipe = crtc->pipe;
-
-	I915_WRITE(TRANSDATA_M1(pipe), TU_SIZE(m_n->tu) | m_n->gmch_m);
-	I915_WRITE(TRANSDATA_N1(pipe), m_n->gmch_n);
-	I915_WRITE(TRANSDPLINK_M1(pipe), m_n->link_m);
-	I915_WRITE(TRANSDPLINK_N1(pipe), m_n->link_n);
-}
-
-void intel_cpu_transcoder_set_m_n(struct intel_crtc *crtc,
-				  struct intel_link_m_n *m_n)
-{
-	struct drm_device *dev = crtc->base.dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	int pipe = crtc->pipe;
-	enum transcoder transcoder = crtc->config.cpu_transcoder;
-
-	if (INTEL_INFO(dev)->gen >= 5) {
-		I915_WRITE(PIPE_DATA_M1(transcoder), TU_SIZE(m_n->tu) | m_n->gmch_m);
-		I915_WRITE(PIPE_DATA_N1(transcoder), m_n->gmch_n);
-		I915_WRITE(PIPE_LINK_M1(transcoder), m_n->link_m);
-		I915_WRITE(PIPE_LINK_N1(transcoder), m_n->link_n);
-	} else {
-		I915_WRITE(PIPE_GMCH_DATA_M(pipe), TU_SIZE(m_n->tu) | m_n->gmch_m);
-		I915_WRITE(PIPE_GMCH_DATA_N(pipe), m_n->gmch_n);
-		I915_WRITE(PIPE_DP_LINK_M(pipe), m_n->link_m);
-		I915_WRITE(PIPE_DP_LINK_N(pipe), m_n->link_n);
-	}
-}
-
-static void ironlake_fdi_set_m_n(struct drm_crtc *crtc)
-{
-	struct drm_device *dev = crtc->dev;
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	struct drm_display_mode *adjusted_mode =
-		&intel_crtc->config.adjusted_mode;
-	struct intel_link_m_n m_n = {0};
-	int target_clock, lane, link_bw;
-
-	/* FDI is a binary signal running at ~2.7GHz, encoding
-	 * each output octet as 10 bits. The actual frequency
-	 * is stored as a divider into a 100MHz clock, and the
-	 * mode pixel clock is stored in units of 1KHz.
-	 * Hence the bw of each lane in terms of the mode signal
-	 * is:
-	 */
-	link_bw = intel_fdi_link_freq(dev) * MHz(100)/KHz(1)/10;
-
-	if (intel_crtc->config.pixel_target_clock)
-		target_clock = intel_crtc->config.pixel_target_clock;
-	else
-		target_clock = adjusted_mode->clock;
-
-	lane = ironlake_get_lanes_required(target_clock, link_bw,
-					   intel_crtc->config.pipe_bpp);
-
-	intel_crtc->fdi_lanes = lane;
-
-	if (intel_crtc->config.pixel_multiplier > 1)
-		link_bw *= intel_crtc->config.pixel_multiplier;
-	intel_link_compute_m_n(intel_crtc->config.pipe_bpp, lane, target_clock,
-			       link_bw, &m_n);
-
-	intel_cpu_transcoder_set_m_n(intel_crtc, &m_n);
+	return i9xx_dpll_compute_m(dpll) < factor * dpll->n;
 }
 
 static uint32_t ironlake_compute_dpll(struct intel_crtc *intel_crtc,
-				      intel_clock_t *clock, u32 *fp,
+				      u32 *fp,
 				      intel_clock_t *reduced_clock, u32 *fp2)
 {
 	struct drm_crtc *crtc = &intel_crtc->base;
···
 	struct intel_encoder *intel_encoder;
 	uint32_t dpll;
 	int factor, num_connectors = 0;
-	bool is_lvds = false, is_sdvo = false, is_tv = false;
+	bool is_lvds = false, is_sdvo = false;
 
 	for_each_encoder_on_crtc(dev, crtc, intel_encoder) {
 		switch (intel_encoder->type) {
···
 		case INTEL_OUTPUT_SDVO:
 		case INTEL_OUTPUT_HDMI:
 			is_sdvo = true;
-			if (intel_encoder->needs_tv_clock)
-				is_tv = true;
-			break;
-		case INTEL_OUTPUT_TVOUT:
-			is_tv = true;
 			break;
 		}
···
 	factor = 21;
 	if (is_lvds) {
 		if ((intel_panel_use_ssc(dev_priv) &&
-		     dev_priv->lvds_ssc_freq == 100) ||
+		     dev_priv->vbt.lvds_ssc_freq == 100) ||
 		    (HAS_PCH_IBX(dev) && intel_is_dual_link_lvds(dev)))
 			factor = 25;
-	} else if (is_sdvo && is_tv)
+	} else if (intel_crtc->config.sdvo_tv_clock)
 		factor = 20;
 
-	if (clock->m < factor * clock->n)
+	if (ironlake_needs_fb_cb_tune(&intel_crtc->config.dpll, factor))
 		*fp |= FP_CB_TUNE;
 
 	if (fp2 && (reduced_clock->m < factor * reduced_clock->n))
···
 		dpll |= DPLLB_MODE_LVDS;
 	else
 		dpll |= DPLLB_MODE_DAC_SERIAL;
-	if (is_sdvo) {
-		if (intel_crtc->config.pixel_multiplier > 1) {
-			dpll |= (intel_crtc->config.pixel_multiplier - 1)
-				<< PLL_REF_SDVO_HDMI_MULTIPLIER_SHIFT;
-		}
-		dpll |= DPLL_DVO_HIGH_SPEED;
+
+	if (intel_crtc->config.pixel_multiplier > 1) {
+		dpll |= (intel_crtc->config.pixel_multiplier - 1)
+			<< PLL_REF_SDVO_HDMI_MULTIPLIER_SHIFT;
 	}
-	if (intel_crtc->config.has_dp_encoder &&
-	    intel_crtc->config.has_pch_encoder)
+
+	if (is_sdvo)
+		dpll |= DPLL_DVO_HIGH_SPEED;
+	if (intel_crtc->config.has_dp_encoder)
 		dpll |= DPLL_DVO_HIGH_SPEED;
 
 	/* compute bitmask from p1 value */
-	dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
+	dpll |= (1 << (intel_crtc->config.dpll.p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
 	/* also FPA1 */
-	dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA1_P1_POST_DIV_SHIFT;
+	dpll |= (1 << (intel_crtc->config.dpll.p1 - 1)) << DPLL_FPA1_P1_POST_DIV_SHIFT;
 
-	switch (clock->p2) {
+	switch (intel_crtc->config.dpll.p2) {
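A note on the "compute bitmask from p1 value" lines above: the P1 post divider is programmed one-hot, i.e. divider N sets bit (N - 1) of the field, which is what the `1 << (p1 - 1)` expression produces before the field shift. A standalone sketch (the shift value here is illustrative; the real one lives in i915_reg.h):

```c
#include <assert.h>
#include <stdint.h>

#define DPLL_FPA01_P1_POST_DIV_SHIFT	16	/* illustrative */

/* One-hot encoding of the P1 post divider, as in ironlake_compute_dpll(). */
static uint32_t p1_to_dpll_bits(int p1)
{
	return (uint32_t)(1 << (p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
}
```

The one-hot form means exactly one bit of the field is ever set for a valid divider, so bogus values are easy to spot in register dumps.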
 	case 5:
 		dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_5;
 		break;
···
 		break;
 	}
 
-	if (is_sdvo && is_tv)
-		dpll |= PLL_REF_INPUT_TVCLKINBC;
-	else if (is_tv)
-		/* XXX: just matching BIOS for now */
-		/* dpll |= PLL_REF_INPUT_TVCLKINBC; */
-		dpll |= 3;
-	else if (is_lvds && intel_panel_use_ssc(dev_priv) && num_connectors < 2)
+	if (is_lvds && intel_panel_use_ssc(dev_priv) && num_connectors < 2)
 		dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
 	else
 		dpll |= PLL_REF_INPUT_DREFCLK;
···
 	int plane = intel_crtc->plane;
 	int num_connectors = 0;
 	intel_clock_t clock, reduced_clock;
-	u32 dpll, fp = 0, fp2 = 0;
+	u32 dpll = 0, fp = 0, fp2 = 0;
 	bool ok, has_reduced_clock = false;
 	bool is_lvds = false;
 	struct intel_encoder *encoder;
 	int ret;
-	bool dither, fdi_config_ok;
 
 	for_each_encoder_on_crtc(dev, crtc, encoder) {
 		switch (encoder->type) {
···
 	/* Ensure that the cursor is valid for the new mode before changing... */
 	intel_crtc_update_cursor(crtc, true);
 
-	/* determine panel color depth */
-	dither = intel_crtc->config.dither;
-	if (is_lvds && dev_priv->lvds_dither)
-		dither = true;
-
-	fp = clock.n << 16 | clock.m1 << 8 | clock.m2;
-	if (has_reduced_clock)
-		fp2 = reduced_clock.n << 16 | reduced_clock.m1 << 8 |
-		      reduced_clock.m2;
-
-	dpll = ironlake_compute_dpll(intel_crtc, &clock, &fp, &reduced_clock,
-				     has_reduced_clock ? &fp2 : NULL);
-
-	DRM_DEBUG_KMS("Mode for pipe %d:\n", pipe);
+	DRM_DEBUG_KMS("Mode for pipe %c:\n", pipe_name(pipe));
 	drm_mode_debug_printmodeline(mode);
 
 	/* CPU eDP is the only output that doesn't need a PCH PLL of its own. */
 	if (intel_crtc->config.has_pch_encoder) {
 		struct intel_pch_pll *pll;
 
+		fp = i9xx_dpll_compute_fp(&intel_crtc->config.dpll);
+		if (has_reduced_clock)
+			fp2 = i9xx_dpll_compute_fp(&reduced_clock);
+
+		dpll = ironlake_compute_dpll(intel_crtc,
+					     &fp, &reduced_clock,
+					     has_reduced_clock ? &fp2 : NULL);
+
 		pll = intel_get_pch_pll(intel_crtc, dpll, fp);
 		if (pll == NULL) {
-			DRM_DEBUG_DRIVER("failed to find PLL for pipe %d\n",
-					 pipe);
+			DRM_DEBUG_DRIVER("failed to find PLL for pipe %c\n",
+					 pipe_name(pipe));
 			return -EINVAL;
 		}
 	} else
···
 	intel_set_pipe_timings(intel_crtc, mode, adjusted_mode);
 
-	/* Note, this also computes intel_crtc->fdi_lanes which is used below in
-	 * ironlake_check_fdi_lanes. */
-	intel_crtc->fdi_lanes = 0;
-	if (intel_crtc->config.has_pch_encoder)
-		ironlake_fdi_set_m_n(crtc);
+	if (intel_crtc->config.has_pch_encoder) {
+		intel_cpu_transcoder_set_m_n(intel_crtc,
+					     &intel_crtc->config.fdi_m_n);
+	}
 
-	fdi_config_ok = ironlake_check_fdi_lanes(intel_crtc);
+	if (IS_IVYBRIDGE(dev))
+		ivybridge_update_fdi_bc_bifurcation(intel_crtc);
 
-	ironlake_set_pipeconf(crtc, adjusted_mode, dither);
-
-	intel_wait_for_vblank(dev, pipe);
+	ironlake_set_pipeconf(crtc);
 
 	/* Set up the display plane register */
 	I915_WRITE(DSPCNTR(plane), DISPPLANE_GAMMA_ENABLE);
···
 	intel_update_linetime_watermarks(dev, pipe, adjusted_mode);
 
-	return fdi_config_ok ? ret : -EINVAL;
+	return ret;
+}
+
+static void ironlake_get_fdi_m_n_config(struct intel_crtc *crtc,
+					struct intel_crtc_config *pipe_config)
+{
+	struct drm_device *dev = crtc->base.dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	enum transcoder transcoder = pipe_config->cpu_transcoder;
+
+	pipe_config->fdi_m_n.link_m = I915_READ(PIPE_LINK_M1(transcoder));
+	pipe_config->fdi_m_n.link_n = I915_READ(PIPE_LINK_N1(transcoder));
+	pipe_config->fdi_m_n.gmch_m = I915_READ(PIPE_DATA_M1(transcoder))
+					& ~TU_SIZE_MASK;
+	pipe_config->fdi_m_n.gmch_n = I915_READ(PIPE_DATA_N1(transcoder));
+	pipe_config->fdi_m_n.tu = ((I915_READ(PIPE_DATA_M1(transcoder))
+				    & TU_SIZE_MASK) >> TU_SIZE_SHIFT) + 1;
+}
+
+static void ironlake_get_pfit_config(struct intel_crtc *crtc,
+				     struct intel_crtc_config *pipe_config)
+{
+	struct drm_device *dev = crtc->base.dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	uint32_t tmp;
+
+	tmp = I915_READ(PF_CTL(crtc->pipe));
+
+	if (tmp & PF_ENABLE) {
+		pipe_config->pch_pfit.pos = I915_READ(PF_WIN_POS(crtc->pipe));
+		pipe_config->pch_pfit.size = I915_READ(PF_WIN_SZ(crtc->pipe));
+	}
 }
 
 static bool ironlake_get_pipe_config(struct intel_crtc *crtc,
···
 	if (!(tmp & PIPECONF_ENABLE))
 		return false;
 
-	if (I915_READ(TRANSCONF(crtc->pipe)) & TRANS_ENABLE)
+	if (I915_READ(PCH_TRANSCONF(crtc->pipe)) & TRANS_ENABLE) {
 		pipe_config->has_pch_encoder = true;
+
+		tmp = I915_READ(FDI_RX_CTL(crtc->pipe));
+		pipe_config->fdi_lanes = ((FDI_DP_PORT_WIDTH_MASK & tmp) >>
+					  FDI_DP_PORT_WIDTH_SHIFT) + 1;
+
+		ironlake_get_fdi_m_n_config(crtc, pipe_config);
+	}
+
+	intel_get_pipe_timings(crtc, pipe_config);
+
+	ironlake_get_pfit_config(crtc, pipe_config);
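The FDI M/N readout above recovers the transfer-unit size from the top bits of PIPE_DATA_M1, where it was packed biased by one via TU_SIZE() on the write side. A standalone round-trip sketch (the shift and 6-bit field width are assumptions for illustration; the real definitions are in i915_reg.h):

```c
#include <assert.h>
#include <stdint.h>

#define TU_SIZE_SHIFT	25				/* illustrative */
#define TU_SIZE_MASK	(0x3fu << TU_SIZE_SHIFT)
#define TU_SIZE(x)	(((uint32_t)(x) - 1) << TU_SIZE_SHIFT)	/* pack */

/* Unpack as ironlake_get_fdi_m_n_config() does: mask, shift, add 1. */
static int tu_from_data_m(uint32_t data_m)
{
	return (int)((data_m & TU_SIZE_MASK) >> TU_SIZE_SHIFT) + 1;
}
```

Because the TU field and the gmch_m value share one register, the readout masks TU_SIZE_MASK off for gmch_m and keeps only it for tu, so the two writes `TU_SIZE(tu) | gmch_m` round-trip cleanly.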
 
 	return true;
 }
 
 static void haswell_modeset_global_resources(struct drm_device *dev)
 {
-	struct drm_i915_private *dev_priv = dev->dev_private;
 	bool enable = false;
 	struct intel_crtc *crtc;
 	struct intel_encoder *encoder;
···
 		/* XXX: Should check for edp transcoder here, but thanks to init
 		 * sequence that's not yet available. Just in case desktop eDP
 		 * on PORT D is possible on haswell, too. */
+		/* Even the eDP panel fitter is outside the always-on well. */
+		if (crtc->config.pch_pfit.size && crtc->base.enabled)
+			enable = true;
 	}
 
 	list_for_each_entry(encoder, &dev->mode_config.encoder_list,
···
 		    encoder->connectors_active)
 			enable = true;
 	}
-
-	/* Even the eDP panel fitter is outside the always-on well. */
-	if (dev_priv->pch_pf_size)
-		enable = true;
 
 	intel_set_power_well(dev, enable);
 }
···
 	bool is_cpu_edp = false;
 	struct intel_encoder *encoder;
 	int ret;
-	bool dither;
 
 	for_each_encoder_on_crtc(dev, crtc, encoder) {
 		switch (encoder->type) {
 		case INTEL_OUTPUT_EDP:
-			if (!intel_encoder_is_pch_edp(&encoder->base))
+			if (enc_to_dig_port(&encoder->base)->port == PORT_A)
 				is_cpu_edp = true;
 			break;
 		}
···
 	/* Ensure that the cursor is valid for the new mode before changing... */
 	intel_crtc_update_cursor(crtc, true);
 
-	/* determine panel color depth */
-	dither = intel_crtc->config.dither;
-
-	DRM_DEBUG_KMS("Mode for pipe %d:\n", pipe);
+	DRM_DEBUG_KMS("Mode for pipe %c:\n", pipe_name(pipe));
 	drm_mode_debug_printmodeline(mode);
 
 	if (intel_crtc->config.has_dp_encoder)
···
 	intel_set_pipe_timings(intel_crtc, mode, adjusted_mode);
 
-	if (intel_crtc->config.has_pch_encoder)
-		ironlake_fdi_set_m_n(crtc);
+	if (intel_crtc->config.has_pch_encoder) {
+		intel_cpu_transcoder_set_m_n(intel_crtc,
+					     &intel_crtc->config.fdi_m_n);
+	}
 
-	haswell_set_pipeconf(crtc, adjusted_mode, dither);
+	haswell_set_pipeconf(crtc);
 
 	intel_set_pipe_csc(crtc);
···
 {
 	struct drm_device *dev = crtc->base.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
+	enum transcoder cpu_transcoder = crtc->config.cpu_transcoder;
+	enum intel_display_power_domain pfit_domain;
 	uint32_t tmp;
 
-	tmp = I915_READ(PIPECONF(crtc->config.cpu_transcoder));
+	if (!intel_display_power_enabled(dev,
+					 POWER_DOMAIN_TRANSCODER(cpu_transcoder)))
+		return false;
+
+	tmp = I915_READ(PIPECONF(cpu_transcoder));
 	if (!(tmp & PIPECONF_ENABLE))
 		return false;
 
 	/*
-	 * aswell has only FDI/PCH transcoder A. It is which is connected to
+	 * Haswell has only FDI/PCH transcoder A. It is which is connected to
 	 * DDI E. So just check whether this pipe is wired to DDI E and whether
 	 * the PCH transcoder is on.
 	 */
-	tmp = I915_READ(TRANS_DDI_FUNC_CTL(crtc->pipe));
+	tmp = I915_READ(TRANS_DDI_FUNC_CTL(cpu_transcoder));
 	if ((tmp & TRANS_DDI_PORT_MASK) == TRANS_DDI_SELECT_PORT(PORT_E) &&
-	    I915_READ(TRANSCONF(PIPE_A)) & TRANS_ENABLE)
+	    I915_READ(LPT_TRANSCONF) & TRANS_ENABLE) {
 		pipe_config->has_pch_encoder = true;
 
+		tmp = I915_READ(FDI_RX_CTL(PIPE_A));
+		pipe_config->fdi_lanes = ((FDI_DP_PORT_WIDTH_MASK & tmp) >>
+					  FDI_DP_PORT_WIDTH_SHIFT) + 1;
+
+		ironlake_get_fdi_m_n_config(crtc, pipe_config);
+	}
+
+	intel_get_pipe_timings(crtc, pipe_config);
+
+	pfit_domain = POWER_DOMAIN_PIPE_PANEL_FITTER(crtc->pipe);
+	if (intel_display_power_enabled(dev, pfit_domain))
+		ironlake_get_pfit_config(crtc, pipe_config);
 
 	return true;
 }
···
 		eldv |= IBX_ELD_VALIDB << 4;
 		eldv |= IBX_ELD_VALIDB << 8;
 	} else {
-		DRM_DEBUG_DRIVER("ELD on port %c\n", 'A' + i);
+		DRM_DEBUG_DRIVER("ELD on port %c\n", port_name(i));
 		eldv = IBX_ELD_VALIDB << ((i - 1) * 4);
 	}
···
 			    bpp, connector->display_info.bpc*3);
 		pipe_config->pipe_bpp = connector->display_info.bpc*3;
 	}
+
+	/* Clamp bpp to 8 on screens without EDID 1.4 */
+	if (connector->display_info.bpc == 0 && bpp > 24) {
+		DRM_DEBUG_KMS("clamping display bpp (was %d) to default limit of 24\n",
+			      bpp);
+		pipe_config->pipe_bpp = 24;
+	}
 	}
 
 	return bpp;
···
 	struct drm_encoder_helper_funcs *encoder_funcs;
 	struct intel_encoder *encoder;
 	struct intel_crtc_config *pipe_config;
-	int plane_bpp;
+	int plane_bpp, ret = -EINVAL;
+	bool retry = true;
 
 	pipe_config = kzalloc(sizeof(*pipe_config), GFP_KERNEL);
 	if (!pipe_config)
···
 	if (plane_bpp < 0)
 		goto fail;
 
+encoder_retry:
 	/* Pass our mode to the connectors and the CRTC to give them a chance to
 	 * adjust it according to limitations or connector properties, and also
 	 * a chance to reject the mode entirely.
···
 		}
 	}
 
-	if (!(intel_crtc_compute_config(crtc, pipe_config))) {
+	ret = intel_crtc_compute_config(crtc, pipe_config);
+	if (ret < 0) {
 		DRM_DEBUG_KMS("CRTC fixup failed\n");
 		goto fail;
 	}
+
+	if (ret == RETRY) {
+		if (WARN(!retry, "loop in pipe configuration computation\n")) {
+			ret = -EINVAL;
+			goto fail;
+		}
+
+		DRM_DEBUG_KMS("CRTC bw constrained, retrying\n");
+		retry = false;
+		goto encoder_retry;
+	}
+
 	DRM_DEBUG_KMS("[CRTC:%d]\n", crtc->base.id);
 
 	pipe_config->dither = pipe_config->pipe_bpp != plane_bpp;
···
 	return pipe_config;
 fail:
 	kfree(pipe_config);
-	return ERR_PTR(-EINVAL);
+	return ERR_PTR(ret);
 }
 
 /* Computes which crtcs are affected and sets the relevant bits in the mask.
For ··· 7948 7755 */ 7949 7756 *modeset_pipes &= 1 << intel_crtc->pipe; 7950 7757 *prepare_pipes &= 1 << intel_crtc->pipe; 7758 + 7759 + DRM_DEBUG_KMS("set mode pipe masks: modeset: %x, prepare: %x, disable: %x\n", 7760 + *modeset_pipes, *prepare_pipes, *disable_pipes); 7951 7761 } 7952 7762 7953 7763 static bool intel_crtc_in_use(struct drm_crtc *crtc) ··· 8017 7821 list_for_each_entry((intel_crtc), \ 8018 7822 &(dev)->mode_config.crtc_list, \ 8019 7823 base.head) \ 8020 - if (mask & (1 <<(intel_crtc)->pipe)) \ 7824 + if (mask & (1 <<(intel_crtc)->pipe)) 8021 7825 8022 7826 static bool 8023 - intel_pipe_config_compare(struct intel_crtc_config *current_config, 7827 + intel_pipe_config_compare(struct drm_device *dev, 7828 + struct intel_crtc_config *current_config, 8024 7829 struct intel_crtc_config *pipe_config) 8025 7830 { 8026 - if (current_config->has_pch_encoder != pipe_config->has_pch_encoder) { 8027 - DRM_ERROR("mismatch in has_pch_encoder " 8028 - "(expected %i, found %i)\n", 8029 - current_config->has_pch_encoder, 8030 - pipe_config->has_pch_encoder); 8031 - return false; 7831 + #define PIPE_CONF_CHECK_I(name) \ 7832 + if (current_config->name != pipe_config->name) { \ 7833 + DRM_ERROR("mismatch in " #name " " \ 7834 + "(expected %i, found %i)\n", \ 7835 + current_config->name, \ 7836 + pipe_config->name); \ 7837 + return false; \ 8032 7838 } 7839 + 7840 + #define PIPE_CONF_CHECK_FLAGS(name, mask) \ 7841 + if ((current_config->name ^ pipe_config->name) & (mask)) { \ 7842 + DRM_ERROR("mismatch in " #name " " \ 7843 + "(expected %i, found %i)\n", \ 7844 + current_config->name & (mask), \ 7845 + pipe_config->name & (mask)); \ 7846 + return false; \ 7847 + } 7848 + 7849 + PIPE_CONF_CHECK_I(has_pch_encoder); 7850 + PIPE_CONF_CHECK_I(fdi_lanes); 7851 + PIPE_CONF_CHECK_I(fdi_m_n.gmch_m); 7852 + PIPE_CONF_CHECK_I(fdi_m_n.gmch_n); 7853 + PIPE_CONF_CHECK_I(fdi_m_n.link_m); 7854 + PIPE_CONF_CHECK_I(fdi_m_n.link_n); 7855 + PIPE_CONF_CHECK_I(fdi_m_n.tu); 7856 + 7857 + 
PIPE_CONF_CHECK_I(adjusted_mode.crtc_hdisplay); 7858 + PIPE_CONF_CHECK_I(adjusted_mode.crtc_htotal); 7859 + PIPE_CONF_CHECK_I(adjusted_mode.crtc_hblank_start); 7860 + PIPE_CONF_CHECK_I(adjusted_mode.crtc_hblank_end); 7861 + PIPE_CONF_CHECK_I(adjusted_mode.crtc_hsync_start); 7862 + PIPE_CONF_CHECK_I(adjusted_mode.crtc_hsync_end); 7863 + 7864 + PIPE_CONF_CHECK_I(adjusted_mode.crtc_vdisplay); 7865 + PIPE_CONF_CHECK_I(adjusted_mode.crtc_vtotal); 7866 + PIPE_CONF_CHECK_I(adjusted_mode.crtc_vblank_start); 7867 + PIPE_CONF_CHECK_I(adjusted_mode.crtc_vblank_end); 7868 + PIPE_CONF_CHECK_I(adjusted_mode.crtc_vsync_start); 7869 + PIPE_CONF_CHECK_I(adjusted_mode.crtc_vsync_end); 7870 + 7871 + PIPE_CONF_CHECK_FLAGS(adjusted_mode.flags, 7872 + DRM_MODE_FLAG_INTERLACE); 7873 + 7874 + PIPE_CONF_CHECK_I(requested_mode.hdisplay); 7875 + PIPE_CONF_CHECK_I(requested_mode.vdisplay); 7876 + 7877 + PIPE_CONF_CHECK_I(gmch_pfit.control); 7878 + /* pfit ratios are autocomputed by the hw on gen4+ */ 7879 + if (INTEL_INFO(dev)->gen < 4) 7880 + PIPE_CONF_CHECK_I(gmch_pfit.pgm_ratios); 7881 + PIPE_CONF_CHECK_I(gmch_pfit.lvds_border_bits); 7882 + PIPE_CONF_CHECK_I(pch_pfit.pos); 7883 + PIPE_CONF_CHECK_I(pch_pfit.size); 7884 + 7885 + #undef PIPE_CONF_CHECK_I 7886 + #undef PIPE_CONF_CHECK_FLAGS 8033 7887 8034 7888 return true; 8035 7889 } ··· 8181 7935 "(expected %i, found %i)\n", enabled, crtc->base.enabled); 8182 7936 8183 7937 memset(&pipe_config, 0, sizeof(pipe_config)); 7938 + pipe_config.cpu_transcoder = crtc->config.cpu_transcoder; 8184 7939 active = dev_priv->display.get_pipe_config(crtc, 8185 7940 &pipe_config); 8186 7941 WARN(crtc->active != active, ··· 8189 7942 "(expected %i, found %i)\n", crtc->active, active); 8190 7943 8191 7944 WARN(active && 8192 - !intel_pipe_config_compare(&crtc->config, &pipe_config), 7945 + !intel_pipe_config_compare(dev, &crtc->config, &pipe_config), 8193 7946 "pipe state doesn't match!\n"); 8194 7947 } 8195 7948 } ··· 8231 7984 goto out; 8232 7985 } 8233 
7986 } 8234 - 8235 - DRM_DEBUG_KMS("set mode pipe masks: modeset: %x, prepare: %x, disable: %x\n", 8236 - modeset_pipes, prepare_pipes, disable_pipes); 8237 7987 8238 7988 for_each_intel_crtc_masked(dev, disable_pipes, intel_crtc) 8239 7989 intel_crtc_disable(&intel_crtc->base); ··· 8837 8593 intel_hdmi_init(dev, GEN4_HDMIB, PORT_B); 8838 8594 } 8839 8595 8840 - if (!found && SUPPORTS_INTEGRATED_DP(dev)) { 8841 - DRM_DEBUG_KMS("probing DP_B\n"); 8596 + if (!found && SUPPORTS_INTEGRATED_DP(dev)) 8842 8597 intel_dp_init(dev, DP_B, PORT_B); 8843 - } 8844 8598 } 8845 8599 8846 8600 /* Before G4X SDVOC doesn't have its own detect register */ ··· 8854 8612 DRM_DEBUG_KMS("probing HDMI on SDVOC\n"); 8855 8613 intel_hdmi_init(dev, GEN4_HDMIC, PORT_C); 8856 8614 } 8857 - if (SUPPORTS_INTEGRATED_DP(dev)) { 8858 - DRM_DEBUG_KMS("probing DP_C\n"); 8615 + if (SUPPORTS_INTEGRATED_DP(dev)) 8859 8616 intel_dp_init(dev, DP_C, PORT_C); 8860 - } 8861 8617 } 8862 8618 8863 8619 if (SUPPORTS_INTEGRATED_DP(dev) && 8864 - (I915_READ(DP_D) & DP_DETECTED)) { 8865 - DRM_DEBUG_KMS("probing DP_D\n"); 8620 + (I915_READ(DP_D) & DP_DETECTED)) 8866 8621 intel_dp_init(dev, DP_D, PORT_D); 8867 - } 8868 8622 } else if (IS_GEN2(dev)) 8869 8623 intel_dvo_init(dev); 8870 8624 ··· 9029 8791 dev_priv->display.crtc_disable = ironlake_crtc_disable; 9030 8792 dev_priv->display.off = ironlake_crtc_off; 9031 8793 dev_priv->display.update_plane = ironlake_update_plane; 8794 + } else if (IS_VALLEYVIEW(dev)) { 8795 + dev_priv->display.get_pipe_config = i9xx_get_pipe_config; 8796 + dev_priv->display.crtc_mode_set = i9xx_crtc_mode_set; 8797 + dev_priv->display.crtc_enable = valleyview_crtc_enable; 8798 + dev_priv->display.crtc_disable = i9xx_crtc_disable; 8799 + dev_priv->display.off = i9xx_crtc_off; 8800 + dev_priv->display.update_plane = i9xx_update_plane; 9032 8801 } else { 9033 8802 dev_priv->display.get_pipe_config = i9xx_get_pipe_config; 9034 8803 dev_priv->display.crtc_mode_set = i9xx_crtc_mode_set; ··· 9277 
9032 mutex_unlock(&dev->struct_mutex); 9278 9033 } 9279 9034 9035 + void intel_modeset_suspend_hw(struct drm_device *dev) 9036 + { 9037 + intel_suspend_hw(dev); 9038 + } 9039 + 9280 9040 void intel_modeset_init(struct drm_device *dev) 9281 9041 { 9282 9042 struct drm_i915_private *dev_priv = dev->dev_private; ··· 9327 9077 for (j = 0; j < dev_priv->num_plane; j++) { 9328 9078 ret = intel_plane_init(dev, i, j); 9329 9079 if (ret) 9330 - DRM_DEBUG_KMS("pipe %d plane %d init failed: %d\n", 9331 - i, j, ret); 9080 + DRM_DEBUG_KMS("pipe %c sprite %c init failed: %d\n", 9081 + pipe_name(i), sprite_name(i, j), ret); 9332 9082 } 9333 9083 } 9334 9084 ··· 9685 9435 struct drm_crtc *crtc; 9686 9436 struct intel_crtc *intel_crtc; 9687 9437 9438 + /* 9439 + * Interrupts and polling as the first thing to avoid creating havoc. 9440 + * Too much stuff here (turning of rps, connectors, ...) would 9441 + * experience fancy races otherwise. 9442 + */ 9443 + drm_irq_uninstall(dev); 9444 + cancel_work_sync(&dev_priv->hotplug_work); 9445 + /* 9446 + * Due to the hpd irq storm handling the hotplug work can re-arm the 9447 + * poll handlers. Hence disable polling after hpd handling is shut down. 9448 + */ 9688 9449 drm_kms_helper_poll_fini(dev); 9450 + 9689 9451 mutex_lock(&dev->struct_mutex); 9690 9452 9691 9453 intel_unregister_dsm_handler(); 9692 - 9693 9454 9694 9455 list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 9695 9456 /* Skip inactive CRTCs */ ··· 9717 9456 9718 9457 ironlake_teardown_rc6(dev); 9719 9458 9720 - if (IS_VALLEYVIEW(dev)) 9721 - vlv_init_dpio(dev); 9722 - 9723 9459 mutex_unlock(&dev->struct_mutex); 9724 - 9725 - /* Disable the irq before mode object teardown, for the irq might 9726 - * enqueue unpin/hotplug work. 
*/ 9727 - drm_irq_uninstall(dev); 9728 - cancel_work_sync(&dev_priv->hotplug_work); 9729 - cancel_work_sync(&dev_priv->rps.work); 9730 9460 9731 9461 /* flush any delayed tasks or pending work */ 9732 9462 flush_scheduled_work(); ··· 9767 9515 #include <linux/seq_file.h> 9768 9516 9769 9517 struct intel_display_error_state { 9518 + 9519 + u32 power_well_driver; 9520 + 9770 9521 struct intel_cursor_error_state { 9771 9522 u32 control; 9772 9523 u32 position; ··· 9778 9523 } cursor[I915_MAX_PIPES]; 9779 9524 9780 9525 struct intel_pipe_error_state { 9526 + enum transcoder cpu_transcoder; 9781 9527 u32 conf; 9782 9528 u32 source; 9783 9529 ··· 9813 9557 if (error == NULL) 9814 9558 return NULL; 9815 9559 9560 + if (HAS_POWER_WELL(dev)) 9561 + error->power_well_driver = I915_READ(HSW_PWR_WELL_DRIVER); 9562 + 9816 9563 for_each_pipe(i) { 9817 9564 cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv, i); 9565 + error->pipe[i].cpu_transcoder = cpu_transcoder; 9818 9566 9819 9567 if (INTEL_INFO(dev)->gen <= 6 || IS_VALLEYVIEW(dev)) { 9820 9568 error->cursor[i].control = I915_READ(CURCNTR(i)); ··· 9853 9593 error->pipe[i].vsync = I915_READ(VSYNC(cpu_transcoder)); 9854 9594 } 9855 9595 9596 + /* In the code above we read the registers without checking if the power 9597 + * well was on, so here we have to clear the FPGA_DBG_RM_NOCLAIM bit to 9598 + * prevent the next I915_WRITE from detecting it and printing an error 9599 + * message. 
*/ 9600 + if (HAS_POWER_WELL(dev)) 9601 + I915_WRITE_NOTRACE(FPGA_DBG, FPGA_DBG_RM_NOCLAIM); 9602 + 9856 9603 return error; 9857 9604 } 9858 9605 ··· 9871 9604 int i; 9872 9605 9873 9606 seq_printf(m, "Num Pipes: %d\n", INTEL_INFO(dev)->num_pipes); 9607 + if (HAS_POWER_WELL(dev)) 9608 + seq_printf(m, "PWR_WELL_CTL2: %08x\n", 9609 + error->power_well_driver); 9874 9610 for_each_pipe(i) { 9875 9611 seq_printf(m, "Pipe [%d]:\n", i); 9612 + seq_printf(m, " CPU transcoder: %c\n", 9613 + transcoder_name(error->pipe[i].cpu_transcoder)); 9876 9614 seq_printf(m, " CONF: %08x\n", error->pipe[i].conf); 9877 9615 seq_printf(m, " SRC: %08x\n", error->pipe[i].source); 9878 9616 seq_printf(m, " HTOTAL: %08x\n", error->pipe[i].htotal);
drivers/gpu/drm/i915/intel_dp.c  +276 -76
··· 52 52 return intel_dig_port->base.type == INTEL_OUTPUT_EDP; 53 53 } 54 54 55 - /** 56 - * is_pch_edp - is the port on the PCH and attached to an eDP panel? 57 - * @intel_dp: DP struct 58 - * 59 - * Returns true if the given DP struct corresponds to a PCH DP port attached 60 - * to an eDP panel, false otherwise. Helpful for determining whether we 61 - * may need FDI resources for a given DP output or not. 62 - */ 63 - static bool is_pch_edp(struct intel_dp *intel_dp) 55 + static struct drm_device *intel_dp_to_dev(struct intel_dp *intel_dp) 64 56 { 65 - return intel_dp->is_pch_edp; 57 + struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 58 + 59 + return intel_dig_port->base.base.dev; 66 60 } 67 61 68 62 /** ··· 67 73 */ 68 74 static bool is_cpu_edp(struct intel_dp *intel_dp) 69 75 { 70 - return is_edp(intel_dp) && !is_pch_edp(intel_dp); 71 - } 72 - 73 - static struct drm_device *intel_dp_to_dev(struct intel_dp *intel_dp) 74 - { 76 + struct drm_device *dev = intel_dp_to_dev(intel_dp); 75 77 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 78 + enum port port = intel_dig_port->port; 76 79 77 - return intel_dig_port->base.base.dev; 80 + return is_edp(intel_dp) && 81 + (port == PORT_A || (port == PORT_C && IS_VALLEYVIEW(dev))); 78 82 } 79 83 80 84 static struct intel_dp *intel_attached_dp(struct drm_connector *connector) 81 85 { 82 86 return enc_to_intel_dp(&intel_attached_encoder(connector)->base); 83 - } 84 - 85 - /** 86 - * intel_encoder_is_pch_edp - is the given encoder a PCH attached eDP? 87 - * @encoder: DRM encoder 88 - * 89 - * Return true if @encoder corresponds to a PCH attached eDP panel. Needed 90 - * by intel_display.c. 
91 - */ 92 - bool intel_encoder_is_pch_edp(struct drm_encoder *encoder) 93 - { 94 - struct intel_dp *intel_dp; 95 - 96 - if (!encoder) 97 - return false; 98 - 99 - intel_dp = enc_to_intel_dp(encoder); 100 - 101 - return is_pch_edp(intel_dp); 102 87 } 103 88 104 89 static void intel_dp_link_down(struct intel_dp *intel_dp); ··· 633 660 return ret; 634 661 } 635 662 663 + static void 664 + intel_dp_set_clock(struct intel_encoder *encoder, 665 + struct intel_crtc_config *pipe_config, int link_bw) 666 + { 667 + struct drm_device *dev = encoder->base.dev; 668 + 669 + if (IS_G4X(dev)) { 670 + if (link_bw == DP_LINK_BW_1_62) { 671 + pipe_config->dpll.p1 = 2; 672 + pipe_config->dpll.p2 = 10; 673 + pipe_config->dpll.n = 2; 674 + pipe_config->dpll.m1 = 23; 675 + pipe_config->dpll.m2 = 8; 676 + } else { 677 + pipe_config->dpll.p1 = 1; 678 + pipe_config->dpll.p2 = 10; 679 + pipe_config->dpll.n = 1; 680 + pipe_config->dpll.m1 = 14; 681 + pipe_config->dpll.m2 = 2; 682 + } 683 + pipe_config->clock_set = true; 684 + } else if (IS_HASWELL(dev)) { 685 + /* Haswell has special-purpose DP DDI clocks. */ 686 + } else if (HAS_PCH_SPLIT(dev)) { 687 + if (link_bw == DP_LINK_BW_1_62) { 688 + pipe_config->dpll.n = 1; 689 + pipe_config->dpll.p1 = 2; 690 + pipe_config->dpll.p2 = 10; 691 + pipe_config->dpll.m1 = 12; 692 + pipe_config->dpll.m2 = 9; 693 + } else { 694 + pipe_config->dpll.n = 2; 695 + pipe_config->dpll.p1 = 1; 696 + pipe_config->dpll.p2 = 10; 697 + pipe_config->dpll.m1 = 14; 698 + pipe_config->dpll.m2 = 8; 699 + } 700 + pipe_config->clock_set = true; 701 + } else if (IS_VALLEYVIEW(dev)) { 702 + /* FIXME: Need to figure out optimized DP clocks for vlv. 
*/ 703 + } 704 + } 705 + 636 706 bool 637 707 intel_dp_compute_config(struct intel_encoder *encoder, 638 708 struct intel_crtc_config *pipe_config) ··· 683 667 struct drm_device *dev = encoder->base.dev; 684 668 struct drm_i915_private *dev_priv = dev->dev_private; 685 669 struct drm_display_mode *adjusted_mode = &pipe_config->adjusted_mode; 686 - struct drm_display_mode *mode = &pipe_config->requested_mode; 687 670 struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base); 671 + struct intel_crtc *intel_crtc = encoder->new_crtc; 688 672 struct intel_connector *intel_connector = intel_dp->attached_connector; 689 673 int lane_count, clock; 690 674 int max_lane_count = drm_dp_max_lane_count(intel_dp->dpcd); ··· 701 685 if (is_edp(intel_dp) && intel_connector->panel.fixed_mode) { 702 686 intel_fixed_panel_mode(intel_connector->panel.fixed_mode, 703 687 adjusted_mode); 704 - intel_pch_panel_fitting(dev, 705 - intel_connector->panel.fitting_mode, 706 - mode, adjusted_mode); 688 + if (!HAS_PCH_SPLIT(dev)) 689 + intel_gmch_panel_fitting(intel_crtc, pipe_config, 690 + intel_connector->panel.fitting_mode); 691 + else 692 + intel_pch_panel_fitting(intel_crtc, pipe_config, 693 + intel_connector->panel.fitting_mode); 707 694 } 708 695 /* We need to take the panel's fixed mode into account. */ 709 696 target_clock = adjusted_mode->clock; ··· 721 702 /* Walk through all bpp values. Luckily they're all nicely spaced with 2 722 703 * bpc in between. 
*/ 723 704 bpp = min_t(int, 8*3, pipe_config->pipe_bpp); 724 - if (is_edp(intel_dp) && dev_priv->edp.bpp) 725 - bpp = min_t(int, bpp, dev_priv->edp.bpp); 705 + if (is_edp(intel_dp) && dev_priv->vbt.edp_bpp) 706 + bpp = min_t(int, bpp, dev_priv->vbt.edp_bpp); 726 707 727 708 for (; bpp >= 6*3; bpp -= 2*3) { 728 709 mode_rate = intel_dp_link_required(target_clock, bpp); ··· 773 754 intel_link_compute_m_n(bpp, lane_count, 774 755 target_clock, adjusted_mode->clock, 775 756 &pipe_config->dp_m_n); 757 + 758 + intel_dp_set_clock(encoder, pipe_config, intel_dp->link_bw); 776 759 777 760 return true; 778 761 } ··· 854 833 855 834 /* Handle DP bits in common between all three register formats */ 856 835 intel_dp->DP |= DP_VOLTAGE_0_4 | DP_PRE_EMPHASIS_0; 836 + intel_dp->DP |= DP_PORT_WIDTH(intel_dp->lane_count); 857 837 858 - switch (intel_dp->lane_count) { 859 - case 1: 860 - intel_dp->DP |= DP_PORT_WIDTH_1; 861 - break; 862 - case 2: 863 - intel_dp->DP |= DP_PORT_WIDTH_2; 864 - break; 865 - case 4: 866 - intel_dp->DP |= DP_PORT_WIDTH_4; 867 - break; 868 - } 869 838 if (intel_dp->has_audio) { 870 839 DRM_DEBUG_DRIVER("Enabling DP audio on pipe %c\n", 871 840 pipe_name(intel_crtc->pipe)); ··· 1392 1381 intel_dp_complete_link_train(intel_dp); 1393 1382 intel_dp_stop_link_train(intel_dp); 1394 1383 ironlake_edp_backlight_on(intel_dp); 1384 + 1385 + if (IS_VALLEYVIEW(dev)) { 1386 + struct intel_digital_port *dport = 1387 + enc_to_dig_port(&encoder->base); 1388 + int channel = vlv_dport_to_channel(dport); 1389 + 1390 + vlv_wait_port_ready(dev_priv, channel); 1391 + } 1395 1392 } 1396 1393 1397 1394 static void intel_pre_enable_dp(struct intel_encoder *encoder) 1398 1395 { 1399 1396 struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base); 1400 1397 struct drm_device *dev = encoder->base.dev; 1398 + struct drm_i915_private *dev_priv = dev->dev_private; 1401 1399 1402 1400 if (is_cpu_edp(intel_dp) && !IS_VALLEYVIEW(dev)) 1403 1401 ironlake_edp_pll_on(intel_dp); 1402 + 1403 + 
if (IS_VALLEYVIEW(dev)) { 1404 + struct intel_digital_port *dport = enc_to_dig_port(&encoder->base); 1405 + struct intel_crtc *intel_crtc = 1406 + to_intel_crtc(encoder->base.crtc); 1407 + int port = vlv_dport_to_channel(dport); 1408 + int pipe = intel_crtc->pipe; 1409 + u32 val; 1410 + 1411 + WARN_ON(!mutex_is_locked(&dev_priv->dpio_lock)); 1412 + 1413 + val = intel_dpio_read(dev_priv, DPIO_DATA_LANE_A(port)); 1414 + val = 0; 1415 + if (pipe) 1416 + val |= (1<<21); 1417 + else 1418 + val &= ~(1<<21); 1419 + val |= 0x001000c4; 1420 + intel_dpio_write(dev_priv, DPIO_DATA_CHANNEL(port), val); 1421 + 1422 + intel_dpio_write(dev_priv, DPIO_PCS_CLOCKBUF0(port), 1423 + 0x00760018); 1424 + intel_dpio_write(dev_priv, DPIO_PCS_CLOCKBUF8(port), 1425 + 0x00400888); 1426 + } 1427 + } 1428 + 1429 + static void intel_dp_pre_pll_enable(struct intel_encoder *encoder) 1430 + { 1431 + struct intel_digital_port *dport = enc_to_dig_port(&encoder->base); 1432 + struct drm_device *dev = encoder->base.dev; 1433 + struct drm_i915_private *dev_priv = dev->dev_private; 1434 + int port = vlv_dport_to_channel(dport); 1435 + 1436 + if (!IS_VALLEYVIEW(dev)) 1437 + return; 1438 + 1439 + WARN_ON(!mutex_is_locked(&dev_priv->dpio_lock)); 1440 + 1441 + /* Program Tx lane resets to default */ 1442 + intel_dpio_write(dev_priv, DPIO_PCS_TX(port), 1443 + DPIO_PCS_TX_LANE2_RESET | 1444 + DPIO_PCS_TX_LANE1_RESET); 1445 + intel_dpio_write(dev_priv, DPIO_PCS_CLK(port), 1446 + DPIO_PCS_CLK_CRI_RXEB_EIOS_EN | 1447 + DPIO_PCS_CLK_CRI_RXDIGFILTSG_EN | 1448 + (1<<DPIO_PCS_CLK_DATAWIDTH_SHIFT) | 1449 + DPIO_PCS_CLK_SOFT_RESET); 1450 + 1451 + /* Fix up inter-pair skew failure */ 1452 + intel_dpio_write(dev_priv, DPIO_PCS_STAGGER1(port), 0x00750f00); 1453 + intel_dpio_write(dev_priv, DPIO_TX_CTL(port), 0x00001500); 1454 + intel_dpio_write(dev_priv, DPIO_TX_LANE(port), 0x40400000); 1404 1455 } 1405 1456 1406 1457 /* ··· 1525 1452 { 1526 1453 struct drm_device *dev = intel_dp_to_dev(intel_dp); 1527 1454 1528 - if 
(IS_GEN7(dev) && is_cpu_edp(intel_dp)) 1455 + if (IS_VALLEYVIEW(dev)) 1456 + return DP_TRAIN_VOLTAGE_SWING_1200; 1457 + else if (IS_GEN7(dev) && is_cpu_edp(intel_dp)) 1529 1458 return DP_TRAIN_VOLTAGE_SWING_800; 1530 1459 else if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) 1531 1460 return DP_TRAIN_VOLTAGE_SWING_1200; ··· 1552 1477 default: 1553 1478 return DP_TRAIN_PRE_EMPHASIS_0; 1554 1479 } 1555 - } else if (IS_GEN7(dev) && is_cpu_edp(intel_dp) && !IS_VALLEYVIEW(dev)) { 1480 + } else if (IS_VALLEYVIEW(dev)) { 1481 + switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) { 1482 + case DP_TRAIN_VOLTAGE_SWING_400: 1483 + return DP_TRAIN_PRE_EMPHASIS_9_5; 1484 + case DP_TRAIN_VOLTAGE_SWING_600: 1485 + return DP_TRAIN_PRE_EMPHASIS_6; 1486 + case DP_TRAIN_VOLTAGE_SWING_800: 1487 + return DP_TRAIN_PRE_EMPHASIS_3_5; 1488 + case DP_TRAIN_VOLTAGE_SWING_1200: 1489 + default: 1490 + return DP_TRAIN_PRE_EMPHASIS_0; 1491 + } 1492 + } else if (IS_GEN7(dev) && is_cpu_edp(intel_dp)) { 1556 1493 switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) { 1557 1494 case DP_TRAIN_VOLTAGE_SWING_400: 1558 1495 return DP_TRAIN_PRE_EMPHASIS_6; ··· 1587 1500 return DP_TRAIN_PRE_EMPHASIS_0; 1588 1501 } 1589 1502 } 1503 + } 1504 + 1505 + static uint32_t intel_vlv_signal_levels(struct intel_dp *intel_dp) 1506 + { 1507 + struct drm_device *dev = intel_dp_to_dev(intel_dp); 1508 + struct drm_i915_private *dev_priv = dev->dev_private; 1509 + struct intel_digital_port *dport = dp_to_dig_port(intel_dp); 1510 + unsigned long demph_reg_value, preemph_reg_value, 1511 + uniqtranscale_reg_value; 1512 + uint8_t train_set = intel_dp->train_set[0]; 1513 + int port = vlv_dport_to_channel(dport); 1514 + 1515 + WARN_ON(!mutex_is_locked(&dev_priv->dpio_lock)); 1516 + 1517 + switch (train_set & DP_TRAIN_PRE_EMPHASIS_MASK) { 1518 + case DP_TRAIN_PRE_EMPHASIS_0: 1519 + preemph_reg_value = 0x0004000; 1520 + switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) { 1521 + case DP_TRAIN_VOLTAGE_SWING_400: 1522 + 
demph_reg_value = 0x2B405555; 1523 + uniqtranscale_reg_value = 0x552AB83A; 1524 + break; 1525 + case DP_TRAIN_VOLTAGE_SWING_600: 1526 + demph_reg_value = 0x2B404040; 1527 + uniqtranscale_reg_value = 0x5548B83A; 1528 + break; 1529 + case DP_TRAIN_VOLTAGE_SWING_800: 1530 + demph_reg_value = 0x2B245555; 1531 + uniqtranscale_reg_value = 0x5560B83A; 1532 + break; 1533 + case DP_TRAIN_VOLTAGE_SWING_1200: 1534 + demph_reg_value = 0x2B405555; 1535 + uniqtranscale_reg_value = 0x5598DA3A; 1536 + break; 1537 + default: 1538 + return 0; 1539 + } 1540 + break; 1541 + case DP_TRAIN_PRE_EMPHASIS_3_5: 1542 + preemph_reg_value = 0x0002000; 1543 + switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) { 1544 + case DP_TRAIN_VOLTAGE_SWING_400: 1545 + demph_reg_value = 0x2B404040; 1546 + uniqtranscale_reg_value = 0x5552B83A; 1547 + break; 1548 + case DP_TRAIN_VOLTAGE_SWING_600: 1549 + demph_reg_value = 0x2B404848; 1550 + uniqtranscale_reg_value = 0x5580B83A; 1551 + break; 1552 + case DP_TRAIN_VOLTAGE_SWING_800: 1553 + demph_reg_value = 0x2B404040; 1554 + uniqtranscale_reg_value = 0x55ADDA3A; 1555 + break; 1556 + default: 1557 + return 0; 1558 + } 1559 + break; 1560 + case DP_TRAIN_PRE_EMPHASIS_6: 1561 + preemph_reg_value = 0x0000000; 1562 + switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) { 1563 + case DP_TRAIN_VOLTAGE_SWING_400: 1564 + demph_reg_value = 0x2B305555; 1565 + uniqtranscale_reg_value = 0x5570B83A; 1566 + break; 1567 + case DP_TRAIN_VOLTAGE_SWING_600: 1568 + demph_reg_value = 0x2B2B4040; 1569 + uniqtranscale_reg_value = 0x55ADDA3A; 1570 + break; 1571 + default: 1572 + return 0; 1573 + } 1574 + break; 1575 + case DP_TRAIN_PRE_EMPHASIS_9_5: 1576 + preemph_reg_value = 0x0006000; 1577 + switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) { 1578 + case DP_TRAIN_VOLTAGE_SWING_400: 1579 + demph_reg_value = 0x1B405555; 1580 + uniqtranscale_reg_value = 0x55ADDA3A; 1581 + break; 1582 + default: 1583 + return 0; 1584 + } 1585 + break; 1586 + default: 1587 + return 0; 1588 + } 1589 + 1590 + 
intel_dpio_write(dev_priv, DPIO_TX_OCALINIT(port), 0x00000000); 1591 + intel_dpio_write(dev_priv, DPIO_TX_SWING_CTL4(port), demph_reg_value); 1592 + intel_dpio_write(dev_priv, DPIO_TX_SWING_CTL2(port), 1593 + uniqtranscale_reg_value); 1594 + intel_dpio_write(dev_priv, DPIO_TX_SWING_CTL3(port), 0x0C782040); 1595 + intel_dpio_write(dev_priv, DPIO_PCS_STAGGER0(port), 0x00030000); 1596 + intel_dpio_write(dev_priv, DPIO_PCS_CTL_OVER1(port), preemph_reg_value); 1597 + intel_dpio_write(dev_priv, DPIO_TX_OCALINIT(port), 0x80000000); 1598 + 1599 + return 0; 1590 1600 } 1591 1601 1592 1602 static void ··· 1860 1676 if (HAS_DDI(dev)) { 1861 1677 signal_levels = intel_hsw_signal_levels(train_set); 1862 1678 mask = DDI_BUF_EMP_MASK; 1863 - } else if (IS_GEN7(dev) && is_cpu_edp(intel_dp) && !IS_VALLEYVIEW(dev)) { 1679 + } else if (IS_VALLEYVIEW(dev)) { 1680 + signal_levels = intel_vlv_signal_levels(intel_dp); 1681 + mask = 0; 1682 + } else if (IS_GEN7(dev) && is_cpu_edp(intel_dp)) { 1864 1683 signal_levels = intel_gen7_edp_signal_levels(train_set); 1865 1684 mask = EDP_LINK_TRAIN_VOL_EMP_MASK_IVB; 1866 1685 } else if (IS_GEN6(dev) && is_cpu_edp(intel_dp)) { ··· 2775 2588 struct child_device_config *p_child; 2776 2589 int i; 2777 2590 2778 - if (!dev_priv->child_dev_num) 2591 + if (!dev_priv->vbt.child_dev_num) 2779 2592 return false; 2780 2593 2781 - for (i = 0; i < dev_priv->child_dev_num; i++) { 2782 - p_child = dev_priv->child_dev + i; 2594 + for (i = 0; i < dev_priv->vbt.child_dev_num; i++) { 2595 + p_child = dev_priv->vbt.child_dev + i; 2783 2596 2784 2597 if (p_child->dvo_port == PORT_IDPD && 2785 2598 p_child->device_type == DEVICE_TYPE_eDP) ··· 2857 2670 DRM_DEBUG_KMS("cur t1_t3 %d t8 %d t9 %d t10 %d t11_t12 %d\n", 2858 2671 cur.t1_t3, cur.t8, cur.t9, cur.t10, cur.t11_t12); 2859 2672 2860 - vbt = dev_priv->edp.pps; 2673 + vbt = dev_priv->vbt.edp_pps; 2861 2674 2862 2675 /* Upper limits from eDP 1.3 spec. 
Note that we use the clunky units of 2863 2676 * our hw here, which are all in 100usec. */ ··· 2979 2792 intel_dp->DP = I915_READ(intel_dp->output_reg); 2980 2793 intel_dp->attached_connector = intel_connector; 2981 2794 2982 - if (HAS_PCH_SPLIT(dev) && port == PORT_D) 2983 - if (intel_dpd_is_edp(dev)) 2984 - intel_dp->is_pch_edp = true; 2985 - 2795 + type = DRM_MODE_CONNECTOR_DisplayPort; 2986 2796 /* 2987 2797 * FIXME : We need to initialize built-in panels before external panels. 2988 2798 * For X0, DP_C is fixed as eDP. Revisit this as part of VLV eDP cleanup 2989 2799 */ 2990 - if (IS_VALLEYVIEW(dev) && port == PORT_C) { 2800 + switch (port) { 2801 + case PORT_A: 2991 2802 type = DRM_MODE_CONNECTOR_eDP; 2992 - intel_encoder->type = INTEL_OUTPUT_EDP; 2993 - } else if (port == PORT_A || is_pch_edp(intel_dp)) { 2994 - type = DRM_MODE_CONNECTOR_eDP; 2995 - intel_encoder->type = INTEL_OUTPUT_EDP; 2996 - } else { 2997 - /* The intel_encoder->type value may be INTEL_OUTPUT_UNKNOWN for 2998 - * DDI or INTEL_OUTPUT_DISPLAYPORT for the older gens, so don't 2999 - * rewrite it. 3000 - */ 3001 - type = DRM_MODE_CONNECTOR_DisplayPort; 2803 + break; 2804 + case PORT_C: 2805 + if (IS_VALLEYVIEW(dev)) 2806 + type = DRM_MODE_CONNECTOR_eDP; 2807 + break; 2808 + case PORT_D: 2809 + if (HAS_PCH_SPLIT(dev) && intel_dpd_is_edp(dev)) 2810 + type = DRM_MODE_CONNECTOR_eDP; 2811 + break; 2812 + default: /* silence GCC warning */ 2813 + break; 3002 2814 } 2815 + 2816 + /* 2817 + * For eDP we always set the encoder type to INTEL_OUTPUT_EDP, but 2818 + * for DP the encoder type can be set by the caller to 2819 + * INTEL_OUTPUT_UNKNOWN for DDI, so don't rewrite it. 2820 + */ 2821 + if (type == DRM_MODE_CONNECTOR_eDP) 2822 + intel_encoder->type = INTEL_OUTPUT_EDP; 2823 + 2824 + DRM_DEBUG_KMS("Adding %s connector on port %c\n", 2825 + type == DRM_MODE_CONNECTOR_eDP ? 
"eDP" : "DP", 2826 + port_name(port)); 3003 2827 3004 2828 drm_connector_init(dev, connector, &intel_dp_connector_funcs, type); 3005 2829 drm_connector_helper_add(connector, &intel_dp_connector_helper_funcs); ··· 3127 2929 } 3128 2930 3129 2931 /* fallback to VBT if available for eDP */ 3130 - if (!fixed_mode && dev_priv->lfp_lvds_vbt_mode) { 3131 - fixed_mode = drm_mode_duplicate(dev, dev_priv->lfp_lvds_vbt_mode); 2932 + if (!fixed_mode && dev_priv->vbt.lfp_lvds_vbt_mode) { 2933 + fixed_mode = drm_mode_duplicate(dev, dev_priv->vbt.lfp_lvds_vbt_mode); 3132 2934 if (fixed_mode) 3133 2935 fixed_mode->type |= DRM_MODE_TYPE_PREFERRED; 3134 2936 } ··· 3184 2986 intel_encoder->disable = intel_disable_dp; 3185 2987 intel_encoder->post_disable = intel_post_disable_dp; 3186 2988 intel_encoder->get_hw_state = intel_dp_get_hw_state; 2989 + if (IS_VALLEYVIEW(dev)) 2990 + intel_encoder->pre_pll_enable = intel_dp_pre_pll_enable; 3187 2991 3188 2992 intel_dig_port->port = port; 3189 2993 intel_dig_port->dp.output_reg = output_reg;
drivers/gpu/drm/i915/intel_drv.h  +89 -28
···
	struct intel_crtc *new_crtc;

	int type;
-	bool needs_tv_clock;
	/*
	 * Intel hw has only one MUX where encoders could be clone, hence a
	 * simple flag is enough to compute the possible_clones mask.
···
	u8 polled;
};

+typedef struct dpll {
+	/* given values */
+	int n;
+	int m1, m2;
+	int p1, p2;
+	/* derived values */
+	int dot;
+	int vco;
+	int m;
+	int p;
+} intel_clock_t;
+
struct intel_crtc_config {
	struct drm_display_mode requested_mode;
	struct drm_display_mode adjusted_mode;
···
	/* DP has a bunch of special case unfortunately, so mark the pipe
	 * accordingly. */
	bool has_dp_encoder;
+
+	/*
+	 * Enable dithering, used when the selected pipe bpp doesn't match the
+	 * plane bpp.
+	 */
	bool dither;

	/* Controls for the clock computation, to override various stages. */
	bool clock_set;

+	/* SDVO TV has a bunch of special case. To make multifunction encoders
+	 * work correctly, we need to track this at runtime.*/
+	bool sdvo_tv_clock;
+
+	/*
+	 * crtc bandwidth limit, don't increase pipe bpp or clock if not really
+	 * required. This is set in the 2nd loop of calling encoder's
+	 * ->compute_config if the first pick doesn't work out.
+	 */
+	bool bw_constrained;
+
	/* Settings for the intel dpll used on pretty much everything but
	 * haswell. */
-	struct dpll {
-		unsigned n;
-		unsigned m1, m2;
-		unsigned p1, p2;
-	} dpll;
+	struct dpll dpll;

	int pipe_bpp;
	struct intel_link_m_n dp_m_n;
···
	int pixel_target_clock;
	/* Used by SDVO (and if we ever fix it, HDMI). */
	unsigned pixel_multiplier;
+
+	/* Panel fitter controls for gen2-gen4 + VLV */
+	struct {
+		u32 control;
+		u32 pgm_ratios;
+		u32 lvds_border_bits;
+	} gmch_pfit;
+
+	/* Panel fitter placement and size for Ironlake+ */
+	struct {
+		u32 pos;
+		u32 size;
+	} pch_pfit;
+
+	/* FDI configuration, only valid if has_pch_encoder is set. */
+	int fdi_lanes;
+	struct intel_link_m_n fdi_m_n;
};

struct intel_crtc {
···
	bool lowfreq_avail;
	struct intel_overlay *overlay;
	struct intel_unpin_work *unpin_work;
-	int fdi_lanes;

	atomic_t unpin_work_count;
···

	/* reset counter value when the last flip was submitted */
	unsigned int reset_counter;
+
+	/* Access to these should be protected by dev_priv->irq_lock. */
+	bool cpu_fifo_underrun_disabled;
+	bool pch_fifo_underrun_disabled;
};

struct intel_plane {
···
	uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
	struct i2c_adapter adapter;
	struct i2c_algo_dp_aux_data algo;
-	bool is_pch_edp;
	uint8_t train_set[4];
	int panel_power_up_delay;
	int panel_power_down_delay;
···
	struct intel_dp dp;
	struct intel_hdmi hdmi;
};
+
+static inline int
+vlv_dport_to_channel(struct intel_digital_port *dport)
+{
+	switch (dport->port) {
+	case PORT_B:
+		return 0;
+	case PORT_C:
+		return 1;
+	default:
+		BUG();
+	}
+}

static inline struct drm_crtc *
intel_get_crtc_for_pipe(struct drm_device *dev, int pipe)
···
extern void intel_attach_force_audio_property(struct drm_connector *connector);
extern void intel_attach_broadcast_rgb_property(struct drm_connector *connector);

+extern bool intel_pipe_has_type(struct drm_crtc *crtc, int type);
extern void intel_crt_init(struct drm_device *dev);
extern void intel_hdmi_init(struct drm_device *dev,
			    int hdmi_reg, enum port port);
···
extern void ironlake_edp_panel_off(struct intel_dp *intel_dp);
extern void ironlake_edp_panel_vdd_on(struct intel_dp *intel_dp);
extern void ironlake_edp_panel_vdd_off(struct intel_dp *intel_dp, bool sync);
-extern bool intel_encoder_is_pch_edp(struct drm_encoder *encoder);
extern int intel_plane_init(struct drm_device *dev, enum pipe pipe, int plane);
extern void intel_flush_display_plane(struct drm_i915_private *dev_priv,
				      enum plane plane);
···

extern void intel_fixed_panel_mode(struct drm_display_mode *fixed_mode,
				   struct drm_display_mode *adjusted_mode);
-extern void intel_pch_panel_fitting(struct drm_device *dev,
-				    int fitting_mode,
-				    const struct drm_display_mode *mode,
-				    struct drm_display_mode *adjusted_mode);
-extern u32 intel_panel_get_max_backlight(struct drm_device *dev);
-extern void intel_panel_set_backlight(struct drm_device *dev, u32 level);
+extern void intel_pch_panel_fitting(struct intel_crtc *crtc,
+				    struct intel_crtc_config *pipe_config,
+				    int fitting_mode);
+extern void intel_gmch_panel_fitting(struct intel_crtc *crtc,
+				     struct intel_crtc_config *pipe_config,
+				     int fitting_mode);
+extern void intel_panel_set_backlight(struct drm_device *dev,
+				      u32 level, u32 max);
extern int intel_panel_setup_backlight(struct drm_connector *connector);
extern void intel_panel_enable_backlight(struct drm_device *dev,
					 enum pipe pipe);
···
	return to_intel_connector(connector)->encoder;
}

-static inline struct intel_dp *enc_to_intel_dp(struct drm_encoder *encoder)
-{
-	struct intel_digital_port *intel_dig_port =
-		container_of(encoder, struct intel_digital_port, base.base);
-	return &intel_dig_port->dp;
-}
-
static inline struct intel_digital_port *
enc_to_dig_port(struct drm_encoder *encoder)
{
	return container_of(encoder, struct intel_digital_port, base.base);
+}
+
+static inline struct intel_dp *enc_to_intel_dp(struct drm_encoder *encoder)
+{
+	return &enc_to_dig_port(encoder)->dp;
}

static inline struct intel_digital_port *
···
extern void intel_wait_for_vblank(struct drm_device *dev, int pipe);
extern void intel_wait_for_pipe_off(struct drm_device *dev, int pipe);
extern int ironlake_get_lanes_required(int target_clock, int link_bw, int bpp);
+extern void vlv_wait_port_ready(struct drm_i915_private *dev_priv, int port);

struct intel_load_detect_pipe {
	struct drm_framebuffer *release_fb;
···
#define assert_pipe_disabled(d, p) assert_pipe(d, p, false)

extern void intel_init_clock_gating(struct drm_device *dev);
+extern void intel_suspend_hw(struct drm_device *dev);
extern void intel_write_eld(struct drm_encoder *encoder,
			    struct drm_display_mode *mode);
-extern void intel_cpt_verify_modeset(struct drm_device *dev, int pipe);
-extern void intel_cpu_transcoder_set_m_n(struct intel_crtc *crtc,
-					 struct intel_link_m_n *m_n);
-extern void intel_pch_transcoder_set_m_n(struct intel_crtc *crtc,
-					 struct intel_link_m_n *m_n);
extern void intel_prepare_ddi(struct drm_device *dev);
extern void hsw_fdi_link_train(struct drm_crtc *crtc);
extern void intel_ddi_init(struct drm_device *dev, enum port port);
···
			   struct drm_file *file_priv);

extern u32 intel_dpio_read(struct drm_i915_private *dev_priv, int reg);
+extern void intel_dpio_write(struct drm_i915_private *dev_priv, int reg,
+			     u32 val);

/* Power-related functions, located in intel_pm.c */
extern void intel_init_pm(struct drm_device *dev);
···
extern void intel_gpu_ips_init(struct drm_i915_private *dev_priv);
extern void intel_gpu_ips_teardown(void);

-extern bool intel_using_power_well(struct drm_device *dev);
+extern bool intel_display_power_enabled(struct drm_device *dev,
+					enum intel_display_power_domain domain);
extern void intel_init_power_well(struct drm_device *dev);
extern void intel_set_power_well(struct drm_device *dev, bool enable);
extern void intel_enable_gt_powersave(struct drm_device *dev);
···
extern void intel_ddi_fdi_disable(struct drm_crtc *crtc);

extern void intel_display_handle_reset(struct drm_device *dev);
+extern bool intel_set_cpu_fifo_underrun_reporting(struct drm_device *dev,
+						  enum pipe pipe,
+						  bool enable);
+extern bool intel_set_pch_fifo_underrun_reporting(struct drm_device *dev,
+						  enum transcoder pch_transcoder,
+						  bool enable);

#endif /* __INTEL_DRV_H__ */
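The new vlv_dport_to_channel() helper above maps a VLV digital port to its DPIO channel. A standalone sketch of that mapping (the enum values here are illustrative stand-ins for the kernel's enum port, and returning -1 replaces the kernel's BUG() so the sketch stays testable):

```c
#include <assert.h>

/* Illustrative stand-in for the kernel's enum port. */
enum port { PORT_A, PORT_B, PORT_C, PORT_D };

/* Map a digital port to its VLV DPIO channel: B -> 0, C -> 1.
 * The kernel BUG()s on any other port; this sketch returns -1. */
static int vlv_port_to_channel(enum port p)
{
	switch (p) {
	case PORT_B:
		return 0;
	case PORT_C:
		return 1;
	default:
		return -1;
	}
}
```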
+7
drivers/gpu/drm/i915/intel_dvo.c
···
		.dev_ops = &ch7xxx_ops,
	},
	{
+		.type = INTEL_DVO_CHIP_TMDS,
+		.name = "ch7xxx",
+		.dvo_reg = DVOC,
+		.slave_addr = 0x75, /* For some ch7010 */
+		.dev_ops = &ch7xxx_ops,
+	},
+	{
		.type = INTEL_DVO_CHIP_LVDS,
		.name = "ivch",
		.dvo_reg = DVOA,
+133 -6
drivers/gpu/drm/i915/intel_hdmi.c
···
		I915_WRITE(intel_hdmi->hdmi_reg, temp);
		POSTING_READ(intel_hdmi->hdmi_reg);
	}
+
+	if (IS_VALLEYVIEW(dev)) {
+		struct intel_digital_port *dport =
+			enc_to_dig_port(&encoder->base);
+		int channel = vlv_dport_to_channel(dport);
+
+		vlv_wait_port_ready(dev_priv, channel);
+	}
}

static void intel_disable_hdmi(struct intel_encoder *encoder)
···
	struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
	struct drm_device *dev = encoder->base.dev;
	struct drm_display_mode *adjusted_mode = &pipe_config->adjusted_mode;
+	int clock_12bpc = pipe_config->requested_mode.clock * 3 / 2;
+	int desired_bpp;

	if (intel_hdmi->color_range_auto) {
		/* See CEA-861-E - 5.1 Default Encoding Parameters */
···
	/*
	 * HDMI is either 12 or 8, so if the display lets 10bpc sneak
	 * through, clamp it down. Note that g4x/vlv don't support 12bpc hdmi
-	 * outputs.
+	 * outputs. We also need to check that the higher clock still fits
+	 * within limits.
	 */
-	if (pipe_config->pipe_bpp > 8*3 && HAS_PCH_SPLIT(dev)) {
-		DRM_DEBUG_KMS("forcing bpc to 12 for HDMI\n");
-		pipe_config->pipe_bpp = 12*3;
+	if (pipe_config->pipe_bpp > 8*3 && clock_12bpc <= 225000
+	    && HAS_PCH_SPLIT(dev)) {
+		DRM_DEBUG_KMS("picking bpc to 12 for HDMI output\n");
+		desired_bpp = 12*3;
+
+		/* Need to adjust the port link by 1.5x for 12bpc. */
+		adjusted_mode->clock = clock_12bpc;
+		pipe_config->pixel_target_clock =
+			pipe_config->requested_mode.clock;
	} else {
-		DRM_DEBUG_KMS("forcing bpc to 8 for HDMI\n");
-		pipe_config->pipe_bpp = 8*3;
+		DRM_DEBUG_KMS("picking bpc to 8 for HDMI output\n");
+		desired_bpp = 8*3;
+	}
+
+	if (!pipe_config->bw_constrained) {
+		DRM_DEBUG_KMS("forcing pipe bpc to %i for HDMI\n", desired_bpp);
+		pipe_config->pipe_bpp = desired_bpp;
+	}
+
+	if (adjusted_mode->clock > 225000) {
+		DRM_DEBUG_KMS("too high HDMI clock, rejecting mode\n");
+		return false;
	}

	return true;
···
	return 0;
}

+static void intel_hdmi_pre_enable(struct intel_encoder *encoder)
+{
+	struct intel_digital_port *dport = enc_to_dig_port(&encoder->base);
+	struct drm_device *dev = encoder->base.dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_crtc *intel_crtc =
+		to_intel_crtc(encoder->base.crtc);
+	int port = vlv_dport_to_channel(dport);
+	int pipe = intel_crtc->pipe;
+	u32 val;
+
+	if (!IS_VALLEYVIEW(dev))
+		return;
+
+	WARN_ON(!mutex_is_locked(&dev_priv->dpio_lock));
+
+	/* Enable clock channels for this port */
+	val = intel_dpio_read(dev_priv, DPIO_DATA_LANE_A(port));
+	val = 0;
+	if (pipe)
+		val |= (1<<21);
+	else
+		val &= ~(1<<21);
+	val |= 0x001000c4;
+	intel_dpio_write(dev_priv, DPIO_DATA_CHANNEL(port), val);
+
+	/* HDMI 1.0V-2dB */
+	intel_dpio_write(dev_priv, DPIO_TX_OCALINIT(port), 0);
+	intel_dpio_write(dev_priv, DPIO_TX_SWING_CTL4(port), 0x2b245f5f);
+	intel_dpio_write(dev_priv, DPIO_TX_SWING_CTL2(port), 0x5578b83a);
+	intel_dpio_write(dev_priv, DPIO_TX_SWING_CTL3(port), 0x0c782040);
+	intel_dpio_write(dev_priv, DPIO_TX3_SWING_CTL4(port), 0x2b247878);
+	intel_dpio_write(dev_priv, DPIO_PCS_STAGGER0(port), 0x00030000);
+	intel_dpio_write(dev_priv, DPIO_PCS_CTL_OVER1(port), 0x00002000);
+	intel_dpio_write(dev_priv, DPIO_TX_OCALINIT(port), DPIO_TX_OCALINIT_EN);
+
+	/* Program lane clock */
+	intel_dpio_write(dev_priv, DPIO_PCS_CLOCKBUF0(port), 0x00760018);
+	intel_dpio_write(dev_priv, DPIO_PCS_CLOCKBUF8(port), 0x00400888);
+}
+
+static void intel_hdmi_pre_pll_enable(struct intel_encoder *encoder)
+{
+	struct intel_digital_port *dport = enc_to_dig_port(&encoder->base);
+	struct drm_device *dev = encoder->base.dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	int port = vlv_dport_to_channel(dport);
+
+	if (!IS_VALLEYVIEW(dev))
+		return;
+
+	WARN_ON(!mutex_is_locked(&dev_priv->dpio_lock));
+
+	/* Program Tx lane resets to default */
+	intel_dpio_write(dev_priv, DPIO_PCS_TX(port),
+			 DPIO_PCS_TX_LANE2_RESET |
+			 DPIO_PCS_TX_LANE1_RESET);
+	intel_dpio_write(dev_priv, DPIO_PCS_CLK(port),
+			 DPIO_PCS_CLK_CRI_RXEB_EIOS_EN |
+			 DPIO_PCS_CLK_CRI_RXDIGFILTSG_EN |
+			 (1<<DPIO_PCS_CLK_DATAWIDTH_SHIFT) |
+			 DPIO_PCS_CLK_SOFT_RESET);
+
+	/* Fix up inter-pair skew failure */
+	intel_dpio_write(dev_priv, DPIO_PCS_STAGGER1(port), 0x00750f00);
+	intel_dpio_write(dev_priv, DPIO_TX_CTL(port), 0x00001500);
+	intel_dpio_write(dev_priv, DPIO_TX_LANE(port), 0x40400000);
+
+	intel_dpio_write(dev_priv, DPIO_PCS_CTL_OVER1(port), 0x00002000);
+	intel_dpio_write(dev_priv, DPIO_TX_OCALINIT(port), DPIO_TX_OCALINIT_EN);
+}
+
+static void intel_hdmi_post_disable(struct intel_encoder *encoder)
+{
+	struct intel_digital_port *dport = enc_to_dig_port(&encoder->base);
+	struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
+	int port = vlv_dport_to_channel(dport);
+
+	/* Reset lanes to avoid HDMI flicker (VLV w/a) */
+	mutex_lock(&dev_priv->dpio_lock);
+	intel_dpio_write(dev_priv, DPIO_PCS_TX(port), 0x00000000);
+	intel_dpio_write(dev_priv, DPIO_PCS_CLK(port), 0x00e00060);
+	mutex_unlock(&dev_priv->dpio_lock);
+}
+
static void intel_hdmi_destroy(struct drm_connector *connector)
{
	drm_sysfs_connector_remove(connector);
···
	intel_encoder->enable = intel_enable_hdmi;
	intel_encoder->disable = intel_disable_hdmi;
	intel_encoder->get_hw_state = intel_hdmi_get_hw_state;
+	if (IS_VALLEYVIEW(dev)) {
+		intel_encoder->pre_enable = intel_hdmi_pre_enable;
+		intel_encoder->pre_pll_enable = intel_hdmi_pre_pll_enable;
+		intel_encoder->post_disable = intel_hdmi_post_disable;
+	}

	intel_encoder->type = INTEL_OUTPUT_HDMI;
	intel_encoder->crtc_mask = (1 << 0) | (1 << 1) | (1 << 2);
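The compute_config change in the intel_hdmi.c diff scales the port link clock by 1.5x when picking 12bpc and rejects anything over the 225 MHz HDMI limit. The arithmetic can be sketched on its own; the 225000 kHz limit and the *3/2 factor come straight from the diff, while the function name here is invented for illustration:

```c
#include <assert.h>

/* Return the adjusted HDMI port clock in kHz for the given pipe bpp,
 * or -1 if the result exceeds the 225 MHz single-link HDMI limit.
 * 12bpc needs 1.5x the 8bpc link clock (12/8 == 3/2). */
static int hdmi_adjusted_clock(int pixel_clock_khz, int pipe_bpp)
{
	int clock = (pipe_bpp == 12 * 3) ?
		pixel_clock_khz * 3 / 2 : pixel_clock_khz;

	return clock > 225000 ? -1 : clock;
}
```

For example, 1080p at 148.5 MHz still fits at 12bpc (222.75 MHz), while a 154 MHz mode does not (231 MHz) and would be rejected, which is exactly the fallback-to-8bpc case the diff handles.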
+21 -227
drivers/gpu/drm/i915/intel_lvds.c
···
struct intel_lvds_encoder {
	struct intel_encoder base;

-	u32 pfit_control;
-	u32 pfit_pgm_ratios;
	bool is_dual_link;
	u32 reg;
···
	}

	/* set the corresponsding LVDS_BORDER bit */
-	temp |= dev_priv->lvds_border_bits;
+	temp &= ~LVDS_BORDER_ENABLE;
+	temp |= intel_crtc->config.gmch_pfit.lvds_border_bits;
	/* Set the B0-B3 data pairs corresponding to whether we're going to
	 * set the DPLLs for dual-channel mode or not.
	 */
···
	 * special lvds dither control bit on pch-split platforms, dithering is
	 * only controlled through the PIPECONF reg. */
	if (INTEL_INFO(dev)->gen == 4) {
-		if (dev_priv->lvds_dither)
+		/* Bspec wording suggests that LVDS port dithering only exists
+		 * for 18bpp panels. */
+		if (intel_crtc->config.dither &&
+		    intel_crtc->config.pipe_bpp == 18)
			temp |= LVDS_ENABLE_DITHER;
		else
			temp &= ~LVDS_ENABLE_DITHER;
···
		temp |= LVDS_VSYNC_POLARITY;

	I915_WRITE(lvds_encoder->reg, temp);
-}
-
-static void intel_pre_enable_lvds(struct intel_encoder *encoder)
-{
-	struct drm_device *dev = encoder->base.dev;
-	struct intel_lvds_encoder *enc = to_lvds_encoder(&encoder->base);
-	struct drm_i915_private *dev_priv = dev->dev_private;
-
-	if (HAS_PCH_SPLIT(dev) || !enc->pfit_control)
-		return;
-
-	/*
-	 * Enable automatic panel scaling so that non-native modes
-	 * fill the screen. The panel fitter should only be
-	 * adjusted whilst the pipe is disabled, according to
-	 * register description and PRM.
-	 */
-	DRM_DEBUG_KMS("applying panel-fitter: %x, %x\n",
-		      enc->pfit_control, enc->pfit_pgm_ratios);
-
-	I915_WRITE(PFIT_PGM_RATIOS, enc->pfit_pgm_ratios);
-	I915_WRITE(PFIT_CONTROL, enc->pfit_control);
}

/**
···
	return MODE_OK;
}

-static void
-centre_horizontally(struct drm_display_mode *mode,
-		    int width)
-{
-	u32 border, sync_pos, blank_width, sync_width;
-
-	/* keep the hsync and hblank widths constant */
-	sync_width = mode->crtc_hsync_end - mode->crtc_hsync_start;
-	blank_width = mode->crtc_hblank_end - mode->crtc_hblank_start;
-	sync_pos = (blank_width - sync_width + 1) / 2;
-
-	border = (mode->hdisplay - width + 1) / 2;
-	border += border & 1; /* make the border even */
-
-	mode->crtc_hdisplay = width;
-	mode->crtc_hblank_start = width + border;
-	mode->crtc_hblank_end = mode->crtc_hblank_start + blank_width;
-
-	mode->crtc_hsync_start = mode->crtc_hblank_start + sync_pos;
-	mode->crtc_hsync_end = mode->crtc_hsync_start + sync_width;
-}
-
-static void
-centre_vertically(struct drm_display_mode *mode,
-		  int height)
-{
-	u32 border, sync_pos, blank_width, sync_width;
-
-	/* keep the vsync and vblank widths constant */
-	sync_width = mode->crtc_vsync_end - mode->crtc_vsync_start;
-	blank_width = mode->crtc_vblank_end - mode->crtc_vblank_start;
-	sync_pos = (blank_width - sync_width + 1) / 2;
-
-	border = (mode->vdisplay - height + 1) / 2;
-
-	mode->crtc_vdisplay = height;
-	mode->crtc_vblank_start = height + border;
-	mode->crtc_vblank_end = mode->crtc_vblank_start + blank_width;
-
-	mode->crtc_vsync_start = mode->crtc_vblank_start + sync_pos;
-	mode->crtc_vsync_end = mode->crtc_vsync_start + sync_width;
-}
-
-static inline u32 panel_fitter_scaling(u32 source, u32 target)
-{
-	/*
-	 * Floating point operation is not supported. So the FACTOR
-	 * is defined, which can avoid the floating point computation
-	 * when calculating the panel ratio.
-	 */
-#define ACCURACY 12
-#define FACTOR (1 << ACCURACY)
-	u32 ratio = source * FACTOR / target;
-	return (FACTOR * ratio + FACTOR/2) / FACTOR;
-}
-
static bool intel_lvds_compute_config(struct intel_encoder *intel_encoder,
				      struct intel_crtc_config *pipe_config)
{
···
	struct intel_connector *intel_connector =
		&lvds_encoder->attached_connector->base;
	struct drm_display_mode *adjusted_mode = &pipe_config->adjusted_mode;
-	struct drm_display_mode *mode = &pipe_config->requested_mode;
	struct intel_crtc *intel_crtc = lvds_encoder->base.new_crtc;
-	u32 pfit_control = 0, pfit_pgm_ratios = 0, border = 0;
	unsigned int lvds_bpp;
-	int pipe;

	/* Should never happen!! */
	if (INTEL_INFO(dev)->gen < 4 && intel_crtc->pipe == 0) {
···
	else
		lvds_bpp = 6*3;

-	if (lvds_bpp != pipe_config->pipe_bpp) {
+	if (lvds_bpp != pipe_config->pipe_bpp && !pipe_config->bw_constrained) {
		DRM_DEBUG_KMS("forcing display bpp (was %d) to LVDS (%d)\n",
			      pipe_config->pipe_bpp, lvds_bpp);
		pipe_config->pipe_bpp = lvds_bpp;
	}
+
	/*
	 * We have timings from the BIOS for the panel, put them in
	 * to the adjusted mode. The CRTC will be set up for this mode,
···
	if (HAS_PCH_SPLIT(dev)) {
		pipe_config->has_pch_encoder = true;

-		intel_pch_panel_fitting(dev,
-					intel_connector->panel.fitting_mode,
-					mode, adjusted_mode);
+		intel_pch_panel_fitting(intel_crtc, pipe_config,
+					intel_connector->panel.fitting_mode);
		return true;
+	} else {
+		intel_gmch_panel_fitting(intel_crtc, pipe_config,
+					 intel_connector->panel.fitting_mode);
	}
-
-	/* Native modes don't need fitting */
-	if (adjusted_mode->hdisplay == mode->hdisplay &&
-	    adjusted_mode->vdisplay == mode->vdisplay)
-		goto out;
-
-	/* 965+ wants fuzzy fitting */
-	if (INTEL_INFO(dev)->gen >= 4)
-		pfit_control |= ((intel_crtc->pipe << PFIT_PIPE_SHIFT) |
-				 PFIT_FILTER_FUZZY);
-
-	/*
-	 * Enable automatic panel scaling for non-native modes so that they fill
-	 * the screen. Should be enabled before the pipe is enabled, according
-	 * to register description and PRM.
-	 * Change the value here to see the borders for debugging
-	 */
-	for_each_pipe(pipe)
-		I915_WRITE(BCLRPAT(pipe), 0);

	drm_mode_set_crtcinfo(adjusted_mode, 0);
	pipe_config->timings_set = true;
-
-	switch (intel_connector->panel.fitting_mode) {
-	case DRM_MODE_SCALE_CENTER:
-		/*
-		 * For centered modes, we have to calculate border widths &
-		 * heights and modify the values programmed into the CRTC.
-		 */
-		centre_horizontally(adjusted_mode, mode->hdisplay);
-		centre_vertically(adjusted_mode, mode->vdisplay);
-		border = LVDS_BORDER_ENABLE;
-		break;
-
-	case DRM_MODE_SCALE_ASPECT:
-		/* Scale but preserve the aspect ratio */
-		if (INTEL_INFO(dev)->gen >= 4) {
-			u32 scaled_width = adjusted_mode->hdisplay * mode->vdisplay;
-			u32 scaled_height = mode->hdisplay * adjusted_mode->vdisplay;
-
-			/* 965+ is easy, it does everything in hw */
-			if (scaled_width > scaled_height)
-				pfit_control |= PFIT_ENABLE | PFIT_SCALING_PILLAR;
-			else if (scaled_width < scaled_height)
-				pfit_control |= PFIT_ENABLE | PFIT_SCALING_LETTER;
-			else if (adjusted_mode->hdisplay != mode->hdisplay)
-				pfit_control |= PFIT_ENABLE | PFIT_SCALING_AUTO;
-		} else {
-			u32 scaled_width = adjusted_mode->hdisplay * mode->vdisplay;
-			u32 scaled_height = mode->hdisplay * adjusted_mode->vdisplay;
-			/*
-			 * For earlier chips we have to calculate the scaling
-			 * ratio by hand and program it into the
-			 * PFIT_PGM_RATIO register
-			 */
-			if (scaled_width > scaled_height) { /* pillar */
-				centre_horizontally(adjusted_mode, scaled_height / mode->vdisplay);
-
-				border = LVDS_BORDER_ENABLE;
-				if (mode->vdisplay != adjusted_mode->vdisplay) {
-					u32 bits = panel_fitter_scaling(mode->vdisplay, adjusted_mode->vdisplay);
-					pfit_pgm_ratios |= (bits << PFIT_HORIZ_SCALE_SHIFT |
-							    bits << PFIT_VERT_SCALE_SHIFT);
-					pfit_control |= (PFIT_ENABLE |
-							 VERT_INTERP_BILINEAR |
-							 HORIZ_INTERP_BILINEAR);
-				}
-			} else if (scaled_width < scaled_height) { /* letter */
-				centre_vertically(adjusted_mode, scaled_width / mode->hdisplay);
-
-				border = LVDS_BORDER_ENABLE;
-				if (mode->hdisplay != adjusted_mode->hdisplay) {
-					u32 bits = panel_fitter_scaling(mode->hdisplay, adjusted_mode->hdisplay);
-					pfit_pgm_ratios |= (bits << PFIT_HORIZ_SCALE_SHIFT |
-							    bits << PFIT_VERT_SCALE_SHIFT);
-					pfit_control |= (PFIT_ENABLE |
-							 VERT_INTERP_BILINEAR |
-							 HORIZ_INTERP_BILINEAR);
-				}
-			} else
-				/* Aspects match, Let hw scale both directions */
-				pfit_control |= (PFIT_ENABLE |
-						 VERT_AUTO_SCALE | HORIZ_AUTO_SCALE |
-						 VERT_INTERP_BILINEAR |
-						 HORIZ_INTERP_BILINEAR);
-		}
-		break;
-
-	case DRM_MODE_SCALE_FULLSCREEN:
-		/*
-		 * Full scaling, even if it changes the aspect ratio.
-		 * Fortunately this is all done for us in hw.
-		 */
-		if (mode->vdisplay != adjusted_mode->vdisplay ||
-		    mode->hdisplay != adjusted_mode->hdisplay) {
-			pfit_control |= PFIT_ENABLE;
-			if (INTEL_INFO(dev)->gen >= 4)
-				pfit_control |= PFIT_SCALING_AUTO;
-			else
-				pfit_control |= (VERT_AUTO_SCALE |
-						 VERT_INTERP_BILINEAR |
-						 HORIZ_AUTO_SCALE |
-						 HORIZ_INTERP_BILINEAR);
-		}
-		break;
-
-	default:
-		break;
-	}
-
-out:
-	/* If not enabling scaling, be consistent and always use 0. */
-	if ((pfit_control & PFIT_ENABLE) == 0) {
-		pfit_control = 0;
-		pfit_pgm_ratios = 0;
-	}
-
-	/* Make sure pre-965 set dither correctly */
-	if (INTEL_INFO(dev)->gen < 4 && dev_priv->lvds_dither)
-		pfit_control |= PANEL_8TO6_DITHER_ENABLE;
-
-	if (pfit_control != lvds_encoder->pfit_control ||
-	    pfit_pgm_ratios != lvds_encoder->pfit_pgm_ratios) {
-		lvds_encoder->pfit_control = pfit_control;
-		lvds_encoder->pfit_pgm_ratios = pfit_pgm_ratios;
-	}
-	dev_priv->lvds_border_bits = border;

	/*
	 * XXX: It would be nice to support lower refresh rates on the
···
	struct drm_i915_private *dev_priv = dev->dev_private;
	int i;

-	if (!dev_priv->child_dev_num)
+	if (!dev_priv->vbt.child_dev_num)
		return true;

-	for (i = 0; i < dev_priv->child_dev_num; i++) {
-		struct child_device_config *child = dev_priv->child_dev + i;
+	for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
+		struct child_device_config *child = dev_priv->vbt.child_dev + i;

		/* If the device type is not LFP, continue.
		 * We have to check both the new identifiers as well as the
···
	 */
	val = I915_READ(lvds_encoder->reg);
	if (!(val & ~(LVDS_PIPE_MASK | LVDS_DETECTED)))
-		val = dev_priv->bios_lvds_val;
+		val = dev_priv->vbt.bios_lvds_val;

	return (val & LVDS_CLKB_POWER_MASK) == LVDS_CLKB_POWER_UP;
}
···
	if (HAS_PCH_SPLIT(dev)) {
		if ((I915_READ(PCH_LVDS) & LVDS_DETECTED) == 0)
			return false;
-		if (dev_priv->edp.support) {
+		if (dev_priv->vbt.edp_support) {
			DRM_DEBUG_KMS("disable LVDS for eDP support\n");
			return false;
		}
···

	lvds_encoder->attached_connector = lvds_connector;

-	if (!HAS_PCH_SPLIT(dev)) {
-		lvds_encoder->pfit_control = I915_READ(PFIT_CONTROL);
-	}
-
	intel_encoder = &lvds_encoder->base;
	encoder = &intel_encoder->base;
	intel_connector = &lvds_connector->base;
···
			 DRM_MODE_ENCODER_LVDS);

	intel_encoder->enable = intel_enable_lvds;
-	intel_encoder->pre_enable = intel_pre_enable_lvds;
	intel_encoder->pre_pll_enable = intel_pre_pll_enable_lvds;
	intel_encoder->compute_config = intel_lvds_compute_config;
	intel_encoder->disable = intel_disable_lvds;
···
	}

	/* Failed to get EDID, what about VBT? */
-	if (dev_priv->lfp_lvds_vbt_mode) {
+	if (dev_priv->vbt.lfp_lvds_vbt_mode) {
		DRM_DEBUG_KMS("using mode from VBT: ");
-		drm_mode_debug_printmodeline(dev_priv->lfp_lvds_vbt_mode);
+		drm_mode_debug_printmodeline(dev_priv->vbt.lfp_lvds_vbt_mode);

-		fixed_mode = drm_mode_duplicate(dev, dev_priv->lfp_lvds_vbt_mode);
+		fixed_mode = drm_mode_duplicate(dev, dev_priv->vbt.lfp_lvds_vbt_mode);
		if (fixed_mode) {
			fixed_mode->type |= DRM_MODE_TYPE_PREFERRED;
			goto out;
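The gen4 LVDS change in the diff above replaces the old VBT-driven dither flag with a decision computed from the pipe config: per the Bspec note quoted in the diff, the port dither bit is only set when dithering was requested and the panel is 18bpp. A tiny standalone sketch of that predicate (the helper name is invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Gen4 LVDS port dithering: only enable the LVDS_ENABLE_DITHER bit when
 * the computed pipe config asked for dithering *and* the panel is 18bpp
 * (6 bits per channel), matching the Bspec note quoted in the diff. */
static bool lvds_port_dither(bool dither_requested, int pipe_bpp)
{
	return dither_requested && pipe_bpp == 18;
}
```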
+26 -76
drivers/gpu/drm/i915/intel_opregion.c
···
	u8 rsvd[102];
} __attribute__((packed));

+/* Driver readiness indicator */
+#define ASLE_ARDY_READY		(1 << 0)
+#define ASLE_ARDY_NOT_READY	(0 << 0)
+
/* ASLE irq request bits */
#define ASLE_SET_ALS_ILLUM	(1 << 0)
#define ASLE_SET_BACKLIGHT	(1 << 1)
···
#define ASLE_BACKLIGHT_FAILED	(1<<12)
#define ASLE_PFIT_FAILED	(1<<14)
#define ASLE_PWM_FREQ_FAILED	(1<<16)
+
+/* Technology enabled indicator */
+#define ASLE_TCHE_ALS_EN	(1 << 0)
+#define ASLE_TCHE_BLC_EN	(1 << 1)
+#define ASLE_TCHE_PFIT_EN	(1 << 2)
+#define ASLE_TCHE_PFMB_EN	(1 << 3)

/* ASLE backlight brightness to set */
#define ASLE_BCLP_VALID		(1<<31)
···
{
	struct drm_i915_private *dev_priv = dev->dev_private;
	struct opregion_asle __iomem *asle = dev_priv->opregion.asle;
-	u32 max;

	DRM_DEBUG_DRIVER("bclp = 0x%08x\n", bclp);
···
	if (bclp > 255)
		return ASLE_BACKLIGHT_FAILED;

-	max = intel_panel_get_max_backlight(dev);
-	intel_panel_set_backlight(dev, bclp * max / 255);
+	intel_panel_set_backlight(dev, bclp, 255);
	iowrite32((bclp*0x64)/0xff | ASLE_CBLV_VALID, &asle->cblv);

	return 0;
···
{
	/* alsi is the current ALS reading in lux. 0 indicates below sensor
	   range, 0xffff indicates above sensor range. 1-0xfffe are valid */
-	return 0;
+	DRM_DEBUG_DRIVER("Illum is not supported\n");
+	return ASLE_ALS_ILLUM_FAILED;
}

static u32 asle_set_pwm_freq(struct drm_device *dev, u32 pfmb)
{
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	if (pfmb & ASLE_PFMB_PWM_VALID) {
-		u32 blc_pwm_ctl = I915_READ(BLC_PWM_CTL);
-		u32 pwm = pfmb & ASLE_PFMB_PWM_MASK;
-		blc_pwm_ctl &= BACKLIGHT_DUTY_CYCLE_MASK;
-		pwm = pwm >> 9;
-		/* FIXME - what do we do with the PWM? */
-	}
-	return 0;
+	DRM_DEBUG_DRIVER("PWM freq is not supported\n");
+	return ASLE_PWM_FREQ_FAILED;
}

static u32 asle_set_pfit(struct drm_device *dev, u32 pfit)
{
	/* Panel fitting is currently controlled by the X code, so this is a
	   noop until modesetting support works fully */
-	if (!(pfit & ASLE_PFIT_VALID))
-		return ASLE_PFIT_FAILED;
-	return 0;
+	DRM_DEBUG_DRIVER("Pfit is not supported\n");
+	return ASLE_PFIT_FAILED;
}

void intel_opregion_asle_intr(struct drm_device *dev)
···
		asle_stat |= asle_set_pwm_freq(dev, ioread32(&asle->pfmb));

	iowrite32(asle_stat, &asle->aslc);
-}
-
-void intel_opregion_gse_intr(struct drm_device *dev)
-{
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct opregion_asle __iomem *asle = dev_priv->opregion.asle;
-	u32 asle_stat = 0;
-	u32 asle_req;
-
-	if (!asle)
-		return;
-
-	asle_req = ioread32(&asle->aslc) & ASLE_REQ_MSK;
-
-	if (!asle_req) {
-		DRM_DEBUG_DRIVER("non asle set request??\n");
-		return;
-	}
-
-	if (asle_req & ASLE_SET_ALS_ILLUM) {
-		DRM_DEBUG_DRIVER("Illum is not supported\n");
-		asle_stat |= ASLE_ALS_ILLUM_FAILED;
-	}
-
-	if (asle_req & ASLE_SET_BACKLIGHT)
-		asle_stat |= asle_set_backlight(dev, ioread32(&asle->bclp));
-
-	if (asle_req & ASLE_SET_PFIT) {
-		DRM_DEBUG_DRIVER("Pfit is not supported\n");
-		asle_stat |= ASLE_PFIT_FAILED;
-	}
-
-	if (asle_req & ASLE_SET_PWM_FREQ) {
-		DRM_DEBUG_DRIVER("PWM freq is not supported\n");
-		asle_stat |= ASLE_PWM_FREQ_FAILED;
-	}
-
-	iowrite32(asle_stat, &asle->aslc);
-}
-#define ASLE_ALS_EN (1<<0)
-#define ASLE_BLC_EN (1<<1)
-#define ASLE_PFIT_EN (1<<2)
-#define ASLE_PFMB_EN (1<<3)
-
-void intel_opregion_enable_asle(struct drm_device *dev)
-{
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct opregion_asle __iomem *asle = dev_priv->opregion.asle;
-
-	if (asle) {
-		if (IS_MOBILE(dev))
-			intel_enable_asle(dev);
-
-		iowrite32(ASLE_ALS_EN | ASLE_BLC_EN | ASLE_PFIT_EN |
-			  ASLE_PFMB_EN, &asle->tche);
-		iowrite32(1, &asle->ardy);
-	}
}

#define ACPI_EV_DISPLAY_SWITCH (1<<0)
···
		register_acpi_notifier(&intel_opregion_notifier);
	}

-	if (opregion->asle)
-		intel_opregion_enable_asle(dev);
+	if (opregion->asle) {
+		iowrite32(ASLE_TCHE_BLC_EN, &opregion->asle->tche);
+		iowrite32(ASLE_ARDY_READY, &opregion->asle->ardy);
+	}
}

void intel_opregion_fini(struct drm_device *dev)
···
	if (!opregion->header)
		return;
+
+	if (opregion->asle)
+		iowrite32(ASLE_ARDY_NOT_READY, &opregion->asle->ardy);

	if (opregion->acpi) {
		iowrite32(0, &opregion->acpi->drdy);
···
	if (mboxes & MBOX_ASLE) {
		DRM_DEBUG_DRIVER("ASLE supported\n");
		opregion->asle = base + OPREGION_ASLE_OFFSET;
+
+		iowrite32(ASLE_ARDY_NOT_READY, &opregion->asle->ardy);
	}

	return 0;
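The simplified asle_set_backlight() above now hands the raw 0-255 bclp value plus its maximum straight to intel_panel_set_backlight() and reports the brightness back to firmware as a percentage in cblv. The report arithmetic, sketched standalone (the valid-bit masks and the (bclp*0x64)/0xff expression match the diff; the wrapper function name is invented):

```c
#include <assert.h>
#include <stdint.h>

#define ASLE_BCLP_VALID (1u << 31)
#define ASLE_CBLV_VALID (1u << 31)

/* Convert a valid ASLE bclp request (brightness 0-255 plus valid bit)
 * into the cblv percentage field the driver writes back; returns 0 on
 * an invalid request. */
static uint32_t asle_cblv_from_bclp(uint32_t bclp)
{
	if (!(bclp & ASLE_BCLP_VALID))
		return 0;
	bclp &= ~ASLE_BCLP_VALID;
	if (bclp > 255)
		return 0;
	/* 0x64 == 100, 0xff == 255: scale 0-255 down to 0-100 percent. */
	return (bclp * 0x64) / 0xff | ASLE_CBLV_VALID;
}
```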
+265 -31
drivers/gpu/drm/i915/intel_panel.c
···

/* adjusted_mode has been preset to be the panel's fixed mode */
void
-intel_pch_panel_fitting(struct drm_device *dev,
-			int fitting_mode,
-			const struct drm_display_mode *mode,
-			struct drm_display_mode *adjusted_mode)
+intel_pch_panel_fitting(struct intel_crtc *intel_crtc,
+			struct intel_crtc_config *pipe_config,
+			int fitting_mode)
{
-	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_display_mode *mode, *adjusted_mode;
	int x, y, width, height;
+
+	mode = &pipe_config->requested_mode;
+	adjusted_mode = &pipe_config->adjusted_mode;

	x = y = width = height = 0;
···
		}
		break;

-	default:
	case DRM_MODE_SCALE_FULLSCREEN:
		x = y = 0;
		width = adjusted_mode->hdisplay;
		height = adjusted_mode->vdisplay;
		break;
+
+	default:
+		WARN(1, "bad panel fit mode: %d\n", fitting_mode);
+		return;
	}

done:
-	dev_priv->pch_pf_pos = (x << 16) | y;
-	dev_priv->pch_pf_size = (width << 16) | height;
+	pipe_config->pch_pfit.pos = (x << 16) | y;
+	pipe_config->pch_pfit.size = (width << 16) | height;
+}
+
+static void
+centre_horizontally(struct drm_display_mode *mode,
+		    int width)
+{
+	u32 border, sync_pos, blank_width, sync_width;
+
+	/* keep the hsync and hblank widths constant */
+	sync_width = mode->crtc_hsync_end - mode->crtc_hsync_start;
+	blank_width = mode->crtc_hblank_end - mode->crtc_hblank_start;
+	sync_pos = (blank_width - sync_width + 1) / 2;
+
+	border = (mode->hdisplay - width + 1) / 2;
+	border += border & 1; /* make the border even */
+
+	mode->crtc_hdisplay = width;
+	mode->crtc_hblank_start = width + border;
+	mode->crtc_hblank_end = mode->crtc_hblank_start + blank_width;
+
+	mode->crtc_hsync_start = mode->crtc_hblank_start + sync_pos;
+	mode->crtc_hsync_end = mode->crtc_hsync_start + sync_width;
+}
+
+static void
+centre_vertically(struct drm_display_mode *mode,
+		  int height)
+{
+	u32 border, sync_pos, blank_width, sync_width;
+
+	/* keep the vsync and vblank widths constant */
+	sync_width = mode->crtc_vsync_end - mode->crtc_vsync_start;
+	blank_width = mode->crtc_vblank_end - mode->crtc_vblank_start;
+	sync_pos = (blank_width - sync_width + 1) / 2;
+
+	border = (mode->vdisplay - height + 1) / 2;
+
+	mode->crtc_vdisplay = height;
+	mode->crtc_vblank_start = height + border;
+	mode->crtc_vblank_end = mode->crtc_vblank_start + blank_width;
+
+	mode->crtc_vsync_start = mode->crtc_vblank_start + sync_pos;
+	mode->crtc_vsync_end = mode->crtc_vsync_start + sync_width;
+}
+
+static inline u32 panel_fitter_scaling(u32 source, u32 target)
+{
+	/*
+	 * Floating point operation is not supported. So the FACTOR
+	 * is defined, which can avoid the floating point computation
+	 * when calculating the panel ratio.
+	 */
+#define ACCURACY 12
+#define FACTOR (1 << ACCURACY)
+	u32 ratio = source * FACTOR / target;
+	return (FACTOR * ratio + FACTOR/2) / FACTOR;
+}
+
+void intel_gmch_panel_fitting(struct intel_crtc *intel_crtc,
+			      struct intel_crtc_config *pipe_config,
+			      int fitting_mode)
+{
+	struct drm_device *dev = intel_crtc->base.dev;
+	u32 pfit_control = 0, pfit_pgm_ratios = 0, border = 0;
+	struct drm_display_mode *mode, *adjusted_mode;
+
+	mode = &pipe_config->requested_mode;
+	adjusted_mode = &pipe_config->adjusted_mode;
+
+	/* Native modes don't need fitting */
+	if (adjusted_mode->hdisplay == mode->hdisplay &&
+	    adjusted_mode->vdisplay == mode->vdisplay)
+		goto out;
+
+	switch (fitting_mode) {
+	case DRM_MODE_SCALE_CENTER:
+		/*
+		 * For centered modes, we have to calculate border widths &
+		 * heights and modify the values programmed into the CRTC.
+		 */
+		centre_horizontally(adjusted_mode, mode->hdisplay);
+		centre_vertically(adjusted_mode, mode->vdisplay);
+		border = LVDS_BORDER_ENABLE;
+		break;
+	case DRM_MODE_SCALE_ASPECT:
+		/* Scale but preserve the aspect ratio */
+		if (INTEL_INFO(dev)->gen >= 4) {
+			u32 scaled_width = adjusted_mode->hdisplay *
+				mode->vdisplay;
+			u32 scaled_height = mode->hdisplay *
+				adjusted_mode->vdisplay;
+
+			/* 965+ is easy, it does everything in hw */
+			if (scaled_width > scaled_height)
+				pfit_control |= PFIT_ENABLE |
+					PFIT_SCALING_PILLAR;
+			else if (scaled_width < scaled_height)
+				pfit_control |= PFIT_ENABLE |
+					PFIT_SCALING_LETTER;
+			else if (adjusted_mode->hdisplay != mode->hdisplay)
+				pfit_control |= PFIT_ENABLE | PFIT_SCALING_AUTO;
+		} else {
+			u32 scaled_width = adjusted_mode->hdisplay *
+				mode->vdisplay;
+			u32 scaled_height = mode->hdisplay *
+				adjusted_mode->vdisplay;
+			/*
+			 * For earlier chips we have to
calculate the scaling 229 + * ratio by hand and program it into the 230 + * PFIT_PGM_RATIO register 231 + */ 232 + if (scaled_width > scaled_height) { /* pillar */ 233 + centre_horizontally(adjusted_mode, 234 + scaled_height / 235 + mode->vdisplay); 236 + 237 + border = LVDS_BORDER_ENABLE; 238 + if (mode->vdisplay != adjusted_mode->vdisplay) { 239 + u32 bits = panel_fitter_scaling(mode->vdisplay, adjusted_mode->vdisplay); 240 + pfit_pgm_ratios |= (bits << PFIT_HORIZ_SCALE_SHIFT | 241 + bits << PFIT_VERT_SCALE_SHIFT); 242 + pfit_control |= (PFIT_ENABLE | 243 + VERT_INTERP_BILINEAR | 244 + HORIZ_INTERP_BILINEAR); 245 + } 246 + } else if (scaled_width < scaled_height) { /* letter */ 247 + centre_vertically(adjusted_mode, 248 + scaled_width / 249 + mode->hdisplay); 250 + 251 + border = LVDS_BORDER_ENABLE; 252 + if (mode->hdisplay != adjusted_mode->hdisplay) { 253 + u32 bits = panel_fitter_scaling(mode->hdisplay, adjusted_mode->hdisplay); 254 + pfit_pgm_ratios |= (bits << PFIT_HORIZ_SCALE_SHIFT | 255 + bits << PFIT_VERT_SCALE_SHIFT); 256 + pfit_control |= (PFIT_ENABLE | 257 + VERT_INTERP_BILINEAR | 258 + HORIZ_INTERP_BILINEAR); 259 + } 260 + } else { 261 + /* Aspects match, Let hw scale both directions */ 262 + pfit_control |= (PFIT_ENABLE | 263 + VERT_AUTO_SCALE | HORIZ_AUTO_SCALE | 264 + VERT_INTERP_BILINEAR | 265 + HORIZ_INTERP_BILINEAR); 266 + } 267 + } 268 + break; 269 + case DRM_MODE_SCALE_FULLSCREEN: 270 + /* 271 + * Full scaling, even if it changes the aspect ratio. 272 + * Fortunately this is all done for us in hw. 
273 + */ 274 + if (mode->vdisplay != adjusted_mode->vdisplay || 275 + mode->hdisplay != adjusted_mode->hdisplay) { 276 + pfit_control |= PFIT_ENABLE; 277 + if (INTEL_INFO(dev)->gen >= 4) 278 + pfit_control |= PFIT_SCALING_AUTO; 279 + else 280 + pfit_control |= (VERT_AUTO_SCALE | 281 + VERT_INTERP_BILINEAR | 282 + HORIZ_AUTO_SCALE | 283 + HORIZ_INTERP_BILINEAR); 284 + } 285 + break; 286 + default: 287 + WARN(1, "bad panel fit mode: %d\n", fitting_mode); 288 + return; 289 + } 290 + 291 + /* 965+ wants fuzzy fitting */ 292 + /* FIXME: handle multiple panels by failing gracefully */ 293 + if (INTEL_INFO(dev)->gen >= 4) 294 + pfit_control |= ((intel_crtc->pipe << PFIT_PIPE_SHIFT) | 295 + PFIT_FILTER_FUZZY); 296 + 297 + out: 298 + if ((pfit_control & PFIT_ENABLE) == 0) { 299 + pfit_control = 0; 300 + pfit_pgm_ratios = 0; 301 + } 302 + 303 + /* Make sure pre-965 set dither correctly for 18bpp panels. */ 304 + if (INTEL_INFO(dev)->gen < 4 && pipe_config->pipe_bpp == 18) 305 + pfit_control |= PANEL_8TO6_DITHER_ENABLE; 306 + 307 + pipe_config->gmch_pfit.control = pfit_control; 308 + pipe_config->gmch_pfit.pgm_ratios = pfit_pgm_ratios; 309 + pipe_config->gmch_pfit.lvds_border_bits = border; 120 310 } 121 311 122 312 static int is_backlight_combination_mode(struct drm_device *dev) ··· 324 130 return 0; 325 131 } 326 132 133 + /* XXX: query mode clock or hardware clock and program max PWM appropriately 134 + * when it's 0. 135 + */ 327 136 static u32 i915_read_blc_pwm_ctl(struct drm_device *dev) 328 137 { 329 138 struct drm_i915_private *dev_priv = dev->dev_private; 330 139 u32 val; 140 + 141 + WARN_ON(!spin_is_locked(&dev_priv->backlight.lock)); 331 142 332 143 /* Restore the CTL value if it lost, e.g. 
GPU reset */ 333 144 ··· 363 164 return val; 364 165 } 365 166 366 - static u32 _intel_panel_get_max_backlight(struct drm_device *dev) 167 + static u32 intel_panel_get_max_backlight(struct drm_device *dev) 367 168 { 368 169 u32 max; 369 170 ··· 381 182 max *= 0xff; 382 183 } 383 184 384 - return max; 385 - } 386 - 387 - u32 intel_panel_get_max_backlight(struct drm_device *dev) 388 - { 389 - u32 max; 390 - 391 - max = _intel_panel_get_max_backlight(dev); 392 - if (max == 0) { 393 - /* XXX add code here to query mode clock or hardware clock 394 - * and program max PWM appropriately. 395 - */ 396 - pr_warn_once("fixme: max PWM is zero\n"); 397 - return 1; 398 - } 399 - 400 185 DRM_DEBUG_DRIVER("max backlight PWM = %d\n", max); 186 + 401 187 return max; 402 188 } 403 189 ··· 401 217 return val; 402 218 403 219 if (i915_panel_invert_brightness > 0 || 404 - dev_priv->quirks & QUIRK_INVERT_BRIGHTNESS) 405 - return intel_panel_get_max_backlight(dev) - val; 220 + dev_priv->quirks & QUIRK_INVERT_BRIGHTNESS) { 221 + u32 max = intel_panel_get_max_backlight(dev); 222 + if (max) 223 + return max - val; 224 + } 406 225 407 226 return val; 408 227 } ··· 414 227 { 415 228 struct drm_i915_private *dev_priv = dev->dev_private; 416 229 u32 val; 230 + unsigned long flags; 231 + 232 + spin_lock_irqsave(&dev_priv->backlight.lock, flags); 417 233 418 234 if (HAS_PCH_SPLIT(dev)) { 419 235 val = I915_READ(BLC_PWM_CPU_CTL) & BACKLIGHT_DUTY_CYCLE_MASK; ··· 434 244 } 435 245 436 246 val = intel_panel_compute_brightness(dev, val); 247 + 248 + spin_unlock_irqrestore(&dev_priv->backlight.lock, flags); 249 + 437 250 DRM_DEBUG_DRIVER("get backlight PWM = %d\n", val); 438 251 return val; 439 252 } ··· 463 270 u32 max = intel_panel_get_max_backlight(dev); 464 271 u8 lbpc; 465 272 273 + /* we're screwed, but keep behaviour backwards compatible */ 274 + if (!max) 275 + max = 1; 276 + 466 277 lbpc = level * 0xfe / max + 1; 467 278 level /= lbpc; 468 279 pci_write_config_byte(dev->pdev, PCI_LBPC, lbpc); 
··· 479 282 I915_WRITE(BLC_PWM_CTL, tmp | level); 480 283 } 481 284 482 - void intel_panel_set_backlight(struct drm_device *dev, u32 level) 285 + /* set backlight brightness to level in range [0..max] */ 286 + void intel_panel_set_backlight(struct drm_device *dev, u32 level, u32 max) 483 287 { 484 288 struct drm_i915_private *dev_priv = dev->dev_private; 289 + u32 freq; 290 + unsigned long flags; 291 + 292 + spin_lock_irqsave(&dev_priv->backlight.lock, flags); 293 + 294 + freq = intel_panel_get_max_backlight(dev); 295 + if (!freq) { 296 + /* we are screwed, bail out */ 297 + goto out; 298 + } 299 + 300 + /* scale to hardware */ 301 + level = level * freq / max; 485 302 486 303 dev_priv->backlight.level = level; 487 304 if (dev_priv->backlight.device) ··· 503 292 504 293 if (dev_priv->backlight.enabled) 505 294 intel_panel_actually_set_backlight(dev, level); 295 + out: 296 + spin_unlock_irqrestore(&dev_priv->backlight.lock, flags); 506 297 } 507 298 508 299 void intel_panel_disable_backlight(struct drm_device *dev) 509 300 { 510 301 struct drm_i915_private *dev_priv = dev->dev_private; 302 + unsigned long flags; 303 + 304 + spin_lock_irqsave(&dev_priv->backlight.lock, flags); 511 305 512 306 dev_priv->backlight.enabled = false; 513 307 intel_panel_actually_set_backlight(dev, 0); ··· 530 314 I915_WRITE(BLC_PWM_PCH_CTL1, tmp); 531 315 } 532 316 } 317 + 318 + spin_unlock_irqrestore(&dev_priv->backlight.lock, flags); 533 319 } 534 320 535 321 void intel_panel_enable_backlight(struct drm_device *dev, 536 322 enum pipe pipe) 537 323 { 538 324 struct drm_i915_private *dev_priv = dev->dev_private; 325 + enum transcoder cpu_transcoder = 326 + intel_pipe_to_cpu_transcoder(dev_priv, pipe); 327 + unsigned long flags; 328 + 329 + spin_lock_irqsave(&dev_priv->backlight.lock, flags); 539 330 540 331 if (dev_priv->backlight.level == 0) { 541 332 dev_priv->backlight.level = intel_panel_get_max_backlight(dev); ··· 570 347 else 571 348 tmp &= ~BLM_PIPE_SELECT; 572 349 573 - tmp |= 
BLM_PIPE(pipe); 350 + if (cpu_transcoder == TRANSCODER_EDP) 351 + tmp |= BLM_TRANSCODER_EDP; 352 + else 353 + tmp |= BLM_PIPE(cpu_transcoder); 574 354 tmp &= ~BLM_PWM_ENABLE; 575 355 576 356 I915_WRITE(reg, tmp); ··· 595 369 */ 596 370 dev_priv->backlight.enabled = true; 597 371 intel_panel_actually_set_backlight(dev, dev_priv->backlight.level); 372 + 373 + spin_unlock_irqrestore(&dev_priv->backlight.lock, flags); 598 374 } 599 375 600 376 static void intel_panel_init_backlight(struct drm_device *dev) ··· 633 405 static int intel_panel_update_status(struct backlight_device *bd) 634 406 { 635 407 struct drm_device *dev = bl_get_data(bd); 636 - intel_panel_set_backlight(dev, bd->props.brightness); 408 + intel_panel_set_backlight(dev, bd->props.brightness, 409 + bd->props.max_brightness); 637 410 return 0; 638 411 } 639 412 ··· 654 425 struct drm_device *dev = connector->dev; 655 426 struct drm_i915_private *dev_priv = dev->dev_private; 656 427 struct backlight_properties props; 428 + unsigned long flags; 657 429 658 430 intel_panel_init_backlight(dev); 659 431 ··· 664 434 memset(&props, 0, sizeof(props)); 665 435 props.type = BACKLIGHT_RAW; 666 436 props.brightness = dev_priv->backlight.level; 667 - props.max_brightness = _intel_panel_get_max_backlight(dev); 437 + 438 + spin_lock_irqsave(&dev_priv->backlight.lock, flags); 439 + props.max_brightness = intel_panel_get_max_backlight(dev); 440 + spin_unlock_irqrestore(&dev_priv->backlight.lock, flags); 441 + 668 442 if (props.max_brightness == 0) { 669 443 DRM_DEBUG_DRIVER("Failed to get maximum backlight value\n"); 670 444 return -ENODEV;
drivers/gpu/drm/i915/intel_pm.c | +519 -72
··· 113 113 fbc_ctl |= obj->fence_reg; 114 114 I915_WRITE(FBC_CONTROL, fbc_ctl); 115 115 116 - DRM_DEBUG_KMS("enabled FBC, pitch %d, yoff %d, plane %d, ", 117 - cfb_pitch, crtc->y, intel_crtc->plane); 116 + DRM_DEBUG_KMS("enabled FBC, pitch %d, yoff %d, plane %c, ", 117 + cfb_pitch, crtc->y, plane_name(intel_crtc->plane)); 118 118 } 119 119 120 120 static bool i8xx_fbc_enabled(struct drm_device *dev) ··· 148 148 /* enable it... */ 149 149 I915_WRITE(DPFC_CONTROL, I915_READ(DPFC_CONTROL) | DPFC_CTL_EN); 150 150 151 - DRM_DEBUG_KMS("enabled fbc on plane %d\n", intel_crtc->plane); 151 + DRM_DEBUG_KMS("enabled fbc on plane %c\n", plane_name(intel_crtc->plane)); 152 152 } 153 153 154 154 static void g4x_disable_fbc(struct drm_device *dev) ··· 228 228 sandybridge_blit_fbc_update(dev); 229 229 } 230 230 231 - DRM_DEBUG_KMS("enabled fbc on plane %d\n", intel_crtc->plane); 231 + DRM_DEBUG_KMS("enabled fbc on plane %c\n", plane_name(intel_crtc->plane)); 232 232 } 233 233 234 234 static void ironlake_disable_fbc(struct drm_device *dev) ··· 242 242 dpfc_ctl &= ~DPFC_CTL_EN; 243 243 I915_WRITE(ILK_DPFC_CONTROL, dpfc_ctl); 244 244 245 + if (IS_IVYBRIDGE(dev)) 246 + /* WaFbcDisableDpfcClockGating:ivb */ 247 + I915_WRITE(ILK_DSPCLK_GATE_D, 248 + I915_READ(ILK_DSPCLK_GATE_D) & 249 + ~ILK_DPFCUNIT_CLOCK_GATE_DISABLE); 250 + 251 + if (IS_HASWELL(dev)) 252 + /* WaFbcDisableDpfcClockGating:hsw */ 253 + I915_WRITE(HSW_CLKGATE_DISABLE_PART_1, 254 + I915_READ(HSW_CLKGATE_DISABLE_PART_1) & 255 + ~HSW_DPFC_GATING_DISABLE); 256 + 245 257 DRM_DEBUG_KMS("disabled FBC\n"); 246 258 } 247 259 } ··· 263 251 struct drm_i915_private *dev_priv = dev->dev_private; 264 252 265 253 return I915_READ(ILK_DPFC_CONTROL) & DPFC_CTL_EN; 254 + } 255 + 256 + static void gen7_enable_fbc(struct drm_crtc *crtc, unsigned long interval) 257 + { 258 + struct drm_device *dev = crtc->dev; 259 + struct drm_i915_private *dev_priv = dev->dev_private; 260 + struct drm_framebuffer *fb = crtc->fb; 261 + struct 
intel_framebuffer *intel_fb = to_intel_framebuffer(fb); 262 + struct drm_i915_gem_object *obj = intel_fb->obj; 263 + struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 264 + 265 + I915_WRITE(IVB_FBC_RT_BASE, obj->gtt_offset | ILK_FBC_RT_VALID); 266 + 267 + I915_WRITE(ILK_DPFC_CONTROL, DPFC_CTL_EN | DPFC_CTL_LIMIT_1X | 268 + IVB_DPFC_CTL_FENCE_EN | 269 + intel_crtc->plane << IVB_DPFC_CTL_PLANE_SHIFT); 270 + 271 + if (IS_IVYBRIDGE(dev)) { 272 + /* WaFbcAsynchFlipDisableFbcQueue:ivb */ 273 + I915_WRITE(ILK_DISPLAY_CHICKEN1, ILK_FBCQ_DIS); 274 + /* WaFbcDisableDpfcClockGating:ivb */ 275 + I915_WRITE(ILK_DSPCLK_GATE_D, 276 + I915_READ(ILK_DSPCLK_GATE_D) | 277 + ILK_DPFCUNIT_CLOCK_GATE_DISABLE); 278 + } else { 279 + /* WaFbcAsynchFlipDisableFbcQueue:hsw */ 280 + I915_WRITE(HSW_PIPE_SLICE_CHICKEN_1(intel_crtc->pipe), 281 + HSW_BYPASS_FBC_QUEUE); 282 + /* WaFbcDisableDpfcClockGating:hsw */ 283 + I915_WRITE(HSW_CLKGATE_DISABLE_PART_1, 284 + I915_READ(HSW_CLKGATE_DISABLE_PART_1) | 285 + HSW_DPFC_GATING_DISABLE); 286 + } 287 + 288 + I915_WRITE(SNB_DPFC_CTL_SA, 289 + SNB_CPU_FENCE_ENABLE | obj->fence_reg); 290 + I915_WRITE(DPFC_CPU_FENCE_OFFSET, crtc->y); 291 + 292 + sandybridge_blit_fbc_update(dev); 293 + 294 + DRM_DEBUG_KMS("enabled fbc on plane %d\n", intel_crtc->plane); 266 295 } 267 296 268 297 bool intel_fbc_enabled(struct drm_device *dev) ··· 492 439 if (enable_fbc < 0) { 493 440 DRM_DEBUG_KMS("fbc set to per-chip default\n"); 494 441 enable_fbc = 1; 495 - if (INTEL_INFO(dev)->gen <= 6) 442 + if (INTEL_INFO(dev)->gen <= 7 && !IS_HASWELL(dev)) 496 443 enable_fbc = 0; 497 444 } 498 445 if (!enable_fbc) { ··· 513 460 dev_priv->no_fbc_reason = FBC_MODE_TOO_LARGE; 514 461 goto out_disable; 515 462 } 516 - if ((IS_I915GM(dev) || IS_I945GM(dev)) && intel_crtc->plane != 0) { 463 + if ((IS_I915GM(dev) || IS_I945GM(dev) || IS_HASWELL(dev)) && 464 + intel_crtc->plane != 0) { 517 465 DRM_DEBUG_KMS("plane not 0, disabling compression\n"); 518 466 dev_priv->no_fbc_reason = 
FBC_BAD_PLANE; 519 467 goto out_disable; ··· 535 481 goto out_disable; 536 482 537 483 if (i915_gem_stolen_setup_compression(dev, intel_fb->obj->base.size)) { 538 - DRM_INFO("not enough stolen space for compressed buffer (need %zd bytes), disabling\n", intel_fb->obj->base.size); 539 - DRM_INFO("hint: you may be able to increase stolen memory size in the BIOS to avoid this\n"); 540 484 DRM_DEBUG_KMS("framebuffer too large, disabling compression\n"); 541 485 dev_priv->no_fbc_reason = FBC_STOLEN_TOO_SMALL; 542 486 goto out_disable; ··· 1685 1633 I915_WRITE(DISP_ARB_CTL, 1686 1634 I915_READ(DISP_ARB_CTL) | DISP_FBC_WM_DIS); 1687 1635 return false; 1636 + } else if (INTEL_INFO(dev)->gen >= 6) { 1637 + /* enable FBC WM (except on ILK, where it must remain off) */ 1638 + I915_WRITE(DISP_ARB_CTL, 1639 + I915_READ(DISP_ARB_CTL) & ~DISP_FBC_WM_DIS); 1688 1640 } 1689 1641 1690 1642 if (display_wm > display->max_wm) { ··· 2202 2146 &sandybridge_display_wm_info, 2203 2147 latency, &sprite_wm); 2204 2148 if (!ret) { 2205 - DRM_DEBUG_KMS("failed to compute sprite wm for pipe %d\n", 2206 - pipe); 2149 + DRM_DEBUG_KMS("failed to compute sprite wm for pipe %c\n", 2150 + pipe_name(pipe)); 2207 2151 return; 2208 2152 } 2209 2153 2210 2154 val = I915_READ(reg); 2211 2155 val &= ~WM0_PIPE_SPRITE_MASK; 2212 2156 I915_WRITE(reg, val | (sprite_wm << WM0_PIPE_SPRITE_SHIFT)); 2213 - DRM_DEBUG_KMS("sprite watermarks For pipe %d - %d\n", pipe, sprite_wm); 2157 + DRM_DEBUG_KMS("sprite watermarks For pipe %c - %d\n", pipe_name(pipe), sprite_wm); 2214 2158 2215 2159 2216 2160 ret = sandybridge_compute_sprite_srwm(dev, pipe, sprite_width, ··· 2219 2163 SNB_READ_WM1_LATENCY() * 500, 2220 2164 &sprite_wm); 2221 2165 if (!ret) { 2222 - DRM_DEBUG_KMS("failed to compute sprite lp1 wm on pipe %d\n", 2223 - pipe); 2166 + DRM_DEBUG_KMS("failed to compute sprite lp1 wm on pipe %c\n", 2167 + pipe_name(pipe)); 2224 2168 return; 2225 2169 } 2226 2170 I915_WRITE(WM1S_LP_ILK, sprite_wm); ··· 2235 2179 
SNB_READ_WM2_LATENCY() * 500, 2236 2180 &sprite_wm); 2237 2181 if (!ret) { 2238 - DRM_DEBUG_KMS("failed to compute sprite lp2 wm on pipe %d\n", 2239 - pipe); 2182 + DRM_DEBUG_KMS("failed to compute sprite lp2 wm on pipe %c\n", 2183 + pipe_name(pipe)); 2240 2184 return; 2241 2185 } 2242 2186 I915_WRITE(WM2S_LP_IVB, sprite_wm); ··· 2247 2191 SNB_READ_WM3_LATENCY() * 500, 2248 2192 &sprite_wm); 2249 2193 if (!ret) { 2250 - DRM_DEBUG_KMS("failed to compute sprite lp3 wm on pipe %d\n", 2251 - pipe); 2194 + DRM_DEBUG_KMS("failed to compute sprite lp3 wm on pipe %c\n", 2195 + pipe_name(pipe)); 2252 2196 return; 2253 2197 } 2254 2198 I915_WRITE(WM3S_LP_IVB, sprite_wm); ··· 2537 2481 trace_intel_gpu_freq_change(val * 50); 2538 2482 } 2539 2483 2484 + void valleyview_set_rps(struct drm_device *dev, u8 val) 2485 + { 2486 + struct drm_i915_private *dev_priv = dev->dev_private; 2487 + unsigned long timeout = jiffies + msecs_to_jiffies(10); 2488 + u32 limits = gen6_rps_limits(dev_priv, &val); 2489 + u32 pval; 2490 + 2491 + WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock)); 2492 + WARN_ON(val > dev_priv->rps.max_delay); 2493 + WARN_ON(val < dev_priv->rps.min_delay); 2494 + 2495 + DRM_DEBUG_DRIVER("gpu freq request from %d to %d\n", 2496 + vlv_gpu_freq(dev_priv->mem_freq, 2497 + dev_priv->rps.cur_delay), 2498 + vlv_gpu_freq(dev_priv->mem_freq, val)); 2499 + 2500 + if (val == dev_priv->rps.cur_delay) 2501 + return; 2502 + 2503 + valleyview_punit_write(dev_priv, PUNIT_REG_GPU_FREQ_REQ, val); 2504 + 2505 + do { 2506 + valleyview_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS, &pval); 2507 + if (time_after(jiffies, timeout)) { 2508 + DRM_DEBUG_DRIVER("timed out waiting for Punit\n"); 2509 + break; 2510 + } 2511 + udelay(10); 2512 + } while (pval & 1); 2513 + 2514 + valleyview_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS, &pval); 2515 + if ((pval >> 8) != val) 2516 + DRM_DEBUG_DRIVER("punit overrode freq: %d requested, but got %d\n", 2517 + val, pval >> 8); 2518 + 2519 + /* Make sure we 
continue to get interrupts 2520 + * until we hit the minimum or maximum frequencies. 2521 + */ 2522 + I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, limits); 2523 + 2524 + dev_priv->rps.cur_delay = pval >> 8; 2525 + 2526 + trace_intel_gpu_freq_change(vlv_gpu_freq(dev_priv->mem_freq, val)); 2527 + } 2528 + 2529 + 2540 2530 static void gen6_disable_rps(struct drm_device *dev) 2541 2531 { 2542 2532 struct drm_i915_private *dev_priv = dev->dev_private; ··· 2601 2499 spin_unlock_irq(&dev_priv->rps.lock); 2602 2500 2603 2501 I915_WRITE(GEN6_PMIIR, I915_READ(GEN6_PMIIR)); 2502 + } 2503 + 2504 + static void valleyview_disable_rps(struct drm_device *dev) 2505 + { 2506 + struct drm_i915_private *dev_priv = dev->dev_private; 2507 + 2508 + I915_WRITE(GEN6_RC_CONTROL, 0); 2509 + I915_WRITE(GEN6_PMINTRMSK, 0xffffffff); 2510 + I915_WRITE(GEN6_PMIER, 0); 2511 + /* Complete PM interrupt masking here doesn't race with the rps work 2512 + * item again unmasking PM interrupts because that is using a different 2513 + * register (PMIMR) to mask PM interrupts. The only risk is in leaving 2514 + * stale bits in PMIIR and PMIMR which gen6_enable_rps will clean up. 
*/ 2515 + 2516 + spin_lock_irq(&dev_priv->rps.lock); 2517 + dev_priv->rps.pm_iir = 0; 2518 + spin_unlock_irq(&dev_priv->rps.lock); 2519 + 2520 + I915_WRITE(GEN6_PMIIR, I915_READ(GEN6_PMIIR)); 2521 + 2522 + if (dev_priv->vlv_pctx) { 2523 + drm_gem_object_unreference(&dev_priv->vlv_pctx->base); 2524 + dev_priv->vlv_pctx = NULL; 2525 + } 2604 2526 } 2605 2527 2606 2528 int intel_enable_rc6(const struct drm_device *dev) ··· 2866 2740 ring_freq << GEN6_PCODE_FREQ_RING_RATIO_SHIFT | 2867 2741 gpu_freq); 2868 2742 } 2743 + } 2744 + 2745 + int valleyview_rps_max_freq(struct drm_i915_private *dev_priv) 2746 + { 2747 + u32 val, rp0; 2748 + 2749 + valleyview_nc_read(dev_priv, IOSF_NC_FB_GFX_FREQ_FUSE, &val); 2750 + 2751 + rp0 = (val & FB_GFX_MAX_FREQ_FUSE_MASK) >> FB_GFX_MAX_FREQ_FUSE_SHIFT; 2752 + /* Clamp to max */ 2753 + rp0 = min_t(u32, rp0, 0xea); 2754 + 2755 + return rp0; 2756 + } 2757 + 2758 + static int valleyview_rps_rpe_freq(struct drm_i915_private *dev_priv) 2759 + { 2760 + u32 val, rpe; 2761 + 2762 + valleyview_nc_read(dev_priv, IOSF_NC_FB_GFX_FMAX_FUSE_LO, &val); 2763 + rpe = (val & FB_FMAX_VMIN_FREQ_LO_MASK) >> FB_FMAX_VMIN_FREQ_LO_SHIFT; 2764 + valleyview_nc_read(dev_priv, IOSF_NC_FB_GFX_FMAX_FUSE_HI, &val); 2765 + rpe |= (val & FB_FMAX_VMIN_FREQ_HI_MASK) << 5; 2766 + 2767 + return rpe; 2768 + } 2769 + 2770 + int valleyview_rps_min_freq(struct drm_i915_private *dev_priv) 2771 + { 2772 + u32 val; 2773 + 2774 + valleyview_punit_read(dev_priv, PUNIT_REG_GPU_LFM, &val); 2775 + 2776 + return val & 0xff; 2777 + } 2778 + 2779 + static void vlv_rps_timer_work(struct work_struct *work) 2780 + { 2781 + drm_i915_private_t *dev_priv = container_of(work, drm_i915_private_t, 2782 + rps.vlv_work.work); 2783 + 2784 + /* 2785 + * Timer fired, we must be idle. Drop to min voltage state. 2786 + * Note: we use RPe here since it should match the 2787 + * Vmin we were shooting for. 
That should give us better 2788 + * perf when we come back out of RC6 than if we used the 2789 + * min freq available. 2790 + */ 2791 + mutex_lock(&dev_priv->rps.hw_lock); 2792 + valleyview_set_rps(dev_priv->dev, dev_priv->rps.rpe_delay); 2793 + mutex_unlock(&dev_priv->rps.hw_lock); 2794 + } 2795 + 2796 + static void valleyview_setup_pctx(struct drm_device *dev) 2797 + { 2798 + struct drm_i915_private *dev_priv = dev->dev_private; 2799 + struct drm_i915_gem_object *pctx; 2800 + unsigned long pctx_paddr; 2801 + u32 pcbr; 2802 + int pctx_size = 24*1024; 2803 + 2804 + pcbr = I915_READ(VLV_PCBR); 2805 + if (pcbr) { 2806 + /* BIOS set it up already, grab the pre-alloc'd space */ 2807 + int pcbr_offset; 2808 + 2809 + pcbr_offset = (pcbr & (~4095)) - dev_priv->mm.stolen_base; 2810 + pctx = i915_gem_object_create_stolen_for_preallocated(dev_priv->dev, 2811 + pcbr_offset, 2812 + -1, 2813 + pctx_size); 2814 + goto out; 2815 + } 2816 + 2817 + /* 2818 + * From the Gunit register HAS: 2819 + * The Gfx driver is expected to program this register and ensure 2820 + * proper allocation within Gfx stolen memory. For example, this 2821 + * register should be programmed such than the PCBR range does not 2822 + * overlap with other ranges, such as the frame buffer, protected 2823 + * memory, or any other relevant ranges. 
2824 + */ 2825 + pctx = i915_gem_object_create_stolen(dev, pctx_size); 2826 + if (!pctx) { 2827 + DRM_DEBUG("not enough stolen space for PCTX, disabling\n"); 2828 + return; 2829 + } 2830 + 2831 + pctx_paddr = dev_priv->mm.stolen_base + pctx->stolen->start; 2832 + I915_WRITE(VLV_PCBR, pctx_paddr); 2833 + 2834 + out: 2835 + dev_priv->vlv_pctx = pctx; 2836 + } 2837 + 2838 + static void valleyview_enable_rps(struct drm_device *dev) 2839 + { 2840 + struct drm_i915_private *dev_priv = dev->dev_private; 2841 + struct intel_ring_buffer *ring; 2842 + u32 gtfifodbg, val, rpe; 2843 + int i; 2844 + 2845 + WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock)); 2846 + 2847 + if ((gtfifodbg = I915_READ(GTFIFODBG))) { 2848 + DRM_ERROR("GT fifo had a previous error %x\n", gtfifodbg); 2849 + I915_WRITE(GTFIFODBG, gtfifodbg); 2850 + } 2851 + 2852 + valleyview_setup_pctx(dev); 2853 + 2854 + gen6_gt_force_wake_get(dev_priv); 2855 + 2856 + I915_WRITE(GEN6_RP_UP_THRESHOLD, 59400); 2857 + I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 245000); 2858 + I915_WRITE(GEN6_RP_UP_EI, 66000); 2859 + I915_WRITE(GEN6_RP_DOWN_EI, 350000); 2860 + 2861 + I915_WRITE(GEN6_RP_IDLE_HYSTERSIS, 10); 2862 + 2863 + I915_WRITE(GEN6_RP_CONTROL, 2864 + GEN6_RP_MEDIA_TURBO | 2865 + GEN6_RP_MEDIA_HW_NORMAL_MODE | 2866 + GEN6_RP_MEDIA_IS_GFX | 2867 + GEN6_RP_ENABLE | 2868 + GEN6_RP_UP_BUSY_AVG | 2869 + GEN6_RP_DOWN_IDLE_CONT); 2870 + 2871 + I915_WRITE(GEN6_RC6_WAKE_RATE_LIMIT, 0x00280000); 2872 + I915_WRITE(GEN6_RC_EVALUATION_INTERVAL, 125000); 2873 + I915_WRITE(GEN6_RC_IDLE_HYSTERSIS, 25); 2874 + 2875 + for_each_ring(ring, dev_priv, i) 2876 + I915_WRITE(RING_MAX_IDLE(ring->mmio_base), 10); 2877 + 2878 + I915_WRITE(GEN6_RC6_THRESHOLD, 0xc350); 2879 + 2880 + /* allows RC6 residency counter to work */ 2881 + I915_WRITE(0x138104, _MASKED_BIT_ENABLE(0x3)); 2882 + I915_WRITE(GEN6_RC_CONTROL, 2883 + GEN7_RC_CTL_TO_MODE); 2884 + 2885 + valleyview_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS, &val); 2886 + switch ((val >> 6) & 3) { 2887 + 
case 0: 2888 + case 1: 2889 + dev_priv->mem_freq = 800; 2890 + break; 2891 + case 2: 2892 + dev_priv->mem_freq = 1066; 2893 + break; 2894 + case 3: 2895 + dev_priv->mem_freq = 1333; 2896 + break; 2897 + } 2898 + DRM_DEBUG_DRIVER("DDR speed: %d MHz", dev_priv->mem_freq); 2899 + 2900 + DRM_DEBUG_DRIVER("GPLL enabled? %s\n", val & 0x10 ? "yes" : "no"); 2901 + DRM_DEBUG_DRIVER("GPU status: 0x%08x\n", val); 2902 + 2903 + DRM_DEBUG_DRIVER("current GPU freq: %d\n", 2904 + vlv_gpu_freq(dev_priv->mem_freq, (val >> 8) & 0xff)); 2905 + dev_priv->rps.cur_delay = (val >> 8) & 0xff; 2906 + 2907 + dev_priv->rps.max_delay = valleyview_rps_max_freq(dev_priv); 2908 + dev_priv->rps.hw_max = dev_priv->rps.max_delay; 2909 + DRM_DEBUG_DRIVER("max GPU freq: %d\n", vlv_gpu_freq(dev_priv->mem_freq, 2910 + dev_priv->rps.max_delay)); 2911 + 2912 + rpe = valleyview_rps_rpe_freq(dev_priv); 2913 + DRM_DEBUG_DRIVER("RPe GPU freq: %d\n", 2914 + vlv_gpu_freq(dev_priv->mem_freq, rpe)); 2915 + dev_priv->rps.rpe_delay = rpe; 2916 + 2917 + val = valleyview_rps_min_freq(dev_priv); 2918 + DRM_DEBUG_DRIVER("min GPU freq: %d\n", vlv_gpu_freq(dev_priv->mem_freq, 2919 + val)); 2920 + dev_priv->rps.min_delay = val; 2921 + 2922 + DRM_DEBUG_DRIVER("setting GPU freq to %d\n", 2923 + vlv_gpu_freq(dev_priv->mem_freq, rpe)); 2924 + 2925 + INIT_DELAYED_WORK(&dev_priv->rps.vlv_work, vlv_rps_timer_work); 2926 + 2927 + valleyview_set_rps(dev_priv->dev, rpe); 2928 + 2929 + /* requires MSI enabled */ 2930 + I915_WRITE(GEN6_PMIER, GEN6_PM_DEFERRED_EVENTS); 2931 + spin_lock_irq(&dev_priv->rps.lock); 2932 + WARN_ON(dev_priv->rps.pm_iir != 0); 2933 + I915_WRITE(GEN6_PMIMR, 0); 2934 + spin_unlock_irq(&dev_priv->rps.lock); 2935 + /* enable all PM interrupts */ 2936 + I915_WRITE(GEN6_PMINTRMSK, 0); 2937 + 2938 + gen6_gt_force_wake_put(dev_priv); 2869 2939 } 2870 2940 2871 2941 void ironlake_teardown_rc6(struct drm_device *dev) ··· 3787 3465 { 3788 3466 struct drm_i915_private *dev_priv = dev->dev_private; 3789 3467 3468 + /* 
Interrupts should be disabled already to avoid re-arming. */ 3469 + WARN_ON(dev->irq_enabled); 3470 + 3790 3471 if (IS_IRONLAKE_M(dev)) { 3791 3472 ironlake_disable_drps(dev); 3792 3473 ironlake_disable_rc6(dev); 3793 - } else if (INTEL_INFO(dev)->gen >= 6 && !IS_VALLEYVIEW(dev)) { 3474 + } else if (INTEL_INFO(dev)->gen >= 6) { 3794 3475 cancel_delayed_work_sync(&dev_priv->rps.delayed_resume_work); 3476 + cancel_work_sync(&dev_priv->rps.work); 3477 + if (IS_VALLEYVIEW(dev)) 3478 + cancel_delayed_work_sync(&dev_priv->rps.vlv_work); 3795 3479 mutex_lock(&dev_priv->rps.hw_lock); 3796 - gen6_disable_rps(dev); 3480 + if (IS_VALLEYVIEW(dev)) 3481 + valleyview_disable_rps(dev); 3482 + else 3483 + gen6_disable_rps(dev); 3797 3484 mutex_unlock(&dev_priv->rps.hw_lock); 3798 3485 } 3799 3486 } ··· 3815 3484 struct drm_device *dev = dev_priv->dev; 3816 3485 3817 3486 mutex_lock(&dev_priv->rps.hw_lock); 3818 - gen6_enable_rps(dev); 3819 - gen6_update_ring_freq(dev); 3487 + 3488 + if (IS_VALLEYVIEW(dev)) { 3489 + valleyview_enable_rps(dev); 3490 + } else { 3491 + gen6_enable_rps(dev); 3492 + gen6_update_ring_freq(dev); 3493 + } 3820 3494 mutex_unlock(&dev_priv->rps.hw_lock); 3821 3495 } 3822 3496 ··· 3833 3497 ironlake_enable_drps(dev); 3834 3498 ironlake_enable_rc6(dev); 3835 3499 intel_init_emon(dev); 3836 - } else if ((IS_GEN6(dev) || IS_GEN7(dev)) && !IS_VALLEYVIEW(dev)) { 3500 + } else if (IS_GEN6(dev) || IS_GEN7(dev)) { 3837 3501 /* 3838 3502 * PCU communication is slow and this doesn't need to be 3839 3503 * done at any specific time, so do this out of our fast path ··· 3915 3579 _3D_CHICKEN2_WM_READ_PIPELINED << 16 | 3916 3580 _3D_CHICKEN2_WM_READ_PIPELINED); 3917 3581 3918 - /* WaDisableRenderCachePipelinedFlush */ 3582 + /* WaDisableRenderCachePipelinedFlush:ilk */ 3919 3583 I915_WRITE(CACHE_MODE_0, 3920 3584 _MASKED_BIT_ENABLE(CM0_PIPELINED_RENDER_FLUSH_DISABLE)); 3921 3585 ··· 3943 3607 val = I915_READ(TRANS_CHICKEN2(pipe)); 3944 3608 val |= 
TRANS_CHICKEN2_TIMING_OVERRIDE; 3945 3609 val &= ~TRANS_CHICKEN2_FDI_POLARITY_REVERSED; 3946 - if (dev_priv->fdi_rx_polarity_inverted) 3610 + if (dev_priv->vbt.fdi_rx_polarity_inverted) 3947 3611 val |= TRANS_CHICKEN2_FDI_POLARITY_REVERSED; 3948 3612 val &= ~TRANS_CHICKEN2_FRAME_START_DELAY_MASK; 3949 3613 val &= ~TRANS_CHICKEN2_DISABLE_DEEP_COLOR_COUNTER; ··· 3982 3646 I915_READ(ILK_DISPLAY_CHICKEN2) | 3983 3647 ILK_ELPIN_409_SELECT); 3984 3648 3985 - /* WaDisableHiZPlanesWhenMSAAEnabled */ 3649 + /* WaDisableHiZPlanesWhenMSAAEnabled:snb */ 3986 3650 I915_WRITE(_3D_CHICKEN, 3987 3651 _MASKED_BIT_ENABLE(_3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB)); 3988 3652 3989 - /* WaSetupGtModeTdRowDispatch */ 3653 + /* WaSetupGtModeTdRowDispatch:snb */ 3990 3654 if (IS_SNB_GT1(dev)) 3991 3655 I915_WRITE(GEN6_GT_MODE, 3992 3656 _MASKED_BIT_ENABLE(GEN6_TD_FOUR_ROW_DISPATCH_DISABLE)); ··· 4013 3677 * According to the spec, bit 11 (RCCUNIT) must also be set, 4014 3678 * but we didn't debug actual testcases to find it out. 4015 3679 * 4016 - * Also apply WaDisableVDSUnitClockGating and 4017 - * WaDisableRCPBUnitClockGating. 3680 + * Also apply WaDisableVDSUnitClockGating:snb and 3681 + * WaDisableRCPBUnitClockGating:snb. 
4018 3682 */ 4019 3683 I915_WRITE(GEN6_UCGCTL2, 4020 3684 GEN7_VDSUNIT_CLOCK_GATE_DISABLE | ··· 4045 3709 ILK_DPARBUNIT_CLOCK_GATE_ENABLE | 4046 3710 ILK_DPFDUNIT_CLOCK_GATE_ENABLE); 4047 3711 4048 - /* WaMbcDriverBootEnable */ 3712 + /* WaMbcDriverBootEnable:snb */ 4049 3713 I915_WRITE(GEN6_MBCTL, I915_READ(GEN6_MBCTL) | 4050 3714 GEN6_MBCTL_ENABLE_BOOT_FETCH); 4051 3715 ··· 4075 3739 reg |= GEN7_FF_VS_SCHED_HW; 4076 3740 reg |= GEN7_FF_DS_SCHED_HW; 4077 3741 4078 - /* WaVSRefCountFullforceMissDisable */ 4079 3742 if (IS_HASWELL(dev_priv->dev)) 4080 3743 reg &= ~GEN7_FF_VS_REF_CNT_FFME; 4081 3744 ··· 4093 3758 I915_WRITE(SOUTH_DSPCLK_GATE_D, 4094 3759 I915_READ(SOUTH_DSPCLK_GATE_D) | 4095 3760 PCH_LP_PARTITION_LEVEL_DISABLE); 3761 + 3762 + /* WADPOClockGatingDisable:hsw */ 3763 + I915_WRITE(_TRANSA_CHICKEN1, 3764 + I915_READ(_TRANSA_CHICKEN1) | 3765 + TRANS_CHICKEN1_DP0UNIT_GC_DISABLE); 3766 + } 3767 + 3768 + static void lpt_suspend_hw(struct drm_device *dev) 3769 + { 3770 + struct drm_i915_private *dev_priv = dev->dev_private; 3771 + 3772 + if (dev_priv->pch_id == INTEL_PCH_LPT_LP_DEVICE_ID_TYPE) { 3773 + uint32_t val = I915_READ(SOUTH_DSPCLK_GATE_D); 3774 + 3775 + val &= ~PCH_LP_PARTITION_LEVEL_DISABLE; 3776 + I915_WRITE(SOUTH_DSPCLK_GATE_D, val); 3777 + } 4096 3778 } 4097 3779 4098 3780 static void haswell_init_clock_gating(struct drm_device *dev) ··· 4122 3770 I915_WRITE(WM1_LP_ILK, 0); 4123 3771 4124 3772 /* According to the spec, bit 13 (RCZUNIT) must be set on IVB. 4125 - * This implements the WaDisableRCZUnitClockGating workaround. 3773 + * This implements the WaDisableRCZUnitClockGating:hsw workaround. 4126 3774 */ 4127 3775 I915_WRITE(GEN6_UCGCTL2, GEN6_RCZUNIT_CLOCK_GATE_DISABLE); 4128 3776 4129 - /* Apply the WaDisableRHWOOptimizationForRenderHang workaround. */ 3777 + /* Apply the WaDisableRHWOOptimizationForRenderHang:hsw workaround. 
*/ 4130 3778 I915_WRITE(GEN7_COMMON_SLICE_CHICKEN1, 4131 3779 GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC); 4132 3780 4133 - /* WaApplyL3ControlAndL3ChickenMode requires those two on Ivy Bridge */ 3781 + /* WaApplyL3ControlAndL3ChickenMode:hsw */ 4134 3782 I915_WRITE(GEN7_L3CNTLREG1, 4135 3783 GEN7_WA_FOR_GEN7_L3_CONTROL); 4136 3784 I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER, 4137 3785 GEN7_WA_L3_CHICKEN_MODE); 4138 3786 4139 - /* This is required by WaCatErrorRejectionIssue */ 3787 + /* This is required by WaCatErrorRejectionIssue:hsw */ 4140 3788 I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG, 4141 3789 I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) | 4142 3790 GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB); ··· 4148 3796 intel_flush_display_plane(dev_priv, pipe); 4149 3797 } 4150 3798 3799 + /* WaVSRefCountFullforceMissDisable:hsw */ 4151 3800 gen7_setup_fixed_func_scheduler(dev_priv); 4152 3801 4153 - /* WaDisable4x2SubspanOptimization */ 3802 + /* WaDisable4x2SubspanOptimization:hsw */ 4154 3803 I915_WRITE(CACHE_MODE_1, 4155 3804 _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE)); 4156 3805 4157 - /* WaMbcDriverBootEnable */ 3806 + /* WaMbcDriverBootEnable:hsw */ 4158 3807 I915_WRITE(GEN6_MBCTL, I915_READ(GEN6_MBCTL) | 4159 3808 GEN6_MBCTL_ENABLE_BOOT_FETCH); 4160 3809 4161 - /* WaSwitchSolVfFArbitrationPriority */ 3810 + /* WaSwitchSolVfFArbitrationPriority:hsw */ 4162 3811 I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) | HSW_ECOCHK_ARB_PRIO_SOL); 4163 3812 4164 3813 /* XXX: This is a workaround for early silicon revisions and should be ··· 4186 3833 4187 3834 I915_WRITE(ILK_DSPCLK_GATE_D, ILK_VRHUNIT_CLOCK_GATE_DISABLE); 4188 3835 4189 - /* WaDisableEarlyCull */ 3836 + /* WaDisableEarlyCull:ivb */ 4190 3837 I915_WRITE(_3D_CHICKEN3, 4191 3838 _MASKED_BIT_ENABLE(_3D_CHICKEN_SF_DISABLE_OBJEND_CULL)); 4192 3839 4193 - /* WaDisableBackToBackFlipFix */ 3840 + /* WaDisableBackToBackFlipFix:ivb */ 4194 3841 I915_WRITE(IVB_CHICKEN3, 4195 3842 CHICKEN3_DGMG_REQ_OUT_FIX_DISABLE | 4196 3843 
CHICKEN3_DGMG_DONE_FIX_DISABLE); 4197 3844 4198 - /* WaDisablePSDDualDispatchEnable */ 3845 + /* WaDisablePSDDualDispatchEnable:ivb */ 4199 3846 if (IS_IVB_GT1(dev)) 4200 3847 I915_WRITE(GEN7_HALF_SLICE_CHICKEN1, 4201 3848 _MASKED_BIT_ENABLE(GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE)); ··· 4203 3850 I915_WRITE(GEN7_HALF_SLICE_CHICKEN1_GT2, 4204 3851 _MASKED_BIT_ENABLE(GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE)); 4205 3852 4206 - /* Apply the WaDisableRHWOOptimizationForRenderHang workaround. */ 3853 + /* Apply the WaDisableRHWOOptimizationForRenderHang:ivb workaround. */ 4207 3854 I915_WRITE(GEN7_COMMON_SLICE_CHICKEN1, 4208 3855 GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC); 4209 3856 4210 - /* WaApplyL3ControlAndL3ChickenMode requires those two on Ivy Bridge */ 3857 + /* WaApplyL3ControlAndL3ChickenMode:ivb */ 4211 3858 I915_WRITE(GEN7_L3CNTLREG1, 4212 3859 GEN7_WA_FOR_GEN7_L3_CONTROL); 4213 3860 I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER, ··· 4220 3867 _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE)); 4221 3868 4222 3869 4223 - /* WaForceL3Serialization */ 3870 + /* WaForceL3Serialization:ivb */ 4224 3871 I915_WRITE(GEN7_L3SQCREG4, I915_READ(GEN7_L3SQCREG4) & 4225 3872 ~L3SQ_URB_READ_CAM_MATCH_DISABLE); 4226 3873 ··· 4235 3882 * but we didn't debug actual testcases to find it out. 4236 3883 * 4237 3884 * According to the spec, bit 13 (RCZUNIT) must be set on IVB. 4238 - * This implements the WaDisableRCZUnitClockGating workaround. 3885 + * This implements the WaDisableRCZUnitClockGating:ivb workaround. 
4239 3886 */ 4240 3887 I915_WRITE(GEN6_UCGCTL2, 4241 3888 GEN6_RCZUNIT_CLOCK_GATE_DISABLE | 4242 3889 GEN6_RCCUNIT_CLOCK_GATE_DISABLE); 4243 3890 4244 - /* This is required by WaCatErrorRejectionIssue */ 3891 + /* This is required by WaCatErrorRejectionIssue:ivb */ 4245 3892 I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG, 4246 3893 I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) | 4247 3894 GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB); ··· 4253 3900 intel_flush_display_plane(dev_priv, pipe); 4254 3901 } 4255 3902 4256 - /* WaMbcDriverBootEnable */ 3903 + /* WaMbcDriverBootEnable:ivb */ 4257 3904 I915_WRITE(GEN6_MBCTL, I915_READ(GEN6_MBCTL) | 4258 3905 GEN6_MBCTL_ENABLE_BOOT_FETCH); 4259 3906 3907 + /* WaVSRefCountFullforceMissDisable:ivb */ 4260 3908 gen7_setup_fixed_func_scheduler(dev_priv); 4261 3909 4262 - /* WaDisable4x2SubspanOptimization */ 3910 + /* WaDisable4x2SubspanOptimization:ivb */ 4263 3911 I915_WRITE(CACHE_MODE_1, 4264 3912 _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE)); 4265 3913 ··· 4286 3932 4287 3933 I915_WRITE(ILK_DSPCLK_GATE_D, ILK_VRHUNIT_CLOCK_GATE_DISABLE); 4288 3934 4289 - /* WaDisableEarlyCull */ 3935 + /* WaDisableEarlyCull:vlv */ 4290 3936 I915_WRITE(_3D_CHICKEN3, 4291 3937 _MASKED_BIT_ENABLE(_3D_CHICKEN_SF_DISABLE_OBJEND_CULL)); 4292 3938 4293 - /* WaDisableBackToBackFlipFix */ 3939 + /* WaDisableBackToBackFlipFix:vlv */ 4294 3940 I915_WRITE(IVB_CHICKEN3, 4295 3941 CHICKEN3_DGMG_REQ_OUT_FIX_DISABLE | 4296 3942 CHICKEN3_DGMG_DONE_FIX_DISABLE); 4297 3943 4298 - /* WaDisablePSDDualDispatchEnable */ 3944 + /* WaDisablePSDDualDispatchEnable:vlv */ 4299 3945 I915_WRITE(GEN7_HALF_SLICE_CHICKEN1, 4300 3946 _MASKED_BIT_ENABLE(GEN7_MAX_PS_THREAD_DEP | 4301 3947 GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE)); 4302 3948 4303 - /* Apply the WaDisableRHWOOptimizationForRenderHang workaround. */ 3949 + /* Apply the WaDisableRHWOOptimizationForRenderHang:vlv workaround. 
*/ 4304 3950 I915_WRITE(GEN7_COMMON_SLICE_CHICKEN1, 4305 3951 GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC); 4306 3952 4307 - /* WaApplyL3ControlAndL3ChickenMode requires those two on Ivy Bridge */ 3953 + /* WaApplyL3ControlAndL3ChickenMode:vlv */ 4308 3954 I915_WRITE(GEN7_L3CNTLREG1, I915_READ(GEN7_L3CNTLREG1) | GEN7_L3AGDIS); 4309 3955 I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER, GEN7_WA_L3_CHICKEN_MODE); 4310 3956 4311 - /* WaForceL3Serialization */ 3957 + /* WaForceL3Serialization:vlv */ 4312 3958 I915_WRITE(GEN7_L3SQCREG4, I915_READ(GEN7_L3SQCREG4) & 4313 3959 ~L3SQ_URB_READ_CAM_MATCH_DISABLE); 4314 3960 4315 - /* WaDisableDopClockGating */ 3961 + /* WaDisableDopClockGating:vlv */ 4316 3962 I915_WRITE(GEN7_ROW_CHICKEN2, 4317 3963 _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE)); 4318 3964 4319 - /* WaForceL3Serialization */ 3965 + /* WaForceL3Serialization:vlv */ 4320 3966 I915_WRITE(GEN7_L3SQCREG4, I915_READ(GEN7_L3SQCREG4) & 4321 3967 ~L3SQ_URB_READ_CAM_MATCH_DISABLE); 4322 3968 4323 - /* This is required by WaCatErrorRejectionIssue */ 3969 + /* This is required by WaCatErrorRejectionIssue:vlv */ 4324 3970 I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG, 4325 3971 I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) | 4326 3972 GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB); 4327 3973 4328 - /* WaMbcDriverBootEnable */ 3974 + /* WaMbcDriverBootEnable:vlv */ 4329 3975 I915_WRITE(GEN6_MBCTL, I915_READ(GEN6_MBCTL) | 4330 3976 GEN6_MBCTL_ENABLE_BOOT_FETCH); 4331 3977 ··· 4341 3987 * but we didn't debug actual testcases to find it out. 4342 3988 * 4343 3989 * According to the spec, bit 13 (RCZUNIT) must be set on IVB. 4344 - * This implements the WaDisableRCZUnitClockGating workaround. 3990 + * This implements the WaDisableRCZUnitClockGating:vlv workaround. 4345 3991 * 4346 - * Also apply WaDisableVDSUnitClockGating and 4347 - * WaDisableRCPBUnitClockGating. 3992 + * Also apply WaDisableVDSUnitClockGating:vlv and 3993 + * WaDisableRCPBUnitClockGating:vlv. 
4348 3994 */ 4349 3995 I915_WRITE(GEN6_UCGCTL2, 4350 3996 GEN7_VDSUNIT_CLOCK_GATE_DISABLE | ··· 4366 4012 _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE)); 4367 4013 4368 4014 /* 4369 - * WaDisableVLVClockGating_VBIIssue 4015 + * WaDisableVLVClockGating_VBIIssue:vlv 4370 4016 * Disable clock gating on th GCFG unit to prevent a delay 4371 4017 * in the reporting of vblank events. 4372 4018 */ ··· 4464 4110 dev_priv->display.init_clock_gating(dev); 4465 4111 } 4466 4112 4113 + void intel_suspend_hw(struct drm_device *dev) 4114 + { 4115 + if (HAS_PCH_LPT(dev)) 4116 + lpt_suspend_hw(dev); 4117 + } 4118 + 4467 4119 /** 4468 4120 * We should only use the power well if we explicitly asked the hardware to 4469 4121 * enable it, so check if it's enabled and also check if we've requested it to 4470 4122 * be enabled. 4471 4123 */ 4472 - bool intel_using_power_well(struct drm_device *dev) 4124 + bool intel_display_power_enabled(struct drm_device *dev, 4125 + enum intel_display_power_domain domain) 4473 4126 { 4474 4127 struct drm_i915_private *dev_priv = dev->dev_private; 4475 4128 4476 - if (IS_HASWELL(dev)) 4129 + if (!HAS_POWER_WELL(dev)) 4130 + return true; 4131 + 4132 + switch (domain) { 4133 + case POWER_DOMAIN_PIPE_A: 4134 + case POWER_DOMAIN_TRANSCODER_EDP: 4135 + return true; 4136 + case POWER_DOMAIN_PIPE_B: 4137 + case POWER_DOMAIN_PIPE_C: 4138 + case POWER_DOMAIN_PIPE_A_PANEL_FITTER: 4139 + case POWER_DOMAIN_PIPE_B_PANEL_FITTER: 4140 + case POWER_DOMAIN_PIPE_C_PANEL_FITTER: 4141 + case POWER_DOMAIN_TRANSCODER_A: 4142 + case POWER_DOMAIN_TRANSCODER_B: 4143 + case POWER_DOMAIN_TRANSCODER_C: 4477 4144 return I915_READ(HSW_PWR_WELL_DRIVER) == 4478 4145 (HSW_PWR_WELL_ENABLE | HSW_PWR_WELL_STATE); 4479 - else 4480 - return true; 4146 + default: 4147 + BUG(); 4148 + } 4481 4149 } 4482 4150 4483 4151 void intel_set_power_well(struct drm_device *dev, bool enable) ··· 4566 4190 if (I915_HAS_FBC(dev)) { 4567 4191 if (HAS_PCH_SPLIT(dev)) { 4568 4192 
dev_priv->display.fbc_enabled = ironlake_fbc_enabled; 4569 - dev_priv->display.enable_fbc = ironlake_enable_fbc; 4193 + if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) 4194 + dev_priv->display.enable_fbc = 4195 + gen7_enable_fbc; 4196 + else 4197 + dev_priv->display.enable_fbc = 4198 + ironlake_enable_fbc; 4570 4199 dev_priv->display.disable_fbc = ironlake_disable_fbc; 4571 4200 } else if (IS_GM45(dev)) { 4572 4201 dev_priv->display.fbc_enabled = g4x_fbc_enabled; ··· 4721 4340 FORCEWAKE_ACK_TIMEOUT_MS)) 4722 4341 DRM_ERROR("Timed out waiting for forcewake to ack request.\n"); 4723 4342 4343 + /* WaRsForcewakeWaitTC0:snb */ 4724 4344 __gen6_gt_wait_for_thread_c0(dev_priv); 4725 4345 } 4726 4346 ··· 4753 4371 FORCEWAKE_ACK_TIMEOUT_MS)) 4754 4372 DRM_ERROR("Timed out waiting for forcewake to ack request.\n"); 4755 4373 4374 + /* WaRsForcewakeWaitTC0:ivb,hsw */ 4756 4375 __gen6_gt_wait_for_thread_c0(dev_priv); 4757 4376 } 4758 4377 ··· 4857 4474 FORCEWAKE_ACK_TIMEOUT_MS)) 4858 4475 DRM_ERROR("Timed out waiting for media to ack forcewake request.\n"); 4859 4476 4477 + /* WaRsForcewakeWaitTC0:vlv */ 4860 4478 __gen6_gt_wait_for_thread_c0(dev_priv); 4861 4479 } 4862 4480 ··· 4952 4568 return 0; 4953 4569 } 4954 4570 4955 - static int vlv_punit_rw(struct drm_i915_private *dev_priv, u8 opcode, 4571 + static int vlv_punit_rw(struct drm_i915_private *dev_priv, u32 port, u8 opcode, 4956 4572 u8 addr, u32 *val) 4957 4573 { 4958 - u32 cmd, devfn, port, be, bar; 4574 + u32 cmd, devfn, be, bar; 4959 4575 4960 4576 bar = 0; 4961 4577 be = 0xf; 4962 - port = IOSF_PORT_PUNIT; 4963 4578 devfn = PCI_DEVFN(2, 0); 4964 4579 4965 4580 cmd = (devfn << IOSF_DEVFN_SHIFT) | (opcode << IOSF_OPCODE_SHIFT) | ··· 4980 4597 I915_WRITE(VLV_IOSF_DOORBELL_REQ, cmd); 4981 4598 4982 4599 if (wait_for((I915_READ(VLV_IOSF_DOORBELL_REQ) & IOSF_SB_BUSY) == 0, 4983 - 500)) { 4600 + 5)) { 4984 4601 DRM_ERROR("timeout waiting for pcode %s (%d) to finish\n", 4985 4602 opcode == PUNIT_OPCODE_REG_READ ? 
"read" : "write", 4986 4603 addr); ··· 4996 4613 4997 4614 int valleyview_punit_read(struct drm_i915_private *dev_priv, u8 addr, u32 *val) 4998 4615 { 4999 - return vlv_punit_rw(dev_priv, PUNIT_OPCODE_REG_READ, addr, val); 4616 + return vlv_punit_rw(dev_priv, IOSF_PORT_PUNIT, PUNIT_OPCODE_REG_READ, 4617 + addr, val); 5000 4618 } 5001 4619 5002 4620 int valleyview_punit_write(struct drm_i915_private *dev_priv, u8 addr, u32 val) 5003 4621 { 5004 - return vlv_punit_rw(dev_priv, PUNIT_OPCODE_REG_WRITE, addr, &val); 4622 + return vlv_punit_rw(dev_priv, IOSF_PORT_PUNIT, PUNIT_OPCODE_REG_WRITE, 4623 + addr, &val); 5005 4624 } 4625 + 4626 + int valleyview_nc_read(struct drm_i915_private *dev_priv, u8 addr, u32 *val) 4627 + { 4628 + return vlv_punit_rw(dev_priv, IOSF_PORT_NC, PUNIT_OPCODE_REG_READ, 4629 + addr, val); 4630 + } 4631 + 4632 + int vlv_gpu_freq(int ddr_freq, int val) 4633 + { 4634 + int mult, base; 4635 + 4636 + switch (ddr_freq) { 4637 + case 800: 4638 + mult = 20; 4639 + base = 120; 4640 + break; 4641 + case 1066: 4642 + mult = 22; 4643 + base = 133; 4644 + break; 4645 + case 1333: 4646 + mult = 21; 4647 + base = 125; 4648 + break; 4649 + default: 4650 + return -1; 4651 + } 4652 + 4653 + return ((val - 0xbd) * mult) + base; 4654 + } 4655 + 4656 + int vlv_freq_opcode(int ddr_freq, int val) 4657 + { 4658 + int mult, base; 4659 + 4660 + switch (ddr_freq) { 4661 + case 800: 4662 + mult = 20; 4663 + base = 120; 4664 + break; 4665 + case 1066: 4666 + mult = 22; 4667 + base = 133; 4668 + break; 4669 + case 1333: 4670 + mult = 21; 4671 + base = 125; 4672 + break; 4673 + default: 4674 + return -1; 4675 + } 4676 + 4677 + val /= mult; 4678 + val -= base / mult; 4679 + val += 0xbd; 4680 + 4681 + if (val > 0xea) 4682 + val = 0xea; 4683 + 4684 + return val; 4685 + } 4686 +
drivers/gpu/drm/i915/intel_ringbuffer.c (+2)
··· 515 515 /* We need to disable the AsyncFlip performance optimisations in order 516 516 * to use MI_WAIT_FOR_EVENT within the CS. It should already be 517 517 * programmed to '1' on all products. 518 + * 519 + * WaDisableAsyncFlipPerfMode:snb,ivb,hsw,vlv 518 520 */ 519 521 if (INTEL_INFO(dev)->gen >= 6) 520 522 I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(ASYNC_FLIP_PERF_DISABLE));

drivers/gpu/drm/i915/intel_ringbuffer.h (+1 -1)
··· 135 135 */ 136 136 bool itlb_before_ctx_switch; 137 137 struct i915_hw_context *default_context; 138 - struct drm_i915_gem_object *last_context_obj; 138 + struct i915_hw_context *last_context; 139 139 140 140 void *private; 141 141 };
drivers/gpu/drm/i915/intel_sdvo.c (+35 -9)
··· 1041 1041 return true; 1042 1042 } 1043 1043 1044 + static void i9xx_adjust_sdvo_tv_clock(struct intel_crtc_config *pipe_config) 1045 + { 1046 + unsigned dotclock = pipe_config->adjusted_mode.clock; 1047 + struct dpll *clock = &pipe_config->dpll; 1048 + 1049 + /* SDVO TV has fixed PLL values depend on its clock range, 1050 + this mirrors vbios setting. */ 1051 + if (dotclock >= 100000 && dotclock < 140500) { 1052 + clock->p1 = 2; 1053 + clock->p2 = 10; 1054 + clock->n = 3; 1055 + clock->m1 = 16; 1056 + clock->m2 = 8; 1057 + } else if (dotclock >= 140500 && dotclock <= 200000) { 1058 + clock->p1 = 1; 1059 + clock->p2 = 10; 1060 + clock->n = 6; 1061 + clock->m1 = 12; 1062 + clock->m2 = 8; 1063 + } else { 1064 + WARN(1, "SDVO TV clock out of range: %i\n", dotclock); 1065 + } 1066 + 1067 + pipe_config->clock_set = true; 1068 + } 1069 + 1044 1070 static bool intel_sdvo_compute_config(struct intel_encoder *encoder, 1045 1071 struct intel_crtc_config *pipe_config) 1046 1072 { ··· 1092 1066 (void) intel_sdvo_get_preferred_input_mode(intel_sdvo, 1093 1067 mode, 1094 1068 adjusted_mode); 1069 + pipe_config->sdvo_tv_clock = true; 1095 1070 } else if (intel_sdvo->is_lvds) { 1096 1071 if (!intel_sdvo_set_output_timings_from_mode(intel_sdvo, 1097 1072 intel_sdvo->sdvo_lvds_fixed_mode)) ··· 1123 1096 1124 1097 if (intel_sdvo->color_range) 1125 1098 pipe_config->limited_color_range = true; 1099 + 1100 + /* Clock computation needs to happen after pixel multiplier. 
*/ 1101 + if (intel_sdvo->is_tv) 1102 + i9xx_adjust_sdvo_tv_clock(pipe_config); 1126 1103 1127 1104 return true; 1128 1105 } ··· 1526 1495 1527 1496 return drm_get_edid(connector, 1528 1497 intel_gmbus_get_adapter(dev_priv, 1529 - dev_priv->crt_ddc_pin)); 1498 + dev_priv->vbt.crt_ddc_pin)); 1530 1499 } 1531 1500 1532 1501 static enum drm_connector_status ··· 1656 1625 if (ret == connector_status_connected) { 1657 1626 intel_sdvo->is_tv = false; 1658 1627 intel_sdvo->is_lvds = false; 1659 - intel_sdvo->base.needs_tv_clock = false; 1660 1628 1661 - if (response & SDVO_TV_MASK) { 1629 + if (response & SDVO_TV_MASK) 1662 1630 intel_sdvo->is_tv = true; 1663 - intel_sdvo->base.needs_tv_clock = true; 1664 - } 1665 1631 if (response & SDVO_LVDS_MASK) 1666 1632 intel_sdvo->is_lvds = intel_sdvo->sdvo_lvds_fixed_mode != NULL; 1667 1633 } ··· 1809 1781 goto end; 1810 1782 1811 1783 /* Fetch modes from VBT */ 1812 - if (dev_priv->sdvo_lvds_vbt_mode != NULL) { 1784 + if (dev_priv->vbt.sdvo_lvds_vbt_mode != NULL) { 1813 1785 newmode = drm_mode_duplicate(connector->dev, 1814 - dev_priv->sdvo_lvds_vbt_mode); 1786 + dev_priv->vbt.sdvo_lvds_vbt_mode); 1815 1787 if (newmode != NULL) { 1816 1788 /* Guarantee the mode is preferred */ 1817 1789 newmode->type = (DRM_MODE_TYPE_PREFERRED | ··· 2355 2327 intel_sdvo_connector->output_flag = type; 2356 2328 2357 2329 intel_sdvo->is_tv = true; 2358 - intel_sdvo->base.needs_tv_clock = true; 2359 2330 2360 2331 intel_sdvo_connector_init(intel_sdvo_connector, intel_sdvo); 2361 2332 ··· 2442 2415 intel_sdvo_output_setup(struct intel_sdvo *intel_sdvo, uint16_t flags) 2443 2416 { 2444 2417 intel_sdvo->is_tv = false; 2445 - intel_sdvo->base.needs_tv_clock = false; 2446 2418 intel_sdvo->is_lvds = false; 2447 2419 2448 2420 /* SDVO requires XXX1 function may not exist unless it has XXX0 function.*/
drivers/gpu/drm/i915/intel_sprite.c (+164 -53)
··· 32 32 #include <drm/drmP.h> 33 33 #include <drm/drm_crtc.h> 34 34 #include <drm/drm_fourcc.h> 35 + #include <drm/drm_rect.h> 35 36 #include "intel_drv.h" 36 37 #include <drm/i915_drm.h> 37 38 #include "i915_drv.h" ··· 584 583 key->flags = I915_SET_COLORKEY_NONE; 585 584 } 586 585 586 + static bool 587 + format_is_yuv(uint32_t format) 588 + { 589 + switch (format) { 590 + case DRM_FORMAT_YUYV: 591 + case DRM_FORMAT_UYVY: 592 + case DRM_FORMAT_VYUY: 593 + case DRM_FORMAT_YVYU: 594 + return true; 595 + default: 596 + return false; 597 + } 598 + } 599 + 587 600 static int 588 601 intel_update_plane(struct drm_plane *plane, struct drm_crtc *crtc, 589 602 struct drm_framebuffer *fb, int crtc_x, int crtc_y, ··· 615 600 enum transcoder cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv, 616 601 pipe); 617 602 int ret = 0; 618 - int x = src_x >> 16, y = src_y >> 16; 619 - int primary_w = crtc->mode.hdisplay, primary_h = crtc->mode.vdisplay; 620 603 bool disable_primary = false; 604 + bool visible; 605 + int hscale, vscale; 606 + int max_scale, min_scale; 607 + int pixel_size = drm_format_plane_cpp(fb->pixel_format, 0); 608 + struct drm_rect src = { 609 + /* sample coordinates in 16.16 fixed point */ 610 + .x1 = src_x, 611 + .x2 = src_x + src_w, 612 + .y1 = src_y, 613 + .y2 = src_y + src_h, 614 + }; 615 + struct drm_rect dst = { 616 + /* integer pixels */ 617 + .x1 = crtc_x, 618 + .x2 = crtc_x + crtc_w, 619 + .y1 = crtc_y, 620 + .y2 = crtc_y + crtc_h, 621 + }; 622 + const struct drm_rect clip = { 623 + .x2 = crtc->mode.hdisplay, 624 + .y2 = crtc->mode.vdisplay, 625 + }; 621 626 622 627 intel_fb = to_intel_framebuffer(fb); 623 628 obj = intel_fb->obj; ··· 653 618 intel_plane->src_w = src_w; 654 619 intel_plane->src_h = src_h; 655 620 656 - src_w = src_w >> 16; 657 - src_h = src_h >> 16; 658 - 659 621 /* Pipe must be running... 
*/ 660 - if (!(I915_READ(PIPECONF(cpu_transcoder)) & PIPECONF_ENABLE)) 622 + if (!(I915_READ(PIPECONF(cpu_transcoder)) & PIPECONF_ENABLE)) { 623 + DRM_DEBUG_KMS("Pipe disabled\n"); 661 624 return -EINVAL; 662 - 663 - if (crtc_x >= primary_w || crtc_y >= primary_h) 664 - return -EINVAL; 625 + } 665 626 666 627 /* Don't modify another pipe's plane */ 667 - if (intel_plane->pipe != intel_crtc->pipe) 628 + if (intel_plane->pipe != intel_crtc->pipe) { 629 + DRM_DEBUG_KMS("Wrong plane <-> crtc mapping\n"); 668 630 return -EINVAL; 631 + } 632 + 633 + /* FIXME check all gen limits */ 634 + if (fb->width < 3 || fb->height < 3 || fb->pitches[0] > 16384) { 635 + DRM_DEBUG_KMS("Unsuitable framebuffer for plane\n"); 636 + return -EINVAL; 637 + } 669 638 670 639 /* Sprite planes can be linear or x-tiled surfaces */ 671 640 switch (obj->tiling_mode) { ··· 677 638 case I915_TILING_X: 678 639 break; 679 640 default: 641 + DRM_DEBUG_KMS("Unsupported tiling mode\n"); 680 642 return -EINVAL; 681 643 } 682 644 683 645 /* 684 - * Clamp the width & height into the visible area. Note we don't 685 - * try to scale the source if part of the visible region is offscreen. 686 - * The caller must handle that by adjusting source offset and size. 646 + * FIXME the following code does a bunch of fuzzy adjustments to the 647 + * coordinates and sizes. We probably need some way to decide whether 648 + * more strict checking should be done instead. 687 649 */ 688 - if ((crtc_x < 0) && ((crtc_x + crtc_w) > 0)) { 689 - crtc_w += crtc_x; 690 - crtc_x = 0; 650 + max_scale = intel_plane->max_downscale << 16; 651 + min_scale = intel_plane->can_scale ? 
1 : (1 << 16); 652 + 653 + hscale = drm_rect_calc_hscale_relaxed(&src, &dst, min_scale, max_scale); 654 + BUG_ON(hscale < 0); 655 + 656 + vscale = drm_rect_calc_vscale_relaxed(&src, &dst, min_scale, max_scale); 657 + BUG_ON(vscale < 0); 658 + 659 + visible = drm_rect_clip_scaled(&src, &dst, &clip, hscale, vscale); 660 + 661 + crtc_x = dst.x1; 662 + crtc_y = dst.y1; 663 + crtc_w = drm_rect_width(&dst); 664 + crtc_h = drm_rect_height(&dst); 665 + 666 + if (visible) { 667 + /* check again in case clipping clamped the results */ 668 + hscale = drm_rect_calc_hscale(&src, &dst, min_scale, max_scale); 669 + if (hscale < 0) { 670 + DRM_DEBUG_KMS("Horizontal scaling factor out of limits\n"); 671 + drm_rect_debug_print(&src, true); 672 + drm_rect_debug_print(&dst, false); 673 + 674 + return hscale; 675 + } 676 + 677 + vscale = drm_rect_calc_vscale(&src, &dst, min_scale, max_scale); 678 + if (vscale < 0) { 679 + DRM_DEBUG_KMS("Vertical scaling factor out of limits\n"); 680 + drm_rect_debug_print(&src, true); 681 + drm_rect_debug_print(&dst, false); 682 + 683 + return vscale; 684 + } 685 + 686 + /* Make the source viewport size an exact multiple of the scaling factors. */ 687 + drm_rect_adjust_size(&src, 688 + drm_rect_width(&dst) * hscale - drm_rect_width(&src), 689 + drm_rect_height(&dst) * vscale - drm_rect_height(&src)); 690 + 691 + /* sanity check to make sure the src viewport wasn't enlarged */ 692 + WARN_ON(src.x1 < (int) src_x || 693 + src.y1 < (int) src_y || 694 + src.x2 > (int) (src_x + src_w) || 695 + src.y2 > (int) (src_y + src_h)); 696 + 697 + /* 698 + * Hardware doesn't handle subpixel coordinates. 699 + * Adjust to (macro)pixel boundary, but be careful not to 700 + * increase the source viewport size, because that could 701 + * push the downscaling factor out of bounds. 
702 + */ 703 + src_x = src.x1 >> 16; 704 + src_w = drm_rect_width(&src) >> 16; 705 + src_y = src.y1 >> 16; 706 + src_h = drm_rect_height(&src) >> 16; 707 + 708 + if (format_is_yuv(fb->pixel_format)) { 709 + src_x &= ~1; 710 + src_w &= ~1; 711 + 712 + /* 713 + * Must keep src and dst the 714 + * same if we can't scale. 715 + */ 716 + if (!intel_plane->can_scale) 717 + crtc_w &= ~1; 718 + 719 + if (crtc_w == 0) 720 + visible = false; 721 + } 691 722 } 692 - if ((crtc_x + crtc_w) <= 0) /* Nothing to display */ 693 - goto out; 694 - if ((crtc_x + crtc_w) > primary_w) 695 - crtc_w = primary_w - crtc_x; 696 723 697 - if ((crtc_y < 0) && ((crtc_y + crtc_h) > 0)) { 698 - crtc_h += crtc_y; 699 - crtc_y = 0; 724 + /* Check size restrictions when scaling */ 725 + if (visible && (src_w != crtc_w || src_h != crtc_h)) { 726 + unsigned int width_bytes; 727 + 728 + WARN_ON(!intel_plane->can_scale); 729 + 730 + /* FIXME interlacing min height is 6 */ 731 + 732 + if (crtc_w < 3 || crtc_h < 3) 733 + visible = false; 734 + 735 + if (src_w < 3 || src_h < 3) 736 + visible = false; 737 + 738 + width_bytes = ((src_x * pixel_size) & 63) + src_w * pixel_size; 739 + 740 + if (src_w > 2048 || src_h > 2048 || 741 + width_bytes > 4096 || fb->pitches[0] > 4096) { 742 + DRM_DEBUG_KMS("Source dimensions exceed hardware limits\n"); 743 + return -EINVAL; 744 + } 700 745 } 701 - if ((crtc_y + crtc_h) <= 0) /* Nothing to display */ 702 - goto out; 703 - if (crtc_y + crtc_h > primary_h) 704 - crtc_h = primary_h - crtc_y; 705 746 706 - if (!crtc_w || !crtc_h) /* Again, nothing to display */ 707 - goto out; 708 - 709 - /* 710 - * We may not have a scaler, eg. HSW does not have it any more 711 - */ 712 - if (!intel_plane->can_scale && (crtc_w != src_w || crtc_h != src_h)) 713 - return -EINVAL; 714 - 715 - /* 716 - * We can take a larger source and scale it down, but 717 - * only so much... 16x is the max on SNB. 
718 - */ 719 - if (((src_w * src_h) / (crtc_w * crtc_h)) > intel_plane->max_downscale) 720 - return -EINVAL; 747 + dst.x1 = crtc_x; 748 + dst.x2 = crtc_x + crtc_w; 749 + dst.y1 = crtc_y; 750 + dst.y2 = crtc_y + crtc_h; 721 751 722 752 /* 723 753 * If the sprite is completely covering the primary plane, 724 754 * we can disable the primary and save power. 725 755 */ 726 - if ((crtc_x == 0) && (crtc_y == 0) && 727 - (crtc_w == primary_w) && (crtc_h == primary_h)) 728 - disable_primary = true; 756 + disable_primary = drm_rect_equals(&dst, &clip); 757 + WARN_ON(disable_primary && !visible); 729 758 730 759 mutex_lock(&dev->struct_mutex); 731 760 ··· 815 708 if (!disable_primary) 816 709 intel_enable_primary(crtc); 817 710 818 - intel_plane->update_plane(plane, fb, obj, crtc_x, crtc_y, 819 - crtc_w, crtc_h, x, y, src_w, src_h); 711 + if (visible) 712 + intel_plane->update_plane(plane, fb, obj, 713 + crtc_x, crtc_y, crtc_w, crtc_h, 714 + src_x, src_y, src_w, src_h); 715 + else 716 + intel_plane->disable_plane(plane); 820 717 821 718 if (disable_primary) 822 719 intel_disable_primary(crtc); ··· 843 732 844 733 out_unlock: 845 734 mutex_unlock(&dev->struct_mutex); 846 - out: 847 735 return ret; 848 736 } 849 737 ··· 1028 918 break; 1029 919 1030 920 case 7: 1031 - if (IS_HASWELL(dev) || IS_VALLEYVIEW(dev)) 1032 - intel_plane->can_scale = false; 1033 - else 921 + if (IS_IVYBRIDGE(dev)) { 1034 922 intel_plane->can_scale = true; 923 + intel_plane->max_downscale = 2; 924 + } else { 925 + intel_plane->can_scale = false; 926 + intel_plane->max_downscale = 1; 927 + } 1035 928 1036 929 if (IS_VALLEYVIEW(dev)) { 1037 - intel_plane->max_downscale = 1; 1038 930 intel_plane->update_plane = vlv_update_plane; 1039 931 intel_plane->disable_plane = vlv_disable_plane; 1040 932 intel_plane->update_colorkey = vlv_update_colorkey; ··· 1045 933 plane_formats = vlv_plane_formats; 1046 934 num_plane_formats = ARRAY_SIZE(vlv_plane_formats); 1047 935 } else { 1048 - intel_plane->max_downscale = 2; 
1049 936 intel_plane->update_plane = ivb_update_plane; 1050 937 intel_plane->disable_plane = ivb_disable_plane; 1051 938 intel_plane->update_colorkey = ivb_update_colorkey;
drivers/gpu/drm/i915/intel_tv.c (+4 -4)
··· 1521 1521 struct child_device_config *p_child; 1522 1522 int i, ret; 1523 1523 1524 - if (!dev_priv->child_dev_num) 1524 + if (!dev_priv->vbt.child_dev_num) 1525 1525 return 1; 1526 1526 1527 1527 ret = 0; 1528 - for (i = 0; i < dev_priv->child_dev_num; i++) { 1529 - p_child = dev_priv->child_dev + i; 1528 + for (i = 0; i < dev_priv->vbt.child_dev_num; i++) { 1529 + p_child = dev_priv->vbt.child_dev + i; 1530 1530 /* 1531 1531 * If the device type is not TV, continue. 1532 1532 */ ··· 1564 1564 return; 1565 1565 } 1566 1566 /* Even if we have an encoder we may not have a connector */ 1567 - if (!dev_priv->int_tv_support) 1567 + if (!dev_priv->vbt.int_tv_support) 1568 1568 return; 1569 1569 1570 1570 /*
include/drm/drm_rect.h (new file, +160)
··· 1 + /* 2 + * Copyright (C) 2011-2013 Intel Corporation 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice (including the next 12 + * paragraph) shall be included in all copies or substantial portions of the 13 + * Software. 14 + * 15 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 18 + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 + * SOFTWARE. 22 + */ 23 + 24 + #ifndef DRM_RECT_H 25 + #define DRM_RECT_H 26 + 27 + /** 28 + * drm_rect - two dimensional rectangle 29 + * @x1: horizontal starting coordinate (inclusive) 30 + * @x2: horizontal ending coordinate (exclusive) 31 + * @y1: vertical starting coordinate (inclusive) 32 + * @y2: vertical ending coordinate (exclusive) 33 + */ 34 + struct drm_rect { 35 + int x1, y1, x2, y2; 36 + }; 37 + 38 + /** 39 + * drm_rect_adjust_size - adjust the size of the rectangle 40 + * @r: rectangle to be adjusted 41 + * @dw: horizontal adjustment 42 + * @dh: vertical adjustment 43 + * 44 + * Change the size of rectangle @r by @dw in the horizontal direction, 45 + * and by @dh in the vertical direction, while keeping the center 46 + * of @r stationary. 
47 + * 48 + * Positive @dw and @dh increase the size, negative values decrease it. 49 + */ 50 + static inline void drm_rect_adjust_size(struct drm_rect *r, int dw, int dh) 51 + { 52 + r->x1 -= dw >> 1; 53 + r->y1 -= dh >> 1; 54 + r->x2 += (dw + 1) >> 1; 55 + r->y2 += (dh + 1) >> 1; 56 + } 57 + 58 + /** 59 + * drm_rect_translate - translate the rectangle 60 + * @r: rectangle to be tranlated 61 + * @dx: horizontal translation 62 + * @dy: vertical translation 63 + * 64 + * Move rectangle @r by @dx in the horizontal direction, 65 + * and by @dy in the vertical direction. 66 + */ 67 + static inline void drm_rect_translate(struct drm_rect *r, int dx, int dy) 68 + { 69 + r->x1 += dx; 70 + r->y1 += dy; 71 + r->x2 += dx; 72 + r->y2 += dy; 73 + } 74 + 75 + /** 76 + * drm_rect_downscale - downscale a rectangle 77 + * @r: rectangle to be downscaled 78 + * @horz: horizontal downscale factor 79 + * @vert: vertical downscale factor 80 + * 81 + * Divide the coordinates of rectangle @r by @horz and @vert. 82 + */ 83 + static inline void drm_rect_downscale(struct drm_rect *r, int horz, int vert) 84 + { 85 + r->x1 /= horz; 86 + r->y1 /= vert; 87 + r->x2 /= horz; 88 + r->y2 /= vert; 89 + } 90 + 91 + /** 92 + * drm_rect_width - determine the rectangle width 93 + * @r: rectangle whose width is returned 94 + * 95 + * RETURNS: 96 + * The width of the rectangle. 97 + */ 98 + static inline int drm_rect_width(const struct drm_rect *r) 99 + { 100 + return r->x2 - r->x1; 101 + } 102 + 103 + /** 104 + * drm_rect_height - determine the rectangle height 105 + * @r: rectangle whose height is returned 106 + * 107 + * RETURNS: 108 + * The height of the rectangle. 109 + */ 110 + static inline int drm_rect_height(const struct drm_rect *r) 111 + { 112 + return r->y2 - r->y1; 113 + } 114 + 115 + /** 116 + * drm_rect_visible - determine if the the rectangle is visible 117 + * @r: rectangle whose visibility is returned 118 + * 119 + * RETURNS: 120 + * %true if the rectangle is visible, %false otherwise. 
121 + */ 122 + static inline bool drm_rect_visible(const struct drm_rect *r) 123 + { 124 + return drm_rect_width(r) > 0 && drm_rect_height(r) > 0; 125 + } 126 + 127 + /** 128 + * drm_rect_equals - determine if two rectangles are equal 129 + * @r1: first rectangle 130 + * @r2: second rectangle 131 + * 132 + * RETURNS: 133 + * %true if the rectangles are equal, %false otherwise. 134 + */ 135 + static inline bool drm_rect_equals(const struct drm_rect *r1, 136 + const struct drm_rect *r2) 137 + { 138 + return r1->x1 == r2->x1 && r1->x2 == r2->x2 && 139 + r1->y1 == r2->y1 && r1->y2 == r2->y2; 140 + } 141 + 142 + bool drm_rect_intersect(struct drm_rect *r, const struct drm_rect *clip); 143 + bool drm_rect_clip_scaled(struct drm_rect *src, struct drm_rect *dst, 144 + const struct drm_rect *clip, 145 + int hscale, int vscale); 146 + int drm_rect_calc_hscale(const struct drm_rect *src, 147 + const struct drm_rect *dst, 148 + int min_hscale, int max_hscale); 149 + int drm_rect_calc_vscale(const struct drm_rect *src, 150 + const struct drm_rect *dst, 151 + int min_vscale, int max_vscale); 152 + int drm_rect_calc_hscale_relaxed(struct drm_rect *src, 153 + struct drm_rect *dst, 154 + int min_hscale, int max_hscale); 155 + int drm_rect_calc_vscale_relaxed(struct drm_rect *src, 156 + struct drm_rect *dst, 157 + int min_vscale, int max_vscale); 158 + void drm_rect_debug_print(const struct drm_rect *r, bool fixed_point); 159 + 160 + #endif