For some time now the consumer electronics industry has been grappling with how to improve the performance and efficiency of display interfaces, especially in light of recent increases in display resolution. Through the eras of DVI, LVDS/LDI, HDMI, and DisplayPort, video has been transmitted from source to sink as raw, uncompressed data, a conceptually simple setup that ensures high quality and low latency but requires an enormous amount of bandwidth. The introduction of newer interface standards such as HDMI and DisplayPort has so far allowed manufacturers to meet those bandwidth requirements. But display development is reaching a point where both PC and mobile device manufacturers are concerned about their ability to keep up with the bandwidth requirements of these displays, and to do so at a reasonable cost in money and resources.

To address these concerns, the PC and mobile device industries – through their respective VESA and MIPI associations – have been working together to create new technologies and standards to handle the expected bandwidth requirements. The focus of that work has been the VESA's Display Stream Compression (DSC) standard, a descriptively named standard for image compression that has been in development at the VESA since late 2012. To that end, the VESA and MIPI have announced today that DSC development has been completed and version 1.0 of the DSC standard has been ratified, with both organizations adopting it for future display interface standards.

As alluded to by the name, DSC is an image compression standard designed to reduce the amount of data that needs to be transmitted. With DisplayPort 1.2 already pushing 20Gbps and 1.3 set to increase that to over 30Gbps, display interfaces are already the highest bandwidth interfaces in a modern computer, creating practical limits on how much further they can be improved. With limited headroom for increasing interface bandwidth, DSC tackles the issue from the other end, reducing the amount of bandwidth required in the first place through compression.

Since DSC is meant to be used at the final transmission stage, DSC itself is designed to be “visually lossless”. That is to say that it’s intended to be very high quality and should be unnoticeable to users across a wide variety of content, including photos/video, subpixel text, and potentially problematic patterns. But with that said, visually lossless is not the same as mathematically lossless, so while DSC is a high quality codec, it’s still mathematically a lossy codec.

In terms of design and implementation DSC is a fixed rate codec, an obvious choice to ensure that the bandwidth requirements for a display stream are equally fixed and a link is never faced with the possibility of running out of bandwidth. Hand-in-hand with the fixed rate requirement, the VESA’s standard calls for visually lossless compression with as little as 8 bits/pixel, which would represent a 66% bandwidth savings over today’s uncompressed 24 bits/pixel display streams. And while 24bit color is the most common format for consumer devices, DSC is also intended to work with higher color depths, including 30bit and 36bit (presumably at higher DSC bitrates), allowing it to be used even with deep color displays.

We won’t get too much into the workings of the DSC algorithm itself – the VESA has a brief but insightful whitepaper on the subject – but it’s interesting to point out the unusual requirements the VESA has needed to meet with DSC. Image and video compression is a well-researched field, but most codecs (like JPEG and H.264) are designed around offline encoding for distribution, rather than real-time encoding as part of a display standard. DSC on the other hand needed to be computationally cheap (to make implementation cheap) and low latency, all the while still offering significant compression ratios and doing so with minimal image quality losses. The end result is an interesting algorithm that uses a combination of delta pulse code modulation (DPCM) and indexed color history to achieve the fast compression and decompression required.
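As a rough illustration of those two ideas – and emphatically not the actual DSC 1.0 codec, which layers on median-adaptive prediction, rate control, and entropy coding – DPCM encodes each sample as a small delta from a predicted value, while an indexed color history lets recently seen colors be sent as short indices rather than full values:

```python
# Toy sketch of the two techniques behind DSC: delta (predictive) coding
# and an indexed color history. Illustrative only -- the real DSC 1.0
# codec adds median-adaptive prediction, rate control, and entropy coding.

def dpcm_encode(samples):
    """Encode each sample as its delta from the previous sample."""
    prev = 0
    deltas = []
    for s in samples:
        deltas.append(s - prev)  # neighboring pixels are similar, so deltas stay small
        prev = s
    return deltas

def dpcm_decode(deltas):
    """Reverse the delta coding by accumulating the deltas."""
    prev = 0
    out = []
    for d in deltas:
        prev += d
        out.append(prev)
    return out

def ich_encode(pixels, history_size=32):
    """Replace recently seen colors with short history indices."""
    history = []  # most-recent-first list of colors
    out = []
    for p in pixels:
        if p in history:
            out.append(("idx", history.index(p)))  # cheap: small index only
            history.remove(p)
        else:
            out.append(("lit", p))                 # literal color value
        history.insert(0, p)
        del history[history_size:]                 # keep the history bounded
    return out

line = [100, 101, 103, 103, 100, 101]
assert dpcm_decode(dpcm_encode(line)) == line
print(ich_encode(line))
```

In the real codec the deltas and indices are then entropy coded under a rate controller so the output holds a fixed bits/pixel budget; the sketch only shows why the two techniques shrink typical image data while needing just a line or two of buffering.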

Moving on, with the ratification of the DSC 1.0 standard, both the VESA and MIPI will be adopting it for some of their respective standards. On the VESA side, eDP 1.4 will be the first VESA standard to include it, while we also expect DSC’s inclusion in the forthcoming DisplayPort 1.3. MIPI in turn will be including DSC in their Display Serial Interface (DSI) 1.2 specification for mobile devices.

With the above in mind, it’s interesting how both groups ended up at the same standard despite their significant differences in goals. The VESA is primarily concerned with driving ultra high resolutions such as 8K@60Hz, which would require over 50Gbps of uncompressed video, a rate that not even DisplayPort 1.3 will be able to supply. MIPI on the other hand is not concerned about resolutions as much as they are concerned about power and cost requirements; a DisplayPort-like interface could supply mobile devices with plenty of bandwidth, but high bitrate interfaces are expensive to implement and are typically very power hungry, both on an absolute basis and a per-bit basis.

Display Bandwidth Requirements, 24bpp (Uncompressed)

Resolution               Bandwidth   Minimum DisplayPort Version
1920x1080 @ 60Hz         3.5Gbps     1.1
2560x1440 @ 60Hz         6.3Gbps     1.1
3840x2160 @ 60Hz (4K)    14Gbps      1.2
7680x4320 @ 60Hz (8K)    >50Gbps     1.3 + DSC

DSC in turn solves both of their problems, allowing the VESA to drive ultra high resolutions over DisplayPort while allowing MIPI to drive high resolution mobile displays over low cost, low power interfaces. In fact it’s surprising (and almost paradoxical) that even with the additional manufacturing costs and encode/decode overhead of DSC, in the end DSC is both cheaper to implement and lower power than a higher bandwidth interface.

Wrapping things up, while DSC-enabled devices are still some time off – the fact that the standard was just ratified means new display controllers still need to be designed and built – DSC is something we’re going to have to watch closely. Display compression is not something to be taken lightly due to the potential compromises to both image quality and latency, and while it’s unlikely the average consumer will notice, it’s definitely going to catch the eyes of enthusiasts. The VESA and MIPI are going in the right direction by targeting visually lossless compression rather than accepting a significant image quality tradeoff for better bandwidth savings, but it remains to be seen just how lossless/lossy DSC really is. At a fundamental level DSC can never beat the quality of uncompressed display streams, but that doesn’t rule out other tradeoffs that will make compression worth the cost.

Source: VESA

Comments

  • ivan256 - Wednesday, April 23, 2014 - link

    Copper is generally higher bandwidth and lower latency over short distances than digital-optical. Mostly because you have to convert from and back to electrical signaling.
  • p1esk - Wednesday, April 23, 2014 - link

    Any links to back that up?

    Multimode fiber can do 100Gbps at distances up to 150m [1]. Good luck trying to do that with copper, even at 1m.
  • madwolfa - Wednesday, April 23, 2014 - link

    He said - short distances.
  • p1esk - Wednesday, April 23, 2014 - link

    Yes, and I have shown that at short distances, plastic fiber beats copper by at least a factor of 10.
  • Gnarr - Thursday, February 19, 2015 - link

    Latency and bandwidth are two separate things. You haven't shown anything.
  • Gnarr - Thursday, February 19, 2015 - link

    Optical can surely be slower:
  • madmilk - Tuesday, April 22, 2014 - link

    A 66% reduction in bitrate seems pretty conservative. Even an intraframe algorithm (which has zero latency) such as MJPEG should appear lossless with that much bandwidth available.
  • Guspaz - Wednesday, April 23, 2014 - link

    An intraframe algorithm doesn't have zero latency, because if you're working on entire frames, you have to buffer the whole thing before you can start sending. Of course, if you use MJPEG and use restart markers at the end of every row of macroblocks, you're only adding 8 to 16 scanlines of latency.
  • psychobriggsy - Wednesday, April 23, 2014 - link

    It's a stream compression protocol, not a frame compression protocol. It has to be low cost to implement - frame compression requires RAM to decompress into, compare with N previous frames, etc. And MJPEG is expensive to encode on the fly compared to this stream compression algorithm.
  • afree - Tuesday, April 22, 2014 - link

    Well as the table in the article indicates 4K shouldn't be affected. 8K won't be mainstream for 5 years at least (IMO) and by then we should have a displayport connector that is twice the bandwidth of displayport 1.3 (which is being standardised in a few months) if displayport technology keeps progressing as it is. Therefore this won't affect most people, however those who plan to buy 8K displays before they get to consumer available prices will likely be in a bit of trouble.

    Of course 8K at the distance most people sit from a screen (2 inches) is slightly greater "resolution" than the healthiest possible adult eye can see for a 22 inch screen. However as the compression algorithm reduces 24 bits per pixel down to 8, that suggests the amount of colours to be displayed is lowered, which could be very bad if that is even the slightest bit discernible by artists (who are probably the only ones who are dedicated enough to buy early 8K monitors AND notice the difference).

    I wonder if leading 2 DisplayPort cables into a single monitor is possible, because 2 DisplayPort 1.3 cables are capable of 8K at 60Hz (that seems easier to me to implement than a compression algorithm). Although perhaps higher screen refresh rates will be of more benefit than 4K to 8K to a huge portion of people.
