TechDeepDive#01: Clearing Up the Confusion: 8b/10b TMDS vs. 8-bit/10-bit Color in HDMI

Only read if you REALLY want to go down the HDMI rabbit hole

There's a common misconception regarding HDMI's 8b/10b TMDS encoding and 8- and 10-bit color depth. Many assume that the 8b/10b encoding directly correlates with the 8 or 10 bits used per color channel, but this is not accurate.

What is TMDS anyway?

TMDS stands for Transition Minimized Differential Signaling, a technology integral to HDMI (High-Definition Multimedia Interface).

It is designed to efficiently transfer digital video signals. TMDS minimizes signal degradation and electromagnetic interference over the cable by converting the data into a format that reduces signal frequency.

What do you mean by signal degradation?

And what does frequency have to do with it?

The term Hertz or "frequency" refers to how often the signal changes state – from '1' to '0' or from '0' to '1' - within one second.

A wire has a limit on the highest frequency it can carry. For HDMI cables this is usually given in MHz (Megahertz) or GHz (Gigahertz) and depends hugely on the quality of the shielding and materials used in the cable.

signal switching between 1 and 0

1 MHz is 1,000,000 changes per second.

For HDMI cables, a higher MHz rating means the cable can support higher resolutions, refresh rates, and color depths. For instance, a cable for 4K resolution at 60Hz with 10-bit color needs a higher MHz rating than one intended for 1080p at 60Hz with 8-bit color.
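To put rough numbers on that, here is a small back-of-the-envelope sketch. The pixel clocks are the standard CEA-861 timing values, and the factor of 10 comes from the 10-bit TMDS symbols explained below; the helper function name is just for illustration:

```python
# Rough estimate of the bit rate on ONE of the three TMDS data lanes.
# Each 8-bit channel value travels as a 10-bit TMDS symbol, one symbol
# per pixel per lane. Deep color raises the symbol rate by a fixed ratio.

def tmds_lane_rate_gbps(pixel_clock_mhz, deep_color_ratio=1.0):
    """Gigabits per second on a single TMDS data lane."""
    return pixel_clock_mhz * deep_color_ratio * 10 / 1000

print(tmds_lane_rate_gbps(148.5))      # 1080p60,  8-bit color
print(tmds_lane_rate_gbps(594))        # 4K60,     8-bit color
print(tmds_lane_rate_gbps(594, 1.25))  # 4K60,    10-bit color
```

The jump from 1.485 Gbit/s to well over 5 Gbit/s per lane is exactly why a cheap 1080p-era cable can fall apart at 4K.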

TMDS uses a mathematical algorithm that transforms the byte to reduce the frequency of these state changes – that is, by making fewer transitions from '1' to '0' or vice versa. This allows more data bandwidth to be transmitted reliably over the wire.

In short, TMDS allows more bits of data per second (signal demand) to be transported within the same frequency limit (cable maximum).

What exactly does this algorithm do?

When transmitting an 8-bit pixel color data byte like "10100010" using TMDS, the original pixel color data undergoes a transformation to become “less changing”.

The transformation itself is designed to balance the number of zeros and ones in the signal, thereby reducing electromagnetic interference and improving signal integrity over HDMI cables.

To make it easier to understand, here is an example:

A data byte of 10100010 can be transformed to 00001100.

This is, of course, a well-chosen byte to show how effective TMDS can be in reducing state changes. The overall reduction depends on the complete signal.
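If you want to check such claims yourself, a tiny helper that counts the state changes in a bit string does the trick:

```python
# Count the 1->0 / 0->1 state changes in a bit string -- the quantity
# TMDS tries to minimize. The two bytes match the example above.

def transitions(bits: str) -> int:
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

print(transitions("10100010"))  # before TMDS: 5 state changes
print(transitions("00001100"))  # after TMDS:  2 state changes
```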

Of course, the receiving side has to know what happened to the data. Otherwise it would not know how to decode the byte back into color again. That's why two more bits are added to the data.

And this is where the name TMDS 8b/10b actually derives from: an 8-bit data packet becomes a 10-bit data packet. This new packet is called a TMDS symbol, by the way.

How does the process actually work?

Let's go through the algorithm in a reduced form to see what happens in TMDS.

Count the '1's:

You start by counting the number of '1's in the data byte. Here there are three.

10100010

Minimize Transitions:

Depending on the number of 1s in the data byte, you choose one of two operations, XOR or XNOR (more on that in a later post). Both operations reduce the number of state changes to a minimum.

11110011

There are five state changes within the data packet before TMDS…

Before TMDS

…and only two changes after TMDS.

Adjust for Disparity:

Believe it or not, TMDS goes so far as to keep track of the data over time, so the signal never creates too much voltage bias in one direction.

In simple terms: if the previous pixel data had more 1s than 0s, and the new data would again have more 1s than 0s, all the bits get inverted.

11110011 00001100

Set the control bits:

0000110001

The 9th and 10th bits of the TMDS symbol are used to indicate whether an XOR or XNOR operation was applied, and whether the symbol was inverted.
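The reduced algorithm above can be sketched in Python. Stage 1 (the XOR/XNOR chain) follows the actual DVI/HDMI recipe; stage 2 is a deliberately simplified version of the DC-balance step, so its output can differ from the full spec (the real rule also weighs how unbalanced the running disparity already is):

```python
# Simplified sketch of the two TMDS encoding stages. Bits are stored
# LSB first, as in the DVI spec's notation.

def encode_stage1(byte: int) -> list[int]:
    """Transition-minimize an 8-bit value; returns 9 bits (LSB first)."""
    d = [(byte >> i) & 1 for i in range(8)]   # d[0] is the LSB
    ones = sum(d)
    # Spec rule: use XNOR when the byte is 1-heavy, else XOR
    use_xnor = ones > 4 or (ones == 4 and d[0] == 0)
    q = [d[0]]
    for i in range(1, 8):
        if use_xnor:
            q.append(1 - (q[i - 1] ^ d[i]))   # XNOR chain
        else:
            q.append(q[i - 1] ^ d[i])         # XOR chain
    q.append(0 if use_xnor else 1)            # bit 8: which operation was used
    return q

def encode_stage2(q: list[int], disparity: int) -> tuple[list[int], int]:
    """Reduced DC-balance step: invert the data bits when they would push
    the running 1s/0s disparity further in the same direction."""
    data = q[:8]
    excess = sum(data) - (8 - sum(data))      # (#1s - #0s) in the data bits
    invert = (disparity > 0 and excess > 0) or (disparity < 0 and excess < 0)
    if invert:
        data = [1 - b for b in data]
        excess = -excess
    # bit 9 tells the receiver whether the symbol was inverted
    return data + [q[8], 1 if invert else 0], disparity + excess

symbol, disparity = encode_stage2(encode_stage1(0b10100010), 0)
print(symbol)  # 10-bit TMDS-style symbol, LSB first
```

Note that the real stage-1 output for this byte differs from the hand-picked illustration bytes above; those were chosen for readability, not taken from the spec.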

For standard 24-bit color depth, where each of the three color channels is 8 bits, the pixel color data happens to fit exactly within one TMDS symbol per channel.

And what happens when the color depth increases to 10 bits per color?

When using 30-bit color, more bits are needed to represent each pixel (10 bits per color channel). As a TMDS symbol can't grow beyond 10 bits to make room for the additional information, the data is simply spread across multiple symbols.

The symbol is still the same 8b/10b size, but you need at least two symbols to recreate the color for one pixel. That’s why the TMDS clock rate is increased as well, so the complete data arrives on time.
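The required clock increase is easy to compute: in deep color mode the TMDS clock scales with the ratio of the color depth to the base 8 bits per channel. A quick sketch, using the standard 1080p60 pixel clock of 148.5 MHz as the example:

```python
# HDMI deep color keeps the 10-bit TMDS symbol but raises the TMDS
# clock so the extra pixel bits still arrive in time.

def deep_color_tmds_clock(pixel_clock_mhz: float, bits_per_channel: int) -> float:
    """TMDS clock in MHz for a given color depth (base depth is 8 bits)."""
    return pixel_clock_mhz * bits_per_channel / 8

print(deep_color_tmds_clock(148.5, 8))   # 1080p60,  8-bit: no increase
print(deep_color_tmds_clock(148.5, 10))  # 1080p60, 10-bit: 1.25x the clock
print(deep_color_tmds_clock(148.5, 12))  # 1080p60, 12-bit: 1.5x the clock
```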

And what about 12- and 16-bit color?

With higher color depths like 12 or 16 bits per channel, bandwidth demands grow even further, and at some point a different encoding comes into play: FRL (Fixed Rate Link). But more on that in a later post.

So, there is no relationship between the 8b/10b notation of TMDS and the 8-bit/10-bit color depth!

One notation describes color depth.

The other describes the structure of a data packet on the wire.

It's all about keeping signals clean and balanced, especially as we push for more colors and details on our screens.

Now, this was a wild ride through a very complicated part of HDMI video signals. Pat yourself on the back and keep an eye out for more tech deep dives soon!

So long

Basti

External sources:

TMDS image: wdwd, CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0), via Wikimedia Commons