Talk:Truecolor

I'm curious what hardware, specifically, uses the extra byte in 32-bit truecolor to implement transparency. I've always been under the impression that 32-bit depth in the context of graphics display is exactly the same as 24-bit depth, with the extra byte purely for padding. An actual pixel (on the monitor) cannot be partially transparent, so if the extra byte is used for this purpose, it would have to be a capability of the video card.

Now, this is distinct from 32-bit image files (such as PNG), which clearly (no pun intended) may have an 8-bit alpha channel; that channel can, of course, be used to layer images and achieve transparency. However, the current article implies that GUI effects involving transparency somehow take advantage of the extra byte. Is this true? Do newer video cards use the extra 8 bits? -- Wapcaplet 19:06, 19 Oct 2004 (UTC)
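
Just to illustrate the two interpretations being asked about, here is a small C sketch of a 32-bit pixel where the top byte is either unused padding or an alpha value. The function names are made up for illustration, not any particular API.

#include <stdint.h>

/* 24-bit colour stored in 32 bits: the top byte is padding and is ignored. */
uint32_t pack_xrgb(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
}

/* Same layout, but the top byte carries an alpha (opacity) value. */
uint32_t pack_argb(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)a << 24) | ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
}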


Well, when an operating system draws something such as a menu, it usually has an off-screen bitmap for it. If the *off-screen* bitmap is 32-bit, the extra byte can hold the transparency of the menu as it is copied to the screen. As far as I'm aware, if the screen is also 32-bit, the graphics card can accelerate translucent blitting of the bitmap for you; otherwise the CPU has to do it. On the actual screen bitmap, the extra bytes are ignored. -- Lumpbucket
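
A minimal C sketch of the blit described above, assuming both buffers hold 32-bit pixels in 0xAARRGGBB order; the function name and layout are illustrative, not a specific OS or driver API.

#include <stdint.h>
#include <stddef.h>

/* Blend an off-screen ARGB bitmap onto the screen (source-over).
   Each source pixel is weighted by its own alpha byte; the top byte
   of the screen pixel is treated as padding and ignored. */
void blend_blit(uint32_t *screen, const uint32_t *menu, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint32_t src = menu[i];
        uint32_t dst = screen[i];
        uint32_t a = src >> 24;  /* alpha of the off-screen pixel, 0..255 */

        /* integer approximation of src*a + dst*(1 - a) per channel */
        uint32_t r = (((src >> 16) & 0xFF) * a + ((dst >> 16) & 0xFF) * (255 - a)) / 255;
        uint32_t g = (((src >>  8) & 0xFF) * a + ((dst >>  8) & 0xFF) * (255 - a)) / 255;
        uint32_t b = (( src        & 0xFF) * a + ( dst        & 0xFF) * (255 - a)) / 255;

        screen[i] = (r << 16) | (g << 8) | b;  /* top byte left as padding */
    }
}

A card that accelerates translucent blitting is essentially doing this same per-pixel arithmetic in hardware instead of on the CPU.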