AndrewB

64-bit color...


John Carmack mentioned 64-bit color in a recent .plan, and how some future game might use it. The average computer user might be stumped by such an idea, since monitors are really only capable of 24-bit color, and the extra 8 bits are just used for the odd transparency task. Right?

Well, picture that blue-to-black vertical fade we see in many software installers. It's your typical 256 shades of plain blue, and with a decent monitor and a decent pair of eyes you can easily see the banding between shades, revealing the limits of 24-bit color. Obviously there is still some room for improvement.

Simple dithering, of the kind you see in a grainy GIF file, can help rub away those abrupt step-by-step color changes. But it's easy to do even better than that.

The fact is, in a still 24-bit image, every pixel holds its own rigid shade all the time. If you give an image 48 bits of detail, it can use frame-by-frame color variation to, in practical terms, eliminate the noticeable limits of 24-bit color. Let's say a pixel is told to be a quarter of the way between 0,0,216 and 0,0,217. That pixel can change color at least 60 times per second, flickering back and forth: showing 0,0,217 during a quarter of the screen refreshes and 0,0,216 during the remaining refreshes.
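
A rough C sketch of the flickering idea, purely for illustration (the function name and the way the frame counter drives the pattern are made up here, not how any real display driver works):

#include <stdint.h>

/* Illustration only: approximate a 16-bit channel value on an 8-bit display
   by flickering between the two nearest 8-bit shades, so that their average
   over several refreshes matches the intended color. */
static uint8_t temporal_dither(uint16_t value16, unsigned frame)
{
    uint8_t base = value16 >> 8;      /* nearest 8-bit shade below the target */
    uint8_t frac = value16 & 0xFF;    /* how far the target sits toward base+1 */

    /* Show the brighter shade on a fraction of refreshes proportional to frac:
       a value a quarter of the way from 216 to 217 picks 217 on roughly one
       refresh in four and 216 on the rest. */
    unsigned threshold = (frame * 64) & 0xFF;   /* cheap repeating pattern */

    if (frac > threshold && base < 255)
        return (uint8_t)(base + 1);
    return base;
}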

This flickering trick is exactly what is done on 16-bit DVD images (just look at any well-done DVD, such as LOTR) to remove color banding and graininess. It makes DVD color, which is limited to 16-bit, look almost like perfect 24-bit. The same philosophy can be used on a greater scale, making 24-bit look like absolutely perfect color with no detectable hint of graininess or banding whatsoever.

So, this is how 64-bit color can be a benefit, even on a standard monitor. Of course, while immersed in a game, 32-bit color almost never reveals any color limits. But sometimes it does, and 64-bit can virtually eliminate all of them.

That is all.


Yeah, but how many on-screen gradients would there be in an average video game? I don't think it would be worth channeling all this GPU power just to 'fix' something that would probably be unnoticeable anyway.


There are so few 64-bit operating systems that I think making two calls per pass would be silly at this time, unless we're just talking GPUs...
Very interesting though, and bound to happen in the coming year or two...

AndrewB said:

This flickering trick is exactly what is done on 16-bit DVD images (just look at any well-done DVD, such as LOTR) to remove color banding and graininess. It makes DVD color, which is limited to 16-bit, look almost like perfect 24-bit. The same philosophy can be used on a greater scale, making 24-bit look like absolutely perfect color with no detectable hint of graininess or banding whatsoever.

So THAT'S why my DVD player bitches to me that my screen is too good to run DVDs on. Stupid 800x600 mode...


DVD software shouldn't need any particular mode or resolution. It should be able to take the source image and stretch it to any resolution you desire. Mine does.


Well, I only have two DVD software programs, and one of them sucks complete ass, so I'm sticking with PowerDVD.


What about WinDVD? It costs, but it came with my computer, so I guess there's a good reason why you don't have it.

Lüt said:

Hey, what are you doing outside the Skulltag forum?


OMG! Security has been breached!


I don't think 48-bit color would improve anything on the monitor side. Personally, I can just barely tell the difference between 0,128,0 and 0,129,0, and that's when I have two big fields of them next to each other, which won't ever happen in a computer game. The question of color depth is only a computational issue as far as I can tell.


Yeah, Carmack was talking about making the calculations 64-bit, not the actual colour depth that is displayed. If you're calculating colours from a fixed set of numbers, lack of precision leads to banding... even in 32-bit.
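
To make that concrete, here's a tiny C illustration (the shade range and the 0.3 lighting factor are arbitrary): darkening a smooth gradient with 8-bit integer math collapses neighbouring shades into the same output value, while float math keeps them distinct until the final rounding.

#include <stdio.h>

/* Illustration only: scale a smooth run of shades by a light level.
   With 8-bit integer math, several neighbouring input shades collapse
   to the same output value (visible as a band on screen); float math
   keeps them distinct until the final conversion back to 8 bits. */
int main(void)
{
    const float light = 0.3f;                /* arbitrary lighting factor */

    for (int shade = 100; shade < 110; shade++) {
        int   coarse = (shade * 77) / 256;   /* 0.3 approximated as 77/256 */
        float fine   = shade * light;

        printf("shade %3d -> 8-bit math: %3d   float math: %7.3f\n",
               shade, coarse, fine);
    }
    return 0;
}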


The human eye can't distinguish the full range of 24-bit color, from what I understand. The extra bits are used for internal calculations, such as holding bump mapping or translucency data.

fraggle said:

The human eye can't distinguish the full range of 24-bit color, from what I understand. The extra bits are used for internal calculations, such as holding bump mapping or translucency data.

It's partially true that the human eye can't distinguish the 24-bit color range. On a good enough display, you can effortlessly tell the difference between two bright colors whose RGB values differ by 1. Good luck telling one dark gray from another though.

As for storing extra data in each pixel, I think it's quite interesting. 16-bit per-channel precision will of course be necessary soon, not to mention storing data for bumps, gloss, etc. right in the textures. Another cool use is having data such as depth for each pixel in the rendered buffer, which could be used for neat effects like depth of field.

Lord Flathead said:

Thanks for the lecture, professor Obvious.

DooMBoy said:

Who gives a fuck?

STFU, I found this whole thread to be pretty informative.


With 64-bit colour you can use floating point for the colours. In a normal 32-bit environment, each pass can degrade the image quality. With a 48-bit display, instead of representing each colour channel with an 8-bit number between 0 and 255, you can represent it with a 16-bit floating-point number, which results in greater accuracy and less image degradation. You could probably Google for more technical information if you're that way inclined, but that's the basics of it.
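
Here's a small C sketch of the multi-pass point, purely illustrative (the blend factors are arbitrary, and plain float stands in for the 16-bit floats real hardware would use):

#include <stdio.h>

/* Illustration only: run one colour channel through several blend passes,
   once snapping back to an 8-bit integer after every pass and once keeping
   it in floating point throughout.  The integer version accumulates rounding
   error; the float version only rounds once, at the very end. */
int main(void)
{
    const float factors[] = { 0.5f, 0.7f, 1.9f, 1.5f };     /* four passes */
    const int   passes    = sizeof(factors) / sizeof(factors[0]);

    int   chan8  = 51;       /* channel re-quantised after each pass */
    float chan_f = 51.0f;    /* channel kept in float the whole way  */

    for (int i = 0; i < passes; i++) {
        chan8 = (int)(chan8 * factors[i]);   /* truncates every pass */
        if (chan8 > 255) chan8 = 255;

        chan_f *= factors[i];
        if (chan_f > 255.0f) chan_f = 255.0f;
    }

    printf("8-bit after every pass: %d   float throughout: %.2f (about %d)\n",
           chan8, chan_f, (int)(chan_f + 0.5f));
    return 0;
}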

And if that doesn't sound good, you could use the extra bits to upload stuff like bump mapping, reflection mapping, etc. to the video card without using extra passes/layers on the texture, which as you can imagine would be faster.

