# Bits per channel

Discussion in 'Digital Photography' started by Siddhartha Jain, Jan 6, 2005.

1. ### Siddhartha JainGuest

Digicam sensors have 8 bits per channel to record the voltage per
pixel. So luminance is represented by a number between 0 and 255.

Is there something to gain by moving to 16-bit or 32-bit? If yes, when
are we moving? Things seem to have been at 8-bit for quite a while now.
- Siddhartha

Siddhartha Jain, Jan 6, 2005

2. ### OwamangaGuest

Then get a better camera.

But, before you get too worked up, remember that your graphics card
can't even display 11 bits per channel. So even if you had 32 bits per
channel, you'd never see the difference.

Owamanga, Jan 6, 2005

3. ### David J TaylorGuest

You get an analog value from the sensor that can be represented with about
12 bits, i.e. 4096 grey levels, with a linear scaling of grey level to
luminance. This is what you get in RAW files. This is the situation
inside the camera, before conversion to 8-bits takes place.

In image files, 8-bit JPEG or TIFF files, the digital value does not
directly represent light level, but a value nearer to the log of the light
level. A gamma correction following an approximately 0.45 power law is
used to convert the 0..4095 range of the sensor data into the 0..255
range of the 8-bit image data. In practice, perhaps only values 0..2047
are converted,
the remaining values in the RAW file representing the extra "headroom"
which people mention.

The effect of the gamma correction is to reduce the number of light levels
which can be separately represented at the bright end of the range. I.e.
the eye cannot distinguish between light levels of 2045 and 2046, so they
are both mapped to "254", for example. Light levels at the low end of the
0..4095 range (for example 1 or 2) are much more accurately represented in
the 8-bit JPEG/TIFF image, so shadow detail is preserved.
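
A minimal sketch of that mapping, assuming a plain 0.45 power law over the
0..2047 usable range mentioned above (real cameras use their own tone
curves, so the exact numbers are illustrative only):

```python
# Gamma-encode a linear 12-bit sensor value into an 8-bit image value,
# assuming a pure 0.45 power law and a usable white point of 2047.
def encode_8bit(linear, white=2047):
    norm = min(linear, white) / white   # normalise to 0.0 .. 1.0, clip headroom
    return round(255 * norm ** 0.45)

# Bright levels collapse: neighbouring high values share one 8-bit code.
print(encode_8bit(2045), encode_8bit(2046), encode_8bit(2047))  # 255 255 255

# Shadow levels stay distinct: small linear values keep separate codes.
print(encode_8bit(1), encode_8bit(2), encode_8bit(4))           # 8 11 15
```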

The display device typically has a gamma around 2.2, i.e. it is rather
non-linear between the drive voltage in and the light level out. The
combination of a 0.45 * 2.2 gamma (camera and display) results in an
approximately linear net transfer between light into the sensor and light
out of the display.
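
A quick check of that cancellation, treating both the encoding and the
display as pure power laws on a 0..1 scale (real transfer curves have extra
linear segments, so this is only an approximation):

```python
# Camera encode (~0.45) followed by display gamma (~2.2) is nearly linear overall.
for scene in (0.1, 0.25, 0.5, 0.75, 1.0):
    encoded = scene ** 0.45     # what ends up in the 8-bit file
    displayed = encoded ** 2.2  # what the monitor puts out
    print(f"scene {scene:.2f} -> displayed {displayed:.3f}")  # almost equal
```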

This is all a simplification, but should help you understand why 8-bit
data is adequate (just) for normal usage. Personally, I would like to see
rather more than 8 bits, perhaps 10-bit or 12-bit JPEGs, so that all of
the sensor range and more could be used for subsequent processing steps.

Cheers,
David

David J Taylor, Jan 6, 2005
4. ### Siddhartha JainGuest

OK, correction: 12 bits.

I don't think that answers the question.

Ok, so why not?

What I am trying to understand is: are there no significant benefits
to moving to a broader bus?

Siddhartha Jain, Jan 6, 2005
5. ### Siddhartha JainGuest

Thanks. Correction: 12 bits.
I don't think that answers the question.
What I am trying to understand is: are there no significant advantages
to pushing the number of bits upwards?

- Siddhartha

Siddhartha Jain, Jan 6, 2005
6. ### David J TaylorGuest

Siddhartha Jain wrote:
[]
"significant" is the operative word.

Tests have shown that the eye has problems making use of more than 8-bit
data on a gamma-corrected monitor, as I described, although you can set up
some special cases which show that for colour slightly more may be
required. Prior to the conversion to 8 bits for display, though, there may
be a slight advantage to working in the linear 12/16-bit domain.

My guess is that it will be like CDs: for domestic use, 16-bit 44.1 kHz
audio is adequate, while production studios have moved to 24-bit, 96/192 kHz audio.

Cheers,
David

David J Taylor, Jan 6, 2005
7. ### paulGuest

My desktop display properties indicate 32 bit 'color quality' with an
option for 16 bit. I'm not sure if this is the same terminology. My
older PC ran a lot slower in 32 bit mode as I recall.

paul, Jan 6, 2005
8. ### OwamangaGuest

That's right - diminishing returns. This whole thing is designed
around what the human eye can see. There is no point going crazy with
16 bits, 32 bits, 64 bits, 128 bits per channel when we can't display,
print or see the added detail.

Can you see the difference between 24-bit mode (8 per channel) and 32-bit
mode (10.5 per channel) on your graphics card? I am sure if someone
switched mine down to 24 bits one morning, I'd probably never notice.
The only argument for 48-bit scanners (16 per channel) and the like is
that they allow slightly more scope for exposure correction (i.e., you
get to choose later which 8 bits per channel you want to keep).
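
Roughly what that "choosing" looks like, assuming a 16-bit-per-channel scan
held in a NumPy array (numpy is just a convenience here, not part of any
scanner's software):

```python
import numpy as np

def to_8bit(scan16, exposure_stops=0.0):
    """Shift exposure by whole stops, then keep the top 8 of the 16 bits."""
    shifted = scan16.astype(np.float64) * (2.0 ** exposure_stops)
    shifted = np.clip(shifted, 0, 65535)
    return (shifted / 256).astype(np.uint8)

scan = np.array([100, 800, 6400, 51200], dtype=np.uint16)
print(to_8bit(scan))                    # 0, 3, 25, 200 -- as scanned
print(to_8bit(scan, exposure_stops=2))  # 1, 12, 100, 255 -- pushed 2 stops first
```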

...same with digital audio. Take CDs: 44.1 kHz at 16 bits per sample is
plenty good enough for our human ears. Since that invention 20 years ago,
many subsequent digital audio standards have actually been much worse, and
the best one is only just over double that sampling rate, and nobody is
buying it. We have reached a plateau.

Owamanga, Jan 6, 2005
9. ### David Dyer-BennetGuest

Well, 12 bits is 50% more than 8 bits, last I checked; so the benefit
of going all the way to 16 from 12 is less than the benefit of going
to 12 from 8.
Because your eyes can't distinguish that many colors.
One limitation is the human visual system. Now, it's useful to
capture more than that in the initial shot -- but it has to be reduced
to what humans can see to work as a print for humans. Like a negative
being printed.

And, as people said, we *are* moving to broader buses. When the web
was new, it was rare to see pictures in more than 256 colors. Now
24-bit color is pretty much the baseline; and better cameras and
scanners produce 12 bits or more per channel. That sounds like progress to me.

David Dyer-Bennet, Jan 6, 2005
10. ### OwamangaGuest

32 bits per pixel. Split this into the three color components of Red,
Green and Blue and you've got a theoretical 10.6 bits per channel. In
fact, most (if not all) cards are using 32 bits just to pad 24 actual bits
into something that fits neatly into 4 bytes; this is for performance and
design simplicity reasons. So, these modes are actually only displaying
8 bits per channel: 16,777,216 discrete colors.

24/32 bit modes will be slower because they use 4 bytes of card memory
per pixel instead of 2 bytes in 16 bit mode (65,536 colors) or 1 byte
in 8 bit (256 color) mode.
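
Roughly what that packing looks like, assuming an 0xAARRGGBB word layout
(real frame buffers differ in byte order, so this is illustrative only):

```python
# Pack 8-bit R, G, B into one 32-bit word; the top byte is padding or alpha.
def pack_xrgb(r, g, b, a=0):
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_xrgb(word):
    return (word >> 16) & 0xFF, (word >> 8) & 0xFF, word & 0xFF

word = pack_xrgb(255, 128, 0)   # an orange pixel, padding byte left at zero
print(hex(word))                # 0xff8000
print(unpack_xrgb(word))        # (255, 128, 0)
```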

Someone might correct me and tell me there is now a true 10.6-, 12- or
16-bit-per-channel graphics card available out there; anything is
possible, I am sure.

Owamanga, Jan 6, 2005
11. ### paulGuest

So I guess the real advantage is (maybe in printing?) when applying
curves and contrast adjustments afterwards: there is info in there which
can be made visible with adjustments. This includes sharpening.
Unprocessed DSLR images look quite bland and soft, but there is a heck of
a lot more info in there to fiddle with and bring out.

paul, Jan 6, 2005
12. ### David J TaylorGuest

Owamanga wrote:
[]
There is no difference in the colour displayed - in each case it's 8 bits
of red, 8 of green, and 8 of blue. The extra 8 bits are for alpha masks.
It can be faster to move 32-bit data to the card, not that you could
perceive the difference with today's PCs.

David

David J Taylor, Jan 6, 2005
13. ### Steve WolfeGuest

I think that you would see a much greater improvement by moving to another
color space instead of increasing the bits - the RGB color space doesn't
even come close to covering the gamut that the human eye can see.

steve

Steve Wolfe, Jan 6, 2005
14. ### Steve WolfeGuest

Actually, 32-bit mode gives you 24 bits of color (8 bits per channel),
and an additional 8 bits of alpha (transparency).

steve

Steve Wolfe, Jan 6, 2005
15. ### OwamangaGuest

Makes sense, but it is alpha transparency to what? I.e., what's
underneath?

Owamanga, Jan 6, 2005
16. ### Siddhartha JainGuest

Apart from RGB variants, what other colour spaces are feasible?
- Siddhartha

Siddhartha Jain, Jan 6, 2005
17. ### RSD99Guest

Sounds Good ... but just what "Color Space" would that be?

RSD99, Jan 6, 2005
18. ### Dave MartindaleGuest

There are video cards that can composite the computer-generated image
onto a background image that comes from somewhere else (e.g. a video
source). In that case, the alpha channel determines the opacity of the
upper layer.
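
The compositing itself is just a weighted mix per channel, something like
this sketch (with alpha normalised to 0..1):

```python
# Blend a foreground pixel over a background pixel using the alpha value.
def composite(fg, bg, alpha):
    return round(alpha * fg + (1.0 - alpha) * bg)

print(composite(200, 50, 0.75))  # 162 -- mostly the computer-generated layer
print(composite(200, 50, 0.0))   # 50  -- fully transparent, the video shows through
```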

But mostly it's not used in PCs. It may still be advantageous to use a
32-bit instead of a 24-bit representation, wasting 1/4 of the memory,
because most processors can't address 24-bit packed data except with byte
operations, while 32-bit access is faster.
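
A sketch of that addressing point, assuming simple byte buffers (Python is
only used here to show the index arithmetic, nothing more):

```python
# Fetch pixel i from a packed 24-bit buffer versus a padded 32-bit buffer.
def rgb_from_24bit(buf, i):
    off = i * 3                                    # arbitrary, often unaligned, offset
    return buf[off], buf[off + 1], buf[off + 2]    # three separate byte reads

def rgb_from_32bit(buf, i):
    word = int.from_bytes(buf[i * 4:i * 4 + 4], "little")  # one aligned word read
    return (word >> 16) & 0xFF, (word >> 8) & 0xFF, word & 0xFF

packed = bytes([10, 20, 30, 40, 50, 60])        # two pixels, 3 bytes each (R, G, B)
padded = bytes([30, 20, 10, 0, 60, 50, 40, 0])  # same pixels padded to 4 bytes each

print(rgb_from_24bit(packed, 1))   # (40, 50, 60)
print(rgb_from_32bit(padded, 1))   # (40, 50, 60)
```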

Dave

Dave Martindale, Jan 6, 2005
19. ### JPSGuest

Actually, it is 12 bits per channel for most current digitals.

Marketing concerns, cost concerns, storage concerns, technical problems,
or some combination of these are keeping the current data at 12-bit.
Current sensors could easily warrant 16 bits, as their lower 12 bits at
ISO 100 would have the same quality as ISO 1600 currently has on the
same sensor. Maybe the problem is in the analog-to-digital converter.
Maybe converters that can do 16 bits are expensive or impractical, but
current sensors could certainly warrant it.
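
The back-of-envelope arithmetic behind that comparison (nothing here beyond
powers of two):

```python
# Four extra bits between 12 and 16 is four extra stops of linear range.
extra_stops = 16 - 12            # one stop per bit (a factor of 2 each)
print(extra_stops)               # 4
print(100 * 2 ** extra_stops)    # 1600 -- ISO 100 pushed four stops
print(2 ** 12, 2 ** 16)          # 4096 vs 65536 discrete levels
```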

JPS, Jan 6, 2005
20. ### JPSGuest

It's not about seeing it directly on the screen, raw. It's about
boosting shadows, compressing highlights globally while expanding their
local detail, and so on.
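
A crude sketch of why that editing latitude needs the extra bits, using a
deliberately simple linear 8-bit reduction so the quantisation is easy to
see (real 8-bit files are gamma-encoded, which softens but does not remove
the effect):

```python
# Push a dark region by three stops in 12-bit data versus already-reduced 8-bit data.
dark_12bit = list(range(8, 24))                    # 16 distinct 12-bit shadow levels
dark_8bit = [v * 255 // 4095 for v in dark_12bit]  # the same region stored as 8-bit

def boost(levels, maximum):
    """Multiply by 8 (three stops) and clip to the channel maximum."""
    return [min(v * 8, maximum) for v in levels]

print(len(set(boost(dark_12bit, 4095))))  # 16 -- every level survives the push
print(len(set(boost(dark_8bit, 255))))    # 2  -- the region collapses into banding
```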

JPS, Jan 6, 2005