dSLR dynamic range question

Discussion in 'Digital Photography' started by chibitul, Aug 12, 2004.

  1. chibitul

    chibitul Guest

    Question: from what I understand, RAW files have more dynamic range than
    JPEG files, i.e. one can choose some slight "exposure compensation" (+/-
    0.5, maybe +/- 1 stop) when going from RAW to JPEG. Now, if this
    information is in there anyway, why can't it be displayed in jpeg??? I
    read of people making 2 JPEGs, from the same RAW file, one under and one
    over-"exposed" and then blending them in Photoshop. Why can't the RAW
    converter do that, i.e. extract the entire dynamic range from a file???
     
    chibitul, Aug 12, 2004
    #1

  2. chibitul wrote:
    > Question: from what I understand, RAW files have more dynamic range than
    > JPEG files, i.e. one can choose some slight "exposure compensation" (+/-
    > 0.5, maybe +/- 1 stop) when going from RAW to JPEG. Now, if this
    > information is in there anyway, why can't it be displayed in jpeg??? I
    > read of people making 2 JPEGs, from the same RAW file, one under and one
    > over-"exposed" and then blending them in Photoshop. Why can't the RAW
    > converter do that, i.e. extract the entire dynamic range from a file???


    Yes, the full dynamic range is preserved in the raw file.
    If one converts the raw to 16-bit output, e.g. a 16-bit tif,
    then one can process in one file the full precision dynamic range.

    Note that the jpeg is 8-bit (256 levels) whereas raw is 12-bit
    (4096 levels), or 14-bit (16,384 levels). The jpeg 256 levels
    can be spread out over the same dynamic range as the 12 or 14 bit
    raw, just that there is not the fine intensity detail. The jpeg
    conversion is not linear either, but follows a transfer curve
    similar to film, with less intensity detail in the shadows and
    highlights. A raw file can usually be used to extract subtle
    information in the shadows and highlights, if processed in
    16-bit mode. Extracting two JPEGs, one high and one low, and blending
    them is just extra work in my opinion. Older versions
    of Photoshop had less 16-bit ability, so perhaps it was
    needed then, but not with modern software. Note you must effectively
    compress the intensity range for display anyway as no screen
    or print medium has much more than 8-bits of range. Depending on the
    scene, a jpeg, if exposed well, may be adequate. I shoot both
    jpeg and raw files, depending on the situation.

    Roger Clark
    photos, digital info at: http://www.clarkvision.com
     
    Roger N. Clark (change username to rnclark), Aug 12, 2004
    #2
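
Roger's bit-depth point lends itself to a quick numerical sketch. The Python snippet below maps a 12-bit linear raw scale onto 8 bits through an assumed plain 1/2.2 gamma (the real in-camera JPEG curve is more film-like, as discussed later in the thread), just to count how many raw levels collapse into each JPEG level; treat the numbers as illustrative only.

```python
# Rough sketch of the raw-to-JPEG tonal mapping discussed above: a 12-bit
# linear raw value encoded to 8 bits through a plain 1/2.2 gamma curve.
# (The 1/2.2 gamma is an assumption for illustration; real in-camera JPEG
# curves add a film-like toe and shoulder on top of something like this.)
from collections import Counter

RAW_MAX = 4095   # 12-bit raw: 4096 linear levels
OUT_MAX = 255    # 8-bit JPEG: 256 encoded levels
GAMMA = 2.2

def raw_to_jpeg(raw):
    """Map a linear 12-bit raw value to an 8-bit gamma-encoded value."""
    return round(((raw / RAW_MAX) ** (1.0 / GAMMA)) * OUT_MAX)

# How many distinct raw levels land on each 8-bit output level?
hist = Counter(raw_to_jpeg(r) for r in range(RAW_MAX + 1))
for level in (32, 128, 255):
    print(f"8-bit level {level:3d}  <-  {hist[level]:2d} raw levels")

# Toward the top of the range a single JPEG step swallows 15+ raw levels,
# which is the intensity detail a raw converter can still pull back out
# of the highlights.
```
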

  3. chibitul

    chibitul Guest

    In article <>,
    "Roger N. Clark (change username to rnclark)" <>
    wrote:
    [snip]
    > A raw file can usually be used to extract subtle
    > information in the shadows and highlights, if processed in
    > 16-bit mode. To extract 2 jpegs, a high level and a low
    > is just extra work in my opinion. Older versions
    > of photoshop had less 16-bit ability so perhaps it was
    > needed, but not with modern software. Note you must effectively
    > compress the intensity range for display anyway as no screen
    > or print medium has much more than 8-bits of range. Depending on the
    > scene, a jpeg, if exposed well, may be adequate. I shoot both
    > jpeg and raw files, depending on the situation.


    Thanks for your comments; i had some trouble with shadows myself, next
    time i will try to shoot RAW as well as jpeg.
     
    chibitul, Aug 12, 2004
    #3
  4. gsum

    gsum Guest

    Very interesting comments.

    One further comment: The resolution of each ccd sensor is unlikely to
    be able to resolve anything like 16 bits which is probably why the
    manufacturers limit the raw resolution to 12 bits. At higher ISOs
    the resolution is 'swamped' by noise so there is no point in attempting
    to preserve all 12 bits.

    Graham


    "Roger N. Clark (change username to rnclark)" <> >
    > Yes, the full dynamic range is preserved in the raw file.
    > If one converts the raw to 16-bit output, e.g. a 16-bit tif,
    > then one can process in one file the full precision dynamic range.
    >
    > Note that the jpeg is 8-bit (256 levels) whereas raw is 12-bit
    > (4096 levels), or 14-bit (16,384 levels). The jpeg 256 levels
    > can be spread out over the same dynamic range as the 12 or 14 bit
    > raw, just that there is not the fine intensity detail. The jpeg
    > conversion is not linear either, but follows a transfer curve
    > similar to film, with less intensity detail in the shadows and
    > highlights. A raw file can usually be used to extract subtle
    > information in the shadows and highlights, if processed in
    > 16-bit mode. To extract 2 jpegs, a high level and a low
    > is just extra work in my opinion. Older versions
    > of photoshop had less 16-bit ability so perhaps it was
    > needed, but not with modern software. Note you must effectively
    > compress the intensity range for display anyway as no screen
    > or print medium has much more than 8-bits of range. Depending on the
    > scene, a jpeg, if exposed well, may be adequate. I shoot both
    > jpeg and raw files, depending on the situation.
    >
    > Roger Clark
    > photos, digital info at: http://www.clarkvision.com
    >
     
    gsum, Aug 12, 2004
    #4
  5. "Roger N. Clark (change username to rnclark)" <>
    wrote in message news:...
    SNIP
    > Note that the jpeg is 8-bit (256 levels) whereas raw is
    > 12-bit (4096 levels), or 14-bit (16,384 levels).


    True, but do note that the JPEG is 8-bit after gamma adjustment, and
    the Raw data is before gamma adjustment. Also, there is probably 1.5
    bits of noise in the Raw data (ADC + quantization).

    Bart
     
    Bart van der Wolf, Aug 12, 2004
    #5
  6. Bart van der Wolf wrote:
    > "Roger N. Clark (change username to rnclark)" <>
    > wrote in message news:...
    > SNIP
    >
    >>Note that the jpeg is 8-bit (256 levels) whereas raw is
    >>12-bit (4096 levels), or 14-bit (16,384 levels).

    >
    >
    > True, but do note that the JPEG is 8-bit after gamma adjustment, and
    > the Raw data is before gamma adjustment. Also, there is probably 1.5
    > bits of noise in the Raw data (ADC + quantization).
    >
    > Bart
    >

    Bart,
    Yes, that is what I meant when I said
    "The jpeg conversion is not linear either, but
    follows a transfer curve similar to film..."

    Roger
     
    Roger N. Clark (change username to rnclark), Aug 12, 2004
    #6
  7. Mitch Alsup

    Mitch Alsup Guest

    "gsum" <> wrote in message news:<411b1589$>...
    > Very interesting comments.
    >
    > One further comment: The resolution of each ccd sensor is unlikely to
    > be able to resolve anything like 16 bits which is probably why the
    > manufacturers limit the raw resolution to 12 bits. At higher ISOs
    > the resolution is 'swamped' by noise so there is no point in attempting
    > to preserve all 12 bits.
    >
    > Graham
    >


    You are confusing the storage capacity in each sensor on the CCD/CMOS
    with the number of bits in the Analog to Digital converter. IIRC the
    storage wells are capable of storing 60,000 to 100,000 electrons,
    while the A/Ds are limited to 12 bits due to cost structure. That is,
    it is easy and cheap to manufacture a 12-bit A/D but considerably more
    expensive to manufacture a 14-bit A/D or a 16-bit A/D. Also note, the
    faster one wants the A/D to operate, the less accurate each measurement
    becomes.

    Mitch
     
    Mitch Alsup, Aug 12, 2004
    #7
  8. "Roger N. Clark (change username to rnclark)" <>
    wrote in message news:...
    SNIP
    > Yes, that is what I meant when I said
    > "The jpeg conversion is not linear either, but
    > follows a transfer curve similar to film..."


    Thanks for clarifying, because I thought you meant "toe and shoulder"
    compression which is present in the film's response curve.

    It would make a nice option (shadow and highlight compression)
    combined with an increased midtone contrast for JPEGs. Of course sRGB
    output is already gamma slope limited in the shadows.

    Bart
     
    Bart van der Wolf, Aug 12, 2004
    #8
  9. jpc

    jpc Guest

    On 12 Aug 2004 07:53:50 -0700, (Mitch Alsup) wrote:

    >"gsum" <> wrote in message news:<411b1589$>...
    >> Very interesting comments.
    >>
    >> One further comment: The resolution of each ccd sensor is unlikely to
    >> be able to resolve anything like 16 bits which is probably why the
    >> manufacturers limit the raw resolution to 12 bits. At higher ISOs
    >> the resolution is 'swamped' by noise so there is no point in attempting
    >> to preserve all 12 bits.
    >>
    >> Graham
    >>

    >
    >You are confusing the storage capacity in each sensor on the CCD/CMOS
    >with the number of bits in the Analog to Digital converter. IIRC the
    >storage wells are capable of storing 60,000 to 100,000 electrons,
    >while the A/Ds are limited to 12 bits due to cost structure. That is
    >it is easy and cheap to manufacture a 12-bit A/D but considerably more
    >expensive to manufacture a 14-bit A/D or a 16-bit A/D. Also note, the
    >faster on wants the A/D to operate the less accurate each measurement
    >becomes.
    >


    A question and a comment.

    Where did you get the 60,000 to 100,000 photoelectron number? I've
    heard numbers like 400 to 800 photoelectrons per square micron of
    sensor area. If my numbers are still correct for the current
    technology, a well depth of 100,000 photoelectrons means some very
    large sensors, such as those used in astronomy.

    As for the comment, if the well depth was 100,000 electrons you would
    still have only a S/N of about 300 because of photon noise. This could
    be easily digitized by a 10-bit A/D if the sensor output was matched
    to the dynamic range of the A/D. I believe the real reasons camera
    manufacturers use a 12-bit A/D are:

    1. they are cheap and relatively fast

    2. having more bits than needed allows them to set the white point a
    couple of bits below the max voltage limits of the A/D. That way they can
    still use the low-responsivity sensors that come off the line.

    jpc
     
    jpc, Aug 12, 2004
    #9
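
jpc's shot-noise argument reduces to a square root, so it is easy to check with round numbers. The sketch below assumes nothing beyond sqrt(N) photon statistics; the well depths are the illustrative figures quoted in this thread, not measurements of any particular camera.

```python
# Shot noise alone limits peak S/N to sqrt(full well), so on S/N grounds
# a fairly modest ADC would already be "matched" to the sensor output.
import math

for full_well in (40_000, 60_000, 100_000):      # electrons at saturation
    peak_snr = math.sqrt(full_well)              # photon (shot) noise limit
    bits = math.ceil(math.log2(peak_snr))
    print(f"full well {full_well:>7} e-  ->  peak S/N ~ {peak_snr:5.0f}"
          f"  (~{bits} bits to cover that ratio)")

# 100,000 e- gives S/N ~ 316, i.e. 9-10 bits -- which is why later posts
# argue the extra ADC bits are really about the noise floor, not peak S/N.
```
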
  10. gsum

    gsum Guest

    <jpc> wrote in message news:...
    > On 12 Aug 2004 07:53:50 -0700, (Mitch Alsup) wrote:
    >
    > >"gsum" <> wrote in message

    news:<411b1589$>...
    > >> Very interesting comments.
    > >>
    > >> One further comment: The resolution of each ccd sensor is unlikely to
    > >> be able to resolve anything like 16 bits which is probably why the
    > >> manufacturers limit the raw resolution to 12 bits. At higher ISOs
    > >> the resolution is 'swamped' by noise so there is no point in attempting
    > >> to preserve all 12 bits.
    > >>
    > >> Graham
    > >>

    > >
    > >You are confusing the storage capacity in each sensor on the CCD/CMOS
    > >with the number of bits in the Analog to Digital converter. IIRC the
    > >storage wells are capable of storing 60,000 to 100,000 electrons,
    > >while the A/Ds are limited to 12 bits due to cost structure. That is
    > >it is easy and cheap to manufacture a 12-bit A/D but considerably more
    > >expensive to manufacture a 14-bit A/D or a 16-bit A/D. Also note, the
    > >faster on wants the A/D to operate the less accurate each measurement
    > >becomes.
    > >


    No, I'm not. You're assuming that those 60-100k electrons are an accurate
    representation of the light intensity, but this is not the case. The step
    change of the ls bit of a 14-bit word is minute when compared with the
    ms bit, and I doubt that any A/D can measure such a small change.

    Graham
     
    gsum, Aug 13, 2004
    #10
  11. gsum wrote:
    []
    > No I'm not. You're assuming that those 60 - 100 k electrons are an
    > accurate representation of the light intensity but this is not the
    > case. The step change
    > of the ls bit of a 14 bit word is minute when compared with the ms bit
    > and I doubt that any A/D can measure such a small change.
    >
    > Graham


    ADCs can be accurate to 16-bits or even better (some audio ADCs are 24
    bits).

    Cheers,
    David
     
    David J Taylor, Aug 13, 2004
    #11
  12. gsum

    gsum Guest

    It is easy to demonstrate that a ccd sensor has nowhere near that level
    of accuracy - zoom in on an area of uniform colour and the
    slight variations in colour between pixels will become apparent.
    Manufacturers may claim x bits of accuracy for the A/D but the
    electronics also includes an analogue sensor that is subject to many factors
    that affect its performance. Temperature, manufacturing faults, humidity
    to name but a few.

    Graham

    "David J Taylor" <-this-bit.nor-this-part.uk>
    wrote in message news:nc_Sc.4408$...

    >
    > ADCs can be accurate to 16-bits or even better (some audio ADCs are 24
    > bits).
    >
    > Cheers,
    > David
    >
    >
     
    gsum, Aug 13, 2004
    #12
  13. gsum wrote:
    > "David J Taylor"
    > <-this-bit.nor-this-part.uk> wrote in
    > message news:nc_Sc.4408$...
    >
    >>
    >> ADCs can be accurate to 16-bits or even better (some audio ADCs are
    >> 24 bits).
    >>
    >> Cheers,
    >> David


    > It is easy to demonstrate that a ccd sensor has nowhere near that
    > level
    > of accuracy - zoom in on an area of uniform colour and the
    > slight variations in colour between pixels will become apparent.
    > Manufacturers may claim x bits of accuracy for the A/D but the
    > electronics also includes an analogue sensor that is subject to many
    > factors that affect its performance. Temperature, manufacturing
    > faults, humidity
    > to name but a few.
    >
    > Graham
    >


    Graham, I would agree with you about the possible requirements of an ADC
    for a digital camera, but you said:

    "The step change of the ls bit of a 14 bit word is minute when compared
    with the ms bit and I doubt that any A/D can measure such a small change."

    and I was simply saying that there /are/ ADCs which can measure a 14-bit
    change.

    Cheers,
    David
     
    David J Taylor, Aug 13, 2004
    #13
  14. Mitch Alsup

    Mitch Alsup Guest

    jpc wrote in message news:<>...
    > On 12 Aug 2004 07:53:50 -0700, (Mitch Alsup) wrote:
    >
    > >You are confusing the storage capacity in each sensor on the CCD/CMOS
    > >with the number of bits in the Analog to Digital converter. IIRC the
    > >storage wells are capable of storing 60,000 to 100,000 electrons,
    > >while the A/Ds are limited to 12 bits due to cost structure. That is
    > >it is easy and cheap to manufacture a 12-bit A/D but considerably more
    > >expensive to manufacture a 14-bit A/D or a 16-bit A/D. Also note, the
    > >faster on wants the A/D to operate the less accurate each measurement
    > >becomes.
    > >

    >
    > A question and a comment.
    >
    > Where did you get the 60,000 to 100,000 photoelectron number? I've
    > heard numbers like 400 to 800 photoelectrons per square micron of
    > sensor area. If my numbers are still correct for the current
    > technology, a well depth of 100000 photo electons means some very
    > large sensors such as those used in astronomy.


    I started here from google 'CCD well capacity':

    http://www.photomet.com/library_enc_fwcapacity.shtml
    http://www.mc2scopes.com/en-gb/dept_28.html
    http://www.phys-astro.sonoma.edu/observatory/documentation/st7_specifications.html

    But then found a photo forum that was discussing this exact issue:

    http://www.photo.net/bboard/q-and-a-fetch-msg?msg_id=008gr8

    See post 5:

    Mark U , jun 29, 2004; 03:16 p.m.
    It's not necessarily true that more bits give a greater dynamic range.
    The dynamic range depends on the ratio between the maximum exposure a
    pixel can take (its "well capacity") and the minimum level that can be
    distinguished from background electronic noise from various sources.
    2^16 is 64K, which is the same order of magnitude as the well capacity
    of a typical DSLR pixel (40-100K is the rough range for today's DSLRs).
    Background noise levels are commonly at least of the order of 30, or
    2^5 (maybe slightly lower i.e. around 2^4 with a high end cooled back
    for MF cameras - e.g. Phase One or Sinar), which leaves only 10-12 bits
    of real information at best. In practice, it's usually worse than this
    because photon shot noise is an inevitable consequence of quantum mechanics,
    dictating a noise component that has a standard deviation equal to the
    square root of the signal. The extra bits really only come in handy for
    image processing, where they avoid truncation e.g. when pixel values are
    multiplied by factors with fractional values. This helps to reduce noise
    in shadow areas, but it doesn't create any more real data - in effect,
    it's a noise reduction artifact.

    Given that the well capacity may even be less than 64K (or not a convenient
    multiple of 64K) how do the extra bits arise? The pixel signal is amplified
    (the amplification factor depends on the ISO set). This amplification is not
    perfect - it introduces some noise. A pixel signal (including pixel and
    readout noise) of 1,000 electrons might be supposed to be amplified by 20
    times, yet produce an output of say 20,193. The amplified signal is then
    fed to the analogue/digital converter (ADC), which subtracts an assumed
    average background noise level before digitising. It is the ADC rather than
    the sensor which determines the number of bits in the image file.

    >
    > As for the comment, if the well depth was 100000 electrons you would
    > still have only a S/N of about 300 because of photon noise. This could
    > be easily digitized by a 10 bit A/D if the sensor output was matched
    > to the dynamic range of the A/D. I believe the real reason camera
    > manufactures use a 12 bit A/D is;


    Agreed, as SQRT(100,000) = 316 is the signal to photon noise ratio
    (i.e. the signal to noise in the image itself). But read the whole
    thread...

    >
    > 1 they are cheap and relatively fast
    >
    > 2 having more bits than needed allows them to set the white point a
    > couple bits below the max voltage limits of the A/D. That way they can
    > still use the low responsivity sensors that come of the line
    >
    > jpc


    Mitch
     
    Mitch Alsup, Aug 13, 2004
    #14
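
The dynamic-range arithmetic in the Mark U post quoted above comes down to one division. A minimal sketch, using the rough figures from that quote (a 60,000-electron well and a noise floor of about 30 electrons; neither is tied to a specific camera):

```python
# Usable dynamic range ~ full-well capacity / background noise floor;
# the ADC only needs enough bits to cover that ratio.  Figures are the
# rough ones from the quoted post, not measurements of any real camera.
import math

full_well = 60_000      # electrons (40K-100K quoted as typical for DSLRs)
noise_floor = 30        # electrons of background/read noise (order 2^5)

ratio = full_well / noise_floor
print(f"dynamic range ~ {ratio:.0f}:1  ->  ~{math.log2(ratio):.1f} bits")
# ~2000:1, i.e. roughly 11 bits of real information, matching the
# "10-12 bits at best" conclusion quoted above.
```
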
  15. chibitul

    Guest

    Mitch Alsup <> wrote:
    > "gsum" <> wrote in message news:<411b1589$>...
    >> Very interesting comments.
    >>
    >> One further comment: The resolution of each ccd sensor is unlikely to
    >> be able to resolve anything like 16 bits which is probably why the
    >> manufacturers limit the raw resolution to 12 bits. At higher ISOs
    >> the resolution is 'swamped' by noise so there is no point in attempting
    >> to preserve all 12 bits.


    > You are confusing the storage capacity in each sensor on the CCD/CMOS
    > with the number of bits in the Analog to Digital converter. IIRC the
    > storage wells are capable of storing 60,000 to 100,000 electrons,
    > while the A/Ds are limited to 12 bits due to cost structure.


    Let's say, optimistically, that the noise level is about 30
    photoelectrons. In that case, the dynamic range is 2,000-3,000, or about
    11 bits. A 12-bit ADC is a good choice. Even if you had a 16-bit ADC
    there'd be no point saving all 16 bits.

    Andrew.
     
    , Aug 13, 2004
    #15
  16. lid wrote:
    []
    >> You are confusing the storage capacity in each sensor on the CCD/CMOS
    >> with the number of bits in the Analog to Digital converter. IIRC the
    >> storage wells are capable of storing 60,000 to 100,000 electrons,
    >> while the A/Ds are limited to 12 bits due to cost structure.

    >
    > Let's say, optimistically, that the noise level is about 30
    > photoelectrons. In that case, the dynamic range is 2-3000, or about
    > 11 bits. A 12-bit ADC is a good choice. Even if you had a 16-bit ADC
    > there'd be no point saving all 16 bits.
    >
    > Andrew.


    Doesn't Poisson give noise for N photons as square-root (N) ?

    Cheers,
    David
     
    David J Taylor, Aug 13, 2004
    #16
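
David's sqrt(N) question can be checked numerically as well as stated analytically. A small Monte Carlo sketch (Python with NumPy; the mean photon counts are arbitrary examples):

```python
# For Poisson-distributed photon arrivals, the standard deviation of the
# count is sqrt(mean) -- the "photon noise" mentioned throughout the thread.
import numpy as np

rng = np.random.default_rng(0)
for mean_photons in (100, 1_000, 100_000):
    counts = rng.poisson(mean_photons, size=200_000)
    print(f"mean {mean_photons:>7}: measured sigma {counts.std():8.1f}, "
          f"sqrt(mean) {np.sqrt(mean_photons):8.1f}")

# sigma tracks sqrt(mean), so a 100,000 e- full well carries ~316 e- of
# shot noise before any sensor or ADC noise is added.
```
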
  17. jpc

    jpc Guest

    On 13 Aug 2004 07:25:12 -0700, (Mitch Alsup) wrote:

    >I started here from google 'CCD well capacity':
    >
    >http://www.photomet.com/library_enc_fwcapacity.shtml
    >http://www.mc2scopes.com/en-gb/dept_28.html
    >http://www.phys-astro.sonoma.edu/observatory/documentation/st7_specifications.html
    >
    >But then found a photo forum that was discussing this exact issue:
    >
    >http://www.photo.net/bboard/q-and-a-fetch-msg?msg_id=008gr8



    Thanks for the links. My data of 400 to 800 electrons per square
    micron was from a 6-year-old report. Now we seem to be at 800 to 1200
    electrons per square micron, which makes some reverse-engineering
    numbers I calculated for my camera more believable.

    jpc
     
    jpc, Aug 14, 2004
    #17
  18. lid wrote:
    >
    > Let's say, optimistically, that the noise level is about 30
    > photoelectrons. In that case, the dynamic range is 2-3000, or about
    > 11 bits. A 12-bit ADC is a good choice. Even if you had a 16-bit
    > ADC there'd be no point saving all 16 bits.


    That would be an extremely optimistic value. Let's assume, for the
    moment, that you have a perfect detector that converts each incident
    photon into a single electron. There is noise inherent in the
    incident photon signal, on the order of the square root of the
    number of incident photons.

    So, a 100,000 photon signal has about 316 noise photons. Other noise
    sources inherent in the sensor only add to this result. At best,
    you're looking at a signal to noise ratio of about 316 to 1. Twelve
    bits is more than enough to handle this span, with room to spare
    for better noise quantization at the low end and/or some margin at
    the high end to keep from completely filling electron wells.

    This has been mentioned already in part by David Taylor, and more
    thoroughly covered by jpc in other parts of this thread.

    BJJB
     
    BillyJoeJimBob, Aug 14, 2004
    #18
  19. Re: dSLR dynamic range question: 14 bits required

    Guys,

    You forget an important fact about signal to noise. While it is
    true that Poisson statistics say the signal to noise is
    square root of the number of photons counted by the
    system (this is the best possible; other added noise only makes
    things worse), the low end is as important as the high end.

    Let's say (for round numbers), the full well capacity is 100,000
    electrons, and the quantum efficiency is 100%, so this represents
    100,000 photons. The signal to noise is 100,000/sqrt(100,000),
    or about 316. In general, you need at least 1.5 bits per
    root mean square (RMS) noise to reasonably digitize the noise, so
    316*1.5 = 474, and 9 bits is sufficient.

    However, now let's look at the low end. The low end on the Canon
    10D at 70 degrees F, ISO 100, is a root mean square noise of about 16;
    see: http://clarkvision.com/astro/canon-10d-signal-to-noise
    This drops to 15 DN at 33 F (not much difference). If 100,000
    is full well, and you digitize 1.5 DN per RMS noise, then you
    need 100,000*1.5/15 = a dynamic range of 10,000, or 14-bit resolution.
    The new Canon 1D Mark II is lower noise, so probably 15 bits are
    needed.

    SUMMARY:
    Digitizing the low end of the system sets the bits/RMS noise, the
    full well at the top sets the range, and the two together set
    the required bits. Current sensors are well within the
    14-bit range requirement.

    Roger



    Mitch Alsup wrote:

    > jpc wrote in message news:<>...
    >
    >>On 12 Aug 2004 07:53:50 -0700, (Mitch Alsup) wrote:
    >>
    >>
    >>>You are confusing the storage capacity in each sensor on the CCD/CMOS
    >>>with the number of bits in the Analog to Digital converter. IIRC the
    >>>storage wells are capable of storing 60,000 to 100,000 electrons,
    >>>while the A/Ds are limited to 12 bits due to cost structure. That is
    >>>it is easy and cheap to manufacture a 12-bit A/D but considerably more
    >>>expensive to manufacture a 14-bit A/D or a 16-bit A/D. Also note, the
    >>>faster on wants the A/D to operate the less accurate each measurement
    >>>becomes.
    >>>

    >>
    >>A question and a comment.
    >>
    >> Where did you get the 60,000 to 100,000 photoelectron number? I've
    >>heard numbers like 400 to 800 photoelectrons per square micron of
    >>sensor area. If my numbers are still correct for the current
    >>technology, a well depth of 100000 photo electons means some very
    >>large sensors such as those used in astronomy.

    >
    >
    > I started here from google 'CCD well capacity':
    >
    > http://www.photomet.com/library_enc_fwcapacity.shtml
    > http://www.mc2scopes.com/en-gb/dept_28.html
    > http://www.phys-astro.sonoma.edu/observatory/documentation/st7_specifications.html
    >
    > But then found a photo forum that was discussing this exact issue:
    >
    > http://www.photo.net/bboard/q-and-a-fetch-msg?msg_id=008gr8
    >
    > See post 5::
    >
    > Mark U , jun 29, 2004; 03:16 p.m.
    > It's not necessarily true that more bits give a greater dynamic range.
    > The dynamic range depends on the ratio between the maximum exposure a
    > pixel can take (its "well capacity") and the minimum level that can be
    > distinguished from background electronic noise from various sources.
    > 2^16 is 64K, which is the same order of magnitude as the well capacity
    > of a typical DSLR pixel (40-100K is the rough range for today's DSLRs).
    > Background noise levels are commonly at least of the order of 30, or
    > 2^5 (maybe slightly lower i.e. around 2^4 with a high end cooled back
    > for MF cameras - e.g. Phase One or Sinar), which leaves only 10-12 bits
    > of real information at best. In practice, it's usually worse than this
    > because photon shot noise is an inevitable consequence of quantum mechanics,
    > dictating a noise component that has a standard deviation equal to the
    > square root of the signal. The extra bits really only come in handy for
    > image processing, where they avoid truncation e.g. when pixel values are
    > multiplied by factors with fractional values. This helps to reduce noise
    > in shadow areas, but it doesn't create any more real data - in effect,
    > it's a noise reduction artifact.
    >
    > Given that the well capacity may even be less than 64K (or not a convenient
    > multiple of 64K) how do the extra bits arise? The pixel signal is amplified
    > (the amplification factor depends on the ISO set). This amplification is not
    > perfect - it introduces some noise. A pixel signal (including pixel and
    > readout noise) of 1,000 electrons might be supposed to be amplified by 20
    > times, yet produce an output of say 20,193. The amplified signal is then
    > fed to the analogue/digital converter (ADC), which subtracts an assumed
    > average background noise level before digitising. It is the ADC rather than
    > the sensor which determines the number of bits in the image file.
    >
    >
    >>As for the comment, if the well depth was 100000 electrons you would
    >>still have only a S/N of about 300 because of photon noise. This could
    >>be easily digitized by a 10 bit A/D if the sensor output was matched
    >>to the dynamic range of the A/D. I believe the real reason camera
    >>manufactures use a 12 bit A/D is;

    >
    >
    > Agreed as SQRT(100,000) = 316 as the signal to photon noise ratio (
    > e.g. the signal to noise in the image itself). But read the whole
    > thread...
    >
    >
    >>1 they are cheap and relatively fast
    >>
    >>2 having more bits than needed allows them to set the white point a
    >>couple bits below the max voltage limits of the A/D. That way they can
    >>still use the low responsivity sensors that come of the line
    >>
    >>jpc

    >
    >
    > Mitch
     
    Roger N. Clark (change username to rnclark), Aug 14, 2004
    #19
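
Roger's 14-bit conclusion follows from three numbers, so the calculation is easy to rerun. The sketch below uses his round figures (100,000-electron full well, roughly 15 electrons RMS read noise for the Canon 10D, and his 1.5-DN-per-RMS-noise rule of thumb); swapping in other values shows how the requirement moves.

```python
# Required ADC bits = log2(full_well * DN_per_noise / read_noise),
# using Roger's round numbers from the post above.
import math

full_well = 100_000        # electrons at saturation
read_noise = 15            # electrons RMS at the dark end (Canon 10D figure)
dn_per_noise = 1.5         # digital numbers per RMS noise, his rule of thumb

levels = full_well * dn_per_noise / read_noise
print(f"levels needed ~ {levels:.0f}  ->  {math.ceil(math.log2(levels))} bits")
# ~10,000 levels, i.e. a 14-bit converter; a lower-noise sensor such as
# the 1D Mark II pushes the requirement toward 15 bits.
```
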
  20. writes:

    >So, a 100,000 photon signal has about 316 noise photons. Other noise
    >sources inherent in the sensor only add to this result. At best,
    >you're looking at a signal to noise ratio of about 316 to 1.


    That all applies to scene white, or whatever in the scene actually
    generates 100,000 photons per pixel. Now suppose the darkest usable
    portion of the scene is 1% as bright (a bit less than 7 stops dynamic
    range). Here you have 1000 photons, with a noise standard deviation
    of 36 photons. This needs to be digitized with the same A/D converter
    and the same amplifier at the same gain as the 100,000-photon pixel.

    >Twelve
    >bits is more than enough to handle this span, with room to spare
    >for better noise quantization at the low end and/or some margin at
    >the high end to keep from completely filling electron wells.


    It might be enough, depending on how large you want the A/D conversion
    quantization error to be compared to the inherent photon noise. But to
    determine this, you need to look at the size of the noise in the deepest
    resolvable shadows compared to the signal in the brightest (non
    saturated) portion of the image.

    Dave
     
    Dave Martindale, Aug 14, 2004
    #20
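
Dave's point about the deep shadows can also be put into numbers. The sketch below compares the ADC step size with the noise in a 1% grey patch, using the figures quoted in this thread (100,000 electrons at full scale, about 36 electrons of shadow noise); the bit depths tried are just examples.

```python
# ADC step size (in electrons) versus the noise in a deep-shadow pixel,
# for a few candidate bit depths.  Numbers follow the posts above.
full_well = 100_000               # electrons mapped onto ADC full scale
shadow_signal = full_well // 100  # the 1% patch, ~1,000 electrons
shadow_noise = 36                 # e- RMS (shot noise plus some read noise)

for bits in (10, 12, 14):
    step = full_well / (2 ** bits)          # electrons per ADC count
    print(f"{bits}-bit ADC: step {step:5.1f} e-,"
          f" shadow signal {shadow_signal / step:6.1f} counts,"
          f" noise spans {shadow_noise / step:4.1f} steps")

# At 12 bits the shadow noise spans only ~1.5 ADC steps; at 14 bits it
# spans ~6, which is the sort of margin the 14-bit argument asks for.
```
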
