low light

Discussion in 'Digital Photography' started by ipy2006, Mar 7, 2007.

  1. But you keep comparing this artificial A/D limit to other
    cameras (with small pixels) that aren't limited by the
    A/D converter.
    The test data I've seen do not indicate this; whether it is
    44,000 or 51,000 makes little difference; that is only 0.2 stop.
    And is your camera calibrated to better than 0.2 stop, so you can be
    sure you are seeing a true sensor property? The calibration
    includes the f/stop, the shutter speed, the light-source
    stability, and the ISO gain. Again, it matters little
    in the argument.
    Your smaller range (3967) only makes your argument worse.
    No, the error in quantization is +/- 1 bit. There are no "half" bits!
    I took the 2-bit error and assumed the RMS would be root 2,
    hence the factor 1.4. But I have underestimated the A/D noise.
    It seems that the minimum noise is more like 1.8 ADU.
    This is illustrated on Christian Buil's web page, see Table 3:
    The data for canon sensors clearly shows all the sensors
    bottoming out just above 1.8 ADU.
    1 bit = 1 ADU, the smallest unit of change.
    What is your evidence for the noise source? I pointed you to
    Christian Buil's web page, which clearly shows the read noise
    from multiple sensors bottoming out above 1.8 ADU. Yet 16-bit
    CCDs used in amateur and professional astronomy properly
    digitize read noise lower than 26 electrons. So do point
    and shoot cameras with smaller well depths. P&S cameras
    with their cheaper electronics don't seem limited by
    some mythical 26 electron analog noise. To the contrary,
    a 1.8 ADU error in A/D conversion best describes all the
    available data, from P&S to scientific systems.
    Why not? 4800 electrons/3982 = 1.2 electrons/ADU, pretty good.
    Sorry, but I see no evidence for the need to oversample
    electrons 3x. That would be ISO 4800 on the Canon 5D!
    Again, see Christian Buil's web page. He has a section
    on optimal gain: see Table 5:

    What you need is adequate sampling of the noise, not of every electron.
    Christian derives, for example the optimal gain for the 5D
    at ISO 1100, or 1.5 electrons/ADU.
    Hmmm. Doesn't fit your paradigm, so it's marketing lies.
    Canon claims significant improvement in shadow detail.
    We'll have to wait and see until real tests are performed,
    not reviewers using raw converters that mess up the signal.
    Sorry, not the physics of the real world. If we had perfect
    sensors with zero added noise, then small pixels summed
    could equal larger pixels, but not better them.
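    The 1.8 ADU floor argued above can be sketched numerically. This is a hypothetical model, assuming the analog read noise (referred through the gain) and a fixed A/D conversion noise add in quadrature; the function name and the 19.5 e-/ADU example gain are illustrative:

```python
import math

def total_read_noise_adu(analog_noise_e, gain_e_per_adu, adc_noise_adu=1.8):
    """Read noise seen at the A/D output, assuming the analog noise
    (converted to ADU via the gain) adds in quadrature with a fixed
    ~1.8 ADU conversion-noise floor (the figure argued from Buil's data)."""
    return math.hypot(analog_noise_e / gain_e_per_adu, adc_noise_adu)

# Even a perfectly quiet sensor would bottom out at the 1.8 ADU floor:
print(total_read_noise_adu(0.0, 19.5))   # 1.8
# At a coarse gain of 19.5 e-/ADU that floor alone is ~35 electrons
# when referred back to the input, matching the "26+ electron" scale of dispute.
```

    In this model the floor dominates whenever the gain is coarse, which is the restricted low-ISO case being argued about.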

    Roger N. Clark (change username to rnclark), Mar 18, 2007

  2. Lionel Guest

    WTF are you jabbering about with that comment?
    What factor is that, John? The above argument is just as valid if you
    are talking about two different sensors, I'm simply making the point
    that the silicon process types, etc, are irrelevant to my point, which
    is simply that a small well has less dynamic range than a big well,
    which should be fairly obvious.
    Then what is it about, exactly?
    I've already proved you wrong, the comical lengths seem to be what's
    required to make you see the flaws in your incorrect beliefs.
    Lionel, Mar 18, 2007

  3. acl Guest

    He is trying to say that bits=log(counts)/log(2), and you meant
    counts, not bits.
    acl, Mar 18, 2007
  4. John Sheehy Guest

    I am "jabbering" about the fact that you went way out of the context of
    the conversation to create a scenario that involves microlenses and
    affects dynamic range. The assumed normal context is uniform
    microlenses, and all other things being equal. If you had something
    unusual in mind like mixed well depths and mixed microlenses, you should
    have said so. I never would have said that partial microlenses would not
    affect DR of the system.
    See above.
    That's only true if there is no read noise. Read noise can potentially
    limit the DR of a larger well more.
    It's about mixed microlenses and well capacities. You hinted *NOTHING*
    about this early on, and even after disagreement you did not even ask,
    "what about cases in which microlenses are different or missing on a
    fraction of the pixels?".
    You haven't proved me wrong, because you never said what it is that you
    had in mind until the end of a few rounds of disagreement. You are
    either clumsy in your argument, or intentionally deceptive.

    I never would have said that selective use of microlenses on pixels would
    not affect DR, and I think that it is totally unreasonable to expect
    that to be the context when discussing whether or not microlenses affect
    DR, because they do not affect the DR of any pixel, and blanket microlens
    use is the norm.

    John Sheehy, Mar 18, 2007
  5. We've been over this one before in this news group.
    For the same image quality, there is NO loss in
    depth of field by using a larger sensor.
    Details are discussed here:

    Roger N. Clark (change username to rnclark), Mar 18, 2007
  6. Binning methods are used in astronomy all the time.
    e.g.: http://www.mistisoftware.com/Astronomy/Galaxies_m33.htm
    Sometimes binning is needed to improve the signal-to-noise ratio
    on faint subjects.

    It is also used in scientific work. Thus it is well understood,
    and the claims being made here about equaling or bettering
    a larger pixel sensor have never been demonstrated. The one
    example being used in this thread to support the claim is
    a restricted case where the larger pixel sensor is A/D limited,
    not sensor performance limited. If John would do the test
    with the same two sensors at a higher ISO where the
    large pixel sensor was not A/D limited, he would come to the
    opposite conclusion.

    Roger N. Clark (change username to rnclark), Mar 18, 2007
    By 1-bit I implied the least significant bit. In remote sensing science,
    this is called a DN (Data Number).
    Roger N. Clark (change username to rnclark), Mar 18, 2007
    Yes, the smallest integer interval. In a 12-bit A/D, there are
    2^12 = 4096 levels (0 to 4095). So the finest interval that is recorded is
    the maximum signal into the A/D converter divided by 4095.
    Yes. And digital converters always have an accuracy of +/- one number.
    So if the signal is 2/4095 of full signal, the answer the A/D will give
    is sometimes 1, sometimes 2, or sometimes 3.
    Yes, assuming one can actually "see" one electron (and in electronic
    sensors, the noise is only a few electrons), so there is really no benefit
    to gains higher than digitizing one electron. It's really pretty impressive
    when you think about it: we are buying, for a few hundred dollars,
    devices (digital cameras) that directly detect quantum processes!
    Effectively, yes. They capture so few photons that 12 bits (4096 levels)
    adequately record the highest signals to the smallest signals with the
    few-electron noise. Current electronic sensors, CCDs or CMOS, capture
    at most about 1000 to 2000 photo-electrons per square micron.
    So a 2-micron pixel (4 square microns) fills up with only 4,000 to
    8,000 electrons. 8000/4095 = 1.95 electrons per number out of
    the A/D converter. But a large-pixel DSLR can have 60+ square microns of
    collection area. For example, the 1D Mark II stores a maximum of about
    80,000 electrons (ISO 50), so the 12-bit A/D converter gives
    80,000/4095 = 19.5 electrons per data number. If you boost the
    gain to a higher ISO, so you look at only the bottom 8,000 electrons,
    then the A/D records 1.95 electrons per number, like the small
    P&S camera, but at a much higher ISO. When you boost the gain
    so that one number out of the A/D conversion is equivalent to 1 electron,
    that is the unity gain ISO. That is a factor of more than
    16 from current small pixel P&S cameras to large pixel DSLRs,
    and is the fundamental reason why small sensor cameras have poor
    high ISO performance, and why they always will relative to their
    larger cousins.
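    The unity-gain arithmetic above can be checked with a short sketch. The 80,000 e- / ISO 50 figures are the ones quoted in the post; the linear scaling of gain with ISO is an assumption, and the function names are illustrative:

```python
def electrons_per_adu(full_well_e, bits=12):
    """Gain implied by spreading the full-well signal over a 12-bit range."""
    return full_well_e / (2 ** bits - 1)

def unity_gain_iso(base_iso, base_gain_e_per_adu):
    """ISO at which 1 ADU corresponds to ~1 electron, assuming the gain
    scales linearly with ISO (an assumption, not a measured property)."""
    return base_iso * base_gain_e_per_adu

g = electrons_per_adu(80_000)        # 1D Mark II at ISO 50
print(round(g, 1))                   # 19.5 e-/ADU
print(round(unity_gain_iso(50, g)))  # 977, i.e. roughly ISO 1000
```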
    Noise that we view is mostly due to intensity variations.
    Noise due to color shifts is called chrominance noise and is less
    bothersome. Noise in the bright parts of a scene is not objectionable
    to most viewers, but noise becomes more obvious in night scenes
    or in shadow detail. For example, look at Figure 5 on this page:
    The Canon S70 image looks pretty noisy, especially in the dark areas,
    and that is due to a few electrons of noise. So 1-bit noise is usually
    not a factor unless one is pushing limits (as is done in
    high ISO action photography, and night and low light photography).
    All electronic sensors have some electrons that leak into the
    well with the other electrons from the converted photons.
    The dark current amount is temperature dependent and that adds noise
    equal to the square root of the number of electrons accumulated
    from the dark current over the exposure. For most modern
    digital cameras with exposures less than a few tens of seconds,
    dark current is negligible. For long exposures of
    minutes, it can become dominant over read noise.
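    The square-root relation for dark-current noise can be sketched as follows; the 0.1 e-/s dark current is an illustrative assumption, not a measured camera value:

```python
import math

def dark_noise_e(dark_current_e_per_s, exposure_s):
    """Shot noise of the accumulated dark charge, in electrons:
    the square root of the number of dark electrons collected."""
    return math.sqrt(dark_current_e_per_s * exposure_s)

print(round(dark_noise_e(0.1, 10), 2))    # 1.0 e-: negligible for short exposures
print(round(dark_noise_e(0.1, 600), 2))   # 7.75 e-: rivals read noise in minutes-long exposures
```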
    Because there is noise in all signals, e.g. read noise, the natural
    fluctuations can send a measured signal to a negative voltage.
    Manufacturers usually set a small offset in the electronics
    voltage to compensate. Let's say the sensor puts out 1 volt at the
    output amplifier to the analog-to-digital (A/D) converter. Manufacturers
    add a small negative voltage, like -0.02 volts, so the A/D converter
    digitizes from -0.02 volt to 1 volt. Thus zero light on the sensor
    gives about data number 100 in the output raw file. Some raw converters
    subtract off that level, but some values will be less than 100, and
    in the subtracted image those values would be clipped at zero.
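    The clipping effect described above can be seen in a toy sketch; the offset of 100 DN and the sample values are hypothetical:

```python
import numpy as np

# Dark-frame raw values fluctuating around a hypothetical bias offset of 100 DN
raw = np.array([94, 97, 100, 104, 109])

# Naive offset subtraction clips the negative half of the noise distribution
subtracted = np.clip(raw - 100, 0, None)
print(subtracted.tolist())   # [0, 0, 0, 4, 9]
# The true mean signal is (-6 - 3 + 0 + 4 + 9)/5 = 0.8 DN,
# but clipping biases the measured mean up to 13/5 = 2.6 DN
```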
    Actually, what matters is:
    1) quantum efficiency of converting photons to electrons
    (typically in the 30 to 50% range in modern digital cameras,
    and that is very good),
    2) active area to convert photons to electrons (currently effectively
    in the 80% range, although manufacturers do not generally
    publish that number; Kodak does for their sensors), and
    3) the full well capacity to hold those electrons.

    Quantum efficiency is similar for current consumer devices, so
    within a factor of two they are pretty much the same. Full well
    capacity is correlated to pixel pitch, as is active area.
    Full well capacity is about 1000 to 2000 electrons per square
    micron. The vertical scatter in the pixel pitch plots you refer to is
    mostly due to the variations in active area, full well capacity,
    and quantum efficiency between devices.
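    The 1000 to 2000 e- per square micron rule of thumb gives a quick full-well estimate from pixel size. A rough sketch with an assumed mid-range density; the function and the 1500 e-/µm² figure are illustrative, and active-area losses are ignored:

```python
def full_well_estimate(pixel_pitch_um, density_e_per_um2=1500):
    """Rough full-well capacity from pixel pitch, assuming ~1500 e- per
    square micron of pixel area (mid-range of the 1000-2000 rule of thumb)."""
    return density_e_per_um2 * pixel_pitch_um ** 2

print(full_well_estimate(2.0))   # 6000.0 e-: a small P&S pixel
print(full_well_estimate(8.2))   # ~100,000 e-: a large DSLR pixel like the 1D Mark II
```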
    Not quite. Not a color, but an intensity in each red, green or
    blue channel.
    The raw conversion with Bayer interpolation is variable. Some converters,
    like the Canon converter, do minimal sharpening and effectively average
    pixels, reducing noise by about 1.5x. Other converters (in their default
    settings) attempt to increase apparent spatial detail, but at the
    expense of increased noise. RawShooter Essentials is one
    such example (its technology is now in the Photoshop CS3 beta), and does very
    well in my experience. It is nice to have the high signal-to-noise
    ratio that large pixels give to play the game in raw conversion:
    do I want a lower noise lower resolution image, or more detail
    at the expense of noise? If the signal-to-noise ratio is high
    to begin with, you can afford to push for more apparent
    spatial detail. You don't have that luxury with smaller pixels
    and the lower signal-to-noise ratios they give.
    Yes, I basically agree. One must have adequate S/N to white balance,
    however. For example, there are very few blue photons from
    an incandescent lamp, so after white balancing noise in the blue
    can be quite large. In that case it might be better to use a color
    correction filter on the lens and a longer exposure to get more blue light.
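    The blue-channel point can be illustrated with a sketch; the photon count and the 4x white-balance gain are hypothetical values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Few blue photons per pixel under incandescent light -> Poisson shot noise
blue = rng.poisson(lam=25, size=100_000).astype(float)
balanced = blue * 4.0   # hypothetical blue white-balance gain

snr_before = blue.mean() / blue.std()
snr_after = balanced.mean() / balanced.std()
# The gain scales signal and noise equally, so S/N is unchanged...
assert abs(snr_before - snr_after) < 1e-9
# ...but the absolute noise is 4x larger, which is what you see in the image
print(round(balanced.std() / blue.std(), 3))   # 4.0
```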
    I think so. It's those who don't know what question to ask that
    are probably not understanding (unless of course they completely

    Very good questions. I'll probably develop this into a web page and
    add it to my sensor analysis section.

    Roger N. Clark (change username to rnclark), Mar 18, 2007
  9. The reality of all of this is if you were to use the world famous and
    legendary 58mm f/1.2 Noct Nikkor none of this would even be a concern. A
    great lens eliminates all of these problems, real or imagined.

    =?iso-8859-1?Q?Rita_=C4_Berkowitz?=, Mar 18, 2007
  10. Lionel Guest

    Yes, I know, but lots of people reading this group won't know that, &
    will confuse it with an output bit.
    I'm used to more mundane ADC applications, where we refer to LSBs or
    counts. ;^)
    Lionel, Mar 18, 2007
  11. John Sheehy Guest

    The "one example" was at ISO 1600. Read noise is 3.34 electrons at ISO
    1600 on the FZ50, and as I already said, I found afterwards that just
    pushing ISO 100 would have been better, with a read noise of 2.7 electrons.
    Nine (3x3) 2.7-electron FZ50 ISO 1600 pixels binned together will collect a max of
    2700 electrons, with a read noise of about 8.1 electrons, quite comparable
    to a DSLR. The best Canons are about half of that; shot noise is
    significant in ISO 1600 shadows, however, and should be similar.

    If you don't bin, you have 3x the linear resolution.
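    The binned-read-noise figure above follows from adding independent per-pixel read noises in quadrature, as in the post's arithmetic; a quick check, assuming a 3x3 (nine-pixel) software bin:

```python
import math

def binned_read_noise_e(n_pixels, per_pixel_noise_e):
    """Read noise of a binned super-pixel, assuming each pixel's read
    noise is independent and adds in quadrature: sqrt(N) * per-pixel noise."""
    return math.sqrt(n_pixels) * per_pixel_noise_e

print(round(binned_read_noise_e(9, 2.7), 2))   # 8.1 e- for a 3x3 bin
```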

    John Sheehy, Mar 18, 2007
  12. John Sheehy Guest

    Which problems? There are multiple layers going on here.

    I'll assume you mean the long-lost topic of the OP: low light (sorry, I
    should have started a new thread).

    Even with fast lenses, you hit limits, too. And do you really want to use
    such a lens wide open? Focusing is difficult in low light, and the DOF at
    f/1.2 is not very forgiving. Many fast lenses are optically compromised
    wide open, too, even when focused, and tend to have lots of luminance roll-
    off in the corners. I generally don't let my f/1.4 lenses shoot below
    f/2.0. The lens you suggest, of course, may only have the DOF issue.

    John Sheehy, Mar 18, 2007
  13. Binning will result in gross aliasing effects, which are very nasty with a
    Bayer sensor. So it's probably not useful in real life.

    Noise reduction (via a Gaussian blur) followed by downsampling wouldn't have
    that problem.

    David J. Littleboy
    Tokyo, Japan
    David J. Littleboy, Mar 18, 2007
  14. acl Guest

    If I take 2x2 blocks and replace all 4 spins by the average of the
    original 4 spins (but do not coarse-grain them into 1 pixel, but
    instead leave them as 4 separate but identical pixels), is this a
    blurring operation or not? If I next take these 4 (now identical)
    pixels and create one with the same value, is the combined operation
    what is meant by binning or not?

    Compare the above to "nearest neighbour" interpolation, which is to
    take one of the 4 pixels and discard the others (thus no blurring
    before downsampling, if you prefer this way of looking at it).
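    The two-step decomposition above (block-average in place, then keep one pixel per block) does indeed equal direct binning; a minimal sketch, assuming a grayscale array and 2x2 blocks (the function name is illustrative):

```python
import numpy as np

def block_average_in_place(img, k=2):
    """Replace each k x k block by its mean but keep the image size
    (the 'blur' step: k*k separate but identical pixels per block)."""
    h = (img.shape[0] // k) * k
    w = (img.shape[1] // k) * k
    blocks = img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.kron(blocks, np.ones((k, k)))

rng = np.random.default_rng(0)
img = rng.random((6, 6))

blurred = block_average_in_place(img)              # step 1: the "blur"
binned = blurred[::2, ::2]                         # step 2: keep one pixel per block
direct = img.reshape(3, 2, 3, 2).mean(axis=(1, 3)) # one-shot 2x2 binning
assert np.allclose(binned, direct)                 # the combined operation is binning
```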
    acl, Mar 19, 2007
  15. acl Guest

    pixels damnit, pixels! sorry
    acl, Mar 19, 2007
  16. Yes. Binning (presumably) averages four (or some other number) of pixels
    with no information from any other pixels. This doesn't work very well as a
    low-pass filter, and allows aliasing artifacts to occur.

    Applying a (well-designed) blur at every pixel at the higher resolution and
    then downsampling will produce an image with much lower levels of aliasing.
    That's decimation. And it is, of course, even worse than averaging, since it
    approximates point sampling.
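    The aliasing point can be demonstrated in one dimension: a signal alternating at the Nyquist frequency survives decimation as a completely false flat area, while binning at least averages it to the correct mean. A toy sketch:

```python
import numpy as np

x = np.tile([0.0, 1.0], 8)              # finest possible detail: 0,1,0,1,...

decimated = x[::2]                       # point sampling: picks only the zeros
binned = x.reshape(-1, 2).mean(axis=1)   # 2x1 binning: averages each pair

print(decimated.tolist())   # eight 0.0s -- an aliased, falsely flat "detail"
print(binned.tolist())      # eight 0.5s -- the correct local average
```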

    David J. Littleboy
    Tokyo, Japan
    David J. Littleboy, Mar 19, 2007
  17. John Sheehy Guest

    That's true with large binnings. For 2x2 ones, from cameras with AA
    filters, you're actually not going to see much aliasing.

    There are so many ways to bin, though. If you want to bin big, you have
    the option of including overlapping pixels in multiple output pixels.
    Downsampling often reduces the noise statistic better than binning, but the
    (traditionally) binned image is already sharper to begin with.

    Median filtering is better than gaussian blur, IME. (unless you just do
    blur on 'a' and 'b' in Lab mode for chromatic noise). I made my own CFA-
    aware median filter in Filtermeister that is based on ratio instead of
    difference from neighbors. Good for hot pixels, and warm and cool ones and
    dead ones, as well.

    PS' "dust and scratches" can actually work well, too, before downsampling.

    John Sheehy, Mar 19, 2007
  18. Scott W Guest

    While binning will lead to some artifacts, it is not much worse than
    bicubic. It is pretty simple to do a quick test in Photoshop to see
    how binning will affect the image: use a custom filter with the 5x5
    matrix filled in with 1s and the scale set to 25. Then down sample to
    20% using nearest neighbor sampling. Make sure the original image is
    sized to a multiple of 5 in both height and width, otherwise you won't get
    true binning.
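    The Photoshop recipe above (5x5 box filter, then nearest-neighbor sampling to 20%) can be reproduced and checked against direct 5x5 binning; a sketch assuming a single-channel image whose sides are multiples of 5:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((25, 25))   # sides must be multiples of 5 for true binning

# Direct 5x5 binning: mean of each 5x5 block
binned = img.reshape(5, 5, 5, 5).mean(axis=(1, 3))

# The recipe: 5x5 box filter (custom filter of 1s, scale 25), then
# nearest-neighbor sampling of every 5th filtered pixel
box = np.lib.stride_tricks.sliding_window_view(img, (5, 5)).mean(axis=(2, 3))
sampled = box[0::5, 0::5]    # box[i, j] is the mean of img[i:i+5, j:j+5]

assert np.allclose(sampled, binned)   # the recipe reproduces binning exactly
```

    (Photoshop centers its filter window, so the sampled phase differs by a constant offset, but the result is the same set of 5x5 block means.)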

    This is a test I did, here is the original image.

    This is the image that you get binning 5 x 5 pixels, so the new image
    is 20% the size of the original.
    Note there are some artifacts in the steeple of the church.

    This is a down sampling using bicubic.
    The artifacts look about the same here as in the one that used binning.

    This time I have used gaussian blur followed by down sampling to 20%,
    using nearest neighbor.
    Here we are pretty much free of the artifacts.

    Now if you want something really bad just down sample using nearest
    neighbor with no filtering before hand.

    Now the one disclaimer is that this was not done with the raw values
    from the Bayer sensor but rather a file that had already been
    converted to RGB.

    Scott W, Mar 19, 2007
  19. John Sheehy Guest

    Sounds like PS' "Pixelate|Mosaic".
    That's sort of like binning, but by averaging instead of adding, you are
    working with less precision. Pure binning does not lose anything to
    rounding errors. IOW, if you bin 40, 40, 40, and 41, you get 161, which is
    equivalent to 40.25 in the original scale, but it would be either 40 or 41
    in your method, which is less accurate.
    Nearest neighbor increases noise at the pixel level with an RGB image,
    because the low-frequency noise becomes high-frequency. The nearest-
    neighbor output is full of strong noise at the Nyquist frequency, compared
    to a downsampled or binned output.
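    The precision point is easy to verify with the 40/40/40/41 example from the post:

```python
vals = [40, 40, 40, 41]

binned_sum = sum(vals)                           # exact: worth 40.25 on the original scale
rounded_average = round(sum(vals) / len(vals))   # the quarter-level is lost to rounding

print(binned_sum, binned_sum / 4)   # 161 40.25
print(rounded_average)              # 40
```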

    John Sheehy, Mar 19, 2007
  20. acl Guest

    Yes, at the cost of lowering resolution. No free lunch! While more
    pixels allow a weaker AA filter, downsampling to improve S/N would
    force us to blur them (using software) to a similar extent as the AA
    filter would for larger pixels. Good point.
    That was the point. I didn't know the terminology was used in signal
    processing, if that's where you know it from (I know it from
    elsewhere). I am painfully aware of what it can do to you (much worse
    than false high-frequency detail, it can completely mislead and give
    an explicitly wrong answer in some situations).
    acl, Mar 19, 2007
