low light

Discussion in 'Digital Photography' started by ipy2006, Mar 7, 2007.

  1. ipy2006

    John Sheehy Guest

    Why? DR depends on noise. Less read noise means higher low-standard (1:1
    SNR) DR, and the same number of photons means the same shot noise, and
    therefore the same high-standard (10:1 SNR) DR.

    This all works with or without binning. Don't forget, light is really
    falling everywhere with extreme resolution, and extreme shot noise. Just
    looking at it that way doesn't make it any noisier than it is.
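[Editor's note: the two DR standards mentioned above can be sketched numerically. A minimal model, assuming Poisson shot noise plus Gaussian read noise; the well depth and read-noise figures are illustrative, not measurements of any specific camera:]

```python
import math

def dr_stops(full_well_e, read_noise_e, snr_threshold=1.0):
    """Dynamic range in stops: full-well signal over the smallest signal S
    whose SNR = S / sqrt(S + r^2) reaches the given threshold t."""
    t, r = snr_threshold, read_noise_e
    # Solve S^2 = t^2 (S + r^2) for the noise-floor signal S_min.
    s_min = (t * t + t * math.sqrt(t * t + 4 * r * r)) / 2
    return math.log2(full_well_e / s_min)

# Lowering read noise lifts the 1:1-SNR ("low-standard") DR far more than
# the 10:1-SNR ("high-standard") DR, which is mostly shot-noise limited.
print(dr_stops(50_000, 12, 1.0), dr_stops(50_000, 3, 1.0))
print(dr_stops(50_000, 12, 10.0), dr_stops(50_000, 3, 10.0))
```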

    John Sheehy, Mar 17, 2007

  2. Your number for the 20D is incorrect. What you refer to as read noise
    is not true sensor read noise, but the limitation of a 12-bit A/D converter.
    The 20D has a full signal well depth of 51,000 electrons.
    So: 51000/4095 = 12.5 electrons per A/D bit. Add +/- 1 bit noise
    on each A/D reading, and noise should be about 1.4 bits, or
    1.4 * 12.5 = 17.5 electrons not including actual sensor read
    noise. DSLRs are so good at low ISOs, they are limited at the
    low end by A/D converter electronics, not sensor read noise.
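[Editor's note: the arithmetic above, laid out directly. The 51,000 e- well depth and the "1.4 bits" figure are the post's own numbers; the 1/sqrt(12) ADU line is the textbook RMS quantization noise of an ideal converter, added only for comparison:]

```python
full_well_e = 51_000                   # claimed 20D full-well depth
adc_max = 4_095                        # 12-bit A/D top count
gain = full_well_e / adc_max           # electrons per ADU, ~12.45
claimed_noise_e = 1.4 * gain           # the post's "1.4 bits" figure, ~17.4 e-
ideal_quant_noise_e = gain / 12**0.5   # ideal-ADC RMS quantization noise, ~3.6 e-
print(gain, claimed_noise_e, ideal_quant_noise_e)
```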

    Small pixel P&S cameras get so few photons, that the entire range
    is adequately characterized by 12-bit converters.

    Again, not doing well with 12-bit A/Ds. That is why Canon has
    now announced a camera with 14-bit A/Ds. We'll see more of that
    in the future.
    The real world marches to a complete description of physics,
    not a narrow view to push an agenda.

    Roger N. Clark (change username to rnclark), Mar 17, 2007

  3. 6-micron limit? Who said 6-micron limit? I haven't said anything
    about a 6-micron limit. I've only argued about the premise that
    binning smaller pixels can equal the performance of larger pixels.
    That goes for 8 versus 16-micron pixels or 2 versus 6.
    I also said given the choice of a 200+ megapixel camera with
    2-micron pixels or a 24 mpixel camera with 6 micron pixels,
    I would choose the 200+ camera for static shots and the
    24 mpixel camera for fast action and low light work.

    Roger N. Clark (change username to rnclark), Mar 18, 2007
  4. ipy2006

    John Sheehy Guest


    I didn't say that it was "true sensor read noise". I said that it was
    "read noise", and sometimes I qualify it by saying "blackframe read noise".

    Regardless of the source, it is making the shadows of the RAW files much
    less useful than what the sensor wells themselves record.
    No, it has about 44,000. It has about 26,500 at RAW saturation at ISO
    200, but ISO 100 has less headroom, and the camera stretches the upper
    highlights to reach 4095 ADUs, like the other ISOs on the camera.

    Here is the same scene, with twice the exposure time and ISO 100 on the
    left, and ISO 200 on the right:


    The ISO 100 comes to clipping faster in the gradient, even though all the
    darker tones are exactly the same.
    The 20D has 3967 meaningful ADUs; 0 through 127 are reserved only for
    the black-level offset and negative noise excursions.

    51000/3967 = 12.85 electrons per ADU. Read noise is 2.07 ADU at ISO 100.
    Read noise is then 12.85*2.07 = 26.61 electrons.
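[Editor's note: the conversion above, using the post's own figures:]

```python
full_well_e = 51_000                   # electrons (the figure under discussion)
useful_adus = 3_967                    # 4095 minus the black-level offset region
gain = full_well_e / useful_adus       # ~12.85 e-/ADU
read_noise_adu = 2.07                  # measured ISO 100 blackframe noise
read_noise_e = gain * read_noise_adu   # ~26.6 electrons
print(round(gain, 2), round(read_noise_e, 2))
```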
    +/- 0.5 ADU, with a global offset of +/- 0.5 ADU, which should be
    accounted for in blackpointing the RAW data, and the former adds like any
    other noise; the square root of the sum of the squares, so you don't add
    it linearly. It sometimes goes positive when the analog noise goes
    negative, and vice versa.
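[Editor's note: the "square root of the sum of the squares" rule for independent noise sources, as a one-liner:]

```python
def combine_noise(*sigmas):
    """Independent noise sources add in quadrature, not linearly."""
    return sum(s * s for s in sigmas) ** 0.5

print(combine_noise(3.0, 4.0))  # 5.0, not 7.0
```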
    What kind of math is that? You're multiplying a term called "bits" which
    should be a logarithm, I think, by a linear ratio. Also, where did the
    1.4 come from? The noise in an ISO 100 blackframe from a 20D is about
    2.07 ADU. 2.07 * 12.5 = 25.87. 2.07 * 12.85 = 26.61. Neither is close
    to 17.5.
    Whatever it is, it is a problem. 2.07 ADU of noise is not caused by bit
    depth. It is analog noise, regardless of where it happens in the signal
    chain.
    No argument there, for high ISOs. The FZ50 has 4800 photons per pixel,
    and 3982 (IIRC) RAW values at ISO 100. 12 bits could not count that
    accurately. Even at ISO 200, it would cause an uneven histogram, with a
    gap every so often. Electrons need to be oversampled by about 3x or so
    to avoid erratic histograms (the effects, of course, are quite subtle,
    but if you really were just counting photons, it could make a visible
    difference in deep shadows pushed in PP). Of course, read noise makes
    exact photon counting impossible with any bit depth.
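[Editor's note: the "gap every so often" effect is easy to simulate. A sketch assuming an ideal noiseless sensor quantized at 1.2 electrons per ADU, an arbitrary under-3x-oversampled step:]

```python
import numpy as np

rng = np.random.default_rng(0)
electrons = rng.poisson(50, size=200_000)      # whole-electron photon counts
step = 1.2                                     # e- per ADU: coarser than 1 e-
quantized = np.round(electrons / step) * step  # what the raw file would record
hist, _ = np.histogram(quantized, bins=np.arange(0, 101))
# 1-electron histogram bins that no multiple of 1.2 can land in stay empty,
# producing the periodic gaps described above.
print(hist[41], hist[42])
```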
    Again, probably marketing lies. The read noise is analog. The 1DmkIIIs
    that Canon has released to reviewers have the same read noise, relative to
    max signal at ISO 100, as the mkII does.

    You greatly overestimate the role of bit depth. Its effect is rather
    subtle in the ranges we're talking about.
    The real world has small pixels that bin down to better pixels than the
    big DSLR pixels, and are better also, unbinned, with the same
    magnification of the sensor surface. So to say that larger pixels are
    needed for IQ and SQ is nonsense.
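[Editor's note: that claim can be checked in simulation. A sketch assuming four small pixels whose per-pixel read noise is half that of one big pixel covering the same area; the numbers are illustrative, not measurements of any real camera:]

```python
import numpy as np

rng = np.random.default_rng(1)
n, flux = 100_000, 10_000          # trials; photons over one big-pixel area

big = rng.poisson(flux, n) + rng.normal(0, 4.0, n)                  # 4 e- read noise
small4 = rng.poisson(flux / 4, (n, 4)) + rng.normal(0, 2.0, (n, 4))
binned = small4.sum(axis=1)        # binned read noise: 2 * sqrt(4) = 4 e-

print(big.std(), binned.std())     # nearly identical total noise
```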

    John Sheehy, Mar 18, 2007
  5. ipy2006

    Lionel Guest

    Correction: You mean one count, not one bit.
    Lionel, Mar 18, 2007
  6. ipy2006

    Lionel Guest

    No, that's not how photodiodes work. Those photons are lost.

    And you're calling names, not supplying facts.
    I did? Where?
    How would you see it if you're binning all these drops? - You /are/
    still talking about binning all these samples, aren't you?
    Lionel, Mar 18, 2007
  7. ipy2006

    Lionel Guest

    DR depends on a number of factors, of which noise is only one. At the
    analog level, dynamic range is the difference between the smallest
    detectable signal & the largest measurable signal.
    Lionel, Mar 18, 2007
  8. ipy2006

    Lionel Guest

    Who said either of those things? I haven't seen Roger do so, I
    certainly haven't, nor have I noticed anyone else saying that.

    You seem to be arguing with something that nobody has said.
    Lionel, Mar 18, 2007
  9. ipy2006

    John Sheehy Guest

    Which ones are lost? You made a very vague reference to drops on the
    edge; I can only guess the scenario in which they occur. I don't know
    how thick the walls are in your mind, etc. I get the feeling that you
    meant to imply "in the worst and most impractical manner possible". The
    fact is, current tiny-sensor pixels are not losing many photons the way
    they would in your horror story.
    I hope I didn't offend any boogey-men, but then again, they *want* to
    scare you.

    I am supplying facts about various cameras and how much noise they have,
    and how they bin. You're just making up worst-case scenarios.
    You complained about a small container filling up while its neighbors
    did not. Binning is optional. It saves space on memory cards, and
    speeds up their writes.

    John Sheehy, Mar 18, 2007
  10. ipy2006

    John Sheehy Guest

    Which is in the same ratio with or without microlenses.

    John Sheehy, Mar 18, 2007
  11. ipy2006

    Lionel Guest

    Explain to me how a small photodiode under a microlens can possibly
    have a well size as large as that of a photodiode the size of the
    microlens itself.
    Lionel, Mar 18, 2007
  12. ipy2006

    acl Guest

    Nobody mentioned 6 microns. But 6 microns is the current smallest size
    of DSLR pixels, and you wrote:

    2) Current DSLR cameras, like the Canon 20D, Nikon D50, and all other
    DSLRs tested on my web pages and other people testing cameras

    So read noise isn't the limiting factor for current dslrs, but the
    ADC. If the pixels were significantly smaller (thus lower well
    capacities), eventually read noise would become limiting (and yes,
    shot noise too, but we can bin them and get rid of that, while we
    cannot reduce the significance of read noise by binning, as has been
    gone over ad nauseam by all sorts of people here). Since current
    sensors aren't limited by their read noise, but smaller pixels would be,
    I concluded that the argument was that we are well-balanced now. Maybe
    I misunderstood.

    But I cannot see why read noise can't be reduced further. Also, I
    cannot see why on-chip binning cannot be implemented, so that the read
    noise (or the part of it that is purely due to reading out the
    charges) can be independent of how many pixels are binned (thus, r per
    pixel for unbinned pixels, r per binned pixel for binned). I do not
    know how much of it is indeed due to readout and how much due to other
    stuff (e.g. the structures around the sensing area), though. It's one
    thing to say it's not done now, and another to say it cannot be done
    (which you didn't, I know; nor did anybody else).
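[Editor's note: the distinction drawn here, sketched as a formula. `r` is per-readout noise and `n` the number of pixels binned; "on-chip" means the hypothetical charge-domain binning described above, with one readout per binned pixel:]

```python
import math

def binned_read_noise(r, n, on_chip=False):
    """Post-readout binning sums n independent readouts, so their read
    noise adds in quadrature (r * sqrt(n)); hypothetical on-chip charge
    binning would pay the readout noise only once per binned pixel."""
    return r if on_chip else r * math.sqrt(n)

print(binned_read_noise(3.0, 4), binned_read_noise(3.0, 4, on_chip=True))  # 6.0 3.0
```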

    And a very expensive lens, and a very heavy tripod...
    acl, Mar 18, 2007
  13. ipy2006

    acl Guest

    No point in rewriting my reply, read the one to Clark.
    Maybe. Happens often! I don't think so in this case, though.
    acl, Mar 18, 2007
  14. ipy2006

    Lionel Guest

    The ones that land anywhere other than the sensitive area of the
    photodiode, obviously. That includes the areas between the
    photodiodes, & the edges of the photodiodes themselves.
    My mind has nothing to do with it. We're talking about real, physical
    devices that have edges, & cannot even be packed edge to edge anyway.
    (Well, not if you want to be able to read them, at least.)
    They don't need to lose 'many' photons to prove my point, just some
    percentage, because the point is that those same photons /won't/ be
    lost with a larger photodiode.
    No, I'm talking about real-world electronic engineering & physics.
    You're just hand waving about stuff that you clearly don't understand.
    You don't get to just make shit up & expect the rest of us to take
    your word for it.
    If the bright image covers its neighbours as well, they'll obviously
    fill up too. What's your point?
    In that case, you don't get to claim the noise reduction benefit of
    binning that you've been claiming throughout this thread. You can
    either claim the resolution benefit of having more photodiodes in a
    given area (in which case you lose SNR & DR), or you can claim the
    noise benefit of binning those smaller photodiodes (in which case you
    lose well-capacity/DR/maximum-count + resolution), but you can't claim both
    benefits at the same time!
    Um, yeah, whatever.
    Lionel, Mar 18, 2007
  15. ipy2006

    John Sheehy Guest

    I can't explain that; I'm not sure it's possible. However, it has nothing
    whatsoever to do with the fact that a microlens only affects how fast a
    well fills, not the ratio of max signal to lowest usable signal (DR).

    John Sheehy, Mar 18, 2007
  16. ipy2006

    Paul Furman Guest

    I really don't get this binning. Does anybody do that (half the
    megapixels) because they can't afford memory cards?
    Paul Furman, Mar 18, 2007
  17. ipy2006

    Lionel Guest


    Suppose you have two photodiodes next to each other on the same piece
    of silicon. One can hold a maximum of 1000 photons. The second has
    only half the surface area of the first, & can therefore hold only 500
    photons at most. The second photodiode, however, has a microlens in
    front of it, so it will collect photons from an area exactly the same
    size as the surface area of the first photodiode.
    Now, send a stream of photons at both photodiodes & measure the output
    signal from the diodes. Everything is equal from 0-500 photons/diode,
    but the smaller photodiode tops out at 500 photons, while the larger
    photodiode keeps on providing a useful signal right up to 1000
    photons. Therefore, the larger photodiode has double the (linear)
    dynamic range of the smaller one.
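[Editor's note: the thought experiment above, in a few lines, with the same assumed numbers: 1000 e- and 500 e- wells, identical light-gathering area thanks to the microlens:]

```python
import numpy as np

photons = np.arange(0, 1201, 100)      # photons arriving over the microlens area
large = np.minimum(photons, 1000)      # 1000 e- well, no microlens
small = np.minimum(photons, 500)       # 500 e- well behind a microlens

# identical up to 500 photons, then the small diode clips a stop earlier
print(large[-1], small[-1])
```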
    Lionel, Mar 18, 2007
  18. ipy2006

    Lionel Guest

    Not that I know of. But all the Canons already offer this amazing
    'binning' feature anyway - that's what you get when you set your
    camera to JPEG mode, & use a JPEG size smaller than full resolution.
    Lionel, Mar 18, 2007
  19. ipy2006

    John Sheehy Guest

    I qualify my statements much more thoroughly than most of the population,
    and even I wouldn't consider it necessary to qualify for your hypothetical
    "mixed capacity" sensor. We were discussing whether or not "microlenses"
    affect DR. You had to come up with a scenario that adds another factor,
    and the discussion is not about microlenses anymore. You are going to
    comical lengths to try to find me wrong about something.

    John Sheehy, Mar 18, 2007
  20. ipy2006

    John Sheehy Guest

    Well, downsampling and binning are a bit different. A downsampling should
    use a filter to remove frequencies that would otherwise alias and produce
    artifacts in the output.
    Binning is just a box effect, with the contents literally added together.
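[Editor's note: the "box effect" in code. `bin2x2` is the literal block sum described above; a proper downsample would low-pass filter first. This is a sketch, not any camera's actual pipeline:]

```python
import numpy as np

def bin2x2(img):
    """Hard binning: sum each 2x2 block ('box' kernel), halving resolution."""
    h, w = img.shape
    img = img[: h // 2 * 2, : w // 2 * 2]          # trim odd edges
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

print(bin2x2(np.ones((4, 4))))  # 2x2 array of 4.0
```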

    John Sheehy, Mar 18, 2007
