low light

Discussion in 'Digital Photography' started by ipy2006, Mar 7, 2007.

  1. John Sheehy Guest

    You say "Not so", then you imply "Yes so":
    Which is exactly why microlenses have no effect on DR. No matter how
    big, small, or absent your "light funnel", the S/N at any given level in
    the RAW data is exactly the same; the 1:1 S/N is the same number of stops
    below saturation for any microlens; and that follows for 10:1, 3:1, 1:2,
    or any ratio you deem as "minimum usable".
    This is all relative. The microlens coverage is equivalent to using
    variable neutral density filters, as far as signal level is concerned.
    ND filters do not affect DR.
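
    To put a number on that: here's a minimal sketch (the 40,000 e- full well
    and 4 e- read noise are just example figures) showing that the S/N at a
    raw level a given number of stops below saturation doesn't depend on how
    efficiently light is funnelled into the well - the funnel only changes
    how much exposure it takes to reach that level:

        import math

        def snr_at_stops_below_sat(full_well_e, read_noise_e, stops):
            # signal at a raw level `stops` below saturation, in electrons
            s = full_well_e / 2**stops
            # photon shot noise sqrt(s), read noise added in quadrature
            return s / math.sqrt(s + read_noise_e**2)

        # microlens/fill-factor efficiency never enters this function; it only
        # rescales the exposure needed to reach a given raw level, so the
        # S/N-vs-level curve (and the saturation/read-noise DR) is unchanged.
        for stops in (0, 4, 8, 12):
            print(stops, round(snr_at_stops_below_sat(40000, 4.0, stops), 1))
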
    Think Canon high ISO: less read noise in electrons at higher gain. That's
    real world. The small-pixel cameras tend to have very good read noise at
    ISO 100, and poor amplifiers for high ISO; worse than simply pushing ISO
    100 in software. Better can be done.
    Roger has a knack for testing one thing and drawing conclusions about
    another. My tests show that low noise in big pixels is only an advantage
    when the *PIXELS* are magnified to the same size, not the subject - that
    is, when the same focal length, Av, Tv, and ISO are used. Roger compares
    large pixels on a large sensor to small pixels on a small sensor, with
    equivalent FOV, and then draws conclusions about small *pixels*. That is
    wrong. The conclusions should be about small *sensors*.
    That is a real-world example of how small signals are not necessarily
    read out with more noise in electrons, even without binning.

    Stop defending the myth, and start thinking.



    --
     
    John Sheehy, Mar 17, 2007

  2. Lionel Guest

    Pixel level is what we're discussing here. If you don't actually
    understand how photodiodes or sense amplifiers work, & aren't
    interested in finding out, there's not really a lot of point in you
    arguing with people about them.
    Then why are you arguing about them? Drop the thread, & discuss the
    images instead.
    Except that it doesn't. It's entirely possible that the sensor you're
    talking about produces a better image than the other sensor of the
    same size that you're comparing it to, but if so, it won't be because
    the one you like has smaller pixels, it'll be because it has better
    photodiodes, sense amps, ADCs, software (or some combination of the
    above) than the other sensor.
    Amongst other things, sure.
    Oh dear. Please tell me you don't seriously think that the process is
    as simple as that. There are actually a number of possible
    explanations for the increased sensitivity & higher noise in Canon's
    higher ISO modes, the most obvious being that they simply turn up the
    gain in the read amps, which is also simple, & has the advantage that
    it's compatible with the laws of physics.
    You've never designed an amplifier, have you? ;^)
     
    Lionel, Mar 17, 2007

  3. John Sheehy Guest

    Huh? That doesn't make any sense.

    In order for what you say to be true, the smaller pixels would have to
    have a read noise at least 4x as high as that in pixels 4x as large. 4
    pixels into one means 4x the signal, and 2x the noise, for an increase in
    S/N of 2x.

    In all these arguments against what I'm saying, I see people fabricating
    high read noises for small pixels that don't happen in real life. I'm
    supposed to believe the boogey-man stories of the hand-wavers who accuse
    me of hand-waving, yet none of them has actually taken the trouble to see
    what happens in the real world with available products.

    Take any blackframe; load into a program that bins, check the standard
    deviation before and after. For an NxN bin, signal increases by a factor
    of N^2, and noise increases by N, for an increase in S/N of N times.
    That would require N times as much noise in the smaller pixel to equal
    the larger, after binning, and more than N times to make it worse.
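
    As a quick sketch of that black-frame test (synthetic data standing in for
    a real raw black frame; the 2.7 e- read noise is just an example value):

        import numpy as np

        rng = np.random.default_rng(0)
        N = 3                      # bin factor, i.e. an NxN = 3x3 bin
        read_noise = 2.7           # e- per pixel, example value

        # synthetic black frame: zero signal plus Gaussian read noise
        frame = rng.normal(0.0, read_noise, size=(1200, 1200))
        print("per-pixel std dev:", round(frame.std(), 2))    # ~2.7

        # NxN binning by summing blocks of pixels
        h, w = frame.shape
        binned = frame.reshape(h // N, N, w // N, N).sum(axis=(1, 3))
        print("binned std dev:   ", round(binned.std(), 2))   # ~N * 2.7 = 8.1

        # a uniform signal summed the same way would grow by N^2,
        # so the S/N improves by a factor of N.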

    --
     
    John Sheehy, Mar 17, 2007
  4. acl Guest

    Even ignoring all the mess in this thread, it has always irritated me
    how people measure the noise standard deviation and compare it between
    cameras as if nothing else matters. Obviously a given noise frequency
    spectrum (spatial frequency in terms of pixels) will have a different
    visual effect (when printed to a given size) if present in a 6 MP image
    or a 12 MP image. Of course, some thought about this results in what is
    being "discussed" here, i.e. binning and addition of noises and how
    things may be scaled, and I reached similar conclusions to yours some
    time ago (although I never measured anything, I just thought about it
    a bit).

    But I never bothered mentioning anything here, for reasons that I think
    are now obvious: everybody starts inventing reasons why it cannot be
    done, argues against 1-angstrom pixels as if that were what is being
    proposed, doesn't bother to read what is written because they already
    know the answer, etc.

    And anyway, in 5-10 years I'll buy a very high mp small-sensor camera
    with built-in binning ability whether people here agree now or not.
     
    acl, Mar 17, 2007
  5. Lionel Guest

    Of course they do. Your microlenses gather more light. They dump more
    light into the well, so it fills faster. Once it's full, it clips. If the
    pixel is larger, it doesn't fill as quickly, therefore it has more
    dynamic range. Simple.
    Obviously. We're discussing a bunch of small photodiodes, relative to
    a single larger photodiode. Unless the smaller ones have the same
    total surface area as the larger one (i.e. a fill factor of 100%), the
    smaller ones will fill up faster from the same total exposure,
    therefore they will clip (blow out) more easily, therefore they have a
    smaller dynamic range than the larger pixel.
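
    A rough sketch of that bookkeeping, under the assumptions I'm using here
    (equal read noise per pixel, 100% fill factor, made-up numbers):

        import math

        def dr_stops(full_well_e, read_noise_e):
            # engineering dynamic range: full well over read noise, in stops
            return math.log2(full_well_e / read_noise_e)

        big_well, r = 40000, 4.0      # one large pixel (example numbers)
        small_well = big_well / 4     # four small pixels sharing its area

        print(round(dr_stops(big_well, r), 1))            # ~13.3 stops, large pixel
        print(round(dr_stops(small_well, r), 1))          # ~11.3 stops per small pixel
        print(round(dr_stops(4 * small_well, 2 * r), 1))  # ~12.3 stops, 2x2 software bin

    (Whether the equal-read-noise-per-pixel assumption holds for real small
    pixels is exactly what is in dispute in this thread.)
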
    Um, no. Filters /reduce/ the total exposure for a photodiode,
    microlenses /increase/ total exposure for a photodiode.
    Of course not.
    You're assuming that. You (and I too) don't have sufficient data to
    make this kind of definitive statement. I am, at least, making
    educated guesses, as I have some experience with related technology.
    (But I'd shut the hell up if someone from the Canon design team told
    me I was wrong. ;^)
    With small pixels? - Maybe. Easily? - Probably not.
    So far, I haven't seen him arguing with the laws of physics. ;^)
    Well, the thing is that it's pretty easy to see that the images from a
    smaller sensor are noisier. Small pixels on a large sensor must, in
    the real world, give lesser results than large pixels on a large
    sensor, but the differences aren't as obvious.
    Of course there's more noise. The raised noise floor in high ISO
    images is clearly visible in the data. The bit that's interesting is
    that Canon manage to keep it as low as they do. That's really not an
    easy thing to do, despite all the assumptions some people are making
    about the ease of building zero-noise sense amps.
    I still haven't seen any hard evidence that a myth exists. Are you
    aware that binning the output of a bunch of small photodiodes simply
    simulates what would happen inside a larger, single photodiode with
    the same surface area as the bunch of small diodes? And that all the
    extra processing involved /must/ increase the total noise level over
    that of the single photodiode? I can think of other, more complicated
    algorithms that /might/ give an improvement, but simple binning isn't
    one of them.
     
    Lionel, Mar 17, 2007
  6. John Sheehy Guest

    Obviously; you are a member of the "Pixel as an end in itself" club.

    I'm saying that pixels don't determine noise by themselves. Their
    spatial magnification is another factor in real-world noise strength.
    I don't know all the details of the process, but I *CAN* read the
    evidence of current cameras, which says that read noise does *NOT*
    deteriorate as much with small signals in the real world as your boogey-
    man stories suggest.
    I'm arguing against them as ends in themselves. My interest is in the
    subject and the image, and pixels are only one of the factors. The noise
    of an image, and the dynamic range of an image, are not directly related
    to the noise of the pixels. That is just a lot of a priori nonsense that
    has been repeated so many times that people believe and defend it.
    I have been, all along. Are you really that dense?
    Hardly. Extra resolution is worth a little more noise, even if there is
    more noise at the image level. At the pixel level, noise can increase
    with smaller pixels, and still result in lower image noise.

    No comment on this? It is the gist of my argument.
    You'd love to believe that, wouldn't you? This is an *EXAMPLE* of how a
    lower signal can be amplified with less added (read) noise. An
    *EXAMPLE*, to show that there isn't a strict real-world correlation, and
    all you can think of in response is to try to belittle me as if I had
    said, "the higher the gain, the less noise added, always".
    That sounds like a genie wish, and you accuse *me* of naive simplicity.
    What a joke.
    You've never looked at existing products, have you? The read noise in
    small-pixel cameras, binned down to large pixels, can be less than in
    native big pixels. This is a fact, and you seem to want to avoid it, and
    cling to worst-case scenarios that are far from current reality.

    --
     
    John Sheehy, Mar 17, 2007
  7. Lionel Guest

    Exactly. And he's also neglecting to account for (in his analogy) the
    drops that hit the lips of the smaller containers & are lost
    (equivalent to the extra fill-factor loss), & the fact that the
    smaller containers will fill & overflow more easily at points with
    heavy exposure (which is equivalent to a lower well capacity in the
    smaller photodiodes, and thus a reduced maximum photon capacity).
     
    Lionel, Mar 17, 2007
  8. John Sheehy wrote:

    That last sentence is WRONG. That said, it is wrong only because
    of read noise, which is of the order of 3 to 5 electrons. If the
    actual signal is bigger than, say, 100 photoelectrons, the read
    noise becomes negligible. Nevertheless, at the very lowest signal levels,
    your argument is wrong in the real quantized world.
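
    A quick sense of scale, taking read noise in the 3 to 5 electron range
    and adding it in quadrature to photon shot noise:

        import math

        def total_noise_e(signal_e, read_noise_e):
            # photon shot noise is sqrt(signal); read noise adds in quadrature
            return math.sqrt(signal_e + read_noise_e**2)

        for signal in (5, 20, 100, 1000):
            for r in (3, 5):
                print(signal, r, round(signal / total_noise_e(signal, r), 1))
        # at 100 e- the S/N is ~9.6 (r = 3) or ~8.9 (r = 5), versus 10.0 for
        # shot noise alone; at 5 e- the read noise dominates completely.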

    Doug McDonald
     
    Doug McDonald, Mar 17, 2007
  9. I think that that depends on the f/number of the camera lens. At f/1.2
    microlenses that really help are going to be hard to fabricate (they reach their
    ultimate limit at about f/0.25 using lenses of diamond or cubic zirconia).

    Do current microlenses function fully if the light is coming in at f/1.2
    (and, at the corners, at an angle)?

    Doug McDonald
     
    Doug McDonald, Mar 17, 2007

  10. I should add that in the limit of really small pixels, I suppose that
    a designer could get the read noise well below 1 electron. When that happens,
    you really DO reach the limit where smaller pixels are better (except
    for fill factor arguments and the associated factor of microlenses). It IS
    possible to get semiconductor read noise below one electron; it's just hard to
    do very fast. With read noise well below one electron, and enough analog gain,
    amplifier noise at low signals (streaks) can be processed away.

    Doug McDonald
     
    Doug McDonald, Mar 17, 2007
  11. acl Guest

    Hi. I am obviously not JPS. Basically, you and everybody else who
    disagrees with what he says keep repeating this as if it was never
    mentioned, and imply that it negates his argument. It doesn't. For a
    uniform subject, it's true that if I have two sensors of the same size
    but with different sized pixels, and if they all have the same read
    noise (per pixel), then the only difference between the binned small
    pixels and the large pixels is that the effective read noise in the
    binned pixel is n times larger (for an n x n bin of n^2 pixels). I
    don't think anybody disagreed with that. I ignore fill factors here
    (i.e. take them to be unity).

    Without retyping every single thing I have typed in this thread
    (obviously nobody cares about it), your argument seems to be that this
    means that we'll always get better performance from larger pixels.
    That's true, and obviously there's a lower bound to read noise/pixel,
    and this sets a lower bound on the effective pixel size (this will
    depend on a threshold I set for acceptable low illumination
    performance).

    But I find it very hard to believe that this limit is at 6 microns, or
    that the performance of the Canon cameras in terms of read noise
    cannot be duplicated and improved. So I don't understand everybody's
    reactions to this idea (on the other hand, they do not surprise me).
    Yes, it's a tradeoff with low light performance, in that for a given
    read noise we'll get better performance with larger pixels, although
    nowhere near to the extent seen now with small pixel cameras. The low
    light performance can be improved by lower read noise and larger pixels
    (obviously!), so it is a tradeoff, in the end, and for low enough read
    noise we can approach the performance of larger pixels (as you said,
    the only difference is a noise term of n*r, with n^2 the number of
    binned pixels and r the read noise per pixel). Nobody is arguing for
    1 nm^2 pixels here, so we're not up against a physical limit.
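
    To make that n*r term concrete, a sketch with example numbers (equal read
    noise per pixel assumed, shot noise included):

        import math

        def snr_binned(signal_per_small_px, n, r):
            # n x n bin of n^2 small pixels, each with read noise r (e-)
            signal = n**2 * signal_per_small_px
            return signal / math.sqrt(signal + (n * r)**2)

        def snr_large(signal_total, r):
            # one large pixel covering the same area, same read noise r
            return signal_total / math.sqrt(signal_total + r**2)

        n, r = 3, 3.0
        for s in (2, 10, 100, 1000):   # e- per small pixel
            print(s, round(snr_binned(s, n, r), 1), round(snr_large(n**2 * s, r), 1))
        # the n*r read-noise term only matters at the lowest signal levels; by
        # a few hundred electrons per small pixel the two are essentially equal.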

    Look, if I met you in 1984 at a conference (so in a technological
    mood :) ) and told you that in 22 years you'd be able to buy a 5D or
    a D200 for less than 2000 euro/dollars (or even a P&S for 100 euro),
    what would you say? Remember, even AF cameras were novelties then. My
    point is that your arguments are basically based on current
    technology, not fundamental limits, except when talking about extreme
    low light performance (a few electrons/pixel). And it's not as if
    we're talking about something as extreme as a detector requiring 0.4 K
    to work; we are discussing reducing read noise at low gains. I am sure
    it is not trivial to do, but that's another story.


    I don't think he said it's wrong; he disagreed with your conclusions.
     
    acl, Mar 17, 2007
  12. John Smith Guest

    When you say "hand gestures" you mean they get angry if you use a flash or
    other additional light source?

    Sounds like you're going to be working in a confined area. If you don't need
    the long reach of a zoom, you might want to consider something like the
    Nikon D40, throw away the kit lens, and buy a 50mm f/1.8 (about $100, give or
    take). Nice fast lens, and the camera is getting good reviews for higher ISO
    shootin'.

    I'm going to order one myself tomorrow night.

    DP
     
    John Smith, Mar 17, 2007
  13. Paul Furman Guest

    Aughhhh, I hate that I can't understand these discussions.

    Unity Gain ISO - the ISO where 1 electron = 1 bit in the A/D converter

    A/D Converter - analog to digital, where electrons are assigned numbers

    So unity gain ISO is where there isn't a rounding error problem.
    Read noise is the rounding problem; higher bit depth in the raw file
    lessens read noise.

    P&S cameras don't have this problem because there are so few electrons,
    they are easy to count?

    Does it really matter if there are minor rounding errors? Is it really
    noise because colors are off by 1 bit? Relevant noise is random, wacky,
    hare-brained colors, not minute color shifts, right?

    What is Dark Current?

    What's this business of clipping at blackpoint before setting gamma?
    That means you can set the blackpoint? How could there be negative
    numbers? Why?

    I don't get the charts against pixel pitch. It doesn't matter, because
    there could be some efficiency or inefficiency in the layout; the only
    thing that matters is full-well electrons, right?

    Signal to Noise seems clear enough, shoot a gray card & count the pixels
    that come out some color other than gray.

    What is the significance of raw noise versus final bayer interpolated
    RGB values unless you are doing binning to interpolate by greatly
    shrinking the pixel count? (if I understand the term 'binning' correctly)

    And who cares what the characteristics are before white balancing?
    Nobody is using un-whitebalanced images and the basic WB is fairly
    dramatic in any lighting. I can see how these things make the math clean
    but I don't see how they are necessarily relevant in the final product.

    Ah, my head hurts... am I understanding?
     
    Paul Furman, Mar 17, 2007
  14. John, you are confusing several things in your argument in this thread.
    1) Small pixel size cameras are at near unity gain at low ISO.
    (For other readers, unity gain ISO is the ISO where 1 electron
    equals one bit in the A/D converter.) ISOs above the unity gain
    ISO are nothing more than "digital ISO." You essentially gain
    nothing at higher ISOs, as you can't measure a fraction of an electron.

    2) Current DSLR cameras, like the Canon 20D, Nikon D50, and all other
    DSLRs tested on my web pages, and by the other people testing cameras
    whom I reference, ARE NOT SENSOR READ NOISE LIMITED AT LOW ISO.
    You cite the high read noise of DSLRs at low ISO in electrons,
    but that is an electronics limitation, NOT a sensor limitation.

    2a) As electronics improve, e.g. as we see in the newly announced
    Canon 1D Mark III with its 14-bit converter, the low ISO noise
    is reduced (according to Canon). Current scientific sensors
    use 16-bit converters and achieve full-well digitization with
    good digitization of the true sensor read noise. We'll see that
    soon in coming DSLRs.

    3) So what you are describing as lower read noise for binned small
    pixel sensors is really because the small pixel cameras are operating
    at near unity gain ISOs, while the DSLRs are not. You are comparing
    apples and oranges. (Interesting that DSLRs are not yet living up
    to their low ISO potential! That is exactly what is shown in Figure 4 at
    http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary
    where the P&S small sensors are below the 12-bit line and the DSLRs are
    above the line indicating they are A/D electronics limited. Note the
    data indicate the 1D Mark II and 5D would still be limited with 14-bit
    converters, so the new 1D Mark III will still not quite live up to its
    potential.)

    4) DSLRs do not live up to their full low signal, low ISO potential because they
    are A/D limited. Remember, all A/D conversions are accurate to +/- 1 bit.
    A DSLR with a 50,000 electron full well (e.g. ~ the Canon 20D) and a
    12-bit converter digitizes 1 bit = 50000/(2^12 - 1) = 50000/4095 = 12.2 electrons.
    A 14-bit converter digitizes 1 bit = 50000/(2^14 - 1) = 50000/16383 = 3.05 electrons.
    The read noise of the 20D sensor is 3.9 electrons. 14 bits is close to
    digitizing that well, but 12 bits is not (remember the +/- 1 bit added noise).
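
    The same arithmetic in a couple of lines, for anyone who wants to plug in
    other full-well or bit-depth figures:

        def adc_step_electrons(full_well_e, bits):
            # electrons represented by one A/D count (1 LSB)
            return full_well_e / (2**bits - 1)

        for bits in (12, 14, 16):
            print(bits, round(adc_step_electrons(50000, bits), 2), "e-/count")
        # 12 -> 12.21, 14 -> 3.05, 16 -> 0.76 electrons per count, against a
        # sensor read noise of ~3.9 electrons for the 20D.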

    The current crop of available cameras gives choices like:
    an 8 megapixel P&S with a small sensor and small pixels versus
    an 8 megapixel DSLR with large pixels. I provide data and information
    for people to evaluate what they would lose in making a choice
    between such cameras. THAT IS NOT WRONG.

    For those new to my web sites, relevant web pages are at:
    http://www.clarkvision.com/imagedetail/index.html#sensor_analysis

    John, your assertion seems to be (correct me if I'm wrong):
    Given two sensors of the same total size, one with large pixels,
    and one with small pixels, the small pixel camera can deliver
    equal quality images when the pixels are binned to the
    size of the larger pixels. You then support this by saying that
    the read noise of DSLRs at low ISO is larger than at high ISOs,
    and larger than that of small pixel cameras.
    You bolster your argument by comparing currently electronics-limited
    DSLRs with fictitious small-pixel sensors that have many times
    the pixels currently available in any sensor (e.g. your 223 megapixel
    sensor, or something like that).

    So with the fact that current 12-bit DSLRs must be limited in their
    read noise at low ISOs, and the fact that 16-bit scientific sensors
    are available (even amateur astronomers have such chips), it is only a
    matter of time before we see DSLRs with lower effective read noise,
    e.g. as is the case with the new 1D Mark III.
    So I think you should level the playing field and use true sensor
    read noise in your calculations. When you do, you'll find this
    is a common sampling problem, encountered in science all the
    time. Any one measurement has an error; the smaller
    the signal, the lower the signal-to-noise ratio of that measurement.
    Adding multiple samples improves the S/N as the square root of the
    number of samples.

    So on the level playing field of equal read noise per pixel in both the
    large-pixel and small-pixel sensors, the small pixels binned up to the
    large pixel size will have read noise higher by the square root of the
    number of pixels binned.

    If you want to try the experiment with current cameras, use an
    ISO where the DSLR is not electronics limited, like ISO 800.
    You'll see that the small pixel camera can actually come close
    in binned image quality, but the noise AT BEST at the low end will be worse
    by the square root of the number of pixels binned. Fill-factor losses at
    the pixel edges will actually make the small pixels worse overall.
    How much worse depends on a number of factors, all quite predictable
    if we knew what the parameters were (like the fill factors).

    Then there is the real world: you earlier complained about fixed pattern noise.
    The practical problem with your idea is that, for the small pixel
    camera to compete with the large pixel camera, its fixed pattern
    noise would have to be considerably less than the fixed pattern noise
    in the large pixel camera - by at least the square root of the
    number of pixels being binned (depending on the spatial frequency
    distribution of the fixed pattern noise).

    So, again, I resent you saying my data are wrong when you are applying
    them to theoretical cameras that do not exist. My data and conclusions
    are absolutely right for evaluating current cameras that DO EXIST, and
    that people are trying to choose between and understand the differences
    among.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 17, 2007
  15. John Sheehy Guest

    This was a thought experiment, to isolate the shot noise issue. People
    are always saying that smaller pixels mean more shot noise. That is
    what this was about. Read noise is another subject, but the fact is,
    read noise in the real world decreases with smaller pixels, whether you
    look at binning, downsampling, or noise power as a function of
    magnification.

    Read noise is 2.7 electrons in my FZ50 at ISO 100. 9 of those pixels
    binned together equal a typical DSLR pixel both in coverage area and
    maximum photon count. 2.7 * 9^0.5 = 8.1 electrons of read noise for the
    "superpixel". That's about 1/3 of the ISO 100 read noise on my 20D.
    What are you comparing it against? Let's say you have 9 tiny pixels, each
    with a signal of 5 electrons and a read noise of 2.7 electrons. 9 of those,
    binned together, have a signal of 45 electrons, and a read noise of 8.1
    electrons. With hardware binning like Dalsa uses, that might mean 2.7
    electrons (or a tad more) even for the binned superpixel. A typical DSLR
    at ISO 100 will have a read noise of 18 to 30 electrons.
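
    The arithmetic, for anyone who wants to swap in their own numbers (the
    DSLR figure below is just one value from the 18 to 30 electron range
    mentioned above):

        import math

        small_read = 2.7    # e-, FZ50 pixel at ISO 100
        n_pixels = 9        # 3x3 small pixels ~ one DSLR-sized pixel
        dslr_read = 25.0    # e-, example value from the 18-30 e- range

        binned_read = small_read * math.sqrt(n_pixels)   # software binning
        print(binned_read)                               # 8.1 e-
        print(round(dslr_read / binned_read, 1))         # ~3x in the bin's favour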

    Large pixels are not doing well in current technology, in terms of read
    noise at low ISOs!

    The real world does not match your boogey-man stories of read noise
    problems with small pixels.

    --
     
    John Sheehy, Mar 17, 2007
  16. John Sheehy Guest

    Apparently, not many photons are lost with the 1.97u pixel pitch in my
    FZ50. It captures almost exactly the same number of photons per square mm
    at ISO 100 saturation as the 1DmkII! Even photons that land close to the
    edge may still go into a well; whether it is the next one over is
    irrelevant, as the result is still more precise than with large
    pixels, where any collected photon could have come from anywhere in an
    area that a finer-pixel-pitch sensor would split into several wells.

    You're talking boogey-men. Talk facts, from real world stuff, please. No
    cultish hand-waving.
    You're getting very funny now. Go back and look at what you just wrote;
    you just complained about resolution!

    What if there was a pattern of extra drops in every second row of small
    containers? How would you see that with the larger containers?

    --
     
    John Sheehy, Mar 17, 2007
  17. John Sheehy Guest

    Popular belief varies greatly from forum to forum. There are a good number
    of people who are on top of the technological issues in the Open Talk and
    News Discussion forums on DPReview, even people who actually design digital
    cameras, and there is more of a belief there that the future lies in true
    digital imaging: gigapixel digital sensors, measuring as little as one
    photon per pixel, with one bit of depth. IIRC, there are already
    prototypes of some of this technology.

    No color aliasing. No downsampling artifacts. No need for AA filters, or
    teleconverters (except for the optical viewfinders). Just highly detailed
    photon capture.


    --
     
    John Sheehy, Mar 17, 2007
  18. ASAAR Guest

    I think that yip meant that if photos of those subjects are taken
    in dim light without the use of a flash, the various movements, such
    as hands moving about while gesturing will result in image blur.
    Did you *really* think that?

    Set aside when not needed, maybe, but throw it away? Anyway, the
    slightly larger, slightly more capable D50 is still available new
    from B&H without kit lens for $108 less than the D40 with 18-55mm
    kit lens. Its low light performance is virtually identical to the
    D40's, so you could get a D50 with a 50mm f/1.8 lens for less than the
    cost of the D40 with the relatively inferior 18-55mm kit lens.
    Every reviewer other than Ken Rockwell seems to think that Nikon's
    18-70mm lens is a much more capable lens, even though it's more
    expensive: faster in both operating speed and aperture, sharper, with
    less distortion, and it has a metal mount and includes a lens hood,
    vs. the plastic mount of the 18-55mm lens, which does not include
    the hood.

    Congratulations. I'll probably do the same, but substitute a D50
    for the D40, since I don't know how much longer they'll be available
    new, and I recently found out that my old Ai-S lenses are atypical
    in that they don't require a D200 to be fully functional, but they'd
    lose both AF and metering capability on a D40.
     
    ASAAR, Mar 17, 2007
  19. acl Guest

    Exactly, but you cannot see this if you're stuck at thinking in terms
    of today's electronics (as most people here are). Knowledge inertia.
     
    acl, Mar 17, 2007

  20. Hardware binning, that is, adding the charge before the amplifiers,
    does indeed work fine, except you lose the resolution!


    As I said in another post, all else (i.e. fill factor) being
    equal, smaller pixels do result in a smaller read noise due
    to smaller capacitance. The downside, of course, is lower
    dynamic range.
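
    Roughly why the capacitance matters: the same amplifier voltage noise,
    referred back to the sense node, corresponds to fewer electrons when the
    node capacitance is smaller. A sketch with made-up numbers (the 100 uV
    rms amplifier noise is purely illustrative):

        Q_E = 1.602e-19    # electron charge in coulombs

        def input_referred_noise_electrons(c_sense_f, v_noise_rms):
            # charge noise = voltage noise * capacitance; divide by e for electrons
            return v_noise_rms * c_sense_f / Q_E

        for c_ff in (2, 5, 20):    # sense-node capacitance in femtofarads
            n = input_referred_noise_electrons(c_ff * 1e-15, 100e-6)
            print(c_ff, "fF ->", round(n, 1), "e-")
        # 2 fF -> 1.2 e-, 5 fF -> 3.1 e-, 20 fF -> 12.5 e-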

    Doug McDonald
     
    Doug McDonald, Mar 17, 2007
