low light

Discussion in 'Digital Photography' started by ipy2006, Mar 7, 2007.

  1. acl Guest

    On Mar 12, 2:11 am, "Bart van der Wolf" <> wrote:
    > "John Sheehy" <> wrote in message
    >
    > news:Xns98F06D6F99D10jpsnokomm@130.81.64.196...
    >
    > > "Roger N. Clark (change username to rnclark)" <>
    > > wrote
    > > in news::

    >
    > >> The problem is that our eyes plus brain are very good at
    > >> picking out patterns, whether that pattern is below random
    > >> noise, or embedded in other patterns.

    >
    > What's worse, we see non-existing patterns (e.g. a triangle in the
    > following link) because we want to:
    > <http://www.xs4all.nl/~bvdwolf/temp/Triangle-or-not.gif>.
    >
    > > Yes, that is a problem, and that is exactly why you can't evaluate
    > > noise by standard deviation alone.

    >
    > That depends what one wants to evaluate. Standard deviation (together
    > with mean) only tells something about pixel to pixel (or sensel to
    > sensel) performance. It doesn't allow one to make valid judgements about
    > anything larger.


    As a matter of fact, they don't tell you anything (literally) about
    pixel to pixel behaviour. If I tell you that a signal has mean zero
    and a given standard deviation, what else can you tell me about it? Nothing.
    It could be anything from an otherwise random time series to a sine
    wave to a series of square waves to anything else. It's like knowing
    the first two coefficients of an infinite power series (well, that's
    exactly what it is: the first two coefficients of an infinite power
    series).

    The reason people use the first two moments (mean and std) is that the
    noises under consideration are often assumed to be Gaussian, in which
    case these two quantities completely characterise the noise. This is usually a
    good approximation when the noise comes from many different sources.
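
    To make the point concrete, here is a minimal Python/numpy sketch
    (purely illustrative, with synthetic signals): two signals with the
    same first two moments, one of which is nothing but "banding".

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000
      t = np.arange(n)

      gaussian = rng.normal(0.0, 1.0, n)                  # white Gaussian noise
      sine = np.sqrt(2.0) * np.sin(2 * np.pi * t / 50.0)  # periodic "banding"

      for name, x in [("gaussian", gaussian), ("sine", sine)]:
          print(name, round(x.mean(), 3), round(x.std(), 3))
      # Both report mean ~0 and std ~1, yet one is featureless noise and
      # the other is pure pattern; the first two moments cannot tell
      # them apart.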

    > Banding could be either calibrated out of the larger
    > structure, or an analysis of systematic noise should be done (and care
    > should be taken to not mistake Raw-converter effects for camera or
    > sensor array effects).
     
    acl, Mar 11, 2007
    #81

  2. John Sheehy wrote:
    > "Roger N. Clark (change username to rnclark)" <> wrote in
    > news::
    >
    >> I too agree that pattern noise is more obvious than random noise.
    >> Probably by at least a factor of ten. It is our eye+brain's
    >> ability to pick out a pattern in the presence of a lot
    >> of random noise that makes us able to detect many things
    >> in everyday life. It probably developed as a necessary
    >> thing for survival. But then it becomes a problem when we try
    >> and make something artificial and we see the defects in it.
    >> It gives the makers of camera gear quite a challenge.

    >
    > How does that co-exist with your conclusion that current cameras are
    > limited by shot noise?
    >
    > Saying that current cameras are limited by shot noise means that all future
    > improvements lie purely in well depth, quantum efficiency, fill factor, and
    > sensor size (you'd probably add "large pixels", but I'd disagree). The
    > fact is, a 10:1 S:N on the 1DmkII at ISO 100 would be 1.5 stops further
    > below saturation, and 1:1 would be 4.3 stops further below it, if there were
    > no blackframe read noise
    >
    > http://www.pbase.com/jps_photo/image/75392571
    >
    > and that is only statistically, without consideration for the pattern noise
    > effects, which widen the visual gap even further.
    >

    Nice plot. If you look at my past posts, you would also see that
    I've said for at least a couple of years that 14-bit or higher A/Ds are
    needed too, because current DSLRs are limited by 12-bit converters.
    Some attacked me in this NG with the idea that "if more than 12 bits were
    really needed, then why haven't camera manufacturers done it?"
    Well, we now see they have, and I'm sure 14 or more bits will become the
    new standard in future DSLRs.

    Regarding fixed pattern noise versus photon Poisson noise, your plot
    and some simple illustrations show what is dominant. First clue:
    look at the thousands of images on the net. How many show fixed
    pattern noise? It is very rare. You tend to see fixed pattern noise
    at the very lowest lows in an image. Second, if fixed pattern noise
    really is a factor, guess what: you can calibrate most of it out with
    dark frame subtraction. I think good examples of fixed pattern noise
    are illustrated at:
    http://www.clarkvision.com/photoinfo/night.and.low.light.photography
    Figure 1, for example, shows two merged low light images, and fixed
    pattern noise is not apparent, nor is it the dominant noise source in
    the image. Figure 2 shows the black sky above the Sydney opera house
    in an ISO 100, 20 second exposure. Fixed pattern noise is a little
    over 1 bit out of 12 in the raw data. It simply is not a factor. But
    where the scene has signal, e.g. the lit roof, noise is proportional
    to the square root of the signal strength, with photon noise up to 18
    out of 4095 in the 12-bit raw file. So, over most of the range,
    photon noise dominates. The low end, the bottom few values or bottom
    couple of bits, is a combination of photon noise, read noise, and
    fixed pattern noise. That gives about 10 bits out of 12 with photon
    noise as the dominant noise source. Again, if you work at the low
    end, calibrate out the majority of fixed pattern noise with dark
    frames.


    Let's work an example.
    Let's assume fixed pattern noise is ten times more objectionable
    than random noise (this is a reasonable estimate
    for me, and I find fixed pattern noise quite objectionable).
    But with processing, e.g. dark frame subtraction, it can
    be reduced about 10x, then filtered and reduced more, all with
    minimal impact on resolution. Random photon noise in an image
    can only be reduced by pixel averaging, thus reducing spatial
    resolution.
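
    In numbers (a small sketch; the noise levels are just assumptions for
    this example): subtraction removes the repeatable pattern outright,
    while averaging only beats random noise down as the square root of
    the number of pixels combined.

      import math

      pattern = 13.0                 # electrons, repeatable (assumed level)
      random_noise = 13.0            # electrons, random (assumed level)

      print(pattern / 10)            # ~10x reduction via dark frame subtraction
      for n in (4, 16, 64):          # pixels averaged together
          # sqrt(n) reduction, at the cost of spatial resolution
          print(n, random_noise / math.sqrt(n))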

    Let's use your full well depth, rounding off to 53,000 electrons.
    Fixed pattern noise in DSLRs like the 20D and 1D Mark II is between 1
    and 2 bits in the A/D at low ISOs. At low signal levels, line-to-line
    pattern noise is on the order of 7 electrons in the 1D Mark II, with
    a low frequency offset of a few tens of electrons (at ISO 100, fixed
    pattern noise appears at about the 1-bit level, which is ~13 electrons).
    The low frequency fixed pattern noise is entirely eliminated by a dark
    frame subtraction, and line-to-line noise (what you call 1D) is
    reduced by about 10X with dark frame subtraction.

    So there are multiple conditions. Here is one example:

    ISO 100, 1D Mark II, 53,000 electron full signal:

    Signal       Stops below   Photon noise  Read + A/D    Fixed-pattern  Dominant noise
    (electrons)  saturation    (electrons)   noise         noise          (photon, read,
                                             (electrons)   (electrons)    or pattern)
    -------------------------------------------------------------------------------------
     53,000          0             230            17           ~13        Photon
     12,250         -2.1           110            17           ~13        Photon
      3,312         -4              57            17           ~13        Photon
        828         -6              29            17           ~13        Photon
        207         -8              14            17           ~13        All three similar
         51        -10               7            17           ~13        Read + pattern

    The above table demonstrates that the sensor's noise is dominated by
    photon statistics over most of its dynamic range. Each generation of
    cameras that comes out pushes down the floor where other noise sources
    in the electronics show. It is likely we'll see the 1D Mark III push
    those limits a stop or two lower. But photon noise remains, and is
    the ultimate limit.
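
    The table follows directly from Poisson statistics: photon noise is
    the square root of the signal in electrons. A short Python sketch,
    using the constants quoted above, reproduces it:

      import math

      FULL_WELL = 53_000     # electrons, 1D Mark II at ISO 100
      READ_NOISE = 17.0      # electrons, read + A/D noise
      PATTERN_NOISE = 13.0   # electrons, approximate fixed pattern level

      for stops in (0, -2.1, -4, -6, -8, -10):
          signal = FULL_WELL * 2.0 ** stops
          photon = math.sqrt(signal)  # photon (Poisson) noise = sqrt(signal)
          largest = max((photon, "photon"), (READ_NOISE, "read"),
                        (PATTERN_NOISE, "pattern"))[1]
          print(f"{signal:8.0f} {stops:6.1f} {photon:7.1f}  {largest}")
      # The bottom two rows print "read": the read-noise floor where the
      # table lists "all three similar" and "read + pattern".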

    Here is another test series that illustrates the above conclusions:
    Digital Camera Raw Converter Shadow Detail and Image Editor Limitations:
    Factors in Getting Shadow Detail in Images
    http://www.clarkvision.com/imagedetail/raw.converter.shadow.detail

    Figure 6 shows areas from +2 to -7.6 stops. But if you look at the different
    raw conversions, you'll see widely different results and wildly different
    fixed pattern noise. Then look at Figure 16: the camera jpeg looks pretty
    clean, with less pattern noise than some of the raw conversions.
    So when you weigh photon noise against fixed pattern noise,
    understand the effects of converters too.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 12, 2007
    #82

  3. acl Guest

    On Mar 12, 2:53 pm, "Roger N. Clark (change username to rnclark)"
    <> wrote:

    > And that is why people who evaluate sensors do more than simply
    > study the standard deviation of one image. To understand noise sources,


    Never claimed otherwise! By the way, why don't people study the full
    power spectrum of the noise (i.e. of a blackframe)? That would give
    quite a lot of information (it should allow distinguishing between the
    white part of the noise and things like banding). And it should not be
    too hard (e.g. with IRIS, split the channels and FT them). And if you do
    that to an average of many frames, you'll be studying repeatable noise
    only. Is there some particular reason this isn't done by anybody?
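
    A rough sketch of that analysis on a synthetic frame (with real data,
    the channel would first be split out of the raw file, e.g. with IRIS
    or dcraw):

      import numpy as np

      rng = np.random.default_rng(0)

      # Stand-in for a raw dark frame: white read noise plus horizontal
      # banding.
      dark = rng.normal(0.0, 5.0, (256, 256))
      dark += np.sin(np.arange(256) * 0.5)[:, None]  # row-wise stripes

      # Row means isolate horizontal banding; their power spectrum shows
      # where the striping lives in spatial frequency.
      rows = dark.mean(axis=1) - dark.mean()
      spectrum = np.abs(np.fft.rfft(rows)) ** 2

      # White noise alone would give a roughly flat spectrum; the banding
      # appears as a sharp spike near bin 256*0.5/(2*pi) ~ 20.
      print(spectrum.argmax())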
    >
    > The Nikon D50 Digital Camera:
    > Sensor Noise, Dynamic Range, and Full Well Analysis
    > http://www.clarkvision.com/imagedetail/evaluation-nikon-d50
    >


    That's quite interesting; why don't you include dark frames from more
    cameras? I'd think that this would be quite useful for people
    intending to do very low light work.

    > http://www.clarkvision.com/imagedetail/long-exposure-comparisons/inde...
    >
    > and more at:http://www.clarkvision.com/imagedetail/index.html#sensor_analysis
    >
    > other:http://www.astrosurf.org/buil/20d/20dvs10d.htm
    >
    > Roger
    >
     
    acl, Mar 12, 2007
    #83
  4. acl Guest

    On Mar 12, 12:15 am, "Bart van der Wolf" <> wrote:
    > "John Sheehy" <> wrote in message
    >
    > news:Xns98F06DCDB2811jpsnokomm@130.81.64.196...
    >
    > > "Roger N. Clark (change username to rnclark)" <>
    > > wrote in
    > >news::

    >
    > >> I too agree that pattern noise is more obvious than random noise.
    > >> Probably by at least a factor of ten. It is our eye+brain's
    > >> ability to pick out a pattern in the presence of a lot
    > >> of random noise that makes us able to detect many things
    > >> in everyday life. It probably developed as a necessary
    > >> thing for survival. But then it becomes a problem when we try
    > >> and make something artificial and we see the defects in it.
    > >> It gives the makers of camera gear quite a challenge.

    >
    > > How does that co-exist with your conclusion that current cameras are
    > > limited by shot noise?

    >
    > Shot noise is a physical limitation, not a man-made one. The man-made
    > limitations can be improved upon.
    >


    The speed of light is also a physical limitation. Would you therefore
    agree with the statement that the top speeds of current spaceships are
    limited by the speed of light, and therefore we must work on finding
    ways to circumvent that (rather than on finding some better propulsion
    system than semi-controlled explosions) :)? (I'm not claiming that
    banding really is the main limitation, by the way; I actually agree
    with Roger and presumably you).
     
    acl, Mar 12, 2007
    #84
  5. acl wrote:
    > On Mar 12, 2:11 am, "Bart van der Wolf" <> wrote:
    >> "John Sheehy" <> wrote in message
    >>
    >> news:Xns98F06D6F99D10jpsnokomm@130.81.64.196...
    >>
    >>> "Roger N. Clark (change username to rnclark)" <>
    >>> wrote
    >>> in news::
    >>>> The problem is that our eyes plus brain are very good at
    >>>> picking out patterns, whether that pattern is below random
    >>>> noise, or embedded in other patterns.

    >> What's worse, we see non-existing patterns (e.g. a triangle in the
    >> following link) because we want to:
    >> <http://www.xs4all.nl/~bvdwolf/temp/Triangle-or-not.gif>.
    >>
    >>> Yes, that is a problem, and that is exactly why you can't evaluate
    >>> noise by standard deviation alone.

    >> That depends what one wants to evaluate. Standard deviation (together
    >> with mean) only tells something about pixel to pixel (or sensel to
    >> sensel) performance. It doesn't allow one to make valid judgements about
    >> anything larger.

    >
    > As a matter of fact, they don't tell you anything (literally) about
    > pixel to pixel behaviour. If I tell you that a signal has mean zero
    > and a given standard deviation, what else can you tell me about it? Nothing.
    > It could be anything from an otherwise random time series to a sine
    > wave to a series of square waves to anything else. It's like knowing
    > the first two coefficients of an infinite power series (well, that's
    > exactly what it is: the first two coefficients of an infinite power
    > series).


    And that is why people who evaluate sensors do more than simply
    study the standard deviation of one image. To understand noise sources,
    the standard procedure is to make a series of exposures and analyze
    the results from the different test conditions, e.g.:

    The Nikon D50 Digital Camera:
    Sensor Noise, Dynamic Range, and Full Well Analysis
    http://www.clarkvision.com/imagedetail/evaluation-nikon-d50

    http://www.clarkvision.com/imagedetail/long-exposure-comparisons/index.html

    and more at:
    http://www.clarkvision.com/imagedetail/index.html#sensor_analysis

    other:
    http://www.astrosurf.org/buil/20d/20dvs10d.htm

    Roger

    > The reason people use the first two moments (mean and std) is that the
    > noises under consideration are often assumed to be Gaussian, in which
    > case these two quantities completely characterise the noise. This is usually a
    > good approximation when the noise comes from many different sources.
    >
    >> Banding could be either calibrated out of the larger
    >> structure, or an analysis of systematic noise should be done (and care
    >> should be taken to not mistake Raw-converter effects for camera or
    >> sensor array effects).

    >
    >
     
    Roger N. Clark (change username to rnclark), Mar 12, 2007
    #85
  6. acl wrote:
    > On Mar 12, 2:53 pm, "Roger N. Clark (change username to rnclark)"
    > <> wrote:
    >
    >> And that is why people who evaluate sensors do more than simply
    >> study the standard deviation of one image. To understand noise sources,

    >
    > Never claimed otherwise! By the way, why don't people study the full
    > power spectrum of the noise (i.e. of a blackframe)? That would give
    > quite a lot of information (it should allow distinguishing between the
    > white part of the noise and things like banding). And it should not be
    > too hard (e.g. with IRIS, split the channels and FT them). And if you do
    > that to an average of many frames, you'll be studying repeatable noise
    > only. Is there some particular reason this isn't done by anybody?


    Time and effort--remember most are doing this for free out of
    curiosity. I started doing this to try and get the best camera for
    astrophotography. Then after seeing the trends, it became clear to
    me that because the photon noise limit had been reached, one can
    model and predict performance pretty closely. Now I find it
    interesting to see the claims coming out in some press releases
    that seem to ignore physical reality ;-).
    I and other astrophotographers tend to ignore fixed pattern noise
    because we can calibrate most of it out of our images. If that is an
    issue for other people, then I suggest they learn how to take
    dark frames, average them, and subtract them from their images.
    It is really pretty easy, but for best results, it needs to be
    done on linear data. Another calibration that can improve images is
    flat field calibration, which not only corrects for pixel to pixel
    variations, but also corrects for light fall-off from lenses.
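
    In code, those two calibrations amount to a subtraction and a
    division on linear data. A minimal sketch (the function and the
    synthetic frames are illustrative, not from any particular tool):

      import numpy as np

      def calibrate(raw, master_dark, master_flat):
          """Basic calibration of a linear raw frame (a sketch, not a full
          pipeline). Inputs are 2-D arrays of the same shape; master_flat
          should itself be dark-subtracted before use."""
          dark_sub = raw - master_dark             # removes fixed pattern + offset
          gain = master_flat / master_flat.mean()  # flat normalized to unity mean
          return dark_sub / gain                   # corrects pixel-to-pixel gain
                                                   # and lens light fall-off

      # Synthetic example, just to show the shapes involved:
      frame = np.full((4, 6), 100.0)
      dark = np.full((4, 6), 10.0)
      flat = np.full((4, 6), 2.0)
      print(calibrate(frame, dark, flat))          # uniform 90.0 result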

    But if someone wants to pay me to run more tests......

    >> The Nikon D50 Digital Camera:
    >> Sensor Noise, Dynamic Range, and Full Well Analysis
    >> http://www.clarkvision.com/imagedetail/evaluation-nikon-d50

    > That's quite interesting; why don't you include dark frames from more
    > cameras? I'd think that this would be quite useful for people
    > intending to do very low light work.


    Again, time. I do have a fair amount of additional data for a number
    of cameras but I have not had time to write it up.

    Roger

    >> http://www.clarkvision.com/imagedetail/long-exposure-comparisons
    >>
    >> and more at:http://www.clarkvision.com/imagedetail/index.html#sensor_analysis
    >>
    >> other:http://www.astrosurf.org/buil/20d/20dvs10d.htm
    >>
    >> Roger
    >>

    >
     
    Roger N. Clark (change username to rnclark), Mar 12, 2007
    #86
  7. Roger N. Clark (change username to rnclark) wrote:
    > acl wrote:
    >> On Mar 12, 2:53 pm, "Roger N. Clark (change username to rnclark)"
    >> <> wrote:
    >>
    >>> And that is why people who evaluate sensors do more than simply
    >>> study the standard deviation of one image. To understand noise sources,

    >>
    >> Never claimed otherwise! By the way, why don't people study the full
    >> power spectrum of the noise (i.e. of a blackframe)? That would give
    >> quite a lot of information (it should allow distinguishing between the
    >> white part of the noise and things like banding). And it should not be
    >> too hard (e.g. with IRIS, split the channels and FT them). And if you
    >> that to an average of many frames, you'll be studying repeatable noise
    >> only. Is there some particular reason this isn't done by anybody?

    >
    > Time and effort--remember most are doing this for free out of
    > curiosity. I started doing this to try and get the best camera for
    > astrophotography. Then after seeing the trends, it became clear to
    > me that because the photon noise limit had been reached, one can
    > model and predict performance pretty closely.



    I have the Canon 30D. I took a bunch of very underexposed shots
    recently (no tripod at critical time) and found that background
    subtraction didn't help much. The annoying noise is some sort
    of horizontal banding or streaking (these are landscape shots).
    Looks sort of like they scan the image TV-wise and this is 1/f noise
    in the amplifiers.

    Comments?

    Doug McDonald
     
    Doug McDonald, Mar 12, 2007
    #87
  8. Doug McDonald wrote:

    > I have the Canon 30D. I took a bunch of very underexposed shots
    > recently (no tripod at critical time) and found that background
    > subtraction didn't help much. The annoying noise is some sort
    > of horizontal banding or streaking (these are landscape shots).
    > Looks sort of like they scan the image TV-wise and this is 1/f noise
    > in the amplifiers.
    >
    > Comments?


    Doug,
    Did you record the raw data, or just jpegs?
    You need to record the dark frames at as close to the
    same temperature as you can. With the lens cap on
    (a dark or dimly lit room is fine too), set the
    exposure time to the same as the exposures with the problem.
    Record ten to twenty of them. If raw, convert them
    with the same settings as your landscape image.
    Average all the darks into one master dark, then
    subtract the master dark from the landscape frames.
    The closer the environmental conditions are to the
    landscape images, the better the correction will be.
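
    A sketch of that recipe on synthetic frames (real frames would come
    from the raw converter, as described above):

      import numpy as np

      rng = np.random.default_rng(1)

      # Stand-ins for dark frames: a repeatable pattern plus random noise.
      pattern = rng.normal(0.0, 13.0, (100, 100))  # fixed pattern, electrons
      darks = [pattern + rng.normal(0.0, 17.0, pattern.shape)
               for _ in range(16)]

      # Averaging N darks shrinks the random part by sqrt(N) while keeping
      # the repeatable pattern -- the part we want to subtract.
      master_dark = np.mean(darks, axis=0)

      landscape = pattern + rng.normal(0.0, 17.0, pattern.shape)
      corrected = landscape - master_dark
      print(landscape.std(), corrected.std())  # pattern component removed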

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 13, 2007
    #88
  9. John Sheehy Guest

    Doug McDonald <mcdonald@SnPoAM_scs.uiuc.edu> wrote in
    news:et3rqs$m2h$:

    > I have the Canon 30D. I took a bunch of very underexposed shots
    > recently (no tripod at critical time) and found that background
    > subtraction didn't help much. The annoying noise is some sort
    > of horizontal banding or streaking (these are landscape shots).
    > Looks sort of like they scan the image TV-wise and this is 1/f noise
    > in the amplifiers.
    >
    > Comments?


    That's pretty typical of digital cameras in general; it is simply more
    visible in cameras with a certain ratio of banding noise to total noise.
    For the 30D it should be the same as the 20D (ignoring the 30D's fake,
    extra ISOs):

    http://www.pbase.com/jps_photo/image/65737967/original

    The yellow line represents the standard deviation of a blackframe,
    divided by 10 to fit in with the horizontal and vertical banding noises
    (they'd be flat if the entire chart were scaled for the yellow line).

    A few things become very clear here: the banding is generally only about
    1/10 the strength of the total noise, and yet it is highly visible. With
    more read noise, the banding would be less obvious (although it may still
    contribute somewhat to visible noise, just without the obvious pattern).
    The higher ISOs are all normalized to ISO 100; IOW, the values for ISO
    200 are divided by two, ISO 400 values are divided by 4, etc., so these
    are proportional to electrons as units of noise. All noises decrease as
    you get to the higher ISOs. The total noise looks like it is leveling
    off a bit from 800 to 1600, but still has room to improve a little at
    3200; but 3200 is "fake" and is really ISO 1600 amplification, multiplied
    by two, so it is exactly the same as ISO 1600. The horizontal banding is
    still dropping dramatically from 800 to 1600, and seems to have the
    capability of dropping even further if the amplification went to 3200 or
    even 6400.
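
    The normalization John describes is just division by the ISO gain
    ratio; a tiny sketch with placeholder numbers (not measurements):

      # Noise measured in ADU at each ISO; placeholder values only.
      noise_adu = {100: 2.5, 200: 4.2, 400: 7.6, 800: 13.9, 1600: 25.1}

      # Dividing by the gain ratio re-expresses each value in ISO 100
      # units, i.e. proportional to electrons, so ISOs compare directly.
      normalized = {iso: adu / (iso / 100) for iso, adu in noise_adu.items()}
      print(normalized)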


    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
     
    John Sheehy, Mar 13, 2007
    #89
  10. C J Campbell Guest

    On 2007-03-07 04:03:00 -0800, "ipy2006" <> said:

    > I have to shoot action photos in low light conditions. What is the
    > best DSLR for this purpose?
    > Thanks,
    > Yip


    For your budget, any of the cameras that handle well -- a D40 or Rebel
    will work nicely, especially if you get a fast 50mm lens.

    If you want to use flash, get a real flash unit.
    --
    Waddling Eagle
    World Famous Flight Instructor
     
    C J Campbell, Mar 13, 2007
    #90
  11. John Sheehy Guest

    "Roger N. Clark (change username to rnclark)" <> wrote
    in news::

    > I and other astrophotographers tend to ignore fixed pattern noise
    > because we can calibrate most of it out of our images.


    I'm not sure where "fixed pattern noise" came into play here; the issue
    was read noise and one of its components, 1-D noise. There is, for all
    intents and purposes, zero fixed pattern noise in my 20D. Subtracting a
    stack of black frames from a short exposure results in nothing but
    slightly higher noise.

    > If that is an
    > issue for other people, then I suggest they learn how to take
    > dark frames, average them, and subtract them from their images.


    What about the read noise in short exposures?

    > It is really pretty easy, but for best results, it needs to be
    > done on linear data.


    And in the case of Canons which have "negative noise" at the blackpoint,
    it needs to be done without any clipping at the black level.

    > Another calibration that can improve images is
    > flat field calibration, which not only corrects for pixel to pixel
    > variations, but also corrects for light fall-off from lenses.


    > But if someone wants to pay me to run more tests......


    I don't feel like financing anything right now, but I might suggest that
    when you have the time, you do a "gap" test of large vs small pixels.
    Your 1DmkII vs S70 page seems to be about pixel size, but it is really
    about sensor size. Do a test with a small-pixel camera, and the 1DmkII,
    both using the same real focal length, the same Av value, the same Tv
    value, the same ISO setting, of the same detailed subject from the same
    distance. I guarantee that your big pixels will fall to the ground like
    Goliath when you view the subject at any magnification, whether both are
    downsampled, both upsampled, or printed large. This is the real test
    of pixel size. What you seem to overlook in your analyses is the fact
    that standard deviation is only *one* factor in the noise equation;
    magnification is another, and the low noise of big pixels is visually
    magnified when the pixels are magnified along with the subject.

    I am quite certain that the only benefits of big pixels are:

    1) quicker readout time and less storage requirements, and

    2) a slight benefit in photon collection rate per unit of sensor area due
    to less wasted space on the sensor (not always realized, however; my
    1.97u FZ50, for example, collects about the same number of photons per
    unit of area as the 1DmkII, at RAW saturation for the same ISO).

    Here is one of my tests; it needs to be redone, because I realized after
    doing it that ISO 1600 on the FZ50 is crippled by a very bad amplifier,
    which is worse than pushing 100 to 1600. Here is the original, however:

    http://www.pbase.com/jps_photo/image/74020772

    Don't forget that the 10D images would need to be sharpened more,
    sharpening the noise as well.


    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
     
    John Sheehy, Mar 15, 2007
    #91
  12. acl Guest

    On Mar 15, 2:31 pm, John Sheehy <> wrote:
    > that standard deviation is only *one* factor in the noise equation;
    > magnification is another, and the low noise of big pixels is visually
    > magnified when the pixels are magnified along with the subject.


    Exactly, and if you don't need the extra pixels you can bin.

    >
    > I am quite certain that the only benefits of big pixels are:
    >
    > 1) quicker readout time and less storage requirements, and
    >
    > 2) a slight benefit in photon collection rate per unit of sensor area due
    > to less wasted space on the sensor (not always realized, however; my
    > 1.97u FZ50, for example, collects about the same number of photons per
    > unit of area as the 1DmkII, at RAW saturation for the same ISO).


    Well, as long as there are no constant noise sources (e.g. 10 electrons/
    pixel independent of the area). I have no idea if there are or not.
     
    acl, Mar 15, 2007
    #92
  13. Pat Guest

    On Mar 7, 7:03 am, "ipy2006" <> wrote:
    > I have to shoot action photos in low light conditions. What is the
    > best DSLR for this purpose?
    > Thanks,
    > Yip


    At the risk of pissing off all of the "purists" out there, you might
    want to consider the original Canon Digital Rebel (the good old 300D).
    That would get you a useable body for not much money. Then add the
    Russian firmware hack to get to ISO 3200. It's a bit grainy, but
    sometimes grainy is better than nothing.

    Then, with your "extra" money, get a Canon 580 flash (or two) and a
    "wedding bracket" to avoid red eye and limit shadow. Skip the kit
    lens and get the Tokina f/2 (or f/2.8) zoom. It is something like a 28
    to 70mm.

    That would get you a serviceable package within your price range.

    There are lots of situations where this wouldn't be the right setup,
    but for what you are describing it will work just fine.

    Good luck with it.
     
    Pat, Mar 15, 2007
    #93
  14. John Sheehy wrote:
    > "Roger N. Clark (change username to rnclark)" <> wrote
    > in news::
    >
    >>I and other astrophotographers tend to ignore fixed pattern noise
    >>because we can calibrate most of it out of our images.

    >
    > I'm not sure where "fixed pattern noise" came into play here; the issue
    > was read noise and one of its components, 1-D noise. There is, for all
    > intents and purposes, zero fixed pattern noise in my 20D. Subtracting a
    > stack of black frames from a short exposure results in nothing but
    > slightly higher noise.


    Fixed pattern noise occurs in different ways with different
    sensors. All sensors have fixed pattern noise, even your
    20D unless you have a magical one. For example, see:
    http://www.astrosurf.org/buil/5d/test.htm
    It is in French, but the pictures are labeled well enough
    with 30D, 5D, etc., that you can see the effects. Vertical
    striping and amplifier glow are common. There is no camera,
    CCD or CMOS, that doesn't have fixed pattern noise.

    Figure 10 at this page:
    http://www.clarkvision.com/photoinfo/night.and.low.light.photography
    shows that the Canon 1D Mark II has a low level background offset.
    That too is fixed pattern noise. So is the line striping
    you see in the images on this page. All cameras have these
    effects.

    A good example of amplifier glow creating an offset near the
    edge of the frame is at:
    http://www.clarkvision.com/imagedetail/long-exposure-comparisons
    e.g. see Figure 2b.

    >> If that is an
    >>issue for other people, then I suggest they learn how to take
    >>dark frames, average them, and subtract them from their images.

    >
    > What about the read noise in short exposures?


    Read noise produces a random signal added to all
    other signals, regardless of exposure. It is a property of
    reading the sensor, not a property of the exposure time.
    Examples on the above two web pages show read noise in both
    short and long exposures.

    >>It is really pretty easy, but for best results, it needs to be
    >>done on linear data.

    >
    > And in the case of Canons which have "negative noise" at the blackpoint,
    > it needs to be done without any clipping at the black level.


    Sensors collect photons, which are converted to electrons.
    The signal is always positive or zero, not negative.
    The readout electronics add an offset (a bias)
    so that the signals do not go negative. Of course,
    if noise is too high, then the output signal could hit
    zero. Very few pixels are zero in most cameras, even
    at the shortest exposure times in the dark.
    (I know you know this; I'm adding information to provide
    a complete story for others reading, so please don't take
    offense; I know you have studied sensors in detail and you have
    provided great information to us for years.)
    So, I don't know what you mean by negative noise.

    >>Another calibration that can improve images is
    >>flat field calibration, which not only corrects for pixel to pixel
    >>variations, but also corrects for light fall-off from lenses.
    >>But if someone wants to pay me to run more tests......

    >
    > I don't feel like financing anything right now, but I might suggest that
    > when you have the time, you do a "gap" test of large vs small pixels.
    > Your 1DmkII vs S70 page seems to be about pixel size, but it is really
    > about sensor size. Do a test with a small-pixel camera, and the 1DmkII,
    > both using the same real focal length, the same Av value, the same Tv
    > value, the same ISO setting, of the same detailed subject from the same
    > distance. I guarantee that your big pixels will fall to the ground like
    > Goliath when you view the subject at any magnification, whether both are
    > downsampled, both upsampled, or printed large. This is the real test
    > of pixel size. What you seem to overlook in your analyses is the fact
    > that standard deviation is only *one* factor in the noise equation;
    > magnification is another, and the low noise of big pixels is visually
    > magnified when the pixels are magnified along with the subject.
    >
    > I am quite certain that the only benefits of big pixels are:
    >
    > 1) quicker readout time and less storage requirements, and
    >
    > 2) a slight benefit in photon collection rate per unit of sensor area due
    > to less wasted space on the sensor (not always realized, however; my
    > 1.97u FZ50, for example, collects about the same number of photons per
    > unit of area as the 1DmkII, at RAW saturation for the same ISO).


    Here is the fundamental fallacy of your assertion that the
    only benefit is better fill factor (that is what you describe in
    #2 above): the physics of lenses, which is not directly related
    to sensors at all.

    Every lens at a given f/ratio delivers, for a given light source,
    the same surface brightness in the focal plane. Another
    way to put this is that the photons per square micron are constant
    at a given f/ratio, regardless of the lens focal length.
    So an f/4 lens of 20mm focal length looking at a gray
    card in sunlight delivers the same number of photons per
    square micron to its focal plane as does a 500 mm f/4 lens
    looking at the same gray card. It is a simple deduction
    that, given two sensors identical in every way including
    quantum efficiency, read noise, and fill factor, that
    the sensor with larger pixels collects more photons
    simply due to lens physics.

    An 8 micron pixel collects 16 times the photons of a
    pixel 2 microns in size (8*8/(2*2) = 16), and that is exactly
    what we observe with today's digital cameras. For example,
    see:
    Digital Cameras: Does Pixel Size Matter?
    Part 2: Example Images using Different Pixel Sizes
    http://www.clarkvision.com/imagedetail/does.pixel.size.matter2
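
    The scaling is pure geometry; a short Python sketch (the flux value
    is illustrative only):

      # Photons per square micron at the focal plane are set by the
      # f/ratio and the scene, so photons per pixel scale with pixel area.
      FLUX = 1000.0  # photons per square micron, illustrative

      def photons(pixel_microns, flux=FLUX):
          return flux * pixel_microns ** 2

      print(photons(8.0) / photons(2.0))  # -> 16.0, the 8*8/(2*2) ratio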

    > Here is one of my tests; it needs to be redone, because I realized after
    > doing it that ISO 1600 on the FZ50 is crippled by a very bad amplifier,
    > that is worse than pushing 100 to 1600. Here is the original, however:
    >
    > http://www.pbase.com/jps_photo/image/74020772
    >
    > Don't forget that the 10D images would need to be sharpened more,
    > sharpening the noise as well.


    Your test is biased in that the two images from the two cameras
    are not comparable. By using two different sized sensors
    with the same focal length, of course the sensor with
    smaller pixels sees finer detail. But the large sensor
    shows a larger field of view that is not covered by the
    smaller sensor at all. So depending on who wanted the
    image, one could draw different conclusions: the person who
    wanted a wide field of view would choose the large sensor;
    one who wanted a telephoto image would choose the small
    pixels. But in either case, the pixels from the small
    sensor would be noisier in proportion to the square root
    of the ratio of the areas of each pixel.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 16, 2007
    #94
  15. "Roger N. Clark (change username to rnclark)" <> wrote:
    > John Sheehy wrote:
    >>
    >> I don't feel like financing anything right now, but I might suggest that
    >> when you have the time, you do a "gap" test of large vs small pixels.
    >> Your 1DmkII vs S70 page seems to be about pixel size, but it is really
    >> about sensor size. Do a test with a small-pixel camera, and the 1DmkII,
    >> both using the same real focal length, the same Av value, the same Tv
    >> value, the same ISO setting, of the same detailed subject from the same
    >> distance. I guarantee that your big pixels will fall to the ground like
    >> Goliath when you view the subject at any magnification, whether both are
    >> downsampled, both upsampled, or printed large. This is the real test
    >> of pixel size. What you seem to overlook in your analyses is the fact
    >> that standard deviation is only *one* factor in the noise equation;
    >> magnification is another, and the low noise of big pixels is visually
    >> magnified when the pixels are magnified along with the subject.
    >>
    >> I am quite certain that the only benefits of big pixels are:
    >>
    >> 1) quicker readout time and less storage requirements, and
    >>
    >> 2) a slight benefit in photon collection rate per unit of sensor area due
    >> to less wasted space on the sensor (not always realized, however; my
    >> 1.97u FZ50, for example, collects about the same number of photons per
    >> unit of area as the 1DmkII, at RAW saturation for the same ISO).

    >
    > Here is the fundamental fallacy of your assertion that the
    > only benefit is better fill factor (that is what you describe in
    > #2 above): the physics of lenses, which is not directly related
    > to sensors at all.
    >
    > Every lens at a given f/ratio delivers, for a given light source,
    > the same surface brightness in the focal plane. Another
    > way to put this is that the photons per square micron are constant
    > at a given f/ratio, regardless of the lens focal length.
    > So an f/4 lens of 20mm focal length looking at a gray
    > card in sunlight delivers the same number of photons per
    > square micron to its focal plane as does a 500 mm f/4 lens
    > looking at the same gray card. It is a simple deduction
    > that, given two sensors identical in every way including
    > quantum efficiency, read noise, and fill factor, that
    > the sensor with larger pixels collects more photons
    > simply due to lens physics.


    I think you guys are talking past each other here.

    I think John is arguing that _for a sensor of a given size_, larger pixels
    aren't any better.

    David J. Littleboy
    Tokyo, Japan
     
    David J. Littleboy, Mar 16, 2007
    #95
  16. David J. Littleboy wrote:

    > I think you guys are talking past each other here.
    >
    > I think John is arguing that _for a sensor of a given size_, larger pixels
    > aren't any better.


    1) Well, his example used 2 different sized sensors.

    2) There is a difference. The signal you record has added
    read noise. A larger pixel collects more photons
    so the signal is larger compared to the read noise.
    Thus you can detect fainter things, or have better high
    ISO performance. If you sum the signal from smaller
    pixels to equal the area of a larger pixel,
    you are also adding read noise, so you don't gain
    as much as having the larger pixel with one read noise.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 16, 2007
    #96
  17. Lionel Guest

    On Fri, 16 Mar 2007 12:29:52 +0900, "David J. Littleboy"
    <> wrote:

    >I think you guys are talking past each other here.
    >
    >I think John is arguing that _for a sensor of a given size_, larger pixels
    >aren't any better.


    But they /are/ better! - That's why the sensor designers are
    constantly trying to improve the fill-factor, i.e. make the pixels (or,
    more accurately, the actual photodiode surface, which is smaller than
    the pixel size) bigger for a given sensor size/resolution ratio. This
    is because the bigger the surface of the photodiode (as a proportion of
    the size of that pixel on the sensor), the more photons it'll collect
    for a given exposure. And, all else being equal, more photons equals a
    better signal to noise ratio.
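
    A quick numeric illustration of that last sentence: in the
    photon-limited case, SNR = photons / sqrt(photons) = sqrt(photons).

      import math

      for photons in (100, 400, 1600, 6400):
          snr = photons / math.sqrt(photons)  # equals sqrt(photons)
          print(photons, snr)                 # SNR doubles per 4x photons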

    --
    W "Some people are alive only because it is illegal to kill them."
    . | ,. w ,
    \|/ \|/ Perna condita delenda est
    ---^----^---------------------------------------------------------------
     
    Lionel, Mar 16, 2007
    #97
  18. acl Guest

    On Mar 16, 6:29 am, "David J. Littleboy" <> wrote:

    > I think you guys are talking past each other here.
    >
    > I think John is arguing that _for a sensor of a given size_, larger pixels
    > aren't any better.
    >


    But doesn't this make him a "crop fan" for you? Or does your attitude
    depend on who you're replying to?
     
    acl, Mar 16, 2007
    #98
  19. acl Guest

    On Mar 16, 7:04 am, "Roger N. Clark (change username to rnclark)"
    <> wrote:

    > 2) There is a difference. The signal you record has added
    > read noise. A larger pixel collects more photons
    > so the signal is larger compared to the read noise.
    > Thus you can detect fainter things, or have better high
    > ISO performance. If you sum the signal from smaller
    > pixels to equal the area of a larger pixel,
    > you are also adding read noise, so you don't gain
    > as much as having the larger pixel with one read noise.
    >


    Is read noise fixed per pixel, per unit area, or something else?
     
    acl, Mar 16, 2007
    #99
  20. acl wrote:
    > On Mar 16, 7:04 am, "Roger N. Clark (change username to rnclark)"
    > <> wrote:
    >
    >> 2) There is a difference. The signal you record has added
    >> read noise. A larger pixel collects more photons
    >> so the signal is larger compared to the read noise.
    >> Thus you can detect fainter things, or have better high
    >> ISO performance. If you sum the signal from smaller
    >> pixels to equal the area of a larger pixel,
    >> you are also adding read noise, so you don't gain
    >> as much as having the larger pixel with one read noise.
    >>

    >
    > Is read noise fixed per pixel, per unit area, or something else?


    Read noise is per pixel. Say you had 2 sensors, one with half
    the pixel size, so you needed to add 4 pixels to equal the area
    of the larger pixel. Let's say both had a great read noise of
    4 electrons. The larger pixel gets: X + 4 electrons noise.
    The smaller pixel sensor, adding 4 pixels, gets:
    X + sqrt(4)*4 = X + 8 electrons noise (the read noises add in
    quadrature), so the read noise is effectively doubled.
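
    The same arithmetic in quadrature form, as a tiny Python check of the
    numbers above:

      import math

      READ_NOISE = 4.0   # electrons per read, from the example

      one_big = READ_NOISE                # one read of one large pixel
      binned = math.sqrt(4) * READ_NOISE  # four small pixels summed:
                                          # sqrt(4 * 4**2) = 8 electrons
      print(one_big, binned)              # -> 4.0 8.0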

    Read noise for a given sensor is dependent on the design of the sensor
    and how the readout is configured. Read noise ranges from just under 4
    to about 30 electrons and is not dependent on pixel size.
    For example, see Figure 3 at:
    http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary

    At low ISO and low bit count (e.g. 12 bits), noise in the A/D converter
    contributes more noise than the true read noise from the sensor.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 16, 2007
