Velocity Reviews > Calculation of snr

# Calculation of snr

John P Sheehy

 06-21-2008
"Roger N. Clark (change username to rnclark)" <(E-Mail Removed)> wrote
in news:(E-Mail Removed):

> this group anymore due to all the bickering and for me little overall
> value).
>
> The above SNR discussion is not quite right and is incomplete.
>
> You can find read noise for the 5D discussed above in Table 4a at:
> http://www.clarkvision.com/imagedeta...formance.summary

That's based on a single camera, of course, and relies on some assumptions.

> The 5D read noise from table 4a and the max signal derived
> from Table 3a (gain *(4095-bias))

Try figures like 3652 - bias, which is typical for a real 5D; actual
units typically clip around 3652 (and that can vary).

> Assuming bias = 128, then we
> can compute the stops below max signal from the equation:
>
>          read noise   max signal   crossover point: stops
>   ISO    (electrons)  (electrons)  below max signal
>   100      30.1         65000            6.2
>   200      15.6         32000            7.0
>   400       8.4         16000            7.8
>   800       5.2          7900            8.2
>  1600       3.7          4000            8.2
I thought the 5D had the same pixels as the 1D2, and had about 52 - 53K
photons at ISO 100 RAW saturation? In any event, my sensor was a
hypothetical one, based on the 5D. It's like a 5D with enough quantum
efficiency to have its 80K photons at the ISO 100 RAW saturation.
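For anyone who wants to check the quoted table's arithmetic: assuming the crossover is where shot noise (the square root of the signal in electrons) equals read noise, the "stops below max signal" column follows from stops = log2(max_signal / read_noise²). A quick sketch:

```python
from math import log2

def crossover_stops(max_signal_e, read_noise_e):
    """Stops below max signal where shot noise equals read noise.

    Shot noise is sqrt(signal) electrons, so the two are equal when
    signal = read_noise**2 electrons.
    """
    return log2(max_signal_e / read_noise_e ** 2)

# (ISO, read noise e-, max signal e-) as quoted in the table
for iso, rn, full in [(100, 30.1, 65000), (200, 15.6, 32000),
                      (400, 8.4, 16000), (800, 5.2, 7900),
                      (1600, 3.7, 4000)]:
    print(iso, round(crossover_stops(full, rn), 1))
```

The printed values match the quoted table to one decimal place, so that is evidently the model behind it.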

> John showed what is probably the worst case in modern cameras:
> the 5D ISO 100 case.

Yes, and I showed it for a reason: to show how badly read noise can
mangle an otherwise high DR. Of course, Canon's foolish choice of not
having a base ISO that saturates the RAW data just short of sensor
saturation is another problem.

> That camera is several generations old
> and even the next generation after the 5D had lower
> read noise than the 5D. At this cross-over point, read noise
> does NOT dominate as John says; it is equal to the photon
> noise.

I don't remember saying that, but even where they are equal, read noise
is still significant. My comments were about a point a certain number of
stops below metered gray; not about the crossover point, per se.

> Photon noise must be below this level before

That level is not far below middle gray. It is actually still in the
lower midtone range. As soon as the composite curve starts to separate
from the shot noise curve, read noise is taking its toll. It doesn't
matter if it is dominant or not; it is destructive.
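To put a number on "not dominant but still destructive": in the usual quadrature model, even at the crossover where the two noises are equal, adding read noise costs half a stop of SNR relative to shot noise alone. A sketch, using the 30.1 e- ISO 100 read noise figure from the quoted table:

```python
from math import sqrt

def snr(signal_e, read_noise_e):
    # shot noise sqrt(signal) and read noise add in quadrature
    return signal_e / sqrt(signal_e + read_noise_e ** 2)

read = 30.1               # ISO 100 read noise, electrons (from the table)
crossover = read ** 2     # signal where shot noise equals read noise

ideal = sqrt(crossover)   # SNR from shot noise alone
print(snr(crossover, read) / ideal)   # ~0.707 = 1/sqrt(2): half a stop lost
```

And the damage starts well before the crossover: wherever the composite curve separates visibly from the pure shot-noise curve.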

> And in older cameras such as the 5D
> fixed pattern noise has a greater visual effect anyway.

No. Why do you keep repeating this? There is very little fixed pattern
noise in Canon sensors for short exposures. A stack of 16 black frames
subtracted from a low-light exposure makes no noticeable change
in a short exposure. The problematic pattern noises are not "fixed" at
all. Some components are somewhat periodic, and others seem to be
random with a random frequency distribution; some are generated by the
camera, and some are captured from RF interference.

> At the high ISO end, with read noise below 4 electrons
> one would be hard pressed to see the noise
> difference between pure photon noise and a read noise of
> 3.7 electrons.

Maybe you're not pressing very hard in the right direction. I don't
think you fully appreciate the difference between read noise and shot
noise. They are not two different sources of the same thing. Shot noise
never blankets the deep shadows. The shot noise of 0 photons is no noise
at all. The read noise of 0 photons is whatever it is, the same as for a
large signal. Your model seems to be stuck on test charts with flat, OOF
areas. You cannot see the real difference between shot noise and read
noise in such a limited sample of textures. Shot noise simulates a
blanket effect because the *signal* is a blanket, in your models.
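The distinction is easy to state numerically. A sketch (the 30.1 e- read noise is illustrative, per the table above): shot noise scales as sqrt(signal) and vanishes with the signal, while read noise is a constant floor regardless of signal:

```python
from math import sqrt

READ_NOISE_E = 30.1   # illustrative read noise, electrons

def total_noise(signal_e, read_noise_e):
    # shot noise sqrt(signal) plus read noise, added in quadrature
    return sqrt(signal_e + read_noise_e ** 2)

print(total_noise(0, 0.0))               # perfect sensor, zero photons: no noise at all
print(total_noise(0, READ_NOISE_E))      # real sensor, zero photons: the full read noise
print(total_noise(10000, READ_NOISE_E))  # bright signal: shot noise dominates
```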

> Again fixed pattern noise in the older
> cameras would be more annoying. The newer 14-bit cameras
> have greatly improved the fixed pattern noise, and I predict
> the next generation of cameras will have better 14-bit
> converters (perhaps even 16-bit) that will improve the
> low ISO noise floor further.

Canon seems to re-adjust the blackpoint of individual lines in the RAW
data in the 14-bit models, if the 40D is any indication. Its clipping
points are adjusted by small amounts on a line-by-line basis. Of course,
the RAW file does not have to be 14 bits to do this with the extra
precision; it could be done in the converter. The extra two bits are
mostly a waste of storage, unless you're going to be stacking multiple
images, in which case, as signal adds linearly while noise adds in
quadrature, some significant extra signal may come out of them.
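The stacking arithmetic, as a sketch: with N frames, the summed signal grows as N while the independent per-frame noises add in quadrature and grow as sqrt(N), so SNR improves by sqrt(N). Signal and noise values below are illustrative:

```python
from math import sqrt

def stacked_snr(signal_e, read_noise_e, n_frames):
    # per-frame noises are independent, so they add in quadrature
    total_signal = n_frames * signal_e
    total_noise = sqrt(n_frames * (signal_e + read_noise_e ** 2))
    return total_signal / total_noise

one = stacked_snr(100, 30.1, 1)
sixteen = stacked_snr(100, 30.1, 16)
print(sixteen / one)   # 4.0: a 16-frame stack gains sqrt(16) = 4x SNR (2 stops)
```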

I don't agree with you that better ADCs are all that is needed. If the
ADCs were causing most of the read noise at low ISOs, the "in-between"
1/3-stop ISOs would not cause a saw-tooth effect in the ISO vs read noise
curves for the 1 series and the 5D. I think Canon is doing something at
the sensor differently (not necessarily signal gain; maybe known noise
subtraction is done differently for different ISOs).

> In any case, if you want low light photon noise limiting performance,
> use your high ISO settings.

Still limited in usefulness by BLANKETING read noise, and non-repeating
pattern noises (mostly 1-dimensional) are part of the problem.

>> This is a pixel-level analysis, however, and it is becoming more
>> apparent that as we approach higher pixel counts, this pixel-centric
>> measure is not accurate about *image*-level SNR, as image noise and
>> dynamic range and SNR all improve when you have more pixels in the
>> image, with the same pixel noise.

> This too is not quite correct. As pixel size decreases,
> dynamic range decreases. Dynamic range is limited by the
> number of photons you collect, even in a perfect system.

No; the DR of a pixel (which is not necessarily the same thing as "DR")
is *only* determined by photon count when you have a perfect system. In
real systems it is limited by read noise (and any thermal/leakage noises,
when they are significant).

The DR of an image is not the DR of a pixel. "Noise floor" is only a
metaphor, poorly chosen, IMO, which suggests an opaqueness which does not
exist. There is nothing different about a signal 1/2 stop above the
noise floor and one 1/2 stop below it, except that the former has about
double the SNR.
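That claim is easy to verify with the quadrature model (the read noise value is illustrative): going from half a stop below the "floor" to half a stop above it roughly doubles the SNR, smoothly, with no cliff at the floor itself:

```python
from math import sqrt

READ = 30.1   # illustrative read noise in electrons; the "noise floor"

def snr(signal_e, read_e=READ):
    return signal_e / sqrt(signal_e + read_e ** 2)

above = snr(READ * 2 ** 0.5)   # signal half a stop above the floor
below = snr(READ / 2 ** 0.5)   # signal half a stop below the floor
print(above / below)           # ~1.98: about double, nothing opaque happens
```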

> Increasing pixel count while keeping sensor size constant
> can have benefits, but those benefits only improve if your
> print/display size does not increase. If you want larger
> prints with more pixels when printing at the human eye resolution
> limits (e.g. 300 to 600 ppi) then pixel level performance is
> important,

It is still better to have more pixels in the bigger sensor, too. You
have this obsession with always scaling the pixels up when going to a
bigger sensor, when just extending the same pixel density over a greater
area would do better. Your DR and noise figures are stuck in a
pixel-centric paradigm which has nothing to do with real images or human
perception.
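In quadrature terms, a sketch of why pixel-level DR understates image-level DR (full well and read noise are the illustrative ISO 100 numbers from the table above): summing n like pixels into one output region scales signal capacity by n but read noise only by sqrt(n), so DR per output region rises as pixel count grows:

```python
from math import log2, sqrt

def pixel_dr_stops(full_well_e, read_noise_e):
    return log2(full_well_e / read_noise_e)

def summed_dr_stops(full_well_e, read_noise_e, n_pixels):
    # n pixels summed: capacity scales with n, read noise with sqrt(n)
    return log2(n_pixels * full_well_e / (sqrt(n_pixels) * read_noise_e))

gain = summed_dr_stops(65000, 30.1, 4) - pixel_dr_stops(65000, 30.1)
print(round(gain, 6))   # 1.0: four pixels per output region buy a stop of DR
```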

> and ultimately diffraction limits image
> resolution. I've modeled all these effects in the
> Apparent Image Quality section (AIQ) (see Figure 9) at:
> http://www.clarkvision.com/imagedeta...formance.summary/index.html#AIQ
> Note how the dashed lines improve AIQ as pixel size
> decreases, then at some point AIQ drops.

Then you're doing something wrong.

> For f/8 diffraction
> limited lenses, 5-micron or slightly larger pixels seems to
> be a sweet spot, and that is the level of the latest crop of
> APS-C DSLRs.

My G9, with 1.9-micron pixels, shot at f/8, loses detail when
downsampled to 70% or 80%. Diffraction limits are a Chicken Little
effect, IMO, and are taken far too literally. The point of no further
returns due to diffraction is where a sharp B/W edge transition takes 3
pixels exclusive to each of the red and green channels in a Bayer CFA
(6 full-res pixels).

No doubt, further oversampling is memory- and processor-hungry, but it
leaves you with data that is resilient to PP artifacts. You can rotate,
perspective-correct, fix distortion, CA, etc, without causing unequal
sharpness throughout the image.

--
John Sheehy