Roger N. Clark:

> I'll assemble the questions and answers into one post in
> case other people want to go through the exercise.

Here it is:

Let's start from the beginning and work through things one step at a time. First rule: don't jump ahead to conclusions.

The first question (this is not a trick question):

-----------

Given two cameras A and B: Camera B is a perfectly scaled-up version of camera A. Thus, the sensor in camera B is twice as large as the sensor in camera A. Both cameras have the same number of pixels, such that the pixels in camera B are twice the linear size (4 times the area) of those in camera A.

Each camera has an f/2 lens and exposes a scene with constant lighting at f/2. The lens on camera B is twice the focal length of the lens on camera A, so that both cameras record the same field of view.

Let's say that camera A collects 10,000 photons in each pixel in a 1/100 second exposure at f/2. How many photons are collected in each pixel in camera B in the same exposure time on that same scene with the same lighting at f/2?

Answer: 4 * 10,000 = 40,000 photons per pixel.

The number of photons collected scales as the area of the pixel. The area of a pixel in camera B is 4 times that of camera A, so the answer is 4 * 10,000.
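The pixel-area scaling can be written out as a couple of lines of Python, using the numbers from the example above:

```python
# Photon count per pixel scales with pixel area. The numbers
# (10,000 photons, 2x linear pixel size) are from the example above.
photons_a = 10_000        # photons per pixel in camera A at f/2
linear_scale = 2          # camera B pixels are 2x the linear size

# Area goes as the square of the linear scale, so 4x the photons.
photons_b = photons_a * linear_scale**2
print(photons_b)          # -> 40000
```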

---------

Now we have established that the two cameras have identical fields of view; one camera is just twice the size of the other, and their spatial resolution on the subject is identical. Now let's say we are taking a picture of a flat wall, so there are no depth of field issues. The images have the same pixel count, the same field of view, and the same spatial resolution, taken with the same exposure time and f/stop. Are the pictures from the two cameras the same?

Answer: No. They are the same except that camera B has collected 4 times the photons.

The images from both cameras are absolutely identical in every respect except that the image from camera B, the larger camera, has a higher signal-to-noise ratio (2x higher).

An analogy to collecting photons in a camera with pixels is collecting rain drops in buckets, assuming the drops are falling randomly, which is probably the way it is. Larger buckets collect more rain drops. If you measure the number of drops collected from a bunch of buckets, you will find that the amount in each bucket is slightly different. The noise in the counted raindrops collected in any one container is the square root of the number of rain drops. This is called Poisson statistics (e.g. check wikipedia). So if you double the count (photons in a pixel or rain drops in a bucket), the noise in the count will go up by square root 2.

For example, put out 10 buckets that on average collect 10,000 rain drops each. You would find the measured amount of water varies by 1% from bucket to bucket (the standard deviation):

square root (10,000) / 10,000 = 0.01 = 1%

(10,000 rain drops is about 0.5 liter of water, by the way.)

So with Poisson statistics, which is the best that can be done measuring a signal based on random arrival times (e.g. of photons), the

signal / noise = signal / square root(signal) = square root(signal).

So in our camera test, collecting 4x the photons increases the signal-to-noise ratio by square root (4) = 2.

Fortunately, most digital cameras have such noise characteristics except at near-zero signal. This means that improving noise performance can only come through increasing the photon count. That can be done 3 ways: increasing quantum efficiency (current digital cameras are around 30%), increasing fill factor (most are probably already above 80%), or increasing the pixel size (e.g. the larger bucket collects more rain drops).
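The rain-bucket analogy is easy to simulate. This sketch drops 1,000,000 raindrops at random into 100 buckets, so each bucket averages 10,000 drops as in the example; the bucket count and random seed are arbitrary choices for illustration:

```python
import random
import statistics

# Simulate the rain-bucket analogy: drops land in buckets at random,
# so the count in each bucket follows Poisson-like statistics.
random.seed(1)
n_buckets = 100
n_drops = 1_000_000      # 10,000 drops per bucket on average

buckets = [0] * n_buckets
for _ in range(n_drops):
    buckets[random.randrange(n_buckets)] += 1

mean = statistics.mean(buckets)      # exactly 10,000 on average
stdev = statistics.pstdev(buckets)   # ~ sqrt(10,000) = ~100
print(mean, stdev, stdev / mean)     # scatter is ~1% of the mean
```

The measured bucket-to-bucket scatter comes out near square root (10,000) = 100 drops, i.e. about 1% of the mean, matching the standard-deviation calculation above.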

----------------

Next question:

Assuming there is no change in aberrations if you change f/stop, what could be done in the above test to make camera B produce an image that is completely identical to that from camera A?

A: Assume the subject is static (no movement); then there are two answers. Extra credit for giving both.

B: Assume the subject is not static; then there is one answer. What is it?

Answer A:

We have 4x the photons, so the two answers are:

1) Stop down 2 stops to decrease the light level 4x.
2) Shorten the exposure time 4x (a faster shutter speed).

(What about increasing the ISO? While changing ISO changes the perceived image, it does not change the number of photons collected.)

Answer B: The one and only answer is to stop the lens down two stops. This reduces the photon count and also happens to make the depth of field the same as the smaller sensor camera's, finally making the results from the two cameras identical (total photons per pixel as well as depth of field). Raising the ISO 2 stops would bring the digitized signal to the same relative level as the small camera, but that could also be done in post-processing (again, the photon count and signal-to-noise ratio would be the same). The ISO change would also make the metering the same as the small camera's, so the metered shutter speeds would be identical too. In real cameras, boosting the ISO is a good step, as it reduces A/D quantization error and reduces the read-noise contribution to the signal.
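The bookkeeping in Answer B can be sketched numerically. The 2x scale factor and 10,000-photon count are from the exercise; the 50 mm focal length for camera A is a hypothetical value chosen only for illustration:

```python
import math

# Equivalence bookkeeping for the two cameras in the exercise.
scale = 2                           # camera B is 2x the linear size of A
photons_a = 10_000                  # photons per pixel, camera A at f/2
focal_a = 50.0                      # camera A focal length (mm), assumed

# Same f/stop: photon count per pixel scales with pixel area.
photons_b = photons_a * scale**2                 # 40,000

# Stopping camera B down 2 stops (f/2 -> f/4) cuts the light 4x,
# matching camera A's photon count.
stops_down = math.log2(scale**2)                 # 2.0 stops
f_number_b = 2 * 2**(stops_down / 2)             # f/4
photons_matched = photons_b / 2**stops_down      # back to 10,000

# The same stop-down also equalizes depth of field: the physical
# aperture diameters (focal length / f-number) are now identical.
aperture_a = focal_a / 2                         # 25 mm at f/2
aperture_b = (scale * focal_a) / f_number_b      # 25 mm at f/4

print(photons_b, f_number_b, photons_matched, aperture_a, aperture_b)
```

The last two lines show why the depth of field matches: after stopping down, both lenses have the same physical aperture diameter, so they collect the same total light and render the same blur.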

So, what was the result of the exercise? In making the images from two different sized cameras identical in terms of resolution, angular coverage, exposure time, and signal-to-noise ratio, we find the final property: the depth of field is also identical.

I have added this discussion to:

The Depth-of-Field Myth and Digital Cameras
http://www.clarkvision.com/photoinfo/dof_myth

Roger