What compact digicam has the biggest CCD pixels?

Discussion in 'Digital Photography' started by Paul Rubin, Apr 22, 2005.

  1. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)]
    Roger, you need to choose one way or the other. Either

    ISO is tied to "full well capacity" (as you said in another message
    yesterday, probably in the same thread),

    or "full well capacity" is a property of a sensor, not of the ISO setting.

    In my experience, people use this term in both senses. But using them
    in both senses in the same thread leads to too much confusion.
    As I said, this assumes one particular sensor with one particular
    QE. And IIRC the context in which you used this sentence was not this.
    This happens with which ISO settings? [Moreover, this statement is
    vague enough to be overembracing; I know *I* would press *anything* up
    to the envelope; some people are just hard to please. ;-]
    So you do not need high full well. All you need is to collect enough
    electrons both in bright areas and in dark areas. This removes the
    restriction on capacitance ("one cannot keep too many electrons on the
    given area of p/n-transition in silicon") (at least with certain
    technological countermeasures; not with the current generation of sensors).

    BTW, the other common confusion is that smaller sensels create "extra
    noise" (given the same sensor size). This is true only with high
    enough readout noise. *If* the readout noise is low enough, smaller
    sensels have *the same noise* as larger ones. [Well, edge effects can
    also increase noise, but so far you claim that smaller sensors have
    the same QE as larger ones - which is not what I observe.]

    The *extra* information added by smaller sensels (w.r.t. larger
    sensels) has, of course, lower S/N than the information already
    present in sensors with larger sensels. However, this information can
    be easily filtered out (together with the added noise) if needed.
    This way you can, e.g., trade extra resolution for higher noise
    (differently in different areas of image).
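    The claim about small sensels and binning is easy to check numerically. A
    quick Poisson sketch (the photon counts are made up, and read noise is
    assumed zero, as the argument requires): four binned small sensels and one
    large sensel covering the same area show the same noise.

    ```python
    import numpy as np

    # Hypothetical numbers: a "large" sensel collects 4000 photons per
    # exposure; four "small" sensels covering the same area collect 1000 each.
    rng = np.random.default_rng(0)
    n_trials = 100_000

    large = rng.poisson(4000, n_trials)                   # one big sensel
    small = rng.poisson(1000, (n_trials, 4)).sum(axis=1)  # 2x2 small sensels, binned

    snr_large = large.mean() / large.std()
    snr_small = small.mean() / small.std()
    # Both come out near sqrt(4000) ~ 63: with zero read noise,
    # binned small sensels match the large sensel exactly.
    ```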
    [Partially OT]

    Using a simple unsharp mask is, as you saw, pure trash... One *needs*
    to know the MTF curve beforehand. And my (very limited) experience
    is that some corrections of MTF lead to very little change in
    *visible* noise (while the changes in *measured* noise may be very large).

    But I cannot make more educated decisions about this until I find some
    information about the sensitivity of the eye to noise with different
    spectral distributions. Does anyone know of a reference?

    BTW, the MTF curve of the monitor vs print can also be relevant.

    Ilya Zakharevich, May 2, 2005

  2. It is both ways. The sensor has its characteristics which sets its
    basic speed, defined by the percentage of full well achieved
    for a given light input. But because the sensor is linear in
    its response to light, one can amplify the signal to give
    the impression of higher speed. The sensor itself does not
    actually have higher speed.
    No, it has nothing to do with QE. It has ONLY to do with
    what you can count. QE affects the rate at which you
    count, but not the total number of photons you can count.

    ISO 100.

    If you would actually read my web pages, e.g.
    see Table 2, you would see the P&S camera (S60) has lower
    read noise than the larger pixel DSLR (10D). Yet the 10D
    still produces much higher signal to noise at both high
    and low signal levels (Tables 3 and 4). It is very basic

    Read the above page on image restoration, then go to the
    references and read those pages. In particular, read
    the STSCI (that's space telescope science institute)
    reference. There is a section on noise amplification.

    It is a basic fact that restoration methods add noise.
    If you can come up with a new method, do it, patent it,
    and make millions. No one has done it so far.

    Roger N. Clark (change username to rnclark), May 3, 2005

  3. Except that you can't seem to make that connection with pixel size
    in a camera.

    The question is:
    Do you agree that given X photons per
    square micron per second incident on a surface,
    and given two "buckets" to collect photons, one Y microns
    on a side and the other 2Y microns, the 2Y bucket collects
    4 times as many photons per second as the Y bucket?

    Possible answers: A) Yes, or B) No.

    Correct answer: A
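    The arithmetic behind answer A, with illustrative numbers (the flux X and
    side Y below are arbitrary): photon collection scales with bucket area.

    ```python
    # Illustrative flux and bucket size; collection rate scales with area.
    X = 100.0   # photons per square micron per second (arbitrary)
    Y = 3.0     # bucket side in microns (arbitrary)

    rate_Y = X * Y ** 2            # photons/s into the Y-micron bucket
    rate_2Y = X * (2 * Y) ** 2     # photons/s into the 2Y-micron bucket

    assert rate_2Y == 4 * rate_Y   # doubling the side quadruples the photons
    ```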
    Roger N. Clark (change username to rnclark), May 3, 2005
  4. Paul Rubin (Guest)

    Very good page about increasing resolution by adding noise. But what
    if I want to go the other way, i.e. I want to turn noisy high res
    pictures into clean low res pictures. Is there a better way to do
    that than simplistic pixel averaging or low pass filters?
    Paul Rubin, May 3, 2005
  5. Most designers set the A-to-D converter maximum to just under
    the full well capacity. Right at full well the signal becomes non
    linear, but this is within a percent or so of full well (for
    the devices I've seen data for). So for all practical purposes,
    the two are the same.
    Yes, Dave is correct.
    There is the inherent "speed" of the electronic sensor. That never
    changes, and is in fact the same for all devices with the same
    quantum efficiency. As all current digital cameras use front
    side illuminated sensors, the "speed" or ISO is very close
    between all CCDs and CMOS sensors.

    The amplifier that boosts the signal from the electronic sensor is
    effectively an artificial boost in ISO. Changing gain simply
    rescales the output of the electrons in the well to a larger
    number in the integer output from the camera. This is no different
    than taking a digital image file and multiplying all the numbers
    by 2 and saying the ISO was increased by 2 (assuming you recorded
    linear data from the sensor, and assuming read noise was negligible).
    It is effectively a post processing step done after the photons have
    been converted to electrons.
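    That description of gain can be checked numerically. A sketch with made-up
    counts (read noise assumed negligible, as stated above): multiplying
    linear sensor data by a constant changes the numbers but not the relative
    noise, which is why gain is "effectively a post processing step".

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    electrons = rng.poisson(500, 10_000).astype(float)  # linear signal, base ISO

    # "ISO 400" = 4x analog gain: with negligible read noise this is just
    # a rescaling before quantization, so relative noise is unchanged.
    iso100 = electrons          # gain 1
    iso400 = electrons * 4.0    # gain 4

    # std/mean (the noise-to-signal ratio) is identical for both.
    ```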

    Since they both have about the same quantum efficiency, they are
    approximately equally sensitive to light. That is the inherent nature
    of electronic sensors in production today (ignoring back side
    illuminated CCDs).
    But it doesn't work that way with digital cameras, except for small
    variations in transmission of blur filters and small differences in QE
    they all have the same basic sensitivity. Electronics have become so
    good, that Poisson statistics dominates.
    I disagree. The basic sensor data from Poisson-statistics
    (photon/electron-noise-limited) sensors prove otherwise.

    Roger N. Clark (change username to rnclark), May 3, 2005
  6. Paul Rubin (Guest)

    That makes me wonder something else. If these wells hold 60k
    electrons, why are they using 12-bit converters instead of 16 bits?
    With 16-bit converters we could get rid of the gain control, since
    we'd have recorded the actual number of electrons in the well. It
    sounds to me like digicam technology isn't mature until we're doing that.
    Paul Rubin, May 3, 2005
    16-bit converters are much slower and cost more. At the speed of
    fast cameras like the 1D Mark II, the camera must process data
    at the rate of 70 megapixels/second (8.5 frames/sec * 8.2 megapixels).
    Yes. But 14 bits would pretty much do it. With read noise of
    10 to 15 electrons, 52000 electrons / 2^14 = 52000/16384 = 3.2 electrons
    per output number (DN). With a 14-bit converter, one might
    need only 2 gain states: low (e.g. ISO 100) and high (2x) for
    ISO 200 and up. You would see very little difference between the
    two states because the noise is already higher than the digitization
    step at all signal levels (e.g. at intensity = 9 electrons (photons),
    the noise is 3 electrons).

    My guess is we'll see 14-bit converters in higher end cameras soon.

    Cameras with small pixels do not need 14 bit converters, 12-bits
    is adequate to digitize the noise.
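    The electrons-per-DN arithmetic above can be tabulated directly (full well
    and read noise taken from the numbers quoted in this post):

    ```python
    full_well = 52_000   # electrons, from the post
    read_noise = 10      # electrons, low end of the quoted 10-15 range

    # Digitization step in electrons per output number (DN) per bit depth:
    e_per_dn = {bits: full_well / 2 ** bits for bits in (12, 14, 16)}
    # 12-bit: ~12.7 e/DN, 14-bit: ~3.2 e/DN, 16-bit: ~0.8 e/DN.
    # Once the step falls well below the read noise, extra converter
    # bits only digitize the noise more finely.
    ```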

    Roger N. Clark (change username to rnclark), May 3, 2005
  8. You need an adaptive averaging algorithm, one that tries to
    find edges and not average them out. I'm not aware of such
    software. I could use it too. Anyone know of one?
    (Especially a free one with source code so I could compile
    it on a unix machine.)

    One method that works well for many images is to do a blur, then
    downsize, then unsharp mask. You could do find edges, invert
    to everything but the edges, then blur, then unselect and
    downsize, followed by unsharp mask.
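    I'm not aware of the requested package either, but the adaptive-averaging
    idea can be sketched in a few lines of numpy. This is a minimal Lee-style
    sigma filter (my own simplification, not any existing product; the radius
    and threshold are made-up parameters): each pixel is averaged only with
    neighbors whose values are close to its own, so flat areas are smoothed
    while edges are mostly left alone.

    ```python
    import numpy as np

    def sigma_filter(img, radius=1, thresh=10.0):
        """Average each pixel with neighbors within `thresh` of its value."""
        img = img.astype(float)
        acc = np.zeros_like(img)
        cnt = np.zeros_like(img)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                mask = np.abs(shifted - img) <= thresh  # exclude edge crossings
                acc += np.where(mask, shifted, 0.0)
                cnt += mask                              # self always included
        return acc / cnt

    # Noisy step edge: the flat halves get smoothed, the edge stays sharp.
    rng = np.random.default_rng(2)
    step = np.where(np.arange(64) < 32, 50.0, 200.0)
    img = np.tile(step, (64, 1)) + rng.normal(0, 3, (64, 64))
    out = sigma_filter(img, radius=2, thresh=12.0)
    ```

    A downsize after such a filter loses less edge detail than blur-then-downsize.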

    Roger N. Clark (change username to rnclark), May 3, 2005
  9. JPS (Guest)

    It is *very* different. A shot taken at ISO 100 4 stops under-exposed
    looks like absolute garbage compared to a shot taken at ISO 1600, on a
    camera that has 16x as much gain at ISO 1600 as ISO 100. You seem to be
    ignoring posterization, which often distorts the signal and creates far
    more noise than readout noise.

    There is nothing more important with a 12-bit A2D converter than trying
    to almost saturate the 4095 levels with the subject, even if it means
    going to a higher ISO to maintain desired aperture and shutter speed.

    If the current DSLRs had clean 18- or 20-bit converters, then what you
    are saying would be closer to the truth. 12-bit A2D conversion is
    grossly insufficient for ISO 100.
    JPS, May 3, 2005
  10. JPS (Guest)

    I've been playing around with RAW data for the last few months, at the
    bit level, and I couldn't agree with you more. The digitization
    hardware in DSLRs like the 20D is absolute garbage compared to the
    sensor quality. A total mismatch, and a waste of a very good sensor. The
    banding noise in the deepest RAW shadows is a real pain in the ass; if
    the noise were random with all patterns removed, it would give images
    that lose their noise dramatically when downsized or viewed small.
    Banding does not disappear when you shrink 20D images with banding in them.

    Hopefully, this is where we'll see future improvements. I'd like my
    next Canon DSLR to have a choice between RAW, and super-RAW (16-bit or
    more). I'd also like to see auto white balance that actually meant
    something; higher analog gain of the blue channel for tungsten, higher
    red for the shade, etc.

    We are being spoonfed.
    JPS, May 3, 2005
  11. Paul Rubin (Guest)

    Hmm, I wonder whether software postprocessing can take out the banding.
    Is that different from what AWB does now?! AWB on video cameras has worked
    that way since the dawn of time, I thought.
    Paul Rubin, May 3, 2005
  12. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)]
    Then do not argue when one uses it the way the camera manufacturers do:
    when you rotate the dial marked "ISO sensitivity", *something*
    changes. It makes much more sense to call *this* something "ISO
    sensitivity" - at least on this newsgroup. ;-)
    Anyway, your statement "1 photon = 1 electron" is not *only* "wrong";
    wrong statements are OK when they are simplifications which help
    understanding (as my scientific advisor would say, "TEACHING is
    CHEATING"). What is bad is that *in most contexts* your
    statement is plainly confusing...
    Sigh. What you count is electrons, not photons. And with current
    sensor assemblies, 1 photon corresponds to approximately 0.1
    electrons, with correspondingly higher counting noise.

    There *may* be situations where your argument/simplification makes
    sense (e.g., considering the same sensor with different
    amplifiers/ADC, etc). But *please* check whether the context matches
    this before repeating this statement again and again.
    Thanks. BTW, do you have a clear opinion on what full well is enough
    to provide "satisfying" images *if one does no postprocessing* (view
    things "as is")?

    I find it very hard to distinguish two images of 18% gray with S/N of
    luma above 25; this would lead to S/N of luminosity about 12, and full
    well about 800; but

    *) my eyes are not very typical,
    *) I did not try it with other brightnesses,
    *) I did not try it with other, "real", images.
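    For what it's worth, the arithmetic behind the ~800 estimate above can be
    written out (the gamma of 2 and pure-Poisson assumption are mine; the
    S/N of 25 and the 18% gray level are from the post):

    ```python
    snr_luma = 25           # just-distinguishable S/N in gamma-encoded luma
    gamma = 2.0             # assumed encoding gamma

    # For L = I**(1/gamma), relative noise in L is 1/gamma times that in I,
    # so the linear (luminosity) S/N is lower by the factor gamma:
    snr_linear = snr_luma / gamma          # ~12, as in the post
    electrons_at_18pct = snr_linear ** 2   # Poisson: S/N = sqrt(N)
    full_well = electrons_at_18pct / 0.18
    # ~870 here; rounding the linear S/N down to 12 as in the post gives
    # 144 / 0.18 = 800.
    ```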

    I want to be able to get some understanding along such lines: 52K full
    well is (e.g.) 60 times above the "comfortable" one, so it allows you to
    perform postprocessing which decreases signal/noise 7.5 times. (One needs
    to change the numbers to suit real eye sensitivity, not the numbers I
    provide.)
    Thanks for being so polite. Especially since this remark has no
    reference to what I wrote in the original posting (having the same
    sensor size, and considered *added* information).

    Hope this helps,
    Ilya Zakharevich, May 4, 2005
  13. When you rotate the iso dial on a camera, analog gain between
    the CCD/CMOS pixel and the A-to-D converter is changed.
    The sensor sensitivity is not changed.

    So how in quantum mechanics does one photon create 0.1 electron?

    Like I've tried to illustrate multiple times, it matters not what
    photons you lose, get absorbed by optics, pass through the sensor
    unchanged, whatever. It only matters what photons get converted to
    electrons, and it takes precisely one photon to create that electron
    in the CCD or CMOS pixel. Put a neutral density 3 filter over your
    camera so the throughput is 0.001 or whatever factor you want to
    use. Those photons absorbed do not matter. It simply takes you longer
    (longer exposure) to collect enough electrons (with 1 photon producing
    1 electron in the photosensitive pixel) to get to a decent signal-to-noise.

    By your idea, the signal-to-noise ratio is magically dependent on the
    original photons incident on the camera. You say the transmission
    of the camera's lenses and filters and the device QE, what you call the
    system QE, affects the signal-to-noise. Again, add the neutral density
    filters so the "system QE" is 10, 100, 1000, 10000000000 times lower. Now
    expose longer to compensate. You get the same number of electrons counted
    in each pixel. You get the same S/N. IT IS THE ELECTRONS YOU COUNT.
    The fact that you absorbed 99.999999999% of photons with ND filters
    doesn't matter. Only those that get counted matter. And for
    every electron in the pixel's well, 1 photon created it.
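    The ND-filter argument is easy to simulate (a Monte Carlo sketch with
    made-up counts): cut transmission 1000x, expose 1000x longer, and the
    electron statistics, hence the S/N, are unchanged.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    trials = 200_000
    electrons = 1000                 # expected electrons converted per pixel

    no_filter = rng.poisson(electrons, trials)
    # 0.001 transmission, 1000x longer exposure: same expected electron count.
    with_nd = rng.poisson(electrons * 0.001 * 1000, trials)

    snr_plain = no_filter.mean() / no_filter.std()
    snr_nd = with_nd.mean() / with_nd.std()
    # Both come out near sqrt(1000) ~ 31.6: the counted electrons set the S/N,
    # regardless of how many photons the filter absorbed.
    ```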

    This is a subjective call. Film is pretty noisy but produces great images.
    But as signal-to-noise drops, how much you can enlarge the image
    decreases. I would consider film's noise the minimum.
    Fine-grained film can be enlarged more than high-speed film.
    P&S cameras produce great images when exposed right.

    Roger N. Clark (change username to rnclark), May 4, 2005
  14. JPS (Guest)

    Noise reduction works some, but it does not reduce the root of the
    banding, which is offsets in the RAW data. Noise is just the "fuzz" on
    these offsets.
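    Since the banding is a set of row offsets rather than random noise,
    software can estimate and subtract it. A minimal numpy sketch on a
    synthetic frame (estimating each row's offset from its darkest pixels is
    my own simplification, not what any actual raw converter does):

    ```python
    import numpy as np

    def remove_row_banding(raw, dark_fraction=0.1):
        """Estimate each row's offset from its darkest pixels and subtract it.
        A toy approach; real converters must protect signal in bright rows."""
        raw = raw.astype(float)
        k = max(1, int(raw.shape[1] * dark_fraction))
        dark = np.sort(raw, axis=1)[:, :k]           # darkest pixels per row
        offsets = dark.mean(axis=1, keepdims=True)
        return raw - (offsets - offsets.mean())      # keep overall black level

    # Synthetic deep-shadow frame with horizontal banding (row offsets):
    rng = np.random.default_rng(4)
    frame = rng.normal(100, 2, (64, 64)) + rng.normal(0, 5, (64, 1))
    clean = remove_row_banding(frame)
    # Row-to-row offsets in `clean` are much smaller than in `frame`.
    ```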

    I haven't tried ACR 3 yet, but supposedly it addresses these offsets
    I don't know, but apparently Canon thinks that their sensors are so
    good, and their digitization so good, that you can let a single channel
    be represented by only a few bits.
    JPS, May 4, 2005
  15. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)]
    I did read this paper some time ago. It does not show a lot of insight;
    as usual in such matters, there is very little discussion of what one
    *wants* to achieve, just an observation that a "random" ad hoc tweak to
    the algorithm produces a "nicer looking" result...

    And it has NO RELATION to my question, which was about human eye
    response to the noise.

    Hope this helps,
    Ilya Zakharevich, May 4, 2005
  16. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)]
    Do you want me to give a lecture on quantum mechanics? Wrong forum.
    I see, you have a very naive understanding of quantum mechanics...
    Anyway, it does not matter much: the issue of QE can be addressed very
    well in classical terms.
    Use the same exposure, and the S/N value you measure after this is
    decreased 30 times. So you see that the number of photons needed to
    generate 1 electron matters.
    Put a 0.001 density filter over your sensor. Use a 1000 times longer
    exposure. You get 1000 times more photons in your sample, so the
    photon-count S/N is about 30 times higher. But the electron-count S/N
    remains the same.

    Puff! There goes your theory that photon noise is the same as
    electron noise.
    Absolutely. I'm asking for your subjective opinion, based on your
    personal experience. And everybody else is, of course, very welcome
    to add their opinions.
    As you have demonstrated many times, in this decade it does not make
    sense to use film as the gold standard of quality w.r.t. noise. But
    the human eye does not change this quickly, so it makes some sense to
    use the eye's response as a gold standard.
    If Velvia 50 gives S/N=20 for the luminance of 18% gray, it looks like I
    cannot reliably distinguish it from 0... This is for a pixel size of 6.3
    microns, while the dot pitch on my monitor is 260 microns; quite a lot
    of magnification. Maybe my eyes are not good enough...

    Ilya Zakharevich, May 4, 2005
  17. Ilya,
    You compare photon numbers in front of the 0.001 transmittance filter
    to photons converted to electrons in the sensor's pixel.
    This is completely absurd and is totally irrelevant to
    the signal-to-noise that the sensor is capable of achieving.
    An electronic sensor converts 1 photon into one electron
    in the potential well. It only matters what photons are converted,
    not what gets absorbed/lost external to the photosensitive area.

    Roger N. Clark (change username to rnclark), May 4, 2005
  18. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)]
    Yes, I followed the model you proposed to show that QE does not
    matter. Your model shows that it *does* matter, and the photon
    Poisson noise is not equal to the electron Poisson noise. Thus using
    these terms *interchangeably* is not suitable; it leads to *confusion*.
    In your setting I agree that the S/N ratio *the sensor* is able to
    achieve is determined by the number of electrons generated in a cell.
    However, as I have said many times, if you replace "the sensor" by "a
    sensor", this statement becomes wrong as far as the *photographer's*
    point of view is concerned:

    take two sensors with different QE; make two shots with the same exposure.

    You get different noise. So, from photographer's point of view
    ("artistic value" being the main issue), these sensors have different
    S/N ratio at the same exposure.

    If the full well capacities of these two sensors are the same, then,
    as you say, the noise can be made equal by compensating the exposure.
    From the photographer's point of view, this means that these sensors
    may be used to provide the same "artistic value" at different ISO
    settings.
    I repeat it again: most photons are converted to 0 electrons. This
    makes the electron Poisson_noise/signal much larger than photon
    Poisson_noise/signal. [This discussion goes in circles, so feel free
    to cut it off.]

    Hope this helps,
    Ilya Zakharevich, May 5, 2005
  19. You are not using "my model." See below.
    A sensor can be a one element sensor, same as "the sensor." I still
    work with 1-pixel systems. It has 1 sensor. The sensor is one pixel.
    Hallelujah!!!! We've finally made progress. This is what I have been
    trying to tell you for a long time.
    I agree.
    I agree.
    Yes, I agree, but when a photon IS converted, it generates one electron
    in CCDs and CMOS sensors.
    This is where the confusion is. When I say photon noise I mean those
    photons that are converted to electrons. In the electronics
    industry, in astronomy, and I'm sure other industries, photon
    counting devices are commonly referred to as, well, photon counting
    devices. By your nomenclature, they are not counting photons, but
    electrons, and you would say most photons are not counted (most
    devices have low QE, and even lower optical transmission * device QE).
    But the electronics/astronomy fields do not use your nomenclature.
    I know who you mean, and while a legitimate point of view, it is not
    the standard nomenclature in use. When I say photon counting, it
    refers to the photons that get converted to the electronic signal.
    And it is that generated signal we measure and which ultimately
    influences the signal-to-noise ratio that can be gotten from that system.

    Roger N. Clark (change username to rnclark), May 5, 2005
  20. I do not have banding problems with my 10D or 1D Mark II. The test
    20D dark frames posted by astrophotographers do not seem to show
    banding, at least the ones I've seen. Your camera may have a problem
    and might need fixing.
    Take a look at ImagesPlus; it has some tools for this.

    Roger N. Clark (change username to rnclark), May 6, 2005
