The Joy of Pixel Density

Discussion in 'Digital Photography' started by John P Sheehy, Jul 15, 2008.

  1. Bob Newman Guest

    This isn't exactly the case. That source follower is an amplifier, but
    it's amplifying the current, not the voltage. As it happens, that's
    pretty much equivalent. The charge/voltage gain is supplied by the
    capacitance, as Eric explained. However, since it still only contains
    the photo (and shot noise) electrons, its ability to drive the
    downstream stages is not great. The source follower amplifies the
    current, introducing some noise along the way. I think that noise is
    voltage dependent, not charge dependent, but I could be wrong.
    There are circuits that can do this, by changing the cell capacitance,
    either by switching in and out capacitors or changing the DC bias on
    the cell, and I was informed by one DPR correspondent that Canon has a
    patent on this. However, I'm pretty sure no current camera does it,
    and Eric seems to confirm.
    I don't think we've positively established that the front end read
    noise is truly electron referenced. I would like to see an explanation
    of how it is so. The classical electronics says it isn't, I think we
    have to look at the device physics and quantum effects to understand
    more (i.e. what is the noise contribution of a single electron charge
    on the gate of the source follower)
    Bob Newman, Jul 20, 2008

  2. Which suggests that you /do/ care about the pixel density....

    David J Taylor, Jul 20, 2008

  3. ejmartin Guest

    Yes, but it's neither here nor there. Just treating the sensor as a
    black box and running dual amplifications from the output as I have
    described in a previous post, one obtains 14 stops of pixel DR. So
    unless there is a flaw in that scheme, the result is tantamount to the
    assumption that the sensor read noise is 4 electrons at all ISO, since
    it results in the same overall system DR as one would get from a
    sensor with 14 stops DR and a downstream processing chain that was not
    limiting that DR.

    But I didn't have to make any assumption that read noise is electron
    referred to get the extra two stops of DR, I just used measured
    properties from current DSLR's together with the assumption that ISO
    amplification is not implemented at the photosite (so that the output
    from the sensor could be run through parallel amplification channels
    of different gain). If ISO is implemented at the photosite, one could
    use the ability to nondestructively read the sensor twice and process
    sequentially with the two different gains, with much the same result
    (though at a cost in frame rate due to the requirement of sequential
    rather than parallel processing).
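    The dual-amplification argument above can be checked with a quick
    numeric sketch. The full well, sensor read noise, and downstream noise
    figures below are round-number assumptions for illustration, not
    measurements from any real camera:

    ```python
    import math

    # Numeric sketch of the dual-amplification scheme (all numbers are
    # illustrative assumptions, not measured camera values).
    FULL_WELL = 66000        # assumed full-well capacity, electrons
    SENSOR_READ = 4.0        # assumed sensor-level read noise, electrons
    ADC_NOISE_E = {1: 26.0, 16: 0.5}   # assumed downstream noise, referred
                                       # to the sensor output, per gain

    def captured_dr_stops(gains):
        """DR when each pixel is taken from whichever gain channel is
        unclipped: low gain preserves highlights up to FULL_WELL, high
        gain gives the lowest effective noise floor in the shadows."""
        floor = min(math.hypot(SENSOR_READ, ADC_NOISE_E[g]) for g in gains)
        return math.log2(FULL_WELL / floor)

    print(round(captured_dr_stops([1]), 1))      # single channel, DR limited downstream
    print(round(captured_dr_stops([1, 16]), 1))  # dual amplification recovers sensor DR
    ```

    With the assumed 4 e- sensor read noise, the high-gain channel lifts
    the shadows clear of the downstream noise and the combined result
    comes out at about 14 stops, matching the figure quoted above.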
    ejmartin, Jul 20, 2008
  4. ASAAR Guest

    Which only shows that you haven't been listening.

    It seems that you have a rare talent to make your misstatements
    less visible by muddying the water through misinterpretation. I
    don't want you to compare any two cameras that I or anyone else
    mentions. I only gave a couple of examples that disproved your
    statement that such similar pairings did not exist. You replied
    that the D40 and D40x couldn't be considered because they have
    completely different types of circuitry. Please tell us what these
    differences are. Will you avoid this question again? Then explain
    why the FZ50 and 5D (oops, sorry, 400D) can be fairly compared. I
    could be mistaken, but it seems likely that they use types of
    circuitry that would be at least as inappropriate for comparison as
    the D40 and D40x.

    I've already given you more than you deserve. It's up to *you* to
    select a camera that's appropriate for your comparisons. It can
    even be an old, used, very inexpensive camera. If you had a dollar
    for each of the hundreds of hours you've been beating this dead
    horse here and on DPR, you could have purchased several of these
    cameras by now. I wouldn't suggest trying to get the money by
    fundraising or begging for dollars because by your insults and tone
    (those that disagree with you are stupid), you've alienated what
    might have been your potential base.

    They're not, see above. It's amazing how you can be wrong so
    often yet not recognize that there might be a problem. Especially
    since you've already used the term that explains the most likely
    reason - cognitive dissonance.
    ASAAR, Jul 20, 2008
  5. Bob Newman Guest

    Ever since I joined this long discussion on DPR those months ago, I've
    been puzzled by this read noise thing, and especially by the reference
    to electrons. Every way I look at it 'read noise' looks to me to be
    voltage referred. If you'll excuse a small treatise on the subject, it
    might be interesting. If people can pick holes in it, then I'll learn.
    If not, it suggests that you are essentially right.

    OK. The canonical analog processing chain, for both CMOS and CCD, has
    a source follower front end, which current-amplifies the voltage on the
    pixel, which derives from the cell charge according to the well known
    V = Q/C. Being a mosfet, the input current is essentially zero, since
    the gate is insulated. This is true unless scales are reduced such
    that quantum tunneling of the electrons through the gate becomes
    significant. I think, unless shown otherwise, that image sensor
    geometries are far away from that (lateral evidence would be that
    flash memory uses gate geometries much smaller and manages to retain
    gate charges of a few electrons for years). Sometimes, the source
    follower will be followed by one or more stages of current or voltage
    amplification (the Kodak reference circuits show an emitter follower
    to provide further current amplification, followed by whatever voltage
    amplification is needed to produce the required full scale input for
    the ADC system). Anyway, let's call the noise produced by the source
    follower and any subsequent fixed gain amplifier the 'front end read
    noise', Nf

    The source follower is followed by one or more stages of voltage
    amplification and one or more stages of programmable gain
    amplification. Let's call these 'middle read noise, Nm'.

    Finally, we have the ADC system, which generally consists of a sample-
    and-hold (for correlated double sampling), an amplifier, and the ADC.
    Let's call this the 'back end read noise', Nb.

    Assuming that all the three noises are produced by a single stage of
    amplification, without overall feedback (which isn't always the case)
    and that all the voltage gain is in the ISO gain stage (also a
    simplification, but not one which affects the following argument) then
    the 'read noise' recorded by the ADC is Gi*(Nf +q Nm) +q Nb. (where +q
    is shorthand for adding in quadrature) This assumes that the variable
    gain amplifier is a well designed feedback controlled amplifier, and
    its noise is somewhat independent of gain.

    Firstly, why does 'read noise' reduce with ISO? Of course, in reality
    it doesn't, but it appears so if we relate it to the photoelectrons in
    the sensel. To reference the read noise to electrons, we need to take
    into account the charge/voltage gain, which is given by the sensel
    capacitance (Cs), the charge of an electron (Qe) and the voltage gain
    of the chain, so the electron referred noise is (Cs/(Gi*Qe))*(Gi*(Nf +q
    Nm) +q Nb). If we re-arrange that we get (Cs/Qe)*(Nf +q Nm +q Nb/Gi).
    So we can see, if we want to 'electron refer' the read noise, we
    divide the back end noise by the gain, which is higher at high ISO's.
    If that gain is high enough, the back end noise becomes insignificant.
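    As a sanity check on the rearranged expression, here is a small
    numeric sketch; the sensel capacitance and the three noise voltages
    are assumed round numbers, not device data:

    ```python
    import math

    # Sketch of the formula above: electron referred read noise is
    # (Cs/Qe)*(Nf +q Nm +q Nb/Gi).  All values are illustrative
    # assumptions, not measurements.
    QE = 1.602e-19          # electron charge, C
    CS = 2.0e-15            # assumed sensel capacitance, F
    NF, NM, NB = 80e-6, 40e-6, 500e-6   # assumed noise voltages, V rms

    def read_noise_electrons(gi):
        # back end noise Nb is divided by the ISO gain Gi before the
        # quadrature sum, then the total is referred back through Cs/Qe
        v = math.sqrt(NF**2 + NM**2 + (NB / gi)**2)
        return CS * v / QE

    for gi in (1, 4, 16):   # e.g. ISO 100, 400, 1600
        print(gi, round(read_noise_electrons(gi), 1))
    ```

    The electron referred figure falls as Gi rises, purely because the
    fixed back end noise Nb is divided by the gain, which is exactly the
    'read noise reduces with ISO' effect described above.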

    Back to pixel size and read noise. The sensor measurers have
    established a standard practice of measuring read noise in electron
    equivalents, as though they were noise in the pixel itself. This means
    'passing' the noise 'backwards' through the charge/voltage converter,
    which is the cell capacitance. This must mean that the electron
    referred read noise depends on the cell capacitance, which will mean
    it tends to reduce as pixel sizes reduce. If this is the case, it
    removes the argument that small pixels contribute more read noise per
    unit area.

    Both Roger Clark and Emil Martinec claim that electron referred read
    noise is essentially independent of pixel size (and therefore cell
    capacitance). I haven't seen any theory to support that, and I don't
    think that Roger Clark's measurements, taken as a whole, support it
    either. You can find pairs of sensors to compare which suggest any of
    three conclusions (small more noise, no dependence, large more noise)
    but overall the picture is pretty mixed. To my eyes the biggest
    correlations with read noise would be price, manufacturer and date of
    introduction. In any case, so far as the downstream electronics are
    concerned, the conversion argument above must hold, essentially the
    electron referred read noise is scaled by the charge/voltage gain,
    which in turn is dictated by cell capacitance. That just leaves the
    source follower noise. I cannot find anything to suggest that will be
    any different. Again, the noise in this stage is essentially voltage
    noise, which must be scaled by the cell capacitance. The noise of a
    mosfet is explained in the source I gave; looking at its equation 4, it
    can be seen that this is scale independent - that is, the front end
    noise will not scale up to compensate for the capacitance scaling
    argument I have described.

    So, I believe that there is a good argument that read noise, when
    electron referred, scales down with pixel size. Why isn't this observed
    in practice? So far as one can determine a pattern, I believe that
    this is dictated by design practice as much as by any underlying
    physical reason. Partially, it could be that imager designers are also
    looking at things at a pixel level (they certainly would have been when
    designing CCD's for astronomy and similar uses), and standard practices
    become embedded. If your photocell design only has 10 stops of dynamic
    range, why design for very low read noise, particularly if it makes
    things more expensive? There is only a reason to do so if you realise
    that a sensor will often be used in a downsampled mode. In essence, if
    electron referred read noise is approximately constant, it is because
    it has been designed to be so. If someone wants to think out of the
    box, as you have, there is nothing to stop it scaling down with pixel
    area, which will exactly compensate for the aggregation of read noise
    from several pixels.
    Bob Newman, Jul 20, 2008
  6. Bob Newman Guest

    We agree that in real, present day cameras ISO doesn't occur in the
    photosite. In fact, it doesn't really occur at all, what's happening
    is that the capture system is being finagled to best read the bit of
    the sensor DR you're actually interested in. As you point out, with a
    better capture system, no need for ISO. What I was correcting was your
    statement that no amplification occurs in the photocell of an active
    pixel sensor. It does (although current rather than voltage
    amplification) and its noise is probably the irreducible minimum read
    noise. Luckily, if electron referred, it scales down with cell
    capacitance, and therefore area (unless you or someone shows otherwise).
    Bob Newman, Jul 20, 2008
  7. John Sheehy Guest

    A problem I predict with your system is noise-contouring. I'm not quite
    sure I'm ready to look at pulled-up shadows where the read noise at one
    level 4.5 stops below saturation is 4 to 7 times as high as a stop below
    that level. I'm not a big fan of current HDR; I use it when necessary,
    but it is always a hack. If the photosite can be read multiple times
    non-destructively, why not just read it multiple times and add the
    results together, in an extended-DR mode? You could tell the camera how
    many times to read, depending on the amount of time you can allot between
    shots, and it would add them in a 16-bit or deeper buffer. That would
    give a natural SNR curve with no contouring bands in the noise. Or, if
    you can't afford the extra read time and you want different gains, do a
    spread of 3 or 4 and blend them into a better curve.
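    The multi-read idea can be sketched as follows, under the simplifying
    assumptions that the read noise is white and independent between reads
    while the photon signal (and hence its shot noise) is common to all of
    them; the numbers are illustrative, not from any camera:

    ```python
    import math

    # Sketch of the extended-DR multi-read mode: average N non-destructive
    # reads of the same exposure.  Shot noise is identical in every read;
    # only the read noise averages down.
    READ_NOISE = 8.0   # assumed read noise per read, electrons

    def snr(signal_e, n_reads):
        shot = math.sqrt(signal_e)               # same photons in every read
        read = READ_NOISE / math.sqrt(n_reads)   # independent, averages down
        return signal_e / math.hypot(shot, read)

    sig = 100.0  # deep-shadow signal, electrons
    for n in (1, 4, 16):
        print(n, round(snr(sig, n), 2))
    ```

    The SNR improves with more reads, but only while the shadows are
    read-noise limited; once shot noise dominates, extra reads buy almost
    nothing.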

    4 stops below saturation at ISO 100 (which would be on the upper side of
    your blend, since the top level of the blend must be comfortably below
    ISO 1600 saturation) still results in things like chromatic noise in
    solid colors in the best DSLRs, unless you have something like a 1Ds3
    where it dissolves in the resolution, or print small.

    John Sheehy, Jul 20, 2008
  8. ejmartin Guest

    OK, perhaps one should separate two potentially distinct issues:

    1. How does pixel read noise scale with pixel pitch?

    2. Are John Sheehy's claims that FZ50 pixels are, per area, better
    performers than 1D3 pixels, supported by the evidence?

    On question 1, I'll have to think about your little treatise. One
    thing confuses me, however, and that is your lumping the noise of the
    programmable gain amplifier into the "middle read noises" Nm. This
    means that they get multiplied by the amplifier gain Gi in your
    formulae; but then you say that "This assumes that the variable gain
    amplifier is a well designed feedback controlled amplifier, and its
    noise is somewhat independent of gain." But in effect, according to
    your expression, the noise output by the amplifier attributable to the
    amplifier itself IS proportional to the gain, since it's Gi Nm in your
    expression. So I'm perplexed as to your meaning there.

    As to question 2, we agree that, for current DSLR's, the downstream
    circuitry is limiting the DR. A dual amplification scheme recovers
    that DR (BTW, Eric Fossum said there might be issues with CDS; do you
    know what he was referring to?), and so the comparison between the
    FZ50 to the DSLR should be made using that fully recovered DR, not
    using the limited DR of fixed ISO. Because, while the FZ50 does do
    better than the 1D3 numbers at low ISO, it is about a stop and a half
    worse in DR per area than the 1D3 with its sensor DR fully recovered.
    So, on that basis, I think John's claims are incorrect, though it is
    surprising how favorably the FZ50 pixels compare on a per area basis
    with *currently realized* DSLR's such as the 1D3.

    There may be in some hoped-for future a means of lowering the small
    pixel read noise to about 1 electron (input referred), which is not
    simultaneously available for bigger pixels; perhaps the reason will be
    the sort of capacitance arguments you have put forth. At that point,
    small pixel DR on a per area basis will equal that of the 1D3's fully
    realized sensor DR, and small pixels will be competitive on SNR and
    DR. But there is no such pixel like that among current examples.
    ejmartin, Jul 20, 2008
  9. ejmartin Guest

    The fact that the red ISO 1600 and blue ISO 100 SNR curves (see the
    below-linked figure) are quite close to one another at the upper end
    of the ISO 1600 curve's range says that read noise is quite small
    relative to photon shot noise in that range. So while the ISO 100
    read noise is about 6 times larger than its ISO 1600 read noise in
    absolute terms, this is still a negligible contribution to overall
    noise, otherwise the SNR curves wouldn't be close to one another. And
    if that small difference is really bothersome, we can use ISO 800
    instead (shown in black on the linked figure below) to get a truly
    seamless match of SNR at the crossover, with only a slight penalty in
    SNR in the very lowest stops of EV:

    Since read noise is so subdominant a component of total noise, there
    won't be any "noise contouring" at least from the component of read
    noise that is white and gaussian. I think the only issue along these
    lines, so to speak, will be pattern noises. I should probably do some
    sample blends to see if this is going to be an issue; I doubt it,
    since pattern noise is so well controlled on the 1D3, and I certainly
    haven't found any situation where it affected IQ in the tonal range
    4-5 stops down from saturation at ISO 100.
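    A minimal per-pixel sketch of the blend logic described above, with an
    assumed crossover threshold (values normalised to ISO 100 full scale;
    the threshold is an illustration, not a measured clip point):

    ```python
    # Per-pixel blend of two captures of the same scene through different
    # gains: the high-gain sample has the lower read noise but clips
    # early, so it is used only below the assumed crossover.
    CROSSOVER = 1.0 / 8    # fraction of ISO 100 full scale (assumed)

    def blend(lo_gain_px, hi_gain_px):
        """Use the low-read-noise high-gain sample in the shadows and the
        unclipped low-gain sample in the highlights."""
        return hi_gain_px if lo_gain_px < CROSSOVER else lo_gain_px

    print(blend(0.50, 0.51))   # highlight: low-gain channel wins
    print(blend(0.01, 0.012))  # shadow: high-gain channel wins
    ```

    With the SNR curves matched at the crossover, as in the ISO 800 case
    above, the switch between channels leaves no visible seam.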
    ejmartin, Jul 20, 2008
  10. ejmartin Guest

    Oops, I hit the send button before I was ready...
    Because of the only sqrt increase of the SNR as a function of the
    number of reads, that is a very inefficient way to improve SNR.
    Yes, the appearance of noise is somewhat different with big pixels vs
    small pixels, due to the spatial frequency where it has its support.
    But now we're starting to talk about the tradeoffs of big pixels
    (better SNR) vs small pixels (better resolution), aren't we?
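    That sqrt behaviour can be made concrete: assuming the shadows are
    read-noise limited, the SNR gain from averaging N reads is
    log2(sqrt(N)) stops, so every extra stop costs a factor of four in
    reads and readout time:

    ```python
    import math

    # Under the assumption that the shadows are read-noise limited,
    # averaging N reads scales SNR by sqrt(N); in stops that is
    # log2(sqrt(N)), i.e. each extra stop needs 4x the reads.
    def snr_gain_stops(n_reads):
        return math.log2(math.sqrt(n_reads))

    print(snr_gain_stops(4))    # 4 reads buy one stop
    print(snr_gain_stops(64))   # 64 reads buy three stops
    ```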
    ejmartin, Jul 20, 2008
  11. Ray Fischer Guest

    No, it says that they care about the number of pixels. Notice that you
    didn't ask about pixel density; you asked about the number of pixels.

    That should have been a clue.
    Ray Fischer, Jul 20, 2008
  12. Bob Newman Guest

    That's fair enough. The problem is, when it gets in the way of people
    conceptualising what's actually happening. In this case, the photo-
    electron referred noise figure is distinctly unhelpful in working
    one's way through the actual noise sources, and the variously
    amplified versions of them which appear in the final signal. One of
    your major criticisms of John's stuff is based on an assumption that
    'read noise', referred to photo-electrons, remains approximately
    constant. I cannot see why this should be so, since the translation
    from what it is (a noise current in the amplifier stage) to apparent
    photo-electrons must be due to the cell capacitance, which is at least
    loosely related to cell area. The real truth is that a lot of the
    assumptions are based on observations of sensors which are not things
    that occur naturally in inevitable configurations. What you are
    observing is the result of conscious design choices; to use them as
    evidence of an inevitable trend is hardly 'scientific'.
    Bob Newman, Jul 20, 2008
  13. ejmartin Guest

    Carrying this analysis a step further, can we assume that Nf is
    thermal noise? Then <V^2>=kT/C, and so at high gain (thus dropping
    the effects of Nb) one has

    (Cs/Qe)*(Sqrt[kT/Cs] +q Nm +q Nb/Gi)

    Cs should be proportional to the collection area, as this gets
    asymptotically small the input-referred noise should scale according
    to this formula as the sqrt of the collection area, ie with the linear
    size of the pixel. Actually it would decrease somewhat faster than
    that, for a given level of technology the size of the support
    electronics is fixed and the collection area will decrease *faster*
    than linearly with the pixel spacing. We can make the input referred
    read noise as small as we want if we let the photosensitive area go to
    zero.

    If the collection area is Ac and the support electronics occupies Ae,
    and the pixel spacing is d, one has d^2=Ac+Ae. The FWC goes as Ac,
    the read noise as sqrt[Ac], and the DR per area is (see above post)

    DR/area ~ const * Ac/(sqrt[Ac] * d) ~ const * sqrt[1-(Ae/d^2)]

    So with these assumptions -- fixed area requirements for support
    electronics -- DR per area goes down as the pixels are shrunk. One
    can only decrease pixel spacing and maintain DR per area if the
    support electronics shrinks in proportion to the pixel size, which
    makes a lot of sense.
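    The scaling conclusion above can be tabulated under the same
    assumptions (fixed support-electronics area Ae per pixel, FWC
    proportional to Ac, read noise proportional to sqrt(Ac)); the Ae value
    is an arbitrary illustration:

    ```python
    import math

    # Sketch of the scaling argument: pixel pitch d, fixed support area
    # Ae, collection area Ac = d^2 - Ae, so relative DR per area is
    # sqrt(Ac)/d = sqrt(1 - Ae/d^2), per the formula above.
    AE = 2.0  # um^2, assumed support-electronics area per pixel

    def dr_per_area(d_um):
        ac = d_um**2 - AE
        if ac <= 0:
            return 0.0                       # no collection area left
        return math.sqrt(1 - AE / d_um**2)   # relative to a 100% fill pixel

    for d in (8, 4, 2):   # pixel pitch in um
        print(d, round(dr_per_area(d), 3))
    ```

    Relative DR per area falls as the pitch shrinks toward sqrt(Ae), at
    which point the collection area, and the DR with it, vanishes.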

    One might also worry that there are input referred noises that might
    not scale. Can we be sure that there are no constant sources of input-
    referred noise, for instance noise in the 4T arrangement that reads
    out the photoelectron count?
    ejmartin, Jul 20, 2008
  14. Paul Furman Guest

    Could someone elaborate on this please? It seems critical to this
    question. Which 'catches faster'? I've never heard of that. Are there
    different depths or it's just widths?

    Paul Furman

    all google groups messages filtered due to spam
    Paul Furman, Jul 21, 2008
  15. Paul Furman Guest

    Is the claim that smaller pixels are more sensitive to shadow detail (at
    low ISO) because large pixels' shadow detail is swamped by read noise,
    since it is represented by such low numbers, while small pixels' shadow
    detail is represented by fairly high numbers with less pollution from
    the read noise (but more shot noise)?

    Paul Furman

    all google groups messages filtered due to spam
    Paul Furman, Jul 21, 2008
  16. You said: "People care about noise per pixel, not noise per sensor area".
    Now you say "they care about the number of pixels". Which is it to be? I
    was careful to specify the same size sensor.

    Although some would choose 6MP, I bet a majority would choose 12MP, which
    suggests to me that either marketing has succeeded, or that they really do
    prefer resolution over noise.

    David J Taylor, Jul 21, 2008
  17. Bob Newman Guest

    Yes, mainly thermal noise, but noise in the mosfet channel, not the
    cell capacitance. Obviously there is some of that too, but I think
    it's usually agreed not to be significant. And it's not read noise.
    No, the first noise term is given by mosfet noise equations, as the
    source I gave. As well as the channel mobility (which is adjustable by
    doping) this turns out to be proportional to the width/length ratio
    (long thin transistors are quieter than short fat ones). This is in
    the limit a constraint for small pixels, since the process geometry
    determines just how long and thin you can make your fet, and of course
    the longer you make it, the bigger the gate capacitance and the less
    the cell capacitance scaling effect.

    Got to go to a meeting now! Follow up the rest later.
    Bob Newman, Jul 21, 2008
  18. Steve Guest

    Or, that for an APS-C sensor size, you're in the area of the S/N vs.
    Pixel Density curve where it's still relatively flat. So you don't
    give up all that much S/N when moving from 6 to 12MP for an APS-C DSLR
    like you would for a 1/1.8 pocket camera, which has about 10 times
    less sensor area.

    Steve, Jul 21, 2008
  19. Bob Newman Guest

    I really do agree it's impressive, and a very useful body of work.
    However, I think you need to understand the limitations of such data,
    and how far you can stretch it in terms of analysis of trends from it.
    For instance, I have had your site quoted at me as irrefutable
    evidence that the 5D has a higher FWC than the D3. Certainly, most of
    your graphs show the 5D outperforming the D3. Does it? I don't know,
    but some of your figures are taken from Peter Facey, yet in his direct
    comparison, the D3 has a higher FWC. Because of the constraints, these
    figures are not exact, and it's fruitless to talk about
    interpretations which are in the noise, unless you can average a very
    large number of such datapoints to reduce the noise ;-)
    I believe there is a weak correlation in theory (in that input referred
    read noise is clearly scaled by the cell capacitance, which is weakly
    related to cell area). The best you can say is that there is no
    evidence in your data to support that, not that there is no correlation.
    OK, but it's not particularly illuminating when we get to the level of
    discussion we're at now.
    I don't think anyone is arguing about the science (apart from my read
    noise scaling thing, which might have been dealt with by Emils
    response, but I think not). It's the engineering consequences of that
    science. Everyone is at least agreed that photon shot noise is, in the
    end, an area based thing, not a pixel based thing. Where the
    disagreement lies is whether engineering solutions may exist which
    allow you to utilise that fact to make a camera which can both produce
    high resolution and, when you want it, low noise at smaller
    magnifications. In most of the applications you deal with, the
    engineering constraints are very different. For instance, spacecraft
    rarely use processor designs less than 20 years old, and data
    processing is at a premium. An imaging system design which produces
    humungous amounts of data, but requires a lot of signal processing to
    release low noise images from it, is hardly ever going to be a goer.
    By contrast, consumer cameras use commodity, Moore's law empowered
    processors and memory. Often, piling in processing can be a better
    solution than fancy, high-cost hardware.
    Bob Newman, Jul 21, 2008
  20. Bob Newman Guest

    Bob Newman, Jul 21, 2008
