The Joy of Pixel Density

Discussion in 'Digital Photography' started by John P Sheehy, Jul 15, 2008.

  1. John P Sheehy

    Bob Newman Guest

    Something that strikes me about this is that Canon's fab line is now
    quite old. Presumably, the new one is not simply a duplicate of the
    old one, and can work at finer geometries. There's a wealth of
    interesting stuff in Eric's workshop papers, and one of the things
    that also seems to be a myth is that CMOS sensors are not very
    demanding process-wise. That might be true if they don't want to
    explore the limits of pixel size/noise, but if they do, they need fine
    drawing. Maybe that's an advantage the newer MOS lines (Panasonic,
    Sony, IBM/Kodak, Toshiba) have. The 52MPix sensor seems to have some
    features that look like space-saving measures (like the 4T 4S
    architecture) - maybe Canon has to do that because of the limitations
    of its current fab.
     
    Bob Newman, Jul 23, 2008

  2. John P Sheehy

    ejmartin Guest

    ejmartin, Jul 23, 2008

  3. John P Sheehy

    John Sheehy Guest

    (Ray Fischer) wrote:

    I haven't even told you what my weak argument is yet.



    --
     
    John Sheehy, Jul 23, 2008
  4. John P Sheehy

    Bob Newman Guest

    Yup, and look at figure 5. There is predictable behaviour there that
    could be used to identify the flicker noise and process it out - real
    'noise reduction'.

    One of the other papers is about 'backside illuminated sensors' -
    presumably they need to be made of Gallium Arsenide :D
     
    Bob Newman, Jul 23, 2008
  5. John P Sheehy

    ejmartin Guest

    Why do you say it's predictable? In terms of the output yes, but I
    thought the point was that you couldn't predict when the interface
    will trip into the "excited" state, which is why there is a 1/f random
    noise spectrum.
     
    ejmartin, Jul 23, 2008
  6. John P Sheehy

    Bob Newman Guest

    You can't detect 'when', but you can detect 'if'. Take multiple
    samples, exclude the high ones, and the noise is gone. This might take
    longer, but it could be a user choice - fast/noisy, slow/quiet.
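
    Bob's multiple-sampling idea can be sketched numerically. In this toy
    example (all numbers are made up for illustration), a pixel is read
    eight times; reads where the interface trap happened to be in its
    "excited" state carry a spurious offset, and averaging only the lowest
    readings recovers something close to the true signal:

```python
def clean_read(samples, n_keep):
    """Average only the lowest n_keep readings; reads where the interface
    trap fired carry a spurious positive offset and are excluded."""
    return sum(sorted(samples)[:n_keep]) / n_keep

# Hypothetical repeated reads of one pixel (true signal = 100 units);
# two of the eight reads caught the trap in its 'excited' state (~+25).
reads = [99.0, 101.0, 100.0, 125.0, 98.0, 102.0, 126.0, 100.0]

naive = sum(reads) / len(reads)       # 106.375: biased upward by the RTS events
robust = clean_read(reads, n_keep=4)  # 99.25: close to the true signal
```

    As Bob says, the cost is read time - eight reads per pixel instead of
    one - hence the fast/noisy vs slow/quiet user choice.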
     
    Bob Newman, Jul 23, 2008
  7. John P Sheehy

    Bob Newman Guest

    ROFLMAO
     
    Bob Newman, Jul 23, 2008
  8. John P Sheehy

    John Turco Guest


    Hello, Roger:

    Look at the bright side: this current controversy has hastened your
    return, at least.

    Welcome back, doc! <g>


    Cordially,
    John Turco
     
    John Turco, Jul 25, 2008
  9. John P Sheehy

    ejmartin Guest

    ejmartin, Jul 27, 2008
  10. John P Sheehy

    John Sheehy Guest

    Your result is probably a tad better than what you would get with two-
    channel amplification, because the two shots carry different shot-noise
    realizations, whereas a dual readout of a single exposure would see
    exactly the same ones. As a result, your simulation has a 1/2-stop
    "notch" in the shot noise where the ramped blend draws equally from
    both images.
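
    The 1/2-stop figure falls out of the noise algebra: two separate
    exposures contribute independent shot-noise realizations, which add in
    quadrature when blended, whereas a dual readout of one exposure sees
    the identical realization twice. A sketch of the arithmetic (pure
    noise algebra, no image data):

```python
import math

sigma = 10.0  # shot noise (electrons) of a single exposure at some signal level

# Dual readout of the SAME exposure: both channels see the identical photon
# realization, so a 50/50 blend keeps the full shot noise.
dual_readout = sigma

# Two SEPARATE exposures: independent realizations add in quadrature.
two_shots = math.sqrt((sigma / 2) ** 2 + (sigma / 2) ** 2)  # = sigma / sqrt(2)

advantage_stops = math.log2(dual_readout / two_shots)  # 0.5 stop
```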

    --
     
    John Sheehy, Jul 27, 2008
  11. John P Sheehy

    Ray Fischer Guest

    And we could improve the fuel efficiency of a car by eliminating the
    transmission and just having a combined 2nd and 5th gear.

    I suspect that the key problem is that it's not possible to read out
    the iso1600 image while continuing to expose the iso100 image.
     
    Ray Fischer, Jul 27, 2008
  12. John P Sheehy

    John Sheehy Guest

    (Ray Fischer) wrote:

    There is only one exposure in his scenario; it is simply read out in
    two parallel channels with different gain. The high-gain channel would
    clip away the top 4 stops of the signal.
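
    The "4 stops" follows directly from the gain ratio between ISO 100 and
    ISO 1600: a factor of 16 in gain is log2(16) = 4 stops, so anything
    the high-gain channel amplifies past the ADC's full scale is lost. A
    sketch of the arithmetic (the full-scale value is an arbitrary
    assumption):

```python
import math

full_scale = 65536   # hypothetical ADC full scale, in low-gain units
gain_ratio = 16      # ISO 1600 vs ISO 100 readout gain

# Largest signal the high-gain channel can represent before clipping:
high_gain_clip = full_scale / gain_ratio

stops_clipped = math.log2(full_scale / high_gain_clip)  # 4.0
```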

    In his simulation, there are only two different exposures because he had no
    choice in the matter.



    --
     
    John Sheehy, Jul 27, 2008
  13. John P Sheehy

    Ray Fischer Guest

    Then you have a noisy image at ISO 100. Not all the noise comes from
    the amplifier. Much (maybe most) comes from the sensors themselves.
    With few photons, and thus few electrons to read out, there is more
    sensitivity to randomness.

    Of course you can increase the size of the sensors. Making them much
    bigger improves their sensitivity and reduces noise at low light
    levels. But the drawback there is you end up with a 2MP camera.
     
    Ray Fischer, Jul 27, 2008
  14. John P Sheehy

    ejmartin Guest

    One of the points of the demonstration is that, at low light levels at
    ISO 100, most of the noise *does* come from the amplifier. If it
    weren't so, then there wouldn't be any difference with the ISO 1600
    image (shot with the exact same shutter speed and aperture) blended
    in. Instead, there is a dramatic improvement. Note also that most of
    the nasty line noises are an artifact of the amplifier, and disappear
    to a large extent in the blended image.
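
    Emil's point can be restated with a simple noise model: shot noise and
    input-referred read noise add in quadrature, and at low gain the read
    noise term dominates the deep shadows. The read-noise numbers below
    are illustrative assumptions, not measurements of any particular
    camera:

```python
import math

def total_noise(signal_e, read_e):
    """Total noise in electrons: shot noise (sqrt of the signal in
    electrons) added in quadrature with input-referred read noise."""
    return math.sqrt(signal_e + read_e ** 2)

# Assumed input-referred read noises, mimicking a big-pixel DSLR where
# low gain hurts the deep shadows:
read_iso100, read_iso1600 = 25.0, 4.0

signal = 50.0  # electrons: a deep-shadow tone at fixed shutter/aperture
n100 = total_noise(signal, read_iso100)    # ~26 e-: amplifier-dominated
n1600 = total_noise(signal, read_iso1600)  # ~8 e-: near the shot-noise floor
```

    At this signal level the ISO 100 readout is dominated by the
    amplifier, which is why blending in the ISO 1600 data (same shutter
    speed and aperture) improves the shadows so dramatically.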
     
    ejmartin, Jul 28, 2008
  15. John P Sheehy

    John Sheehy Guest

    (Ray Fischer) wrote:

    The noise in the sensor is a given. However, there is much more
    read/digitization-related noise at base ISO, relative to absolute
    signal, in advanced CMOS designs. The objective in Emil's desired
    camera design is not to get less sensor noise, but less of the extra
    noise caused by low gain, which severely restricts DR in cameras with
    big pixels.

    That might be an issue varying the exposure, but not varying the
    readout gain.

    So far, this sounds reasonable.

    Now, it sounds like you're talking about increased photosite size, but
    you wrote about increased sensor size; two different things.

    The reality of the situation is that for base ISO performance and DR, a
    higher density of smaller photosites works better with the traditional
    design. As I demonstrated in my OP, the higher pixel density does not
    mean poorer performance. What Emil is trying to do is to get the
    practical, area-based read noise low with big pixels, which is not
    happening with current technology. With current technology, higher
    pixel densities yield lower read noise at base ISO, higher DR, and
    higher resolution for the image. Emil's suggestion could get the DR and
    read noise a bit better than higher pixel densities can for base ISO,
    but without their resolution benefit.

    --
     
    John Sheehy, Jul 29, 2008
  16. John P Sheehy

    ejmartin Guest

    More than a bit better. I think we're agreed that FZ50 pixels are
    competitive in light collection per unit area with D3 pixels. If one
    compares read noises that pertain to my suggestion, the D3 has about
    5e- read noise with 8.45µ pixels while the FZ50 has 3.3e- read noise
    with 1.97µ pixels (according to your figures; it was more like 5e- in
    the raw file I analyzed). The read noise/area figure of merit divides
    the read noise in electrons by the pixel pitch, and those figures are
    5/8.45=0.6 for the D3, and 3.3/1.97=1.7 for the FZ50, about 1.5 stops
    worse for the small pixels. In other words, the target you've been
    aiming at is the low-ISO performance of big pixels as limited by the
    DR of components other than those pixels; once freed from that
    restriction, their performance exceeds that of the small pixels in
    terms of dynamic range. The results are consistent with what both you
    and Roger have been saying: the 12.5-stop DR of the FZ50, when scaled
    to the D3 pixel size, outperforms the D3 at base ISO in DR, but D3
    pixels freed of their downstream circuits' limited DR have 14 stops or
    perhaps a bit more, and thus quite a bit more DR than the FZ50 pixels
    on a per-area basis.
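
    The figure-of-merit arithmetic above is easy to reproduce (the
    read-noise and pixel-pitch numbers are the ones quoted in this post):

```python
import math

def read_noise_per_pitch(read_e, pitch_um):
    """Figure of merit from the post: read noise in electrons
    divided by pixel pitch in microns."""
    return read_e / pitch_um

d3 = read_noise_per_pitch(5.0, 8.45)    # ~0.59 for the Nikon D3
fz50 = read_noise_per_pitch(3.3, 1.97)  # ~1.68 for the Panasonic FZ50

stops_worse = math.log2(fz50 / d3)      # ~1.5 stops against the small pixels
```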

    But I think the DR numbers don't reveal a different and equally
    important issue -- in your more recent comparison

    http://forums.dpreview.com/forums/read.asp?forum=1018&message=28760503

    there is lots of line noise with the FZ50, so read noise is having a
    substantial impact on image quality (which indeed makes it interesting
    to see whether the G9 does better, as you hint it should). On the
    other side, the dual amplification/blended image sample I presented
    shows extremely little in the way of line noise, even 12-13 stops down
    from raw saturation, and a range of about 10-11 stops where the image
    is shot noise limited.

    So the conclusion I am coming to is that (as Roger had been claiming
    for some time) the issue is really a tradeoff of low noise/sensitivity
    favoring big pixels, versus resolution favoring small pixels. And the
    issue becomes what is the optimal tradeoff for a given application; I
    can see some applications (landscape comes to mind) favoring small
    pixels, while others (photojournalism eg) favoring large pixels.
     
    ejmartin, Jul 30, 2008
  17. John P Sheehy

    Paul Furman Guest

    How are the two shots blended? Is that mapped with some kind of mask or
    intensity cutoff or merged like stacked astro shots?

    --
    Paul Furman
    www.edgehill.net
    www.baynatives.com

    all google groups messages filtered due to spam
     
    Paul Furman, Jul 31, 2008
  18. John P Sheehy

    ejmartin Guest

    I read the raw data from both shots into IRIS and thence to a FITS
    file, which was loaded into Mathematica; then I constructed an output
    image consisting of

    1) All pixel values from the top four stops of the ISO 100 image;
    2) A linear combination of the ISO 100 and 1600 images in the next
    stop down of EV, ramping linearly between all ISO 100 at the top of
    this window, to all ISO 1600 at the bottom end of this window;
    3) All ISO 1600 pixel values in the remaining lower stops of EV.

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

    The relevant Mathematica code is

    windowmax = 15000;
    windowmin = 7000;
    range = windowmax - windowmin;
    blend = hiend; (* hiend is the ISO 100 image data; lowend is the ISO
    1600 image data *)
    Do[
      If[lowend[[i, j]] < windowmin,
        (* deep shadows: ISO 1600 data rescaled to ISO 100 units; scale is
           the relative normalization factor ~16 between the ISO 100 & 1600
           data *)
        blend[[i, j]] = Round[(lowend[[i, j]] - 1024)/scale] + 1024,
        If[lowend[[i, j]] < windowmax,
          (* transition window: ramp linearly between the two sources *)
          blend[[i, j]] =
            Round[((windowmax - lowend[[i, j]]) (lowend[[i, j]] - 1024)/scale +
                   (lowend[[i, j]] - windowmin) (hiend[[i, j]] - 1024))/range] +
             1024]],
      {i, 1, Dimensions[lowend][[1]]}, {j, 1, Dimensions[lowend][[2]]}];

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

    The resulting image data was output to a FITS file, and loaded back
    into IRIS (I would have done this all in IRIS but I couldn't see how
    to do the conditional blend easily; it was trivial however in
    Mathematica). I then used IRIS to do a rudimentary demosaic, an
    eyeball white balance, and gamma correction. The result was output to
    a tiff file and further curves work done in CS3, because the relevant
    image details from the colorchecker chart etc were so deep in shadows
    that they didn't show on an 8-bit monitor.

    The ISO 100 image alone was treated in the same way (though the final
    curves corrections were only approximately the same) and used to make
    the comparison crops.

    The upshot is that the two RAW images were blended at the RAW stage
    before demosaic, rather than after RAW conversion. That is important,
    since it can happen that (especially with the tungsten light source
    used in the demo) one color channel in a region has pixel values that
    should be taken from the ISO 1600 shot, while other pixel values
    should be taken from the ISO 100 shot, for best results. One is free
    to do that knowing the relative normalization of the ISO 100 and 1600
    data, due to the linearity of sensor response. I was trying to mimic
    the most natural way of combining the image data using a simple
    algorithm that one might implement in the firmware of a camera.
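
    For readers without Mathematica, the same blend can be sketched in
    vectorized Python/numpy. The black level (1024), normalization factor
    (~16), and window (7000-15000) are taken from the snippet above, but
    this restatement is mine, not Emil's code:

```python
import numpy as np

def blend_exposures(hiend, lowend, scale=16.0, black=1024,
                    windowmin=7000, windowmax=15000):
    """Blend an ISO 100 raw frame (hiend) with an ISO 1600 raw frame
    (lowend) of the same exposure, both 2-D arrays of raw data numbers."""
    lo = lowend.astype(np.float64)
    hi = hiend.astype(np.float64)
    # ISO 1600 data rescaled to ISO 100 units, preserving the black level.
    scaled_lo = (lo - black) / scale + black
    # Ramp weight on the ISO 100 data: 0 at/below windowmin (all ISO 1600),
    # 1 at/above windowmax (all ISO 100), linear in between.
    w = np.clip((lo - windowmin) / (windowmax - windowmin), 0.0, 1.0)
    return np.rint(w * hi + (1.0 - w) * scaled_lo).astype(hiend.dtype)
```

    The per-pixel If/ramp of the Do loop collapses into a single clipped
    linear weight applied to the whole array.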
     
    ejmartin, Jul 31, 2008
