Why not the ultimate DSLR sensor?

Discussion in 'Digital Photography' started by Benny, Mar 11, 2007.

  1. Benny

    Benny Guest

    Is there any reason why sensors couldn't one day be designed to give
    correct exposure at each pixel, by applying the required ISO at each
    and every pixel?

    We are now dealing with electronics here, so why couldn't this be achieved?

    The outcome would be that every image would be 'perfectly' exposed at all
    points (pixels) throughout the image.

    Shadows would be exposed correctly, as would highlights, etc.

    The light falling on each pixel would be evaluated and exposed accordingly.

    It is probably well beyond current sensor types, but with
    nano-electronics I would imagine this is easily achievable.

    B.
     
    Benny, Mar 11, 2007
    #1

  2. Benny wrote:
    > Is there any reason why sensors couldn't one day be designed to give
    > correct exposure at each pixel, by applying the required ISO at each
    > and every pixel?
    >
    > We are now dealing with electronics here, so why couldn't this be achieved?
    >
    > The outcome would be that every image would be 'perfectly' exposed at all
    > points (pixels) throughout the image.
    >
    > Shadows would be exposed correctly, as would highlights, etc.
    >
    > The light falling on each pixel would be evaluated and exposed accordingly.
    >
    > It is probably well beyond current sensor types, but with
    > nano-electronics I would imagine this is easily achievable.
    >
    > B.
    >
    >

    Then you would get a grey soup :)
     
    Gregor Kobelkoff, Mar 11, 2007
    #2

  3. Benny wrote:
    > Is there any reason why sensors couldn't one day be designed to give
    > correct exposure at each pixel, by applying the required ISO at each
    > and every pixel?
    >
    > We are now dealing with electronics here, so why couldn't this be achieved?
    >
    > The outcome would be that every image would be 'perfectly' exposed at all
    > points (pixels) throughout the image.
    >
    > Shadows would be exposed correctly, as would highlights, etc.
    >
    > The light falling on each pixel would be evaluated and exposed accordingly.
    >
    > It is probably well beyond current sensor types, but with
    > nano-electronics I would imagine this is easily achievable.
    >


    It's already been done - the IRIDIX processor by Apical Ltd, as used by
    Sony, Nikon, and Olympus. Known as Dynamic Range Optimisation by Sony,
    D-Lighting by Nikon (only included in consumer cameras) and something
    else by Olympus.

    Since pixels have no ISO sensitivity - they just receive photons, and the
    'ISO' is a result of applying A-to-D conversion/gain to the values of
    photons sitting in the pixel wells - any process which takes the pre-raw
    image at the A-to-D stage, analyses it, and applies selective gain to
    certain pixels is already doing this.
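
    To make that concrete, here is a minimal Python sketch of selective
    per-pixel gain (illustrative only: the target level and gain limits are
    invented, and this is not Apical's actual algorithm):

        import numpy as np

        # Toy model: 'ISO' is nothing but gain applied to photon counts
        # after capture, so per-pixel ISO is just a per-pixel gain map.
        photons = np.array([50., 400., 3200., 45000.])  # well contents
        full_well = 50000.0

        linear = photons / full_well   # 0..1 linear exposure
        target = 0.5                   # aim each pixel at a mid-tone
        gain = np.clip(target / np.maximum(linear, 1e-6), 1.0, 64.0)
        out = np.clip(linear * gain, 0.0, 1.0)

        print(gain)   # deep shadows get the full 64x, highlights ~1x
        print(out)    # every pixel pushed toward mid-tone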

    That's what Apical's process does, and it's why the Sony A100 has an
    IRIDIX chip included. It only works on the JPEG output, and it has to
    take over entirely - the camera can no longer write a normal RAW file if
    you opt for DRO+ (the full Apical process, which analyses the image pixel
    by pixel, maps out values, and detects zones which can be processed). You
    can't even opt for RAW+JPEG, and you can't use manual exposure either,
    as the CCD exposure has to be controlled by the metering to provide the
    right data for DRO+ to work on. Currently the Sony A100 is the only DSLR
    to use the Apical process directly, and Nikon's D-Lighting, which can be
    post-applied to RAW images in some cameras, may not necessarily be the
    Apical system. Their software D-Lighting does not use Apical patents or
    licenses, while the same function in Olympus Studio does. The hardware
    in Coolpix cameras with auto D-Lighting uses Apical's chip.

    I believe the higher noise levels in the Sony A100 at high ISOs, even
    from raw, compared to other cameras using the same chip, may be a result
    of setting the photon count for any given ISO equivalent to a lower
    value, in order for DRO+ to work. Like Hi200 - the Sony highlight
    preservation mode - DRO+ appears to require a 'darker' raw file to work
    on, though you never get to see the raw file. This may also explain why
    the Sony A100, when reviewed, has been noted to have shorter actual
    exposures and to produce slightly darker images. I rather hope they leave
    DRO+ and the IRIDIX processor out of future models, as any process
    viable only on JPEGs is of no interest to me, and any loss of other
    image qualities caused by having it present is really undesirable.

    See www.ukapical.com for graphs, animated explanations etc.

    David
     
    David Kilpatrick, Mar 11, 2007
    #3
  4. Benny

    Mike Russell Guest

    "Benny" <no spam > wrote in message
    news:WLQIh.9793$...
    > Is there any reason why sensors couldn't one day be designed to give
    > correct exposure at each pixel, by applying the required ISO at each
    > and every pixel?


    Yes.

    > We are now dealing with electronics here, so why couldn't this be achieved?


    Electronics has never simulated any complex cognitive process, and eyesight
    is the most complex cognitive process of them all.

    > The outcome would be that every image would be 'perfectly' exposed at all
    > points (pixels) throughout the image.


    The perfect exposure for a scene depends on the interpretation of the
    person looking at it. One person might find the highlight detail important,
    another might prefer the shadow. This has at least two effects: it makes
    the final image dependent on the interpretation of the scene, and, like any
    other interesting ambiguity, it creates a context for artistic expression.

    > Shadows would be exposed correctly, as would highlights, etc.


    If you can explain what "correct" is, yes. Otherwise, of course, no.

    > The light falling on each pixel would be evaluated and exposed
    > accordingly.


    Individual pixel light values have no meaning unless they take into account
    adjacent light values in a way that mimics the eye, as well as the
    interpretation of the overall scene as to which elements are important.
    Since we do not understand how the eye works, or what is important in
    a given scene, that's another problem.

    > It is probably well beyond current sensor types, but with
    > nano-electronics I would imagine this is easily achievable.


    Microelectronics, yes; nano-electronics, no.

    The gist of your proposal is based on the false assumption that there is one
    correct numeric interpretation of how the eye would interpret a particular
    photographic subject. The fact that there is no such interpretation is a
    good thing, because it rescues photography, and all the other visual arts,
    from stenography. It is one of the things that makes photography
    interesting.
    --
    Mike Russell
    www.curvemeister.com/forum/
     
    Mike Russell, Mar 11, 2007
    #4
  5. Benny

    Alan Browne Guest

    Benny wrote:
    > Is there any reason why sensors couldn't one day be designed to give
    > correct exposure at each pixel, by applying the required ISO at each
    > and every pixel?
    >
    > We are now dealing with electronics here, so why couldn't this be achieved?
    >
    > The outcome would be that every image would be 'perfectly' exposed at all
    > points (pixels) throughout the image.
    >
    > Shadows would be exposed correctly, as would highlights, etc.
    >
    > The light falling on each pixel would be evaluated and exposed accordingly.
    >
    > It is probably well beyond current sensor types, but with
    > nano-electronics I would imagine this is easily achievable.


    The only way to do so without compromising dynamic range and noise is to
    have a sensor and converter with enough dynamic range plus noise
    budget. A "sunny day" includes areas in deep shadow with no fill
    light. That can easily exceed 12 stops from white snow to the cave
    where the black bear sleeps. If you give 3 bits to noise, then a 15- or
    16-bit sensor will have to be used just to start.
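
    The arithmetic behind that figure, as a quick sketch in Python (using
    Alan's stop count and noise budget as given):

        # One bit of linear A-to-D depth per stop of scene range, plus
        # a few bits reserved below the darkest useful tone for noise.
        scene_stops = 12   # white snow down to the sleeping black bear
        noise_bits = 3     # noise budget
        print(scene_stops + noise_bits)   # 15 -> a 15- or 16-bit converter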

    There is no such thing as correct exposure everywhere in an image for
    the range of lighting situations that we get. It is just as correct to
    judiciously overexpose and/or underexpose certain areas, according to the
    dynamic range of the film/sensor, to achieve the desired (envisioned)
    result. And I ask you: if "each pixel is evaluated and exposed
    correctly", then wouldn't they all be mid-tone? Because that is what your
    proposal implies. An image can be exposed to deepen shadows or push
    highlights. Lighting can also be such (and so chosen) that an image
    will fit in the dynamic range of the film or sensor.

    Perhaps a more processing-powerful sensor could "shutter" pixels that
    get too much light and let those pixels not receiving enough expose
    longer. This would require reading, measuring, computing and controlling
    every pixel individually, and in a range-balance with every other pixel.
    (The Nikon D70 and others have sensor-based shuttering for fast flash
    sync @ 1/500, but this applies to the whole sensor, not specific pixels.)
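
    A toy simulation of that per-pixel shuttering idea (hypothetical: no
    shipping sensor works this way, and the flux values and well capacity
    here are made up):

        import numpy as np

        # Integrate in small time steps; 'close' each pixel once its well
        # is nearly full, and remember when it closed.
        flux = np.array([200.0, 5000.0, 80000.0])    # photons/second
        full_well, dt, steps = 50000.0, 0.001, 1000  # 1 second total

        charge = np.zeros_like(flux)
        t_close = np.full_like(flux, steps * dt)     # default: never closed
        for step in range(steps):
            is_open = charge < 0.95 * full_well
            charge[is_open] += flux[is_open] * dt
            newly_closed = ~is_open & (t_close == steps * dt)
            t_close[newly_closed] = step * dt

        # Radiance estimate: charge over each pixel's own exposure time.
        # The bright pixel no longer clips at the well capacity.
        print(charge / t_close)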

    The Sony Alpha A100 does something of what you suggest above in
    processing. From what I've seen it works. And from what I've seen I
    would avoid using it in most situations. The image ends up looking
    bland. Maybe I haven't seen it used to the best advantage.

    Another approach is the Fujifilm S2, S3 and S5, which have a lower-
    sensitivity pixel paired with each of the other pixels. This lower-
    sensitivity pixel gives the camera more reach into the highlights - in
    effect, extended dynamic range above the highlights - but no advantage in
    the shadows. Further, as the sensors are more densely populated, future
    growth into more pixels is compromised (the Fujifilm S5 is still 6
    Mpix-sites (12 Mpixels), as was the S3).
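
    Roughly how such a pixel pair is merged (an illustrative sketch; the
    16x sensitivity ratio is a stand-in, not Fujifilm's actual figure or
    blending method):

        import numpy as np

        # 'S' pixel: normal sensitivity, clips at 1.0.
        # 'R' pixel: 16x less sensitive, so it clips four stops later.
        scene = np.array([0.1, 0.8, 4.0, 12.0])   # linear luminance
        s_px = np.minimum(scene, 1.0)
        r_px = np.minimum(scene / 16.0, 1.0)

        # Use the S reading where it hasn't clipped; fall back to the
        # rescaled R reading in the highlights.  Shadows gain nothing.
        merged = np.where(s_px < 1.0, s_px, r_px * 16.0)
        print(merged)   # [ 0.1  0.8  4.  12. ] - highlights recovered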

    And finally, after all the above is said and done, there is no way (to
    date) to display or print such a dynamic range.

    Cheers,
    Alan

    --
    -- r.p.e.35mm user resource: http://www.aliasimages.com/rpe35mmur.htm
    -- r.p.d.slr-systems: http://www.aliasimages.com/rpdslrsysur.htm
    -- [SI] gallery & rulz: http://www.pbase.com/shootin
    -- e-meil: Remove FreeLunch.
     
    Alan Browne, Mar 11, 2007
    #5
  6. Benny

    Alan Browne Guest

    Gregor Kobelkoff wrote:
    > Benny wrote:
    >
    >> Is there any reason why sensors couldn't one day be designed to give
    >> correct exposure at each pixel, by applying the required ISO at each
    >> and every pixel?
    >>
    >> We are now dealing with electronics here, so why couldn't this be
    >> achieved?
    >>
    >> The outcome would be that every image would be 'perfectly' exposed at
    >> all points (pixels) throughout the image.
    >>
    >> Shadows would be exposed correctly, as would highlights, etc.
    >>
    >> The light falling on each pixel would be evaluated and exposed
    >> accordingly.
    >>
    >> It is probably well beyond current sensor types, but with
    >> nano-electronics I would imagine this is easily achievable.
    >>
    >> B.
    >>

    > Then you would get a grey soup :)


    I get you! But I think he means compressing the highs and boosting the
    lows...

    --
    -- r.p.e.35mm user resource: http://www.aliasimages.com/rpe35mmur.htm
    -- r.p.d.slr-systems: http://www.aliasimages.com/rpdslrsysur.htm
    -- [SI] gallery & rulz: http://www.pbase.com/shootin
    -- e-meil: Remove FreeLunch.
     
    Alan Browne, Mar 11, 2007
    #6
  7. Benny

    C J Campbell Guest

    On 2007-03-11 03:22:46 -0700, "Benny" <no spam > said:

    > Is there any reason why sensors couldn't one day be designed to give
    > correct exposure at each pixel, by applying the required ISO at each
    > and every pixel?


    That would be ugly at the pixel level. There would no longer be any
    shadow or light areas. Every blue pixel would be exactly equal to every
    green pixel and exactly equal to every red pixel.

    However, I could see more advanced software adjusting exposure in areas
    that are blown out or too dark.

    --
    Waddling Eagle
    World Famous Flight Instructor
     
    C J Campbell, Mar 11, 2007
    #7
  8. On Mar 11, 4:22 am, "Benny" <no spam > wrote:
    > Is there any reason why sensors couldn't one day be designed to give
    > correct exposure at each pixel, by applying the required ISO at each
    > and every pixel?
    >
    > We are now dealing with electronics here, so why couldn't this be achieved?
    >
    > The outcome would be that every image would be 'perfectly' exposed at all
    > points (pixels) throughout the image.
    >
    > Shadows would be exposed correctly, as would highlights, etc.
    >
    > The light falling on each pixel would be evaluated and exposed accordingly.
    >
    > It is probably well beyond current sensor types, but with
    > nano-electronics I would imagine this is easily achievable.
    >
    > B.


    Keep in mind that the dynamic range of the imaging sensor far exceeds
    that of any printing medium. We already lose most of the range in our
    image when we print it. Most projection displays do not even reach
    sensor dynamic range. A wider-range sensor would just mean reducing
    contrast further in the output.
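
    Put in rough numbers (the contrast ratios below are ballpark
    assumptions, not measured values):

        import math

        # Dynamic range in stops is just log2 of the contrast ratio.
        for medium, contrast in [("glossy print", 250),
                                 ("typical display", 1000),
                                 ("12-stop sensor", 4096)]:
            print(medium, round(math.log2(contrast), 1), "stops")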
     
    Don Stauffer in Minnesota, Mar 11, 2007
    #8
  9. Benny wrote:
    > Is there any reason why sensors couldn't one day be designed to give
    > correct exposure at each pixel, by applying the required ISO at each
    > and every pixel?
    >
    > We are now dealing with electronics here, so why couldn't this be achieved?
    >
    > The outcome would be that every image would be 'perfectly' exposed at all
    > points (pixels) throughout the image.


    And every pixel would be uniformly neutral gray!

    Yes, that's easily achievable now; but there's no photographic use for
    it. (It'd be cheap, though; you could even dispense with that pesky lens!)

    > Shadows would be exposed correctly, as would highlights, etc.
    >
    > The light falling on each pixel would be evaluated and exposed accordingly.
    >
    > It is probably well beyond current sensor types, but with
    > nano-electronics I would imagine this is easily achievable.


    Okay, maybe you weren't actually asking for the extreme form that I
    jokingly described above.

    What would be more useful is more bits of dynamic range. You don't
    actually *want* to do all the range compression in camera, at least not
    without controls to decide exactly what will be done. If you have to
    crunch the range drastically to make a print (HDR and such), the
    photographer wants to be in control of how it's done.
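
    The simplest possible range compression is a single power curve, as in
    the sketch below; real HDR operators are local and far more elaborate,
    which is exactly why the photographer wants control over them:

        import numpy as np

        # Squeeze 12 stops of linear scene data into an 8-stop output by
        # scaling in log space (equivalent to a power/gamma curve).
        x = np.array([2**-12, 2**-6, 0.25, 1.0])   # spans 12 stops
        y = x ** (8.0 / 12.0)
        print(np.log2(y))   # [-8. -4. -1.33  0.] - now spans 8 stops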
     
    David Dyer-Bennet, Mar 11, 2007
    #9
  10. David Kilpatrick wrote:
    > Benny wrote:
    >> Is there any reason why sensors couldn't one day be designed to give
    >> correct exposure at each pixel, by applying the required ISO at each
    >> and every pixel?
    >>
    >> We are now dealing with electronics here, so why couldn't this be
    >> achieved?
    >>
    >> The outcome would be that every image would be 'perfectly' exposed at
    >> all points (pixels) throughout the image.
    >>
    >> Shadows would be exposed correctly, as would highlights, etc.
    >>
    >> The light falling on each pixel would be evaluated and exposed
    >> accordingly.
    >>
    >> It is probably well beyond current sensor types, but with
    >> nano-electronics I would imagine this is easily achievable.
    >>

    >
    > It's already been done - the IRIDIX processor by Apical Ltd, as used by
    > Sony, Nikon, and Olympus. Known as Dynamic Range Optimisation by Sony,
    > D-Lighting by Nikon (only included in consumer cameras) and something
    > else by Olympus.
    >
    > Since pixels have no ISO sensitivity - they just receive photons, and the
    > 'ISO' is a result of applying A-to-D conversion/gain to the values of
    > photons sitting in the pixel wells - any process which takes the pre-raw
    > image at the A-to-D stage, analyses it, and applies selective gain to
    > certain pixels is already doing this.
    >
    > That's what Apical's process does, and it's why the Sony A100 has an
    > IRIDIX chip included. It only works on the JPEG output, and it has to
    > take over entirely - the camera can no longer write a normal RAW file if
    > you opt for DRO+ (the full Apical process, which analyses the image pixel
    > by pixel, maps out values, and detects zones which can be processed). You
    > can't even opt for RAW+JPEG, and you can't use manual exposure either,
    > as the CCD exposure has to be controlled by the metering to provide the
    > right data for DRO+ to work on. Currently the Sony A100 is the only DSLR
    > to use the Apical process directly, and Nikon's D-Lighting, which can be
    > post-applied to RAW images in some cameras, may not necessarily be the
    > Apical system. Their software D-Lighting does not use Apical patents or
    > licenses, while the same function in Olympus Studio does. The hardware
    > in Coolpix cameras with auto D-Lighting uses Apical's chip.
    >
    > I believe the higher noise levels in the Sony A100 at high ISOs, even
    > from raw, compared to other cameras using the same chip, may be a result
    > of setting the photon count for any given ISO equivalent to a lower
    > value, in order for DRO+ to work. Like Hi200 - the Sony highlight
    > preservation mode - DRO+ appears to require a 'darker' raw file to work
    > on, though you never get to see the raw file. This may also explain why
    > the Sony A100, when reviewed, has been noted to have shorter actual
    > exposures and to produce slightly darker images. I rather hope they leave
    > DRO+ and the IRIDIX processor out of future models, as any process
    > viable only on JPEGs is of no interest to me, and any loss of other
    > image qualities caused by having it present is really undesirable.
    >
    > See www.ukapical.com for graphs, animated explanations etc.
    >
    > David

    Having read the web page, I see they call it dynamic range compression,
    one of many methods available. All the algorithms I've seen produce
    artifacts. It is better, in my opinion, to do such processing in post,
    where one can control the magnitude of the effects and mitigate
    artifacts.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 11, 2007
    #10
  11. Benny

    ASAAR Guest

    On Sun, 11 Mar 2007 11:21:05 -0700, Roger N. Clark (change username
    to rnclark) wrote:

    > The ultimate sensor is one that has essentially 100% quantum
    > efficiency, and counts every photon and its energy. The raw output
    > would be a list of photon energies for each pixel; 8-bits would do
    > very well. For example, if a pixel recorded 50,000 photons,
    > the data for that pixel would be 50,000 bytes.


    Multiple typos, or am I misreading what you're trying to get
    across? I assume that you're talking about a pixel that could
    identify the number of photons collected for it, up to at least
    50,000. It would need 16 bits, not 8, to do that. And 16 bits
    would be just enough to represent a value of 50,000 photons. I
    don't see where the data for a pixel would amount to 50,000 bytes.
    That's enough to result in a single image that wouldn't fit on a CF
    card, but *might* fit on a really large hard drive. What you wrote
    would be much more reasonable if the paragraph ended with "would be
    a value of 50,000" instead of "would be 50,000 bytes". I'm having a
    hard time keeping my eyes open right now and need some coffee. Did
    you miss your morning brew too, or have too much of it? :)
     
    ASAAR, Mar 11, 2007
    #11
  12. Benny wrote:
    > Is there any reason why sensors couldn't one day be designed to give
    > correct exposure at each pixel, by applying the required ISO at each
    > and every pixel?
    >
    > We are now dealing with electronics here, so why couldn't this be achieved?
    >
    > The outcome would be that every image would be 'perfectly' exposed at all
    > points (pixels) throughout the image.
    >
    > Shadows would be exposed correctly, as would highlights, etc.
    >
    > The light falling on each pixel would be evaluated and exposed accordingly.
    >
    > It is probably well beyond current sensor types, but with
    > nano-electronics I would imagine this is easily achievable.
    >
    > B.
    >
    >

    Correct exposure implies enough photons, and that is directly
    related to the size of a pixel and its ability (quantum efficiency)
    to collect those photons in a reasonable exposure time.
    Today, that is best done by the largest pixel sensors (all digital
    cameras have similar and quite good quantum efficiencies).

    The ultimate sensor is one that has essentially 100% quantum
    efficiency, and counts every photon and its energy. The raw output
    would be a list of photon energies for each pixel; 8-bits would do
    very well. For example, if a pixel recorded 50,000 photons,
    the data for that pixel would be 50,000 bytes. In the raw conversion
    step to form an image, the best model of the color response of the
    human eye could then be used to produce the most accurate color.
    The effective speed of a sensor (currently reduced by the Bayer and IR
    filters and quantum efficiencies of about 30%, for a sensor
    package efficiency of about 10%) would increase about 10 times.
    So current large pixel DSLRs with top ISOs of 3,200 could rise
    to 32,000.

    Such a solid state photon detector was reported in the scientific literature
    around 1995. But I have not heard of any further developments.
    The difficulty with such detectors is responding fast enough
    between photons so you count individual photons, not paired
    photons. Thus, photon counting sensors have been limited to
    very low light situations.
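
    The arithmetic behind that ten-fold figure, using Roger's round numbers
    (the roughly one-third filter transmission is inferred from his 30% and
    10% figures):

        # Today: ~30% quantum efficiency, with Bayer + IR filters passing
        # roughly a third of the light -> ~10% package efficiency.
        qe = 0.30
        filter_pass = 1.0 / 3.0
        package_eff = qe * filter_pass      # ~0.10

        speed_gain = 1.0 / package_eff      # a perfect counter: ~10x
        print(round(speed_gain), 3200 * round(speed_gain))   # 10, 32000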

    Roger
    http://www.clarkvision.com
     
    Roger N. Clark (change username to rnclark), Mar 11, 2007
    #12
  13. Benny

    Paul Furman Guest

    Benny wrote:

    > Is there any reason why sensors couldn't one day be designed to give
    > correct exposure at each pixel, by applying the required ISO at each
    > and every pixel?
    >
    > We are now dealing with electronics here, so why couldn't this be achieved?
    >
    > The outcome would be that every image would be 'perfectly' exposed at all
    > points (pixels) throughout the image.
    >
    > Shadows would be exposed correctly, as would highlights, etc.
    >
    > The light falling on each pixel would be evaluated and exposed accordingly.
    >
    > It is probably well beyond current sensor types, but with
    > nano-electronics I would imagine this is easily achievable.


    It's close to that when you make adjustments in a raw converter for
    fill, brightness, curves, etc.
     
    Paul Furman, Mar 11, 2007
    #13
  14. Benny

    Pete D Guest

    "Roger N. Clark (change username to rnclark)" <> wrote in
    message news:...
    > David Kilpatrick wrote:
    >> Benny wrote:
    >>> Is there any reason why sensors couldn't one day be designed to give
    >>> correct exposure at each pixel, by applying the required ISO at each
    >>> and every pixel?
    >>>
    >>> We are now dealing with electronics here, so why couldn't this be
    >>> achieved?
    >>>
    >>> The outcome would be that every image would be 'perfectly' exposed at
    >>> all points (pixels) throughout the image.
    >>>
    >>> Shadows would be exposed correctly, as would highlights, etc.
    >>>
    >>> The light falling on each pixel would be evaluated and exposed
    >>> accordingly.
    >>>
    >>> It is probably well beyond current sensor types, but with
    >>> nano-electronics I would imagine this is easily achievable.
    >>>

    >>
    >> It's already been done - the IRIDIX processor by Apical Ltd, as used by
    >> Sony, Nikon, and Olympus. Known as Dynamic Range Optimisation by Sony,
    >> D-Lighting by Nikon (only included in consumer cameras) and something
    >> else by Olympus.
    >>
    >> Since pixels have no ISO sensitivity - they just receive photons, and
    >> the 'ISO' is a result of applying A-to-D conversion/gain to the values
    >> of photons sitting in the pixel wells - any process which takes the
    >> pre-raw image at the A-to-D stage, analyses it, and applies selective
    >> gain to certain pixels is already doing this.
    >>
    >> That's what Apical's process does, and it's why the Sony A100 has an
    >> IRIDIX chip included. It only works on the JPEG output, and it has to
    >> take over entirely - the camera can no longer write a normal RAW file
    >> if you opt for DRO+ (the full Apical process, which analyses the image
    >> pixel by pixel, maps out values, and detects zones which can be
    >> processed). You can't even opt for RAW+JPEG, and you can't use manual
    >> exposure either, as the CCD exposure has to be controlled by the
    >> metering to provide the right data for DRO+ to work on. Currently the
    >> Sony A100 is the only DSLR to use the Apical process directly, and
    >> Nikon's D-Lighting, which can be post-applied to RAW images in some
    >> cameras, may not necessarily be the Apical system. Their software
    >> D-Lighting does not use Apical patents or licenses, while the same
    >> function in Olympus Studio does. The hardware in Coolpix cameras with
    >> auto D-Lighting uses Apical's chip.
    >>
    >> I believe the higher noise levels in the Sony A100 at high ISOs, even
    >> from raw, compared to other cameras using the same chip, may be a
    >> result of setting the photon count for any given ISO equivalent to a
    >> lower value, in order for DRO+ to work. Like Hi200 - the Sony highlight
    >> preservation mode - DRO+ appears to require a 'darker' raw file to work
    >> on, though you never get to see the raw file. This may also explain why
    >> the Sony A100, when reviewed, has been noted to have shorter actual
    >> exposures and to produce slightly darker images. I rather hope they
    >> leave DRO+ and the IRIDIX processor out of future models, as any
    >> process viable only on JPEGs is of no interest to me, and any loss of
    >> other image qualities caused by having it present is really undesirable.
    >>
    >> See www.ukapical.com for graphs, animated explanations etc.
    >>
    >> David

    > Having read the web page, I see they call it dynamic range compression,
    > one of many methods available. All the algorithms I've seen produce
    > artifacts. It is better, in my opinion, to do such processing in post,
    > where one can control the magnitude of the effects and mitigate
    > artifacts.
    >
    > Roger


    Of course that would always be the case, because post-processing can
    always throw more processing power at the job; but for those who do not
    want to post-process, it looks like a good answer.
     
    Pete D, Mar 11, 2007
    #14
  15. ASAAR wrote:
    > On Sun, 11 Mar 2007 11:21:05 -0700, Roger N. Clark (change username
    > to rnclark) wrote:
    >
    >> The ultimate sensor is one that has essentially 100% quantum
    >> efficiency, and counts every photon and its energy. The raw output
    >> would be a list of photon energies for each pixel; 8-bits would do
    >> very well. For example, if a pixel recorded 50,000 photons,
    >> the data for that pixel would be 50,000 bytes.

    >
    > Multiple typos, or am I misreading what you're trying to get
    > across? I assume that you're talking about a pixel that could
    > identify the number of photons collected for it, up to at least
    > 50,000. It would need 16 bits, not 8, to do that. And 16 bits
    > would be just enough to represent a value of 50,000 photons. I
    > don't see where the data for a pixel would amount to 50,000 bytes.
    > That's enough to result in a single image that wouldn't fit on a CF
    > card, but *might* fit on a really large hard drive. What you wrote
    > would be much more reasonable if the paragraph ended with "would be
    > a value of 50,000" instead of "would be 50,000 bytes". I'm having a
    > hard time keeping my eyes open right now and need some coffee. Did
    > you miss your morning brew too, or have too much of it? :)
    >


    I think you are misunderstanding what I meant.
    Think of a list of numbers in a spreadsheet. Each number is the energy,
    or wavelength, of a photon. One can digitize the energy over the
    sensitivity spectrum of the sensor (about 300 to 1,000 nanometers)
    with 256 levels, or 8 bits. That would digitize each photon's
    wavelength to (1000-300)/256 = 2.73 nm, plenty accurate for
    reconstructing the eye's spectral response in each color.
    So to record the energy or wavelength of 50,000 photons,
    you would need 50,000 bytes. Each number represents the detection of
    a photon, and its value the photon's energy.
    A 10-megapixel sensor would then record up to a 50,000 * 10,000,000 =
    0.5 terabyte raw file. No problem with the 4-petabyte compact flash cards
    we'll have some day ;-).
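
    The figures check out (a direct transcription of the arithmetic above):

        # One byte per photon digitizes its wavelength over the sensor's
        # 300-1000 nm sensitivity span.
        print((1000 - 300) / 2**8)      # 2.73... nm per step

        # 50,000 photons/pixel x 10 million pixels, at one byte each:
        raw_bytes = 50_000 * 10_000_000
        print(raw_bytes / 1e12, "TB")   # 0.5 TB per raw frame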

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 11, 2007
    #15
  16. Benny

    Marvin Guest

    Benny wrote:
    > Is there any reason why sensors couldn't one day be designed to give
    > correct exposure at each pixel, by applying the required ISO at each
    > and every pixel?
    >
    > We are now dealing with electronics here, so why couldn't this be achieved?
    >
    > The outcome would be that every image would be 'perfectly' exposed at all
    > points (pixels) throughout the image.
    >
    > Shadows would be exposed correctly, as would highlights, etc.
    >
    > The light falling on each pixel would be evaluated and exposed accordingly.
    >
    > It is probably well beyond current sensor types, but with
    > nano-electronics I would imagine this is easily achievable.
    >
    > B.
    >
    >

    It is quite feasible. One has only to change the gain on a
    pixel-by-pixel basis, which could be done after the
    exposure. In digicams, changing the ISO is done by changing
    the gain in the readout.

    Another way is to change the exposure for each pixel before
    the pixel is read. I am co-inventor on a patent that
    applied that method to measuring spectra photoelectrically
    with a 2D array sensor. It is U.S. 3,728,576.
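
    A minimal sketch of the first method, with the per-pixel gain applied
    at readout before quantisation (toy numbers, not the patented
    apparatus):

        import numpy as np

        # Exposure is fixed; only the readout gain varies per pixel,
        # chosen after the exposure from the charge each well holds.
        wells = np.array([120.0, 3000.0, 45000.0])   # electrons collected
        full_well = 50000.0
        gain = np.clip(full_well / (2.0 * wells), 1.0, 64.0)

        # Quantise to 12 bits after amplification: the darkest pixel
        # now reads ~630 codes instead of ~10 without the gain.
        code = np.round(wells * gain / full_well * 4095)
        print(gain, code)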
     
    Marvin, Mar 11, 2007
    #16
  17. Benny

    acl Guest

    On Mar 11, 9:21 pm, "Roger N. Clark (change username to rnclark)"
    <> wrote:
    > Benny wrote:
    > > Is there any reason why sensors couldn't one day be designed to give
    > > correct exposure at each pixel, by applying the required ISO at each
    > > and every pixel?
    >
    > > We are now dealing with electronics here, so why couldn't this be
    > > achieved?
    >
    > > The outcome would be that every image would be 'perfectly' exposed at
    > > all points (pixels) throughout the image.
    >
    > > Shadows would be exposed correctly, as would highlights, etc.
    >
    > > The light falling on each pixel would be evaluated and exposed
    > > accordingly.
    >
    > > It is probably well beyond current sensor types, but with
    > > nano-electronics I would imagine this is easily achievable.
    >
    > > B.
    >
    > Correct exposure implies enough photons, and that is directly
    > related to the size of a pixel and its ability (quantum efficiency)
    > to collect those photons in a reasonable exposure time.
    > Today, that is best done by the largest pixel sensors (all digital
    > cameras have similar and quite good quantum efficiencies).
    >
    > The ultimate sensor is one that has essentially 100% quantum
    > efficiency, and counts every photon and its energy. The raw output
    > would be a list of photon energies for each pixel; 8-bits would do
    > very well. For example, if a pixel recorded 50,000 photons,
    > the data for that pixel would be 50,000 bytes. In the raw conversion
    > step to form an image, the best model of the color response of the
    > human eye could then be used to produce the most accurate color.
    > The effective speed of a sensor (currently reduced by the Bayer and IR
    > filters and quantum efficiencies of about 30%, for a sensor
    > package efficiency of about 10%) would increase about 10 times.
    > So current large pixel DSLRs with top ISOs of 3,200 could rise
    > to 32,000.
    >
    > Such a solid state photon detector was reported in the scientific literature
    > around 1995. But I have not heard of any further developments.
    > The difficulty with such detectors is responding fast enough
    > between photons so you count individual photons, not paired
    > photons. Thus, photon counting sensors have been limited to
    > very low light situations.


    Hello. Do you have a reference? I am curious.
     
    acl, Mar 11, 2007
    #17
  18. Benny

    acl Guest

    On Mar 11, 9:09 pm, ASAAR <> wrote:

    > Multiple typos, or am I misreading what you're trying to get
    > across? I assume that you're talking about a pixel that could
    > identify the number of photons collected for it, up to at least
    > 50,000. It would need 16 bits, not 8, to do that. And 16 bits
    > would be just enough to represent a value of 50,000 photons. I
    > don't see where the data for a pixel would amount to 50,000 bytes.
    > That's enough to result in a single image that wouldn't fit on a CF
    > card, but *might* fit on a really large hard drive. What you wrote
    > would be much more reasonable if the paragraph ended with "would be
    > a value of 50,000" instead of "would be 50,000 bytes". I'm having a
    > hard time keeping my eyes open right now and need some coffee. Did
    > you miss your morning brew too, or have too much of it? :)


    The idea is that we not only count electrons at each pixel, but in
    fact detect the energy (i.e. wavelength, and thus "colour") of each. He
    says that 8-bit accuracy in the determination of this energy/wavelength
    is enough, thus 1 byte/electron; if we receive 50,000 electrons/pixel,
    etc.

    This would get rid of the CFA. I cannot imagine how one could
    measure the energies of 50,000 electrons in, e.g., 36 square microns in
    1/100 s, and even more so do this in a periodic array on a piece of
    silicon (or whatever), though.
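
    For scale, the event rate implied by those numbers (straightforward
    arithmetic on the figures above):

        # 50,000 photons per pixel in 1/100 s, over a 10 MP sensor:
        per_pixel_rate = 50_000 / 0.01            # 5 million events/s/pixel
        sensor_rate = per_pixel_rate * 10_000_000
        print(per_pixel_rate, sensor_rate)        # 5e6 and 5e13 events/s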
     
    acl, Mar 11, 2007
    #18
  19. acl wrote:

    >>Such a solid state photon detector was reported in the scientific literature
    >>around 1995. But I have not heard of any further developments.


    > Hello. Do you have a reference? I am curious.


    I just looked up when. It was January, 1996. I was
    conducting thermal vacuum tests on the Cassini VIMS
    instrument, and another scientist, who was reading, I think,
    either Nature or Science, pointed out the article.
    We were both impressed, and hoped that one day we could
    use such a detector as a spectrometer, with no need
    for a grating, prism, or filter. So Science or Nature,
    within a couple of months up to January, 1996, is as close
    as I can get at the moment.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 12, 2007
    #19
  20. Benny

    Benny Guest

    Thanks to all for the technical (and in some cases very detailed) replies.
    You have answered a slightly niggling thought of mine regarding sensors.
    Regards,
    B.



    "Benny" <no spam > wrote in message
    news:WLQIh.9793$...
    > Is there any reason why sensors couldn't one day be designed to give
    > correct exposure at each pixel, by applying the required ISO at each
    > and every pixel?
    >
    > We are now dealing with electronics here, so why couldn't this be achieved?
    >
    > The outcome would be that every image would be 'perfectly' exposed at all
    > points (pixels) throughout the image.
    >
    > Shadows would be exposed correctly, as would highlights, etc.
    >
    > The light falling on each pixel would be evaluated and exposed
    > accordingly.
    >
    > It is probably well beyond current sensor types, but with
    > nano-electronics I would imagine this is easily achievable.
    >
    > B.
    >
     
    Benny, Mar 12, 2007
    #20
