The Human Eye: 120 Megapixel Monochrome, 6 Megapixel Color

Discussion in 'Digital Photography' started by Brian C. Baird, Jun 15, 2004.

  1. look at it again.
    Crownfield, Jun 15, 2004

  2. So you admit that you're a troll?
    no, Jun 15, 2004

  3. Now you are being stubborn.

    Having three colors at each sensor location is not efficient
    if you want to take photos aimed at humans to look at.

    It is more efficient to e.g.

    1. misalign the color sensors to achieve more resolution
    with the same number of sensors.
    2. have more B&W sensors and fewer color sensors, for
    exactly the same reason
    3. have sensors of different sizes to increase dynamic range.
    4. etc.

    Now - do you still think that having three color sensors in
    the same location is the most efficient, measured against the number
    of sensors?

    Roland Karlsson, Jun 15, 2004
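Roland's sensor-budget argument can be put in numbers with a small sketch. The 6-million-photosite budget is just an illustrative assumption, and treating each Bayer photosite as yielding one interpolated output pixel is the usual simplification:

```python
# Sketch: output-pixel yield for the same photosite budget, under the
# simplifying assumption that Bayer demosaicing yields one output pixel
# per photosite, while co-located (Foveon-style) sensing spends three
# colour samples per output pixel.

def bayer_output_pixels(photosites: int) -> int:
    # One colour sample per site; the two missing colours at each site
    # are interpolated from neighbours, so output ~ photosite count.
    return photosites

def colocated_output_pixels(photosites: int) -> int:
    # Three stacked colour samples per output pixel.
    return photosites // 3

budget = 6_000_000  # hypothetical photosite budget
print(bayer_output_pixels(budget))      # 6000000
print(colocated_output_pixels(budget))  # 2000000
```

This is exactly JPS's framing below: per photosensor, Bayer wins; per full-colour pixel, co-located sensing wins.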
  4. With any given budget of photosensors, this is probably true; with any
    given budget of pixels, it is false, all other things being equal.
    JPS, Jun 16, 2004
  5. Don't be silly. The only part of a Bayer sensor that matches human vision is
    the use of RGB.

    Bayer pattern sensors are at the moment the best technical compromise for
    making silicon-based sensors for still-picture photography.

    Anyhow, I don't think that the limits of human vision can be translated to
    image resolution. I want as much resolution as I can get, because I really
    like to walk towards a poster-sized print and basically zoom in on the image.
    It is the difference between looking at a picture and the pervasiveness of
    'virtual' reality.
    Philip Homburg, Jun 16, 2004
  6. It isn't RGB in the eye. The eye pigment sensitivities are much closer
    to YGB. R is generated by interpretation in the brain as Y-G. You can
    have some fun with perceived colour using a filter that is a narrowband
    notch reject against yellow light.
    But colour detail is subsampled in the eye relative to the intensity
    data. The same compromise was adopted for broadcast colour TV too, and
    for good reason: the human eye cannot see colour detail as finely as
    intensity detail. Full field of view at 1' arc resolution at closest
    focus gives an upper bound.

    Martin Brown, Jun 16, 2004
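The compromise Martin describes is the same one digital video formalises as 4:2:0 sampling. A minimal sketch, assuming the BT.601 luma weights, of why subsampling only the colour part pays:

```python
# Sketch: full-resolution luma plus quarter-resolution chroma (4:2:0)
# halves the sample count relative to RGB at every pixel.

def luma(r, g, b):
    # BT.601 luma weights; the eye is most sensitive around green/yellow.
    return 0.299 * r + 0.587 * g + 0.114 * b

def samples_per_pixel_rgb():
    return 3.0  # R, G and B stored at every pixel

def samples_per_pixel_420():
    # Y at every pixel; Cb and Cr each at one pixel in four.
    return 1.0 + 2 * 0.25

print(luma(255, 255, 255))      # ~255 (white: weights sum to 1)
print(samples_per_pixel_rgb())  # 3.0
print(samples_per_pixel_420())  # 1.5
```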
  7. I know that yellow is special in some sense. I think it has something to
    do with the fact that both the 'red' and the 'green' cones are quite
    sensitive to yellow, but 'blue' ones are not.
    The old analog component video (Y, B-Y, R-Y) is not subsampled. Three
    chip video cameras subsample when they encode the data (composite, Y/C,
    or any of the digital formats). In retrospect, I think that broadcast
    colour TV is a very bad backward compatibility hack.

    For obvious reasons, film does not subsample color information.

    The problem with subsampled color data is that you have to make sure that
    you don't enlarge beyond a certain size. A soft image still looks kind
    of natural, but when the colors are wrong, it looks weird.

    Both broadcast colour TV and Bayer pattern sensors generate weird colors
    as a result of aliasing errors.
    It doesn't. That assumes a fixed viewing distance. When you can vary
    the distance, or if you can walk around, there is almost no limit to
    the number of pixels you can use.

    Unless you mean, the field of view at the largest distance, together
    with the required resolution at the closest distance.
    Philip Homburg, Jun 16, 2004
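Philip's point about subsampled colour looking weird, rather than merely soft, can be shown with a toy example. This is a simplification (it subsamples whole RGB pixels instead of just the chroma channels), but the aliasing mechanism at a sharp coloured edge is the same:

```python
# Sketch: subsampling colour across a sharp red/blue edge invents a
# colour that is not in the scene. A 1-D row: three red pixels, then
# five blue. Colour is stored at half resolution by averaging pairs,
# then repeated on upsampling.

row = [(255, 0, 0)] * 3 + [(0, 0, 255)] * 5

def subsample_pairs(pixels):
    out = []
    for i in range(0, len(pixels), 2):
        a, b = pixels[i], pixels[i + 1]
        out.append(tuple((x + y) // 2 for x, y in zip(a, b)))
    return out

def upsample(pixels):
    out = []
    for p in pixels:
        out.extend([p, p])  # nearest-neighbour upsampling
    return out

rebuilt = upsample(subsample_pairs(row))
# The pair straddling the red/blue edge comes back purple:
print(rebuilt[2], rebuilt[3])  # (127, 0, 127) (127, 0, 127)
```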
  8. Shill.
    Brian C. Baird, Jun 16, 2004
  9. Human vision doesn't use RGB. It uses yellow/green, green and blue.
    Brian C. Baird, Jun 16, 2004
  10. We are debating semantics. Of course, if you have six million individual
    colour sensors, it's better to deploy them in a 6MP Bayer pattern rather
    than in a two million full colour pixel CCD.

    But six million full colour pixels are of course better than six million
    single colour pixels, which is why the Foveon technology is a step
    forward.
    All we need now are higher MP Foveon CCDs (with more than just 3.4 MP)
    and better performance. It's entirely possible that all CCDs will be
    Foveon one day.
    Alfred Molon, Jun 16, 2004
  11. It is not semantics. Three sensors in the same location are
    probably as expensive as three sensors in different locations.
    There are no six million full colour Foveon sensors, only 3.4 million,
    and for a reason: if you try to put 6 million pixels in
    the size of a Foveon sensor, you will get a very low fill factor.
    You might be right. It does not look that way yet, though.

    Roland Karlsson, Jun 16, 2004
  12. The eye does not use RGB.

    Roland Karlsson, Jun 16, 2004
  13. I guess I should have written something like 'a linear combination of the
    CIE tristimulus response curves'.
    Philip Homburg, Jun 16, 2004
  14. Perhaps. It's also possible, however, that the individual colour sensing
    layers in a foveon-type sensor will never be as good as the individual
    colour sensing pixels in a bayer-type array, owing to them being
    necessarily more complex devices. In which case the bayer compromise
    may be better than the foveon compromise.

    Or we may end up with a different system entirely. You could for
    example use a half-silvered mirror to send most of the incoming
    light to a high-resolution monochrome sensor, and the remainder
    to a somewhat lower resolution bayer sensor, and combine the
    results in firmware. This would also allow for excellent black-and-
    white performance. Another possibility that is the subject of a
    recent PhD thesis is the use of a two-layer sensor combined with
    a two-colour filter pattern.

    The problem with all these alternatives is that the bayer compromise
    is hardly a compromise at all. Reduced chroma resolution is
    completely irrelevant in a photographic context, and the existing
    implementations have coloured moire artifacting well enough
    under control that this could prove to be similarly unimportant.

    - Len
    Leonard, Jun 17, 2004
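Leonard's half-silvered-mirror idea amounts to fusing full-resolution luminance with lower-resolution colour. A hypothetical 1-D sketch of the combining step (the equal-weight luma and nearest-neighbour upsampling are my simplifying assumptions, not anything from an actual camera):

```python
# Sketch: combine per-pixel luminance from a mono sensor with
# half-resolution colour from a Bayer sensor by rescaling each
# upsampled colour sample to match the mono brightness reading.

def fuse(mono, colour_half):
    # mono: one luminance value per pixel.
    # colour_half: one (r, g, b) tuple per two pixels.
    out = []
    for i, y in enumerate(mono):
        r, g, b = colour_half[i // 2]
        lum = (r + g + b) / 3 or 1  # crude equal-weight luma; avoid /0
        s = y / lum
        out.append(tuple(round(c * s) for c in (r, g, b)))
    return out

mono = [90, 110, 90, 110]            # fine detail only the mono sensor sees
colour_half = [(100, 100, 100)] * 2  # flat grey from the colour sensor
print(fuse(mono, colour_half))
# [(90, 90, 90), (110, 110, 110), (90, 90, 90), (110, 110, 110)]
```

The fused result keeps the mono sensor's pixel-level detail while taking its hue from the coarser colour data, which is the trade the mirror scheme is after.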
  15. Not to mention that as sensors get larger, any artifacts or quality
    issues due to anti-aliasing will be less apparent in the final result.
    Since making a single-layer Bayer sensor will probably always be
    considerably cheaper per pixel than a multi-layered sensor, it's hard to
    see a future with multi-layer sensors in it.
    Brian C. Baird, Jun 17, 2004
  16. In fact the eye works just like a Foveon sensor, which is exactly why
    the Fovea was named after the Foveon sensor. Seriously, the fact that
    the Foveon sensor mimics the eye exactly is why it was named after the
    sharpest full color portion of the human eye.
    Georgette Preddy, Jun 17, 2004
  17. Wait, you're saying the FOVEA was named after the FOVEON sensor? You're
    Brian C. Baird, Jun 17, 2004
  18. SNIP
    Wrong, again, as usual. The cones in the retina that enable color vision
    don't each sense full color; they each sense a partial spectrum.

    Bart van der Wolf, Jun 17, 2004
  19. What bullshit. The cones and rods are placed beside each other, not
    in layers...

    Waldo, Jun 17, 2004
  20. This caught me by surprise, so I looked it up (something you might
    want to try).
    You're wrong.

    " the Fovea was named after the Foveon sensor."
    Ah, I get it now.

    Bill Funk
    Change "g" to "a"
    Big Bill, Jun 17, 2004
