Putting the SD9 "yellow myth" to bed, a follow along RAW demo

Discussion in 'Digital Photography' started by George Preddy, Dec 24, 2003.

  1. So now interpolated output pixels don't matter when determining optical
    resolution. Would you please make up your mind?
    Same for Foveon. There is no one spot with 2 red sensors. These are
    mutually exclusive sampling populations. Foveon takes a lot more discrete
    color samples (170%) over the same chip area, not less.
    That's why 10.3M beats 6M.
    The 10D result proves it. The 1Ds is slightly worse than the 10D in terms
    of optical resolution; its "extra" (if you think 11M is a lot, which I
    certainly do not) sensors go toward building more FOV using a bigger chip,
    not higher-density sensing.
    Exactly. More specifically, they only help when there is light.

    Think about it, if you accept the Bayer requirement of only using B&W
    targets to measure optical resolution, then why even field 1/3rd spectrum
    sensors (12-bits, in the current context)? Use 1-bit sensors; it'll be a lot
    cheaper and you could field a 100MP full frame sensor tomorrow that will
    blow all DSLRs away.

    Even a 3-colour scanner or 3CCD camera has little or no
    It can only resolve 1.5M discrete full color entities, no more. Foveon can
    resolve 3.4M full color.
    Only B&W tests, in color tests Bayers do the dead bug.
    11M what?
    Why? The 1Ds has a huge color sensor disadvantage, Foveon has 25% more
    full color sensors.
     
    George Preddy, Jan 1, 2004
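
For reference, a short Python sketch of the counting behind the "170%" and "25%" figures in the post above. The pixel totals are the commonly quoted ones for the SD9/SD10 and for 6 MP / 11 MP Bayer bodies; whether these are the right quantities to compare is exactly what the rest of the thread argues about.

    # Reproduces the arithmetic behind the figures quoted above; purely
    # illustrative, since whether these are the right quantities to compare
    # is what the rest of the thread disputes.

    foveon_locations = 3.43e6              # photosite (x, y) locations on the SD9/SD10
    foveon_samples = 3 * foveon_locations  # three stacked colour samples per location, ~10.3M

    bayer_6mp_sites = 6.0e6                # D60/10D class: one colour sample per photosite
    bayer_11mp_sites = 11.0e6              # EOS-1Ds

    # ~172%: the "170%" more colour samples over a similar chip area
    print(f"Foveon samples vs 6 MP Bayer sites: {foveon_samples / bayer_6mp_sites:.0%}")

    # ~125%: the "25% more full color sensors" vs the 1Ds, counting RGGB quads as one
    print(f"Foveon locations vs 1Ds sites / 4:  {foveon_locations / (bayer_11mp_sites / 4):.0%}")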

  2. JPS (Guest)

    In message <bt0ds6$lg6$>,
    No, because greyscale resolution is more vivid to the eye/brain.
    --
     
    JPS, Jan 1, 2004

  3. Grayscale has nothing to do with B&W. Gray requires 3 RGB values, which
    on dirty Bayer sensors means erroneously borrowing color components from
    neighbors. Variable gray resolution charts would be immediately
    shunned and sponsorship dollars pulled by Bayer manufacturers if ever used.
    But full color is obviously the best way to measure the optical resolution
    of a color sensor.
     
    George Preddy, Jan 1, 2004
  4. JPS (Guest)

    In message <bt0sj7$qh8$>,
    Because color resolution lags somewhere behind, but is close enough that
    there is no shortage of color resolution as far as the viewer is
    concerned. A greyscale resolution test *is* a sort of color test; a
    camera could have problems recreating the B&W, as your SD9 seems to do
    in your shot of the 60-spoke sine wheel. You have color pixels all
    over the place in the B&W wheel. I get no visible color (just
    saturation numbers like 0% to 3%, with an occasional 4%) with the 10D,
    with a *single* attempt at clicking on the white of the paper with the
    color dropper in the RAW converter.
    --
     
    JPS, Jan 1, 2004
  5. Only with Bayer, all the more reason to test it.
     
    George Preddy, Jan 1, 2004
  6. It's clearly obsessive behaviour. I have to wonder how many hours a
    day he spends on his posting frenzies, and if it has started to affect
    his life.

    I rarely post on Usenet these days, but I like to lurk here to keep
    tabs on what the digital camera user base is like.

    I suspect that Foveon may be a dead-end technology, a solution to a
    problem that does not exist. Still an interesting concept, mind you,
    even if the implementation has some issues. I would be curious to see
    what a full-frame 10Mp version would produce (spatial resolution,
    George, no creative inaccuracies please!).

    Perfect technology does not exist, and no one item is "the best".
    What is best is context-dependent, and subjective. Getting
    emotionally involved with your equipment is neither healthy nor
    mature, and leads to bad decisions. I just moved from one camera
    brand/system to another. The *least* important part of that move was
    the actual camera body and its sensor - far more important was the
    overall camera system, the company's stability and long-term market
    share, and the cost. 20 years from now I will likely be on my tenth
    DSLR body, but still using the same lenses, flash, and accessories.
    Looking at it this way the sensor in my current camera body is nearly
    irrelevant.
     
    The Black Sheep, Jan 1, 2004
  7. What's not spatial about 10M spatially discrete points spread over the full
    area of the sensor? The color channels are sampling mutually exclusive
    sets.

    And what is "spatially discrete" about borrowing 75% of the color
    information from neighboring pixels?
    Just better.
     
    George Preddy, Jan 1, 2004
  8. Do you think photographs are two-dimensional or three-dimensional?
    LOL.
    Did you think I would fall for that, George? LOL. Playing word
    games is a poor way to make a point.

    You could stack one million spatially "discrete" sampling points
    *behind* each other, all combining to form one image pixel if you
    wanted to. Would not have any more, or less, spatial resolution than
    a single sampling point.

    What you failed to grasp here is that I think a higher-pixel-count
    Foveon might be worth a look; I stated I was willing to examine the
    results. You see, I don't hold irrational extreme views, and I am
    willing to give the technology a second chance.
    Of course you would snip the most important parts of that paragraph,
    the ones discussing *context*, just to make another generalisation.
    I'll give you a second chance to write a reasoned reply if you wish.
     
    The Black Sheep, Jan 1, 2004
  9. You only need 3.

    4 does no good, which is why RGGB is inefficient.
    You can also put three 1/3rd-spectrum sensors side by side, and you still have
    only one spatially discrete full color sampling point.

    This is the Bayer approach (well, 4, but with no color benefit to double
    green). The optically proper thing to do would be to translate/compress
    full color triplets to a single point digitally, but unfortunately that is
    virtually impossible to do with a scalable 2D sensor (4 spots per scalable
    unit, and 3 primary colors). A scalable Bayer is a square peg.
    See above. You are missing some important concepts.
     
    George Preddy, Jan 1, 2004
  10. You seem to have a problem distinguishing between spatial and
    non-spatial interpolation. The *spatial* interpolation used by SPP to
    generate a 14 megapixel output image from the Sigma camera data adds no
    spatial information at all, so there is no increase in resolution in
    the output image. It's empty magnification. There's no point in having
    more output pixels than sensing locations when the sensor is a normal
    row/column grid.

    On the other hand, the D60 and D100 use interpolation in colour to
    provide 3-colour output pixels. This *colour* interpolation does not
    add any more *colour* detail, but it does preserve the 6 or 11 million
    pixels of spatial detail.

    The relationship is extremely simple: SD9/SD10 samples the image at
    3.4 million locations, and gives a 3.4 megapixel image. The D60/1Ds
    sample the image at 6 and 11 million locations, and give 6/11 MP images.
    Actual resolution tests confirm the resolution difference. The extra
    *colour* samples of the SD9/SD10 can be expected to improve colour
    resolution, at best, not luminance resolution. (But in fact the Foveon
    cameras have other colour problems not related to resolution).

    In the first place they're not "mutually exclusive sampling
    populations", since the colour responses of the Foveon sensor overlap a
    great deal. Where do you learn all these fancy phrases without
    understanding what they mean? Second, when you're trying to determine
    luminance, the RGB values measured at a single location are not
    independent at all. They are three components of the light falling on
    the same point.

    It would, if the Foveon sensor had 10.3M sensing locations. It doesn't;
    it has only 3.4M.
    The result of an improperly performed test does not prove anything.

    Again, nonsense. In general photography, the "extra" sensors of the 1Ds
    are used, in conjunction with a shorter focal length lens, to capture
    the desired field of view at higher resolution. It is *not* used to
    capture a wider field of view at the same resolution.

    Your argument would have us believe that if someone was happy with
    their D60 or 10D shooting (for example) portraits with a 50 mm lens
    giving the effect of an 80 mm lens on a 35 mm camera, and they received a
    1Ds as a gift, they would continue shooting with the 50 mm lens at the
    same distance, discarding 60% of the image area during cropping, and
    ending up with *fewer* pixels and lower image resolution from the much
    more expensive camera. But only an idiot would do this. Anyone else
    would switch to using an 80 mm lens on the 1Ds, to capture the same
    field of view with the new camera, and make use of the higher resolution
    images that result.

    Because this is the way photographers work, resolution tests are
    designed to have the test target fill the frame no matter what sensor
    size and lens focal length are used. Both of the "resolution tests" you
    refer to violate this basic principle of testing. Essentially, they
    take your assumption that the point of a larger sensor is greater FOV
    and not more resolution, but then test resolution and forget to mention
    the extra FOV.

    That makes no sense at all.

    It doesn't resolve *any* "discrete full color entities" at all. It
    senses one colour at each pixel location, then processes them into an
    RGB image that beats the SD9/SD10 in resolution and colour accuracy
    in all normal photographic contexts. That's what matters, not whether
    it has "discrete full color entities".

    Bayer sensors do fine in colour tests as well, as long as the pattern
    has both luminance and colour changes. Bayer does poorly *only* when
    the pattern has no luminance change, which requires specially-designed
    test targets. If all you ever shoot is special test targets designed to
    show Bayer at its worst, then the Sigma cameras are for you. For the
    rest of us, the Bayer cameras perform better.

    Saying this over and over again does not make it any less false.

    Dave
     
    Dave Martindale, Jan 1, 2004
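
A minimal sketch of the "empty magnification" point made above, using a toy one-dimensional sensor (the array sizes are arbitrary): spatially upscaling the sampled data produces more output pixels but no detail beyond what the original sampling locations captured.

    import numpy as np

    # A fine pattern sampled at 8 sensing locations (think: photosite positions).
    samples = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0])

    # "Spatial interpolation" to twice as many output pixels, analogous to SPP
    # producing a 14 MP file from 3.4 million sensing locations.
    upscaled = np.interp(np.linspace(0.0, 7.0, 16), np.arange(8), samples)

    # The upscaled signal has more pixels, but every new value is just a blend of
    # its two neighbours: no detail finer than the original samples can appear.
    print(len(samples), "samples ->", len(upscaled), "output pixels (empty magnification)")
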
  11. Nice try George, but this is yet another dodge. The discussion was
    about spatial resolution, and my argument was that it is irrelevant
    how many sample sites are stacked over each other to form 1 pixel,
    they add nothing to spatial resolution in a 2D medium.

    Would you care to address this point, or not? Enough pathetic
    evasions; if you can't dispute the point, then let it go, and you'll feel
    much better about yourself.
    Neither Foveon nor Bayer sensors sample full colour at any point.
    Both systems use sample sites which detect a part of the colour
    spectrum. Both systems attempt to reconstruct full colour, using RGB
    colour spaces, from these samples. Neither system has any "full color
    sampling point".
    I'm not the one missing important parts, Georgie. Guess that college
    education you made fun of is paying off; at least I can hold a
    reasoned discussion without evasion and intellectual dishonesty.
     
    The Black Sheep, Jan 1, 2004
  12. At their absolute worst, when shooting specially-designed test targets
    which are intended to show Bayer sensors at their worst, Bayer colour
    resolution is 1/2 that of luminance resolution.

    But the human eye's colour resolution is 1/10 of its luminance
    resolution. If you have a print where the luminance detail looks
    fairly sharp, the luminance resolution limit of the print is somewhere
    within a factor of 2 or 3 of equalling the luminance resolution limit
    of the eye. When that's true, even if colour resolution in the print
    is half that of the luminance resolution, it still *exceeds* the colour
    resolution of the eye. You can't see any loss in colour resolution
    until the luminance resolution of the print is 5 times or more worse
    than the eye's resolution limit, and at that point the whole print will
    appear fuzzy.

    The eye sees colour changes poorly. That's why JPEG works so well while
    (normally) reducing colour resolution to half of luminance. That's why
    NTSC television can get away with colour resolution that is only 1/8 of
    luminance resolution. That's why Bayer sensors work so well. And
    that's why resolution tests measure luminance resolution - because that
    predicts image quality for human viewing.

    Having colour resolution equal to luminance resolution is important only
    when the pixels of the image will be subject to colour-dependent
    processing, such as in blue screen or green screen compositing.

    Dave
     
    Dave Martindale, Jan 1, 2004
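
A small sketch of the JPEG-style trick mentioned above: keep luminance at full resolution and halve the resolution of the colour (chroma) channels. The conversion constants are the standard BT.601 ones; the input is assumed to be a float RGB array in [0, 1]. On an ordinary photograph the output is very hard to distinguish from the input, which is the point about the eye's poor colour resolution.

    import numpy as np

    def subsample_chroma(img):
        """Halve chroma resolution, keep luminance resolution (JPEG-style 4:2:0 idea)."""
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b      # luminance, kept at full resolution
        cb = 0.564 * (b - y)                       # colour-difference channels
        cr = 0.713 * (r - y)

        def halve(c):
            # 2x2 average, then nearest-neighbour re-expand: half the chroma resolution.
            c = c[:c.shape[0] // 2 * 2, :c.shape[1] // 2 * 2]
            small = c.reshape(c.shape[0] // 2, 2, c.shape[1] // 2, 2).mean(axis=(1, 3))
            return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

        cb, cr = halve(cb), halve(cr)
        y = y[:cb.shape[0], :cb.shape[1]]
        r2 = y + cr / 0.713
        b2 = y + cb / 0.564
        g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
        return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)
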
  13. That's what I answered; you need to think more deeply.

    and my argument was that it is irrelevant
     
    George Preddy, Jan 1, 2004
  14. Same for Bayer spatial interpolation, which is 4:1 upscaling.
    And 4:1 spatial upscaling that is less accurate than Sigma's 4:1 14MP
    upscaling.
    Same as Bayer, again. So Bayer has no spatial resolution according to you.
    It most certainly does have 10.3M discrete sampling locations.
    Then the same number of "extra" sensors on the SD9 can be used with a longer
    focal length to build FOV; you can't have it both ways. Both are ~11MP
    cameras, though the SD9 has a 25% advantage in full color sensors due to much
    higher efficiency.
    No they're not. They test the smallest object a camera can see in the proper
    color, that is, without interpolation. Read any definition.
    Yes it does.
    Blurring 1.5MP of data around doesn't increase optical resolution. That is
    why interpolation never, ever, adds any optical resolution, only the number
    of unique RGB sets has any impact at all. 6MP-interpolated cameras are
    1.5MP. Just like the 14MP SD9 is 3.43MP.
    If your target is B&W (grayscale doesn't even help), then Bayer only loses by
    about 2:1; in color Bayer loses by 3-4:1.
    How many full color sensors does the 1Ds have?

    Answer: 2.7MP.

    It is a very low MP color camera.
     
    George Preddy, Jan 1, 2004
  15. Bayer does poorly with any color, including grayscale. B&W is the only
    exception when Bayer does just ok, though it still loses to Foveon 2:1.
    If you were right this chart would be invisible...

    http://www.outbackphoto.com/artofraw/raw_05/essay.html

    It's not.
    Or, if you want to resolve a color image.
     
    George Preddy, Jan 1, 2004
  16. Nice try George. Blame-shifting is a pretty lame way to side-track a
    discussion.


    And of course you *did* use another pathetic evasion. Pretty lame
    that one sentence was the only reply you could make to an entire post!
    You need to learn to trim your quoted material too, and not be lazy.
     
    The Black Sheep, Jan 2, 2004
  17. Just because you don't understand it doesn't mean I didn't answer. There
    are no "layers" optically; each layer is the top layer, that is, the first
    layer a certain color photon cracks its little head upon. To say that each
    RGB sensor isn't discrete is silly; this isn't Bayer, where neighboring data is
    used over and over again in 9 different pixels to interpolatively upscale
    the image, ultimately forming 4X as many output pixels as the discrete
    optical data supports.

    10.3M discrete RGB data points = 3.43M full color pixels. Never more.
    Ever. Never ever.

    3.4MP full color non-interpolated pixels, when interpolated to the Bayer
    standard of 4:1 upscaling (1.5M full color sensors spread over 6M output
    pixels due to borrowing from neighbors) is 14MP. That is, 25% real optical
    color info per output pixel as interpolated.

    That is exactly why Foveon provides 14MP interpolated output as an option,
    to match common Bayer standards.
     
    George Preddy, Jan 2, 2004
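
For reference, the arithmetic behind the numbers in the post above; this just reproduces the counting convention used there (one "full color pixel" per stacked RGB triple, one per RGGB quad), a convention the replies that follow dispute.

    foveon_samples = 10.3e6
    foveon_pixels = foveon_samples / 3   # 3.43M: one output pixel per stacked RGB triple
    foveon_interp = foveon_pixels * 4    # ~13.7M: the "14MP" interpolated SPP output

    bayer_sites = 6.0e6
    bayer_quads = bayer_sites / 4        # 1.5M: RGGB quads, the post's "full color sensors"

    print(f"{foveon_pixels / 1e6:.2f}M full color pixels, {foveon_interp / 1e6:.1f}M interpolated")
    print(f"{bayer_quads / 1e6:.1f}M quads over {bayer_sites / 1e6:.0f}M output pixels "
          f"= {bayer_quads / bayer_sites:.0%} per output pixel")
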
  18. No it is not. For N distinct, spatially-separated sample locations,
    you get N pixels out. That's not upscaling, it's simply working at the
    existing resolution.

    There's no spatial upscaling at all in the Canon Bayer-sensor cameras.
    The number of output pixels is the same as the number of photosites on
    the sensor. And resolution tests demonstrate that the Bayer sensors
    provide resolution comparable to B&W sensors of 6 and 11 MP.
    Meanwhile, the 4:1 upscaling done in the Sigma SPP software provides no
    additional resolution at all, since the sensor only has 3.4 million
    photosites.

    I didn't say that. Bayer clearly achieves good resolution for its pixel
    count. What I said is that your method of counting "full colour
    samples" is nonsense because it implies Bayer resolution should be zero,
    and it obviously is not. Therefore your explanation is wrong. There's
    nothing wrong with the sensor.

    It does not. In the 2D image plane, there are only 3.4 million
    measuring locations. The 3 colour sensors are at different *depths* in
    the silicon, but the lens does not deliver different spatial detail to
    different depths in the silicon. The third geometric dimension,
    perpendicular to the image plane, does not count as being a discrete
    location in the 2D image.

    More nonsense. First, the SD9/10 has the smaller field of view to
    start, and a longer focal length lens will further reduce that. You'd
    have to use a shorter FL lens on the SD9/10 to get more field of view.
    And if you did that, the subject-space resolution would decrease
    proportionally.

    Second, the "extra" colour sensors of the SD9/10 are stacked under each
    other, and provide no additional spatial information at all.

    Basically, there are two ways to interpret the results shown by the
    second pair of photos. (1) If you use the same focal length lens
    on both SD10 and 1Ds, the image quality is roughly the same in the area
    covered by both sensors, but the 1Ds captures 3 times the image area of
    the SD10 (1.73 times the horizontal and vertical field of view).
    Alternately (2) if you put a 1.73X longer focal length lens on the 1Ds
    than on the SD10 to get the same FOV from both cameras, the 1Ds will
    have about 1.7 times higher resolution.

    Now, the second interpretation is how resolution tests are supposed to
    be shot: match FOV, then compare resolution. But you can have the first
    interpretation if you want. Either one makes the SD10 a considerably
    inferior camera compared to the 1Ds; it's up to you whether you want to
    think of it as the same resolution with only 1/3 the image FOV, or
    worse resolution with the same FOV.

    I've been reading optics and photography texts for 30 years, and I've
    never seen the definition you propose. Can you quote one single
    reference that matches your definition of resolution? Particularly
    with respect to the fact that your definition doesn't take into account
    the lens focal length.

    The "6 MP interpolated" cameras you are talking about have 6 million
    separate sample locations, with individual (X,Y) locations across the
    camera focal plane. The 6 MP output file merely retains that spatial
    resolution. Colour interpolation is used, but not spatial
    interpolation.

    The 3.4 MP SD9 has only 3.4 million individual (X,Y) locations across
    the camera focal plane where light is measured. The fact that there are
    three colour sensors at the same (X,Y) location but different (Z) depth
    contributes nothing to spatial resolution, since the lens delivers a 2D
    image, not a 3D one. The camera can deliver a 3.4 MP image without
    spatial interpolation, but a 14 MP output file requires (useless) 2X
    spatial interpolation.

    Real resolution tests, performed properly according to instructions,
    show that the 6 MP Bayer cameras have more real resolution in lines per
    picture height than the 3.4 MP Sigma cameras, as you would expect by
    comparing pixel counts. If the Bayer camera is really only 1.5 MP
    while the Sigma is really 3.5 MP, the Bayer would not have better
    resolution. If the Sigma was really 10 or 14 MP (you can't seem to make
    up your mind) while the Bayer was only 6 MP, the Bayer would not have
    better resolution. But it does, so your theories about how to count
    pixels in the two camera designs must be wrong.
    Again nonsense. With a B&W target, a Bayer sensor and a Foveon sensor
    have about the same spatial resolution with the same number of pixels.
    If there was a 6 MP Foveon sensor available, it would compete well with
    a 6 MP Bayer sensor in spatial resolution. A true 11 MP Foveon sensor
    would compare well against a 11 MP Bayer sensor. However, the only
    Foveon sensor available in SLR cameras is 3.4 MP, and it is left in the
    dust by the 6 and 11 MP sensors.
    No, zero. It doesn't have any "full color sensors". But it does have
    11 million one-colour sensors located at 11 million separate (X,Y)
    locations in the image plane, which is far more than the 3.4 million sensing
    locations of the SD9/10. Thus, the far superior performance of the 1Ds
    given the same field of view is not surprising.

    Bayer sensors do not operate in the same way as 3-colour sensors, and
    counting "full color sensors" by dividing by four simply doesn't make
    any sense when understanding or predicting camera performance.

    Dave
     
    Dave Martindale, Jan 2, 2004
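
A rough sketch of the same-lens versus same-field-of-view comparison in the post above. The sensor dimensions and pixel grids are the approximate published figures for the two cameras, included only to illustrate the ratios; treat them as assumptions rather than exact specifications.

    # Approximate figures; only the ratios matter here.
    sd9 = dict(width_mm=20.7, height_mm=13.8, px_w=2268, px_h=1512)   # Sigma SD9/SD10
    eos = dict(width_mm=36.0, height_mm=24.0, px_w=4064, px_h=2704)   # Canon EOS-1Ds

    # Same lens on both bodies: the larger sensor simply sees more of the scene.
    fov_ratio = eos["width_mm"] / sd9["width_mm"]   # ~1.74x linear, ~3x by area

    # Same field of view on both bodies: the 1Ds needs a ~1.74x longer lens, and its
    # extra pixels then go into resolution (lines per picture height scales with px_h).
    resolution_ratio = eos["px_h"] / sd9["px_h"]    # ~1.8x

    print(f"Field of view with the same lens: {fov_ratio:.2f}x wider (linear)")
    print(f"Resolution at the same field of view: {resolution_ratio:.2f}x higher")
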
  19. Not even you have posted anything that would support this statement.
    At what size? What my statement says is that you can see a given level
    of detail in a saturated colour chart at about 1/10 the viewing distance
    you can see the same level of detail in a black/white chart. I stand by
    that, and I can quote a vision textbook to back up the numbers if you
    like.

    However, it says nothing about absolute visibility. Given a set of
    red/blue and black/white test targets, there is a sufficiently close
    distance where you can see full detail in both targets, and a
    sufficiently far distance where you can see no detail in either target.
    The images in the outbackphoto test show a small target, which occupied a
    small fraction of the field of view, enlarged on screen. It's like
    looking at digital photos at 100% - it's not representative of how the
    final image will be viewed. If your eye had been in the same location
    as the camera lens when those tests were shot, you would not have been
    able to see the detail in those charts that you can see in the
    enlargements on the web page.
    As explained, "resolving" a colour image with quality equal to the real
    world requires about 10 times more luminance resolution than colour
    resolution. There are plenty of perception books that explain this.
    Or try John Sheehy's example of blurring A/B channels in LAB mode in
    Photoshop.

    There is no one so blind as one who refuses to see.

    Dave
     
    Dave Martindale, Jan 2, 2004
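
A sketch of the Lab-blur demonstration mentioned above (John Sheehy's example): blur only the a/b colour channels of an image in Lab space and the result is hard to tell from the original, while the same blur applied to L is immediately obvious. Assumes scikit-image and SciPy are available and that the input is a float RGB image in [0, 1].

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import color

    def blur_lab_channels(img, sigma=4.0, blur_luminance=False):
        """Blur either the colour (a/b) channels or the luminance (L) channel in Lab space."""
        lab = color.rgb2lab(img)
        if blur_luminance:
            lab[..., 0] = gaussian_filter(lab[..., 0], sigma)   # blur L: visibly soft
        else:
            lab[..., 1] = gaussian_filter(lab[..., 1], sigma)   # blur a and b: barely visible
            lab[..., 2] = gaussian_filter(lab[..., 2], sigma)
        return np.clip(color.lab2rgb(lab), 0.0, 1.0)
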
  20. JPS (Guest)

    In message <bt1m3d$ifb$>,
    What a joke. Nobody who is involved in these discussions misses more of
    the concepts than you do. You have built up your own little world of
    raster graphics, to meet your own needs, with overly-simplistic logic
    and arithmetic. You know absolutely *ZERO* about raster graphics.
    --
     
    JPS, Jan 2, 2004
