Resolution and Mpixels and marketing creativity

Discussion in 'Digital Photography' started by Mark Herring, Oct 28, 2003.

  1. Mark Herring

    Mark Herring Guest

    The semantic soup has gotten a bit thick here.......

    There are precious few binding laws, standards, etc, but there are
    some pretty well-established norms (aka "de-facto" standards)

    1. Resolution: Unless qualified, this means SPATIAL
    resolution---the ability to resolve spatial detail. In astronomy, it
    often means the ability to distinguish two stars. In photography, it
    has a variety of definitions---e.g., "lines", line pairs/mm, etc.---all
    some sort of spatial frequency metric.

    2. Resolution in samples: In a sampled data system, the number
    of samples sets an upper limit (the Nyquist limit) on the resolution.
    Implicit in this definition is that the samples are taken at different
    spatial locations in the data---this is in fact the only way they can
    contribute to the spatial resolution.
    The precise interpretation of the Nyquist limit is that it is the
    MAXIMUM resolution the system can hope to achieve---aka the "limiting
    resolution".
    de facto standard: resolution stated in Megapixels refers to
    the number of independent spatial samples acquired. "Independent"
    means the samples are all in different places in the image.
    This de facto standard is followed by the vast majority of
    camera vendors.
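
    (A quick numeric sketch of this sampling arithmetic, in Python; the
    2272 x 1704 pixel count is an assumed example for a nominal 4 MP sensor,
    not a figure taken from this thread.)

        # Sketch: the Nyquist (limiting) resolution implied by a pixel count.
        # The 2272 x 1704 grid is an assumed example, not a quoted spec.
        width_px, height_px = 2272, 1704

        megapixels = width_px * height_px / 1e6   # independent spatial samples
        nyquist_lp_ph = height_px / 2             # line pairs per picture height
        nyquist_lines_ph = height_px              # "lines" (one line per sample)

        print(f"{megapixels:.1f} Mpixels")
        print(f"limiting resolution: {nyquist_lp_ph:.0f} lp/PH "
              f"({nyquist_lines_ph} lines per picture height), before lens/AA losses")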

    3. Interpolation: Any process in which data gaps are filled in
    by using the adjacent data. While interpolation can smooth out the
    visual effects associated with sampling, there can NEVER be an
    increase in information.
    Applying this rule to the earlier de-facto standards: the
    limiting resolution set by the number of spatially-independent samples
    can NEVER be increased by any form of post-processing--including
    interpolation.
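
    (A small numeric illustration of point 3, in Python; the sine-wave "scene"
    and the sample counts are made-up numbers. Upsampling by interpolation
    fills in values between existing samples, but detail above the original
    Nyquist limit stays unrecoverable.)

        import numpy as np

        # Sketch: interpolation adds samples, not information.
        fine = np.linspace(0.0, 1.0, 1000, endpoint=False)   # dense grid, "the scene"
        scene = np.sin(2*np.pi*3*fine) + 0.3*np.sin(2*np.pi*40*fine)

        coarse = np.linspace(0.0, 1.0, 16, endpoint=False)   # 16 samples -> Nyquist = 8 cycles
        samples = np.sin(2*np.pi*3*coarse) + 0.3*np.sin(2*np.pi*40*coarse)

        upsampled = np.interp(fine, coarse, samples)         # back to 1000 points

        # The 3-cycle component is reproduced; the 40-cycle component (above the
        # original Nyquist limit) is gone/aliased, however densely we interpolate.
        print("max error vs. true scene:", np.abs(upsampled - scene).max())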

    4. Color sampling, registration, and related issues:
    Again, by default, resolution means the ability to resolve
    detail--regardless of color. Our eye is far more sensitive to
    uncolored detail than it is to color. This is why compatible
    color TV works, and it is why the prevailing "Bayer" filter pattern in
    digicams works. We don't NEED the same spatial resolution in every
    color that we need in the aggregate (also called the luminance signal).
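
    (This is the same idea behind chroma subsampling in video and JPEG. A
    rough Python sketch, with an invented 8x8 tile and BT.709 luminance
    weights: keep luminance at full resolution and store the color-difference
    channels at half resolution.)

        import numpy as np

        # Sketch of point 4: full resolution for luminance, less for color.
        rgb = np.random.rand(8, 8, 3)                        # made-up image tile

        # Luminance (ITU-R BT.709 weights), kept at full resolution.
        y = 0.2126*rgb[..., 0] + 0.7152*rgb[..., 1] + 0.0722*rgb[..., 2]

        # Color-difference channels, stored at half resolution (2x2 block means).
        cr = (rgb[..., 0] - y).reshape(4, 2, 4, 2).mean(axis=(1, 3))
        cb = (rgb[..., 2] - y).reshape(4, 2, 4, 2).mean(axis=(1, 3))

        # Reconstruct: replicate the coarse chroma around the full-resolution Y.
        r = y + cr.repeat(2, axis=0).repeat(2, axis=1)
        b = y + cb.repeat(2, axis=0).repeat(2, axis=1)
        g = (y - 0.2126*r - 0.0722*b) / 0.7152
        rebuilt = np.stack([r, g, b], axis=-1)

        y_rebuilt = 0.2126*r + 0.7152*g + 0.0722*b
        print("luminance error:", np.abs(y_rebuilt - y).max())   # ~0: detail kept
        print("color error    :", np.abs(rebuilt - rgb).max())   # >0: color coarsened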

    5. The soup: Currently, we have two marvelous bits of creative
    marketing:

    Fuji F700: 2 photodiodes per pixel (put there to increase dynamic
    range) somehow morphs into the sensor having twice as many PIXELS.
    From the "no free lunch" section of the good book, note that sensing
    sites used to enhance dynamic range CANNOT be simultaneously used to
    increase spatial resolution.
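
    (For what it's worth, a rough Python sketch of how two photodiodes at the
    same location can extend dynamic range without adding spatial samples; the
    1/16 sensitivity ratio and the clip level are invented for illustration,
    not Fuji's actual numbers.)

        import numpy as np

        # One high-sensitivity and one low-sensitivity photodiode at the SAME
        # location extend dynamic range, but still yield ONE spatial sample.
        def combined_reading(scene_luminance):
            s_diode = np.clip(scene_luminance, 0.0, 1.0)         # sensitive, clips early
            r_diode = np.clip(scene_luminance / 16.0, 0.0, 1.0)  # 1/16 sensitivity
            # Use the sensitive diode until it saturates, then the insensitive one.
            return np.where(s_diode < 1.0, s_diode, r_diode * 16.0)

        highlights = np.array([0.5, 2.0, 10.0])     # beyond the sensitive diode's range
        print(combined_reading(highlights))          # [ 0.5  2.  10. ] recovered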

    Foveon/Sigma: Clever new sensor in which each pixel includes samples
    in all three colors. Thus the COLOR resolution is increased and the
    dreaded color fringing of the Bayer is gone. Numerous problems with
    this logic have been aired---not the least of which is: "What color
    fringing?" Above 2-3 Mpixels, it's just not an issue.
    Yes the Foveon may have higher color resolution, but you do not NEED
    higher color resolution.
    More serious is the implied claim now propagating: This 3.4 Mpixel
    sensor is really 10Mpixels. Sorry--NO--not according to the standards
    in common use.
    And there are other points to be made about what is lost in trying to
    achieve that which is not needed, but which the marketing department
    hopes to sell anyway.
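
    (The arithmetic behind the "3.4 vs. 10" numbers, for reference; the
    2268 x 1512 grid is the commonly published pixel count for the Foveon X3
    sensor in the Sigma SD9---treat it as an assumption and check the spec
    sheet.)

        # The arithmetic behind "3.4 Mpixels vs. 10 Mpixels".
        cols, rows, layers = 2268, 1512, 3           # assumed SD9 grid, 3 stacked layers

        spatial_samples = cols * rows                # independent spatial locations
        photodiodes = spatial_samples * layers       # three color samples at each one

        print(f"{spatial_samples/1e6:.2f} Mpixels by the de-facto (spatial) standard")
        print(f"{photodiodes/1e6:.2f} M photodiodes, the number behind the '10 MP' claim")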


    BOTTOM LINE:
    Using the de-facto definitions, plot (for a class of camera---eg the
    non-interchangeable lens compact digicam) the true Mpixels against the
    price. For anything not falling on or near this plot----as they said
    in the promo for some movie----"Be afraid---be very afraid."

    -Mark
    *****************************************
    digital photos, more and better computers,
    and never enough time to do the projects.
    Private e-mail: Just say no to No
    Mark Herring, Oct 28, 2003
    #1

  2. Mark Herring <> wrote in
    news::

    > The semantic soup has gotten a bit thick here.......
    >
    > There are precious few binding laws, standards, etc, but there are
    > some pretty well-established norms (aka "de-facto" standards)
    >
    > 1. Resolution: Unless qualified, this means SPATIAL
    > resolution---the ability to resolve spatial detail. In astronomy, it
    > often means the ability to distinguish two stars. In photography, it
    > has a variety of definitions--eg "lines" , line pairs/mm, etc.---all
    > some sort of spatial frequency metric.
    >
    > 2. Resolution in samples: In a sampled data system, the number
    > of samples sets an upper limit (the Nyquist limit) on the resolution.
    > Implicit in this definition is that the samples are taken at different
    > spatial locations in the data---this is in fact the only way they can
    > contribute to the spatial resolution.
    > The precise interpretation of the Nyquist limit is that it is the
    > MAXIMUM resolution the system can hope to achieve---aka the "limiting
    > resolution".
    > de facto standard: resolution stated in Megapixels refers to
    > the number of independent spatial samples acquired. "Independent"
    > means the samples are all in different places in the image.
    > This de facto standard is followed by the vast majority of
    > camera vendors.


    It is followed by Bayer-using camera vendors, and it happens to be the
    "norm" that favors them most. Fuji and Sigma are unfairly disfavored by
    that "norm", since they truly have sensors that are theoretically
    superior to same-megapixel Bayer sensors (same-megapixel according to
    the "norm" that seems designed to favor Bayer sensors and that, no great
    surprise, is subscribed to by Bayer-using camera vendors, who are, as
    you say, the vast majority of camera vendors).

    > 3. Interpolation: Any process in which data gaps are filled in
    > by using the adjacent data. While interpolation can smooth out the
    > visual effects associated with sampling, there can NEVER be an
    > increase in information.
    > Applying this rule to the earlier de-facto standards: The
    > limiting resolution set by the number of spatially-independent samples
    > can NEVER be increased by any form of post-processing--including
    > interpolation.
    >
    > 4. Color sampling, registration, and related issues:
    > Again, by default, resolution means ability to resolve
    > detail--regardless of color. Our eye is far more sensitive to
    > uncolored detail, than it is to the color. This is why compatible
    > color TV works, and it is why the prevailing "Bayer" filter pattern in
    > digicams works. We don't NEED the same spatial resolution in every
    > color that we need in the aggregate (also called the luminance signal)


    No luminance signal is actually captured by the Bayer sensor. Each
    sensel captures either red, or green, or blue. None of these are
    luminance. The luminance at a particular pixel can be calculated only
    through interpolation, as defined in (3). At any given image pixel, the
    corresponding sensel only captured one color, and that is not sufficient
    to calculate the luminance. Therefore, to calculate luminance, adjacent
    data must be used. Which is interpolation by definition (3).

    > 5. The soup: Currently, we have two marvelous bits of creative
    > marketing:
    >
    > Fuji F700 2 photodiodes per pixel (put there to increase dynamic
    > range) somehow morphs into the sensor having twice as many PIXELS.
    > From the "no free lunch" section of the good book, note that sensing
    > sites used to enhance dynamic range CANNOT be simultaneously used to
    > increase spatial resolution.


    Try applying that to Bayer. Start with the green sensels. By themselves,
    these can only capture a low-res, monochrome image. Now add in the blue
    and red sensels. These red and blue sensels increase the colors that can
    be captured by the sensor (since with only the green sensels the sensor
    could only capture a monochromatic image), but they, allegedly,
    simultaneously increase the spatial resolution. If we believe your
    argument (5) and apply it to Bayer, then that's a sham, since sensing
    sites (the red and blue sensels) used to enhance the color-capturing
    ability cannot be simultaneously used to increase spatial resolution.

    You might argue that there is some important difference between
    enhancing dynamic range and enhancing color-capturing ability that
    somehow prevents me from extending your argument from the one to the
    other, but I can say ahead of time I'm not likely to buy it.

    > Foveon/Sigma: Clever new sensor in which each pixel includes samples
    > in all three colors. Thus the COLOR resolution is increased and the
    > dreaded color fringing of the Bayer is gone. Numerous problems with
    > this logic have been aired---not the least of which is: "What color
    > fringing?" Above 2-3 Mpixels, it's just not an issue.
    > Yes the Foveon may have higher color resolution, but you do not NEED
    > higher color resolution.


    But we have noticeable softness, i.e., lack of detail, i.e., lowered
    effective resolution, in Bayer images caused by the very thing that
    protects against color aliasing - i.e., anti-aliasing filters. Which are
    necessarily stronger in a Bayer than they would be in a Foveon because of
    the wide spacing of the green sensels, of the blue sensels, and of the red
    sensels. In short, to fully anti-alias the individual color channels of
    the raw image, the anti-alias filters must be such that they would over-
    anti-alias (i.e., blur) the luminance channel of the raw image even if
    there were such a thing, even if there were an actual set of luminance
    sensels having the full resolution of the whole sensor.

    > More serious is the implied claim now propagating: This 3.4 Mpixel
    > sensor is really 10Mpixels. Sorry--NO--not according to the standards
    > in common use.
    > And there are other points to be made about what is lost in trying to
    > achieve that which is not needed, but which the marketing department
    > hopes to sell anyway.
    >
    >
    > BOTTOM LINE:
    > Using the de-facto definitions, plot (for a class of camera---eg the
    > non-interchangeable lens compact digicam) the true Mpixels against the
    > price. For anything not falling on or near this plot----as they said
    > in the promo for some movie----"Be afraid---be very afraid."
    >
    > -Mark
    > *****************************************
    > digital photos, more and better computers,
    > and never enough time to do the projects.
    > Private e-mail: Just say no to No
    >
    >
    Constantinople, Oct 28, 2003
    #2

  3. Mark Herring

    Bill Guest

    Mark,
    I follow you up to a point. OK, so any "de facto standard" CCD uses the
    RGRGRGRG
    GBGBGBGB
    layout, and gathers data from groups of 4 pixels.
    RG
    GB
    Assuming a 4MP CCD (in what would be commonly advertised as a "4MP Camera"),
    it would have 4 million+ actual photodiodes and also then 4 million+ pixels,
    but with 1 million each recording red and blue, and the remaining 2 million
    recording green. Then the data from each group of 4 is used to create a
    single RGB value. This means there are really only 1 million different color
    values created. The camera's firmware then must use some algorithm and color
    interpolation to create data for each pixel in the image output file,
    assigning the best color value for each pixel in the image file being
    created. The result is a 4MP image. Is this more or less what happens?

    I agree that the new Fuji 4th generation Super CCD is really only a 3MP
    chip, although having 2 photodiodes at each pixel location might help with
    dynamic range. I have a Fuji S602, and I like it a lot. I have seen a lot of
    people make reference to Fuji's alleged misrepresentation of facts, but I
    can honestly say, at least as far as the S602 goes, I have never seen any of
    their ads call it anything other than a 3MP camera. They do say in the specs
    that it has the option to create a larger (6MP) file, which is true. Do you
    think it was Fuji or the retailers who tried to call it a 6MP camera?
    Again... this is ONLY in reference to the S602. I just wondered where
    and/or when they did this.

    Bill


    "Mark Herring" <> wrote in message
    news:...
    > The semantic soup has gotten a bit thick here.......
    >
    > There are precious few binding laws, standards, etc, but there are
    > some pretty well-established norms (aka "de-facto" standards)
    >
    > 1. Resolution: Unless qualified, this means SPATIAL
    > resolution---the ability to resolve spatial detail. In astronomy, it
    > often means the ability to distinguish two stars. In photography, it
    > has a variety of definitions--eg "lines" , line pairs/mm, etc.---all
    > some sort of spatial frequency metric.
    >
    > 2. Resolution in samples: In a sampled data system, the number
    > of samples sets an upper limit (the Nyquist limit) on the resolution.
    > Implicit in this definition is that the samples are taken at different
    > spatial locations in the data---this is in fact the only way they can
    > contribute to the spatial resolution.
    > The precise interpretation of the Nyquist limit is that is the
    > MAXIMUM resolution the system can hope to achieve---aka the "limiting
    > resolution".
    > de facto standard: resolution stated in Megapixels refers to
    > the number of independent spatial samples acquired. "Independent"
    > means the samples are all in different places in the image.
    > This de facto standard is followed by the vast majority of
    > camera vendors.
    >
    > 3. Interpolation: Any process in which data gaps are filled in
    > by using the adjacent data. While interpolation can smooth out the
    > visual effects associated with sampling, there can NEVER be an
    > increase in information.
    > Applying this rule to the earlier de-facto standards: The
    > limiting resolution set by the number of spatially-independent samples
    > can NEVER by increased by any form of post-processing--including
    > interpolation.
    >
    > 4. Color sampling, registration, and related issues:
    > Again, by default, resolution means ability to resolve
    > detail--regardless of color. Our eye is far more sensitive to
    > uncolored detail, than it is to the color. This is why compatible
    > color TV works, and it is why the prevailing "Bayer" filter pattern in
    > digicams works. We don't NEED the same spatial resolution in every
    > color that we need in the aggregate (also called the luminance signal)
    >
    > 5. The soup: Currently, we have two marvelous bits of creative
    > marketing:
    >
    > Fuji F700 2 photodiodes per pixel (put there to increase dynamic
    > range) somehow morphs into the sensor having twice as many PIXELS.
    > From the "no free lunch" section of the good book, note that sensing
    > sites used to enhance dynamic range CANNOT be simultaneously used to
    > increase spatial resolution.
    >
    > Foveon/Sigma: Clever new sensor in which each pixel includes samples
    > in all three colors. Thus the COLOR resolution is increased and the
    > dreaded color fringing of the Bayer is gone. Numerous problems with
    > this logic have been aired---not the least of which is: "What color
    > fringing?" Above 2-3 Mpixels, it's just not an issue.
    > Yes the Foveon may have higher color resolution, but you do not NEED
    > higher color resolution.
    > More serious is the implied claim now propagating: This 3.4 Mpixel
    > sensor is really 10Mpixels. Sorry--NO--not according to the standards
    > in common use.
    > And there are other points to be made about what is lost in trying to
    > achieve that which is not needed, but which the marketing department
    > hopes to sell anyway.
    >
    >
    > BOTTOM LINE:
    > Using the de-facto definitions, plot (for a class of camera---eg the
    > non-interchangeable lens compact digicam) the true Mpixels against the
    > price. For anything not falling on or near this plot----as they said
    > in the promo for some movie----"Be afraid---be very afraid."
    >
    > -Mark
    > *****************************************
    > digital photos, more and better computers,
    > and never enough time to do the projects.
    > Private e-mail: Just say no to No
    >
    Bill, Oct 28, 2003
    #3
  4. Mark Herring <> wrote:

    >Fuji F700 2 photodiodes per pixel (put there to increase dynamic
    >range) somehow morphs into the sensor having twice as many PIXELS.
    >From the "no free lunch" section of the good book, note that sensing
    >sites used to enhance dynamic range CANNOT be simultaneously used to
    >increase spatial resolution.


    Mark,

    they could do a similar thing that's being done for
    color---intersperse two different sensor types with an
    overlapping sensitivity in the central luminance range.

    That would effectively mean that the camera has a lower
    resolution in very dark and very bright areas, but a much bigger
    contrast range altogether.

    I don't know whether they actually do that.

    Hans-Georg

    --
    No mail, please.
    Hans-Georg Michna, Oct 28, 2003
    #4
  5. "Bill" <> wrote:

    >Assuming a 4MP CCD (in what would be commonly advertised as a "4MP Camera"),
    >would have 4 million+ actual photodiodes and also then 4 million+ pixels.
    >But with 1 million each recording red and blue, and the remaining 2 million
    >recording green. Then the data from each group of 4 is used to create a
    >single RGB value.


    Bill,

    I don't know what the manufacturers actually do, but obviously
    that would be a poor use of the available information. Even
    though there are adjacent red, green, and blue sensors, there
    can still be luminance information that can be derived from
    them. I think the mathematics shouldn't be as trivial as
    defining a certain group of color pixels as one luminance pixel.
    For luminance at a certain point it is wiser to use all nearby
    pixels, without preassigning them to luminance superpixels.

    Hans-Georg

    --
    No mail, please.
    Hans-Georg Michna, Oct 28, 2003
    #5
  6. Mark Herring

    Don Stauffer Guest

    We can go a bit further. Orthicon and vidicon TV was a sampled data
    system in the vertical direction. A team led by Ray Kell did a lot of
    research, based on statistics and measured performance with human
    viewers, that resulted in what is now referred to as the 'Kell factor'.

    He showed that a system with N scan lines can actually resolve about
    0.7N horizontal lines.

    I did some modeling using Monte Carlo methods during the seventies that
    examined the Kell factor for staring arrays. I also found
    roughly the same value. The exact shape of the photo response of the pixels
    did have some influence, but it was small.

    This assumed perfect optics, etc, only the effect of the scanning or
    sampling. So, as Mark says, the 0.7N is a maximum resolution if no
    sharpening filters are used.
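
    (A quick worked example of the 0.7N rule, in Python; the 1704-row count is
    just an assumed sensor height, not a figure from this thread.)

        # Kell-factor estimate of usable resolution from the sample count.
        kell_factor = 0.7          # from the research described above
        rows = 1704                # assumed vertical sample count

        nyquist_lines = rows                    # theoretical limit: one line per sample
        usable_lines = kell_factor * rows       # what viewers actually resolve

        print(f"Nyquist limit: {nyquist_lines} lines; Kell estimate: {usable_lines:.0f} lines")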

    Mark Herring wrote:

    >
    > 2. Resolution in samples: In a sampled data system, the number
    > of samples sets an upper limit (the Nyquist limit) on the resolution.
    > Implicit in this definition is that the samples are taken at different
    > spatial locations in the data---this is in fact the only way they can
    > contribute to the spatial resolution.
    > The precise interpretation of the Nyquist limit is that it is the
    > MAXIMUM resolution the system can hope to achieve---aka the "limiting
    > resolution".



    --
    Don Stauffer in Minnesota

    webpage- http://www.usfamily.net/web/stauffer
    Don Stauffer, Oct 28, 2003
    #6
  7. "Constantinople" <> wrote in message
    news:Xns94222A5DFD8EB234997@140.99.99.130...
    SNIP
    > It is followed by Bayer-using camera vendors, and it happens to be the
    > "norm" that favors them most.


    That is not correct. The resolution of a monochromatic or a Bayer CFA sensor
    design is measured in essentially the same way. The only difference is that
    the Bayer CFA design delivers three reconstructed color layers, and total
    luminance is closely approximated by a weighted average of the three. The
    ISO uses luminance as Y = 0.2125*R + 0.7154*G + 0.0721*B (Luminance
    weighting according to ITU-R BT.709).
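
    (In code, that weighting is just the following; a small Python sketch using
    the BT.709 coefficients quoted above.)

        import numpy as np

        # Luminance as the ITU-R BT.709 weighted sum quoted above.
        BT709 = np.array([0.2125, 0.7154, 0.0721])    # R, G, B weights

        def luminance(rgb):
            """rgb: array of shape (..., 3), linear R, G, B values."""
            return rgb @ BT709

        print(luminance(np.array([1.0, 1.0, 1.0])))   # 1.0: white
        print(luminance(np.array([0.0, 1.0, 0.0])))   # 0.7154: green dominates luminance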

    > Fuji and Sigma are unfairly disfavored by
    > that "norm", since they truly have sensors that are theoretically
    > superior to same-megapixel Bayer sensors (same-megapixel according to


    No, they are not disfavored, because the resolution of the pixel matrix they
    output can still be measured the same way (ISO 12233). The necessary
    interpolation of the Fuji captured pixels for output reduces the true
    resolution per pixel, but is compensated for by a smaller magnification for
    a given output size. Not interpolating to a larger pixel grid would throw
    away some of the sampling accuracy; interpolating allows it to be kept.
    The Foveon just has a small pixel matrix, so they disfavor themselves by
    creating more color resolution at the expense of luminance resolution, but
    that's a deliberate choice.

    > the "norm" that seems designed to favor Bayer sensors and that, no great
    > surprise, is subscribed to by Bayer-using camera vendors, who are, as
    > you say, the vast majority of camera vendors).


    As above, the norm (ISO 12233 - 2000 Photography - electronic still picture
    cameras - Resolution Measurements) also applies to monochrome sensors and has
    nothing to do with favoring anyone. Just unambiguous scientific norms that
    are universally applicable.

    SNIP
    > No luminance signal is actually captured by the Bayer sensor. Each
    > sensel captures either red, or green, or blue. None of these are
    > luminance.


    They are each parts of the total luminance.
    The luminance for a given exposure time is proportional to the integrated
    transmittance of the filter layer, and the sensor well doesn't record
    colored electrons, just a number of electrons. The number is roughly
    equivalent to the luminance impression of the human visual system for that
    part of a full spectrum.
    Interpolation of adjacent colors is only needed to approximate the missing
    spectral components of the light incident on the pixel, and they are less
    accurate *only* for high frequency detail.

    > The luminance at a particular pixel can be calculated only
    > through interpolation, as defined in (3).


    Only for calculation of the total pixel luminance. That is the weighted sum
    of three luminance components.

    > At any given image pixel, the
    > corresponding sensel only captured one color, and that is not sufficient
    > to calculate the luminance. Therefore, to calculate luminance, adjacent
    > data must be used. Which is interpolation by definition (3).


    Only for *total* pixel luminance, which is thus by definition very accurate
    for low frequency components, and less accurate for high frequency detail.

    SNIP
    > If we believe your
    > argument (5) and apply it to Bayer, then that's a sham, since sensing
    > sites (the red and blue sensels) used to enhance the color-capturing
    > ability cannot be simultaneously used to increase spatial resolution.


    Spatial sampling IS resolution. More independent samples per unit area IS
    higher resolution.

    SNIP
    > But we have noticeable softness, i.e., lack of detail,i.e. lowered
    > effective resolution, in Bayer images caused by the very thing that
    > protects against color aliasing - i.e., anti-aliasing filters.


    Not really, but we do have lowered modulation, only for the highest
    frequencies, which is not the same as overall blur (it is frequency-
    dependent modulation reduction). This modulation can be partially restored
    by e.g. small radius USM operations.
    The real issue IMHO is the different aliasing frequencies for Green and for
    Red/Blue which can cause false color aliasing artifacts. Green has a higher
    sampling frequency (every diagonal pixel is sampled, but only every other
    horizontal/vertical one). Green will thus alias less in some directions,
    but will be restricted more by the AA filtering.

    > Which are
    > necessarily stronger in a Bayer than would be in a Foveon because of the
    > wide spacing of the green sensels, of the blue sensels, and of the red
    > sensels.


    Only if they have the same number of pixels (spatial samples per area).
    Most Bayer sensors have more samples per equivalent field of view. The
    larger manufactured quantity of sensors allows a lower unit cost, so for an
    equivalent price, you get a higher spatial sample resolution Bayer unit. The
    AA filter needs to be strong enough to reduce the different color aliasing
    characteristics, and is geared to the coarsest sampling pitch, namely the
    horizontal/vertical one. This all results in a lower color resolution (but
    higher luminance resolution) for a 6MP Bayer sensor, and an equivalent color
    resolution (but vastly superior luminance resolution) for, say, a 13MP Bayer
    sensor.

    > In short, to fully anti-alias the individual color channels of
    > the raw image, the anti-alias filters must be such that they would over-
    > anti-alias (i.e., blur) the luminance channel of the raw image even if
    > there were such a thing, even if there were an actual set of luminance
    > sensels having the full resolution of the whole sensor.


    That's why FULL anti-aliasing is not aspired to (that would require a
    non-existent physical brick-wall filter). Attenuation of the frequencies
    that cause aliasing is enough to make aliasing artifacts almost disappear.
    Additional high frequency color smoothing in the raw conversion will hide
    the remainder of the color aliasing (aka False Color Filtration). Luminance
    modulation can then be boosted for the highest frequencies to restore
    some/most of the loss.

    Bart
    Bart van der Wolf, Oct 28, 2003
    #7
  8. "Bill" <> wrote in message
    news:uAqnb.105786$sp2.4991@lakeread04...
    SNIP
    > But with 1 million each recording red and blue, and the remaining 2 million
    > recording green. Then the data from each group of 4 is used to create a
    > single RGB value.


    No, 4 RGB values, because *each* image pixel uses a weighting of its 8 (or
    more) neighbors and itself to calculate its own RGB value.

    The concept of 4 pixels forming some group is (often intentionally)
    misleading.
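
    (A toy illustration of that point: the simplest possible bilinear demosaic,
    in Python, not any vendor's actual algorithm. Every sensor location keeps
    its own measured value and borrows only its missing colors from neighbors,
    so an N-photosite mosaic yields N distinct RGB pixels, not N/4.)

        import numpy as np
        from scipy.signal import convolve2d

        # Naive bilinear demosaic of an RGGB Bayer mosaic (illustration only).
        def demosaic_bilinear(mosaic):
            h, w = mosaic.shape
            yy, xx = np.mgrid[0:h, 0:w]
            masks = {"R": (yy % 2 == 0) & (xx % 2 == 0),   # RGGB layout assumed
                     "G": (yy % 2) != (xx % 2),
                     "B": (yy % 2 == 1) & (xx % 2 == 1)}
            kernel = np.array([[0.25, 0.5, 0.25],
                               [0.50, 1.0, 0.50],
                               [0.25, 0.5, 0.25]])
            out = np.zeros((h, w, 3))
            for ch, name in enumerate("RGB"):
                plane = np.where(masks[name], mosaic, 0.0)
                have = masks[name].astype(float)
                estimate = (convolve2d(plane, kernel, mode="same") /
                            convolve2d(have, kernel, mode="same"))
                # Keep each site's own measurement; interpolate only what's missing.
                out[..., ch] = np.where(masks[name], mosaic, estimate)
            return out

        mosaic = np.random.rand(8, 8)      # a toy 8x8 mosaic: 64 photosites
        rgb = demosaic_bilinear(mosaic)
        print(rgb.shape)                   # (8, 8, 3): 64 RGB pixels, not 16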

    Bart
    Bart van der Wolf, Oct 28, 2003
    #8
  9. Mark Herring

    Ray Fischer Guest

    Mark Herring <> wrote:
    >The semantic soup has gotten a bit thick here.......


    No kidding.

    but ...

    >Foveon/Sigma: Clever new sensor in which each pixel includes samples
    >in all three colors. Thus the COLOR resolution is increased and the
    >dreaded color fringing of the Bayer is gone. Numerous problems with
    >this logic have been aired---not the least of which is: "What color
    >fringing?" Above 2-3 Mpixels, it's just not an issue.


    Well, it's always an issue, but it may not be important. One of the
    sample images Canon has on their web site is a photo taken using the
    1Ds (11MP) of the inside of a church. With a little magnification
    you'll find color fringing in the light reflected off of one of the
    columns.

    Aliasing artifacts occur anytime you digitize analog data. How the
    artifacts appear varies with the digitizing process, but they never go
    away. While the Foveon sensor may be better at avoiding color
    artifacts, it's worse at avoiding Moire artifacts.

    --
    Ray Fischer
    Ray Fischer, Oct 29, 2003
    #9
  10. Mark Herring

    Guest

    In message <Xns94222A5DFD8EB234997@140.99.99.130>,
    Constantinople <> wrote:

    >No luminance signal is actually captured by the Bayer sensor. Each
    >sensel captures either red, or green, or blue. None of these are
    >luminance. The luminance at a particular pixel can be calculated only
    >through interpolation, as defined in (3). At any given image pixel, the
    >corresponding sensel only captured one color, and that is not sufficient
    >to calculate the luminance. Therefore, to calculate luminance, adjacent
    >data must be used. Which is interpolation by definition (3).


    This is all happening under the hood of the anti-aliasing filter, so any
    point of light is already scattered over more than one pixel, anyway.
    The resulting weights still tell you more about spatial location than you
    would know if there were fewer pixels that sensed full RGB, as witnessed
    by the fact that the 3.43 MP Foveon only out-performs the 6.3 MP Bayer
    cameras at resolution in high-saturation line tests, even though the
    latter only has about 40% more linear resolution. Perhaps some
    "in-between" test needs to be performed, like line tests where the colors
    used vary over the area of the chart, and are chosen to represent typical
    color edges encountered in real-world photography.
    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
    , Oct 31, 2003
    #10
  11. Mark Herring

    Guest

    In message <Xns94222A5DFD8EB234997@140.99.99.130>,
    Constantinople <> wrote:

    >Try applying that to Bayer. Start with the green sensels. By themselves,
    >these can only capture a low-res, monochrome image. Now add in the blue
    >and red sensels. These red and blue sensels increase the colors that can
    >be captured by the sensor (since with only the green sensels the sensor
    >could only capture a monochromatic image), but they, allegedly,
    >simultaneously increase the spatial resolution. If we believe your
    >argument (5) and apply it to Bayer, then that's a sham, since sensing
    >sites (the red and blue sensels) used to enhance the color-capturing
    >ability cannot be simultaneously used to increase spatial resolution.


    The better demosaicing algorithms calculate/interpolate luminance and
    color separately, and recombine them into an RGB output. The full data
    is used for each, separately. It is usually errant in an obvious way
    only in special cases, and the whole system seems cautious enough that
    the worst that happens is that certain color contrasts are
    under-sampled; you can't see this, anyway, in real-world sized displays.
    I have yet to see anything I would call an artifact from a 6mp bayer;
    just undersampling, such as in the highly saturated red and blue line
    test. I see more artifacts in the SD9 results, even though I can see
    "lines" at a higher resolution.
    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
    , Oct 31, 2003
    #11
  12. Mark Herring

    Guest

    In message <Xns94222A5DFD8EB234997@140.99.99.130>,
    Constantinople <> wrote:

    >But we have noticeable softness, i.e., lack of detail,i.e. lowered
    >effective resolution, in Bayer images caused by the very thing that
    >protects against color aliasing - i.e., anti-aliasing filters.


    No, they come soft out of the camera with mediocre and worse lenses.
    The pictures I take with my 300mm Canon prime are sharp right out of the
    camera on the 10D. The high frequencies are attenuated by the AA
    filter, but they are so strong you still see them. You also need to
    realize that the resolution of the SD9 and the 6MP Bayer cameras with
    1.5x-1.7x crop factors is right near the borderline of the resolution of
    consumer-grade lenses. A Canon 75-300mm f4-5.6 IS lens taking a picture
    at 300mm @ f5.6 is too blurry for a 6MP 22mm x 17mm sensor. The lens
    cannot resolve detail at the highest frequencies the sensor can record. It
    might at 180mm and f9.
    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
    , Oct 31, 2003
    #12
  13. Mark Herring

    Guest

    In message <uAqnb.105786$sp2.4991@lakeread04>,
    "Bill" <> wrote:

    >Mark,
    >I follow you up to a point. OKso any "de facto standard" CCD uses the
    > RGRGRGRG
    > GBGBGBGB
    >layout, and gathers data from groups of 4 pixels.
    > RG
    > GB


    No; stop right there. A good Bayer demosaicing algorithm does not think
    in terms of 4-pixel cells in any way, whatsoever. That would cause all
    the colors to shift horizontally and vertically. Each output pixel is
    determined with equal weight from all directions around the sensor pixel
    that corresponds to it. The idea of a pixel being composed of R, G, and
    B sensors only applies to bitmaps and monitors, not to sensors.
    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
    , Oct 31, 2003
    #13
  14. Mark Herring

    Guest

    In message <3f9ea0b5$0$58697$4all.nl>,
    "Bart van der Wolf" <> wrote:

    >The concept of 4 pixel forming some group is (often intentionally)
    >misleading.


    That's the major flaw in George Preddy's logic; he speaks as if the data
    from 2 green, 1 red, and 1 blue pixel merge into 1 RGB pixel, and
    that the number of RGB pixels is then scaled up 4x in the Bayer
    demosaicing.
    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
    , Oct 31, 2003
    #14
  15. Mark Herring

    Rafe B. Guest

    On Tue, 28 Oct 2003 13:31:07 +0100, Hans-Georg Michna
    <> wrote:

    >Mark Herring <> wrote:
    >
    >>Fuji F700 2 photodiodes per pixel (put there to increase dynamic
    >>range) somehow morphs into the sensor having twice as many PIXELS.
    >>From the "no free lunch" section of the good book, note that sensing
    >>sites used to enhance dynamic range CANNOT be simultaneously used to
    >>increase spatial resolution.

    >
    >Mark,
    >
    >they could do a similar thing that's being done for
    >color---intersperse two different sensor types with an
    >overlapping sensitivity in the central luminance range.
    >
    >That would effectively mean that the camera has a lower
    >resolution in very dark and very bright areas, but a much bigger
    >contrast range altogether.
    >
    >I don't know whether they actually do that.



    Isn't this more or less what Sony is doing (or has done)
    with the RGBE sensor?

    Instead of two greens, it has Green and Emerald. (Sony's
    nomenclature, not mine.)

    For that matter, I've never heard a precise explanation of
    why green is "over-represented" on a Bayer device.
    My understanding is:

    1. Green is at the center of the human visual spectrum
    2. Green is crucial to human visual perception, especially
    in terms of spatial contrast and detail

    Even so, I think Sony's idea is clever -- why not eke out just
    a bit more information from the same amount of real estate
    on the chip?


    rafe b.
    http://www.terrapinphoto.com
    Rafe B., Oct 31, 2003
    #15
