What Comprises a "Pixel"?

Discussion in 'Digital Photography' started by Martin, Aug 30, 2005.

  1. Martin

    Martin Guest

    I know this sounds like a really basic question, but I never thought
    about it before recently, when considering taking grayscale photos of
    old prints for a digital archive.

    As I understand it, my focal-plane sensor chip does not consist of
    elements that individually respond to all "colors". Instead, there are
    three distinct types of elements, each of which responds to a particular
    color range and associated intensity for that range. Is this correct?

    If so, then is a 3-megapixel image made of 3 million triads? Or do the
    spec-writers "cheat" and call each individual sensor element, though
    incapable of capturing full-frequency information, a pixel? If that is
    the case, then is my 3-megapixel camera really only providing
    information for one million sites of combined color/intensity?

    Finally, if this is the case, and I switch to grayscale capture mode,
    does each of the three elements in a triad now capture independent
    intensity information and provide me a 3X increase in spatial
    resolution, giving me a "real" 3 megapixels in grayscale vs only a
    "real" 1 megapixel in full color?

    Martin
    Martin, Aug 30, 2005
    #1

  2. Martin

    Guest

    > I know this sounds like a really basic question, but I never thought
    > about it before recently, when considering taking grayscale photos of
    > old prints for a digital archive.
    >
    > As I understand it, my focal-plane sensor chip does not consist of
    > elements that individually respond to all "colors". Instead, there are
    > three distinct types of elements, each of which responds to a particular
    > color range and associated intensity for that range. Is this correct?

    Yes.


    > If so, then is a 3-megapixel image made of 3 million triads? Or do the
    > spec-writers "cheat" and call each individual sensor element, though
    > incapable of capturing full-frequency information, a pixel? If that is
    > the case, then is my 3-megapixel camera really only providing
    > information for one million sites of combined color/intensity?

    There are not 3 million triads; there are 3 million sensor elements (a
    mix of R, G, and B). A technique known as Bayer interpolation can
    reconstruct a full 3 MP worth of full-color data. Of course it is not
    perfect, and there are losses and artifacts. The Foveon sensor truly
    captures R, G, and B for each pixel, and for a given number of 'sites'
    it provides higher res than 'Bayer' or mosaic sensors. However, there
    are few cameras that use the Foveon sensor (Fuji Finepix SLR) and
    unfortunately, they (Foveon) have fallen behind in the sensor race. It
    has been said that a Foveon sensor of X MP gives as much res as a Bayer
    sensor of 2X MP. Of course this is a rough comparison.
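    The gist of that interpolation can be sketched in a few lines of
    Python. This is an illustrative toy (simple neighbour averaging over a
    standard RGGB tiling), not what any particular camera's firmware
    actually does:

    ```python
    # Toy Bayer demosaic: each sensor site records only one channel; the
    # missing two channels at each site are estimated by averaging the
    # nearest neighbours that did record them. Real cameras use far more
    # sophisticated interpolation.

    def bayer_channel(y, x):
        """Channel recorded at site (y, x) in an RGGB mosaic."""
        if y % 2 == 0:
            return 'R' if x % 2 == 0 else 'G'
        return 'G' if x % 2 == 0 else 'B'

    def demosaic(mosaic):
        """mosaic: 2-D list of raw intensities. Returns 2-D list of (R, G, B)."""
        h, w = len(mosaic), len(mosaic[0])
        out = []
        for y in range(h):
            row = []
            for x in range(w):
                samples = {'R': [], 'G': [], 'B': []}
                # Gather this site and its 8 neighbours, grouped by the
                # channel each one recorded.
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            samples[bayer_channel(ny, nx)].append(mosaic[ny][nx])
                row.append(tuple(sum(v) // len(v) if v else 0
                                 for v in (samples['R'], samples['G'], samples['B'])))
            row and out.append(row)
        return out
    ```

    A flat gray scene (every site reading 100) reconstructs to
    (100, 100, 100) everywhere; on real scenes with edges, this naive
    averaging is exactly where the losses and artifacts come from.
    
    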

    > Finally, if this is the case, and I switch to grayscale capture mode,
    > does each of the three elements in a triad now capture independent
    > intensity information and provide me a 3X increase in spatial
    > resolution, giving me a "real" 3 megapixels in grayscale vs only a
    > "real" 1 megapixel in full color?

    Grayscale capture is the same as color; it is just that the camera
    processor does the conversion instead of you. You do not get higher res.
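    That conversion is commonly expressed as a weighted sum of the three
    channels. The Rec. 601 luma weights below are one standard choice; a
    given camera's firmware may weight the channels differently:

    ```python
    # Grayscale value from a demosaiced RGB pixel: a weighted sum of the
    # channels. These are the Rec. 601 luma weights; in-camera conversion
    # may use different weights.

    def to_gray(r, g, b):
        return round(0.299 * r + 0.587 * g + 0.114 * b)

    to_gray(255, 255, 255)   # pure white stays at full brightness: 255
    to_gray(255, 0, 0)       # pure red maps to a fairly dark gray: 76
    ```

    Note that the green weight dominates, which is the same reason Bayer
    mosaics devote half their sites to green.
    
    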
    , Aug 30, 2005
    #2

  3. You may find this page useful:

    http://heim.ifi.uio.no/~gisle/photo/pixels.html

    although your precise question is one which Gisle might want to add to
    that page.

    Yes, the manufacturers "prefer" to use the larger number, both when
    referring to the sensor and to the LCD on the back of the camera. So they
    count individual RGB receptors, not (RGB) triplets. On a typical camera,
    there will be four receptors per composite pixel, perhaps organised as
    RGGB or RGBCyan.

    David
    David J Taylor, Aug 30, 2005
    #3
  4. Martin

    Jim Townsend Guest

    Martin wrote:

    > I know this sounds like a really basic question, but I never thought
    > about it before recently, when considering taking grayscale photos of
    > old prints for a digital archive.
    >
    > As I understand it, my focal-plane sensor chip does not consist of
    > elements that individually respond to all "colors". Instead, there are
    > three distinct types of elements, each of which responds to particular
    > color range and associated intensity for that range. Is this correct?


    First off.. Pixel is a blend of two words

    'Picture - Pix' and 'Element - el'.

    The **actual** pixels your camera produces are binary representations
    of data that was obtained by sampling light reflected from a real
    world object.

    Pixels have no shape, size or weight. They are just strings of ones
    and zeros held as electric charges in your camera memory card, or computer
    RAM, or as magnetic impressions on a spinning disk, or pits and valleys
    on a CD.

    I don't like the idea of saying a sensor has pixels.. I prefer
    to call them sensor sites. The sensor sites produce the pixels.

    I feel calling sensor sites pixels adds a level of confusion
    to this digital imaging thing. It's bad enough calling
    scanner sensor sites 'dots' and measuring them in dots
    per inch.

    Camera sensors are very complex.. There is more than one way to
    create a pixel (the Bayer method or the Foveon method, for example).
    You can look this up on the web. Google for 'Bayer sensor' and
    you'll find explanations with diagrams that cover it much better
    than a newsgroup post can.

    Let's call sensor sites, sensor sites.. Let's call what they *produce*
    pixels :)

    > Finally, if this is the case, and I switch to grayscale capture mode,
    > does each of the three elements in a triad now capture independent
    > intensity information and provide me a 3X increase in spatial
    > resolution, giving me a "real" 3 megapixels in grayscale vs only a
    > "real" 1 megapixel in full color?


    No.. You get the same color image from the sensor.. The firmware
    within the camera removes the color information. You can do the
    exact same thing if you take a color shot and remove the color
    with photo editing software. When you select black and white on
    your camera, the camera is just saving you that step.
    Jim Townsend, Aug 30, 2005
    #4
  5. "Martin" <> writes:

    > I know this sounds like a really basic question, but I never thought
    > about it before recently, when considering taking grayscale photos of
    > old prints for a digital archive.
    >
    > As I understand it, my focal-plane sensor chip does not consist of
    > elements that individually respond to all "colors". Instead, there are
    > three distinct types of elements, each of which responds to particular
    > color range and associated intensity for that range. Is this correct?


    Probably. Nearly all modern digital cameras, and every single consumer
    P&S digital camera I know of, use what you describe (it's called the
    Bayer filter pattern).

    > If so, then is a 3-megapixel image made of 3 million triads? Or do the
    > spec-writers "cheat" and call each individual sensor element, though
    > incapable of capturing full-frequency information, a pixel? If that is
    > the case, then is my 3-megapixel camera really only providing
    > information for one million sites of combined color/intensity?


    Each individual sensor element is called a pixel.

    So, sort-of. Turns out, though, that the eyeball works the same way,
    and is much more sensitive to luminance detail than color detail. So
    the results match very well with human vision.

    > Finally, if this is the case, and I switch to grayscale capture mode,
    > does each of the three elements in a triad now capture independent
    > intensity information and provide me a 3X increase in spatial
    > resolution, giving me a "real" 3 megapixels in grayscale vs only a
    > "real" 1 megapixel in full color?


    No. The filters on each pixel site are permanently emplaced (consider
    the alignment issues of making them removable!), so you're getting
    interpolated data at each site either way.

    This is a dangerous topic (giving you the benefit of the doubt here,
    since there's no reason to think you're trolling); there's a company
    called Foveon that makes a sensor chip that *does* have three stacked
    sensors, one for each color, at each site. They're used in the Sigma
    Digital SLRs, and nowhere else. They have their *extreme* partisans;
    that's what makes it a dangerous topic. Most but not all people who
    have examined bunches of images from those cameras vs. Bayer pattern
    cameras come to the conclusion that the Foveon X3 sensor currently has
    enough drawbacks not to end up being superior.
    --
    David Dyer-Bennet, <mailto:>, <http://www.dd-b.net/dd-b/>
    RKBA: <http://noguns-nomoney.com/> <http://www.dd-b.net/carry/>
    Pics: <http://dd-b.lighthunters.net/> <http://www.dd-b.net/dd-b/SnapshotAlbum/>
    Dragaera/Steven Brust: <http://dragaera.info/> Much of which is still down
    David Dyer-Bennet, Aug 30, 2005
    #5
  6. Jim Townsend wrote:
    > Martin wrote:
    >
    >> I know this sounds like a really basic question, but I never thought
    >> about it before recently, when considering taking grayscale photos of
    >> old prints for a digital archive.
    >>
    >> As I understand it, my focal-plane sensor chip does not consist of
    >> elements that individually respond to all "colors". Instead, there
    >> are three distinct types of elements, each of which responds to
    >> particular color range and associated intensity for that range. Is
    >> this correct?

    >
    > First off.. Pixel is a blend of two words
    >
    > 'Picture - Pix' and 'Element - el'.
    >
    > The **actual** pixels your camera produces are binary representations
    > of data that was obtained by sampling light reflected from a real
    > world object.
    >
    > Pixels have no shape, size or weight. They are just strings of ones
    > and zeros held as electric charges in your camera memory card, or
    > computer RAM, or as magnetic impressions on a spinning disk, or pits
    > and valleys on a CD.
    >
    > I don't like the idea of saying a sensor has pixels.. I prefer
    > to call them sensor sites. The sensor sites produce the pixels.
    >
    > I feel calling sensor sites pixels adds a level of confusion
    > to this digital imaging thing. It's bad enough calling
    > scanner sensor sites 'dots' and measuring them in dots
    > per inch.
    >
    > Camera sensors are very complex.. There is more than one way to
    > create a pixel. (The bayer method or foveon method for example).
    > You can look this up on the web. Google for 'bayer sensor'
    > You'll see explanations with diagrams which make it much better
    > than you'll get in a newsgroup post.
    >
    > Lets call sensor sites, sensor sites.. Lets call what they *produce*
    > pixels :)
    >
    >> Finally, if this is the case, and I switch to grayscale capture mode,
    >> does each of the three elements in a triad now capture independent
    >> intensity information and provide me a 3X increase in spatial
    >> resolution, giving me a "real" 3 megapixels in grayscale vs only a
    >> "real" 1 megapixel in full color?

    >
    > No.. You get the same color image from the sensor.. The firmware
    > within the camera removes the color information. You can do the
    > exact same thing if you take a color shot and remove the color
    > with photo editing software. When you select black and white on
    > your camera, the camera is just saving you that step.


    Better yet - if you do the grayscale conversion from a color original,
    you can choose from the individual color channels to get various
    contrast effects similar to using colored filters with black and white
    film.
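    As a sketch of what that channel choice does (the weights here are
    illustrative, not any particular editor's mixer): a grayscale mix that
    leans on the red channel darkens a blue sky, much as an on-lens red
    filter does with B&W film.

    ```python
    # Channel-mixer grayscale: choosing the channel weights mimics shooting
    # B&W film through a colored filter. Weights are illustrative.

    def mix_to_gray(r, g, b, wr, wg, wb):
        return max(0, min(255, round(wr * r + wg * g + wb * b)))

    sky = (80, 120, 200)                              # a blue-ish sky pixel
    even_mix = mix_to_gray(*sky, 0.33, 0.34, 0.33)    # roughly neutral mix
    red_mix = mix_to_gray(*sky, 0.9, 0.1, 0.0)        # "red filter" mix
    # red_mix < even_mix: the sky renders darker, adding contrast
    # against white clouds.
    ```
    
    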
    Bob Harrington, Aug 31, 2005
    #6
  7. "Martin" <> wrote in message
    news:...
    > I know this sounds like a really basic question, but I never thought
    > about it before recently, when considering taking grayscale photos of
    > old prints for a digital archive.
    >
    > As I understand it, my focal-plane sensor chip does not consist of
    > elements that individually respond to all "colors". Instead, there are
    > three distinct types of elements, each of which responds to particular
    > color range and associated intensity for that range. Is this correct?
    >
    > If so, then is a 3-megapixel image made of 3 million triads? Or do the
    > spec-writers "cheat" and call each individual sensor element, though
    > incapable of capturing full-frequency information, a pixel? If that is
    > the case, then is my 3-megapixel camera really only providing
    > information for one million sites of combined color/intensity?
    >
    > Finally, if this is the case, and I switch to grayscale capture mode,
    > does each of the three elements in a triad now capture independent
    > intensity information and provide me a 3X increase in spatial
    > resolution, giving me a "real" 3 megapixels in grayscale vs only a
    > "real" 1 megapixel in full color?
    >
    > Martin
    >


    While it's true that each pixel only responds to a particular color, the
    output pixel isn't necessarily that sensor element's filter color. IOW,
    the green pixel doesn't necessarily produce green. For instance, if a
    green pixel is active, the processing "looks" at the other pixels around
    it to figure out what color it should be. If the red and blue pixels
    surrounding it are active, then it knows it should be white. If only the
    blue pixels are active, then it should produce a cyan color. If the red
    ones, then yellow. Similarly for the other pixels. Of course, the pixels
    don't "know" anything; it's all done in software.
    There was a good article on this at Lexar's site, but I can't find it
    anymore. Someone else mentioned the Bayer sensor; Googling that would
    certainly turn up some informative sites.
    William Oertell, Aug 31, 2005
    #7
  8. Martin

    Jim Townsend Guest

    Bob Harrington wrote:


    > Better yet - if you do the grayscale conversion from a color original,
    > you can choose from the individual color channels to get various
    > contrast effects similar to using colored filters with black and white
    > film.


    Yes.. Actually the newer cameras are incorporating this into
    their firmware.. You can tailor the color channels to obtain
    a better looking B&W image.. The Canon 20D does this..

    Of course, you can still do this to a JPEG after the fact
    with photoediting software.
    Jim Townsend, Aug 31, 2005
    #8
  9. Martin

    Bill Funk Guest

    On 30 Aug 2005 17:39:02 -0500, David Dyer-Bennet <>
    wrote:

    >"Martin" <> writes:
    >
    >> I know this sounds like a really basic question, but I never thought
    >> about it before recently, when considering taking grayscale photos of
    >> old prints for a digital archive.
    >>
    >> As I understand it, my focal-plane sensor chip does not consist of
    >> elements that individually respond to all "colors". Instead, there are
    >> three distinct types of elements, each of which responds to particular
    >> color range and associated intensity for that range. Is this correct?

    >
    >Probably. Nearly all modern digital camera, and every single consumer
    >P&S digital camera I know of, uses what you describe (it's called the
    >Bayer filter pattern).


    I wouldn't call the sensor elements "three distinct types of
    elements"; rather, they are identical elements fed light through filters
    of three colors: red, blue and green.
    The sensor elements are the same throughout. It's the filters (the Bayer
    filter array) that allow these elements to combine their output to
    create a color image.
    >
    >> If so, then is a 3-megapixel image made of 3 million triads? Or do the
    >> spec-writers "cheat" and call each individual sensor element, though
    >> incapable of capturing full-frequency information, a pixel? If that is
    >> the case, then is my 3-megapixel camera really only providing
    >> information for one million sites of combined color/intensity?

    >
    >Each individual sensor element is called a pixel.
    >
    >So, sort-of. Turns out, though, that the eyeball works the same way,
    >and is much more sensitive to luminance detail than color detail. So
    >the results match very well with human vision.
    >
    >> Finally, if this is the case, and I switch to grayscale capture mode,
    >> does each of the three elements in a triad now capture independent
    >> intensity information and provide me a 3X increase in spatial
    >> resolution, giving me a "real" 3 megapixels in grayscale vs only a
    >> "real" 1 megapixel in full color?

    >
    >No. The filters on each pixel site are permanently emplaced (consider
    >the alignment issues of making them removable!), so you're getting
    >interpolated data at each site either way.
    >
    >This is a dangerous topic (giving you the benefit of the doubt here,
    >since there's no reason to think you're trolling); there's a company
    >called Foveon that makes a sensor chip that *does* have three stacked
    >sensors, one for each color, at each site. They're used in the Sigma
    >Digital SLRs, and nowhere else. They have their *extreme* partisans;
    >that's what makes it a dangerous topic. Most but not all people who
    >have examined bunches of images from those cameras vs. Bayer pattern
    >cameras come to the conclusion that the Foveon X3 sensor currently has
    >enough drawbacks not to end up being superior.


    --
    Bill Funk
    Replace "g" with "a"
    funktionality.blogspot.com
    Bill Funk, Aug 31, 2005
    #9
  10. Martin

    imbsysop Guest

    On Tue, 30 Aug 2005 17:18:48 -0500, Jim Townsend <>
    wrote:

    >Martin wrote:
    >
    >> I know this sounds like a really basic question, but I never thought
    >> about it before recently, when considering taking grayscale photos of
    >> old prints for a digital archive.

    snip
    >I don't like the idea of saying a sensor has pixels.. I prefer
    >to call them sensor sites. The sensor sites produce the pixels.


    in some past discussions (Foveon dead horse :)) some people preferred
    to call them "sensels" IIRC ...
    FWIW
    imbsysop, Aug 31, 2005
    #10
  11. Martin

    imbsysop Guest

    On 30 Aug 2005 17:39:02 -0500, David Dyer-Bennet <>
    wrote:

    snip
    >This is a dangerous topic (giving you the benefit of the doubt here,
    >since there's no reason to think you're trolling); there's a company
    >called Foveon that makes a sensor chip that *does* have three stacked
    >sensors, one for each color, at each site. ..


    minor tech correction .. it is one sensor with 3 (silicon) layers ..
    :)
    imbsysop, Aug 31, 2005
    #11
  12. Martin

    Don Stauffer Guest

    Basically, a pixel is a mathematical construct defining how the data is
    arranged and stored. While monochrome cameras may have a physical
    focal-plane array of detectors with each detector assigned to a "pixel",
    this is not necessary. There are three ways to generate color images:
    with three separate chips and mirrors (better digital video cameras work
    this way), with a single chip and a mosaic of color filters, or with one
    unique type of chip that has the detectors essentially stacked
    vertically, each layer responding to a unique color.

    In a single chip camera with filter array, exactly what happens in
    greyscale mode is dependent on the mathematical formulas used in the
    internal camera processing.

    One reason the filter system works is that the eye gets its spatial
    resolution from the black and white (luminance) information in the
    image; the eye cannot detect color changes with the same acuity as
    brightness changes.

    Martin wrote:
    > I know this sounds like a really basic question, but I never thought
    > about it before recently, when considering taking grayscale photos of
    > old prints for a digital archive.
    >
    > As I understand it, my focal-plane sensor chip does not consist of
    > elements that individually respond to all "colors". Instead, there are
    > three distinct types of elements, each of which responds to particular
    > color range and associated intensity for that range. Is this correct?
    >
    > If so, then is a 3-megapixel image made of 3 million triads? Or do the
    > spec-writers "cheat" and call each individual sensor element, though
    > incapable of capturing full-frequency information, a pixel? If that is
    > the case, then is my 3-megapixel camera really only providing
    > information for one million sites of combined color/intensity?
    >
    > Finally, if this is the case, and I switch to grayscale capture mode,
    > does each of the three elements in a triad now capture independent
    > intensity information and provide me a 3X increase in spatial
    > resolution, giving me a "real" 3 megapixels in grayscale vs only a
    > "real" 1 megapixel in full color?
    >
    > Martin
    >
    Don Stauffer, Aug 31, 2005
    #12
  13. Martin

    Jim Townsend Guest

    imbsysop wrote:

    > On Tue, 30 Aug 2005 17:18:48 -0500, Jim Townsend <>
    > wrote:
    >
    >>Martin wrote:


    >>I don't like the idea of saying a sensor has pixels.. I prefer
    >>to call them sensor sites. The sensor sites produce the pixels.

    >
    > in some past discussions (Foveon dead horse :)) some people prefered
    > to call them "sensels" IIRC ...


    Yep.. Sensels is good too.. It's still less confusing than pixels.
    Jim Townsend, Aug 31, 2005
    #13
  14. Martin

    Guest

    Correction: the Foveon sensor is in the Sigma DSLRs, not Fuji.
    , Aug 31, 2005
    #14
  15. "Martin" <> wrote in message
    news:...
    >I know this sounds like a really basic question, but I never thought
    > about it before recently, when considering taking grayscale photos of
    > old prints for a digital archive.
    >
    > As I understand it, my focal-plane sensor chip does not consist of
    > elements that individually respond to all "colors". Instead, there are
    > three distinct types of elements, each of which responds to particular
    > color range and associated intensity for that range. Is this correct?
    >
    > If so, then is a 3-megapixel image made of 3 million triads? Or do the
    > spec-writers "cheat" and call each individual sensor element, though
    > incapable of capturing full-frequency information, a pixel?

    <snip>
    IMO the cameras usually have the sensitive sites in fours, with two green
    filtered sites for each pair of red and blue:

    RGRGRGRG
    GBGBGBGB
    RGRGRGRG
    GBGBGBGB

    A 4-megapixel camera has 2 million green filtered sites, and one million
    each of red and blue. The camera records the brightness of the image at 4
    million sites (luminance) but has less information about color. If your
    scene were all red, your grayscale file would show no (or very low)
    brightness in three fourths of the sites. Twice as many green sites are
    used because green is in the center of the spectrum, and carries most of
    the luminance information in most scenes.
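    That 2:1:1 count is easy to check with a short script over a standard
    RGRG/GBGB tiling (a sketch; actual sensor geometry varies by model):

    ```python
    # Count the filter colors over an RGGB Bayer mosaic: half the sites
    # are green, a quarter red, a quarter blue.

    from collections import Counter

    def site_counts(h, w):
        c = Counter()
        for y in range(h):
            for x in range(w):
                if y % 2 == 0:
                    c['R' if x % 2 == 0 else 'G'] += 1
                else:
                    c['G' if x % 2 == 0 else 'B'] += 1
        return c

    counts = site_counts(2000, 2000)   # a hypothetical 4 MP sensor
    # counts['G'] == 2_000_000, counts['R'] == counts['B'] == 1_000_000
    ```
    
    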

    High quality camcorders use three different sensors, each with its own
    filter, and thus have somewhat higher resolution than camcorders that use
    only one sensor (with the same number of pixels as each of the three in the
    higher quality ones) with filters on the individual sites. They don't have
    three times as much, though, because real scenes have different colors in
    different areas.

    This works most of the time because the eye-brain system has much higher
    resolution for brightness than for color. The same trick is used in
    television. When I was watching Venus Williams play tennis yesterday her
    costume looked white when the view took in the whole court, but proved to be
    a pale pink in close-ups.

    This is just one man's understanding.
    --
    Gerry
    http://www.pbase.com/gfoley9999/
    http://www.wilowud.net/
    http://home.columbus.rr.com/gfoley
    http://www.fortunecity.com/victorian/pollock/263/egypt/egypt.html
    Gerard M Foley, Aug 31, 2005
    #15
  16. Martin

    Guest

    On Tue, 30 Aug 2005 17:18:48 -0500, Jim Townsend <>
    wrote:

    >Martin wrote:
    >
    >> I know this sounds like a really basic question, but I never thought
    >> about it before recently, when considering taking grayscale photos of
    >> old prints for a digital archive.
    >>
    >> As I understand it, my focal-plane sensor chip does not consist of
    >> elements that individually respond to all "colors". Instead, there are
    >> three distinct types of elements, each of which responds to particular
    >> color range and associated intensity for that range. Is this correct?

    >
    >First off.. Pixel is a blend of two words
    >
    >'Picture - Pix' and 'Element - el'.
    >
    >The **actual** pixels your camera produces are binary representations
    >of data that was obtained by sampling light reflected from a real
    >world object.
    >
    >Pixels have no shape, size or weight. They are just strings of ones
    >and zeros held as electric charges in your camera memory card, or computer
    >RAM, or as magnetic impressions on a spinning disk, or pits and valleys
    >on a CD.
    >
    >I don't like the idea of saying a sensor has pixels.. I prefer
    >to call them sensor sites. The sensor sites produce the pixels.


    So ditch the middleman -- no one else in this ng has used your
    terminology.

    >
    >I feel calling sensor sites pixels adds a level of confusion
    >to this digital imaging thing. It's bad enough calling
    >scanner sensor sites 'dots' and measuring them in dots
    >per inch.
    >
    >Camera sensors are very complex.. There is more than one way to
    >create a pixel. (The bayer method or foveon method for example).
    >You can look this up on the web. Google for 'bayer sensor'
    >You'll see explanations with diagrams which make it much better
    >than you'll get in a newsgroup post.
    >
    >Lets call sensor sites, sensor sites.. Lets call what they *produce*
    >pixels :)


    Useless.

    >> Finally, if this is the case, and I switch to grayscale capture mode,
    >> does each of the three elements in a triad now capture independent
    >> intensity information and provide me a 3X increase in spatial
    >> resolution, giving me a "real" 3 megapixels in grayscale vs only a
    >> "real" 1 megapixel in full color?

    >
    >No.. You get the same color image from the sensor.. The firmware
    >within the camera removes the color information. You can do the
    >exact same thing if you take a color shot and remove the color
    >with photo editing software. When you select black and white on
    >your camera, the camera is just saving you that step.
    >
    >
    , Sep 1, 2005
    #16
  17. Martin

    Paul H. Guest

    "Jim Townsend" <> wrote in message
    news:...
    > Martin wrote:
    >
    > > I know this sounds like a really basic question, but I never thought
    > > about it before recently, when considering taking grayscale photos of
    > > old prints for a digital archive.
    > >
    > > As I understand it, my focal-plane sensor chip does not consist of
    > > elements that individually respond to all "colors". Instead, there are
    > > three distinct types of elements, each of which responds to particular
    > > color range and associated intensity for that range. Is this correct?

    >
    > First off.. Pixel is a blend of two words
    >
    > 'Picture - Pix' and 'Element - el'.
    >
    > The **actual** pixels your camera produces are binary representations
    > of data that was obtained by sampling light reflected from a real
    > world object.
    >
    > Pixels have no shape, size or weight. They are just strings of ones
    > and zeros held as electric charges in your camera memory card, or computer
    > RAM, or as magnetic impressions on a spinning disk, or pits and valleys
    > on a CD.
    >
    > I don't like the idea of saying a sensor has pixels.. I prefer
    > to call them sensor sites. The sensor sites produce the pixels.
    >
    > I feel calling sensor sites pixels adds a level of confusion
    > to this digital imaging thing. It's bad enough calling
    > scanner sensor sites 'dots' and measuring them in dots
    > per inch.


    I don't like the idea of calling them "sensor sites" because this name
    simply adds to the already-confusing terminology. After all, these putative
    "sensor sites" don't really sense anything by themselves; instead, they are
    simply well-defined regions of lightly-doped silicon which don't become true
    sensors until the addition of a goodly amount of supporting circuitry,
    microlenses, etc. Therefore, I propose a new name for these proto-sensing
    sites: QUantum Illuminated Bit-ELementS, or quib-els, for short. Sites on
    small CCDs or CMOS imagers (APS size or smaller) may be properly referred
    to as "minor quib-els", sites on large sensors are termed "major quib-els",
    and the proper name for discussions about both large and small varieties
    should be designated "quib-elling."

    And leave "dots" the way it is...


    Hope this helps. Right.
    Paul H., Sep 1, 2005
    #17
  18. Martin

    Don Stauffer Guest

    Paul H. wrote:
    >
    > I don't like the idea of calling them "sensor sites" because this name
    > simply adds to the already-confusing terminology. After all, these putative
    > "sensor sites" don't really sense anything by themselves; instead, they are
    > simply well-defined regions of lightly-doped silicon which don't become true
    > sensors until the addition of a goodly amount of supporting circuitry,
    > microlenses, etc. Therefore, I propose a new name for these proto-sensing
    > sites: QUantum Illuminated Bit-ELementS, or quib-els, for short. Sites on
    > small CCDs or CMOS imagers (APS size or smaller) may be properly referred
    > to as "minor quib-els", sites on large sensors are termed "major quib-els",
    > and the proper name for discussions about both large and small varieties
    > should be designated "quib-elling."
    >



    There already is a well-used term, in use for decades. Each site is a
    "detector". The first electronic cameras were single-detector, with some
    sort of built-in scanning (mirrors or other means). Then, in the
    seventies, folks began to make arrays of detectors, fabricating many
    detectors on the same chip.
    Don Stauffer, Sep 2, 2005
    #18
  19. Martin

    Paul H. Guest

    "Don Stauffer" <> wrote in message
    news:g3ZRe.6$...
    > Paul H. wrote:
    > >
    > > I don't like the idea of calling them "sensor sites" because this name
    > > simply adds to the already-confusing terminology. After all, these
    > > putative "sensor sites" don't really sense anything by themselves;
    > > instead, they are simply well-defined regions of lightly-doped silicon
    > > which don't become true sensors until the addition of a goodly amount
    > > of supporting circuitry, microlenses, etc. Therefore, I propose a new
    > > name for these proto-sensing sites: QUantum Illuminated Bit-ELementS,
    > > or quib-els, for short. Sites on small CCDs or CMOS imagers (APS size
    > > or smaller) may be properly referred to as "minor quib-els", sites on
    > > large sensors are termed "major quib-els", and the proper name for
    > > discussions about both large and small varieties should be designated
    > > "quib-elling."
    >
    > There already is a well-used term, in use for decades. Each site is a
    > "detector". The first electronic cameras were single-detector, with
    > some sort of built-in scanning (mirrors or other means). Then, in the
    > seventies, folks began to make arrays of detectors, fabricating many
    > detectors on the same chip.


    The unnecessary elucidation is appreciated, but I know what a detector is.
    But thanks for making a cameo appearance on "Quibbling for Dollars"! You've
    made my day and my point.
    Paul H., Sep 2, 2005
    #19
  20. Martin <> wrote:

    > Finally, if this is the case, and I switch to grayscale capture mode,
    > does each of the three elements in a triad now capture independent
    > intensity information and provide me a 3X increase in spatial
    > resolution, giving me a "real" 3 megapixels in grayscale vs only a
    > "real" 1 megapixel in full color?


    No. The camera takes exactly the same picture it did in color mode, and
    then converts the image to grayscale. The color filters which others
    have mentioned in this thread are still in place.

    In general, you probably don't want to use grayscale mode in your
    camera; converting later in Photoshop or a similar application will give
    you much better control over the conversion process. There are numerous
    techniques for obtaining the best grayscale image, and the best technique
    often depends on the image itself as well as the objectives and
    preferences of the photographer.
    Paul Ferguson, Sep 2, 2005
    #20
