Why Don't High-End DSLRs Have Three Chips?

Discussion in 'Digital Photography' started by O R, Nov 12, 2003.

  1. O R

    O R Guest

    In the world of video cameras, consumer-level camcorders have one CCD
    and broadcast-quality TV cameras have three -- one for each color.
    Wouldn't digital still cameras benefit from 3 CCDs (or 3 CMOS chips)?
     
    O R, Nov 12, 2003
    #1

  2. 3 chips = big bucks

    The incoming light has to be split into three paths. That means trickier
    optics and assembly-tolerance issues (not to mention reliability). Also,
    current technology can't make it small, light, and affordable. You are on
    the right path. So is Foveon. The history of digital imaging is just
    beginning.
     
    Charles Schuler, Nov 12, 2003
    #2

  3. Snowman

    Snowman Guest

    Because NTSC video is only 352x240 lines of resolution, and those video
    cameras have cheaper, lower-resolution sensors, so they can cram three of
    them in. The 6 MP sensor carries the bulk of the cost in a DSLR -- with
    the splitter it would cost at least 3X more, but would not provide a 3X
    better image. Not to mention the size of the camera.
     
    Snowman, Nov 12, 2003
    #3
  4. PTRAVEL

    PTRAVEL Guest

    The DV25 standard (which is what miniDV and Digital8 use) for NTSC video is
    720 x 480 pixels. VHS is capable of resolving approximately 250 lines
    (which has nothing to do with the number of pixels). Hi8 and SVHS resolve
    400+ lines. A good miniDV machine will resolve 525 lines. The _camera_
    section of a good machine can resolve 700 to 800 lines.
    This, however, is quite true. It would also diminish low-light imaging
    capability to the point of rendering the camera useless for video. However,
    I expect this will change as the technology improves.
     
    PTRAVEL, Nov 12, 2003
    #4
  5. Alan Browne

    Alan Browne Guest

    $


    (psst: broadcast-quality cameras ain't cheap.)
     
    Alan Browne, Nov 13, 2003
    #5
  6. mark herring

    mark herring Guest

    Because I vowed never to respond to F----- threads, I can't respond directly
    to the first reply... ;)

    Still cameras mostly use the Bayer pattern, which provides lower resolution
    in color than in luminance. In a low-res camera, e.g. 1 MP or less, it can
    be an issue. Above about 2-3 MP it is arguable whether it matters. Video
    cameras are usually much lower resolution than still cameras -- thus the
    "non-registered" color of the Bayer pattern may be more of an issue.

    I question whether the "F" sensor has any future, for two reasons:
    1. At a high enough pixel count, the Bayer pattern is fine.
    2. The F architecture is widely reported to introduce new problems of its
    own.
    -Mark
     
    mark herring, Nov 13, 2003
    #6
  7. Yes, but the money and weight are better spent elsewhere.

    In video, the resolution is fixed by the format at about 640x480 or
    perhaps 720x480 for NTSC. There's no point in having more pixels than
    that (in fact more pixels gives less sensitivity). So if you want
    better colour than a single Bayer sensor can give you, and you have
    enough money, you go to a 3 CCD setup with 3 720x480 chips. You *could*
    also get good colour with a single 1440x960 Bayer chip, but it would be
    at least 4 times less sensitive to light (smaller cells). So the 3 CCD
    setup is the best you can do given the fixed resolution. It is
    expensive, because the prism block is expensive to make, has tight
    tolerances itself, and the CCDs have to be attached to the block with
    sub-pixel registration.
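
    (A quick back-of-the-envelope check of that "at least 4 times" figure, in
    Python, assuming the 1440x960 Bayer chip has the same physical area as
    one of the 720x480 chips:)

        # Per-photosite light is roughly proportional to photosite area.
        sensor_area = 1.0                  # same chip size assumed for both

        pixels_3ccd  = 720 * 480           # each chip in the 3-CCD block
        pixels_bayer = 1440 * 960          # single higher-resolution Bayer chip

        site_area_3ccd  = sensor_area / pixels_3ccd
        site_area_bayer = sensor_area / pixels_bayer

        print(site_area_3ccd / site_area_bayer)   # 4.0
        # The Bayer chip's colour filters absorb extra light on top of this,
        # hence "at least" 4 times less sensitive.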

    In a digital camera, more resolution is actually useful. Doubling the
    resolution by going to 4 times as many pixels still costs light
    sensitivity, but it gains you actual image resolution. Going the prism
    block and 3 CCD route gets you better colour at the same sensitivity,
    but no extra resolution. And the larger the CCDs get in pixel count,
    the tighter the tolerances on manufacturing the prism/CCD block.

    The Foveon chip is an interesting way of tackling this. It gets 3
    colour samples per location, giving (in theory) the better colour of a
    3-chip camera, but with the simple optics of a single-chip unit. So far
    its performance has been disappointing, but it may eventually produce
    good results.

    Dave
     
    Dave Martindale, Nov 13, 2003
    #7
  8. They do.

    Remember the old days when MMUs and FPUs couldn't be fabbed together?
     
    George Preddy, Nov 13, 2003
    #8
  9. Samuel Paik

    Samuel Paik Guest

    1 CCD vs. 3 CCD is a cost vs. benefit tradeoff. The tradeoffs are
    different for a video camera than for a digital still camera.

    By still digital camera standards, video cameras are very low resolution
    devices; producing optics good enough for high-end digital still camera
    resolutions is likely to be expensive. However, if I recall correctly,
    Foveon had a 3-imager prototype before they developed their 3-layer CMOS
    imager.

    Sam
     
    Samuel Paik, Nov 13, 2003
    #9
  10. Browntimdc

    Browntimdc Guest

    Dave Martindale wrote:
    Couldn't this registration be done in firmware? (At least to within one
    pixel.) The sensor would have to have a few extra rows and columns of
    pixels.

    Tim

    --

    "The strongest human instinct is to impart information,
    and the second strongest is to resist it."

    Kenneth Graham
     
    Browntimdc, Nov 13, 2003
    #10
  11. Shifting left/right or up/down by an integer number of pixels is easy,
    but you don't really want a half-pixel colour fringe (worst case), so
    you might need fractional-pixel shifts which need interpolation
    (moderately expensive).
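
    (To make the interpolation concrete, a toy fractional-pixel shift of one
    scanline using linear interpolation in Python -- a real camera would use
    a better filter, but the per-pixel cost is the point:)

        import numpy as np

        def shift_row(row, dx):
            """Resample one scanline shifted by a fractional offset dx (pixels)."""
            x = np.arange(len(row), dtype=float)
            # Each output pixel samples the input at x + dx; edges simply clamp.
            return np.interp(x + dx, x, row)

        scanline = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
        print(shift_row(scanline, 0.3))   # [13. 23. 33. 43. 50.]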

    However, the real problem is rotation. Rotation by any amount at all is
    computationally expensive, and even a tiny amount of rotational
    misalignment will produce errors of several pixels at the edges of the
    image. To get less than 1/2 pixel error along a 3072-pixel-wide image
    you need a rotational error of less than 34 arc seconds.
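
    (Checking that number in Python, assuming the worst case where the chip
    pivots about one edge of its 3072-pixel width:)

        import math

        width_px  = 3072      # sensor width in pixels
        max_error = 0.5       # allowable misregistration at the far edge, in pixels

        theta_rad    = math.atan(max_error / width_px)
        theta_arcsec = math.degrees(theta_rad) * 3600
        print(round(theta_arcsec, 1))     # 33.6 -- about 34 arc seconds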

    Dave
     
    Dave Martindale, Nov 13, 2003
    #11
  12. Dave Martindale wrote:
    Yowsa. Maintaining that sort of rotational alignment in a device that gets
    bumped and jostled in the field would be quite a challenge.
     
    Albert Nurick, Nov 13, 2003
    #12
  13. Yes, but TV camera lenses also have a long enough back focal distance
    to get a 3-way splitter into the light path without going spare!

    --
    Paul Repacholi 1 Crescent Rd.,
    +61 (08) 9257-1001 Kalamunda.
    West Australia 6076
    comp.os.vms - The Older, Grumpier Slashdot
    Raw, Cooked or Well-done, it's all half baked.
    EPIC, The Architecture of the future, always has been, always will be.
     
    Paul Repacholi, Nov 13, 2003
    #13
  14. Browntimdc

    Browntimdc Guest

    Dave Martindale wrote:
    Thanks, that makes sense.

    Tim

    --

    "The strongest human instinct is to impart information,
    and the second strongest is to resist it."

    Kenneth Graham
     
    Browntimdc, Nov 13, 2003
    #14
  15. Actually, it's not. The prism block is several hunks of glass all glued
    together. Then the CCDs are glued to the prism faces. It either stays
    in alignment, or it's broken.

    The problem is (a) it's difficult to get the CCD alignment to the
    necessary tolerances before gluing, and (b) if you do break it, it's
    going to be expensive to repair or replace.

    Dave
     
    Dave Martindale, Nov 13, 2003
    #15
