What is holding back full size sensors? Just marketing?

Discussion in 'Digital Photography' started by Bay Area Dave, May 24, 2004.

  1. Are the majors using small sensors now ONLY because they can
    then bring out full size sensors later and get us to
    purchase new cameras, or is there some REAL problem with
    full size sensors being used with a 35 MM format using
    today's technology? Being ignorant of the details, it seems
    to me that whoever comes out with a full frame sensor NOW
    would own the market, but what do I know? :) In case you
    haven't figured it out, I LOVE wide angle shots; hence my
    fence-sitting for now.

    Bay Area Dave, May 24, 2004

  2. Bay Area Dave

    adm Guest

    The basic problem is yield in the semiconductor manufacturing process. This
    is normally specified in terms of defects per square inch. I used to have a
    little table that would give number of die per wafer for all the different
    wafer and die sizes, then tell me how many good die were expected per wafer
    given the relevant defect density. Unfortunately, I can't find it now....

    However, given that the area of a full frame 35mm sensor is probably 2X that
    of a 2/3 size sensor, you can see that you will only get half the number of
    die per wafer, while the processing cost per wafer stays the same. Given the
    same defect density, a defect that would make a 35mm sensor useless might
    kill only one of the two 2/3 size sensors occupying that same amount of
    space on the wafer. So not only do you get more sensors per wafer, you also
    get a greater percentage of good sensors out of the process.
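
    adm's yield argument can be made concrete with the standard back-of-envelope
    formulas. A minimal sketch in Python, assuming a Poisson defect model and
    purely illustrative numbers (200 mm wafer, 0.1 defects per square cm,
    a 36x24 mm full-frame die vs. a roughly 2/3-size 24x16 mm die) — none of
    these figures come from the thread:

```python
import math

def gross_dies(wafer_diameter_mm, die_area_mm2):
    """Approximate die sites per wafer: wafer area / die area,
    minus a standard correction for partial dies at the edge."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Fraction of dies with zero defects, assuming defects land
    randomly (Poisson) at the given density."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

# Full frame (36x24 mm) vs. a ~2/3-size (24x16 mm) sensor,
# at an assumed 0.1 defects/cm^2 on a 200 mm wafer:
for name, w, h in [("full frame", 36, 24), ("2/3 size", 24, 16)]:
    sites = gross_dies(200, w * h)
    y = poisson_yield(w * h / 100, 0.1)
    print(f"{name}: {sites} sites, {y:.0%} yield, "
          f"{sites * y:.0f} good sensors per wafer")
```

    With these made-up numbers the smaller die gets nearly three times the
    sites and several times the good sensors per wafer, which is exactly the
    double-penalty effect described above.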

    I remember back in 1990, a company I used to work for was making CCD and
    CID sensors that were 1 per wafer! (4 inch wafers) These were made for star
    trackers and other high end uses. I seem to recall that for every 1 good
    sensor, you'd have to run between 18 and 24 wafers!

    Obviously, things have improved since then.....
    adm, May 24, 2004

  3. Given the same defect density ...

    Nearly all sensors have defects. Some are masked by the camera software,
    replacing a defective pixel with the mean of the surrounding pixels (of
    that color).

    I suppose that each manufacturer has its own definition of what
    distribution of bad pixels makes a sensor bad.

    Nonetheless you are right that (with nearly any definition of "bad sensor")
    the rate of bad devices will be double with 35mm.

    Moreover there is only half the count of sensors on a wafer. Thus the price
    is at least four times that of a small sensor.

    Moreover, AFAIK it does not make much sense to use bigger pixels on a
    sensor, so the CPU would need to handle double the amount of data.

    Nonetheless I can't imagine that the sensor price is such a huge percentage
    of the camera price that a 35mm model would not be sellable. The 70 shows
    that a not too expensive CPU is fast enough for that task.

    Michael Schnell, May 24, 2004
  4. Bay Area Dave

    Don Guest

    There are also other considerations. If focal plane A is full size and
    focal plane B is 1/4 of that area, then for the same optical characteristics
    focal plane A would require a lens with twice the focal length, about four
    times the area (for the same f-number), and would be about 8 times as
    heavy (and proportionately more expensive, probably). And, of course, the whole camera
    gets bigger along with it.
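
    Don's scaling numbers follow from simple geometry: for the same field of
    view and f-number, every linear lens dimension scales with the sensor's
    linear size, so glass area goes as the square and mass roughly as the cube.
    A sketch (the function and its name are illustrative, not from the thread):

```python
def lens_scaling(linear_scale):
    """Rough scaling of a matching lens when the sensor's linear
    dimension is multiplied by `linear_scale`, holding field of view
    and f-number fixed: focal length ~ s, area ~ s^2, mass ~ s^3."""
    return {
        "focal_length": linear_scale,
        "element_area": linear_scale ** 2,
        "mass": linear_scale ** 3,
    }

# A focal plane with 4x the area has 2x the linear size:
print(lens_scaling(2))  # focal length x2, glass area x4, mass x8
```

    which reproduces Don's "twice the focal length, four times the area,
    8 times as heavy".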

    There are also more subtle effects having to do with the size of the
    individual detectors (pixels). For any manufacturer's production line there
    is an "optimum" cell size, usually between 1 and 10 microns currently. This
    "optimum" takes into account yield in terms of good chips per wafer,
    detective quantum efficiency, dark current, etc. The "optimum" size may be
    much different for a very high quality sensor than it is for an inexpensive
    one.

    I have been involved in CCD programs where the yield was less than 0.01%.
    To get a few tens of good chips we needed several boats of 3" wafers. They
    were about 2 or 3 microns, if I remember correctly, and well below the
    line's optimum size at that time.

    Don, May 24, 2004
  5. Bay Area Dave

    adm Guest

    True - I didn't really mean defects in terms of dead (or "hot") pixels, but
    rather defects that would affect the fundamental circuitry of the device,
    making it unusable. There is probably also a spec for the number of dead
    pixels allowed.

    Maybe it's not actually PRICE that is the issue, but rather SUPPLY. It has
    to be tough - even with today's new fabs - to get good yield on full frame
    sensors.
    adm, May 24, 2004
  6. Bay Area Dave

    adm Guest

    3" wafers! Now we're going back. At a guess, I'd say you were working for
    GE maybe?
    adm, May 24, 2004
  7. There is also the issue of the angle at which incident light hits the
    sensor. By keeping to half frame sensors, these issues are reduced.
    Silicon is not as tolerant as film to off-axis rays.

    David J Taylor, May 24, 2004
  8. Bay Area Dave

    Bill Hilton Guest

    From: Bay Area Dave
    There's a REAL problem due to the size of the sensor.
    Kodak offers two full frame bodies now, one for Nikon, one for Canon, for
    around $4,500. Canon has one, the 1Ds (which I own ... great camera) and it
    sells for around $7,400.

    Here's the problem ... you get random defects introduced by flaws in the
    materials and tiny particles introduced during processing. A particle of smoke
    can ruin a chip if in the wrong spot, and a human hair can wipe out dozens of
    them. Right now the defect densities make it extremely difficult and expensive
    to make chips this big.

    To put numbers on it, assume a 1" square chip (you'd need almost 1 x 1.5" for a
    35 mm sensor) ... with an 8 inch wafer you have potentially 50 die sites
    (actually closer to 40-42 due to edge die, but assume 50 possible chips on each
    wafer to make the math easy). If the random defect density is 1 per sq-inch in
    theory your yield will approach zero (in practice some chips will have more
    than one defect so others might have none since the defects aren't perfectly
    distributed). At any rate, yield is zero or close to it.

    Now assume the chip is half the area (about 70% each linear direction) or .5
    sq-inch. Now you have 100 possibles instead of 50 so twice as many chances,
    right? But if you have a defect every sq-inch you'd have 50 good die minimum
    (in practice probably 55-60 since the defects aren't totally random). Whatever
    price you had to get for the 1 inch sq die is probably 50x as much as a die
    half the area.

    You can perhaps buy a fabbed CMOS wafer from someone like TSMC (Taiwan
    Semiconductor Manufacturing Company) for say $300 (we used to be able to buy
    wafers from them cheaper than we could make them in our own fab :) and rule of
    thumb was you wanted to get at least 10x the cost of the finished wafer in chip
    sales to pay the bills and salaries, so $3,000 for our mythical work-up. Using
    the yields I just guessed at the one and only good die from the 1 inch chip (if
    you actually got a good one) might need to sell for $3,000 while 50 chips from
    the .5" die could sell for $60 each to get the same amount of revenue.

    So that's the problem ... the really big die are too big and small increases in
    size mean big increases in price.

    If I were still designing chips I'd look long and hard at ways to quarter the
    sensor so you could make small sensors and stitch them into one larger one,
    using software to clean up the intersection. Using my example above, a .25"
    chip would have 200 possibles and yield at least 150 good chips so four of them
    stitched together to make a 1" sensor would cost $80 instead of $3,000 ... the
    guy who figures this design problem out will probably become a billionaire :)
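
    Bill Hilton's arithmetic can be reproduced directly. A sketch using his own
    simplified accounting (each defect kills exactly one die, 50 defects across
    the usable wafer, a $3,000 revenue target per wafer — all figures are his
    illustrative guesses, not real fab data):

```python
def good_dies(sites, wafer_area_sqin, defects_per_sqin):
    """Simplified model from the post: each random defect kills
    exactly one die, with no credit for defect clustering."""
    return max(sites - int(defects_per_sqin * wafer_area_sqin), 0)

def price_per_good_die(good, revenue_target=3000):
    """Price each good die must fetch to hit the per-wafer target
    (~10x an assumed $300 fabbed-wafer cost)."""
    return revenue_target / good if good else float("inf")

AREA = 50  # usable sq-in assumed for the 8-inch wafer in the post

full = good_dies(50, AREA, 1)     # 1.0 sq-in die: 0 good
half = good_dies(100, AREA, 1)    # 0.5 sq-in die: 50 good
quarter = good_dies(200, AREA, 1)  # 0.25 sq-in die: 150 good
print(price_per_good_die(half))        # 60.0 per half-size die
print(4 * price_per_good_die(quarter))  # 80.0 for a stitched 4-chip sensor
```

    This reproduces the $60 half-size die and the $80 four-chip stitched
    sensor from the post.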

    Bill Hilton, May 24, 2004
  9. I don't know anything about chip design. But somehow, it doesn't make
    sense that putting, say, 100 million transistors on a CPU doesn't really
    cost anything, whereas putting 10 million cells plus read-out logic would
    be very costly. It basically suggests that the designs assume zero-defects.

    This suggests that some redundancy could solve the problem. I assume that
    the loss of a single pixel is not a big deal. The loss of an entire row
    or column is a problem. Multiple independent ways for read-out should not
    be that hard to design.
    Philip Homburg, May 24, 2004
  10. Bay Area Dave

    Bill Funk Guest

    While marketing is certainly part of it, it needs to be understood
    that technology doesn't spring forth fully matured; it must be
    "Earned", as it were.

    There are currently full-frame digital cameras on the market; both
    Canon & Nikon have them, at least.

    As adm pointed out, larger chips take up more room on a wafer, and
    random defects are more costly as chip size increases. This drives up
    the cost of the good chips, too.

    Larger sensors inevitably drive demand for more pixels on those
    sensors, which means more processing in the camera, which drives up
    costs even higher. This also drives a demand for more powerful (more
    expensive) computers, raising the stakes even more.

    What seems simple becomes complicated, and expensive.

    As time goes on, costs will drop (in the tech area, this has proven true).
    Bill Funk, May 24, 2004
  11. All this makes sense...and yet some companies are making cameras with full
    frame chips...if they can do it...so can others. And over time...the numbers
    will get better and they will get more good chips per wafer.
    Gene Palmiter, May 24, 2004
  12. Bay Area Dave

    Ron Hunter Guest

    All sensors are 'full size', in that they are the size they are. The
    image is focused on the sensor, and fills the space of the sensor. This
    allows use of smaller lenses, and makes possible a smaller camera. As
    in everything involving photography, there are limits imposed by
    physics, and by economics, and then there are choices. If you mean why
    no sensors the size of a 35 mm film frame, look to economics.
    Ron Hunter, May 24, 2004
  13. Of course other companies could do it...if they thought it was an
    economically viable thing to do. Full frame dSLRs are so expensive, it's a
    very tiny market. Canon and Kodak do own the full frame market, and I have
    no doubt that Nikon and Fuji are glad to let them have it.

    No money in it. Yet.

    Howard McCollister, May 24, 2004
  14. thanks to everyone for clueing me in on the technical
    hurdles involved. I didn't realize how much more difficult
    (read: expensive) it is for the chips to be made larger.
    I'd love to get a digital SLR to supplant the S45 I've got,
    but I've GOT to have wide angle capability without having to
    buy some ungodly expensive super duper ultra wide angle lens
    to get a "true" 24 mm. I would expect that such a beast
    would be large, heavy, distortion prone, and it bears
    repeating, expensive. BTW, is "APS sized" the correct
    term for the size of the 1.5 or the 1.6 magnification factor
    sensors? It can't be both, right?? Please correct any
    incorrect assumptions I just made...
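
    The crop-factor arithmetic behind the wide-angle worry above is simple
    division. A sketch (function names are illustrative):

```python
def equivalent_focal(actual_mm, crop_factor):
    """35mm-equivalent focal length of a lens on a cropped sensor."""
    return actual_mm * crop_factor

def lens_needed(equiv_mm, crop_factor):
    """Actual focal length needed to match a full-frame field of view."""
    return equiv_mm / crop_factor

# A "true" 24 mm field of view on a 1.6x body needs a 15 mm lens:
print(lens_needed(24, 1.6))  # 15.0
# while a 24 mm lens on that body frames like a ~38.4 mm:
print(equivalent_focal(24, 1.6))
```

    This is why "true" wide angle is expensive on a crop body: 15 mm glass is
    exotic, while 24 mm glass is ordinary.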

    Bay Area Dave, May 24, 2004
  15. Most transistors in CPU chips are about as small as it's practical to
    make them at a given time. As you scale down the feature sizes, the
    transistors pass less current, but they're also driving less capacitance
    so circuits still work when you shrink them. And smaller circuits can
    be clocked faster too.

    But in image sensors, the light-collecting cells need area to collect
    light. Shrink them, and you get an unusably noisy image beyond a
    certain point. So the sensors used in DSLRs are considerably larger
    than almost any digital chips made. That's one reason they cost more.

    Redundancy is harder too. With RAM, it doesn't matter which bits get
    stored in which physical rows of the chip as long as you can reliably
    read what you wrote. So redundancy is great there. CPUs can use spare
    functional units in place of one that died. But with an image sensor,
    physical location is important - you can't just replace a dead pixel
    with one from somewhere else.

    Sure, you could have redundant readout circuitry. But that's such a
    tiny portion of the chip area that few failed image sensors will fail
    solely due to the readout circuitry. Most of the area is sensors, and
    those aren't replaceable.

    Dave Martindale, May 24, 2004
  16. Bay Area Dave

    E. Magnuson Guest

    Well, there is one inexpensive way to get rather wide on a
    current affordable DSLR: A Zenitar 16mm fisheye and Panotools.
    It's compact (about the size of a 50mm) and manual focus/exposure,
    and sells for $120-$150. It's not real sharp wide open, but
    stopped down to f8 or so it's not bad.

    Here is one f11 shot I chose not to defish because I liked the horizon
    as-is: http://www.pbase.com/image/27975532

    If you look in my "test" gallery, you'll see the FOV compared
    to the EF-S 18-55 mm lens. The shots were taken on different days
    so the color and exposure are very different, but the location is
    pretty much the same.
    E. Magnuson, May 24, 2004
  17. The truth is, full size sensors are great for WA but really really bad
    for everything else. The tradeoff is too niche. Nikon has already
    dismissed full size sensors for good, based on the poor tradeoff. I
    agree with them; I think most people would rather pay a few thousand
    less for the body and get much better all around shots, then spend a
    few hundred more on one or two wider lenses. Plus, you have the
    optical zoom benefit of the smaller sensor, which most would probably
    choose outright over the wide end anyway. That is all assuming, of
    course, the sensor count is the same on either choice.

    On the downside of the smaller format, while most understand the
    tremendous boost in optical quality from shooting thru center glass,
    few probably realize that the higher sensor density also ups the
    required shutter speed, essentially handicapping the effective ISO
    somewhat when hand holding.

    This is because the higher sensor density makes sensor blur/overlap
    during the open shutter time proportionally more likely to happen.
    This is the one good thing about Bayer's very low optical resolution.
    Bayers either have very low sensor counts on the order of 1.5MP in
    full color (manufacturer rated as 6MP) on an APS sized sensor, or
    normalish sensor counts on the order of 2 to 3MP in full color (rated
    11 to 13.5MP) on a full sized sensor, which has the same low density
    due to sensor size. Either case is good for hand-holding. This is
    why people (eventually) intuitively figure out that they seem to need
    to stabilize Foveon images more than Bayers to find all that
    incredible potential.

    Though obviously, higher resolution always brings with it higher lens
    demands and the need for better stabilization in order to take
    advantage--such is the curse of pro caliber gear.
    Georgette Preddy, May 25, 2004
  18. I suspect this oft-brought-up boogeyman is bogus. People with the 1Ds
    report that it does just fine with wide angle lenses, and the examples I've
    seen show the 1Ds doing better at the corners than film does with the same
    lens (no film flatness problems!).

    Remember that to do wide on a 1.6x camera, you have to use extremely wide
    angle lenses, and there's no practical way to get even a 24mm equivalent.

    Sure, the superwides are funky at the corners, but I'd guess the 35/2.0 (a
    cheap but very good lens) is less funky at the edges of a full-frame sensor
    than the 20/2.8 is at the edges of a 1.6x sensor, and similarly, I'd guess
    that the 20/2.8 is less funky at the edges of a full-frame sensor than the
    Sigma 12-24 is at the edges of a 1.6x sensor.

    The Canon wide angle lenses tend to have poor resolution at the extreme
    corners (22mm off axis), but hold up fairly well to the edges (18mm off
    axis). There isn't a lot of difference between 12mm off axis (the edges of
    the 1.6x frame) and 18mm off axis (the edges of the full frame).

    So it looks to me that, for lenses that have an equivalent on the 1.6x
    sensors, you're better off with full frame by most if not all of the
    increase in sensor size (and a stop or two in speed as well), and you get
    better than film performance with extreme wide angles. A win all around.

    David J. Littleboy
    Tokyo, Japan
    David J. Littleboy, May 25, 2004
  19. So you are saying that faults in (full frame) sensors are most likely to
    affect just single pixels. Why can't you solve that problem in software?

    If the raw format files also contain the locations of the bad pixels,
    software can highlight those areas for manual inspection.
    Philip Homburg, May 25, 2004
  20. []
    ... but you can, and cameras do, use interpolation to replace dead pixels
    with information averaged from the adjacent pixels. Without such
    dead-pixel replacement, consumer sensors would be even more expensive than

    David J Taylor, May 25, 2004
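
    The dead-pixel replacement described above can be sketched in a few lines.
    This is a deliberately simplified illustration (it averages all four
    neighbours of a monochrome frame; a real camera would interpolate within
    the same colour plane of the Bayer mosaic, using a factory-measured
    defect map):

```python
def patch_dead_pixels(image, dead):
    """Replace each dead pixel with the mean of its working 4-neighbours.
    `image` is a list of rows; `dead` is a set of (row, col) defect
    coordinates (the stored defect map, in a real camera)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r, c in dead:
        vals = [image[rr][cc]
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= rr < h and 0 <= cc < w and (rr, cc) not in dead]
        if vals:
            out[r][c] = sum(vals) / len(vals)
    return out

# A 3x3 frame with a hot pixel in the centre:
frame = [[10, 10, 10],
         [10, 99, 10],
         [10, 10, 10]]
print(patch_dead_pixels(frame, {(1, 1)}))  # centre becomes 10.0
```

    Note the skip over other dead pixels when gathering neighbours, so two
    adjacent defects don't contaminate each other's replacement values.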
