What compact digicam has the biggest CCD pixels?

Discussion in 'Digital Photography' started by Paul Rubin, Apr 22, 2005.

  1. It is not so much a gimmick as explainable with a single number. S/N is
    a bit hard to understand, but megapixels/wattage (a la
    stereos)/horsepower/etc. forms a nice single number. For a camera there
    are multiple numbers to track. Not a problem for aficionados, but
    difficult for the general public. I suspect that big numbers on the
    high-end stuff actually help sell the lower-end stuff.

    --
    Matt Silberstein

    All in all, if I could be any animal, I would want to be
    a duck or a goose. They can fly, walk, and swim. Plus,
    there is a certain satisfaction in knowing that at the
    end of your life you will taste good with an orange sauce
    or, in the case of a goose, a chestnut stuffing.
     
    Matt Silberstein, Apr 26, 2005
    #41

  2. It will certainly require more glass, and be bigger and heavier. On the
    other hand, the machining and mechanical assembly tolerances get looser
    at the same time, so you'd save some money there. It's not obvious how
    much more the larger lens would cost, particularly if the camera could
    use a long-proven lens design from a film P&S camera.

    The *big* extra-cost item is the sensor. Going from current digicam
    sensors to full-frame 35 mm is an increase in linear dimensions by a
    factor of 5 or 6. The larger sensor would have at least 5^2 = 25 times
    as much silicon area, but it would cost much more than 25 times as much
    per unit because of the way yield scales with chip size.
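
    As a rough back-of-the-envelope sketch in Python of why yield makes
    large dies so expensive: the Poisson yield model and the defect density
    here are illustrative assumptions, not industry data.

        # Relative sensor cost vs. die area, assuming a simple
        # Poisson yield model Y = exp(-D * A).
        import math

        defects_per_cm2 = 0.2                  # assumed defect density
        small_area_cm2 = 0.38                  # ~1/1.8" class die (assumed)
        large_area_cm2 = small_area_cm2 * 25   # 5x linear scale-up

        def silicon_per_good_die(area_cm2):
            yield_fraction = math.exp(-defects_per_cm2 * area_cm2)
            return area_cm2 / yield_fraction

        ratio = silicon_per_good_die(large_area_cm2) / silicon_per_good_die(small_area_cm2)
        print(f"cost ratio ~ {ratio:.0f}x")    # well beyond the 25x area ratio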


    Dave
     
    Dave Martindale, Apr 27, 2005
    #42

  3. I'm afraid I don't see where your "almost" comes from.

    The quote above suggests that you take the "type" and multiply by 2/3
    to convert from "nominal tube diameter" to the diagonal of the image
    area. 2/3 of one inch (25.4 mm) is 16.9 mm. What I wrote suggested
    starting with 16 mm instead of 16.9. There's no disagreement between
    the two quotes above, because one says "about" 16 and the other says
    "roughly" 16.9 mm (from 2/3 inch).

    On the other hand, what I wrote gave an example of how to take the magic
    number 1/1.8, plus the aspect ratio of the image and the pixel count of
    the sensor, and calculate an actual pixel pitch. The result will be
    approximate because the 16 mm value is approximate and "1.8" doesn't
    have many significant figures. But it works pretty well in practice.
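
    As a minimal sketch, that recipe fits in a few lines of Python (the
    16 mm base number and the 4:3 aspect ratio are the approximations from
    the text; the 5-megapixel example sensor is hypothetical):

        import math

        def pixel_pitch_um(type_fraction, h_pixels, aspect=(4, 3)):
            diagonal_mm = 16.0 * type_fraction       # e.g. 1/1.8" -> 16/1.8 mm
            ar = math.hypot(*aspect)                 # 5 for a 4:3 frame
            width_mm = diagonal_mm * aspect[0] / ar
            return width_mm / h_pixels * 1000.0      # microns per pixel

        # A hypothetical 1/1.8" 5-megapixel sensor (2592x1944 pixels):
        print(round(pixel_pitch_um(1 / 1.8, 2592), 2))   # ~2.74 um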

    Dave
     
    Dave Martindale, Apr 27, 2005
    #43
  4. Hi Roger & Ilya,
    At the risk of being pedantic:
    According to my textbook on optics, the original Rayleigh criterion is about
    two-line resolution in spectroscopes: the lines can be said to be 'just
    resolved' when the maximum of the image of one coincides with the minimum of
    the other.
    This was extended to two-point resolution by coinciding the maximum of one
    with the minimum of the Airy disk diffraction pattern of the other:
    R = 0.61 lambda/sin(half_aperture) (for aperture << 1, circular aperture,
    incoherent light)
    For small apertures sin(half_aperture) ~ 1/(2f_number) so:
    R = 0.61 * lambda * 2 * f_number
    To detect the minimum between the maxima you need two pixels per Rayleigh
    distance so the 'Rayleigh pixel size' is 2.44 * lambda * f_number.
    So this rediscovers Rayleigh's work from 1879!
    However, if we take blue 470 nm light then the critical sampling rate is:
    delta_x = lambda/(4 * sin(half_aperture)) ~ lambda * f_number/2
    so for 470 nm at f/2.8 the sampling distance is ~0.66 micron. To capture all
    information passed by the lens this is the pixel spacing you need. It could
    very well be that readout noise problems and the like make it not feasible,
    but that is something else.
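
    A quick Python check of the numbers above (f/2.8 assumed, as elsewhere
    in this thread):

        lam_um = 0.47      # wavelength, microns
        F = 2.8            # f-number

        R = 0.61 * lam_um * 2 * F     # two-point Rayleigh distance
        delta_x = lam_um * F / 2      # critical (Nyquist) sampling distance

        print(f"Rayleigh distance ~ {R:.2f} um")      # ~1.61 um
        print(f"critical sampling ~ {delta_x:.2f} um")  # ~0.66 um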
    I get the impression spherical aberration is so strong in these P&S lenses
    that all rays above say f/8 are 'lost'. With that the whole computation above
    goes to pieces. Add chromatic aberration and you end up with a 640x480 image
    at best.
    Quite likely such lenses can be made though, especially for smaller focal
    lengths, and using the space where the mirror is in a (d)SLR. But high
    quality camera makers have a hard time at the moment.

    -- Hans
     
    HvdV, Apr 28, 2005
    #44
  5. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)]
    Almost right. Comparing half-frame to 2/3'' sensors (with the same pixel
    count), the 2/3'' has 8x smaller area; so with the same throughput QE, it
    is comparing ISO 800 with ISO 100. (Although, as I said, the assumption of
    the same throughput QE does not currently hold.)
    Actually, it may be quite fair. It depends on the situation. E.g., you
    need a 3-f-stop larger aperture to get the same depth of field with
    2/3''; this exactly compensates for the difference in sensitivity.
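
    The arithmetic, sketched in Python (the frame dimensions are nominal
    assumptions):

        import math

        half_frame = (24.0, 18.0)   # mm, nominal half-frame
        two_thirds = (8.8, 6.6)     # mm, nominal 2/3" image area

        area_ratio = (half_frame[0] * half_frame[1]) / (two_thirds[0] * two_thirds[1])
        light_stops = math.log2(area_ratio)        # ~3 stops (the "8x")
        crop = half_frame[0] / two_thirds[0]       # linear crop factor
        dof_stops = 2 * math.log2(crop)            # f-stops to match DOF

        print(f"{area_ratio:.1f}x area = {light_stops:.1f} stops")
        print(f"same DOF needs {dof_stops:.1f} f-stops larger aperture")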

    [Of course, this assumes that in both situations the lenses perform
    similarly well (with goodness measured w.r.t. a diffraction-bound
    lens of the same f-stop). This is not always true; the zoom lenses used
    with current 2/3'' cameras are only 2 f-stops "better in optical quality"
    than the fixed-focal-length lenses used with film; I do not know about
    new lenses designed for half-frames.]

    In addition, the smaller sensor is easier to motion-compensate; this
    may also improve "practical sensitivity" of this sensor.
    While correct, this is irrelevant, since current "leaders" have the
    same pixel count.
    a) You mean electron noise here, not photon noise; but let's keep this
    discussion in one thread, not in many;

    b) These are very strange data for the S60: with an 8 times smaller sensor
    you quote a 2.5 times smaller full well. Are you sure the data are
    for ISO 100?
    This is manifestly wrong. What matters is the product of throughput QE
    and the area.

    Or do you mean "with the same technology for both sensors"? Then of
    course it is true. So with the (imminent) 6x improvement in throughput
    you can get ISO 10000 from large sensors vs. ISO 1200 for small ones. The
    people who *need* ISO 10000 will have a clear choice. But the people
    who do not need more than ISO 1000 (with practically no noise) will
    *also* have a clear choice.
    I think 8x is theoretical, with practical about 6x; but let us keep it
    in a separate thread.
    I see that it is you who are in the world of fake math. Take a test
    shot, and see for yourself. Or look at the example image I provided
    (actually, it is not the best possible; when I have time, I will
    upload a slightly improved one).

    BTW, diffraction spot size is absolutely irrelevant in the age of DSP.
    The only important thing is the throughput MTF curve vs throughput
    narrow-band noise; it shows what CAN'T be extracted from the image
    with DSP.

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Apr 29, 2005
    #45
  6. [A complimentary Cc of this posting was sent to
    HvdV]
    All this is irrelevant in the age of DSP. But I see that your next
    paragraph has correct coefficients (ones which no amount of DSP can
    improve):
    Practical experiments show that with the very low readout noise of today,
    one should be able to get quite close to this "critical sampling
    rate" (at least at higher f-stops).
    To the contrary; the lenses for current P&S are *incomparably better*
    than those for larger cameras. And this is as it should be: the same
    wavelength/40 tolerance is much more demanding if you have a lot of
    glass (as you do with a larger lens).

    E.g., consider an 8x zoom with sweet spot at f/4... The exact MTF of
    the lens is very hard to measure; but as I said, the throughput MTF of
    the whole process can be very high (while creating no artefacts and
    creating very little "extra" noise). It is practical to have MTF 75%
    at 150 lp/mm...

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Apr 29, 2005
    #46
  7. HvdV,
    Concerning both above and below, it seems your equation differs by a
    factor of 2 from that in "Star Testing Astronomical Telescopes,
    A Manual for Optical Evaluation and Adjustment" by HR Suiter,
    William Bell, Richmond, 1994, page 49, equation 3.5:

    S'max = 1/(F*w)

    where F = focal ratio, w = wavelength, and S'max is the critical
    sampling (0% MTF) in cycles (line pairs) per length (given by the units
    of the wavelength). For 0.47 micron wavelength and F=2.8, the
    equation gives S'max = 1/(2.8*0.00047 mm) = 760 line pairs/mm.
    The Rayleigh limit is at about 82% of the critical limit,
    which for 0.47 microns wavelength would be about 620 lp/mm.

    In a diffraction limited system, the Rayleigh resolution
    criterion occurs when the separation of two objects is precisely
    the radius of the theoretical diffraction disk (Suiter, 1994,
    page 286). The radius is defined from the maximum to the
    first minimum of the diffraction disk. This separation
    causes a minimum in the response at the radius, so the
    radius is the ideal sampling distance.

    The diffraction disk radius, R_airy, is (Suiter, 1994, page 11,
    equation 1.1):

    R_airy = 1.22*w*f/D

    where w = wavelength, f = focal length, and D = aperture
    diameter. This reduces to:

    R_airy = 1.22*w*F (where F = focal ratio).

    For 0.47 micron wavelength and F=2.8, R_airy = 1.22*0.47*2.8 = 1.6 microns.

    This is the Nyquist sampling interval, meaning one sample at the
    peak, one sample at the minimum.
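
    Those three numbers are easy to verify with a few lines of Python:

        w = 0.00047    # wavelength in mm (0.47 micron)
        F = 2.8        # focal ratio

        S_max = 1.0 / (F * w)           # critical sampling, lp/mm
        rayleigh = 0.82 * S_max         # Rayleigh limit ~82% of cutoff
        R_airy = 1.22 * w * F * 1000    # Airy disk radius, microns

        print(round(S_max))        # ~760 lp/mm
        print(round(rayleigh))     # ~620 lp/mm
        print(round(R_airy, 1))    # ~1.6 microns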

    I haven't gone through your equations in detail, but I
    suspect you used an equation assuming a diameter and divided
    by 2 getting the factor of two smaller sampling, when the
    equation was already for radius.

    I believe this is correct. It matches examples given in Suiter.
    Equation 1.1 is quite well known and far predates Suiter, 1994.
    I'm just quoting Suiter as it is an excellent book that brings
    a lot of information together into one place.

    Roger
     
    Roger N. Clark (change username to rnclark), Apr 29, 2005
    #47
  8. Hi Roger,

    You can find this relation in various sources, e.g. Born&Wolf, Principles of
    Optics. In my 7th ed. on p. 468 and others. It is for incoherent light,
    paraxial approximation, medium refractive index 1, and two point objects.

    The approximation I used here is
    sin(half_aperture) = sin(arctan(D/2f)) ~ 1/2F, F the focal ratio.
    Good enough for F > 2.8 or so; at F = 1.4 the error is 5%.
    Whereas the two-point Rayleigh distance (using your symbols) is
    R ~ 1.22*w*F.
    Two-point resolution is different from n-line resolution, which might account
    for the 22% difference; IIRC lines are a bit easier to resolve than points.
    Yes, this matches the approximation above.
    When the two Airy patterns overlap in 'peak on the first dark ring of the
    other' fashion, a local minimum results between the peaks, 26.5% lower than
    the peak. To sample this configuration you need a (point) sample at the
    minimum, so the pseudo-Nyquist interval is R_airy/2. In this case 0.8 microns.
    Incidentally, the figure of the 26.5% dip is also sometimes referred to as the
    'Rayleigh criterion'. In practice the value is greatly affected by the degree
    of coherency, not to mention aberrations.
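
    A quick numerical check of that 26.5% figure in Python, using scipy's
    Bessel function j1 (the sampling grid is arbitrary):

        import numpy as np
        from scipy.special import j1

        def airy(x):
            x = np.asarray(x, dtype=float)
            out = np.ones_like(x)              # limiting value at x = 0
            nz = x != 0
            out[nz] = (2 * j1(x[nz]) / x[nz]) ** 2
            return out

        sep = 3.8317                           # first zero of J1 = Rayleigh separation
        x = np.linspace(-sep, sep, 2001)
        total = airy(x - sep / 2) + airy(x + sep / 2)

        dip = 1 - total[x.size // 2] / total.max()
        print(f"dip ~ {dip:.1%}")              # ~26.5%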
    Yes, the minimum between the two maxima.
    It's easy to lose a factor of two somewhere, but I think this solves it.
    I don't know that book; is it astronomy-oriented?
    You can find this equation for example in C.J.R. Sheppard, 'Optical resolution
    and the spatial frequency cut-off', Optik 66, 1983 and Optik 72, 1985.

    The fact that this value is different from the Rayleigh value above is no
    surprise, since the idea behind the Rayleigh criterion is, though practical,
    arbitrary. If the Rayleigh criterion had demanded a 1% dip in an
    n-line pattern, the values would have been closer. The value of a bandwidth-
    related criterion is, IMO, that it is independent of aberrations and the
    exact shape of the PSF.

    -- Hans
     
    HvdV, Apr 29, 2005
    #48
  9. Hi Ilya,
    Ah, but it is a nice world! Unless you mean the math itself is fake ;-)
    Well, you could say the PSF shape is equivalent to the MTF. Of course the
    actual PSF hardly ever looks like a diffraction-limited PSF, diffraction-
    limited in the sense of an aberration-free system. The PSF shape and the noise
    statistics together are the main ingredients of the image formation, OK.

    -- Hans
     
    HvdV, Apr 29, 2005
    #49
  10. Hi Ilya,
    With a bit of hindsight one could say Rayleigh's and Abbe's work is directly
    related to the optical bandwidth problem, so IMO not irrelevant. But I admit,
    it is a bit of a hobby horse.
    Nice to hear that; any examples available?
    Exactly the point I was trying to make in an earlier thread. What still
    baffles me however, is the poor quality of some 'test winning' P&Ss as
    compared to the *much* older SLR lenses I have.
    This can only be explained by manufacturers aiming at as cheap a lens as
    possible, but with a high zoom number. Understandable from the
    manufacturer's standpoint; with the fierce competition, survival is #1.
    Which lens is that?

    -- Hans
     
    HvdV, Apr 29, 2005
    #50
  11. HvdV,
    This is where we disagree. The minimum occurs at R_airy, not R_airy/2.
    The distance between samples is R_airy. You have double Nyquist sampling.
    Nyquist sampling occurs at the maxima and minima.
    Yes, I agree completely. The shape of the MTF is a major difference
    between film and digital cameras and the major factor in the film versus
    digital wars.

    Roger
     
    Roger N. Clark (change username to rnclark), Apr 29, 2005
    #51
  12. Yes! You can deconvolve the image to improve spatial resolution,
    but it is at the cost of signal-to-noise. High signal-to-noise
    images can gain about a factor of 2 improvement. Here is one
    example:
    Image Restoration Using Adaptive Richardson-Lucy Iteration
    http://clarkvision.com/imagedetail/image-restoration1

    But when the pixel size is small and fewer photons are collected,
    I do not believe viewers of an image would tolerate the noise
    added by deconvolution, meaning many/most would probably say the
    deconvolved image looks worse than the original.
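
    For illustration, a minimal sketch of that trade-off using the stock
    Richardson-Lucy routine in scikit-image (not the adaptive variant
    discussed above; the blur kernel and noise level are made-up):

        import numpy as np
        from scipy.ndimage import convolve
        from skimage import data, restoration

        rng = np.random.default_rng(0)
        image = data.camera() / 255.0

        psf = np.ones((5, 5)) / 25.0            # assumed blur kernel
        blurred = convolve(image, psf, mode="reflect")
        noisy = np.clip(blurred + rng.normal(0, 0.01, blurred.shape), 0, 1)

        # More iterations -> sharper result, but amplified noise.
        restored = restoration.richardson_lucy(noisy, psf, num_iter=30)
        print(noisy.std(), restored.std())      # restored image is noisier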

    Roger
     
    Roger N. Clark (change username to rnclark), Apr 29, 2005
    #52
  13. It is right.
    Ah, more theory. So compare my DSLR large-pixel system with an f/2.8
    lens: you will scale it down but increase the lens speed by 3 stops?
    How many f/1 lenses do you know of that have high performance
    over the field of view? Restricted theory versus reality.
    It is relevant. An 8-megapixel large-pixel DSLR produces much
    better images than an 8-megapixel P&S, and the reason is fundamental
    physics. Always will be.
    It is photon noise limited. One electron = one photon. You want to say
    that because the QE of the sensor is not 100%, it is not photon noise
    limited. Incorrect. The engineering and scientific literature is full of
    photon-noise-limited systems, and none have QE=1.
    No, the plots clearly say ISO 50. The small area sensors have lower
    ISO because they collect so few photons.
    Of course. Are you changing the properties of your sensor between
    large and small cameras? Well, that is not the way it works in
    the real world. Lens transmission, blur filters, and color Bayer
    filters all use the same basic technology between cameras.
    CCD versus CMOS makes little difference in sensor QE, regardless
    of sensor size. If anything, small sensors will lose because
    of edge loss.
    Well, what changed? It used to be 10x. With current sensor QE
    in the 25 to 50% range, a factor of 2 to 4 is theoretically
    possible. It is unlikely that lens and filter transmission
    will improve more than the present multi-anti-reflection
    coatings allow.
    Diffraction limited is the BEST you can do. It only gets worse
    from there. So to say diffraction effects are irrelevant
    shows who is in the world of fake math. By DSP, I assume you
    mean Digital Signal Processing. There is no free lunch.
    You can deconvolve to improve spatial resolution, but at the expense
    of signal-to-noise. Come on, now you are really grasping at straws.
    Note too that all deconvolution algorithms have artifacts.
    There is no perfect system.

    But, if you are so sure of yourself, go into business and make the
    perfect camera. You'll make billions of $.
    But I know there are a lot of brilliant people out there, and
    they haven't been able to do it (make the Ilya theoretical
    perfect camera).

    Roger
     
    Roger N. Clark (change username to rnclark), Apr 29, 2005
    #53
  14. Hi Roger,
    Nice; do you know what was adapted in the RL algorithm?
    It's the ringing artifacts which are so ugly. For example, hairs transformed
    into beads-on-a-string-like structures.

    -- Hans
     
    HvdV, Apr 29, 2005
    #54
  15. This all explains why my pictures are so bad. By the time I have
    finished calculating all this the subject has moved. (Ok, by the time
    I have finished calculating any of this the subject has moved, the
    light has gone, the seasons have changed, my batteries have lost their
    charge, and my camera has gone out of warranty.)

    :)


    --
    Matt Silberstein

    All in all, if I could be any animal, I would want to be
    a duck or a goose. They can fly, walk, and swim. Plus,
    there is a certain satisfaction in knowing that at the
    end of your life you will taste good with an orange sauce
    or, in the case of a goose, a chestnut stuffing.
     
    Matt Silberstein, Apr 29, 2005
    #55
  16. Hi Roger,
    Indeed this is where we disagree!
    If you have access to a (university) library and some time, have a look at
    Born&Wolf; Fig. 7.62 illustrates the 'max on min' overlapping functions for
    the original two-line interferometer case.

    -- Hans
     
    HvdV, Apr 29, 2005
    #56
  17. [A complimentary Cc of this posting was sent to
    HvdV]
    I think I read the *originals* (about 20 years ago, so my memory may
    be hazy). The argument went like this: consider a bell-like curve
    f(x); you need to distinguish 2f(x) from f(x-b) + f(x+b).

    With a "naked eye", there is a certain similarity between these two
    curves (if b is small). Now the main argument: you cannot tell these
    two curves apart unless there is a deep enough minimum between two
    maxima. That serious people are still falling for this fallacy is
    above me.

    Today the argument goes like this: we have a digitized g(x)+noise;
    since we know a lot about the noise, we can apply mathematical
    statistics to check two conjectures: one that g(x)=2f(x), and one that
    g(x)=f(x-b) + f(x+b). If one of them "clearly wins" for the given
    level of noise, we know the answer.
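
    A toy version of that test in Python (the bell curve, the separation b,
    and the noise level are all assumed for illustration; for simplicity the
    two-point model is evaluated at the true b rather than fitted):

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(-5, 5, 201)
        f = lambda t: np.exp(-t**2 / 2)       # bell-shaped profile (assumed)
        b, sigma = 0.6, 0.05                  # small separation; noise level

        g = f(x - b) + f(x + b) + rng.normal(0, sigma, x.size)

        chi2_one = np.sum((g - 2 * f(x))**2) / sigma**2
        chi2_two = np.sum((g - (f(x - b) + f(x + b)))**2) / sigma**2
        print(chi2_one, chi2_two)             # the two-point model clearly wins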

    I definitely saw astro papers (from probably as far back as 40 years ago)
    which were applying similar arguments... So today Rayleigh's
    limit is not relevant; these questions are governed mostly by the
    NOISE, not only by optical resolution.

    Of course, the Nyquist step for the cutoff frequency of diffraction is
    always applicable.
    I posted a DSP'ed result (see below; done with off-the-shelf
    software-for-grandmothers). It claims to be shot at f/4, and gives
    about 75% MTF at 150 lp/mm (with minimal visible artefacts and noise);
    the diffraction cutoff frequency is about 450 lp/mm.

    Of course, the lens is far from being diffraction-limited at f/4; but
    this being the sweet spot, I *expect* it to be almost exactly
    diffraction-limited at f/8. Then the cutoff frequency is about 225
    lp/mm, and the Nyquist frequency of the sensor is about 190 lp/mm.
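
    The arithmetic behind those numbers in Python (green light of 0.555
    micron and a ~2.65 micron pixel pitch are assumptions):

        lam_mm = 0.000555
        cutoff = lambda F: 1.0 / (lam_mm * F)   # diffraction cutoff, lp/mm

        print(round(cutoff(4)))   # ~450 lp/mm at f/4
        print(round(cutoff(8)))   # ~225 lp/mm at f/8

        pitch_mm = 0.00265        # assumed pitch of an 8MP 2/3" sensor
        print(round(1 / (2 * pitch_mm)))        # Nyquist ~190 lp/mm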

    Given that "resolution" should not drop more than 40% when you stop down
    two f-steps from the sweet spot, I expect performance somewhat
    similar to what is quoted above (something like 45% instead of 75%).
    Of course, having neither the camera, nor Adobe CS (nor a quality
    resolution chart ;-), I can't be absolutely sure.
    What metric, and what year of issue? A P&S Nikon of 5 years ago would
    perform not much better than my first Smena-8M. The top-of-the-line 8MP
    shooters of '04 are little miracles. (If you look at the details, the
    quality started to *decrease* in '04, since these cameras compete so
    well with dSLRs [in some metrics], and dSLRs are holy cash cows nobody
    can compete with.)
    To the contrary. They get *much better* performance *and* high zoom.
    Actually, it is claimed that the current 8MP 2/3'' camp has lenses of
    quite comparable optical quality. The one for which I have test charts
    shot at f/4 and postprocessed by a quality demosaicer is the 8x zoom
    lens of 2001, first used in the Minolta Dimage 7; today available with the
    KM A200. The shot was done at 53mm equivalent.

    The postprocessed shot is at
    http://ilyaz.org/software/tmp/KM_A2...raw-quadratic-58percent-quartic-60percent.jpg

    Yours,
    Ilya
     
    Ilya Zakharevich, Apr 29, 2005
    #57
  18. [A complimentary Cc of this posting was sent to
    HvdV]
    No, Hans, I think this won't work. This is a thin-lens approximation.
    For thick lenses (read: in photography) it does not work well. You
    compare what happens at the exit pupil (sin()) with what happens at the
    entrance pupil (f-stop).

    Actually, I cannot definitely guess which would be larger, sin(angle)
    or D/2f for small angles with actual photography lenses... Anybody
    know?

    Yours,
    Ilya
     
    Ilya Zakharevich, Apr 29, 2005
    #58
  19. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)]

    While it *is* equivalent, you can't immediately tell which
    (mis)features of PSF can be fixed by postprocessing; the brain is not
    a very good FFTer. ;-)
    Note that it is signal-to-noise at higher spatial frequencies. My
    (short) experience shows that the eye is not very sensitive to noise at
    the higher range. Thus the visual effect of sharpening is:

    the image looks noisier, but only a "tiny bit" noisier; at least
    much better than an image with similar "wide-band" noise.

    It is interesting to flip-flop (undo/redo) such sharpening when
    looking at the image at different pixel magnifications. When looking at
    magnification 1, the difference in noise is perceivable, but tiny.
    When looking at a 500% expanded image, you see a very significant change
    in noise. Probably the noise lands at a different spot of the MTF of
    monitor+eye.
    Again: there is no relation between pixel size and the number of
    photons collected (unless you insist on using the same exposure).

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Apr 29, 2005
    #59
  20. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)]

    [A lot of wrong and irrelevant arguments already addressed in this and
    other threads removed - for the benefit of readers who already know
    them.]
    Well, feel free to remain of this opinion. But it is not what I see.
    What changed?! Nothing changed. If you would bother to *actually
    read* my r.p.d posting, all the numbers are there; you would not need
    to invent numbers and then debunk the numbers you yourself invented.
    It never was 10x.
    Thanks. I see a lot of progress in your statements this month.
    First it was "it is a theoretical maximum". Then it became "a factor
    of 2 is possible". Today, another improvement.

    Well, in another month you will get used to the number I posted:

    Actually, even with sensor technology available now (for
    mass-production), the sensitivity of the sensor he considers can be
    improved 4.8 times; or the size can be decreased 2.2 times without
    affecting MP count, sensitivity, and noise.
    Yes, this is what I said. But you need to use a correct metric for
    diffraction. And "diffraction spot size" is not directly related to
    the optimal pixel size. What *is* directly related is the cutoff
    frequency on the MTF curve.

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Apr 29, 2005
    #60
