low light

Discussion in 'Digital Photography' started by ipy2006, Mar 7, 2007.

  1. ipy2006

    Lionel Guest

    Yes, after first seeing your site, I was interested in performing
    similar tests to yours, so I sat down & did some maths. I soon
    realised that it wasn't possible to get accurate data from jury-rigged
    setups using charts & the like. The obvious approach was to illuminate
    the sensor directly with a calibrated light source, (which is
    something I can design myself), but I'd need lab grade optical
    equipment to get a flat, accurate illumination field on the sensor, &
    access to people with a lot more optical expertise than myself, & I no
    longer have access to an optics lab.
    It shows. (And there's nothing quite like trying to duplicate someone
    else's work to make you realise how hard it was to create in the first
    place. ;^)
    I'm sure I've said this before, but thank you for all the hard work
    you did to create a really useful photography resource.

    PS: I've stopped responding to John's posts on this topic, because the
    weird misconceptions he's expressing about data acquisition technology
    are getting so irritating that I feel more like flaming him than
    educating him.
    Lionel, Mar 23, 2007

  2. Perhaps you need to look at this issue a little more closely. There
    are very difficult problems in getting uniformity better than ~1%.
    Here are some of the issues:

    1) Even with diffuse light, it is very difficult to produce a uniform
    field of illumination to better than a percent. Try computing the
    light-source distances and angles to different spots on the target;
    1/r^2 has a big impact. Scrambling the light may help, but
    it also scrambles knowledge. For example, if one part of the
    diffuser has a fingerprint or slightly different reflectance
    for some reason, it produces a different field,
    and at the <1% level it becomes important. I have several diffuse
    illuminators and run tests for uniformity and none pass the
    1% test in my lab.
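    The 1/r^2 and angle effects above can be quantified with a minimal
    sketch. Assuming an idealized point source over a flat target (a
    simplification of any real diffuse rig), the illuminance falls off as
    cos^3 of the off-axis angle, and even a modest geometry breaks a 1%
    uniformity budget:

```python
import math

def point_source_falloff(d, x):
    """Relative illuminance at off-axis distance x on a flat target lit
    by an idealized point source at perpendicular distance d.
    E is proportional to cos(theta) / r^2, which reduces to
    cos^3(theta) for a flat target."""
    cos_t = d / math.hypot(d, x)
    return cos_t ** 3

# A 20 cm wide target lit from 1 m away: the edge is already ~1.5% dim.
print(point_source_falloff(1.0, 0.10))
```

    Real diffusers and extended sources soften this, but, as the post
    notes, they trade the falloff for other uncertainties.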

    2) At the <1% level few targets are truly uniform. I have tested multiple
    surfaces in my lab for just this issue and most fail. There are
    large (many-mm) variations in Macbeth targets at the ~<1% level.
    Here, for example, is the Macbeth color chart:
    Now here is the same chart with the top and bottom rows stretched
    panel by panel to show the variations:
    There are variations on a few-mm scale, small spots (those are
    not sensor dust spots; they are too sharply in focus to be dust), and
    there are gradients from one side of a patch to the other. The
    variations are typically a couple of percent (which in my opinion is
    actually very good for such a low-cost, mass-produced test target).

    3) The light field through the lens, as projected onto the focal
    plane, is not uniform even given a perfectly uniformly lit test
    target. You have a) the cosine of the field angle changing the
    apparent size of the aperture, b) 1/r^2 changes from the center to
    the edge of the frame, c) variations in optical coatings and optical
    surfaces translating into small variations in the throughput of the
    system, and d) center rays passing through more glass than edge rays;
    rays at different angles to the optical axis pass through different
    amounts of glass and thus have different absorption.
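    Terms a) and b) above combine into the textbook cos^4 "natural
    vignetting" law for a simple, unvignetted lens; a minimal sketch of
    how quickly it exceeds the percent level:

```python
import math

def cos4_falloff(field_angle_deg):
    """Relative illuminance at a given field angle for a simple thin
    lens with no mechanical vignetting: cos^4(theta)."""
    return math.cos(math.radians(field_angle_deg)) ** 4

# Even 5 degrees off axis already loses ~1.5%.
for theta_deg in (0, 5, 10, 20):
    print(theta_deg, round(cos4_falloff(theta_deg), 3))
```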

    All of these effects may be small in photographic terms (although light
    fall-off is commonly seen), but at the percent level, even the
    few-percent level, they become important. Some cameras collect enough
    photons that photon noise alone gives S/N > 200. With your methods
    you are likely limiting what you can achieve.

    Try replacing the Macbeth chart with several sheets of white paper.
    Take a picture and stretch it. Can you see any variation in level?
    If you can't see any variation, please tell us how you compensated
    for all the above effects, which would require a careful balance
    of increasing illumination off axis to counter the light fall-off
    of your lens, let alone the other non-symmetric effects.

    If you are testing sensors and want answers better than 10%, your
    method requires field illumination uniform to 10 times better than
    the photon noise, i.e. about 0.05% at S/N = 200. There is a reason
    why sensor manufacturers have adopted the methods in use today.
    Your method, even defocusing the target (which introduces other
    issues), probably can't even meet a 2% uniformity requirement.
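    To make the arithmetic explicit (a sketch assuming pure photon/shot
    noise and a hypothetical 40,000-electron full well): S/N = sqrt(N),
    so the noise floor near full well is 0.5%, and a 10x margin puts the
    uniformity budget at 0.05%:

```python
import math

def uniformity_budget(full_well_electrons, margin=10):
    """Shot-noise S/N near full well, and the illumination-nonuniformity
    fraction needed to stay `margin` times below that noise."""
    snr = math.sqrt(full_well_electrons)  # shot noise: S/N = sqrt(N)
    return snr, (1.0 / snr) / margin

snr, budget = uniformity_budget(40_000)  # hypothetical full well
print(f"S/N = {snr:.0f}, uniformity budget = {budget:.2%}")
```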

    (Actually I tried this too, thinking I could speed up the
    testing. It became obvious in my first tests it didn't work.)

    (I've designed illumination sources for laboratory spectrometers
    for 25+ years, where requirements are quite tight.)

    Roger N. Clark (change username to rnclark), Mar 23, 2007

  3. ipy2006

    John Sheehy Guest

    What misconceptions?

    Almost every reply you or Roger has made to me has ignored what I have
    actually written, and assumed something else entirely.

    Look at the post you just replied to; I made it quite clear in my post
    that Roger responded to, that the effect only happens with *ONE CAMERA*,
    yet Roger replied as if my technique were at fault, in some elementary
    way. He didn't pay attention, and *YOU* didn't pay attention, as made
    obvious by your ganging up with him and failing to point out to him
    that it only happened with one camera.

    Did you even notice that fact? (That post wasn't the first time I said
    it was only one camera, either).

    Did you point out to Roger that when he wrote that ADCs have an error
    of +/- 1 DN, that because he gave no range of errors amongst ADCs, and
    because 1 ADU = 1 DN, it would seem he was writing about the rounding
    or truncation aspect of the quantization itself, but mistakenly
    doubled? Surely if he were talking about ADC noise not due directly to
    the mathematical part of quantization, he would have given the error
    range of the best, worst, or typical ADC, none of which would likely
    be exactly +/- 1.
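    For reference, the "mathematical part of quantization" referred to
    here is well defined: an ideal rounding quantizer has an error
    bounded by +/- 0.5 LSB, about 0.29 LSB rms, not +/- 1 DN. A minimal
    numerical check:

```python
import statistics

def quantize(v, lsb=1.0):
    """Ideal rounding ADC: output is the nearest code, so the
    quantization error is bounded by +/- 0.5 LSB."""
    return round(v / lsb) * lsb

# Sweep a fine ramp through the quantizer and measure the error.
ramp = [i / 1000 for i in range(100_000)]
errors = [quantize(v) - v for v in ramp]
print(max(abs(e) for e in errors))  # 0.5 LSB worst case
print(statistics.pstdev(errors))    # ~0.289 LSB rms (1/sqrt(12))
```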

    It was not my fault that I thought he was talking about the mathematical
    aspect; he, as usual, is sloppy with his language, and doesn't care that
    it leads to false conclusions. He is more interested in maintaining his
    existing statements than seeking and propagating truth.

    If anyone is weird here, it is you and Roger. You agree with and support
    each other when an "adversary" appears, no matter how lame your
    statements or criticisms.

    Where was Roger when you implied that microlenses can affect dynamic
    range, without qualifying that you meant mixing sensor well depths *and*
    microlenses? Or perhaps you didn't even have that in mind the first
    time you said it; you came up with that exceptional, non-traditional
    situation to
    make yourself right, without giving me a chance to comment on such an
    unusual arrangement, to which I would have immediately said that
    different well depths and/or sensitivities would affect overall system
    DR. Your use of different well depths in the example brings things to
    another level of dishonesty on your part. That was nothing short of

    John Sheehy, Mar 24, 2007
  4. ipy2006

    John Sheehy Guest


    You can't just divide by 16 to drop 4 LSBs: 0 through 15 all become 0.
    You have to add 8 first, then divide by 16 (integer division), then
    multiply by 16, and subtract the 8, to get something similar to what you
    would get if the ADC were actually doing the quantization. The ADC is
    working with analog noise that dithers the results; you lose that
    benefit when you quantize data that is already quantized. You won't
    notice the offset when the full range of DNs is high, but for one where
    a small range of DNs covers the full scene DR, it is essential. I am
    amazed that you didn't stop and say to yourself, "I must have done
    something wrong" when you saw your quantized image go dark. That's what
    I said to myself, the first time I did it. I looked at the histograms,
    and saw the shift, and realized that an offset is needed unless the
    step size is very small relative to the full range of the scene.
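    A minimal sketch of the recipe described above (round-to-nearest
    requantization versus plain truncation), using a 0-255 ramp for
    illustration: plain truncation collapses 0-15 to 0 and shifts the
    mean down by about half a step, which is the darkening described:

```python
def truncate_bits(v, bits=4):
    """Drop the bottom `bits` bits: 0 through 15 all become 0 and every
    level shifts down by up to one step."""
    step = 1 << bits
    return (v // step) * step

def requantize(v, bits=4):
    """Round-to-nearest requantization, per the recipe in the post:
    add half a step before the integer division so the tone
    distribution stays centred. (A real pipeline would also clip to
    the maximum code.)"""
    step = 1 << bits
    return ((v + step // 2) // step) * step

ramp = range(256)
print(sum(truncate_bits(v) for v in ramp) / 256 - 127.5)  # mean shift -7.5
print(sum(requantize(v) for v in ramp) / 256 - 127.5)     # mean shift +0.5
```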

    In the case of the mkIII image at 14, 12, 11, and 10 bits in another
    post, I used PS's Levels, because it simplifies the process by doing
    the necessary offset to keep the distribution of tones constant.

    John Sheehy, Mar 24, 2007
  5. ipy2006

    ASAAR Guest

    . . .
    Ah, deja vu, yet again. You've distilled l'essence du Roger.

    That too is reminiscent of one of Lionel's bizarre, out-of-the-blue,
    unprovoked attacks, coming across almost as an RNC sock puppet. I
    wouldn't be surprised if it were true. It may be in the stars!
    ASAAR, Mar 24, 2007
  6. ipy2006

    Paul Furman Guest

    I finally took a shot where I wished I'd turned off RAW compression on
    my D200. It was the new moon, shot mid-day almost straight up, kind of
    hazy, at +2 EC just before blowing out, then darkened in PP to a black
    sky, and the remaining moon detail was pretty badly posterized. I
    actually got it to look good with a lot of PP work, so I can't easily
    show the problem, but I guess that was the cause. A rather unusual
    situation.
    Paul Furman, Mar 25, 2007
  7. ipy2006

    acl Guest

    That's interesting; I never managed to see any difference between
    compressed and uncompressed raw. Even when I tried to force it (by
    unrealistically extreme processing) I couldn't see it, even by
    subtracting the images in photoshop. Is it easy for you to post this
    somewhere? From what you say, it sounds like you did some heavy
    processing, did you do it in 16 bits or 8 (I mean after conversion)?
    This sort of extreme adjustment is just about the only place where I
    can see a difference between 8- and 16-bit processing (or 15-bit, or
    whatever it is that Photoshop actually uses).

    On the one hand, I find it hard to believe it's the compression: the
    gaps between the levels that are present are smaller than the
    theoretical photon noise, so basically the extra tonal resolution of
    uncompressed raw just records noise more accurately [and since you
    can't really see shot noise in reasonably high-key areas, that tells
    you it's irrelevant resolution anyway]. On the other hand, who knows?
    Maybe there is some indirect effect.
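    The "gaps smaller than photon noise" argument can be sketched
    numerically. Assuming a simple shot-noise model with a hypothetical
    gain of 4 electrons per DN (the real value varies by camera and
    ISO), the noise at a mid-tone level dwarfs a few-DN quantization
    step:

```python
import math

def shot_noise_dn(signal_dn, gain_e_per_dn=4.0):
    """Shot noise in DN for a given signal in DN, assuming
    `gain_e_per_dn` electrons per DN (hypothetical value)."""
    electrons = signal_dn * gain_e_per_dn
    return math.sqrt(electrons) / gain_e_per_dn

step_dn = 4                     # hypothetical spacing of a lossy raw curve
noise_dn = shot_noise_dn(2000)  # ~22 DN of noise at a 2000 DN signal
print(step_dn < noise_dn)       # the gaps hide inside the noise
```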
    acl, Mar 25, 2007
  8. I look at the big picture. It's not just one line of one of
    your responses that I have been responding to.
    Here are some of your posts, which involve MULTIPLE cameras:

    You said:
    and responding to data I've presented:
    and data others have derived using the same methods I use:
    and then you discuss conclusions from other cameras:
    and these are just from a couple of your many posts in this thread.

    What I see is you attacking the data on multiple cameras from multiple
    sources, all of which paint a consistent picture. But as the details
    of your own testing come out and are shown to be inadequate,
    you start the personal attacks. A more appropriate response
    would be to 1) verify that your methods actually do not suffer
    from the problems I outlined, and 2) then explain why your results
    with your methods are actually correct and why they are better
    than those using industry standards.

    Roger N. Clark (change username to rnclark), Mar 25, 2007
  9. ipy2006

    Lionel Guest

    On Sat, 24 Mar 2007 22:15:26 -0800, "Roger N. Clark (change username
    to rnclark)" wrote:


    I refuse to waste any more of my time trying to teach electronics to
    someone who clearly knows nothing about the subject, & who either
    ignores the data, or responds with childish insults about "defending
    myths" or "boogey-men". If John is that determined to make a fool of
    himself by pretending that he knows more about physics, optics &
    electronics than people in those industries, (including the people who
    /design/ the cameras, FFS!), then he can just go ahead & do so - it's
    no skin off my nose.
    Lionel, Mar 25, 2007
  10. Fair enough, I'll redo the test.

    Here is the full set of images:

    See figure 9 at:

    Here is the original raw data converted linearly in IP, scaled by 128:

    Now here is the same data with the bottom 4 bits truncated:

    Now here is the same data with the bottom 4 bits truncated, rounding
    to the nearest integer using your formula. While subjectively it looks
    a little better, it has still lost a lot of image detail compared to
    the full 12 bits:

    You lose quite a bit in my opinion.
    It would be a disaster in astrophotography.


    Roger N. Clark (change username to rnclark), Mar 25, 2007
  11. ipy2006

    Lionel Guest

    Of course you can.
    What a complete load of crap. Have you *ever* worked with ADCs in
    your life?
    It sounds like you might be confusing the two-quadrant ADCs that're
    used for audio applications with the single-quadrant ADCs that're
    used for this sort of device.
    Lionel, Mar 25, 2007
  12. ipy2006

    teflon Guest

    'Dropping LSB's'? 'Quadrant ADC's'? My brain's fallen out.

    Are there any real photographers here?
    teflon, Mar 26, 2007
  13. ipy2006

    Paul Furman Guest

    Here's a 'bad' curves version, what I got out of the raw converter;
    the final is up one folder.
    I'll email the NEF file if you want to tinker, just remove the hyphens
    from my email. In the end I did salvage it pretty well using just ACR &
    8-bit Photoshop.
    Paul Furman, Mar 26, 2007
  14. Roger N. Clark (change username to rnclark), Mar 26, 2007
  15. teflon wrote:
    Obviously there are, and ones who wish to have a better understanding of
    the equipment used. If you are uncertain about terms, why not ask or look
    them up?

    David J Taylor, Mar 26, 2007
  16. ipy2006

    Lionel Guest

    And if he doesn't care about the topic, nobody's forcing him to read
    this thread.
    Lionel, Mar 26, 2007
  17. ipy2006

    John Sheehy Guest

    That still posterizes the noise and signal a little bit. You're not
    likely to see it with any normal tonal curve; you really need to increase
    the contrast quite a bit, and you will see it. For example, I remember
    shooting in extreme fog a couple of years ago, where I used +2 EC with my
    20D, at ISO 400, and raised the effective blackpoint such that the dark
    parts of the robins approached black. It brought up a bit of noise that
    would not normally be seen, with any exposure compensation level, while
    black was still anchored at black. Same with taking pictures of things
    reflected in glass over a white background, if you try to restore black
    in the processing.
    Recording noise better is a good thing, and the same conditions record
    signal better as well (and allow the brain and algorithms to separate
    the two better).

    In this particular case, it is only likely to be seen in extreme
    blackpointing, or perhaps extreme sharpening.

    John Sheehy, Mar 26, 2007
  18. ipy2006

    acl Guest

    Well yes, that is what I was thinking too (i.e. that posterising the
    noise could cause problems under extreme adjustments), but I didn't
    actually see anything the couple of times I tried (by shooting in fog
    and moving the black and white points). I also played a bit with
    compressed and uncompressed raw files but could not see anything so
    far. Maybe I was not extreme enough.
    Yes, obviously you'd expect to see a difference under conditions that
    exaggerate small differences, ie extreme tonal stretching or
    sharpening (which is local tonal manipulation, after all). But I

    Well I'll try to play with Paul's example and see what happens
    (unfortunately I just remembered I have an early plane to catch
    tomorrow so it'll have to wait a bit).
    acl, Mar 26, 2007
