low light

Discussion in 'Digital Photography' started by ipy2006, Mar 7, 2007.

  1. ipy2006

    Lionel Guest

    On Thu, 22 Mar 2007 21:10:20 -0700, "Roger N. Clark (change username
    to rnclark)" <> wrote:

    >If you are testing sensors and want answers better than 10%, your
    >method requires field illumination to be better than 10 times
    >the photon noise, or 0.0005%. There is a reason why sensor
    >manufacturers have adopted the methods in use today.
    >Your method, even defocusing the target (which introduces other
    >issues), probably can't even meet a 2% uniformity requirement.
    >
    >(Actually I tried this too, thinking I could speed up the
    >testing. It became obvious in my first tests it didn't work.)


    Yes, after first seeing your site, I was interested in performing
    similar tests to yours, so I sat down & did some maths. I soon
    realised that it wasn't possible to get accurate data from jury-rigged
    setups using charts & the like. The obvious approach was to illuminate
    the sensor directly with a calibrated light source, (which is
    something I can design myself), but I'd need lab grade optical
    equiment to get a flat, accurate illumination field on the sensor, &
    access to people with a lot more optical expertise than myself, & I no
    longer have access to an optics lab.

    >(I've designed illumination sources for laboratory spectrometers
    >for 25+ years, where requirements are quite tight.)


    It shows. (And there's nothing quite like trying to duplicate someone
    else's work to make you realise how hard it was create in the first
    place. ;^)
    I'm sure I've said this before, but thank you for all the hard work
    you did to create a really useful photography resource.

    PS: I've stopped responding to John's posts on this topic, because the
    weird misconceptions he's expressing about data aquisition technology
    are getting so irritating that I feel more like flaming him than
    educating him.

    --
    W "Some people are alive only because it is illegal to kill them."
    . | ,. w ,
    \|/ \|/ Perna condita delenda est
    ---^----^---------------------------------------------------------------
     
    Lionel, Mar 23, 2007
    1. Advertising

  2. John Sheehy wrote:
    > "Roger N. Clark (change username to rnclark)" <> wrote
    > in news::
    >
    >> You are limiting the signal-to-noise ratio your derive because of
    >> variations in the target you are imaging.

    >
    > No, that is not the problem. I am quite aware of texture; that is why I
    > extremely unfocus the chart, and use diffuse light. I also window the
    > visible luminance range to exaggerate contrast for the squares, so I can
    > clearly see any dust or texture that might be present. I look for areas
    > that only vary at high frequency due to noise, and create a rectangular
    > selection, and try others, of sufficient size to get a good sample, but
    > small enough so that it is less likely to include a problem area.


    Perhaps you need to look at this issue a little closer. There are
    very difficult problems in getting uniformity better than ~1%.
    Here are some of the issues:

    1) Even with diffuse light, it is very difficult to produce a uniform
    filed of illumination better than a percent. Try some computing
    of light source distance and angles to different spots.
    1/r^2 has a big impact. Scrambling the light may help, but
    it also scrambles knowledge. For example if one part of the
    diffuser has a fingerprint or slightly different reflectance
    for some reason, it produces a different field,
    and at the <1% level it becomes important. I have several diffuse
    illuminaters and run tests for uniformity and none pass the
    1% test in my lab.

    2) At the <1% level few targets are truly uniform. I have tested multiple
    surfaces in my lab for just this issue and most fail. There are
    large (many mm) variations in macbeth targets at the ~< 1% level.
    Here, for example is the macbeth color chart:
    http://www.clarkvision.com/imagedetail/evaluation-1d2/target.JZ3F5201-700.jpg
    Now here is the same chart with the top and bottom rows stretched
    panel by panel to show the variations:
    http://www.clarkvision.com/imagedetail/evaluation-1d2/target.JZ3F5201-700-str1.jpg
    There are variations on a few mm range, small spots (those are
    not sensor dust spots--they are too in focus), and there are gradients
    from one side of a patch to the other. The variations are
    typically a couple of percent (which in my opinion is actually
    very very good for such a low cost mass produced test target.)

    3) The light field through the lens as projected onto the focal
    plane even given a perfectly uniformly lit test target, is not uniform.
    You have a) cosine angle changing the apparent size of the
    aperture, b) 1/r^ changes from center to edge of the frame,
    variations in optical coatings and optical surfaces translate
    to small variations in the throughput of the system, d) center
    optic rays pass through more glass than edge optic rays, and the
    percentage of light passing through different angles to the optical
    axis pass through different amounts of glass, thus have different absorption.

    All of these effects may be small in photographic terms (although light
    fall-off is commonly seen), but at the percent level, even few percent
    level, they become important. Some cameras collect enough photons
    that the noise from photons gives S/N > 200. With your methods
    you are likely limiting what you can achieve.

    Try replacing the macbeth chart with several sheets of white paper.
    Take a picture and stretch it. Can you see any variation in level?
    If you can't see any variation, please tell us how you compensated
    for all the above effects, which would require a careful balance
    of increasing illumination off axis to counter the light fall-off
    of your lens, let alone the other non-symmetric effects.

    If you are testing sensors and want answers better than 10%, your
    method requires field illumination to be better than 10 times
    the photon noise, or 0.0005%. There is a reason why sensor
    manufacturers have adopted the methods in use today.
    Your method, even defocusing the target (which introduces other
    issues), probably can't even meet a 2% uniformity requirement.

    (Actually I tried this too, thinking I could speed up the
    testing. It became obvious in my first tests it didn't work.)

    (I've designed illumination sources for laboratory spectrometers
    for 25+ years, where requirements are quite tight.)

    Roger

    >
    > The results vary from camera to camera as well; my 20D and my FZ50 have no
    > such limit to S/N, but the XTi does.
    >
     
    Roger N. Clark (change username to rnclark), Mar 23, 2007
    1. Advertising

  3. ipy2006

    John Sheehy Guest

    Lionel <> wrote in
    news::

    > PS: I've stopped responding to John's posts on this topic, because the
    > weird misconceptions he's expressing about data aquisition technology
    > are getting so irritating that I feel more like flaming him than
    > educating him.


    What misconceptions?

    Almost every reply you or Roger has made to me has ignored what I have
    actually written, and assumed something else entirely.

    Look at the post you just replied to; I made it quite clear in my post
    that Roger responded to, that the effect only happens with *ONE CAMERA*,
    yet Roger replied as if my technique were at fault, in some elementary
    way. He didn't pay attention, and *YOU* didn't pay attention, made
    obvious by your ganging up with him and failing to point out to him that
    it only happened with one camera.

    Did you even notice that fact? (That post wasn't the first time I said
    it was only one camera, either).

    Did you point out to Roger that when he wrote that ADCs have an error of
    +/- 1 DN, that because there was no range of errors amongst ADCs, and
    that 1 AN = 1 DN, that it would seem that he was writing about the
    rounding or truncation aspect of the quantization, itself, but mistakenly
    doubled? Surely if he were talking about ADC noise not due directly to
    the mathematical part of quantization, he would have given the range of
    error the the best and worse, or typical ADC, none of which would likely
    be exactly +/- 1.

    It was not my fault that I thought he was talking about the mathematical
    aspect; he, as usual, is sloppy with his language, and doesn't care that
    it leads to false conclusions. He is more interested in maintaining his
    existing statements than seeking and propagating truth.

    If anyone is weird here, it is you and Roger. You agree with and support
    each other when an "adversary" appears, no matter how lame your
    statements or criticisms.

    Where was Roger when when you implied that microlenses can effect dynamic
    range, without qualifying that you meant mixing sensor well depths *and*
    microlenses? Or perhaps you didn't even have that in mind the first time
    you did; you came up with that exceptional, non-traditional situation to
    make yourself right, without giving me a chance to comment on such an
    unusual arrangement, to which I would have immediately said that
    different well depths and/or sensitivities would affect overall system
    DR. Your use of different well depths in the example brings things to
    another level of dishonesty on your part. That was nothing short of
    pathetic.

    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
     
    John Sheehy, Mar 24, 2007
  4. ipy2006

    John Sheehy Guest

    "Roger N. Clark (change username to rnclark)" <> wrote
    in news::


    > Well, lets look at this another way. Go to:
    > http://www.clarkvision.com/imagedetail/dynamicrange2
    >
    > 4 bits is DN = 16 in the 0 to 4092 range. In 16-bit
    > data file, that would be 16*16 = 256.
    >
    > Now go to Figure 7 and draw a vertical line at 256 on the
    > horizontal axis. Now note all the data below that line that
    > you cut off. Now go to Figure 8b and draw a vertical line
    > at 4 stops, and note all the data you cut off. Now go to
    > Figure 9D and draw the vertical line at 256 and
    > note all the data you cut off. (Note too how noisy the
    > 8-bit jpeg data are.)


    You can't just divide by 16, to drop 4 LSBs. 0 through 15 become 0. You
    have to add 8 first, and then divide by 16 (integer division), then
    multiply by 16, and subtract the 8, to get something similar to what you
    would get if the ADC were actually doing the quantization. The ADC is
    working with analog noise that dithers the results; you lose that
    benefit" when you quantize data that is already quantized. You won't
    notice the offset when the full range of DNs is high, but for one where a
    small range of DN is used for full scene DR, it is essential. I am
    amazed that you didn't stop and say to yourself, "I must have done
    something wrong" when you saw your quantized image go dark. That's what
    I said to myself, the first time I did it. I looked at the histograms,
    and saw the shift, and realized that an offset is needed unless the
    offset is a very small number relative to the full range of the scene.

    In the case of the mkIII image at 14, 12, 11, and 10 bits in another
    post, I used PS' Levels, because it simplifies the process, by doing the
    necessary offset to keep the distribution of tones constant.


    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
     
    John Sheehy, Mar 24, 2007
  5. ipy2006

    ASAAR Guest

    On Sat, 24 Mar 2007 13:46:43 GMT, John Sheehy wrote:

    > Almost every reply you or Roger has made to me has ignored what I have
    > actually written, and assumed something else entirely.


    . . .

    > It was not my fault that I thought he was talking about the mathematical
    > aspect; he, as usual, is sloppy with his language, and doesn't care that
    > it leads to false conclusions. He is more interested in maintaining his
    > existing statements than seeking and propagating truth.


    Ah, deja vu, yet again. You've distilled l'essence du Roger.


    > made obvious by your ganging up with him


    That too, reminiscent of one of Lionel's bizarre, out of the blue,
    unprovoked attacks coming across almost as an RNC sock puppet. I
    wouldn't be surprised if it was true. It may be in the stars!
     
    ASAAR, Mar 24, 2007
  6. ipy2006

    Paul Furman Guest

    acl wrote:

    > On Mar 22, 7:22 am, "Roger N. Clark (change username to rnclark)"
    > <> wrote:
    >
    >>acl wrote:
    >>
    >>>What I mean is this. As you say in your webpage
    >>>http://www.clarkvision.com/imagedetail/evaluation-nikon-d200/
    >>>the read noise at ISO 100 corresponds to about 1 DN; 10 electrons.

    >>
    >>Remember, a standard deviation of 1 means peak to peak variations on about
    >>4 DN. It is not simply you get 1 and only 1 all the time.
    >>

    >
    >
    > I've written papers on stochastic processes, and I know perfectly well
    > what a standard deviation is; the point is that if this thing occurs,
    > it is confined to extremely low signals. Maybe I should have replaced
    > "when s=n" by "when the signal is of the order of the noise", to
    > prevent this. Anyway, not much point in talking about this, as I think
    > it's gotten to the point where everybody is talking past each other
    > and we're just creating noise ourselves [which by now exceeds the
    > signal, methinks :) ]. I'll take some blackframes tomorrow and check
    > again.
    >
    >
    >>There is another issue with the Nikon raw data: it is not true raw, but
    >>depleted values. I think they did a good job in designing the
    >>decimation, as they made it below the photon noise.

    >
    >
    > The D200 (and more expensive models) have an option to save
    > uncompressed raw data. And yes, the resolution loss is indeed below
    > the shot noise (using your measured values for the well depth).
    > Although I guess it's now my turn to point out that this noise
    > obviously isn't always sqrt(n) so shot noise can exceed the resolution
    > limit (eg for a uniform subject it could be that you get zero photons
    > in one pixel and 80000 in the other; not terribly likely, though), but
    > never mind.


    I finally took a shot where I wished I'd turned off RAW compression on
    my D200. It was the new moon, shot mid-day almost straight up, kinda
    hazy at +2 EC just before blowing then darkened in PP to a black sky and
    the remaining moon detail was pretty badly posterized. I actually got it
    to look good with a lot of PP work so I can't easily show the problem
    but I guess that was the cause. A rather unusual situation.


    > But keep in mind that Nikons do process their "raw" data. I once wrote
    > a short program to count the number of pixels above a given threshold
    > in the data dumped by dcraw. I ran it on some blackframes. For a given
    > threshold, the number of these pixels increases as the exposure time
    > increases, up to an exposure time of 1s. At and above 1s, the number
    > drops immediately to zero for thresholds of x and above (I don't
    > remember what x was for ISO 800), except for a hot pixel which stays
    > there. So obviously some filtering is done starting at 1s (maybe
    > they're mapped, I don't know).
    >
    > It also looks to me (by eye) like more filtering is done at long
    > exposure times, but I have not done any systematic testing. Maybe
    > looking for correlations in the noise (in blackframes, for instance)
    > will show something, but if I am going to get off my butt and do so
    > much work I might as well do something publishable, so it won't be
    > this :)
    >
    > Well, plus I am rubbish at programming and extremely lazy.
    >
     
    Paul Furman, Mar 25, 2007
  7. ipy2006

    acl Guest

    On Mar 25, 6:52 am, Paul Furman <> wrote:

    >
    > I finally took a shot where I wished I'd turned off RAW compression on
    > my D200. It was the new moon, shot mid-day almost straight up, kinda
    > hazy at +2 EC just before blowing then darkened in PP to a black sky and
    > the remaining moon detail was pretty badly posterized. I actually got it
    > to look good with a lot of PP work so I can't easily show the problem
    > but I guess that was the cause. A rather unusual situation.


    That's interesting; I never managed to see any difference between
    compressed and uncompressed raw. Even when I tried to force it (by
    unrealistically extreme processing) I couldn't see it, even by
    subtracting the images in photoshop. Is it easy for you to post this
    somewhere? From what you say, it sounds like you did some heavy
    processing, did you do it in 16 bits or 8 (I mean after conversion)?
    This sort of extreme adjustment is just about the only place where I
    can see a difference between 8 and 16 bit processing (or 15 bit or
    whatever it is that photoshop actually uses).

    On the one hand, I find it hard to believe it's the compression, the
    gaps between the levels that are present are smaller than the
    theoretical photon noise, so basically the extra tonal resolution of
    uncompressed raw just records noise more accurately [and since you
    can't really see shot noise in reasonably high-key areas, that tells
    you it's irrelevant resolution anyway]. On the other hand, who knows?
    Maybe there is some indirect effect.
     
    acl, Mar 25, 2007
  8. John Sheehy wrote:
    > Almost every reply you or Roger has made to me has ignored what I have
    > actually written, and assumed something else entirely.
    >
    > Look at the post you just replied to; I made it quite clear in my post
    > that Roger responded to, that the effect only happens with *ONE CAMERA*,
    > yet Roger replied as if my technique were at fault, in some elementary
    > way. He didn't pay attention, and *YOU* didn't pay attention, made
    > obvious by your ganging up with him and failing to point out to him that
    > it only happened with one camera.
    >
    > Did you even notice that fact? (That post wasn't the first time I said
    > it was only one camera, either).


    I look at the big picture. It's not just one line of one of
    your responses that I have been responding to.
    Here are some of your posts, which involve MULTIPLE cameras:

    You said:
    > The results vary from camera to camera as well; my 20D and my FZ50 have no
    > such limit to S/N, but the XTi does.

    and responding to data I've presented:
    > Those 10D figures are way off. They are 1.9, 2.8, 4.9, 9.0, and 18.0.
    > Perhaps your figures were taken from a blackpointed RAW blackframe.

    and:
    > I don't recall seeing values this low at the low ISOs in the Nikon RAW
    > files I had.

    and data others have derived using the same methods I use:
    > The 5D figure is very high for ISO 1600, also. The 5D ISO 1600
    > blackframes I have here are all 4.6.

    and then you discuss conclusions from other cameras:
    > Here's the shadow area of a 1DmkIII ISO 100 RAW, at the original 14 bits,
    > and at quantizations to 12, 11, and 10 bits:
    > http://www.pbase.com/jps_photo/image/76001165
    > The demoasicing is a bit rough; it's my own quick'n'dirty one,


    and these are just from a coulple of your many posts in this thread.

    What I see is you attacking the data on multiple cameras from multiple
    sources, all of which paint a consistent picture. But as the details
    of your own testing come out, and shown to be inadequate,
    you start the personal attacks. A more appropriate response
    would be to 1) verify that your methods actually do not suffer
    from the problems I outlined, and 2) then explain why your results
    with your methods are actually correct and why they are better
    than those using industry standards.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 25, 2007
  9. ipy2006

    Lionel Guest

    On Sat, 24 Mar 2007 22:15:26 -0800, "Roger N. Clark (change username
    to rnclark)" <> wrote:

    [...]
    >and these are just from a coulple of your many posts in this thread.
    >
    >What I see is you attacking the data on multiple cameras from multiple
    >sources, all of which paint a consistent picture. But as the details
    >of your own testing come out, and shown to be inadequate,
    >you start the personal attacks.


    Exactly.

    I refuse to waste any more of my time trying to teach electronics to
    someone who clearly knows nothing about the subject, & who either
    ignores the data, or responds with childish insults about "defending
    myths" or "boogey-men". If John is that determined to make a fool of
    himself by pretending that he knows more about physics, optics &
    electronics than people in those industries, (including the people who
    /design/ the cameras, FFS!), then he can just go ahead & do so - it's
    no skin off my nose.

    --
    W "Some people are alive only because it is illegal to kill them."
    . | ,. w ,
    \|/ \|/ Perna condita delenda est
    ---^----^---------------------------------------------------------------
     
    Lionel, Mar 25, 2007
  10. John Sheehy wrote:
    > "Roger N. Clark (change username to rnclark)" <> wrote
    > in news::
    >
    >
    >> Well, lets look at this another way. Go to:
    >> http://www.clarkvision.com/imagedetail/dynamicrange2
    >>
    >> 4 bits is DN = 16 in the 0 to 4092 range. In 16-bit
    >> data file, that would be 16*16 = 256.
    >>
    >> Now go to Figure 7 and draw a vertical line at 256 on the
    >> horizontal axis. Now note all the data below that line that
    >> you cut off. Now go to Figure 8b and draw a vertical line
    >> at 4 stops, and note all the data you cut off. Now go to
    >> Figure 9D and draw the vertical line at 256 and
    >> note all the data you cut off. (Note too how noisy the
    >> 8-bit jpeg data are.)

    >
    > You can't just divide by 16, to drop 4 LSBs. 0 through 15 become 0. You
    > have to add 8 first, and then divide by 16 (integer division), then
    > multiply by 16, and subtract the 8, to get something similar to what you
    > would get if the ADC were actually doing the quantization.


    Fair enough, I'll redo the test.

    Here is the full set of images:

    See figure 9 at:
    http://www.clarkvision.com/photoinfo/night.and.low.light.photography

    Here is the original raw data converted linearly in IP, scaled by 128:
    http://www.clarkvision.com/photoinf...nightscene_linear_JZ3F7340_times128-876px.jpg

    Now here is the same data with the bottom 4 bits truncated:
    http://www.clarkvision.com/photoinf...linear_JZ3F7340-lose-4bits_times128-876px.jpg

    Now here is the same data with the bottom 4 bits truncated, doing nearest integer
    using your formula. While subjectively it looks a little better, it has still
    lost a lot of image detail compared to the full 12-bits:
    http://www.clarkvision.com/photoinf...7340-lose-4bits-nearestint_times128-876px.jpg

    You lose quite a bit in my opinion.
    It would be a disaster in astrophotography.

    Roger

    The ADC is
    > working with analog noise that dithers the results; you lose that
    > benefit" when you quantize data that is already quantized. You won't
    > notice the offset when the full range of DNs is high, but for one where a
    > small range of DN is used for full scene DR, it is essential. I am
    > amazed that you didn't stop and say to yourself, "I must have done
    > something wrong" when you saw your quantized image go dark. That's what
    > I said to myself, the first time I did it. I looked at the histograms,
    > and saw the shift, and realized that an offset is needed unless the
    > offset is a very small number relative to the full range of the scene.
    >
    > In the case of the mkIII image at 14, 12, 11, and 10 bits in another
    > post, I used PS' Levels, because it simplifies the process, by doing the
    > necessary offset to keep the distribution of tones constant.
    >
    >
     
    Roger N. Clark (change username to rnclark), Mar 25, 2007
  11. ipy2006

    Lionel Guest

    On Sat, 24 Mar 2007 14:00:08 GMT, John Sheehy <> wrote:

    >"Roger N. Clark (change username to rnclark)" <> wrote
    >in news::
    >
    >
    >> Well, lets look at this another way. Go to:
    >> http://www.clarkvision.com/imagedetail/dynamicrange2
    >>
    >> 4 bits is DN = 16 in the 0 to 4092 range. In 16-bit
    >> data file, that would be 16*16 = 256.
    >>
    >> Now go to Figure 7 and draw a vertical line at 256 on the
    >> horizontal axis. Now note all the data below that line that
    >> you cut off. Now go to Figure 8b and draw a vertical line
    >> at 4 stops, and note all the data you cut off. Now go to
    >> Figure 9D and draw the vertical line at 256 and
    >> note all the data you cut off. (Note too how noisy the
    >> 8-bit jpeg data are.)

    >
    >You can't just divide by 16, to drop 4 LSBs.


    Of course you can.

    > 0 through 15 become 0. You
    >have to add 8 first, and then divide by 16 (integer division), then
    >multiply by 16, and subtract the 8, to get something similar to what you
    >would get if the ADC were actually doing the quantization.


    What a complete load of crap. Have you *ever* worked with ADC's in
    your life?
    It sounds like you might be confusing 2 quadrant ADC's that're used
    for audio applications with single quadrant ADC's that're used for
    this sort of device.

    --
    W "Some people are alive only because it is illegal to kill them."
    . | ,. w ,
    \|/ \|/ Perna condita delenda est
    ---^----^---------------------------------------------------------------
     
    Lionel, Mar 25, 2007
  12. ipy2006

    teflon Guest

    On 25/3/07 17:00, in article ,
    "Lionel" <> wrote:

    > On Sat, 24 Mar 2007 14:00:08 GMT, John Sheehy <> wrote:
    >
    >> "Roger N. Clark (change username to rnclark)" <> wrote
    >> in news::
    >>
    >>
    >>> Well, lets look at this another way. Go to:
    >>> http://www.clarkvision.com/imagedetail/dynamicrange2
    >>>
    >>> 4 bits is DN = 16 in the 0 to 4092 range. In 16-bit
    >>> data file, that would be 16*16 = 256.
    >>>
    >>> Now go to Figure 7 and draw a vertical line at 256 on the
    >>> horizontal axis. Now note all the data below that line that
    >>> you cut off. Now go to Figure 8b and draw a vertical line
    >>> at 4 stops, and note all the data you cut off. Now go to
    >>> Figure 9D and draw the vertical line at 256 and
    >>> note all the data you cut off. (Note too how noisy the
    >>> 8-bit jpeg data are.)

    >>
    >> You can't just divide by 16, to drop 4 LSBs.

    >
    > Of course you can.
    >
    >> 0 through 15 become 0. You
    >> have to add 8 first, and then divide by 16 (integer division), then
    >> multiply by 16, and subtract the 8, to get something similar to what you
    >> would get if the ADC were actually doing the quantization.

    >
    > What a complete load of crap. Have you *ever* worked with ADC's in
    > your life?
    > It sounds like you might be confusing 2 quadrant ADC's that're used
    > for audio applications with single quadrant ADC's that're used for
    > this sort of device.


    'Dropping LSB's'? 'Quadrant ADC's'? My brain's fallen out.

    Are there any real photographers here?
     
    teflon, Mar 26, 2007
  13. ipy2006

    Paul Furman Guest

    acl wrote:

    > On Mar 25, 6:52 am, Paul Furman <> wrote:
    >
    >
    >>I finally took a shot where I wished I'd turned off RAW compression on
    >>my D200. It was the new moon, shot mid-day almost straight up, kinda
    >>hazy at +2 EC just before blowing then darkened in PP to a black sky and
    >>the remaining moon detail was pretty badly posterized. I actually got it
    >>to look good with a lot of PP work so I can't easily show the problem
    >>but I guess that was the cause. A rather unusual situation.

    >
    >
    > That's interesting; I never managed to see any difference between
    > compressed and uncompressed raw. Even when I tried to force it (by
    > unrealistically extreme processing) I couldn't see it, even by
    > subtracting the images in photoshop. Is it easy for you to post this
    > somewhere?


    Here's a 'bad' curves version, what I got out of the raw converter & the
    original:
    http://www.edgehill.net/1/?SC=go.php&DIR=Misc/moon/2007-03-22/tech
    -the final is up one folder
    I'll email the NEF file if you want to tinker, just remove the hyphens
    from my email. In the end I did salvage it pretty well just using ACR &
    8 bit photoshop.

    > From what you say, it sounds like you did some heavy
    > processing, did you do it in 16 bits or 8 (I mean after conversion)?
    > This sort of extreme adjustment is just about the only place where I
    > can see a difference between 8 and 16 bit processing (or 15 bit or
    > whatever it is that photoshop actually uses).
    >
    > On the one hand, I find it hard to believe it's the compression, the
    > gaps between the levels that are present are smaller than the
    > theoretical photon noise, so basically the extra tonal resolution of
    > uncompressed raw just records noise more accurately [and since you
    > can't really see shot noise in reasonably high-key areas, that tells
    > you it's irrelevant resolution anyway]. On the other hand, who knows?
    > Maybe there is some indirect effect.
    >
    >
     
    Paul Furman, Mar 26, 2007
  14. Roger N. Clark (change username to rnclark), Mar 26, 2007
  15. teflon wrote:
    []
    > 'Dropping LSB's'? 'Quadrant ADC's'? My brain's fallen out.
    >
    > Are there any real photographers here?


    Obviously there are, and ones who wish to have a better understanding of
    the equipment used. If you are uncertain about terms, why not ask or look
    them up?

    David
     
    David J Taylor, Mar 26, 2007
  16. ipy2006

    Lionel Guest

    On Mon, 26 Mar 2007 07:35:49 GMT, "David J Taylor"
    <-this-bit.nor-this-part.co.uk> wrote:

    >teflon wrote:
    >[]
    >> 'Dropping LSB's'? 'Quadrant ADC's'? My brain's fallen out.
    >>
    >> Are there any real photographers here?

    >
    >Obviously there are, and ones who wish to have a better understanding of
    >the equipment used. If you are uncertain about terms, why not ask or look
    >them up?


    And if he doesn't care about the topic, nobody's forcing him to read
    this thread.

    --
    W "Some people are alive only because it is illegal to kill them."
    . | ,. w ,
    \|/ \|/ Perna condita delenda est
    ---^----^---------------------------------------------------------------
     
    Lionel, Mar 26, 2007
  17. ipy2006

    John Sheehy Guest

    "acl" <> wrote in
    news::

    > On the one hand, I find it hard to believe it's the compression, the
    > gaps between the levels that are present are smaller than the
    > theoretical photon noise,


    That still posterizes the noise and signal a little bit. You're not
    likely to see it with any normal tonal curve; you really need to increase
    the contrast quite a bit, and you will see it. For example, I remember
    shooting in extreme fog a couple of years ago, where I used +2 EC with my
    20D, at ISO 400, and raised the effective blackpoint such that the dark
    parts of the Robins approached black. It brought up a bit of noise that
    would not normally be seen, with any exposure compensation level, while
    black was still anchored at black. Same with taking pictures of things
    reflected in glass over a white background, if you try to restore black
    in the processing.

    > so basically the extra tonal resolution of
    > uncompressed raw just records noise more accurately [and since you
    > can't really see shot noise in reasonably high-key areas, that tells
    > you it's irrelevant resolution anyway]. On the other hand, who knows?
    > Maybe there is some indirect effect.


    Recording noise better is a good thing, and the same conditions record
    signal better as well (and allows the brain and algorithms to separate
    them better, as well).

    In this particular case, it is only likely to be seen in extreme
    blackpointing, or perhaps extreme sharpening.

    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
     
    John Sheehy, Mar 26, 2007
  18. ipy2006

    acl Guest

    On Mar 27, 2:39 am, John Sheehy <> wrote:
    > "acl" <> wrote innews::
    >
    > > On the one hand, I find it hard to believe it's the compression, the
    > > gaps between the levels that are present are smaller than the
    > > theoretical photon noise,

    >
    > That still posterizes the noise and signal a little bit. You're not
    > likely to see it with any normal tonal curve; you really need to increase
    > the contrast quite a bit, and you will see it. For example, I remember
    > shooting in extreme fog a couple of years ago, where I used +2 EC with my
    > 20D, at ISO 400, and raised the effective blackpoint such that the dark
    > parts of the Robins approached black. It brought up a bit of noise that
    > would not normally be seen, with any exposure compensation level, while
    > black was still anchored at black. Same with taking pictures of things
    > reflected in glass over a white background, if you try to restore black
    > in the processing.


    Well yes, that is what I was thinking too (ie that posterising the
    noise could cause problems under extreme adjustments), but didn't
    actually see anything the couple of times I tried (by shooting in forg
    and moving the black and white points). I also played a bit with
    compressed and uncompressed raw files but could not see anything so
    far. Maybe I was not extreme enough.

    >
    > > so basically the extra tonal resolution of
    > > uncompressed raw just records noise more accurately [and since you
    > > can't really see shot noise in reasonably high-key areas, that tells
    > > you it's irrelevant resolution anyway]. On the other hand, who knows?
    > > Maybe there is some indirect effect.

    >
    > Recording noise better is a good thing, and the same conditions record
    > signal better as well (and allows the brain and algorithms to separate
    > them better, as well).
    >
    > In this particular case, it is only likely to be seen in extreme
    > blackpointing, or perhaps extreme sharpening.


    Yes, obviously you'd expect to see a difference under conditions that
    exaggerate small differences, ie extreme tonal stretching or
    sharpening (which is local tonal manipulation, after all). But I
    didn't.

    Well I'll try to play with Paul's example and see what happens
    (unfortunately I just remembered I have an early plane to catch
    tomorrow so it'll have to wait a bit).
     
    acl, Mar 26, 2007
    1. Advertising

Want to reply to this thread or ask your own question?

It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum.
Similar Threads
  1. Mike O.

    How low is "low light"?

    Mike O., Jan 3, 2004, in forum: Digital Photography
    Replies:
    15
    Views:
    619
    Michael Meissner
    Jan 4, 2004
  2. ishtarbgl
    Replies:
    0
    Views:
    586
    ishtarbgl
    Apr 1, 2004
  3. D

    Sony HC5E low frame rate in low light

    D, May 21, 2008, in forum: Digital Photography
    Replies:
    12
    Views:
    648
  4. low end digital / low light and macro ?

    , Jun 27, 2008, in forum: Digital Photography
    Replies:
    19
    Views:
    682
    Chris Malcolm
    Jul 11, 2008
  5. Brian
    Replies:
    31
    Views:
    1,173
    Bob Larter
    Jun 14, 2009
Loading...

Share This Page