Re: This is not an advertisement ...

Discussion in 'Digital Photography' started by otter, Dec 23, 2012.

  1. Eric Stevens <> wrote:
    > On Sun, 3 Feb 2013 19:18:42 +0100, Wolfgang Weisselberg
    >>Eric Stevens <> wrote:
    >>> On Sun, 27 Jan 2013 19:01:04 +0100, Wolfgang Weisselberg
    >>>>Eric Stevens <> wrote:
    >>>>> On Tue, 22 Jan 2013 23:12:07 +0100, Wolfgang Weisselberg
    >>>>>>Eric Stevens <> wrote:
    >>>>>>> On Sun, 20 Jan 2013 18:41:15 +0100, Wolfgang Weisselberg
    >>>>>>>>Eric Stevens <> wrote:
    >>>>>>>>> On Thu, 10 Jan 2013 18:54:12 +0100, Wolfgang Weisselberg
    >>>>>>>>>>Eric Stevens <> wrote:
    >>>>>>>>>>> On Fri, 28 Dec 2012 11:35:58 +1300, Eric Stevens
    >>>>>>>>>>>>On Thu, 27 Dec 2012 19:59:08 +0100, Wolfgang Weisselberg
    >>>>>>>>>>>>>Eric Stevens <> wrote:
    >>>>>>>>>>>>>> On Wed, 26 Dec 2012 15:16:34 -0800, Savageduck
    >>>>>>>>>>>>>>>On 2012-12-26 14:22:39 -0800, Eric Stevens <> said:


    >>>>Additionally, please explain why a low variation between
    >>>>measurements of a set of lenses does matter more than
    >>>>measuring the actual variations of a specific lens that may
    >>>>or may not be in the set --- given that we are talking about
    >>>>people who want and need the last bit of quality.


    >>> You seem to be overlooking the fact that there are two different
    >>> objectives.


    >>> (A) to characterise a particular type of lens.


    >>> (B) to characterise a particular lens.


    >>> (A) is what DxO sets out to do and so too does Adobe when determining
    >>> lens corrections to be built into Adobe software. From what I have
    >>> been told, the Adobe method can also be used by a lens-owner to (B)
    >>> characterise their particular lens, which characterisation can then be
    >>> loaded into Adobe software for the lens-owner's personal use. There is
    >>> nothing wrong with any of this.


    >>As to that: Adobe gets (A) by getting a number of (B)
    >>reports, too --- and for most any lens they'll get more (B)
    >>reports than DxO (assuming they did test multiple copies of
    >>lenses and bodies each) could amass.


    >>> Accuracy and repeatability is of paramount importance if tests are
    >>> going to be carried out on several examples of a particular lens type.


    >>A final accuracy for (A) that's much greater than the
    >>difference between typical lenses doesn't help.


    > Yes it does. The more precisely you can measure the more confidence
    > you can have that the variability you are seeing is real and not just
    > measurement error.


    I was unaware that typical photographers used their DxO programs
    to measure variability of their own lenses. I thought they
    used it to improve their shots.

    Secondly, you have still not shown that DxO actually does
    test several copies of the same lens on several copies of the
    same body, instead of simply rounding values to taste
    (whereby the accuracy is hacked to tiny bits by vicious
    alligators).


    >>Repeatability is only important insofar as too low
    >>repeatability affects accuracy.


    > High repeatability to a low standard of precision is of little use to
    > anybody.


    A machinegun firing (with low precision, i.e. rarely hitting)
    at enemy soldiers tends to make these soldiers duck and hide
    instead of shooting back. Which is very, very useful to the
    guys assaulting those soldiers' position.

    The opposite would be a sniper who usually hits with each
    shot, but only fires every other minute. Useful, somewhat
    (more so if key personnel can be targeted); deadlier per
    minute, maybe; but as valuable to the poor guys running
    towards those enemy soldiers? No way.


    >>> Accuracy is particularly important if different types of lenses are
    >>> going to be later compared on the basis of test results.


    >>And that's why DxO *voluntarily* rounds the values ---
    >>reducing accuracy a lot --- because of lens variability.
    >>(Says so in the URL you gave and cited!)


    > The results are still meaningful.


    Sure. Simply because many relevant digits are simply not
    needed in the first place?


    >>So obviously DxO disagrees very much with your claim.


    > So why does DxO still use their own results to make their comparisons?


    Their own rounded results? Because they simply don't need very
    accurate numbers and because the variation between copies
    makes the accurate numbers meaningless.


    >>> The highest degree of accuracy requires controlled test conditions and
    >>> certainly cannot be achieved by tests being carried out by different
    >>> people under different physical and lighting conditions.


    >>Guess what: DxO has different people and different physical
    >>conditions. (Or do you really think DxO has one lab guy
    >>who's barred from ever leaving the company? Or do you think
    >>they have the same distance between camera and test chart for
    >>varying focal lengths? QED.)


    > OK, so controlled testing conditions don't matter. You could have
    > fooled me.


    They matter only in so far as they can influence the results.
    Since the results are rounded anyway, anything that only affects
    what is rounded away doesn't matter. Additionally, you don't
    need to tightly control what influences the results only a
    little: you don't need to make sure that no sack of rice
    topples over in China if you are measuring the movements and
    vibrations of the earth in the UK.

    >>As to the lighting conditions: Pray tell why distortion
    >>correction should be sensitive to them!


    > Who said DxO is only measuring distortion?


    Who said you could measure your own lenses with DxO?


    >>> That's why I
    >>> expressed concern at the possibility that the Adobe data includes the
    >>> results of tests carried out under inconsistent or less than ideal
    >>> conditions.


    >>If Adobe did try to correct the colour rendition in one
    >>single condition you might have a claim. And I'd guess Adobe
    >>filters out outliers in the large number of data points they'll
    >>be getting.


    > You would guess ....


    > In other words you don't know.


    Since I am not Adobe, I don't know *and* I am big enough to
    say so.

    However, since the time involved is needed only once --- not
    for every lens-body combination --- and the effort is small ---
    unlike testing, say 5 lenses with 5 bodies, which means 25 times
    the effort needed for one single lens and one single body ---
    and since Adobe does seem to have programmers and does seem to
    know what they are doing, I'd be as surprised if they didn't
    as I would be to win first prize in the national lottery
    without ever having bought a ticket.
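    As an aside, one way such crowd-sourced reports *could* be
    cleaned up is a robust outlier filter. This is purely a
    hypothetical sketch: the method (median absolute deviation)
    and the threshold are my assumptions, not Adobe's documented
    procedure, and the numbers are invented.

    ```python
    # Hypothetical sketch: reject outliers from crowd-sourced lens-profile
    # reports using the median absolute deviation (MAD). Not Adobe's
    # documented procedure; method and threshold are assumptions.
    from statistics import median

    def filter_outliers(values, k=3.5):
        """Keep values within k robust 'standard deviations' of the median."""
        med = median(values)
        mad = median(abs(v - med) for v in values)
        if mad == 0:
            return [v for v in values if v == med]  # degenerate: no spread
        # 0.6745 scales the MAD so it is comparable to a standard deviation
        return [v for v in values if abs(0.6745 * (v - med) / mad) <= k]

    # Ten plausible distortion-coefficient reports, one wild outlier:
    reports = [-0.021, -0.019, -0.020, -0.022, -0.018,
               -0.021, -0.020, -0.019, -0.023, 0.150]
    kept = filter_outliers(reports)       # the 0.150 report is dropped
    estimate = sum(kept) / len(kept)      # average of the remaining nine
    ```

    With many independent reports per lens model, even a crude filter
    like this makes a single bad measurement mostly harmless.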

    >>> Then there is the problem tha,t as far as the final image is
    >>> concerned, the characteristics of the lens depends upon the camera on
    >>> which it was tested.


    >>Which characteristics would that be? Distortion?
    >>Vignetting? CA? In which way would they be camera-model
    >>sensitive more than camera-copy sensitive? Apart from
    >>sensor size (as in: a smaller sensor can't tell about the
    >>outer image circle of a larger sensor), that is?


    > The ability (or inability) of a sensor to detect sharpness is of
    > importance to some.



    You probably meant that some camera bodies have problems with
    aliasing and others do not. This does NOT change the
    characteristics of the lens. Using the same lens (ideally
    using multiple copies of it and of the bodies) against several
    body models, you can easily detect how strong the influence of
    aliasing and anti-aliasing really is --- both straight out
    of the camera and after post-processing using settings that
    work best for said camera.

    > That's why DxO take into account what they call
    > 'softness'.


    It's a correction factor, nothing complicated.


    >>> It is largely a waste of time to characterise a
    >>> lens without controlling for the type of camera being used. I have no
    >>> knowledge of how Adobe manages this (if at all).


    >>You can bet even Adobe allows you to tell it what type of
    >>camera you're using.


    > But do they take that into account when using the data for image
    > correction?


    For which correction that Adobe does would that give a
    significant improvement?


    >>>>>>>>>>> The procedure only "characterises three common types of lens
    >>>>>>>>>>> aberrations, namely geometric distortion, lateral chromatic
    >>>>>>>> ^^^^^^^^^^
    >>>>>>>>>>> aberration, and vignetting". There is no mention of the measurement of
    >>>>>>>>>>> what DXO calls lens 'softness'. Nor is there any mention of color.
    >>>>>>>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


    >>>>>>>>>>Look up "chroma" and "chrominance". Contrast to "luminance"
    >>>>>>>> ^^^^^^^^^^^^^^^^


    >>>>>>>>> None of which are mentioned in connection with the Adobe profile
    >>>>>>>>> procedure or data.


    >>>>>>>>I've underlined the parts you should reread.


    >>>>>>> With what end in mind?


    >>>>>>There was a slim chance you'd recognize that your claim is
    >>>>>>true only if you meant the literal word "color", spelled
    >>>>>>exactly like that.


    >>>>> You are playing around. Tell me exactly what you mean and give quotes
    >>>>> and cites.


    >>>>Please look up what chroma means.
    >>>>Then please look up what a chromatic aberration is.
    >>>>Then tell me why you think that has nothing to do with colour.


    >>> Much beating around the bush and eventually you get to the point.


    >>Mostly you being dense as tungsten and too lazy to use a
    >>search engine or a dictionary. Do you get your food pre-chewed
    >>as well?


    >>> Both
    >>> DxO and Adobe assess chromatic aberration.


    >>Which is "colour fringing" to you. Which is a clear
    >>mentioning of "color".


    > And you think that that is the only aspect of colour worth measuring.


    Putting words in my mouth? What next? Assaulting my
    character?

    > Fortunately DxO don't agree with you. I've already told you that.


    DxO has not said a word on your "Nor is there any mention of
    color". If I'm wrong on that count, URL please.


    >>> As far as I am aware only
    >>> DxO measures the colour profile of the camera's sensor.


    >>They don't do that. It wouldn't make much sense, either.


    > Actually, they do do that. And it does make sense.


    Actually, you THINK that they do that. But DxO measures
    *a* profile (there is no *the*), and since that profile is
    influenced by the lens (especially with third-party lenses),
    it is *a* profile of a specific *lens-camera combination*.
    (That's reason one why a profile of the sensor alone wouldn't
    make much sense; for reason two, see the next paragraph.)

    Worse, as you know (just shoot with the same colour settings
    in open shade, sunlight, tungsten and a bonfire, as well as
    under green trees and see for yourself), the light that reaches
    your subjects to be reflected and re-emitted to your camera is
    a variable --- a *very* sensitive variable --- that influences
    the colour.

    Thus DxO can only produce colour profiles of the lens-camera
    combination *for the specific light* used for the profiling.

    And they have no idea what light was used at any exposure,
    except that they might look at the brightness (-> EXIF) and
    guess "outside, sun shining" if it's very bright, and they
    might guess what constitutes white. To know would require a
    spectrogram of the incident light.
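    A toy numerical illustration of that point (all spectra are
    invented, and the sensor is idealised down to three bands):
    the same surface produces different camera RGB under different
    illuminants, so any colour profile is only valid for the light
    it was measured under.

    ```python
    # Toy illustration: same surface, two illuminants, different camera RGB.
    # Three coarse wavelength bands (red, green, blue); the idealised sensor
    # samples exactly one band per channel. All numbers are made up.
    daylight = [1.0, 1.0, 1.0]   # flat spectrum
    tungsten = [1.3, 0.7, 0.3]   # red-heavy, blue-poor spectrum
    surface  = [0.4, 0.6, 0.2]   # spectral reflectance of one test patch

    def camera_rgb(illuminant, reflectance):
        # Per band: light reaching the patch times the fraction reflected
        return [i * r for i, r in zip(illuminant, reflectance)]

    rgb_day = camera_rgb(daylight, surface)
    rgb_tun = camera_rgb(tungsten, surface)

    # The red/blue ratio a profile would have to "correct" differs wildly:
    ratio_day = rgb_day[0] / rgb_day[2]
    ratio_tun = rgb_tun[0] / rgb_tun[2]
    ```

    One fixed correction cannot make both cases come out right,
    which is exactly the "no right one for all light situations"
    point made elsewhere in this thread.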


    >>>>>>>>>>> In particular, unlike DXO, what is being measured appears to be the
    >>>>>>>>>>> behaviour of a particular lens/camera combination.


    >>>>>>>>>>You meant to say "exactly like DxO, however, unlike DxO it'll
    >>>>>>>>>>be your camera and your lens you test".


    >>>>>>>>> No. Unlike DXO.


    >>>>>>>>Really?


    >>>>>>>>> DXO will test a lens on a number of different cameras. They will also
    >>>>>>>>> test a camera on a number of different lenses.


    >>>>>>>>If I test 2 cameras I borrowed and 3 lenses I rented in each
    >>>>>>>>combination, I'm doing it exactly like DxO (cause I'm not
    >>>>>>>>testing my own stuff and most of the tests don't help me for
    >>>>>>>>a lack of having that gear), but if I test my own camera(s)
    >>>>>>>>to my own lens(es), it's different from DxO?


    >>>>>>No answer?


    >>>>> You are now proposing something completely different from your "unlike
    >>>>> DxO it'll be your camera and your lens you test".


    >>>>Stop playing around and dancing around and fooling around.
    >>>>You do understand perfectly well --- if not, ask a five year
    >>>>old! --- you just don't want to admit it.


    >>> It cannot be "your camera and lens you test" if your test is of a
    >>> number of different borrowed and rented cameras and lenses.


    >>Finally. You're beginning to understand. Testing copies
    >>of lenses and cameras I don't own to make a profile for *my*
    >>copies ... that's DxO.


    > No one ever claimed that DxO made a profile for your camera. In any
    > case, unless you are using second-rate equipment, I can't imagine that
    > you can determine the profile for your equipment to a standard of
    > accuracy sufficient to distinguish it from other similar equipment.


    You can't imagine? So you're basically saying it's impossible
    to set a microfocus adjustment for a first class lens-camera
    combination when both are within the calibration range as
    specified by the producer because any other lens-camera
    combination (same model, different copy) would reach the
    identical microfocus setting?

    Well, I can imagine you imagining that.




    >>>>>>>>> This class of research is way above the level of photographing
    >>>>>>>>> a brick wall.


    >>>>>>>>Eric, if my photo of a brick wall shows strong distortions, I
    >>>>>>>>don't care if DxO labs says about that lens-camera combination
    >>>>>>>>that there's no distortion.


    >>>>>>> Then you need DxO software to rectify the distortion, don't you?


    >>>>>>Epic fail. Facepalm.


    >>>>>>Slow down, stop knee jerking, start thinking.


    >>>>>>If DxO says: "no distortion", all they can correct is "no
    >>>>>>distortion", not "strong distortions", since they don't have
    >>>>>>the data they need for rectifying in the first place.


    >>>>> You keep putting up straw men.


    >>>>The straw man of photographing brick walls? That one is yours.


    >>See?


    > "DXO will test a lens on a number of different cameras. They will
    > also test a camera on a number of different lenses.


    Yes. One single copy each.

    > By a suitable
    > choice of combinations they are able to characterise to a
    > sufficient degree of accuracy the properties of the cameras and the
    > properties of the lenses.


    Sufficient == after rounding they're in the right ballpark.
    Differences between various copies are all nulled out by the
    rounding.

    > (Look up the design of experiments if you
    > want to know more.)


    URL to the theory and the mathematical proof is where?

    > With this information they are able to assess the
    > performance of combinations of lenses and cameras which they have
    > not actually tested.


    Ken Rockwell can do that, too.
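    For reference, a minimal sketch of the "design of experiments"
    idea quoted above --- my reading of it, not DxO's published
    method, with invented numbers: if a quality metric decomposes
    additively into a lens term and a body term, testing a subset
    of combinations lets you predict an untested pairing.

    ```python
    # Sketch (assumption, not DxO's documented procedure): an additive
    # model m(lens, body) = lens_term + body_term lets three measured
    # combinations predict the fourth, untested one. Numbers invented.
    m_l1_b1 = 5.0   # lens 1 on body 1
    m_l1_b2 = 7.0   # lens 1 on body 2
    m_l2_b1 = 6.0   # lens 2 on body 1

    # Under the additive model, without ever mounting lens 2 on body 2:
    # (lens2 + body2) = (lens2 + body1) + (lens1 + body2) - (lens1 + body1)
    m_l2_b2_pred = m_l2_b1 + m_l1_b2 - m_l1_b1
    ```

    Note what the sketch quietly assumes: one copy per lens and per
    body, and no lens-body interaction term --- which is precisely
    the copy-variation objection raised in this thread.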

    > This class of research is way above the level
    > of photographing a brick wall."


    > That's not a straw man.


    There's your brick wall, and if my photo of a brick wall
    shows something DxO claims is different, guess you'd believe
    DxO and I'd believe my eyes.


    >>>>>>On the other hand, I can just use Adobe's method (or use hugin,
    >>>>>>for heaven's sake) and correct the distortion really observed
    >>>>>>with my specific gear. I don't need DxO at all for that.


    >>>>> If I wanted evidence that you don't know what DxO actually does, the
    >>>>> paragraph above is more than sufficient. You have trapped yourself
    >>>>> into thinking that in the field of camera and lens characterisation
    >>>>> DxO does the same thing as Adobe. In fact, if you look at their
    >>>>> software you will discover that DxO does considerably more.


    >>>>If I needed evidence that reading comprehension is a skill you
    >>>>lack sorely, I'd only needed to point out that I was talking
    >>>>about distortion.


    >>> It's funny that: you keep wanting to avoid the extra things that DxO
    >>> does.


    >>Tell me which extra things they do that *provide value* to
    >>the user. Some colour measuring for a specific situation
    >>that's not reproducible by the user and for which the user
    >>who may need it already has better and more flexible tools?
    >>That's not providing value to the user.


    > You can't keep on denying that DxO does extra things so you Squirm.


    "Tell me which extra things they do that *provide value* to
    the user." I don't care if they dance naked or sing Klingon
    operas. I pay for results, not for doing 'extra things'.

    > We used to refer to this as 'squink'.


    You being a multiple personality explains why I need to
    repeat myself all the time with you.


    >>Not being able to profile
    >>your own lenses? Testing only one copy and rounding the values?
    >>Offering some "optimized" sharpening based on a completely
    >>different lens copy on a completely different body copy? Yep,
    >>sounds cool, but if you want the last bit of sharpness ---
    >>where that tool may help --- you already have good lenses and
    >>the differences between copies start to become the limiting
    >>factor, not where in the frame something is.


    > Squink.


    You can't come up with a better answer than squirming in
    public?


    >>So much for your accuracy-for-comparison claim.


    >>>>> How do you think they assess the variability? Test the same lens five
    >>>>> times, perhaps?


    >>>>That would be a valid method of testing their testing method's
    >>>>variability.


    >>> How would you go about testing the variability of a particular type of
    >>> lens?


    >>Who says DxO is testing the variability? Care to show some URL?


    > Are you really suggesting they pull numbers out of the air?


    More or less, yes. Educated guess and all that. It's *much*
    *much* cheaper than actually testing every lens.


    >>>>How come they don't use averaged test results?
    >>>>How come they don't give upper and lower limits?
    >>>>How come they have to "factor in variability" by rounding numbers?


    >>> Because they are trying to reduce a very complicated subject to
    >>> something which can be understood by the ordinary mortal.


    >>I see. Averaging test results --- resulting in the same amount
    >>of numbers --- is too difficult for "ordinary mortals".
    >>Upper and lower bounds --- well, even price comparison
    >>engines use them, and they're designed for very ordinary,
    >>non-technical mortals.


    > DxO are trying to relate things such as lines/mm to final image
    > quality in a way that the non-expert will understand.


    And that's why they do lotsa calculations, but averaging a
    number is too complicated.
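    The alternative reporting argued for above really is trivial.
    A sketch with invented numbers: given measurements of several
    copies of one lens model, an average plus lower and upper
    bounds is no harder to publish than one rounded figure.

    ```python
    # Sketch: average and bounds across several copies of one lens model.
    # The measurements are invented, purely for illustration.
    copies = [41.2, 39.8, 40.5, 42.1, 40.0]   # e.g. resolution in lp/mm per copy

    avg = sum(copies) / len(copies)           # one number, like a rounded figure
    low, high = min(copies), max(copies)      # two more numbers, showing the spread
    ```

    Reporting `low`/`high` alongside `avg` is exactly the kind of
    range even price-comparison engines show to non-technical users.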


    >>The only reducing they do is simply measuring one lens on
    >>one body.


    >>>>The simplest, most natural solution (look up Occam's razor)
    >>>>for the observed results and the claims provided by you is that
    >>>>they only test "a single copy of each lens", and variability
    >>>>is guessed at, bolstered by testing a very few lenses with
    >>>>more than one copy.


    >>> You are implying they use dishonest test procedures.


    >>I am implying that you can't provide any proof they test more
    >>than one lens copy on one body copy, while I can bring lots
    >>of facts and observations that point to that. I can't help
    >>you feel cheated because you always thought they did test
    >>multiple copies.


    > Squink again.


    Go on, squirm in public!


    >>>>>>>>> For example, I don't think Adobe can embed
    >>>>>>>>> corrections for colour inaccuracy in their profiles. DXO most
    >>>>>>>>> certainly does.


    >>>>>>>>Please show the cite where DxO does that.


    >>>>>>> It's in their software. You can select camera makes and models and see
    >>>>>>> the effects.


    >>>>>>I see the same effects when I use a different calibration curve.
    >>>>>>I see the same effects when I switch off the monitor profile.
    >>>>>>I see the same effects when I switch to a linear colourspace.
    >>>>>>I see the same effects when I move the RGBCMY sliders around.


    >>>>> But none of the above bullshit is tied to the characteristics of your
    >>>>> own particular type of camera.


    >>>>It's extremely trivial to tie a different setting of the
    >>>>bullshit to each and every type of camera. Would you note
    >>>>the difference?


    >>> Oh yes.


    >>How?
    >>Did you measure it?


    > I don't have to measure anything. You asked me if I would note the
    > difference and I answered "Oh yes".


    You'd note the difference when switching between two sets of
    calibration curves --- I believe that.

    My question was how you noted that ONE SPECIFIC correction
    was the right one, instead of only a different one. Knowing
    that the eye is a lousy judge I asked if you could back up
    your claim by measurements ...

    .... but I guess you just note the difference and *believe*
    with religious fervour that this one is the right one. (And
    that even though you've been told that there is no right one
    for all light situations.)


    > --- snip ---


    > I've had enough.


    > You only want to argue.


    > You don't want to reach a conclusion.


    > Goodbye.


    I'm tired of having to write the same facts over and over in
    10 different ways because you don't get them 9 times out of 10.
    I'm tired of trivially finding counter examples to your claims.
    I'm tired of you imagining things and taking them as gospel
    truth even though not even the gospel from DxO says so ---
    but you getting all huffy when I clearly say that I assume
    something.

    -Wolfgang
     
    Wolfgang Weisselberg, Feb 7, 2013
    #21

  2. Eric Stevens <> wrote:
    > On Sun, 3 Feb 2013 19:21:49 +0100, Wolfgang Weisselberg
    >>Eric Stevens <> wrote:
    >>> On Sun, 27 Jan 2013 19:19:09 +0100, Wolfgang Weisselberg
    >>>>Eric Stevens <> wrote:
    >>>>> On Tue, 22 Jan 2013 23:53:49 +0100, Wolfgang Weisselberg
    >>>>>>Eric Stevens <> wrote:


    >>>>>>> But then he sends the results to Adobe. Unless they have already
    >>>>>>> received the results of other immaculately performed tests on the same
    >>>>>>> kind of gear Adobe will have no idea of whether the required
    >>>>>>> corrections fall in the upper, lower or middle sections of the range
    >>>>>>> of possible corrections. They won't know whether the corrections are
    >>>>>>> required to correct defects in the camera or the lens.


    >>>>>>Same with DxO not knowing it from just one lens copy. Except
    >>>>>>that Adobe may have tested the lens themselves, and 10,000
    >>>>>>other people did so as well. Does DxO test 10,000 copies?


    >>>>> You are now pulling numbers out of the air. Why do you have to resort
    >>>>> to this?


    >>>>Because you are pulling numbers like "Adobe will only get one
    >>>>result" and "DxO tests multiple copies" out of the air.


    >>> Where did I ever say that?


    >>You implied it. You also required "immaculately performed
    >>tests", which not even DxO can produce, because you don't
    >>grasp statistics, error boundaries and averages.


    >>>>10,000 is a good estimate for a popular lens-body combination.


    >>> Why?


    >>Count of lenses and bodies in use
    >>% of people testing their lenses with Adobe
    >>% of people who then send in their results


    > All of which data you can only guess at.


    Yep. But my guesses are
    a) marked as such, not simply implied like yours
    b) plausible, and I can show why
    c) end up in the right ballpark

    > For reasons which I have given in another article - Goodbye.


    Ah, the need for a parting shot. Two can play THAT game.

    -Wolfgang
     
    Wolfgang Weisselberg, Feb 7, 2013
    #22

  3. John Turco wrote:

    On 1/27/2013 12:22 PM, Wolfgang Weisselberg wrote:

    <heavily edited for brevity>

    > Occam's razor supports me.
    >
    > -Wolfgang



    How many shaves does Occam's razor give him?

    John
     
    John Turco, Feb 10, 2013
    #23
  4. John Turco <> wrote:
    > On 1/27/2013 12:22 PM, Wolfgang Weisselberg wrote:


    > <heavily edited for brevity>


    >> Occam's razor supports me.


    > How many shaves does Occam's razor give him?


    It's not how many, it's how close.

    -Wolfgang
     
    Wolfgang Weisselberg, Feb 11, 2013
    #24
