why device independent color?

Discussion in 'Digital Photography' started by Dale, Jan 23, 2014.

  1. Dale

    Dale Guest

    if you want to purpose an image to more than one output device's color,
    and have the output look the same

    or

    if you want different input device color purposed to different output
    device color(s) and want the output to look the same

    then

    you need to convert the device colors through a device independent color
    space like XYZ, CIELAB, or CIELUV
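
    As a sketch of what that conversion involves, here is device RGB ->
    XYZ -> CIELAB in pure Python, using the standard published sRGB matrix
    and D65 white point (rounding and rendering-intent handling omitted):

```python
# Device RGB -> device-independent XYZ -> CIELAB, as the post describes.
# Constants are the standard sRGB (IEC 61966-2-1) and CIELAB values.

def srgb_to_linear(c):
    """Undo the sRGB transfer curve (c in 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    """Linearize, then apply the sRGB-to-XYZ (D65) matrix."""
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

def xyz_to_lab(x, y, z, white=(0.95047, 1.0, 1.08883)):  # D65 reference white
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((x, y, z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# sRGB white (1,1,1) should land at L* = 100, a* = b* = 0 (within rounding).
L, a, b = xyz_to_lab(*srgb_to_xyz(1.0, 1.0, 1.0))
```

    Once image data is in Lab, a second profile carries it out to any
    destination device, which is the whole point of a connection space.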

    I remember the introduction of the sRGB standard color space

    I remember speaking on Kodak's internal ICC ( http://www.color.org )
    mailing list, espousing that sRGB would be an excuse NOT to make device
    profiles with regard to the device independent color space(s)

    I think the use of SWOP CMYK standards had a similar result

    it's been almost 20 years, and it seems like most cameras are using sRGB
    or ProPhotoRGB as default profiles instead of getting a REAL profile
    from the hardware vendor or making such a profile themselves

    people don't consider how far an image can travel accurately when it is
    multi-purposed through device independent color

    there are few vertical imaging workflows left, perhaps there you can
    translate the color by matching filtration, etc.

    the only place I see for sRGB and SWOP is consumer related imaging

    not to say that RGB/CMY (with or without maintenance of the black
    channel) isn't a good working space; I just don't see it as a profile
    connection space, since there are MANY RGBs, they are device dependent,
    and have device dependent color, whereas XYZ, CIELAB, and CIELUV are
    independent of device

    let me take this time to also say that the ICC's print reference medium
    has been a start at tackling appearance matching instead of just color
    matching; there ought to be more reference media, and implementations of
    such use-cases, to make better workflows
     
    Dale, Jan 23, 2014
    #1

  2. Dale

    Guest Guest

    completely wrong.

    what is needed is a colour managed workflow, with the image and each
    device along the way having a profile.
     
    Guest, Jan 23, 2014
    #2

  3. Dale

    Guest Guest

    users do not need to convert the image.

    what they need to do is use a colour managed workflow and the computer
    takes care of the details.

    if you choose a different printer, pick the relevant profile and
    whatever conversions are necessary are done automatically.

    once again, let the computer do the work.
     
    Guest, Jan 24, 2014
    #3
  4. Dale

    Dale Guest

    no, let lab dudes do gamut compression math, etc., by hand, for each image
     
    Dale, Jan 24, 2014
    #4
  5. Dale

    Dale Guest

    that's how you get the profiles
     
    Dale, Jan 24, 2014
    #5
  6. Dale

    Dale Guest

    profiles are calculated to go from device space to device independent
    space, or vice versa

    there are other considerations ...

    but sRGB, SWOP, and ProPhotoRGB are NOT device independent color spaces;
    they are device standard spaces, to which equipment/media are matched by
    design

    like a TV and a TV Camera, or like consumer imaging nowadays

    even those might want to repurpose the image outside such a chain, in
    which case you need to go through a device independent space with a profile

    with all the different things happening in television besides P22 and
    EBU phosphor CRT displays - there are LCD, LED, plasma, OLED, maybe more -
    I think sRGB is going to die, and ProPhotoRGB with it, like SWOP
    already might have
     
    Dale, Jan 24, 2014
    #6
  7. Dale

    Guest Guest

    what lab dudes? what labs?

    people process their own images on their own computers, and all they
    need to do is adopt a colour managed workflow and let the computer do
    the work.

    there is no need to do the math by hand for each image.
     
    Guest, Jan 24, 2014
    #7
  8. Dale

    Guest Guest

    no, you get the profiles by running the appropriate profiling software.

    what the software does internally doesn't matter. users do not need to
    understand all the math behind it to be able to use it.

    what matters is whether the user gets what they expect, and the answer
    is yes.
     
    Guest, Jan 24, 2014
    #8
  9. Dale

    Guest Guest

    the computer knows how to convert it. the authors of the profiling
    software need to understand the math to write the software to do the
    conversions. that's about the extent of it.

    the end users do not need to understand any of it, other than how to
    use profiles in a colour managed workflow.
     
    Guest, Jan 24, 2014
    #9
  10. Dale

    Guest Guest

    of course it is.
    no it isn't.

    the user wants as close a match as possible, given the limits of a
    device. that requires a colour managed workflow.

    they don't need to know the math as to how it works.
    not at all.
     
    Guest, Jan 25, 2014
    #10
  11. Dale

    Guest Guest

    given that users do not have to do that, what exactly am i missing?

    the *computer* might do it internally (or it might not), depending on
    what needs to be done to produce the result the user wants.

    the user does not need to worry about that, nor do they need to know
    what any of that means.

    what matters is if they get the expected results, and with a colour
    managed workflow, they do.
     
    Guest, Jan 25, 2014
    #11
  12. Dale

    Guest Guest

    i did, and it's wrong.
     
    Guest, Jan 25, 2014
    #12
  13. Dale

    Dale Guest

    I think the value of device independent color is underestimated

    sure, light sources and filtration can make it easier; Eikonix/Kodak
    has/had a patent on filtration for scanners and maybe cameras that
    matched XYZ, which makes it a lot easier - you could probably use a
    matrix and a 1D LUT instead of a 3D LUT, and ICC can accommodate both
    math constructs
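
    A minimal sketch of that matrix + 1D-LUT construct (the ICC
    "shaper/matrix" model): each channel goes through its own 1D curve, then
    a 3x3 matrix maps linearized RGB to XYZ. The LUT and matrix below are
    illustrative identity placeholders, not a real camera characterization.

```python
# Matrix + 1D-LUT profile model: per-channel curve, then 3x3 matrix.

def apply_1d_lut(value, lut):
    """Piecewise-linear lookup of value (0..1) in an equally spaced 1D LUT."""
    pos = value * (len(lut) - 1)
    i = min(int(pos), len(lut) - 2)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

def shaper_matrix(rgb, luts, matrix):
    """1D LUT per channel, then a 3x3 matrix multiply."""
    lin = [apply_1d_lut(v, lut) for v, lut in zip(rgb, luts)]
    return [sum(m * v for m, v in zip(row, lin)) for row in matrix]

# Toy example: identity curves and an identity matrix pass input through.
identity_lut = [0.0, 0.5, 1.0]
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
out = shaper_matrix([0.25, 0.5, 0.75], [identity_lut] * 3, identity)
```

    A 3D LUT samples the whole color cube instead, which handles crosstalk
    the matrix model cannot, at the cost of a much larger table.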

    but I see a problem with digital cameras

    chrome film/scanners were easy for device independent workflows, you
    only needed to match the chrome

    with a digital camera you have to match the scene, so appearance
    considerations, as opposed to pure color considerations, come into play,
    like white balance
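
    As a toy illustration of that appearance step, here is a "gray world"
    white balance; real raw converters use far more sophisticated illuminant
    estimation, but the idea of rebalancing channels for the scene is the same:

```python
# Toy "gray world" white balance: force the image average to neutral gray.

def gray_world_gains(pixels):
    """Per-channel gains that make the mean of each channel equal."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    return [gray / m for m in means]

def apply_gains(pixels, gains):
    return [[v * g for v, g in zip(p, gains)] for p in pixels]

# A bluish cast: channel averages (0.4, 0.5, 0.6) get pulled back to gray.
img = [[0.4, 0.5, 0.6]] * 4
balanced = apply_gains(img, gray_world_gains(img))
```

    This step depends on the scene and illuminant, which is exactly why a
    camera is harder to characterize once and for all than a chrome scanner.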

    I have heard people are using the RAW camera files without the white balance

    I have heard Kodak has a patent on how to characterize cameras without a
    target

    targets are a hassle for photographers

    I don't think this is going to be resolved until camera manufacturers
    make profiles for their cameras instead of using sRGB, ProPhotoRGB, etc.

    they could do this with information about sensor sensitivity, sensor
    filtration, etc., and stuff it into an ICC profile
     
    Dale, Jan 25, 2014
    #13
  14. Dale

    Dale Guest

    the way things are NOW, photographers and lab techs/engineers have to
    know about the making of profiles; once device/driver manufacturers make
    profiles for their devices, it will be more like you are getting at - it
    shouldn't matter to the user, though sometimes a user might want to
    make/edit his own profiles

    this leads to measurement instrumentation

    when I worked at Kodak we had spectroradiometers, colorimeters etc.,
    that cost over $100,000

    the software I see now is for instruments at the X-Rite and MacBeth
    level; it works okay with software like Kodak's ColorFlow, where you are
    actually creating an edited profile, but I think the ICC needs to exert
    more influence on device/driver manufacturers

    someone needs to have the high priced instruments

    then again there is such a thing as "good enough", especially when
    applied to consumer imaging, television is trying to get into the high
    quality professional markets though
     
    Dale, Jan 25, 2014
    #14
  15. Dale

    me Guest


    Really? I've heard people have seen UFOs. Instead of repeatedly laying
    out your perceived problem (real or not), why not lay out the solution,
    rather than "I heard" or "all you have to do is..."?

    How exactly do you make a single profile which fits an infinite
    variation of lighting? If you don't characterize the lighting you
    don't have enough information to solve the problem at hand.
     
    me, Jan 25, 2014
    #15
  16. Dale

    Guest Guest

    making a profile is easy. just run the software.

    what photographers and techs don't need to know is the math behind the
    conversions and everything else about colour management.
    no they don't.

    the low priced colour pucks work exceptionally well, and since they are
    affordable by just about anyone, they actually get used.
    today's low price products are *better* than the overpriced stuff you
    may have had long ago.
     
    Guest, Jan 25, 2014
    #16
  17. Dale

    Guest Guest

    sure, but users need not concern themselves with any of that.

    all they need to do is adopt a colour managed workflow.
     
    Guest, Jan 26, 2014
    #17
  18. Dale

    PeterN Guest

    Interested people want to know.
     
    PeterN, Jan 26, 2014
    #18
  19. Dale

    Martin Brown Guest

    Yes. But only a handful of people who work on the design of imaging
    systems actually need to understand the details of the mathematics that
    underpins moving between colour spaces reliably. The end user merely
    needs to be able to see clearly what parts of his image cannot be
    rendered accurately on the final destination medium and preview what it
    will look like after the compromises are made for gamut capability.
    You don't need that many of the high end instruments - modern simple
    color measurement devices are now surprisingly good. The dye
    manufacturers and printer/display makers labs will need such kit to
    characterise the properties of new inks and papers or OLED/plasma/LCD
    but end users can get by with very modest colorimetry.
    As a concrete example: photograph a few colour paint sample charts, run
    them through a calibrated workflow, then compare the resulting print
    against the original. The human eye is very good at spotting small
    differences in hue - especially on near flesh tones.

    Heck this is already so well established that there is paint
    manufacturer software to allow you to photograph a small test chart with
    a hole in it for the unknown target colour on your mobile phone. Email
    it to the paint maker and they will send you back a mix formula to match
    it that can be taken to your nearest DIY store and works.

    It isn't that long ago that individual batches of paint with nominally
    the same colour formulation could have radically different properties.

    American NTSC TV used to amuse Europeans because the newscaster would
    drift between having ghoulish green and surreal purple flesh tones or
    else be clamped to an unearthly pale orange by the flesh bodger. I
    always assumed it was an inherent limitation of NTSC until I saw the
    Japanese domestic implementation of it which works flawlessly.
    I was involved in some of the very early dyesub printing in Japan. They
    kept separate colour profiles for printing souvenir images of visiting
    VIPs - Westerners and Japanese. These were largely subjective and
    neither group liked seeing a neutral balanced version of their portrait!

    When a westerner was due one of us would be photographed and printed to
    check the calibration. A Westerner printed on the Japanese setting would
    look pink like they were drunk and a Japanese person printed on the
    Westerner setting would look jaundiced. Neither setting represented true
    calibrated neutral reality but the "customers" didn't like reality!
     
    Martin Brown, Jan 27, 2014
    #19
  20. Dale

    Martin Brown Guest

    But that is clearly not true!

    It is a lot more convenient to convert to a device independent colour
    space and from there to whatever output medium you want to use, because
    the number of profiles needed for N different image sources and M
    destinations is limited to N+M colour profiles.

    But you could with a *lot* more work compute direct colour profiles for
    every possible combination of source and destination N*M. In the early
    days when N was about 3 and M was about 4 that was what happened.
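
    The arithmetic behind that comparison, spelled out:

```python
# Profile counts: via a device independent connection space you need one
# profile per device (N + M); direct device-to-device conversion needs one
# per (source, destination) pair (N * M).

def profiles_via_pcs(n_sources, m_destinations):
    return n_sources + m_destinations

def profiles_direct(n_sources, m_destinations):
    return n_sources * m_destinations

# The early-days case from the post: N = 3 sources, M = 4 destinations.
via_pcs = profiles_via_pcs(3, 4)   # 7 profiles
direct = profiles_direct(3, 4)     # 12 profiles
```

    The gap widens quickly: at N = M = 10 it is 20 profiles versus 100.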

    It may still make a lot more sense to store the original image in the
    colour space where it was measured and only ever compute the device
    independent form as a hidden step on the way to the output device.

    You lose a bit to rounding errors in every colourspace conversion
    (with a handful of exceptions that are exactly invertible).
     
    Martin Brown, Jan 27, 2014
    #20
