HDR/composite pictures with JPEG?

Discussion in 'Digital Photography' started by none, Nov 20, 2005.

  1. none

    none Guest

    Is it useful to make composite pictures with JPEG images? Or is a
    lossless/RAW format necessary?

    Thanks,
    -Mike
     
    none, Nov 20, 2005
    #1

  2. "none" <> wrote in message
    news:G7Qff.4192$Ze6.1644@trndny04...
    > Is it useful to make composite pictures with JPEG images?


    Yes it is, even if you lose a tiny bit of dynamic range versus what
    Raw can offer. You do need software that can blend the images in Gamma
    1.0 space, if you want to avoid potential blending errors (colors
    and/or exposure mismatches). It is also easier to automatically match
    the exposure differences when all data is in gamma 1.0 space (exposure
    differences can be expressed as a simple multiplier).
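
    As a rough sketch of that idea -- assuming gamma-2.2-encoded 8-bit
    brackets shot one stop apart, and using stand-in data rather than real
    camera files -- the matching step can look like this:

    import numpy as np

    def to_linear(img8, gamma=2.2):
        # Decode 8-bit gamma-encoded values to linear (gamma 1.0) floats in 0..1.
        return (np.asarray(img8, dtype=np.float64) / 255.0) ** gamma

    # Stand-in brackets, one stop apart.
    rng = np.random.default_rng(0)
    scene = rng.uniform(0.0, 1.0, size=(4, 4))            # "true" linear luminance
    short_exp = (scene * 0.5) ** (1 / 2.2) * 255          # -1 stop, gamma encoded
    long_exp = (scene * 1.0) ** (1 / 2.2) * 255           # reference exposure

    # In gamma 1.0 space a 1-stop difference is a plain factor of 2, so the
    # darker frame is matched to the brighter one with a single multiplication.
    matched = to_linear(short_exp) * 2.0
    print(np.allclose(matched, to_linear(long_exp)))      # True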

    > Or is a lossless/RAW format necessary?


    In fact, the first implementation of HDR in Photoshop CS2 uses
    8-bit/channel images. In that case it is beneficial to start with a
    gamma-adjusted image if one wants to avoid too much posterization.
    That doesn't mean that it's not better to start with as much data as
    is available, but you'll have to make do with the tools available.

    Bart
     
    Bart van der Wolf, Nov 20, 2005
    #2

  3. none

    none Guest

    Bart van der Wolf wrote:
    > Yes it is, even if you lose a tiny bit of dynamic range versus what Raw
    > can offer.


    Thanks -- that is great to hear!


    > You do need software that can blend the images in Gamma 1.0
    > space, if you want to avoid potential blending errors (colors and/or
    > exposure mismatches).


    You're saying that all the images have to be normalized in some way?


    > In fact, the first implementation of HDR in Photoshop CS2 uses
    > 8-bit/channel images.


    boggle...won't images have 8-bit/channel regardless of whether they are
    in jpeg, tiff, or whatever else?

    -Mike
     
    none, Nov 20, 2005
    #3
  4. none <> writes:

    >> In fact, the first implementation of HDR in Photoshop CS2 uses
    >> 8-bit/channel images.

    >
    > boggle...won't images have 8-bit/channel regardless of whether they
    > are in jpeg, tiff, or whatever else?


    Many cameras store 12 bits per channel in the raw files. Some file
    formats can only store 8 bits per channel, others can store more. In
    fact, the JPEG specification allows 12 bits per channel.
    Unfortunately, application support for this is rare.

    --
    Måns Rullgård
     
    Måns Rullgård, Nov 20, 2005
    #4
  5. "none" <> wrote in message
    news:8O6gf.2614$BU2.1801@trndny01...
    > Bart van der Wolf wrote:
    >> Yes it is, even if you lose a tiny bit of dynamic range versus what
    >> Raw can offer.

    >
    > Thanks -- that is great to hear!
    >
    >
    >> You do need software that can blend the images in Gamma 1.0 space,
    >> if you want to avoid potential blending errors (colors and/or
    >> exposure mismatches).

    >
    > You're saying that all the images have to be normalized in some way?


    The purpose of creating a composite of different exposures is to
    create a dynamic range that exceeds what can be accurately encoded in
    an 8-b/ch, or even a 12- or 16-b/ch, file. The simplest way to combine
    the different exposure levels is to convert to linear gamma; one could
    call that normalizing, but it's more like adding/blending the
    individual exposure levels after multiplying/shifting their data
    values so that they describe the same scene values per pixel.
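
    A minimal sketch of that blend -- assuming the frames are already
    registered, that a plain 2.2 power law is close enough for the decode,
    and that the relative exposures (e.g. 1, 2, 4 for 0/+1/+2 EV) are
    known:

    import numpy as np

    def merge_to_hdr(frames8, rel_exposures, gamma=2.2):
        # Blend gamma-encoded 8-bit brackets into one linear radiance estimate.
        acc, wsum = None, None
        for img8, rel in zip(frames8, rel_exposures):
            lin = (np.asarray(img8, np.float64) / 255.0) ** gamma  # to gamma 1.0
            radiance = lin / rel       # shift each frame onto a common scale
            # Trust mid-tones most; near-black and near-white pixels get low weight.
            w = 1.0 - np.abs(lin - 0.5) * 2.0
            acc = radiance * w if acc is None else acc + radiance * w
            wsum = w if wsum is None else wsum + w
        return acc / np.maximum(wsum, 1e-6)   # linear floating-point result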

    >> In fact, the first implementation of HDR in Photoshop CS2 uses
    >> 8-bit/channel images.

    >
    > boggle...won't images have 8-bit/channel regardless of whether they
    > are in jpeg, tiff, or whatever else?


    The HDR images can be assembled from 8-b/ch (e.g. gamma-adjusted
    JPEGs), 12-b/ch (e.g. linear-gamma digicam Raw data), 16-b/ch (e.g.
    computer-generated or converted from other bit depths), etc.

    The assembled image can be e.g. 32 b/ch or more and is usually
    linear (or log) gamma floating-point data, not an image as we know it.
    To output that data to a (usually) 8-b/ch output device, it needs to
    be tonemapped to fit its dynamic range into the smaller output range.
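
    A minimal sketch of that last step, using Reinhard's simple global
    operator as just one example of a tonemapping curve (any other curve
    could be substituted):

    import numpy as np

    def tonemap(hdr_linear, gamma=2.2):
        # Compress unbounded linear radiance into 0..1 (Reinhard's global
        # operator), then re-apply a display gamma and quantize to 8 b/ch.
        compressed = hdr_linear / (1.0 + hdr_linear)
        return np.clip(255.0 * compressed ** (1.0 / gamma), 0, 255).astype(np.uint8)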

    Bart
     
    Bart van der Wolf, Nov 21, 2005
    #5
  6. "Bart van der Wolf" <> writes:

    >Yes it is, even if you lose a tiny bit of dynamic range versus what
    >Raw can offer. You do need software that can blend the images in Gamma
    >1.0 space, if you want to avoid potential blending errors (colors
    >and/or exposure mismatches). It is also easier to automatically match
    >the exposure differences when all data is in gamma 1.0 space (exposure
    >differences can be expressed as a simple multiplier).


    Why is gamma 1.0 space better than gamma 0.45 space (or gamma 2.2,
    depending on how you look at it)?

    An exposure change multiplies all pixel values by a constant in gamma
    1.0 space, and it also multiplies all pixel values by a (different)
    constant in gamma-0.45 space.

    For example, 1 stop increase in exposure should multiply pixel values by
    2 in gamma-1 space, and by 2^0.45 = 1.37 in gamma-0.45 space.

    Now, if you're going to convert everything to floating point anyway, you
    might as well work in linear space because the gamma encoding doesn't
    buy you anything, and linear space is better for some other operations
    like filtering. But I don't see why it's better if you're going to
    work with integer pixel values.
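
    A quick numerical check of that claim (mid-gray chosen arbitrarily;
    the ratio comes out the same for any pixel value):

    linear = 0.18                       # mid-gray, linear
    encoded = linear ** 0.45            # the same value, gamma-0.45 encoded

    one_stop_linear = 2.0 * linear
    one_stop_encoded = one_stop_linear ** 0.45

    print(one_stop_linear / linear)     # 2.0
    print(one_stop_encoded / encoded)   # 2 ** 0.45, about 1.366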

    Dave
     
    Dave Martindale, Nov 21, 2005
    #6
  7. "Dave Martindale" <> wrote in message
    news:dls0hn$s5m$...
    > "Bart van der Wolf" <> writes:
    >
    >>Yes it is, even if you lose a tiny bit of dynamic range versus what
    >>Raw can offer. You do need software that can blend the images in
    >> Gamma 1.0 space, if you want to avoid potential blending errors
    >> (colors and/or exposure mismatches). It is also easier to
    >> automatically match the exposure differences when all data is in
    >> gamma 1.0 space (exposure differences can be expressed as a
    >> simple multiplier).

    >
    > Why is gamma 1.0 space better than gamma 0.45 space (or
    > gamma 2.2, depending on how you look at it)?


    One reason is that source images are seldom exactly gamma 1/2.2; in
    fact, JPEGs often use the slope-limited sRGB encoding. I'm not saying
    it is impossible to linearize that data, but it requires more
    processing (which usually degrades precision).
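
    Linearizing sRGB-encoded values means undoing that curve, which has a
    linear (slope-limited) segment near black and a 2.4-power segment
    elsewhere -- not a plain 2.2 power law:

    import numpy as np

    def srgb_to_linear(c):
        # Undo the sRGB transfer curve (linear segment near black,
        # power segment elsewhere).
        c = np.asarray(c, dtype=np.float64)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    # e.g. decode normalized 8-bit JPEG values before blending in gamma 1.0 space
    print(srgb_to_linear(np.array([0, 64, 128, 255]) / 255.0))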

    Another reason is that not all cameras produce linear gamma data, not
    even those that allow linear Raw output, and especially not those that
    produce in-camera JPEG conversions. Many exhibit reduced contrast
    towards underexposure (like the toe of a film curve). That
    underexposure response may be partly due to lens flare. Some kind of
    calibration is therefore unavoidable for the best results.

    SNIP
    > Now, if you're going to convert everything to floating point anyway,
    > you might as well work in linear space because the gamma encoding
    > doesn't buy you anything, and linear space is better for some other
    > operations like filtering. But I don't see why it's better if you're
    > going to work with integer pixel values.


    There shouldn't be much difference *if* things are designed perfectly.
    If they are not, then starting with (semi-)linear data will exhibit
    less error (color fringing at high-contrast edges), but will require
    more b/ch for the input files (not a real problem with 12-b/ch data
    stored in 16-b/ch files).

    Bart
     
    Bart van der Wolf, Nov 22, 2005
    #7
  8. none

    none Guest

    On Mon, 21 Nov 2005 01:03:00 +0100, Bart van der Wolf wrote:

    >
    > "none" <> wrote in message
    > news:8O6gf.2614$BU2.1801@trndny01...
    >> You're saying that all the images have to be normalized in some way?

    >
    > The purpose of creating a composite of different exposures is to
    > create a dynamic range that exceeds what can be accurately encoded in
    > an 8-b/ch, or even a 12- or 16-b/ch, file.


    Doesn't 8 bits per channel already allow for more dynamic range than the
    eye can see? I thought the idea was to get more dynamic range from the
    camera itself -- for example, exposing both the moon and the earth
    correctly in a night landscape.


    > The simplest way to combine the different exposure levels is to
    > convert to linear gamma; one could call that normalizing, but it's
    > more like adding/blending the individual exposure levels after
    > multiplying/shifting their data values so that they describe the
    > same scene values per pixel.


    OK -- I have done some more googling and I think I am beginning to
    understand the differences between the ways used to represent
    brightness.

    Thanks,
    -Mike
     
    none, Nov 22, 2005
    #8
  9. "none" <> wrote in message
    news:p...
    SNIP
    > Doesn't 8 bits per channel already allow for more dynamic range
    > than the eye can see?


    No, it allows you to accurately encode 256 discrete levels per channel.
    If the world around us consisted of nothing but gray tones, that would
    mean it could accurately represent a luminosity contrast range of
    255:1 in unit steps. However, the real world can easily offer a
    dynamic range of 100,000:1 or more if you want to capture everything
    from specular highlights to dark subject colors in the shade. Adding
    color to the equation makes matters more complex.

    One could decide to tonemap that e.g. 100,000:1 range into the 255:1
    range of codes, but then each increase of one digital number would
    mean skipping a large number of real-world values. A type of log
    encoding would help to encode in perceptually uniform steps, but it
    would still be rather inaccurate. Adding more bits per channel helps
    to increase both accuracy and range, which is exactly the purpose of
    HDR encoding.
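
    To put rough numbers on that (a back-of-the-envelope sketch; the log
    mapping here is just a hypothetical 8-bit scheme, not any particular
    standard):

    import math

    scene_range = 100_000                  # contrast ratio to capture
    print(math.log2(scene_range))          # about 16.6 stops needed
    print(math.log2(255))                  # about 8 stops in linear 8-bit codes

    def log_encode(lum, max_lum=100_000.0, codes=256):
        # Map 1..max_lum onto 0..255 in equal ratio (perceptually uniform) steps.
        return round((codes - 1) * math.log(lum) / math.log(max_lum))

    print(log_encode(1), log_encode(316), log_encode(100_000))   # 0, 127, 255
    # Each code step then covers a ratio of 100000**(1/255), about 4.6% --
    # coarse, which is why real HDR data uses more bits per channel.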

    Bart
     
    Bart van der Wolf, Nov 22, 2005
    #9
