Velocity Reviews - Computer Hardware Reviews

HDR/composite pictures with JPEG?

 
 
none
      11-20-2005
Is it useful to make composite pictures with JPEG images? Or is a
lossless/RAW format necessary?

Thanks,
-Mike
 
Bart van der Wolf
      11-20-2005

"none" <(E-Mail Removed)> wrote in message
news:G7Qff.4192$Ze6.1644@trndny04...
> It is useful to make composite pictures with JPEG images?


Yes it is, even if you lose a tiny bit of dynamic range versus what
Raw can offer. You do need software that can blend the images in gamma
1.0 (linear) space if you want to avoid blending errors (color and/or
exposure mismatches). It is also easier to match the exposure
differences automatically when all data is in gamma 1.0 space, because
an exposure difference is then a simple multiplier.
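To illustrate the multiplier point, here is a quick Python/NumPy sketch
(the pixel values are made up; the transfer-curve constants are the
standard sRGB ones). In linear space a one-stop difference is a single
factor of 2; after re-encoding to sRGB, no single multiplier relates
the two frames.

```python
import numpy as np

def srgb_to_linear(v):
    """Undo the sRGB transfer curve: encoded [0..1] -> linear (gamma 1.0) light."""
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    """Re-apply the sRGB transfer curve."""
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1.0 / 2.4) - 0.055)

base = np.array([0.25, 0.50, 0.75])    # made-up sRGB pixel values, base exposure
lin_base = srgb_to_linear(base)
lin_plus1 = lin_base * 2.0             # an ideal +1 EV frame, in linear space

# In gamma 1.0 space the exposure difference is one constant multiplier:
print(lin_plus1 / lin_base)            # [2. 2. 2.]

# Back in sRGB encoding the ratio varies from pixel to pixel:
enc_plus1 = linear_to_srgb(np.clip(lin_plus1, 0.0, 1.0))
print(enc_plus1 / base)                # no single multiplier matches the frames
```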

> Or is a losless/RAW format necessary?


In fact, the first implementation of HDR in Photoshop CS2 assembles
its input from 8-bit/channel images. In that case it is beneficial to
start with gamma-adjusted images if one wants to avoid too much
posterization. That doesn't mean it isn't better to start with as much
data as is available, but you have to make do with the tools at hand.

Bart

 
none
      11-20-2005
Bart van der Wolf wrote:
> Yes it is, even if you lose a tiny bit of dynamic range versus what Raw
> can offer.


Thanks -- that is great to hear!


> You do need software that can blend the images in Gamma 1.0
> space, if you want to avoid potential blending errors (colors and/or
> exposure mismatches).


You're saying that all the images have to be normalized in some way?


> In fact, the first implementation of HDR in Photoshop CS2 uses
> 8-bit/channel images.


Boggle... won't images have 8 bits/channel regardless of whether they
are JPEG, TIFF, or whatever else?

-Mike
 
Måns Rullgård
      11-20-2005
none <(E-Mail Removed)> writes:

>> In fact, the first implementation of HDR in Photoshop CS2 uses
>> 8-bit/channel images.

>
> boggle...won't images have 8-bit/channel regardless of whether they
> are in jpeg, tiff, or whatever else?


Many cameras store 12 bits per channel in their raw files. Some file
formats can only store 8 bits per channel; others can store more. In
fact, the JPEG specification allows 12 bits per channel, but
application support for that is rare.
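As a tiny back-of-the-envelope sketch (plain Python; "stops" here
assumes purely linear coding, where the brightest code is about
2**bits times the smallest nonzero one):

```python
# Levels and (roughly) stops of linear range per bit depth: each extra
# bit doubles the number of levels and adds about one stop of range.
for bits in (8, 12, 16):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels, about {bits} stops of linear range")
```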

--
Måns Rullgård
Bart van der Wolf
      11-21-2005

"none" <(E-Mail Removed)> wrote in message
news:8O6gf.2614$BU2.1801@trndny01...
> Bart van der Wolf wrote:
>> Yes it is, even if you lose a tiny bit of dynamic range versus what
>> Raw can offer.

>
> Thanks -- that is great to hear!
>
>
>> You do need software that can blend the images in Gamma 1.0 space,
>> if you want to avoid potential blending errors (colors and/or
>> exposure mismatches).

>
> You're saying that all the images have to be normalized in some way?


The purpose of creating a composite of different exposures is to
capture a dynamic range that exceeds what can be accurately encoded in
an 8-b/ch, or even a 12- or 16-b/ch, file. The simplest way to combine
the different exposure levels is to convert everything to linear gamma
first. One could call that normalizing, but it is really
adding/blending the individual exposures after multiplying/shifting
their data values so that the same scene point gets the same value in
every frame.
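A much-simplified sketch of that multiply-then-blend step in
Python/NumPy (all pixel values are invented; real HDR merging, e.g.
Debevec-style, also estimates the camera response curve):

```python
import numpy as np

def srgb_to_linear(v):
    """Undo the sRGB transfer curve: encoded [0..1] -> linear light."""
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

# Made-up pixel values for one row, from three brackets 1 stop apart.
exposures = [np.array([0.90, 0.50, 0.10]),   # longest exposure (EV 0)
             np.array([0.70, 0.30, 0.05]),   # 1 stop shorter (-1 EV)
             np.array([0.40, 0.15, 0.02])]   # 2 stops shorter (-2 EV)
evs = [0.0, -1.0, -2.0]

# Linearize each bracket, then scale by 2^-EV so every frame refers to
# the same exposure: this is the "multiplying/shifting" step.
aligned = [srgb_to_linear(img) / (2.0 ** ev) for img, ev in zip(exposures, evs)]

# Trust mid-tones most (hat-shaped weight on the encoded value), then
# take the weighted average: a much-simplified radiance merge.
weights = [1.0 - np.abs(2.0 * img - 1.0) + 1e-6 for img in exposures]
radiance = sum(w * a for w, a in zip(weights, aligned)) / sum(weights)
print(radiance)
```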

>> In fact, the first implementation of HDR in Photoshop CS2 uses
>> 8-bit/channel images.

>
> boggle...won't images have 8-bit/channel regardless of whether they
> are in jpeg, tiff, or whatever else?


The HDR image can be assembled from 8-b/ch (e.g. gamma adjusted
JPEGs), 12-b/ch (e.g. linear gamma digicam Raw data), 16-b/ch (e.g.
computer generated or converted from other bit depths), etc.

The assembled image can be e.g. 32-b/ch or more, and is usually linear
(or log) gamma floating-point data, not an image as we know it. To
output that data to a (usually) 8-b/ch output device, it needs to be
tonemapped so that its dynamic range fits into the smaller output
range.
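For a sense of what tonemapping does, here is a sketch of the simple
global Reinhard operator in Python/NumPy (the radiance values are made
up; real tonemappers are usually more elaborate and often local):

```python
import numpy as np

def reinhard_tonemap(radiance):
    """Global Reinhard operator: linear HDR radiance -> 8-bit display values."""
    compressed = radiance / (1.0 + radiance)   # squeeze [0, inf) into [0, 1)
    encoded = compressed ** (1.0 / 2.2)        # re-apply a display gamma
    return np.clip(encoded * 255.0, 0, 255).astype(np.uint8)

hdr = np.array([0.01, 0.5, 2.0, 100.0])        # made-up linear radiances
print(reinhard_tonemap(hdr))                   # 4 orders of magnitude fit in 0..255
```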

Bart

 
Dave Martindale
      11-21-2005
"Bart van der Wolf" <(E-Mail Removed)> writes:

>Yes it is, even if you lose a tiny bit of dynamic range versus what
>Raw can offer. You do need software that can blend the images in Gamma
>1.0 space, if you want to avoid potential blending errors (colors
>and/or exposure mismatches). It is also easier to automatically match
>the exposure differences when all data is in gamma 1.0 space (exposure
>differences can be expressed as a simple multiplier).


Why is gamma 1.0 space better than gamma 0.45 space (or gamma 2.2,
depending on how you look at it)?

An exposure change multiplies all pixel values by a constant in gamma
1.0 space, and it also multiplies all pixel values by a (different)
constant in gamma-0.45 space.

For example, 1 stop increase in exposure should multiply pixel values by
2 in gamma-1 space, and by 2^0.45 = 1.37 in gamma-0.45 space.
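That arithmetic is easy to check in a couple of lines of Python (0.18
is just an arbitrary mid-gray luminance; any value gives the same
ratio):

```python
L = 0.18               # some linear scene luminance (mid gray)
gamma = 0.45

v1 = L ** gamma        # encoded value at the base exposure
v2 = (2 * L) ** gamma  # encoded value one stop brighter

print(v2 / v1)         # 2 ** 0.45, about 1.37, independent of L
```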

Now, if you're going to convert everything to floating point anyway, you
might as well work in linear space because the gamma encoding doesn't
buy you anything, and linear space is better for some other operations
like filtering. But I don't see why it's better if you're going to
work with integer pixel values.

Dave
 
Bart van der Wolf
      11-22-2005

"Dave Martindale" <(E-Mail Removed)> wrote in message
news:dls0hn$s5m$(E-Mail Removed)...
> "Bart van der Wolf" <(E-Mail Removed)> writes:
>
>>Yes it is, even if you lose a tiny bit of dynamic range versus what
>>Raw can offer. You do need software that can blend the images in
>> Gamma 1.0 space, if you want to avoid potential blending errors
>> (colors and/or exposure mismatches). It is also easier to
>> automatically match the exposure differences when all data is in
>> gamma 1.0 space (exposure differences can be expressed as a
>> simple multiplier).

>
> Why is gamma 1.0 space better than gamma 0.45 space (or
> gamma 2.2, depending on how you look at it)?


One reason is that source images are seldom gamma 1/2.2; in fact,
JPEGs often use a slope-limited sRGB encoding. I'm not saying it is
impossible to linearize that data, but it requires more processing
(which usually degrades precision).

Another reason is that not all cameras produce linear gamma data, not
even those that allow linear Raw output, and especially not those that
do in-camera JPEG conversion. Many exhibit reduced contrast towards
underexposure (like the toe of film), which may be partly due to lens
flare. So some kind of calibration is unavoidable for the best
results.

SNIP
> Now, if you're going to convert everything to floating point anyway,
> you might as well work in linear space because the gamma encoding
> doesn't buy you anything, and linear space is better for some other
> operations like filtering. But I don't see why it's better if you're
> going to work with integer pixel values.


There shouldn't be much difference *if* things are designed perfectly.
If they are not, then starting with (semi-)linear data will exhibit
less error (colored fringes at high-contrast edges), but it requires
more bits per channel in the input files (not a real problem with
12-b/ch data stored in 16-b/ch files).

Bart

 
none
      11-22-2005
On Mon, 21 Nov 2005 01:03:00 +0100, Bart van der Wolf wrote:

>
> "none" <(E-Mail Removed)> wrote in message
> news:8O6gf.2614$BU2.1801@trndny01...
>> You're saying that all the images have to be normalized in some way?

>
> The purpose of creating a composite of different exposures is to
> create a dynamic range that exceeds what can be accurately encoded in
> an 8-b/ch or even a 12 or 16-b/ch


Doesn't 8 bits per channel already allow for more dynamic range than the
eye can see? I thought the idea was to get more dynamic range from the
camera itself -- for example, exposing both the moon and the earth
correctly in a night landscape.


> The simplest way to combine the
> different exposure levels is by converting to linear gamma, one could
> call that normalizing but it's more like adding/blending the
> individual exposures levels after multiplying/shifting their data
> values to same values per pixel.


OK -- I have done some more googling and I think I am beginning to
understand the difference between the different ways used to represent
brightness.

Thanks,
-Mike
 
Bart van der Wolf
      11-22-2005

"none" <(E-Mail Removed)> wrote in message
news(E-Mail Removed) d...
SNIP
> Doesn't 8 bits per channel already allow for more dynamic range
> than the eye can see?


No, it allows you to accurately encode 256 discrete levels per
channel. Suppose the world around us consisted only of gray tones;
8 bits could then accurately represent a luminosity contrast range of
255:1 in unit steps. However, the real world can easily offer a
dynamic range of 100,000:1 or more if you want to capture anything
from specular highlights to dark subject colors in the shade. Adding
color to the equation makes matters more complex.

One could decide to tonemap that 100,000:1 range to the 255:1 range of
codes, but then each increase of 1 in the digital number would skip a
large number of real values. A type of log encoding would help by
encoding in perceptually uniform steps, but it is still rather
inaccurate. Adding more bits per channel increases both accuracy and
range, which is exactly the purpose of HDR encoding.
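The step sizes are easy to work out in a few lines of Python (100,000:1
is just the example scene contrast from above):

```python
dynamic_range = 100_000          # example scene contrast to cover
codes = 256                      # levels in an 8-bit channel

# Linear encoding: each code step spans an equal slice of the range, so
# one step near black jumps over hundreds of real luminance values.
linear_step = dynamic_range / (codes - 1)
print(linear_step)               # about 392 luminance units per code

# Log encoding: each step is instead a constant *ratio* of about 4.6%,
# closer to perceptually uniform, but still coarse.
ratio_per_step = dynamic_range ** (1.0 / (codes - 1))
print(ratio_per_step)
```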

Bart

 