Re: 35mm scan quality Vs Digital

Discussion in 'Digital Photography' started by Charlie D, Aug 31, 2003.

  1. Charlie D

    Charlie D Guest

    In article <hTc4b.75085$>,
    "Joseph Brown" <> wrote:

    > They used a Fuji SP-2000 Frontier 350 commercial scanner to
    > scan the negative at 300 dpi in sRGB color space, jpeg, 24-bit.


    According to Scott Kelby in his book, "The Photoshop Book for Digital
    Photographers,"

    "sRGB is arguably the worst possible color space for professional
    photographers. This color space was designed for use by 'Web designers
    and it mimics an "el cheapo" PC monitor from tour or five years ago."
    He recommends Adobe RGB (1998)

    That probably doesn't help with the grain, but in the future I'd ask for
    a different color space.
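
    (A minimal sketch of that conversion in Python with Pillow's ImageCms,
    for anyone stuck with sRGB files: the file names are placeholders, and
    Pillow ships no Adobe RGB profile, so the .icc path is an assumption
    (Adobe distributes the profile for free). Note that converting after
    the fact can't recover colors the sRGB encoding already clipped, which
    is why asking the lab up front matters.)

        from PIL import Image, ImageCms

        scan = Image.open("frontier_scan.jpg")              # placeholder file name
        srgb = ImageCms.createProfile("sRGB")               # profile built into Pillow
        argb = ImageCms.getOpenProfile("AdobeRGB1998.icc")  # assumed local copy

        # Convert the pixel data from the sRGB profile to Adobe RGB (1998).
        wide = ImageCms.profileToProfile(scan, srgb, argb, outputMode="RGB")
        wide.save("frontier_scan_adobergb.tif")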

    --
    Charlie Dilks
    Newark, DE USA
    Charlie D, Aug 31, 2003
    #1

  2. Hi Charlie

    > According to Scott Kelby in his book, "The Photoshop Book for Digital
    > Photographers,"
    >
    > "sRGB is arguably the worst possible color space for professional
    > photographers. This color space was designed for use by 'Web designers
    > and it mimics an "el cheapo" PC monitor from tour or five years ago."
    > He recommends Adobe RGB (1998)


    I used to use Adobe RGB. These days I use sRGB. I prefer
    the colors in the sRGB images, both on-screen and in
    prints. This is for both digital and scanned negative
    images.

    Hyperbolic recommendations of Adobe RGB over sRGB remind me of the
    days when "experts" blithely opined that the c language was for
    application software only, and operating systems had to be
    coded in assembly language. That myth continued for several
    years, long after both Apple and Microsoft had switched the vast
    majority of their OS code to higher-level non-assembly languages.

    Stan
    Stanley Krute, Aug 31, 2003
    #2

  3. Charlie D

    wally Guest

    In article <Vst4b.8605$>, "Stanley Krute" <> wrote:
    >Hi Charlie
    >
    >> According to Scott Kelby in his book, "The Photoshop Book for Digital
    >> Photographers,"
    >>
    >> "sRGB is arguably the worst possible color space for professional
    >> photographers. This color space was designed for use by 'Web designers
    >> and it mimics an "el cheapo" PC monitor from tour or five years ago."
    >> He recommends Adobe RGB (1998)

    >
    >I used to use Adobe RGB. These days I use sRGB. I prefer
    >the colors in the sRGB images, both on-screen and in
    >prints. This is for both digital and scanned negative
    >images.
    >
    >Hyperbolic recommendations of Adobe RGB over sRGB remind me of the
    >days when "experts" blithely opined that the c language was for
    >application software only, and operating systems had to be
    >coded in assembly language. That myth continued for several
    >years, long after both Apple and Microsoft had switched the vast
    >majority of their OS code to higher-level non-assembly languages.
    >

    The myth of the myth is that C was "invented" to write the second version of
    Unix, which predates both Apple and Microsoft. I can remember people trying
    to make stripped-down C compilers run on 8080/Z80 CP/M systems. C has very
    good support for inline assembly language when needed for optimum bit
    twiddling in device-driver code, but with GHz processors being the low end
    these days, it's rarely needed.

    Not having color calibration hardware, I too seem to get better inkjet prints
    using sRGB than the Adobe RGB colorspace. I think it's because printer drivers
    are tweaked by the manufacturer for sRGB, so sRGB generally gives the best
    results unless you have the tools to tweak the ICC profiles yourself for your
    specific monitor and printer.

    --wally.
    wally, Sep 1, 2003
    #3
  4. Charlie D

    Tom Monego Guest

    Adobe RGB (1998) will give you a wider gamut (color range) than sRGB. I do
    large format printing for a living and much prefer to see an Adobe RGB file.
    That said, most personal printers are now calibrated for sRGB, as are the
    Fuji Frontier and the similar Agfa photo paper printer. Consequently I feel
    the prints look a little flat, lacking the red and yellow sides of the
    spectrum.
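
    (Tom's gamut point can be checked with a few lines of numpy, using the
    published D65 RGB-to-XYZ matrices for each space; a sketch, not a full
    color-managed pipeline:)

        import numpy as np

        # Published D65 matrices: Adobe RGB (1998) -> XYZ, and XYZ -> linear sRGB.
        ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                                 [0.2973, 0.6274, 0.0753],
                                 [0.0270, 0.0707, 0.9911]])
        XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                                [-0.9689,  1.8758,  0.0415],
                                [ 0.0557, -0.2040,  1.0570]])

        green = np.array([0.0, 1.0, 0.0])   # fully saturated Adobe RGB green, linear
        print(XYZ_TO_SRGB @ ADOBE_TO_XYZ @ green)
        # ~[-0.40  1.00 -0.04]: negative components, so sRGB simply can't encode it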

    Tom

    >>I used to use Adobe RGB. These days I use sRGB. I prefer
    >>the colors in the sRGB images, both on-screen and in
    >>prints. This is for both digital and scanned negative
    >>images.
    >>



    >Not having color calibration hardware, I too seem to get better inkjet prints
    >using sRGB than the Adobe RGB colorspace. I think it's because printer drivers
    >are tweaked by the manufacturer for sRGB, so sRGB generally gives the best
    >results unless you have the tools to tweak the ICC profiles yourself for your
    >specific monitor and printer.
    >
    >--wally.
    Tom Monego, Sep 1, 2003
    #4
  5. Charlie D

    Joseph Brown Guest

    > 48-bit image capture is "the key" to good color.
    > What's odd is that even world-class experts disagree


    Why would somebody disagree that 48-bit capture provides better color than a
    24-bit?
    Joseph Brown, Sep 1, 2003
    #5
  6. Charlie D

    Rafe B. Guest

    On Mon, 1 Sep 2003 14:40:03 -0400, "Joseph Brown"
    <> wrote:

    >> 48-bit image capture is "the key" to good color.
    >> What's odd is that even world-class experts disagree

    >
    >Why would somebody disagree that 48-bit capture provides better color than a
    >24-bit?
    >



    Because it simply isn't true.


    rafe b.
    http://www.terrapinphoto.com
    Rafe B., Sep 1, 2003
    #6
  7. (Rafe B.) writes:
    > On Mon, 1 Sep 2003 14:40:03 -0400, "Joseph Brown"
    > <> wrote:
    >
    > >> 48-bit image capture is "the key" to good color.
    > >> What's odd is that even world-class experts disagree

    > >
    > >Why would somebody disagree that 48-bit capture provides better color than a
    > >24-bit?


    > Because it simply isn't true.


    24 bit color is only 8 bit (256 colors) per channel. In combination, the
    three channels give you millions of colors, most of which do not occur in
    nature and are totally useless. Going to 16 bits per channel gives much
    better results. Just about any color scanner does 12 bits per channel,
    for 36-bit color, which gives 4,096 levels per channel and substantially
    improved results.
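
    (The per-channel arithmetic behind the bit depths in this thread, as a
    few lines of Python:)

        for bits in (8, 12, 16):
            levels = 2 ** bits   # levels per channel
            print(f"{bits:>2} bits/channel: {levels:>6,} levels, "
                  f"{3 * bits}-bit color, {levels ** 3:.2e} combinations")

        #  8 bits/channel:    256 levels, 24-bit color, 1.68e+07 combinations
        # 12 bits/channel:  4,096 levels, 36-bit color, 6.87e+10 combinations
        # 16 bits/channel: 65,536 levels, 48-bit color, 2.81e+14 combinations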

    --
    http://home.teleport.com/~larryc
    Larry Caldwell, Sep 2, 2003
    #7
  8. Larry Caldwell wrote:

    > (Rafe B.) writes:
    > > On Mon, 1 Sep 2003 14:40:03 -0400, "Joseph Brown"
    > > <> wrote:
    > >
    > > >> 48-bit image capture is "the key" to good color.
    > > >> What's odd is that even world-class experts disagree
    > > >
    > > >Why would somebody disagree that 48-bit capture provides better color than a
    > > >24-bit?

    >
    > > Because it simply isn't true.

    >
    > 24 bit color is only 8 bit (256 colors) per channel. In combination, the
    > three channels give you millions of colors, most of which do not occur in
    > nature and are totally useless. Going to 16 bits per channel gives much
    > better results. Just about any color scanner does 12 bits per channel,
    > for 36-bit color, which gives 4,096 levels per channel and substantially
    > improved results.


    Well, a 6-megapixel camera only needs 6 million colors!
    So having 16-million colors means 10 million aren't used.
    Then in any image, I bet no one could tell the difference
    between a 6 million color image versus a 16 million color
    image, let alone the "gazillions" (2.8x10^14) 48 bit gives.

    The use of high #bits is to bring out tonal detail over
    a large dynamic range, but when you view on screen or
    print, you need relatively few actual colors.
    When you view on screen, with say a 1280x1024 pixel monitor,
    you need only 1.31 million colors.

    Many years ago (the 1970s), when image processing systems were high-end
    gear (tens of thousands of dollars for 24-bit) and X Windows
    was just coming out (8-bit color), a group of us did
    some experimenting, and at 10 to 12 bits people couldn't
    see the difference between that and full 24-bit. Where you
    need more (e.g. 12-bit color, or 4096 levels) is when you
    have smooth gradients, like sky. In complex scenes, your
    eye/brain gets confused and fewer colors are needed.
    For a while, we went with 8-bit systems to save a lot of money.
    Of course, nowadays that $100,000 24-bit imaging system has less
    capability than the low-end PC that someone threw in the trash!

    Roger
    Roger N. Clark, Sep 2, 2003
    #8
  9. Charlie D

    Robert Lynch Guest

    "Roger N. Clark" <> wrote in message
    news:...
    > Well, a 6-megapixel camera only needs 6 million colors!
    > So having 16-million colors means 10 million aren't used.


    This is one of the dumbest statements that I have seen here in a while.
    Robert Lynch, Sep 2, 2003
    #9
  10. (Rafe B.) writes:
    > On Tue, 02 Sep 2003 01:11:44 GMT, Larry Caldwell <>
    > wrote:
    >
    > > (Rafe B.) writes:
    > >> On Mon, 1 Sep 2003 14:40:03 -0400, "Joseph Brown"
    > >> <> wrote:
    > >>
    > >> >> 48-bit image capture is "the key" to good color.
    > >> >> What's odd is that even world-class experts disagree
    > >> >
    > >> >Why would somebody disagree that 48-bit capture provides better color than a
    > >> >24-bit?

    > >
    > >> Because it simply isn't true.

    > >
    > >24 bit color is only 8 bit (256 colors) per channel. In combination, the
    > >three channels give you millions of colors, most of which do not occur in
    > >nature and are totally useless. Going to 16 bits per channel gives much
    > >better results.

    >
    > Prove it. Lots of folks have tried.
    > Look up Dan Margulis (a world class expert on
    > Photoshop) -- he's not buying it.


    If you paid any attention to video editing software, you would see the
    information about how many colors the image is using out of the palette.
    The vast majority of colors in a photo come from only a small portion of
    the palette (5%), and any primary only has 256 choices.

    If Dan Margulis can't read his own computer screen, it's not my problem.

    > Check out my website (link below). Every image
    > there was done with 24 bit color. Many hundreds
    > of prints sold -- each one done with 24 bit color.


    Don't confuse color fidelity with inadequate color reproduction. Just
    because you can't tell the difference from one print to the next doesn't
    mean the colors are accurate.

    > >Just about any color scanner does 12 bits per channel,


    > No doubt about it, but that doesn't prove the necessity
    > of 48 bit color. The main thing you get with 48 bit color is
    > the ability to be sloppy without necessarily killing your
    > images from the outset.


    The main thing you get out of 16 bits per channel is accurate color. 8
    bits per channel only gives you 256 colors. When you multiply all the
    possible combinations it sounds impressive, but it is still just cartoon
    color.

    --
    http://home.teleport.com/~larryc
    Larry Caldwell, Sep 2, 2003
    #10
  11. Robert Lynch wrote:

    > "Roger N. Clark" <> wrote in message
    > news:...
    >
    >>Well, a 6-megapixel camera only needs 6 million colors!
    >>So having 16-million colors means 10 million aren't used.

    >
    >
    > This is one of the dumbest statements that I have seen here in a while.


    Besides being obviously true, you mean?

    The key point RNC skipped (but definitely not because he forgot/didn't
    know about it) is that you sometimes would like to be able to select
    exactly _which_ 6 million colors to use.

    If it is a photo with a _lot_ of finely graduated blues from a big sky,
    then you might very well need more than 8 bits in the blue channel to
    avoid any banding.
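
    (Terje's sky example in a few lines of numpy; the 0.70-0.75 range of blue
    is an arbitrary illustration of a narrow, smooth gradient:)

        import numpy as np

        # Blue rising smoothly from 0.70 to 0.75 across a 3000-pixel-wide sky.
        blue = np.linspace(0.70, 0.75, 3000)

        print(np.unique(np.round(blue * 255).astype(np.uint8)).size)     # 14 steps: bands
        print(np.unique(np.round(blue * 65535).astype(np.uint16)).size)  # 3000: smooth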

    Terje

    --
    - <>
    "almost all programming can be viewed as an exercise in caching"
    Terje Mathisen, Sep 2, 2003
    #11
  12. Charlie D

    wally Guest

    In article <>, Rafe B. <> wrote:
    >On Tue, 02 Sep 2003 01:11:44 GMT, Larry Caldwell <>
    >wrote:
    >
    >> (Rafe B.) writes:
    >>> On Mon, 1 Sep 2003 14:40:03 -0400, "Joseph Brown"
    >>> <> wrote:
    >>>
    >>> >> 48-bit image capture is "the key" to good color.
    >>> >> What's odd is that even world-class experts disagree
    >>> >
    >>> >Why would somebody disagree that 48-bit capture provides better color than a
    >>> >24-bit?

    >>
    >>> Because it simply isn't true.

    >>
    >>24 bit color is only 8 bit (256 colors) per channel. In combination, the
    >>three channels give you millions of colors, most of which do not occur in
    >>nature and are totally useless. Going to 16 bits per channel gives much
    >>better results.

    >
    >Prove it. Lots of folks have tried.
    >Look up Dan Margulis (a world class expert on
    >Photoshop) -- he's not buying it.
    >


    Didn't Dan pay off $100 in his 16-bit challenge last year?

    I seem to recall a web site showing the "winners".

    "Better" is subject to artistic interpretation and intent, but that's the
    point: with 16 bits per channel you have the most to work with to get the
    results you want. What makes it difficult to show that 16 bits is "better"
    is that the monitors and printers in common use can only handle 8 bits per
    channel, so you can only see the results in 24-bit color anyway. Most
    output devices don't have even the full 24-bit gamut.

    --wally.
    wally, Sep 2, 2003
    #12
  13. "Roger N. Clark" <> writes:

    > Larry Caldwell wrote:
    >
    > > (Rafe B.) writes:
    > > > On Mon, 1 Sep 2003 14:40:03 -0400, "Joseph Brown"
    > > > <> wrote:
    > > >
    > > > >> 48-bit image capture is "the key" to good color.
    > > > >> What's odd is that even world-class experts disagree
    > > > >
    > > > >Why would somebody disagree that 48-bit capture provides better color than a
    > > > >24-bit?

    > >
    > > > Because it simply isn't true.

    > >
    > > 24 bit color is only 8 bit (256 colors) per channel. In combination, the
    > > three channels give you millions of colors, most of which do not occur in
    > > nature and are totally useless. Going to 16 bits per channel gives much
    > > better results. Just about any color scanner does 12 bits per channel,
    > > for 36-bit color, which gives 4,096 levels per channel and substantially
    > > improved results.

    >
    > Well, a 6-megapixel camera only needs 6 million colors!
    > So having 16-million colors means 10 million aren't used.
    > Then in any image, I bet no one could tell the difference
    > between a 6 million color image versus a 16 million color
    > image, let alone the "gazillions" (2.8x10^14) 48 bit gives.


    Yes, but you don't know in advance *which* 6 million. If you were
    using a paletted system with an optimized palette, you could do that, but
    we're *not*; the camera output formats aren't like that.

    > The use of high #bits is to bring out tonal detail over
    > a large dynamic range, but when you view on screen or
    > print, you need relatively few actual colors.
    > When you view on screen, with say a 1280x1024 pixel monitor,
    > you need only 1.31 million colors.


    Remember that the digital camera original is the *input* to the
    image-processing pipeline. Lots of things not visible when the original
    is displayed directly will make a difference after processing.
    --
    David Dyer-Bennet, <>, <www.dd-b.net/dd-b/>
    RKBA: <noguns-nomoney.com> <www.dd-b.net/carry/>
    Photos: <dd-b.lighthunters.net> Snapshots: <www.dd-b.net/dd-b/SnapshotAlbum/>
    Dragaera mailing lists: <dragaera.info/>
    David Dyer-Bennet, Sep 2, 2003
    #13
  14. Charlie D

    Flux Guest

    Larry Caldwell wrote:
    > (Rafe B.) writes:
    >
    >>On Mon, 1 Sep 2003 14:40:03 -0400, "Joseph Brown"
    >><> wrote:
    >>
    >>
    >>>>48-bit image capture is "the key" to good color.
    >>>>What's odd is that even world-class experts disagree
    >>>
    >>>Why would somebody disagree that 48-bit capture provides better color than a
    >>>24-bit?

    >
    >>Because it simply isn't true.

    >
    > 24 bit color is only 8 bit (256 colors) per channel.


    256 shades of one colour is quite a lot. For display purposes, it is
    enough: if you paint two areas with shades one step apart out of a 24-bit
    palette, most people won't be able to see the difference between the two
    shades.

    16 bits per channel will put 256 shades of colour between those two 8-bit
    shades that most people can't differentiate anyway.

    > In combination, the three channels give you millions of colors, most of which do not occur in
    > nature and are totally useless.


    Not occurring in nature doesn't make a colour useless. ;)

    > Going to 16 bits per channel gives much better results.


    Ah, so you can have even more colours that don't occur in nature? :)

    As already posted by some, the only reason a higher number of bits per
    channel is desirable is for processing. In digital photography, if you could
    capture an image with perfect exposure and balance every time, and then had
    no need for any further processing, 8 bits per channel should be
    enough for capture.

    But of course, for most of us who make less-than-perfect photographs and/or
    take our images through heavy post-processing, a few more bits are quite
    welcome.


    Flux
    Flux, Sep 2, 2003
    #14
  15. Charlie D

    TCS Guest

    On Tue, 02 Sep 2003 22:31:33 GMT, <> wrote:
    > In message
    ><625.kaosol.net>,
    > TCS <> wrote:
    >
    >>On Tue, 02 Sep 2003 08:28:15 +0200, Terje Mathisen <> wrote:

    >
    >>>If it is a photo with a _lot_ of finely graduated blues from a big sky,
    >>>then you might very well need more than 8 bits in the blue channel to
    >>>avoid any banding.

    >
    >>Sheesh. Another idiot.

    >
    >>You really shouldn't post when you haven't the slightest clue what you're
    >>babbling about.

    >
    > You shouldn't call someone an idiot without giving an explanation.


    There are shades of blue that aren't 100% saturated.
    TCS, Sep 3, 2003
    #15
  16. Charlie D

    Guest

    In message <Uj05b.34962$>,
    (wally) wrote:

    >"Better" is subject to artistic interpretation and intent, but that's the
    >point, with 16-bits you have the most to work with to get the results you
    >want. What makes showing that 16-bits/pixel is "better" difficult is that all
    >monitors and printers in common use can only handle 8-bit/pixel data so you
    >can only see the results in 24-bit color anyways. Most output devices don't
    >have even the full 24-bit gamut.


    The biggest benefit comes from manipulation of the data;
    8 bits per channel dissolves very quickly under any kind of manipulation.
    The ugly experience of over-manipulating 8-bit images sets up electric
    fences in people's minds: they begin to believe that you shouldn't
    really be able to manipulate data levels to any great extent, and then
    pat themselves on the back, saying, "See! 8-bit is all we need."
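
    (A sketch of that "dissolving" in numpy: the same 4x levels stretch
    applied to an 8-bit and a 16-bit capture of one synthetic dark ramp. The
    8-bit result uses only about a quarter of the output values, the classic
    comb histogram with gaps:)

        import numpy as np

        ramp = np.linspace(0.0, 0.25, 10_000)   # synthetic underexposed ramp

        eight = np.round(ramp * 255)            # as captured in 8 bits
        sixteen = np.round(ramp * 65535)        # as captured in 16 bits

        # The same 4x "levels" stretch, both finally reduced to 8-bit output.
        out8 = np.round(eight * 4).clip(0, 255).astype(np.uint8)
        out16 = np.round(sixteen * 4 / 257).clip(0, 255).astype(np.uint8)

        print(np.unique(out8).size)    # ~65 of 256 output values used: gaps
        print(np.unique(out16).size)   # all 256 used: smooth histogram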

    Digital pictures have not had a big banding problem in the past, because
    sensors have been very noisy until recently. I don't know how other
    cameras fare, but the Canon 10D in ISO 100 mode has noise so low that
    you can get solid 8-bit-truncated colors over large expanses in
    out-of-focus areas.
    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
    , Sep 3, 2003
    #16
  17. Robert Lynch wrote:

    > "Roger N. Clark" <> wrote in message
    > news:...
    > > Well, a 6-megapixel camera only needs 6 million colors!
    > > So having 16-million colors means 10 million aren't used.

    >
    > This is one of the dumbest statements that I have seen here in a while.


    OK, I challenge anyone to PROVE that a 6 million pixel
    image can show more than 6 million colors.

    A pixel is a red-green-blue representation of a single
    color (or CMYK, etc.). Each pixel has only one
    color, whether it be pure red, cyan, flesh, brown, sea-green,
    etc. So the number of colors in an image can be no greater
    than the number of pixels. In practice, there are many
    pixels with the same color, or colors so close they can't
    be distinguished by humans, and the effective number
    of colors is almost always less than the number of pixels
    in the image.
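
    (The counting claim is easy to check on any real photo with numpy and
    Pillow; the file name is a placeholder:)

        import numpy as np
        from PIL import Image

        img = np.asarray(Image.open("some_photo.jpg").convert("RGB"))
        pixels = img.shape[0] * img.shape[1]
        distinct = np.unique(img.reshape(-1, 3), axis=0).shape[0]
        print(pixels, distinct)   # distinct <= pixels, and usually far fewer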

    Roger
    Roger N. Clark, Sep 3, 2003
    #17
  18. "Roger N. Clark" <> wrote in message
    news:...
    SNIP
    > OK, I challenge anyone to PROVE that a 6 million pixel
    > image can show more than 6 million colors.


    You might want to change "show" into "capture at the same time".

    You are correct that the number of pixels limits the 'final' number of
    potentially different colors, BUT as a choice from a larger palette. The
    larger palette is possibly (2^12)^3 (= 68.7 billion) colors after
    demosaicing, but a maximum of 6 million selected ones before. Gamma-
    adjusting the linear-gamma capture will compress some colors in the
    highlights to the same values.

    Bart
    Bart van der Wolf, Sep 3, 2003
    #18
  19. Charlie D

    z Guest

    On Mon, 01 Sep 2003 20:06:56 -0600, "Roger N. Clark"
    <> wrote:

    >Larry Caldwell wrote:
    >
    >
    >Many years ago (the 1970s), when image processing systems were high-end
    >gear (tens of thousands of dollars for 24-bit) and X Windows
    >was just coming out (8-bit color), a group of us did
    >some experimenting, and at 10 to 12 bits people couldn't
    >see the difference between that and full 24-bit. Where you
    >need more (e.g. 12-bit color, or 4096 levels) is when you
    >have smooth gradients, like sky. In complex scenes, your
    >eye/brain gets confused and fewer colors are needed.
    >For a while, we went with 8-bit systems to save a lot of money.
    >Of course, nowadays that $100,000 24-bit imaging system has less
    >capability than the low-end PC that someone threw in the trash!
    >
    >Roger


    Roger on that.
    In 1970, memory was $30+ per byte, and fast memory had a 25-microsecond
    cycle time. Just the VGA memory (a very modest 4 MB) would have cost over
    100 million dollars, with interleaved addressing to get the speed. And
    that is just for the memory. It would take 2000 chassis, or 250 full-size
    6-foot racks, just to hold the memory. Now it is just part of one
    card that gets thrown away. The underlying manufacturing
    technology has changed a lot. A rendering of a picture takes the
    blink of an eye (literally). It used to be a four-day batch job for
    just one picture. <no bloatware in those days>
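
    (z's ballpark checks out; one line of Python:)

        # 4 MB of 1970-vintage memory at $30 per byte:
        print(f"${4 * 1024 * 1024 * 30:,}")   # $125,829,120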
    z, Sep 3, 2003
    #19
  20. Charlie D

    zuuum Guest

    I think one of the biggest blind spots of people comparing digital imaging
    quality to film and film scans is not considering the color depth and
    fidelity differences between triple-CCD and interpolated-color (single-CCD)
    capture devices. Anyone who has compared a 3CCD vid cam to a single-CCD
    (prosumer) one knows that color saturation and fidelity is one of the first
    things you notice.

    An 1800x1200 pixel full-color image would need an 1800x1200 RED scan +
    1800x1200 GREEN scan + 1800x1200 BLUE scan, times the bit depth of each
    color channel. So you can see how the data required for even just 1,000
    shades of each color channel adds up rapidly.
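
    (That arithmetic, uncompressed, in a few lines of Python; 10 bits per
    channel is roughly the "1,000 shades" mentioned above:)

        for bits in (8, 10, 12, 16):
            megabytes = 1800 * 1200 * 3 * bits / 8 / 1e6
            print(f"{bits:>2} bits/channel: {megabytes:.1f} MB uncompressed")

        #  8 bits/channel:  6.5 MB    10 bits/channel:  8.1 MB
        # 12 bits/channel:  9.7 MB    16 bits/channel: 13.0 MB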

    Ever notice that down-sampled digicam images always look better than full
    resolution? Probably because interpolated color makes for soft edge
    definition.

    "DTJ" <> wrote in message
    news:...
    > On Tue, 02 Sep 2003 08:28:15 +0200, Terje Mathisen
    > <> wrote:
    >
    > >Robert Lynch wrote:
    > >
    > >> "Roger N. Clark" <> wrote in message
    > >> news:...
    > >>
    > >>>Well, a 6-megapixel camera only needs 6 million colors!
    > >>>So having 16-million colors means 10 million aren't used.
    > >>
    > >>
    > >> This is one of the dumbest statements that I have seen here in a while.

    > >
    > >Besides being obviously true, you mean?

    >
    > How?
    >
    > You could have a picture taken with a 6-megapixel camera that only
    > used a single color. It could use 10, 100, 1000, 10000. It is not
    > logical to claim what the OP claimed. It is simply wrong.
    >
    > However, had he said that AT MOST it could use approximately 6 million
    > colors, it would have made more sense.
    zuuum, Sep 4, 2003
    #20
