taking digital images in black and white mode

Discussion in 'Digital Photography' started by Brandon Moore, May 4, 2007.

  1. From what I've been told, the sensor in a digital camera measures only
    light intensity, not color, and so in order to capture a color
    photograph three filters (red, green, and blue) have to be rotated in
    front of the sensor (basically taking three pictures). So, my
    question is this: if I'm shooting in black and white mode, does the
    camera only take one picture and, more importantly, does this mean I
    can use a faster shutter speed to get the same exposure? Basically, I'm
    wondering if shooting in black and white mode has any advantage over
    shooting in color and just desaturating the image in post-processing.
     
    Brandon Moore, May 4, 2007
    #1

  2. Okay, I actually spent five minutes on research and learned that the
    answer to my first question is no; a permanent Bayer filter of mixed
    red, green, and blue elements is used, so exposure times are the same
    for shooting in color or in black and white. But my second question
    still stands: is there any advantage to shooting in B&W instead of
    shooting in color and converting to B&W later? I guess if you have RAW
    capture, then the answer to that is no, but what if you only have JPEG
    and the camera is doing the "demosaicing" of the photo?
     
    Brandon Moore, May 4, 2007
    #2

No. You have no control over how your camera converts to monochrome,
    while when converting color to grayscale on your computer you can juggle
    the RGB values to emulate red/yellow/green filters, much like the glass
    filters used with B&W film, and also customize the final result for the
    best effect you can achieve in monochrome. This technique can be used
    with both RAW and JPEG images for superior monochrome results.

    For example, emphasize the red channel for grayscale portraits of women,
    or the green channel for men, to get that rugged masculine look.
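
    To make that concrete, here is a minimal sketch of the channel-mix
    idea in Python with NumPy (the weights below are illustrative guesses,
    not canonical values; tune them to taste):

        import numpy as np

        def channel_mix_grayscale(rgb, weights):
            """Weighted sum of the R, G, B channels -> grayscale.

            rgb:     H x W x 3 float array in [0, 1]
            weights: three channel weights, normalised here so that
                     overall brightness stays roughly unchanged
            """
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()
            return rgb @ w  # dot product over the channel axis

        img = np.random.rand(4, 6, 3)  # stand-in for a real photo

        # Red-heavy mix smooths skin; green-heavy mix adds texture.
        portrait = channel_mix_grayscale(img, [0.7, 0.2, 0.1])
        rugged   = channel_mix_grayscale(img, [0.2, 0.7, 0.1])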

    Whether shooting film or digital, when taking monochrome photos you have
    to THINK and VISUALIZE in grayscale while the scene is still in your
    viewfinder. Converting color to grayscale later allows you much more
    time for that thinking, visualizing, and experimentation. Just treat the
    color image as a "negative" for your ultimate B&W printing.

    Look on the bright side... you're getting the color for free, plus the
    opportunity to transform it into an even more interesting monochrome :^)
     
    Charles Gillen, May 4, 2007
    #3
  4. That was how my first digital camera worked - the NewTek Digiview Gold
    for the Amiga. It came with a color wheel you manually turned between
    each of 3 manual exposures.

    And it's also how single-chip DLP projectors work.
     
    timeOday, May 4, 2007
    #4
  5. Nope, you always shoot in RAW mode. When shooting JPEG you just let
    the camera convert the RAW to JPEG for you.

    Koekje
     
    Koekje, May 4, 2007
    #5
  6. Shoot colour: you can then apply colour filters before desaturating on the
    computer; an option you will throw away if you shoot B&W.

    The camera doesn't make any better use of the incoming light in B&W mode; it
    is just converting the colour image to B&W after capture anyway. So just take
    the maximum information from the camera and you have more to play with later.

    In case you haven't shot B&W before: colour plays an important role in
    B&W, which is why B&W shooters use strong coloured filters over the lens.

    You can play with these effects at will at postprocessing time if you've
    captured the colour to start with.
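
    As a rough sketch of what "playing with these effects" can look like
    in Python (the transmission factors below are illustrative guesses,
    not measured filter curves):

        import numpy as np

        # Crude stand-ins for classic B&W contrast filters
        FILTERS = {
            "yellow": (1.0, 0.9, 0.3),  # darkens blue skies slightly
            "red":    (1.0, 0.2, 0.1),  # dramatic skies, pale skin
            "green":  (0.3, 1.0, 0.3),  # lightens foliage
        }

        def filtered_grayscale(rgb, name):
            # Attenuate each channel as a coloured glass filter would,
            # then collapse to grayscale with Rec. 601 luminance weights.
            t = np.asarray(FILTERS[name], dtype=float)
            return (rgb * t) @ np.array([0.299, 0.587, 0.114])

        img = np.random.rand(4, 6, 3)  # stand-in for a colour capture
        sky_drama = filtered_grayscale(img, "red")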
     
    Richard Polhill, May 4, 2007
    #6
  7. That's what you would do with a camera *designed* for B&W. Color digital
    cameras have the filters there all of the time, in a pattern like this:

    RGRGRGRG
    GBGBGBGB
    RGRGRGRG
    GBGBGBGB
    RGRGRGRG
    GBGBGBGB

    so each pixel only sees one color, ever.
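
    As an illustration, here is a toy bilinear demosaic of that RGGB
    mosaic in Python with NumPy and SciPy (real cameras use far more
    sophisticated, edge-aware algorithms; this only shows the principle):

        import numpy as np
        from scipy.ndimage import convolve

        def bilinear_demosaic(mosaic):
            # mosaic: H x W floats, laid out as in the pattern above
            h, w = mosaic.shape
            r = np.zeros((h, w)); r[0::2, 0::2] = 1  # red sites
            b = np.zeros((h, w)); b[1::2, 1::2] = 1  # blue sites
            g = 1 - r - b                            # green sites

            def interp(mask):
                # Weighted average of whichever samples of this channel
                # exist in each 3x3 neighbourhood
                k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
                est = (convolve(mosaic * mask, k, mode="mirror")
                       / convolve(mask, k, mode="mirror"))
                # keep the measured sample where this channel was captured
                return np.where(mask.astype(bool), mosaic, est)

            return np.dstack([interp(r), interp(g), interp(b)])

        raw = np.random.rand(6, 8)    # stand-in for raw sensor data
        rgb = bilinear_demosaic(raw)  # H x W x 3 reconstruction
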
    That could depend on how the camera made its B&W. Its B&W JPEG could
    potentially be a truer B&W image than what you'd get by desaturating the
    camera's JPEG.

    The best way is to shoot RAW, if the camera has it, and make a linear
    TIFF from the RAW, so you don't get any of the color distortion that
    occurs when RAW images are converted to RGB color spaces.


     
    John Sheehy, May 4, 2007
    #7
  8. Or blue, if you really want to make someone look like a ghoul.

     
    John Sheehy, May 4, 2007
    #8
  9. Well, we don't know that for a fact. It is possible that some cameras
    don't convert to color first, but just demosaic and then use the
    interpolated RAW channels for B&W. That's what I would do, if I were
    designing a camera. It's faster and better.

    Whether or not anyone at the camera companies is smart enough to think of
    this is another issue.

     
    John Sheehy, May 4, 2007
    #9
  10. I imagine that much better color information could be had if you used even
    more colors than 3. You'd probably need new software algorithms to
    interpret it, though.

    The same could probably be done with color cameras; just a little more
    math.

     
    John Sheehy, May 4, 2007
    #10
  11. I guess it depends on your definition of "better." The human eye only
    has 3 different types of color receptors (cone cells), so 3 filters are
    enough to reproduce any color we can see.

    On the other hand, special-purpose imaging systems (like NASA probes) have
    a greater variety of filters available for their cameras, to reveal
    information a human eye cannot see.
     
    timeOday, May 4, 2007
    #11

  12. Some women may have four: http://en.wikipedia.org/wiki/Tetrachromatic

    Even ignoring this possibility, there are some other reasons why more
    than 3 channels could help. Do we all have the same three types of
    receptor? Or could it be that you and I have different ranges for our
    receptors? If so, the perfect 3-channel camera, display, or printer
    for you may not be the perfect one for me. Yet another possibility,
    and I think this one is likely: the frequency responses of the channels
    in the camera may not exactly match those of our eyes. If that is the
    only problem, it could be solved by improving the sensors or their
    filters, but extra channels may be an easier solution. Display and
    printing could matter too: we would need 3 colours that each fire only
    one of our receptor types, and do our displays and inks achieve that?

    As well as three types of cone, we also have the rods. These do not
    come in colour variants and are more sensitive, which is why the world
    looks black and white at night. I have often wondered why these do not
    provide a fourth channel in brighter conditions, when the cones are
    also active. The answer could be that there are no rods in the fovea.
    I have looked a few times for an answer to this puzzle but not found
    one.

    <snip>
     
    Seán O'Leathlóbhair, May 4, 2007
    #12
  13. That's not quite how it works. The eye has a complicated system of
    receptors which are matrixed at a ratio of 100:1 in the optic nerve into
    what appear to be Blue/Yellow and Red/Green channels. But this is then
    interpreted by the brain, which is at least as sensitive to movement
    (changes) and outlines (boundaries) as it is to colour and tone.

    The eye does not capture a "picture" in the same way that a camera, still or
    video, does, but rather sends a constant stream of parallel analogue signals
    which are built up into a mental image of the view.

    The nearest we could get to that would be to wire up a video camera, with
    very accurate motion tracking, to a computer which can interpolate the
    constant stream of data into a single moving image, with each new frame
    adding detail but with a limited persistence. The computer would have to
    compare each frame with the previous ones recursively, with weights that
    decay over time, and account for the motion of the camera.
     
    Richard Polhill, May 4, 2007
    #13
  14. [snip]

    The Sony F828 uses 4 colours. Some of the greens are replaced by what
    we would call cyan. Apparently, 4 colours were more common in earlier
    digital cameras than they are now.
     
    Barry Pearson, May 4, 2007
    #14
  15. That sums it up nicely, I think.
     
    Matt Ion, May 4, 2007
    #15
  16. Bees can actually see UV (some flowers mark their most delicious parts
    with UV points) and a colour beyond human understanding, called
    bee-magenta. That computer is the most complicated structure in the
    universe. IMHO people are quite standard; most are blood group A+, most
    men wear one of 3-4 shoe sizes, average weight, average intelligence,
    and so on. Our Maker must have had His thoughts on this....
     
    Tzortzakakis Dimitrios, May 4, 2007
    #16
  17. Fascinating - what was its resolution? 640 x 480? Which Amiga did you
    have? What could you do with your photos? You could neither print them
    nor email them, of course. Just a slight peek into the future...
     
    Tzortzakakis Dimitrios, May 4, 2007
    #17
  18. The most immediately useful thing I can think of is that CA (chromatic
    aberration) could be corrected more accurately.

    Scaling 3 channels, as is usually done, moves wavelength X in one channel
    differently than the same wavelength in another channel. The green channel
    of Bayer cameras is fairly sensitive to all visible wavelengths, while the
    red and blue tend to be relatively blind at some wavelengths (red LEDs
    hardly register in the blue channel, and vice versa).

     
    John Sheehy, May 5, 2007
    #18
  19. Personally, I prefer to take B&W photos in B&W mode in the camera. If
    I take the photo in color, I tend to fiddle with the color channels a
    lot when I desaturate the photo. Doing it in-camera saves me time
    later and I can decide if I like what I see right at the moment I take
    the shot. If I think it needs improvement, I can always use a colored
    filter.

    I guess from the other responses, it's not the popular way to do it,
    but it works for me. ymmv.
     
    Homer_Simpson, May 7, 2007
    #19
  20. It seems to me that if you always have filters on you, know you want
    to work in B&W, and are experienced enough to use the filters
    properly, that's a perfectly fine way to do it!

    The reason I hear most often for avoiding B&W mode is the way the
    camera desaturates the image. If the image is desaturated without any
    other processing, then B&W mode with a colored filter should have
    exactly the same effect as it would with B&W film.

    If, however, the camera does any sort of processing, such as slight
    contrast increase, etc., you'd be better off channel mixing and
    desaturating in post-processing.
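
    A tiny sketch of the difference, in Python (both formulas are
    assumptions -- what any particular camera's B&W mode really does is
    not documented here):

        import numpy as np

        img = np.random.rand(4, 6, 3)  # stand-in for a colour capture

        # Plain desaturation: one guess at an in-camera B&W conversion
        plain = img.mean(axis=2)

        # Channel mix in post: control the camera's mode doesn't offer
        custom = img @ np.array([0.5, 0.4, 0.1])  # illustrative weights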
     
    Aaron, May 31, 2007
    #20
