``Sharpness'' option?

Discussion in 'Digital Photography' started by Ron Hardin, Dec 21, 2005.

  1. Ron Hardin

    Ron Hardin Guest

    What does the ``sharpness'' do on my Sony DSC-P93?

    I think it has undesirable effects on a black dog against snow but
    am not sure what's going on.

    Is it correcting for out-of-focus or some such common mistake, or
    is it something about pixels that's being corrected?

    I see I can add sharpness or blur with IrfanView, but I don't know
    whether the camera already does the same thing, and perhaps it's always
    better to add it later than to have to take it out later.
     
    Ron Hardin, Dec 21, 2005
    #1

  2. "Sharpness" has the effect of increasing the apparent "sharpness" or
    "crispness" of the picture. Some cameras apply sharpness routines more
    than others, as part of the processing of the picture in the camera. I
    don't know if all cameras are adjustable in this respect, but certainly
    many are. The processing for sharpness serves to increase the contrast
    between adjacent pixels of distinctly different
    color/lightness/darkness. However, overuse of sharpness causes
    "artifacts" to appear in the picture - eyebrows can grow light
    splotches, for example. Properly used, I think sharpness can add quite
    a bit to a picture, but if overapplied, it is ugly and you can spot it
    a mile away.
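
    As a rough sketch of the idea (Python with numpy; the weights, the
    clipping, and the single grey channel are illustrative only, not what
    any particular camera actually does):

        import numpy as np

        def sharpen(image, amount=1.0):
            # image: 2-D array of floats in 0..1 (one grey channel).
            # Pad the borders so the output keeps the input's shape.
            p = np.pad(image, 1, mode="edge")
            # Discrete Laplacian: each pixel minus the mean of its 4 neighbours.
            lap = p[1:-1, 1:-1] - 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1]
                                          + p[1:-1, :-2] + p[1:-1, 2:])
            # Adding the Laplacian back boosts contrast across edges; a large
            # amount produces exactly the splotchy artifacts described above.
            return np.clip(image + amount * lap, 0.0, 1.0)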
     
    davidjchurchill, Dec 21, 2005
    #2

  3. Tesco News

    Tesco News Guest


    Hi.

    And after the camera has applied too much sharpening and spoiled your
    picture, you cannot undo it in post-processing.

    Roy G
     
    Tesco News, Dec 21, 2005
    #3
  4. Dave Cohen

    Dave Cohen Guest

    As stated above, this varies from camera to camera. I believe the digitizing
    process creates a condition that requires sharpening to be applied; you'll have
    to search and read the relevant web pages for details.
    On my A95, sharpening seems to do nothing on face portraits. It might improve
    other aspects such as hair and clothing, but it seems most needed for flowers,
    foliage, detail on buildings, etc. In other words, areas of the image where
    detail is important. The best thing is to just play, but again I agree with the
    previous post: over-sharpening can look worse than no sharpening. My camera
    doesn't offer a choice, which is probably a good thing - one less thing to go
    wrong - although if it did I would probably just take the default.
    Note, in the old days, lenses for large format cameras often came in a soft
    version for portrait photography and gave a pleasing effect, minimizing
    small facial blemishes and wrinkles.
    Dave Cohen
     
    Dave Cohen, Dec 21, 2005
    #4
  5. Pete Fenelon

    Pete Fenelon Guest

    In the most general terms, sharpening increases contrast at edges so as
    to make images look 'crisper' and less fuzzy. There is a fine line
    between sharpening that makes certain types of image look good, and
    oversharpening which leaves halos and shadows on your images...

    Ideally, the camera should apply only just enough sharpening to overcome
    the geometrical issues caused by a Bayer sensor (which lead to some
    softening of focus).

    In general, you want your camera to do as little as possible. Rule 1 of
    digital photography -- "don't bruise the pixels in the camera" -- if you
    want to sharpen more than the camera does by default, get the images
    out of the camera and sharpen *copies* of the images in a decent
    photo-editing tool!
    Yes; it's generally better to sharpen out of the camera than in it. You
    can see what changes you're making and you can experiment and undo!
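
    In Python with Pillow, that "sharpen a copy" workflow is a few lines
    (the filename is made up for the example):

        from PIL import Image, ImageFilter

        original = Image.open("dog_in_snow.jpg")          # made-up filename
        sharpened = original.filter(ImageFilter.SHARPEN)  # returns a new image
        sharpened.save("dog_in_snow_sharp.jpg")           # original file untouched
        # Don't like the result? Delete the copy, tweak, and try again.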

    There are many tutorials on sharpening workflows out there; I swear by a
    technique Scott Kelby introduced in one of his books.

    pete
     
    Pete Fenelon, Dec 21, 2005
    #5
  6. A simple sharpening algorithm looks for contrast gradients (edge detection)
    and enhances them. Works, but does indeed have unpleasant side effects.
    Correcting for out-of-focus blur is magic and thus just not possible. Ideal
    sharpening corrects for the anti-aliasing filter glued to the front of the
    camera sensor. Best done in post-processing, at the current state of the
    art.
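
    One way to see the "find the gradients, enhance them" recipe concretely
    (a numpy sketch; the box blur is a crude stand-in for whatever edge
    detector a real camera uses):

        import numpy as np

        def box_blur(img, r):
            # Average over a (2r+1) x (2r+1) window, padding the edges.
            k = 2 * r + 1
            p = np.pad(img, r, mode="edge")
            rows = sum(p[i:i + img.shape[0], :] for i in range(k)) / k
            return sum(rows[:, j:j + img.shape[1]] for j in range(k)) / k

        def enhance_edges(img, amount=0.7, r=2):
            edges = img - box_blur(img, r)   # near zero except at edges
            # Adding the edge map back steepens every contrast gradient;
            # push amount too far and the unpleasant halo side effects appear.
            return np.clip(img + amount * edges, 0.0, 1.0)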
     
    Charles Schuler, Dec 21, 2005
    #6
  7. Ron Hardin

    Ron Hardin Guest

    Is the anti-aliasing just a diffuser? If so, it shouldn't extend over more
    than a pixel anyway, but maybe for bright sources it does?
     
    Ron Hardin, Dec 21, 2005
    #7
  8. It is an optical low-pass filter. It reduces high frequencies, which
    translate to fine detail. It has little or no effect on gradual
    transitions (low frequencies).
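
    A toy 1-D check of that claim (numpy; the three-tap average stands in
    for the optical filter):

        import numpy as np

        x = np.arange(256)
        fine = 0.5 + 0.5 * np.cos(np.pi * x)   # flips every pixel: high frequency
        ramp = x / 255.0                        # gradual transition: low frequency

        blur = lambda s: np.convolve(s, [1/3, 1/3, 1/3], mode="same")

        # Ignore the two boundary samples, which the zero padding distorts.
        print(np.ptp(fine), np.ptp(blur(fine)[1:-1]))  # 1.0 -> ~0.33
        print(np.ptp(ramp), np.ptp(blur(ramp)[1:-1]))  # 1.0 -> ~0.99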
     
    Charles Schuler, Dec 21, 2005
    #8
  9. Ron Hardin

    Ron Hardin Guest

    The idea (I don't know, just guessing) would be that the sensitive spot of
    each pixel is much smaller than the pixel separation. So a regular pattern of
    close bright/dark/bright/dark at a high wavenumber (many cycles per pixel
    separation) will appear as a much lower wavenumber pattern of bright and dark;
    so, for instance, a distant chain-link fence will appear as a solid bar in one
    place and invisible in another place, instead of the uniform unresolved grey
    that film would produce.

    If you put in a diffuser, it spreads the energy out over the area it diffuses
    over, which you want to be about one pixel separation. So the fence, which
    has to be unresolved because it's finer than the pixel separation, is in fact
    unresolved, as a uniform grey.
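
    That guess is easy to check numerically (numpy; the fence spacing and
    pixel pitch are arbitrary - the point is only that the fence is finer
    than the pixels):

        import numpy as np

        t = np.arange(4000)                   # a fine grid across the scene
        fence = (np.sin(2 * np.pi * t / 3.3) > 0).astype(float)  # period 3.3

        pitch = 10                            # pixel separation in grid units
        point_sampled = fence[::pitch]        # tiny sensitive spot per pixel
        averaged = fence.reshape(-1, pitch).mean(axis=1)  # diffuser over a pixel

        print(np.ptp(point_sampled))  # 1.0: spurious wide bright/dark bars
        print(np.ptp(averaged))       # ~0.2: nearly the uniform grey film gives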

    The cost is that a sharp bright/dark edge, that you do want to be crisp, is
    also diffused over a pixel, where you could (with processing, or without a
    diffuser) have a sharp edge reproduced.

    Presumably all that is implied by what you said: that there's anti-aliasing
    (the chain-link fence doesn't appear as alternating large bars and vacancies),
    but the sharpening restores the edges of large objects as sharp where they've
    been softened by the diffuser.

    ``diffuser'' I made up. There's probably a name for it.

    All guesswork on my part based on what you said, anyway it makes sense to me
    that way.

    ``High frequency...'' etc. would be a bad way to put it, since blue light is
    high frequency and red is low; what you want is spatial frequencies, which are
    conventionally called wavenumbers.
     
    Ron Hardin, Dec 22, 2005
    #9
  10. As I understand it, wavenumber is just the spatial counterpart of
    frequency - the inverse of a wavelength rather than of a period. To be
    precise, you can write "spatial frequency" to distinguish it from temporal
    frequency. But in still image processing, just "frequency" is generally
    understood to mean spatial frequency.

    Movies and video involve both spatial and temporal information, so it
    can be more confusing there unless you're careful to say what you're
    talking about.

    Dave
     
    Dave Martindale, Dec 22, 2005
    #10
  11. Ron Hardin

    Ron Hardin Guest

    Wavenumber k is the ``radian'' spatial frequency, e.g. sin(k . x)
    instead of sin(2 . pi . f . x) or sin(2 . pi . x / L),
    where L is wavelength,

    so frequency and wavenumber go up together.

    However when both time and space variation are involved, physicists
    and engineers, at least, use wavenumber for space and either a radian
    frequency (physicists) or regular frequency (engineers sometimes) for time.

    Thus a travelling wave is described by sin ( omega . t - k . x )
    with (radian) frequency omega and wavenumber k.

    k and x can be vectors and then k . x is the usual dot product; k points
    in a direction normal to the wave front.

    If you take the (spatial) Fourier transform F of the wave, you get the vector
    k as the conjugate of the space dimension x. Then i k . F is the Fourier
    transform of the divergence, and i k x F is the Fourier transform of the curl.

    Those being indications that you want radian frequencies, so you don't get
    two pi's all over the place where they're not at all necessary.
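
    A quick 1-D check of that bookkeeping with numpy (in one dimension the
    divergence is just d/dx, and its transform is i . k . F when k is the
    radian wavenumber):

        import numpy as np

        n, L = 256, 2 * np.pi
        x = np.arange(n) * L / n
        f = np.sin(3 * x) + 0.5 * np.cos(5 * x)
        exact = 3 * np.cos(3 * x) - 2.5 * np.sin(5 * x)   # analytic df/dx

        # Radian wavenumbers conjugate to x -- the 2 pi lives here and
        # nowhere else, which is the whole point of using k.
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

        deriv = np.fft.ifft(1j * k * np.fft.fft(f)).real
        print(np.max(np.abs(deriv - exact)))              # ~1e-13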
     
    Ron Hardin, Dec 22, 2005
    #11
  12. Alan Meyer

    Alan Meyer Guest

    I _think_ (but I'm no expert) that when Charles used the term
    "high frequency" he must have been referring to rapid changes of
    value across a string of adjacent pixels, not within a single
    pixel - which, of course, cannot record more than a single value,
    ultimately translated to a number representing the luminance of
    that pixel under a standard red, green, or blue mask filter.

    In theory, it seems to me that applying sharpness as a post
    process is a better way to do it because you can make a human
    decision about when it works and when it doesn't. You can
    also use more CPU horsepower over more time than is practical in
    a camera, and therefore use more sophisticated algorithms. You
    can even custom set the radius, threshold and amount values to
    exactly suit the particular image.
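
    Pillow, for instance, exposes exactly those three knobs (the filenames
    are made up; "percent" plays the role of amount):

        from PIL import Image, ImageFilter

        img = Image.open("portrait.jpg")
        tuned = img.filter(ImageFilter.UnsharpMask(
            radius=2,     # how far around each edge the correction reaches
            percent=120,  # strength ("amount") of the effect
            threshold=3,  # minimum contrast step to touch; protects smooth skin
        ))
        tuned.save("portrait_sharpened.jpg")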

    In practice, however, you may not want to have to post-process
    every image you shoot in order to get satisfactory results. It
    adds a lot of time and effort to your photography (though some of
    the serious photographers here are happy to do that).

    So it may be most practical to play with the sharpness setting in
    your camera until you get it so that it produces images you like
    most of the time. But if you are going to err, you may want to
    err more on the undersharpening side than on the oversharpening
    side.

    Alan
     
    Alan Meyer, Dec 23, 2005
    #12
  13. Ron Hardin

    Ron Hardin Guest

    The key words were aliasing and filter, from which one can (probably
    correctly) deduce the entire situation.

    Aliasing always means undersampling for the bandwidth you're actually experiencing.

    It must be, therefore, that a single pixel responds to only a small part of the
    area it occupies in its array of pixels (otherwise it would do the averaging all
    by itself, which is what you wish it would do, but it does not). So if it happens
    to be hit by a bit of the picket fence where there's a picket, it comes out white,
    even though several pickets fall on the same pixel and a better value for the
    distant fence would be grey.

    If you do nothing, the on-picket / off-picket situation will differ regularly
    from pixel to pixel, in some pattern that may be slow, giving you wide white
    and black bars across the picture instead of your nice grey blurred fence. That's
    aliasing, the production of wide bars out of narrow ones owing to undersampling.
    You're producing an artifact of a wide bandwidth by sampling it too slowly and then
    giving a full pixel width to each sampled value.

    A filter that diffuses, in front of the array of pixels, does the averaging, so
    that the sensitive spot in the pixel does indeed see grey - an average over
    several fence pickets - when the incoming picture is finer than the pixels can
    resolve.

    Another consequence, however, is that a sharp edge between large areas, which
    you do want to stay crisp, acquires a grey zone at the edge; sharpening puts
    the sharp edge back.

    Ideally, it's the inverse of the diffusing filter, but since that has infinite gain
    as you go up in wavenumber, practically it would be cut off at some pretty high
    wavenumber, so as not to amplify hints that come from pure noise.
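
    A sketch of that capped inverse in numpy (the 5-tap average is a stand-in
    diffuser, and eps is the fudge that rolls the correction off where the
    filter's spectrum gets small):

        import numpy as np

        n = 512
        edge = (np.arange(n) > n // 2).astype(float)   # a perfectly sharp edge

        h = np.zeros(n); h[:5] = 1 / 5                 # "diffuser": 5-pixel average
        H = np.fft.fft(h)
        blurred = np.fft.ifft(np.fft.fft(edge) * H).real

        # Naive 1/H has huge gain where H is nearly zero; damping with eps
        # cuts the correction off at high wavenumber instead of boosting it.
        eps = 1e-2
        inverse = np.conj(H) / (np.abs(H) ** 2 + eps)
        restored = np.fft.ifft(np.fft.fft(blurred) * inverse).real
        # With eps = 0 this noise-free demo would invert almost exactly, but
        # on a real image the gain near the dips of H would amplify noise.
        print(np.max(np.abs(restored - edge)))         # modest residual error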

    As I say, this is all hallucinated from a couple of words. I'm making it all up.
     
    Ron Hardin, Dec 23, 2005
    #13