Fixing an unfocused image: theoretically straightforward?!

Discussion in 'Digital Photography' started by 223rem, Apr 20, 2007.

  1. 223rem

    223rem Guest

    If an unfocused image = convolution of the focused image with a Gaussian
    kernel, then this is a linear thus invertible transform, and therefore
    the focused image could in principle be recovered by guessing the sigma
    of the blurring Gaussian!

    Right? Most likely not, but why?
     
    223rem, Apr 20, 2007
    #1
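The premise of the question can actually be checked numerically: with a known Gaussian kernel and no noise, the blur really is invertible in the frequency domain. A minimal numpy sketch (1-D signal and circular convolution for simplicity; this is an illustration of the premise, not anything posted in the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 64, 2.0
sharp = rng.random(N)                 # a "sharp" 1-D test signal

idx = np.arange(N)
d = np.minimum(idx, N - idx)          # circular distance from index 0
kernel = np.exp(-d**2 / (2 * sigma**2))
kernel /= kernel.sum()                # periodic Gaussian blur kernel

K = np.fft.fft(kernel)
blurred = np.fft.ifft(np.fft.fft(sharp) * K).real   # blur = product in freq space

# "Deconvolve" by dividing by the kernel's transform; K has no zeros,
# so in (nearly) exact arithmetic this recovers the original:
recovered = np.fft.ifft(np.fft.fft(blurred) / K).real
print(np.max(np.abs(recovered - sharp)))            # tiny (float round-off only)
```

As the replies in the thread spell out, this collapses the moment real noise, quantization, or a non-Gaussian point spread function enters the picture.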

  2. 223rem

    Mark² Guest

    223rem wrote:
    > If an unfocused image = convolution of the focused image with a
    > Gaussian kernel, then this is a linear thus invertible transform, and
    > therefore the focused image could in principle be recovered by
    > guessing the sigma of the blurring Gaussian!
    >
    > Right? Most likely not, but why?


    But how would it detect which portions of the image are blurred simply due
    to desired, normal DOF limitations vs. the focussing mistake within the
    frame? A perfectly focussed subject/portion is rarely surrounded by
    perfectly-focussed elements unless you're shooting a wholly distant
    landscape, a flat-to-sensor subject, or at such huge DOF settings as to be
    nearly identically sharp, near to far. I frankly wouldn't even attempt to
    answer your question with math, proof, etc. I'm just thinking the above is
    one reason why there would be problems. Maybe Roger Clark, or perhaps David
Littlewood, can help you with this one...

    --
    Images (Plus Snaps & Grabs) by Mark² at:
    www.pbase.com/markuson
     
    Mark², Apr 20, 2007
    #2

  3. In article <>,
    223rem <> wrote:

    > If an unfocused image = convolution of the focused image with a Gaussian
    > kernel, then this is a linear thus invertible transform, and therefore
    > the focused image could in principle be recovered by guessing the sigma
    > of the blurring Gaussian!
    >
    > Right? Most likely not, but why?


    Not true.

    First, the signal to noise ratio quickly heads towards zero when you
    undo a blur. The worse the blur, the less unique signal is left in each
    pixel to extract.

    Second, focus blur is not uniform for objects with different distances.
    The amount of blur correction can only be guessed.


    Mild haze from a cheap lens can often be completely corrected but lack
    of focus can only be slightly corrected. Some enhancement applications
    will use pattern matching to guess what the picture used to look like
    and redraw it as a sharp image. It can produce a pleasing image if
enough of its guessing is right. Sometimes it goes horribly wrong, too.
     
    Kevin McMurtrie, Apr 20, 2007
    #3
  4. 223rem

    HvdV Guest

    Hi 223rem,
    >
    >
    >>If an unfocused image = convolution of the focused image with a Gaussian
    >>kernel, then this is a linear thus invertible transform, and therefore
    >>the focused image could in principle be recovered by guessing the sigma
    >>of the blurring Gaussian!
    >>
    >>Right? Most likely not, but why?

    If it were indeed Gaussian, and the image noise-free, then indeed you could
    do an inversion resulting in unlimited resolution. However, the blur function
    is band limited, varies from point to point in your image AND there is noise,
    always.

    But that doesn't mean there is nothing you can do...

    -- Hans

    > Mild haze from a cheap lens can often be completely corrected but lack
    > of focus can only be slightly corrected. Some enhancement applications
    > will use pattern matching to guess what the picture used to look like
    > and redraw it as a sharp image. It can produce a pleasing image if
    > enough of its guessing is right. Sometimes it goes horribly wrong, too.

    Yes!
     
    HvdV, Apr 20, 2007
    #4
  5. 223rem

    Mike Russell Guest

    "223rem" <> wrote in message
    news:...
    > If an unfocused image = convolution of the focused image with a Gaussian
    > kernel, then this is a linear thus invertible transform, and therefore the
    > focused image could in principle be recovered by guessing the sigma of the
    > blurring Gaussian!
    >
    > Right? Most likely not, but why?


    Probably not, or it would have been done already, at least for a flat
    subject with low noise.

    Here's a similar problem I've always thought might be interesting, that may
    be easier to solve: inverting a bevel. Since a bevel is computed by
    subtracting pixels offset by a constant distance along a line, integrating
    along the same line should restore the original image, plus or minus a
    constant.
    --
    Mike Russell
    www.curvemeister.com/forum/
     
    Mike Russell, Apr 20, 2007
    #5
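Mike's bevel-inversion idea is small enough to verify directly. A sketch (1-D row, offset d = 3; keeping the first d samples as integration constants is my reading of his "plus or minus a constant"):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.integers(0, 256, 100).astype(float)    # one row of an "image"

d = 3                                          # bevel offset in pixels
bevel = x[d:] - x[:-d]                         # bevel = difference along the row

# Integrate along the same line: each residue class mod d accumulates
# independently, so the first d samples serve as integration constants.
recovered = np.concatenate([x[:d], np.empty(len(bevel))])
for n in range(d, len(x)):
    recovered[n] = recovered[n - d] + bevel[n - d]

print(np.allclose(recovered, x))   # True
```

Strictly there are d unknown constants, one per residue class mod d, rather than a single one, but otherwise the integration restores the row exactly.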
  6. 223rem

    Ron Hunter Guest

    223rem wrote:
    > If an unfocused image = convolution of the focused image with a Gaussian
    > kernel, then this is a linear thus invertible transform, and therefore
    > the focused image could in principle be recovered by guessing the sigma
    > of the blurring Gaussian!
    >
    > Right? Most likely not, but why?


    One needs to know the parameters of the lack of focus, and much
    processing is required for mediocre results, at least in my experience.
     
    Ron Hunter, Apr 20, 2007
    #6
  7. [A complimentary Cc of this posting was sent to
    HvdV
    <>], who wrote in article <46287046$0$2026$>:

    > If it were indeed Gaussian, and the image noise-free, then indeed
    > you could do an inversion resulting in unlimited
    > resolution. However, the blur function is band limited, varies from
    > point to point in your image AND there is noise, always.


    Another point is quantization (which could be considered as noise too,
    of course). E.g., at 5sigma, one loses 18 bits of S/N; even if there
    is no noise, one needs to quantize the result at about 28bits to get
decent results. ;-) [Here "at 5sigma" means the spatial frequency;
    e.g., for a Gauss blur with radius r, this corresponds to wavenumber
    5/r, or half-wavelength of r*pi/5. E.g., this is applicable to
    maximal resolution details blurred with a gaussian of radius 1.6 pixels.]

    But indeed, the main "theoretical" reason is that diffraction is not
    Gaussian; it COMPLETELY cuts off the high frequencies (see keywords
"Fourier optics" for the math behind this; this is what is called "band
    limited" above, but I wanted to emphasize it more).

    And adding lens aberrations on top of diffraction can only worsen
    things by adding some additional zeros in the MTF...

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Apr 20, 2007
    #7
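Ilya's "18 bits at 5 sigma" figure checks out: a Gaussian blur of radius r has transfer function H(k) = exp(-(r*k)^2 / 2), so at wavenumber k = 5/r the attenuation is exp(-12.5), about 18 bits. A quick sketch of the arithmetic:

```python
import math

# Gaussian blur with std. dev. r has transfer function H(k) = exp(-(r*k)^2 / 2).
r = 1.6            # blur radius in pixels (Ilya's example)
k = 5.0 / r        # "at 5 sigma": wavenumber 5/r
H = math.exp(-(r * k) ** 2 / 2)
bits_lost = math.log2(1 / H)
print(f"attenuation {H:.2e}, i.e. about {bits_lost:.1f} bits of S/N lost")
```

So even a noiseless sensor would need roughly 18 extra bits of quantization headroom to invert that much attenuation cleanly, which is where his "about 28 bits" comes from.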
  8. 223rem

    bugbear Guest

    223rem wrote:
    > If an unfocused image = convolution of the focused image with a Gaussian
    > kernel, then this is a linear thus invertible transform, and therefore
    > the focused image could in principle be recovered by guessing the sigma
    > of the blurring Gaussian!
    >
    > Right? Most likely not, but why?


    Only if there are no depth of field issues.

    BugBear
     
    bugbear, Apr 20, 2007
    #8
  9. 223rem

    HvdV Guest

    Hi Ilya,
    >
    >
    > Another point is quantization (which could be considered as noise too,
    > of course). E.g., at 5sigma, one loses 18 bits of S/N; even if there
    > is no noise, one needs to quantize the result at about 28bits to get
    > decent results. ;-) [Here "at 5sigma" means the spatial frequency;
    > e.g., for a Gauss blur with radius r, this corresponds to wavenumber
    > 5/r, or half-wavelength of r*pi/5. E.g., this is applicable to
    > maximal resolution details blurred with a gaussian of radius 1.6 pixels.]

    Good point! -- Hans
     
    HvdV, Apr 20, 2007
    #9
  10. 223rem

    Robert Haar Guest

    On 4/20/07 12:56 AM, "223rem" <> wrote:

    > If an unfocused image = convolution of the focused image with a Gaussian
    > kernel, then this is a linear thus invertible transform, and therefore
    > the focused image could in principle be recovered by guessing the sigma
    > of the blurring Gaussian!
    >
    > Right? Most likely not, but why?


    Digitization effects
     
    Robert Haar, Apr 20, 2007
    #10
  11. On Apr 19, 11:56 pm, 223rem <> wrote:
    > If an unfocused image = convolution of the focused image with a Gaussian
    > kernel, then this is a linear thus invertible transform, and therefore
    > the focused image could in principle be recovered by guessing the sigma
    > of the blurring Gaussian!
    >
    > Right? Most likely not, but why?


    Depends on the object. Such deconvolution depends on knowing the
    characteristics of the object. Astronomers can easily deconvolve
    stars, for instance.

    However, if the image is of irregular, ungeometric sources, it becomes
    very hard.

    What frequently happens is that if you turn up the "gain" on a
    correlation too much, you guarantee that the results WILL look like
    what you assumed it was. Sort of like, "if all you have is a hammer,
    everything will look like a nail" :)
     
    Don Stauffer in Minnesota, Apr 20, 2007
    #11
  12. 223rem wrote:
    > If an unfocused image = convolution of the focused image with a Gaussian
    > kernel, then this is a linear thus invertible transform, and therefore
    > the focused image could in principle be recovered by guessing the sigma
    > of the blurring Gaussian!
    >
    > Right? Most likely not, but why?


    So far I see a lot of misguided poorly focused responses. ;-)

    1) There is vast research in this area, dating back at least
    35 years (that is when I first encountered research articles
    on the subject). The Hubble telescope before the optics fix
    is an example of where such an application was used.

    2) What you say does work, but as some have pointed out, noise
    limits results. Because the signal of the subject is averaged
    between adjacent pixels in the defocussed/blurred image,
    to recover, you must estimate the signal that is spread out,
    subtract it off of adjacent pixels and add it back in to
    the pixel where the signal belongs. The problem is noise.
    As you do the deconvolution, noise increases.
    So you trade spatial resolution for noise. The other
    problem is if you don't get the blur function exact,
    then you have artifacts. This usually shows up in the form of
    ringing.

    3) The signal-to-noise (S/N) trade discussed above limits how far you
    can push the reconstruction. In my experience, I feel I can get
    effectively about a factor of 2 increase in spatial detail
    using image reconstruction techniques with DSLR low ISO (high S/N)
    images. Example:

    Image Restoration Using Adaptive Richardson-Lucy Iteration
    http://www.clarkvision.com/imagedetail/image-restoration1

    At the end of the article, there are some references to research articles.
    I think that current reconstruction is also limited by upsampling
    artifacts, like "jaggies" on edges. If I had better upsampling,
    I feel I could make even larger images. Note, even with 2x increase,
    an 8 megapixel camera makes images similar to 32 megapixels.
    I'm making beautiful 16x24 inch prints at 305 ppi that I think
are impressively sharp (you can "stick your nose" to the print)
    from 8 mpixels.

    4) The idea that focus is different in different parts of the image
    is no different than saying all images are out of focus therefore
    bad. If you had a slightly defocused image, and applied a reconstruction
    method using a blur model, that simply restores the image
    to the state as if you took the image at better focus. Those
    areas that are further out of focus will be changed to the slightly
    better focus, but no different than taking the image at that focus
    in the first place (all ignoring the noise from reconstruction).
    Then you can also apply different blur models to different parts of the
    image, just as one might select a portion of an image and apply
    unsharp mask in photoshop.

    Roger
     
    Roger N. Clark (change username to rnclark), Apr 20, 2007
    #12
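Roger's description of deconvolution, estimating the spread-out signal and putting it back where it belongs, is exactly what Richardson-Lucy iteration does. A self-contained 1-D numpy sketch (noiseless data, circular convolution, and a symmetric Gaussian PSF so it is its own adjoint; an illustration only, not Roger's adaptive variant):

```python
import numpy as np

def cconv(a, k):
    # circular convolution, kernel centred at index 0
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(k)).real

N = 128
idx = np.arange(N)
d = np.minimum(idx, N - idx)
psf = np.exp(-d**2 / (2 * 2.0**2))
psf /= psf.sum()                      # Gaussian blur, sigma = 2 pixels

sharp = np.zeros(N)
sharp[[40, 48]] = 1.0                 # two point sources 8 pixels apart
blurred = cconv(sharp, psf)           # noiseless blurred "image"

# Richardson-Lucy: multiplicative updates that redistribute blurred signal
# back toward the pixels it came from (psf symmetric => its own adjoint).
est = np.full(N, blurred.mean())
for _ in range(200):
    ratio = blurred / np.maximum(cconv(est, psf), 1e-12)
    est = est * cconv(ratio, psf)

print(np.sum((blurred - sharp)**2), np.sum((est - sharp)**2))
```

With noiseless data the estimate keeps sharpening; with real noise you stop early or regularize, which is precisely the resolution-for-noise trade Roger describes.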
  13. 223rem

    jpc Guest

    On Fri, 20 Apr 2007 00:56:19 -0400, 223rem <> wrote:

    >If an unfocused image = convolution of the focused image with a Gaussian
    >kernel, then this is a linear thus invertible transform, and therefore
    >the focused image could in principle be recovered by guessing the sigma
    >of the blurring Gaussian!
    >
    >Right? Most likely not, but why?



    Recent research in wavelet noise reduction claims to do this.

    Google "HPL-2006-103" to bring up the paper.

    Perhaps some of the math experts in this thread might want to look at
    the paper and comment. If this research is as out-of-the-box and
    elegant as I think it is, we can expect some very interesting small
    sensor cameras from HP.

    jpc
     
    jpc, Apr 20, 2007
    #13
  14. 223rem

    Rich Guest

    On Apr 20, 12:56 am, 223rem <> wrote:
    > If an unfocused image = convolution of the focused image with a Gaussian
    > kernel, then this is a linear thus invertible transform, and therefore
    > the focused image could in principle be recovered by guessing the sigma
    > of the blurring Gaussian!
    >
    > Right? Most likely not, but why?


If there were a good way to fix this using software, the Hubble
    Telescope wouldn't have needed a $700M repair mission.
     
    Rich, Apr 20, 2007
    #14
  15. On Apr 20, 12:42 pm, Rich <> wrote:
    > On Apr 20, 12:56 am, 223rem <> wrote:
    >
    > > If an unfocused image = convolution of the focused image with a Gaussian
    > > kernel, then this is a linear thus invertible transform, and therefore
    > > the focused image could in principle be recovered by guessing the sigma
    > > of the blurring Gaussian!

    >
    > > Right? Most likely not, but why?

    >
    > If there was a good way to fix this using software, the Hubble
    > Telescope wouldn't have needed a $700M repair mission.


    If the Hubble sensor was essentially noiseless, and had a very large
    bit depth, a deconvolution may have been able to prevent the $700M
    repair mission. If you look at the Hubble pics on http://quarktet.com/,
    the clean-up is a decided improvement compared to the original.
However, there is little comparison between these and current
    Hubble pictures. Deconvolution can only correct for imperfections in
    the system, bad focus, atmosphere effects etc. There is great value
    in having a better system to start with (or repair).

    Jim at Quarktet
     
    JimAtQuarktet, Apr 20, 2007
    #15
  16. 223rem <> writes:
    >If an unfocused image = convolution of the focused image with a Gaussian
    >kernel, then this is a linear thus invertible transform, and therefore
    >the focused image could in principle be recovered by guessing the sigma
    >of the blurring Gaussian!


    First, defocus blur is convolution with a disc, or a rounded hexagon, or
    a rounded pentagon, or something else depending on the shape of the
    aperture in the lens. It is most definitely *not* a Gaussian, except
    for the small portion of the blur that's due to diffraction.

    Second, all the theory about invertible transforms assumes that the
    intermediate result (the blurred image) has infinite resolution in
    intensity and no added noise. Neither of these is true of a camera
    image. Where the blur reduces the amplitude of a particular frequency
    by 1/A, the deblur must boost its amplitude by a factor A, amplifying
    photon noise and quantization error and dark current noise by the same
    factor. And if the blur transform does not pass a particular frequency
    at all, the information is lost - the blur is not invertible, not even
    theoretically.

    Dave
     
    Dave Martindale, Apr 20, 2007
    #16
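Dave's last point, that a blur which passes zero energy at some frequency is not invertible even in theory, can be shown with a 1-D box kernel standing in for the pillbox defocus PSF (my simplification of the 2-D disc):

```python
import numpy as np

N, w = 64, 8
n = np.arange(N)
box = np.zeros(N)
box[:w] = 1.0 / w                     # 1-D stand-in for a pillbox defocus PSF
H = np.fft.fft(box)

# Unlike a Gaussian, this transfer function has exact zeros (a sinc pattern):
print(abs(H[N // w]))                 # ~0

# A signal component at such a frequency is wiped out entirely by the blur,
# so no amount of deconvolution can bring it back:
wave = np.cos(2 * np.pi * (N // w) * n / N)
blurred = np.fft.ifft(np.fft.fft(wave) * H).real
print(np.max(np.abs(blurred)))        # ~0
```

Dividing by H at those frequencies would mean dividing by zero: the information simply is not in the blurred image.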
  17. JimAtQuarktet <> writes:

    >> If there was a good way to fix this using software, the Hubble
    >> Telescope wouldn't have needed a $700M repair mission.


    >If the Hubble sensor was essentially noiseless, and had a very large
    >bit depth, a deconvolution may have been able to prevent the $700M
    >repair mission.


    Also if the cameras had infinite intensity resolution, which no real
    camera does.

    Hubble's mirror is not very large by ground telescope standards, even at
    the time it was launched, but it was supposed to be able to see very dim
    objects because the lack of atmosphere and (supposedly) very accurate
and smooth optics would concentrate most of the light from the star into a
    0.1 arcsecond span of pixels at the prime focus. But in fact the
    focused star image had a point spread function something like 1 arc
    second in size, so the same amount of light was spread over 100 times as
    many pixels. The light from faint stars could no longer be reliably
    detected above the noise.

    So Hubble got used to image brighter things for a while, things that
    could be captured and sharpened by image processing. But it wasn't
    until the optics were fixed that it could be used for faint-object
    work.

    Dave
     
    Dave Martindale, Apr 20, 2007
    #17
  18. On Fri, 20 Apr 2007 00:56:19 -0400, 223rem <> wrote:

    >If an unfocused image = convolution of the focused image with a Gaussian
    >kernel, then this is a linear thus invertible transform, and therefore
    >the focused image could in principle be recovered by guessing the sigma
    >of the blurring Gaussian!
    >
    >Right? Most likely not, but why?


    This isn't directly in response to what you asked, but I've had good
    luck "sharpening" somewhat out-of-focus digital images by resampling
    at a *lower* resolution. With fewer pixels, the transitional zones of
    edges are made harder, because intermediate shades/colors are
    eliminated. This is counterintuitive, but it seems to work
    (sometimes).
     
    Alexander Arnakis, Apr 20, 2007
    #18
  19. 223rem

    Rich Guest

    On Apr 20, 3:46 pm, (Dave Martindale) wrote:
    > JimAtQuarktet <> writes:
    > >> If there was a good way to fix this using software, the Hubble
    > >> Telescope wouldn't have needed a $700M repair mission.

    > >If the Hubble sensor was essentially noiseless, and had a very large
    > >bit depth, a deconvolution may have been able to prevent the $700M
    > >repair mission.

    >
    > Also if the cameras had infinite intensity resolution, which no real
    > camera does.
    >
    > Hubble's mirror is not very large by ground telescope standards, even at
    > the time it was launched, but it was supposed to be able to see very dim
    > objects because the lack of atmosphere and (supposedly) very accurate
    > and smooth optics would concentrate most of the light from the star into a
    > 0.1 arcsecond span of pixels at the prime focus. But in fact the
    > focused star image had a point spread function something like 1 arc
    > second in size, so the same amount of light was spread over 100 times as
    > many pixels. The light from faint stars could no longer be reliably
    > detected above the noise.
    >
    > So Hubble got used to image brighter things for a while, things that
    > could be captured and sharpened by image processing. But it wasn't
    > until the optics were fixed that it could be used for faint-object
    > work.
    >
    > Dave


In fact the optics were superbly smooth, about 1/40th wave in green
    light. But the perfect Kodak mirror stayed on the ground while the
    improperly ground one (wrong in shape, not in surface quality) went
    into the telescope, horribly spherically aberrated. And Hughes Danbury
    Optical Systems (who made the error) never really paid back what it
    cost to fix with the compensating optics called COSTAR.
     
    Rich, Apr 21, 2007
    #19
  20. 223rem

    John Sheehy Guest

    223rem <> wrote in
    news::

    > If an unfocused image = convolution of the focused image with a Gaussian
    > kernel, then this is a linear thus invertible transform, and therefore
    > the focused image could in principle be recovered by guessing the sigma
    > of the blurring Gaussian!
    >
    > Right? Most likely not, but why?


    One simple thought experiment makes it clear that you can never do it with
    100% accuracy.

Say that you have an image (or a crop) with a certain bit depth and a
    certain number of pixels. An unfocused image obeys stricter rules about
    how much contrast there can be between neighboring pixels, which limits
    the number of possible combinations. There are, therefore, more possible
    sharp images than possible unsharp images, so two different sharp images
    can blur to the same unsharp image, and no deconvolution can tell them
    apart.

    --

    <>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
    John P Sheehy <>
    ><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
     
    John Sheehy, Apr 21, 2007
    #20
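John's counting argument can be made concrete: pick a blur with a null space and exhibit two different sharp images with the identical blurred result. A minimal numpy sketch (the two-pixel box kernel is chosen because it exactly annihilates the Nyquist-frequency ripple; the specific signals are my own):

```python
import numpy as np

N = 16
n = np.arange(N)
kernel = np.zeros(N)
kernel[0] = kernel[1] = 0.5           # 2-pixel box blur

def blur(img):
    # circular convolution via FFT
    return np.fft.ifft(np.fft.fft(img) * np.fft.fft(kernel)).real

a = np.sin(2 * np.pi * n / N) + 2.0   # one "sharp" image (a smooth wave)
b = a + 0.5 * (-1.0) ** n             # a different one: extra Nyquist ripple

# Averaging adjacent pixels annihilates the alternating component exactly,
# so two distinct sharp images yield the same blurred image:
print(np.allclose(blur(a), blur(b)))  # True
```

Since blurring maps two different inputs to one output, no deconvolution, however clever, can decide which sharp image was the original.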