# Fixing an unfocused image theoretically straightforward?!

Discussion in 'Digital Photography' started by 223rem, Apr 20, 2007.

1. ### 223remGuest

If an unfocused image = convolution of the focused image with a Gaussian
kernel, then this is a linear thus invertible transform, and therefore
the focused image could in principle be recovered by guessing the sigma
of the blurring Gaussian!

Right? Most likely not, but why?
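For concreteness, here is the proposal as a minimal 1-D numpy sketch (my own toy setup, not anything from the thread: circular convolution, sigma known exactly). The inversion is essentially exact on noiseless data, but the last two lines preview the catch: noise, however small, gets multiplied by 1/H at every frequency.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 256, 2.0

# A sharp 1-D test "image": a couple of step edges.
x = np.zeros(n)
x[60:120] = 1.0
x[150:160] = 0.5

# Gaussian blur applied in the frequency domain (circular convolution).
f = np.fft.fftfreq(n)
H = np.exp(-2.0 * (np.pi * f * sigma) ** 2)   # transfer function of the Gaussian
blurred = np.fft.ifft(np.fft.fft(x) * H).real

# Naive inversion: divide the spectrum by H.  Works on noiseless data...
restored = np.fft.ifft(np.fft.fft(blurred) / H).real
err_clean = np.abs(restored - x).max()        # small

# ...but noise at one part per million is amplified by 1/H per frequency.
noisy = blurred + rng.normal(0.0, 1e-6, n)
restored_noisy = np.fft.ifft(np.fft.fft(noisy) / H).real
err_noisy = np.abs(restored_noisy - x).max()  # large
```

The replies below spell out why this breaks down for real photographs.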

223rem, Apr 20, 2007

2. ### Mark²Guest

223rem wrote:
> If an unfocused image = convolution of the focused image with a
> Gaussian kernel, then this is a linear thus invertible transform, and
> therefore the focused image could in principle be recovered by
> guessing the sigma of the blurring Gaussian!
>
> Right? Most likely not, but why?

But how would it detect which portions of the image are blurred simply due
to desired, normal DOF limitations vs. the focussing mistake within the
frame? A perfectly focussed subject/portion is rarely surrounded by
perfectly-focussed elements unless you're shooting a wholly distant
landscape, a flat-to-sensor subject, or at such huge DOF settings as to be
nearly identically sharp, near to far. I frankly wouldn't even attempt to
answer your question with math, proof, etc. I'm just thinking the above is
one reason why there would be problems. Maybe Roger Clark, or perhaps David, could weigh in.

--
Images (Plus Snaps & Grabs) by Mark² at:
www.pbase.com/markuson

Mark², Apr 20, 2007

3. ### Kevin McMurtrieGuest

223rem wrote:

> If an unfocused image = convolution of the focused image with a Gaussian
> kernel, then this is a linear thus invertible transform, and therefore
> the focused image could in principle be recovered by guessing the sigma
> of the blurring Gaussian!
>
> Right? Most likely not, but why?

Not true.

First, the signal to noise ratio quickly heads towards zero when you
undo a blur. The worse the blur, the less unique signal is left in each
pixel to extract.

Second, focus blur is not uniform for objects with different distances.
The amount of blur correction can only be guessed.

Mild haze from a cheap lens can often be completely corrected but lack
of focus can only be slightly corrected. Some enhancement applications
will use pattern matching to guess what the picture used to look like
and redraw it as a sharp image. It can produce a pleasing image if
enough of its guessing is right. Sometimes it goes horribly wrong, too.

Kevin McMurtrie, Apr 20, 2007
4. ### HvdVGuest

Hi 223rem,
>>If an unfocused image = convolution of the focused image with a Gaussian
>>kernel, then this is a linear thus invertible transform, and therefore
>>the focused image could in principle be recovered by guessing the sigma
>>of the blurring Gaussian!
>>
>>Right? Most likely not, but why?

If it were indeed Gaussian, and the image noise-free, then indeed you could
do an inversion resulting in unlimited resolution. However, the blur function
is band limited, varies from point to point in your image AND there is noise,
always.

But that doesn't mean there is nothing you can do...

-- Hans

> Mild haze from a cheap lens can often be completely corrected but lack
> of focus can only be slightly corrected. Some enhancement applications
> will use pattern matching to guess what the picture used to look like
> and redraw it as a sharp image. It can produce a pleasing image if
> enough of its guessing is right. Sometimes it goes horribly wrong, too.

Yes!

HvdV, Apr 20, 2007
5. ### Mike RussellGuest

"223rem" <> wrote in message
news:...
> If an unfocused image = convolution of the focused image with a Gaussian
> kernel, then this is a linear thus invertible transform, and therefore the
> focused image could in principle be recovered by guessing the sigma of the
> blurring Gaussian!
>
> Right? Most likely not, but why?

Probably not, or it would have been done already, at least for a flat
subject with low noise.

Here's a similar problem I've always thought might be interesting, that may
be easier to solve: inverting a bevel. Since a bevel is computed by
subtracting pixels offset by a constant distance along a line, integrating
along the same line should restore the original image, plus or minus a
constant.
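Mike's bevel inversion is easy to verify numerically. A minimal sketch (1-D, unit offset along the line, values of my own choosing):

```python
import numpy as np

x = np.array([3.0, 5.0, 4.0, 7.0, 6.0, 8.0])   # original scanline

# A "bevel" as a finite difference: each pixel minus its neighbor one step away.
bevel = np.diff(x)

# Integrating along the same line undoes it, up to the unknown constant x[0].
restored = x[0] + np.concatenate(([0.0], np.cumsum(bevel)))
```

Unlike deconvolution, the difference operator here is exactly invertible, so only the single unknown constant is lost.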
--
Mike Russell
www.curvemeister.com/forum/

Mike Russell, Apr 20, 2007
6. ### Ron HunterGuest

223rem wrote:
> If an unfocused image = convolution of the focused image with a Gaussian
> kernel, then this is a linear thus invertible transform, and therefore
> the focused image could in principle be recovered by guessing the sigma
> of the blurring Gaussian!
>
> Right? Most likely not, but why?

One needs to know the parameters of the lack of focus, and much
processing is required for mediocre results, at least in my experience.

Ron Hunter, Apr 20, 2007
7. ### Ilya ZakharevichGuest

[A complimentary Cc of this posting was sent to HvdV], who wrote:

> If it were indeed Gaussian, and the image noise-free, then indeed
> you could do an inversion resulting in unlimited
> resolution. However, the blur function is band limited, varies from
> point to point in your image AND there is noise, always.

Another point is quantization (which could be considered as noise too,
of course). E.g., at 5sigma, one loses 18 bits of S/N; even if there
is no noise, one needs to quantize the result at about 28bits to get
decent results. ;-) [Here "at 5sigma" means the spatial frequency;
e.g., for a Gauss blur with radius r, this corresponds to wavenumber
5/r, or half-wavelength of r*pi/5. E.g., this is applicable to
maximal resolution details blurred with a gaussian of radius 1.6 pixels.]
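Ilya's 18-bit figure is easy to check: the Fourier transform of a Gaussian of radius r is exp(-(r*k)^2/2), so at wavenumber k = 5/r the attenuation is exp(-12.5):

```python
import math

# Attenuation of a Gaussian blur of radius r at wavenumber k = 5/r:
# |H(k)| = exp(-(r*k)**2 / 2) = exp(-12.5)
attenuation = math.exp(-12.5)
bits_lost = -math.log2(attenuation)

print(f"attenuation ~ {attenuation:.2e}")  # about 3.7e-06
print(f"bits lost   ~ {bits_lost:.1f}")    # about 18.0
```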

But indeed, the main "theoretical" reason is that diffraction is not
Gaussian; it COMPLETELY cuts off the high frequencies (see keywords
"Fourier optic" for math behind this; this is what is called "band
limited" above, but I wanted to emphasize it more).

And adding lens aberrations on top of diffraction can only worsen things.

Hope this helps,
Ilya

Ilya Zakharevich, Apr 20, 2007
8. ### bugbearGuest

223rem wrote:
> If an unfocused image = convolution of the focused image with a Gaussian
> kernel, then this is a linear thus invertible transform, and therefore
> the focused image could in principle be recovered by guessing the sigma
> of the blurring Gaussian!
>
> Right? Most likely not, but why?

Only if there are no depth of field issues.

BugBear

bugbear, Apr 20, 2007
9. ### HvdVGuest

Hi Ilya,
> Another point is quantization (which could be considered as noise too,
> of course). E.g., at 5sigma, one loses 18 bits of S/N; even if there
> is no noise, one needs to quantize the result at about 28bits to get
> decent results. ;-) [Here "at 5sigma" means the spatial frequency;
> e.g., for a Gauss blur with radius r, this corresponds to wavenumber
> 5/r, or half-wavelength of r*pi/5. E.g., this is applicable to
> maximal resolution details blurred with a gaussian of radius 1.6 pixels.]

Good point! -- Hans

HvdV, Apr 20, 2007
10. ### Robert HaarGuest

On 4/20/07 12:56 AM, 223rem wrote:

> If an unfocused image = convolution of the focused image with a Gaussian
> kernel, then this is a linear thus invertible transform, and therefore
> the focused image could in principle be recovered by guessing the sigma
> of the blurring Gaussian!
>
> Right? Most likely not, but why?

Digitization effects

Robert Haar, Apr 20, 2007
11. ### Don Stauffer in MinnesotaGuest

On Apr 19, 11:56 pm, 223rem wrote:
> If an unfocused image = convolution of the focused image with a Gaussian
> kernel, then this is a linear thus invertible transform, and therefore
> the focused image could in principle be recovered by guessing the sigma
> of the blurring Gaussian!
>
> Right? Most likely not, but why?

Depends on the object. Such deconvolution depends on knowing the
characteristics of the object. Astronomers can easily deconvolve
stars, for instance.

However, if the image is of irregular, ungeometric sources, it becomes
very hard.

What frequently happens is that if you turn up the "gain" on a
correlation too much, you guarantee that the results WILL look like
what you assumed they should. Sort of like, "if all you have is a
hammer, everything looks like a nail."

Don Stauffer in Minnesota, Apr 20, 2007
12. ### Roger N. Clark (change username to rnclark)Guest

223rem wrote:
> If an unfocused image = convolution of the focused image with a Gaussian
> kernel, then this is a linear thus invertible transform, and therefore
> the focused image could in principle be recovered by guessing the sigma
> of the blurring Gaussian!
>
> Right? Most likely not, but why?

So far I see a lot of misguided poorly focused responses. ;-)

1) There is vast research in this area, dating back at least
35 years (that is when I first encountered research articles
on the subject). The Hubble telescope before the optics fix
is an example of where such an application was used.

2) What you say does work, but as some have pointed out, noise
limits results. Because the signal of the subject is averaged
between adjacent pixels in the defocussed/blurred image,
to recover, you must estimate the signal that is spread out,
subtract it off of adjacent pixels and add it back in to
the pixel where the signal belongs. The problem is noise.
As you do the deconvolution, noise increases.
So you trade spatial resolution for noise. The other
problem is if you don't get the blur function exact,
then you have artifacts. This usually shows up in the form of
ringing.

3) The signal-to-noise (S/N) trade discussed above limits how far you
can push the reconstruction. In my experience, I feel I can get
effectively about a factor of 2 increase in spatial detail
using image reconstruction techniques with DSLR low ISO (high S/N)
images. Example:

Image Restoration Using Adaptive Richardson-Lucy Iteration
http://www.clarkvision.com/imagedetail/image-restoration1

At the end of the article, there are some references to research articles.
I think that current reconstruction is also limited by upsampling
artifacts, like "jaggies" on edges. If I had better upsampling,
I feel I could make even larger images. Note, even with 2x increase,
an 8 megapixel camera makes images similar to 32 megapixels.
I'm making beautiful 16x24 inch prints at 305 ppi that I think
are impressively sharp (you can "stick your nose" to the print)
from 8 mpixels.
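For readers who want to experiment, here is a bare-bones 1-D Richardson-Lucy loop (my own toy sketch, not Roger's adaptive variant: tiny symmetric PSF, noiseless data, naive boundary handling):

```python
import numpy as np

def richardson_lucy(observed, psf, iters=200):
    """Minimal 1-D Richardson-Lucy deconvolution."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iters):
        # Blur the current estimate, compare with the data, and
        # redistribute the ratio back through the (mirrored) PSF.
        predicted = np.convolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(predicted, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode='same')
    return estimate

# A single bright pixel blurred by a small symmetric PSF:
psf = np.array([0.25, 0.5, 0.25])
sharp = np.zeros(32); sharp[16] = 1.0
blurred = np.convolve(sharp, psf, mode='same')
restored = richardson_lucy(blurred, psf)   # peak climbs back toward 1.0
```

On noisy data the same loop amplifies noise as it iterates, which is exactly the spatial-resolution-for-noise trade Roger describes; practical implementations stop early or regularize.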

4) The idea that focus is different in different parts of the image
is no different than saying all images are out of focus therefore
bad. If you had a slightly defocused image, and applied a reconstruction
method using a blur model, that simply restores the image
to the state as if you took the image at better focus. Those
areas that are further out of focus will be changed to the slightly
better focus, but no different than taking the image at that focus
in the first place (all ignoring the noise from reconstruction).
Then you can also apply different blur models to different parts of the
image, just as one might select a portion of an image and apply a
filter to that selection alone.

Roger

Roger N. Clark (change username to rnclark), Apr 20, 2007
13. ### jpcGuest

On Fri, 20 Apr 2007 00:56:19 -0400, 223rem wrote:

>If an unfocused image = convolution of the focused image with a Gaussian
>kernel, then this is a linear thus invertible transform, and therefore
>the focused image could in principle be recovered by guessing the sigma
>of the blurring Gaussian!
>
>Right? Most likely not, but why?

Recent research in wavelet noise reduction claims to do this.

Google "HPL-2006-103" to bring up the paper.

Perhaps some of the math experts in this thread might want to look at
the paper and comment. If this research is as out-of-the-box and
elegant as I think it is, we can expect some very interesting
small-sensor cameras from HP.

jpc

jpc, Apr 20, 2007
14. ### RichGuest

On Apr 20, 12:56 am, 223rem wrote:
> If an unfocused image = convolution of the focused image with a Gaussian
> kernel, then this is a linear thus invertible transform, and therefore
> the focused image could in principle be recovered by guessing the sigma
> of the blurring Gaussian!
>
> Right? Most likely not, but why?

If there was a good way to fix this using software, the Hubble
Telescope wouldn't have needed a $700M repair mission.

Rich, Apr 20, 2007
15. ### JimAtQuarktetGuest

On Apr 20, 12:42 pm, Rich wrote:
> On Apr 20, 12:56 am, 223rem wrote:
>
> > If an unfocused image = convolution of the focused image with a Gaussian
> > kernel, then this is a linear thus invertible transform, and therefore
> > the focused image could in principle be recovered by guessing the sigma
> > of the blurring Gaussian!

>
> > Right? Most likely not, but why?

>
> If there was a good way to fix this using software, the Hubble
> Telescope wouldn't have needed a $700M repair mission.

If the Hubble sensor was essentially noiseless, and had a very large
bit depth, a deconvolution may have been able to prevent the $700M
repair mission. If you look at the Hubble pics on http://quarktet.com/,
the clean-up is a decided improvement compared to the original.
However, there is little comparison between these and current
Hubble pictures. Deconvolution can only correct for imperfections in
the system, bad focus, atmosphere effects etc. There is great value

Jim at Quarktet

JimAtQuarktet, Apr 20, 2007
16. ### Dave MartindaleGuest

223rem writes:
>If an unfocused image = convolution of the focused image with a Gaussian
>kernel, then this is a linear thus invertible transform, and therefore
>the focused image could in principle be recovered by guessing the sigma
>of the blurring Gaussian!

First, defocus blur is convolution with a disc, or a rounded hexagon, or
a rounded pentagon, or something else depending on the shape of the
aperture in the lens. It is most definitely *not* a Gaussian, except
for the small portion of the blur that's due to diffraction.

Second, all the theory about invertible transforms assumes that the
intermediate result (the blurred image) has infinite resolution in
intensity and no added noise. Neither of these is true of a camera
image. Where the blur reduces the amplitude of a particular frequency
by 1/A, the deblur must boost its amplitude by a factor A, amplifying
photon noise and quantization error and dark current noise by the same
factor. And if the blur transform does not pass a particular frequency
at all, the information is lost - the blur is not invertible, not even
theoretically.
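Dave's last point is the decisive one, and easy to see numerically: the transfer function of a top-hat blur (a crude 1-D stand-in for a defocus disc) is a sinc, which actually crosses zero, while a Gaussian's transfer function never does. A sketch, with widths of my own choosing:

```python
import numpy as np

n = 512

# 1-D stand-in for a defocus disc: a top-hat (boxcar) of width 9.
box = np.zeros(n)
box[:9] = 1.0 / 9.0
otf_box = np.abs(np.fft.rfft(box))

# A Gaussian of roughly comparable width, wrapped circularly.
i = np.arange(n)
d = np.minimum(i, n - i)                 # circular distance from pixel 0
gauss = np.exp(-0.5 * (d / 1.5) ** 2)
gauss /= gauss.sum()
otf_gauss = np.abs(np.fft.rfft(gauss))

# The boxcar OTF dips essentially to zero -- those frequencies are gone
# for good.  The Gaussian OTF gets small but never vanishes.
print(otf_box.min(), otf_gauss.min())
```

Frequencies at the sinc's zero crossings are multiplied by (nearly) zero at capture time, so no amount of division can bring them back.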

Dave

Dave Martindale, Apr 20, 2007
17. ### Dave MartindaleGuest

JimAtQuarktet writes:

>> If there was a good way to fix this using software, the Hubble
>> Telescope wouldn't have needed a $700M repair mission.

>If the Hubble sensor was essentially noiseless, and had a very large
>bit depth, a deconvolution may have been able to prevent the $700M
>repair mission.

Also if the cameras had infinite intensity resolution, which no real
camera does.

Hubble's mirror is not very large by ground telescope standards, even at
the time it was launched, but it was supposed to be able to see very dim
objects because the lack of atmosphere and (supposedly) very accurate
and smooth optics would concentrate most of the light from the star into a
0.1 arcsecond span of pixels at the prime focus. But in fact the
focused star image had a point spread function something like 1 arc
second in size, so the same amount of light was spread over 100 times as
many pixels. The light from faint stars could no longer be reliably
detected above the noise.

So Hubble got used to image brighter things for a while, things that
could be captured and sharpened by image processing. But it wasn't
until the optics were fixed that it could be used for faint-object
work.

Dave

Dave Martindale, Apr 20, 2007
18. ### Alexander ArnakisGuest

On Fri, 20 Apr 2007 00:56:19 -0400, 223rem wrote:

>If an unfocused image = convolution of the focused image with a Gaussian
>kernel, then this is a linear thus invertible transform, and therefore
>the focused image could in principle be recovered by guessing the sigma
>of the blurring Gaussian!
>
>Right? Most likely not, but why?

This isn't directly in response to what you asked, but I've had good
luck "sharpening" somewhat out-of-focus digital images by resampling
at a *lower* resolution. With fewer pixels, the transitional zones of
the blur are largely eliminated. This is counterintuitive, but it seems to work
(sometimes).
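There is a simple rationale for Alexander's trick: averaging 2x2 blocks shrinks a blur transition of N pixels to roughly N/2 pixels at the new scale, so edges look crisper. A minimal sketch (plain block averaging of my own devising; real resamplers use better filters):

```python
import numpy as np

def downsample2x(img):
    """Average each 2x2 block of a grayscale image (crude resampling)."""
    h, w = img.shape
    img = img[:h // 2 * 2, :w // 2 * 2]          # trim odd rows/columns
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A soft edge smeared across four columns becomes a two-column transition:
row = np.array([0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0])
img = np.tile(row, (4, 1))
small = downsample2x(img)
```

Of course, nothing is actually recovered; detail is simply thrown away until the remaining pixels look sharp relative to each other.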

Alexander Arnakis, Apr 20, 2007
19. ### RichGuest

On Apr 20, 3:46 pm, Dave Martindale wrote:
> JimAtQuarktet writes:
> >> If there was a good way to fix this using software, the Hubble
> >> Telescope wouldn't have needed a $700M repair mission.

> >If the Hubble sensor was essentially noiseless, and had a very large
> >bit depth, a deconvolution may have been able to prevent the $700M
> >repair mission.

>
> Also if the cameras had infinite intensity resolution, which no real
> camera does.
>
> Hubble's mirror is not very large by ground telescope standards, even at
> the time it was launched, but it was supposed to be able to see very dim
> objects because the lack of atmosphere and (supposedly) very accurate
> and smooth optics would concentrate most of the light from the star into a
> 0.1 arcsecond span of pixels at the prime focus. But in fact the
> focused star image had a point spread function something like 1 arc
> second in size, so the same amount of light was spread over 100 times as
> many pixels. The light from faint stars could no longer be reliably
> detected above the noise.
>
> So Hubble got used to image brighter things for a while, things that
> could be captured and sharpened by image processing. But it wasn't
> until the optics were fixed that it could be used for faint-object
> work.
>
> Dave

In fact the optics were superbly smooth, about 1/40th wave in green
light. But the perfect Kodak mirror stayed on the ground, while the
improperly ground one (wrong in shape, not in surface quality) was in
the telescope, horribly spherically aberrated. And Hughes Danbury
Optical Systems (who made the error) never really paid back what it
cost to fix it with the compensating optics called COSTAR.

Rich, Apr 21, 2007
20. ### John SheehyGuest

223rem wrote:

> If an unfocused image = convolution of the focused image with a Gaussian
> kernel, then this is a linear thus invertible transform, and therefore
> the focused image could in principle be recovered by guessing the sigma
> of the blurring Gaussian!
>
> Right? Most likely not, but why?

One simple thought experiment makes it clear that you can never do it with
100% accuracy.

Say that you have an image (or a crop) with a certain bit depth and a
certain number of pixels. An unfocused image obeys stricter rules
about how much contrast there can be between neighboring pixels,
limiting the possible number of combinations. There are, therefore,
more possible sharp images than possible unsharp images, so two
different sharp images can become the same through unfocusing, and
deconvolution cannot tell them apart.
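John's counting argument in executable form: two different sharp signals that blur and quantize to identical data, after which no algorithm, however clever, can tell which one was photographed. (Toy example; the averaging kernel and pixel values are mine.)

```python
import numpy as np

def blur_and_quantize(x):
    # Average neighboring pixels, then round back to integer levels.
    return np.round(np.convolve(x, [0.5, 0.5], mode='valid'))

a = np.array([10.0, 11.0, 10.0, 11.0])   # two *different* sharp signals...
b = np.array([11.0, 10.0, 11.0, 10.0])

# ...that produce identical blurred, quantized images:
print(blur_and_quantize(a))
print(blur_and_quantize(b))
```

Since the forward map sends distinct inputs to the same output, it has no inverse; this is the pigeonhole principle, independent of any noise argument.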

--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><

John Sheehy, Apr 21, 2007