Re: Possible to extract high resolution b/w from a raw file?

Discussion in 'Digital Photography' started by David Dyer-Bennet, May 10, 2011.

  1. On Tuesday, May 10, 2011 9:40:11 AM UTC-5, Floyd L. Davidson wrote:
    > Wolfgang Weisselberg <> wrote:
    > >bob <> wrote:
    > >
    > >> since one color pixel is made up
    > >> of 4 b/w pixels with color filters?

    > >
    > >It is not.

    >
    > It is. And even your description below says that it is.


    Isn't!

    Specifically, my D700 has about 12 million active photosites, and produces
    pictures with about 12 million pixels of image. Saying a color pixel
    is "made up of" 4 B&W pixels with filters implies that the number of
    color pixels will be 1/4 the number of actual active photosites, and
    that's not true of any Bayer filter camera.

    It's accurate to say that each pixel includes information from at least
    4 photosites (I believe it's generally much more than 4, though), but
    the exact details are proprietary.

    > >It's made up of the pixel itself and then (with some
    > >intelligent processing) of the values of its neighbours with
    > >different colours.

    >
    > Actually, the minimum number of sensor locations that
    > could be used per pixel is 4, and in fact what actually
    > is used will be a matrix of at least 9 sensor locations
    > (and maybe more than that). They *all* contribute to
    > the RGB values for a pixel produced by interpolation.


    There we go, that's the "generally more than 4" that I mentioned.
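
    To make the counting argument concrete, here's a back-of-the-envelope
    sketch (the 4256x2832 figure is just a D700-class output size used as
    an example, not a claim about any particular camera's pipeline):

        photosites = 4256 * 2832              # ~12 million photosites
        exclusive_4_to_1 = photosites // 4    # ~3 MP if every RGB pixel "owned" 4 photosites
        demosaiced = photosites               # ~12 MP: one output pixel per photosite,
                                              # each drawing on a neighbourhood of photosites
        print(exclusive_4_to_1, demosaiced)   # 3013248 12052992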

    > It is grossly inaccurate to consider each sensor
    > location as directly related to a given pixel location
    > of the image. It just doesn't work that way.


    Okay, maybe we're arguing about nits rather than substantive
    differences.

    I strongly suspect that the majority of the luminance information in
    at least half the pixels in my images comes from one photosite. So
    I think it's not badly inaccurate to suggest that photosites map
    very roughly to luminance of pixels (though color info has a much
    wider source).

    But it sounds like we have the same understanding of roughly what's
    going on down there, and just pick different ways to describe it.
     
    David Dyer-Bennet, May 10, 2011
    #1

  2. Floyd L. Davidson <> wrote:
    > David Dyer-Bennet <> wrote:
    >>On Tuesday, May 10, 2011 9:40:11 AM UTC-5, Floyd L. Davidson wrote:
    >>> Wolfgang Weisselberg <> wrote:
    >>> >bob <> wrote:


    >>Specifically, my D700 has about 12 million active photosites, and produces
    >>pictures with about 12 million pixels of image.


    > Which means exactly one thing: The total numbers are
    > nearly a match. But even then, if you look more
    > carefully you'll find that there are several thousands
    > more sensor locations used than there are pixels in the
    > final image.


    Tell me, what is "several thousand" divided by "12 million"?

    And tell me what the function of these pixels is. And how
    they can be added to the final image.

    >>I strongly suspect that the majority of the luminance information in
    >>at least half the pixels in my images comes from one photosite. So


    > In any given pixel "at least half" the luminance
    > information comes from the green sensor locations.


    Plural? Even for the green sensor locations themselves?
    And how come you quote completely outside the context of
    what you quote? Can't you read or is that your style?

    > That
    > is certain to be at least two sensor locations (and is
    > almost certain to be many more than that).


    For red and blue, maybe. What about the other half?

    > There is *never* a case where either the luminance or
    > the color information comes from a single sensor
    > location,


    Please provide the algorithm used to provide luminance
    information on a green sensor location.

    > Think it out a little better. Read to the end of an
    > article and understand the entire statement before
    > starting to mumble what you think at the beginning and
    > progressing as you learn (which is clearly how you
    > formulated this article).


    There's a reason you're no success as a teacher.

    -Wolfgang
     
    Wolfgang Weisselberg, May 16, 2011
    #2

  3. Floyd L. Davidson <> wrote:
    > Wolfgang Weisselberg <> wrote:
    >>Floyd L. Davidson <> wrote:
    >>> David Dyer-Bennet <> wrote:
    >>>>On Tuesday, May 10, 2011 9:40:11 AM UTC-5, Floyd L. Davidson wrote:
    >>>>> Wolfgang Weisselberg <> wrote:
    >>>>> >bob <> wrote:


    >>>>Specifically, my D700 has about 12 million active photosites, and produces
    >>>>pictures with about 12 million pixels of image.


    >>> Which means exactly one thing: The total numbers are
    >>> nearly a match. But even then, if you look more
    >>> carefully you'll find that there are several thousands
    >>> more sensor locations used than there are pixels in the
    >>> final image.


    >>Tell me, what is "several thousand" divided by "12 million"?


    > It's not 1.0, which is significant.


    10,000/12,000,000 ~= 0.000833

    It's very surely not 1.0, you got that right.

    > Lets take an example. The Nikon D3S is listed by Nikon
    > as having 12.87 million "total pixels". It is also
    > listed as having 12.1 million "effective pixels". The
    > JPEG images produced by the camera in fact are
    > 4256x2832, or 12,052,992 pixels. But I regularly
    > produce TIFF or JPEG images from the same RAW file,
    > except they are 4284x2844, or 12,183,696 pixel images.


    So your RAW converter's JPEG engine displays borders of roughly 14 and 6
    pixels that the camera JPEG engine doesn't display. WOW.
    You must be able to fit whole groups of people into those 6 pixels.
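
    For the record, the arithmetic (a throwaway sketch, nothing more; the
    dimensions are the ones quoted above):

        nikon_w, nikon_h = 4256, 2832         # camera / Nikon converter output
        other_w, other_h = 4284, 2844         # third-party render of the same RAW
        print(nikon_w * nikon_h)              # 12,052,992 pixels
        print(other_w * other_h)              # 12,183,696 pixels
        print(other_w - nikon_w, other_h - nikon_h)   # 28 extra columns, 12 extra rows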

    > So it seems that in round numbers, there are 12.87
    > million sensor locations, but Nikon produces 12.05 million
    > pixel images and another interpolator produces 12.18 million
    > pixel images. That is, Nikon's images have 820,000
    > fewer pixels than the number of sensor locations, while
    > the other interpolator's images have 690,000 fewer
    > pixels than there are sensors (and 130,000 more than the
    > images Nikon produces).


    Throwing round numbers without seeming to understand them.
    Answer me:

    >>And tell me what the function of these pixels is. And how
    >>they can be added to the final image.


    > None of that is necessary for the point that was made.


    Of course it is. These pixels are black, and are used for noise
    reduction (black levels, banding, etc.).
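
    A minimal sketch of that use (assuming a plain per-frame black-level
    subtraction from masked "optical black" sensels; the function name and
    the 16-column figure are made up for illustration, and real firmware
    works per channel and also uses the masked rows/columns against banding):

        import numpy as np

        def subtract_black_level(raw, masked_cols=16):
            # Estimate the black level from shielded columns that never see
            # light, then subtract it from the active area.
            optical_black = raw[:, :masked_cols]
            black_level = int(np.median(optical_black))
            active = raw[:, masked_cols:].astype(np.int32)
            return np.clip(active - black_level, 0, None)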

    > The fact is, no matter how you want to fidget, there are
    > more sensor locations (used to generate the final image)
    > than there are image pixels. And that is because sensor
    > locations are not a one to one direct relationship with
    > pixel locations.


    Clearly you don't understand that every pixel corresponds to
    exactly one sensor location, even though the data for that
    pixel comes also from surrounding pixels.

    (This is not true for some hexagonally ordered sensels.)

    The camera JPEG engine simply crops the resulting image by a
    few pixels.

    > Until you understand that, the rest of this is not going
    > to help you.


    Ah, yes, the reality and the claim don't match very well.
    Which am I gonna believe?

    >>>>I strongly suspect that the majority of the luminance information in
    >>>>at least half the pixels in my images comes from one photosite. So


    >>> In any given pixel "at least half" the luminance
    >>> information comes from the green sensor locations.


    >>Plural? Even for the green sensor locations themselves?
    >>And how comes you quote completely outside the context of
    >>what you quote? Can't you read or is that your style?


    > Continue to obfuscate if you like,


    So your answer is that the green sensor locations represent
    themselves, but you don't want to say that.

    > but it won't change the
    > fact that there is no direct one to one relationship...


    Prove it. dcraw is open source ...

    >>Please provide the algorithm used to provide luminance
    >>information on a green sensor location.


    > At the simplest level, G = (g1 + g3 + g5 + g7 + g9) / 5


    So you do intentional blurring?
    What's the reason for that?

    > That is, within a 3x3 matrix where the center sensor has a green
    > filter, of the 9 sensors there will be 5 that are green and they
    > are averaged to get the value of green for an RGB pixel with the
    > same coordinates.


    Nikon has a median filter for its RAW, true.
    For other RAW converters ... prove it.

    > If the 3x3 matrix is centered on either a red or blue filtered
    > sensor, there are only 4 green sensors within the matrix, and
    > for that the formula is G = (g2 + g4 + g6 + g8) / 4


    Only for a rather bad demosaicer. Better ones respect
    edges.


    > The red and blue RGB values for a given pixel coordinate might
    > be the average of 1, 2, or 4 sensor locations. You can figure
    > out the possible combinations on your own...


    So you say there can be a 1:1 relationship at least for red/blue pixels.
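
    Spelled out in code, the averaging described above amounts to this
    (a bare-bones sketch assuming an RGGB layout, not any real converter's
    implementation; it produces one RGB pixel per photosite, with 5 or 4
    greens and 1, 2 or 4 reds/blues contributing to each):

        import numpy as np

        def box3(a):
            # Sum over each 3x3 neighbourhood (edges zero-padded).
            p = np.pad(a, 1)
            return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                       for i in range(3) for j in range(3))

        def demosaic_naive(mosaic):
            # Assumes an RGGB mosaic. Each output channel at each coordinate
            # is the plain average of the like-coloured sensels within the
            # 3x3 neighbourhood.
            m = np.asarray(mosaic, dtype=np.float64)
            h, w = m.shape
            ys, xs = np.mgrid[0:h, 0:w]
            r_mask = ((ys % 2 == 0) & (xs % 2 == 0)).astype(np.float64)
            b_mask = ((ys % 2 == 1) & (xs % 2 == 1)).astype(np.float64)
            g_mask = 1.0 - r_mask - b_mask
            out = np.empty((h, w, 3))
            for ch, mask in enumerate((r_mask, g_mask, b_mask)):
                out[..., ch] = box3(m * mask) / box3(mask)
            return out            # same pixel dimensions as the mosaic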


    >>There's a reason you're no success as a teacher.


    > How would you know?


    Observing you.

    -Wolfgang
     
    Wolfgang Weisselberg, May 17, 2011
    #3
  4. Floyd L. Davidson <> wrote:

    > And you still haven't understood what the significance is?


    > Wow, you still don't understand what is being discussed!


    > I can't really do much about you not understanding them.
    > Most of your problem seems to be "willful ignorance", in
    > that you make an effort not to learn.


    Ah, dear Floyd, you are always so passive-aggressive when
    someone disagrees with you.

    > None of that addresses the point that was made. See
    > above, which clearly shows that is not the only use, nor
    > the significance.


    Your point is moot. I explained why in many words. And you make
    every effort not to understand me.

    >>Clearly you don't understand that every pixel corresponds to
    >>exactly one sensor location, even though the data for that
    >>pixel comes also from surrounding pixels.


    > Sigh. You just contradicted yourself.


    No, I didn't. You are thinking very one-dimensionally.

    Read:

    >>(This is not true for some hexagonally ordered sensels.)


    Thanks.

    >>>>> In any given pixel "at least half" the luminance
    >>>>> information comes from the green sensor locations.


    >>>>Plural? Even for the green sensor locations themselves?
    >>>>And how comes you quote completely outside the context of
    >>>>what you quote? Can't you read or is that your style?


    >>> Continue to obfuscate if you like,


    >>So your answer is that the green sensor locations represent
    >>themselves, but you don't want to say that.


    > Because it isn't true.


    >>> but it won't change the
    >>> fact that there is no direct one to one relationship...


    >>Prove it. dcraw is open source ...


    > I can even quote you: "data for that pixel comes also
    > from surrounding pixels."


    Yes? Does that mean that, for example, there is no spatial
    1:1 correspondence of pixel and sensel?

    Does that mean that a sensel of colour X does not represent
    itself in the pixel with the colour X?

    >>>>Please provide the algorithm used to provide luminance
    >>>>information on a green sensor location.


    >>> At the simplest level, G = (g1 + g3 + g5 + g7 + g9) / 5


    >>So you do intentional blurring?
    >>What's the reason for that?


    > You asked, and got the answer. Didn't understand it, eh?


    I understand it perfectly, thank you very much.

    You don't seem to understand that you are blurring the image
    by your method. Needlessly, too. But then you don't really
    understand what you are doing.

    > That is the essence of the way it is done.


    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.116.8477&rep=rep1&type=pdf
    http://www.accidentalmark.com/research/papers/Hirakawa03MNdemosaic_ICIP.pdf
    seem to disagree with you. A lot.
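
    For contrast, here is the kind of edge-directed estimate such adaptive
    demosaicers use, a minimal sketch in the spirit of Hamilton-Adams (it
    is not the algorithm of either paper above, it assumes the site is at
    least two sensels from the border, and the tie case is simplified):

        import numpy as np

        def green_at_rb(m, y, x):
            # Gradient-directed green estimate at a red or blue photosite:
            # interpolate along the direction with the smaller gradient
            # instead of blurring all four green neighbours together.
            m = m.astype(np.float64)
            dh = abs(m[y, x-1] - m[y, x+1]) + abs(2*m[y, x] - m[y, x-2] - m[y, x+2])
            dv = abs(m[y-1, x] - m[y+1, x]) + abs(2*m[y, x] - m[y-2, x] - m[y+2, x])
            if dh < dv:    # edge runs horizontally: interpolate along the row
                return (m[y, x-1] + m[y, x+1]) / 2 + (2*m[y, x] - m[y, x-2] - m[y, x+2]) / 4
            if dv < dh:    # edge runs vertically: interpolate along the column
                return (m[y-1, x] + m[y+1, x]) / 2 + (2*m[y, x] - m[y-2, x] - m[y+2, x]) / 4
            return (m[y, x-1] + m[y, x+1] + m[y-1, x] + m[y+1, x]) / 4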

    >>> That is, within a 3x3 matrix where the center sensor has a green
    >>> filter, of the 9 sensors there will be 5 that are green and they
    >>> are averaged to get the value of green for an RGB pixel with the
    >>> same coordinates.


    >>Nikon has a median filter for its RAW, true.
    >>For other RAW converters ... prove it.


    > A "median filter"?


    Yes.

    > For any Bayer encoded data set, that is true.


    A data set is independent of its decoding. It can even be
    decoded in several different ways.

    > It has
    > nothing to do with Nikon.


    Wrong.
    | Nikon apparently applies a mathematical median blurring
    | filter to their images (in addition to the low-pass filter in
    | front of the sensor) after the in-camera dark frame
    | subtraction for built in noise-reduction. This occurs even
    | before the raw image is written to the file. To work around
    | this and get a true raw file, it is necessary to physically
    | turn the camera off during the in-camera dark frame
    | acquisition.
    http://www.astropix.com/HTML/I_ASTROP/NIK_CAN.HTM
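
    (For reference, a "median filter" replaces each value with the median
    of its neighbourhood. A plain 3x3 version, purely as an illustration
    of the term and not a claim about Nikon's actual processing:)

        import numpy as np

        def median3(img):
            # 3x3 median filter: each output value is the median of the
            # nine values in its neighbourhood (edges replicated).
            p = np.pad(img, 1, mode="edge")
            stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                              for i in range(3) for j in range(3)])
            return np.median(stack, axis=0)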

    > What do you mean "prove it"? You asked, and that is the
    > answer.


    Ah, stamp your foot. Harder. I still don't believe you.

    > If you don't know what Bayer encoding is, look
    > it up because I'm not going to explain it to you.


    I know what it is.
    You're still wrong.

    >>> If the 3x3 matrix is centered on either a red or blue filtered
    >>> sensor, there are only 4 green sensors within the matrix, and
    >>> for that the formula is G = (g2 + g4 + g6 + g8) / 4


    >>Only for a rather bad demosaicer. Better ones respect
    >>edges.


    > We aren't talking about "better ones".


    YOU aren't.

    > You asked for an
    > algorithm showing how it is that multiple sensor
    > locations are involved in the generation of per pixel
    > data (granted you worded it in a way that indicates you
    > don't know what it is, but...), and I've provided you
    > with the bare bones simplest algorithm that works.


    Wrong. I asked for "*the* algorithm used to provide luminance
    information on a green sensor location".
    You offered me *an* algorithm, obviously one you made up, showing
    that you have not researched the problem. It has very
    obvious problems in that it acts as a blur filter.

    > It shows exactly the point you questioned.


    Judge: "Can you prove the defendant guilty?"
    You: "I saw him walking through the air to the top of the sky
    scraper and slay the man who's corpse the police found."

    Sure, *your* algorithm shows exactly what you claim.
    However, your algorithm is also terribly broken.

    > It is also very commonly used too!


    ROFL. That's a good one. More advanced algorithms, then, surely
    use hundreds of green sensels to calculate the green channel of
    one pixel.

    >>> The red and blue RGB values for a given pixel coordinate might
    >>> be the average of 1, 2, or 4 sensor locations. You can figure
    >>> out the possible combinations on your own...


    >>So you say there can be a 1:1 relationship at least for red/blue pixels.


    > Pay attention.


    Sure ...

    > If a 3x3 matrix of sensor locations is used to generate
    > a pixel's RGB values, either the red or the blue value
    > (but not both for the same pixel) might actually come
    > from only a single sensor location. But that is just a
    > part of the pixel's value.


    So you say there can be a 1:1 relationship at least for red/blue pixels.

    > The green value and the other non-green value for that
    > pixel come from multiple sensor locations, so there is
    > *never* a case where one pixel's values are generated
    > from the data of a single sensor location.


    Pay attention.

    I never said it did.

    >>>>There's a reason you're no success as a teacher.


    >>> How would you know?


    >>Observing you.


    > Your lack of competence has nothing to do with my success.


    See? You do it again. You're a terrible teacher. No manners ...

    -Wolfgang
     
    Wolfgang Weisselberg, May 21, 2011
    #4
