Re: I Miss my Viewfinder !

Discussion in 'Digital Photography' started by Ray Fischer, Jun 12, 2011.

  1. Ray Fischer

    Ray Fischer Guest

    Elliott Roper <> wrote:
    > Eric Stevens
    >> Mxsmanic <>
    >> >Wolfgang Weisselberg writes:


    >> >> Random noise is harder to summarize (compress accurately) than
    >> >> meaningful words.
    >> >
    >> >Random data cannot be compressed at all.

    >>
    >> That's not correct. You have to go back to what is meant by random.

    >
    >Eric, it pains me to say it, but Msmanic, for once, is right.[1]
    >If it is compressible in the long run, then it ain't random.


    You're making a claim about the content of random data. By
    definition, random data cannot be predicted. Your claim fails.

    --
    Ray Fischer | Mendocracy (n.) government by lying
    | The new GOP ideal
     
    Ray Fischer, Jun 12, 2011
    #1

  2. Ray Fischer

    Ray Fischer Guest

    Mxsmanic <> wrote:
    >Ray Fischer writes:
    >> Elliott Roper <> wrote:
    >> > Eric Stevens
    >> >> Mxsmanic <>
    >> >> >Wolfgang Weisselberg writes:

    >>
    >> >> >> Random noise is harder to summarize (compress accurately) than
    >> >> >> meaningful words.
    >> >> >
    >> >> >Random data cannot be compressed at all.
    >> >>
    >> >> That's not correct. You have to go back to what is meant by random.
    >> >
    >> >Eric, it pains me to say it, but Msmanic, for once, is right.[1]
    >> >If it is compressible in the long run, then it ain't random.

    >>
    >> You're making a claim about the content of random data. By
    >> definition, random data cannot be predicted. Your claim fails.

    >
    >You're mistaken. Random data cannot be compressed BECAUSE it cannot be
    >predicted.


    Non sequitur.

    >Compression is just a form of encoding information. A typical compression


    I'm not interested in more of your uninformed idiocy.

    >Since random messages are all equally probable, no compression scheme can
    >shorten them.


    A string of 1000 zero bits is just as probable as any other string of
    bits. 1000 zero bits is highly compressible. Your claim is crap.

    --
    Ray Fischer | Mendocracy (n.) government by lying
    | The new GOP ideal
     
    Ray Fischer, Jun 12, 2011
    #2
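    Ray's counterexample is easy to check with a stock compressor. A minimal
    sketch in Python, assuming only the standard-library zlib and os modules
    (exact byte counts vary with zlib version and settings):

        # A specific low-entropy string (1000 zero bits = 125 zero bytes)
        # shrinks dramatically under zlib, while typical random bytes of
        # the same length come out slightly LARGER, since zlib adds
        # framing overhead it cannot win back.
        import os
        import zlib

        zeros = b"\x00" * 125        # 1000 zero bits
        noise = os.urandom(125)      # 1000 random bits

        print(len(zlib.compress(zeros)))   # typically around a dozen bytes
        print(len(zlib.compress(noise)))   # typically ~130 bytes, i.e. longer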

  3. Ray Fischer

    Ray Fischer Guest

    Elliott Roper <> wrote:
    > Ray Fischer
    >> Elliott Roper <> wrote:
    >> > Eric Stevens
    >> >> Mxsmanic <>
    >> >> >Wolfgang Weisselberg writes:

    >>
    >> >> >> Random noise is harder to summarize (compress accurately) than
    >> >> >> meaningful words.
    >> >> >
    >> >> >Random data cannot be compressed at all.
    >> >>
    >> >> That's not correct. You have to go back to what is meant by random.
    >> >
    >> >Eric, it pains me to say it, but Msmanic, for once, is right.[1]
    >> >If it is compressible in the long run, then it ain't random.

    >>
    >> You're making a claim about the content of random data. By
    >> definition, random data cannot be predicted. Your claim fails.

    >
    >Ray, when you are in a hole, stop digging.


    Take your own advice.

    >Please go away and learn some maths.


    Please go away and stop being a clueless asshole.

    --
    Ray Fischer | Mendocracy (n.) government by lying
    | The new GOP ideal
     
    Ray Fischer, Jun 12, 2011
    #3
  4. Whisky-dave

    Whisky-dave Guest

    On Jun 12, 8:57 pm, Mxsmanic <> wrote:
    > Ray Fischer writes:
    > > Elliott Roper  <> wrote:
    > > > Eric Stevens
    > > >>  Mxsmanic <>
    > > >> >Wolfgang Weisselberg writes:

    >
    > > >> >> Random noise is harder to summarize (compress accurately) than
    > > >> >> meaningful words.

    >
    > > >> >Random data cannot be compressed at all.

    >
    > > >> That's not correct. You have to go back to what is meant by random.

    >
    > > >Eric, it pains me to say it, but Msmanic, for once, is right.[1]
    > > >If it is compressible in the long run, then it ain't random.

    >
    > > You're making a claim about the content of random data.  By
    > > definition, random data cannot be predicted.  Your claim fails.

    >
    > You're mistaken. Random data cannot be compressed BECAUSE it cannot be
    > predicted.


    Complete rubbish.
    I can't predict what the next best-selling book will be, but it will be
    compressible.


    >
    > Compression is just a form of encoding information. A typical compression
    > algorithm accepts input messages of a fixed length,


    No, it can be variable length.

    > and outputs messages of a
    > variable length, and encodes messages such that the most probable messages in
    > the input stream are encoded with the smallest number of bits in the output
    > stream. However, whenever there are output messages that are shorter than the
    > corresponding input messages, there must also be output messages that are
    > longer than the corresponding input messages.


    That isn't compression then, is it?

    > This in turn means that, unless
    > the input messages are not all equally probable, no actual compression can
    > occur in the output, since the average length of all output messages will be
    > the same as the length of the input messages.


    What has probability got to do with it?

    >
    > Since random messages are all equally probable,

    How can they be if random?



    >
    > Thus, random data is incompressible,


    Yes, it is compressible.

    >and one test for randomness is to run
    > data through different compression algorithms and see if it gets any smaller.
    > That's not a perfect test, though, since there may be some undetected pattern
    > in the input that makes it less than random and would allow compression with
    > the right algorithm.


    You're talking crap, over and over again.
     
    Whisky-dave, Jun 13, 2011
    #4
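    Mxsmanic's closing idea in the quoted post, compression as an imperfect
    randomness test, is at least easy to try. A sketch assuming the
    standard-library zlib, bz2 and lzma modules:

        # Run the same data through several compressors and see whether
        # any of them makes it smaller: repetitive text shrinks under all
        # three, while bytes from os.urandom essentially never do.
        import bz2
        import lzma
        import os
        import zlib

        text = b"the quick brown fox jumps over the lazy dog " * 200
        noise = os.urandom(len(text))

        for name, fn in [("zlib", zlib.compress),
                         ("bz2", bz2.compress),
                         ("lzma", lzma.compress)]:
            print(name, len(fn(text)), len(fn(noise)))

    As the post itself concedes, a negative result proves little: the data
    may have structure that none of the tried algorithms models.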
  5. Whisky-dave

    Whisky-dave Guest

    On Jun 13, 2:10 pm, Mxsmanic <> wrote:
    > Whisky-dave writes:
    > > Complete rubbish.
    > > I can't predict what the next best-selling book will be, but it will be
    > > compressible.

    >
    > Books do not contain random data.


    Yes they do, unless you can predict every single word, in order, of
    the next best seller. Can you?

    Why is it that you think random data can't be compressed?


    >
    > > No, it can be variable length.

    >
    > The set of possible input messages is always finite. The lengths of the
    > messages can be variable, but their number must be finite. A compression
    > algorithm produces a one-to-one reversible encoding of this set of messages
    > into another set of equal, finite size.


    No idea where you get that from, but is your arse sore?


    >
    > > What has probability got to do with it.

    >
    > Everything. If all input messages are equally probable, no compression is
    > possible. That's why random data cannot be compressed.


    More rubbish.
    But give an example rather than random noise.


    >
    > > How can they be if random?

    >
    > By definition, if they are random, they are all equally probable.


    NO, random heads-and-tails flips aren't all equally probable.
    The chance of getting 6 heads in a row is lower than the chance of
    not getting 6 heads in a row.

    >
    > > Yes, it is compressible.

    >
    > No, it is not, for reasons I have explained again and again.
     
    Whisky-dave, Jun 13, 2011
    #5
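    The two posters are arguing past each other here: Mxsmanic is talking
    about specific sequences, Whisky-dave about aggregate events. A small
    enumeration (plain Python, fair coin assumed) separates the two claims:

        # Every specific 6-flip sequence has the same probability (1/64),
        # yet the EVENT "six heads in a row" is far less likely than "not
        # six heads in a row", because it contains only 1 of the 64
        # equally likely sequences.
        from itertools import product

        seqs = list(product("HT", repeat=6))
        print(len(seqs))                                      # 64
        print(sum(1 for s in seqs if s == tuple("HHHHHH")))   # 1  -> P = 1/64
        print(sum(1 for s in seqs if s != tuple("HHHHHH")))   # 63 -> P = 63/64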
  6. Whisky-dave

    Whisky-dave Guest

    On Jun 13, 11:17 pm, Mxsmanic <> wrote:
    > Savageduck writes:
    > > > True, but the probability of getting any specific sequence of heads and tails
    > > > is equal to the probability of getting any other specific sequence. That is,
    > > > 101101 is just as improbable as 000000.

    >
    > > Check on the "Gambler's Fallacy"

    >
    > This is the opposite of the gambler's fallacy.
    >
    > > Also remember that there are also issues on starting the coin flip heads up,
    > > or tails up. So the minimum 50% sampling with a "fair coin" should be 4 flips,
    > > 2 starting heads up, and 2 starting tails up.

    >
    > The actual mechanism of a coin flip is irrelevant. It is merely a metaphor for
    > a string of random bits.
    >
    > In a string of random bits, every sequence of ones and zeroes is just as
    > likely as every other sequence.


    So are you saying information transmitted in binary can't be
    compressed?
     
    Whisky-dave, Jun 14, 2011
    #6
  7. Whisky-dave

    Whisky-dave Guest

    On Jun 13, 11:36 pm, Mxsmanic <> wrote:
    > Eric Stevens writes:
    > > The question in my mind is, what is the probability of a particular
    > > string having two or more identical sub-strings. If this is greater
    > > than zero then the string is compressible.

    >
    > Incorrect. If the two substrings together comprise the entire string, and if
    > they are of equal length, then the chances of both substrings being the same
    > in a random data stream are 1/(2^(n/2)), where n is the total length of the
    > string. So in a random bit string of 20 bits, the chances of the first 10-bit
    > substring being the same as the second 10-bit substring are 1 in 1024. This
    > means that more than 99.9% of the time, the substrings will not be identical,
    > which in turn means that an algorithm that compresses identical substrings
    > will produce no overall compression.
    >
    > So, the probability would be greater than zero, but compression would still be
    > effectively nil. You cannot compress random data.


    Why not?
    Suppose I give a child a drum stick and a plectrum and send him off
    to play with some musical instruments that he hasn't been trained to
    play. Or, if you prefer, 1000 monkeys typing on keyboards.
    Now take the output of each by recording what happened:
    in the case of a child hitting a drum, or hitting a guitar with a
    drum stick, I would say the notes produced are pretty random and
    unpredictable.
    Now if I record this stream of data (sound/tune/song/rock anthem,
    whatever you wish to call it) onto a recording device, are you saying
    it can't be exported to an iPod or other music-playing device as MP3,
    AAC, MP4, FLAC or Apple lossless?
    How about the monkeys on a keyboard: are you saying the text document
    can't be compressed because you don't know what they typed?
     
    Whisky-dave, Jun 14, 2011
    #7
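    Whisky-dave's monkeys actually expose the distinction the thread keeps
    missing: data can be statistically random and still be redundantly
    encoded. A sketch, with random.choice standing in for the monkeys:

        # Uniformly random LETTERS are compressible when stored as 8-bit
        # ASCII, because each letter carries only log2(26) ~ 4.7 bits of
        # entropy but occupies 8 bits on disk. What gets squeezed out is
        # the wasteful encoding, not the randomness itself.
        import math
        import random
        import string
        import zlib

        monkey = "".join(random.choice(string.ascii_lowercase)
                         for _ in range(100_000)).encode()

        print(math.log2(26) / 8)                          # ~0.59, the entropy floor
        print(len(zlib.compress(monkey)) / len(monkey))   # typically ~0.6-0.65

    The recorded drum-banging is a similar story: a WAV file of a real
    recording usually has sample-level redundancy (band-limiting, quiet
    passages) that a lossless codec such as FLAC can exploit, even though
    the "performance" itself is unpredictable.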
  8. Whisky-dave

    Whisky-dave Guest

    On Jun 14, 11:59 am, Mxsmanic <> wrote:
    > Whisky-dave writes:
    > > So are you saying information transmitted in binary can't be
    > > compressed?

    >
    > Random data cannot be compressed, be it in binary or not.

    ---------------------------------------------------------------------------------------
    Mxsmanic Jun 14, 1:43 am

    Elliott Roper writes:
    > Mxsmanic asserts "there is always some algorithm that will produce
    > compression, even if the stream is random"


    There is always some algorithm that will produce compression of a
    given finite stream of bits, even if it is from a random source.
    --------------------------------------------------------------------------------------

    So I'll ask again: can random data be compressed, YES or NO?
     
    Whisky-dave, Jun 14, 2011
    #8
  9. Wolfgang Weisselberg

    Wolfgang Weisselberg Guest

    Eric Stevens <> wrote:
    > On Mon, 13 Jun 2011 15:06:51 +0200, Mxsmanic <>
    >>Eric Stevens writes:


    Eric, in the general case random strings are incompressible.

    Of course, you can always create an algorithm that compresses a
    very small subset of any (even random) strings efficiently (but not
    the rest), the simplest *containing* the random string in question.
    Obviously, that does not save space as you have to transmit the
    special algorithm (with the string) and the compressed output ---
    that is again longer than the string as such.

    What is wanted is a more general compressor like lossless JPEG,
    FLAC, ZIP, RAR --- something that can be used on a whole class
    of data (i.e. infinitely many cases), not just on some very few
    specific strings.

    >>> I cannot conceive of a situation where it is necessary to encode
    >>> a message for the purpose of compression where the encoded message is
    >>> longer than the original.


    >>Actually, it's easy enough to construct a message that will do this with a
    >>given compression algorithm.


    > "given compression algorithm". All algorithms have foibles which
    > render their use inappropriate for particular cases or applications.
    > But the information which can be properly compressed by a particular
    > algorithm is a sub-set of all information.


    But is it a finite subset?
    And if it is a finite subset, needn't you add the algorithm
    to the message, for the recipient won't have it?

    >>> One must ask why, if the encoded message is
    >>> longer than the original, one should bother encoding the message for
    >>> the purpose of compression at all.


    >>One shouldn't. But compression algorithms are designed for specific patterns
    >>in input data. If those patterns are not present, no compression will occur.
    >>If the required patterns never occur in the input data, then the output will
    >>actually be longer than the input.


    > Depending on what the algorithm does with it.


    The algorithm will need at least one pattern to signal 'switch
    on compression such and such'. If no such pattern happens in
    the input, and the input is completely not compressible by the
    algorithm, then the output can be a verbatim copy of the input.
    (Usually algorithms also say "this is encoded by me with those
    and those settings, dictionary, checksum", so in this case the
    real world output will be slightly longer.) If such a pattern
    exists in the input, it must be escaped (for else the algorithm
    cannot handle all data at all) and that means a length increase.

    -Wolfgang
     
    Wolfgang Weisselberg, Jun 14, 2011
    #9
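    Wolfgang's "algorithm containing the string" can be made concrete in a
    few lines. A deliberately useless sketch; SECRET is a hypothetical
    placeholder for whichever random string you want to "compress":

        # A lossless codec specialized to one fixed string: it maps that
        # one input to a single byte and expands every other input by one
        # byte. Nothing is really saved, because the decoder itself
        # contains the string -- exactly the objection raised above.
        SECRET = b"any fixed random-looking string"  # hypothetical

        def compress(data: bytes) -> bytes:
            return b"\x01" if data == SECRET else b"\x00" + data

        def decompress(data: bytes) -> bytes:
            return SECRET if data == b"\x01" else data[1:]

        assert decompress(compress(SECRET)) == SECRET
        assert decompress(compress(b"anything else")) == b"anything else"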
  10. Wolfgang Weisselberg

    Wolfgang Weisselberg Guest

    Eric Stevens <> wrote:
    > On Mon, 13 Jun 2011 15:10:54 +0200, Mxsmanic <>
    >>Whisky-dave writes:


    >>> Complete rubbish.
    >>> I can't predict what the next best-selling book will be, but it will be
    >>> compressible.


    >>Books do not contain random data.


    > See
    > http://en.wikipedia.org/wiki/A_Million_Random_Digits_with_100,000_Normal_Deviates


    $ ls -lSr digits.txt*
    -rw-r--r-- 1 1099 1099 560534 14. Jun 22:24 digits.txt.7z.LZMA.max
    -rw-r--r-- 1 1099 1099 560561 14. Jun 22:09 digits.txt.7z
    -rw-r--r-- 1 1099 1099 560841 14. Jun 22:26 digits.txt.7z.LZMA2
    -rw-r--r-- 1 1099 1099 570380 14. Jun 22:26 digits.txt.7z.PPMd
    -rw-r--r-- 1 1099 1099 576308 14. Jun 21:57 digits.txt.bz2.7-times
    -rw-r--r-- 1 1099 1099 577203 14. Jun 21:48 digits.txt.bz2
    -rw-r--r-- 1 1099 1099 662293 14. Jun 21:48 digits.txt.gz
    -rw-r--r-- 1 1099 1099 663943 14. Jun 21:47 digits.txt.zip
    -rw-r--r-- 1 1099 1099 1440000 19. Dez 2005 digits.txt

    Quite compressible.


    >>> How can they be if random?


    >>By definition, if they are random, they are all equally probable.


    Nope.
    A string containing the digits 1-6 from die throws is random,
    but will not contain any letter or the digit 7 etc.

    -Wolfgang
     
    Wolfgang Weisselberg, Jun 14, 2011
    #10
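    A back-of-the-envelope check on Wolfgang's listing, assuming digits.txt
    holds the RAND book's million random decimal digits plus line numbers
    and spacing:

        # A random decimal digit carries log2(10) ~ 3.32 bits, so a
        # million of them need at least ~415 KB no matter what. The
        # compressors get the 1,440,000-byte file down to ~560 KB by
        # squeezing out the 8-bits-per-digit ASCII encoding and the
        # predictable formatting -- not by compressing the digit values
        # themselves.
        import math

        floor_bytes = 1_000_000 * math.log2(10) / 8
        print(round(floor_bytes))   # ~415,241 bytes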
  11. Wolfgang Weisselberg

    Wolfgang Weisselberg Guest

    Eric Stevens <> wrote:
    > On Tue, 14 Jun 2011 21:41:27 +0200, Wolfgang Weisselberg
    >>Eric Stevens <> wrote:
    >>> On Mon, 13 Jun 2011 15:06:51 +0200, Mxsmanic <>
    >>>>Eric Stevens writes:


    >>Eric, in the general case random strings are incompressible.


    > I never claimed otherwise. What I said is that some _may_ be
    > compressible and explained the circumstances which may allow them to
    > be compressed.


    Since random strings also encompass e.g. this posting, ... :)


    >>What is wanted is a more general compressor like lossless JPEG,
    >>FLAC, ZIP, RAR --- something that can be used on a whole class
    >>of data (i.e. infinitely many cases), not just on some very few
    >>specific strings.


    > But even then you have to have somehow ensured the decompressing
    > algorithm has been transmitted.


    Yes. Although since almost everyone already has decompressors
    for JPEG and ZIP (and, for those who are interested, FLAC), ...

    >>The algorithm will need at least one pattern to signal 'switch
    >>on compression such and such'. If no such pattern happens in
    >>the input, and the input is completely not compressible by the
    >>algorithm, then the output can be a verbatim copy of the input.
    >>(Usually algorithms also say "this is encoded by me with those
    >>and those settings, dictionary, checksum", so in this case the
    >>real world output will be slightly longer.) If such a pattern
    >>exists in the input, it must be escaped (for else the algorithm
    >>cannot handle all data at all) and that means a length increase.


    > This applies even when not compressing. Think TCP/IP.


    This always applies when you want to add out of band
    information into an information stream.

    -Wolfgang
     
    Wolfgang Weisselberg, Jun 15, 2011
    #11
  12. Whisky-dave

    Whisky-dave Guest

    On Jun 15, 10:03 pm, Mxsmanic <> wrote:
    > Whisky-dave writes:
    > > Why not?

    >
    > How many times and in how many ways do I have to explain this before certain
    > people here will understand?


    Because you have no understanding of it.
    But why not use infinity? You seem very fond of using it to prove
    your point.


    >
    > Compression requires that the incoming bit stream contain sequences of varying
    > probability.


    What's varying probability?

    >By encoding the sequences of high probability with shorter output
    > strings, and those of low probability with longer output strings, compression
    > may be effected.


    If you go on and on for an infinitely long time we will not know what
    you're talking about, since there is no end.
    I can compress this infinite drivel by saying "You're talking shit and
    don't realise it".

    > However, all sequences are equally probable in a random bit
    > stream, so no compression is possible.


    Why not?
    Give a real example.
    Tossing a coin 100 times: are you saying the data that gives the
    results of 100 coin tosses cannot be compressed?

    >
    > > Now if I record this stream of data (sound/tune/song/rock anthem,
    > > whatever you wish to call it) onto a recording device, are you saying
    > > it can't be exported to an iPod or other music-playing device as MP3,
    > > AAC, MP4, FLAC or Apple lossless?

    >
    > Random noise cannot be compressed.


    Yes it can; you've been talking shit for ages now and all of this can
    be compressed. It is done. I can take any line of your random drivel,
    type it into Word as text, then save it out, and that will be
    compressible to some degree.
    In fact I'm betting 99% can be deleted as rubbish, but even if rubbish
    is random and you're speaking it, that still doesn't mean it can't be
    compressed.
    Although I'd prefer to bin it as random rubbish.
     
    Whisky-dave, Jun 16, 2011
    #12
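    Whisky-dave's 100-coin-toss question has a two-part answer, and the
    split is the whole argument in miniature. A sketch using the
    standard-library zlib (10,000 flips instead of 100, so zlib's fixed
    overhead doesn't dominate):

        # Fair coin flips written out as ASCII 'H'/'T' compress roughly
        # 8:1, because each character wastes 7 of its 8 bits. The same
        # flips packed one per bit gain nothing: zlib returns them no
        # smaller.
        import random
        import zlib

        n = 10_000
        flips = [random.getrandbits(1) for _ in range(n)]

        as_text = "".join("H" if b else "T" for b in flips).encode()
        as_bits = bytes(sum(bit << k for k, bit in enumerate(flips[i:i + 8]))
                        for i in range(0, n, 8))

        print(len(zlib.compress(as_text)), "of", len(as_text))   # ~1/8 the size
        print(len(zlib.compress(as_bits)), "of", len(as_bits))   # no saving

    So the results of the tosses can be compressed if they are stored
    wastefully, but once they are already one bit per flip, no lossless
    scheme shrinks them on average.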
  13. Ray Fischer

    Ray Fischer Guest

    Mxsmanic <> wrote:
    >Ray Fischer writes:
    >
    >> >Compression is just a form of encoding information. A typical compression

    >>
    >> I'm not interested in more of your uninformed idiocy.

    >
    >Should I switch to uninformed idiocy, I'll keep that in mind.


    You already have.

    >> A string of 1000 zero bits is just as probable as any other string of
    >> bits. 1000 zero bits is highly compressible. Your claim is crap.

    >
    >Your grasp of my explanation is apparently nil. But, as I've said, these kinds
    >of concepts are difficult for many people to grasp. I'll try again:


    You're not smart enough to be a condescending asshole.

    ....
    >However, if all the messages from the input set are equally likely to appear
    >in your input stream, then the number of short output encodings will always be
    >equal to the number of long output encodings, which means that there will be
    >no compression at all--the average length of encoded messages in the output
    >stream will be exactly n.


    LOL!

    You're still an idiot.

    >Random data streams contain all messages with equal probability, and so they
    >cannot be compressed.


    You make the stupid mistake of assuming that the set of all random
    streams is the same as all sets of random streams. Nobody deals with
    the set of all random streams.

    --
    Ray Fischer | Mendocracy (n.) government by lying
    | The new GOP ideal
     
    Ray Fischer, Jun 16, 2011
    #13
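    The counting fact both sides keep circling is the pigeonhole argument,
    and it cuts both ways: no lossless code can shorten every n-bit input,
    yet any particular input may compress. A short check in plain Python:

        # There are 2**n binary strings of length n, but only 2**n - 1
        # strings of length strictly less than n, so an injective code
        # must leave at least one n-bit input unshortened. Stronger: the
        # outputs short enough to save k or more bits cover under
        # 2**(1-k) of all inputs.
        n, k = 20, 10
        print(sum(2**j for j in range(n)), "<", 2**n)       # 1048575 < 1048576

        can_save_k = sum(2**j for j in range(n - k + 1))    # outputs <= n-k bits
        print(can_save_k / 2**n)                            # ~0.00195, under 0.2%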
  14. Ray Fischer

    Ray Fischer Guest

    Mxsmanic <> wrote:
    >Whisky-dave writes:
    >> Why is it that you think random data can't be compressed?

    >
    >I've already explained that, again and again.


    And you keep getting it wrong.

    You start with the premise that the set of all random sequences cannot
    be compressed and draw the conclusion that none of the sequences can
    be compressed.

    > In order to compress a stream of
    >input messages, they must not all have the same probability of occurrence.


    Nobody tries to compress ALL of the random sequences.

    --
    Ray Fischer | Mendocracy (n.) government by lying
    | The new GOP ideal
     
    Ray Fischer, Jun 16, 2011
    #14
  15. Wolfgang Weisselberg

    Wolfgang Weisselberg Guest

    Mxsmanic <> wrote:

    > How many times and in how many ways do I have to explain this before certain
    > people here will understand?


    Until you get your story straight.

    > Compression requires that the incoming bit stream contain sequences of varying
    > probability.


    It doesn't.
    Take all finite bit streams using only the bit 1.
    They can be compressed by saying "n times 1".

    Yet the sequences (any shorter streams of 1 bits) are all of the
    same probability; they appear with probability 100% in the incoming
    bit stream.

    -Wolfgang
     
    Wolfgang Weisselberg, Jun 16, 2011
    #15
  16. Wolfgang Weisselberg

    Wolfgang Weisselberg Guest

    Mxsmanic <> wrote:
    > Eric Stevens writes:


    >> Of course infinitely long bit streams cannot be compressed: they
    >> remain infinitely long.


    > Streaming compression algorithms regularly compress streams of infinite
    > length.


    Complete bull. Unless you claim compression speed is infinite, they'll
    only compress finite streams in the time from the invention of the computer
    to the heat death of the universe.

    You probably mean 'indeterminate length'.

    > The entire string is not compressed at once, but the number of bits
    > input over a finite period of time is larger than the number of bits output,
    > if the stream is non-random and the compression algorithm is appropriate.


    Wrong. It is easy to prove that there can be (rare(?)) bit
    patterns in a non-random stream that will be as large or larger
    after compression, even when the average "string" (you mean
    stream) parts are compressed well.

    Also most compression algorithms don't compress entire
    strings at once, even with known length finite strings.

    > This
    > will always be true, for eternity, so the infinitely long stream is still
    > being compressed.


    And it will be infinitely long, too.
    In other words, the compression ratio will be 1:1.

    -Wolfgang
     
    Wolfgang Weisselberg, Jun 16, 2011
    #16
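    On the "streams with no end" point: what telecom gear actually does is
    incremental compression of a stream of indeterminate length, which the
    standard-library zlib exposes directly. A sketch; the "sensor reading"
    feed is a hypothetical stand-in for a real source:

        # zlib's streaming API compresses a chunk at a time, so the
        # producer never needs to know the total length. The loop stands
        # in for an unbounded feed; at any cut-off point, highly regular
        # input has produced far fewer output bytes than input bytes.
        import zlib

        comp = zlib.compressobj()
        total_in = total_out = 0
        for _ in range(1000):                      # stands in for "forever"
            chunk = b"sensor reading: 42\n" * 10   # hypothetical, regular data
            total_in += len(chunk)
            total_out += len(comp.compress(chunk))
        total_out += len(comp.flush())

        print(total_in, "->", total_out)           # large saving on non-random input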
  17. Wolfgang Weisselberg

    Wolfgang Weisselberg Guest

    Mxsmanic <> wrote:
    > Eric Stevens writes:


    >> From where do they get these streams of infinite length?


    > Telecom systems deal with them every day. The streams have no end.


    Would the end of all life and the heat death of the universe not
    stop said streams infinite aeons before they reach infinite length?

    -Wolfgang
     
    Wolfgang Weisselberg, Jun 17, 2011
    #17
  18. Ray Fischer

    Ray Fischer Guest

    Mxsmanic <> wrote:
    >Eric Stevens writes:
    >
    >> Every string has the same probability

    >
    >If they all have the same probability, they cannot be compressed.


    If you had a clue you wouldn't write such obvious nonsense.

    --
    Ray Fischer | Mendocracy (n.) government by lying
    | The new GOP ideal
     
    Ray Fischer, Jun 19, 2011
    #18
