Re: number of bytes for each (uni)code point while using utf-8 as encoding ...

 
 
Daniele Futtorovic
07-10-2012
On 10/07/2012 21:45, lbrt chx _ gemale allegedly wrote:
>> On 10/07/2012 12:21, lbrt chx _ gemale allegedly wrote:

>
>>> How can you get the number of bytes you "get()"?

>
>> Well, UTF-8 always encodes the same char to the same (number of) bytes,
>> doesn't it?

> ~
> What about files whose authors claim they are UTF-8 encoded but which aren't, and/or which somehow get corrupted in transit? There are quite a few "monkeys" (us) messing with the metadata headers of HTML pages
> ~
> Sometimes you must double-check every file you keep in a text bank/corpus, because, through associations, one mistake may propagate and create other kinds of problems
> ~
>> So you could just build a map char -> size /a priori/.

> ~
> ...
> ~
>> But really, what's the use? ...

> ~
> to you there is none, but I am trying to pinpoint it as closely as I possibly can:
> ~
> .onMalformedInput(CodingErrorAction.REPORT);
> .onUnmappableCharacter(CodingErrorAction.REPORT);
> ~
> errors
> ~
> There should be a way to get sizes as you get UTF-8 encoded sequences from a file. Also, I have found that quite a few files get corrupted in transmission, and sometimes I wonder how safe that naive mapping you mention is, since those file formats don't have any kind of built-in error-correction measures
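For illustration, here is a minimal sketch in Java of what those two fragments build toward (the class name and sample string are made up): a strict UTF-8 decoder with both error actions set to REPORT for validation, combined with the /a priori/ lead-byte-to-size mapping mentioned above.

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class Utf8Sizes {

    // Length in bytes of the UTF-8 sequence starting with this lead byte,
    // or -1 if the byte cannot start a sequence (continuation/invalid byte).
    static int sequenceLength(byte lead) {
        int b = lead & 0xFF;
        if (b < 0x80)               return 1; // 0xxxxxxx: ASCII
        if (b >= 0xC2 && b <= 0xDF) return 2; // 110xxxxx
        if (b >= 0xE0 && b <= 0xEF) return 3; // 1110xxxx
        if (b >= 0xF0 && b <= 0xF4) return 4; // 11110xxx
        return -1;
    }

    public static void main(String[] args) throws CharacterCodingException {
        // 1-, 2-, 3- and 4-byte code points: A, é, €, U+1F600.
        byte[] data = "Aé€\uD83D\uDE00".getBytes(StandardCharsets.UTF_8);

        // Strict validation: REPORT makes bad input throw instead of
        // being silently replaced.
        StandardCharsets.UTF_8.newDecoder()
            .onMalformedInput(CodingErrorAction.REPORT)
            .onUnmappableCharacter(CodingErrorAction.REPORT)
            .decode(ByteBuffer.wrap(data)); // throws on malformed input

        // Once the bytes validate, the lead byte alone gives each size.
        for (int i = 0; i < data.length; i += sequenceLength(data[i])) {
            System.out.printf("offset %d: %d byte(s)%n",
                              i, sequenceLength(data[i]));
        }
    }
}

With REPORT set, the convenience decode(ByteBuffer) throws a CharacterCodingException on malformed or unmappable input instead of substituting replacement characters, which is the "errors" behaviour the fragments ask for.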


And what's that knowledge about the mapping size going to tell you?

Assume the file is corrupted. Then you can't know the original character
(since it's corrupted). Hence even if you know to how many bytes each
character maps, you can't tell whether the size you're seeing is wrong
or right.

At least that's how it seems to me.

Even the malformedness is no reliable indicator. Your data might get
corrupted and the outcome be well-formed, as far as the character
encoding is concerned.

I have to agree with Lew. Only the transmission layer can reliably
tackle this problem. Just pass a checksum and be done with it.
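As a minimal sketch of that idea (the file name is a placeholder): compute a CRC-32 over the raw bytes on the sending side, ship it alongside the file, and recompute it on the receiving side before trusting the encoding at all.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.CRC32;

public class FileChecksum {
    public static void main(String[] args) throws IOException {
        // "corpus.txt" is a placeholder; use whatever file you transmit.
        byte[] bytes = Files.readAllBytes(Paths.get("corpus.txt"));

        CRC32 crc = new CRC32();
        crc.update(bytes);

        // The sender publishes this value next to the file; the receiver
        // recomputes it and compares. A mismatch means the bytes changed
        // in transit, even if the damaged file still decodes as valid UTF-8.
        System.out.printf("CRC32 = %08X%n", crc.getValue());
    }
}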

--
DF.
 
Lew
07-10-2012
Daniele Futtorovic wrote:
> [...]
>
> I have to agree with Lew. Only the transmission layer can reliably
> tackle this problem. Just pass a checksum and be done with it.


Even the file being corrupt has no bearing on the correctness of the Java
code. The file itself may actually be corrupt and the Java code still be
working perfectly.
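A minimal sketch of that point (the sample string is made up): a single flipped bit can turn one valid UTF-8 sequence into another, so the Java code runs flawlessly on corrupt bytes, and only a checksum over the raw bytes would notice.

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class SilentCorruption {
    public static void main(String[] args) throws CharacterCodingException {
        byte[] original = "café".getBytes(StandardCharsets.UTF_8); // ends 0xC3 0xA9

        byte[] corrupted = original.clone();
        corrupted[corrupted.length - 1] ^= 0x01; // one flipped bit: 0xA9 -> 0xA8

        // A strict decoder accepts the damaged bytes without complaint,
        // because 0xC3 0xA8 is itself valid UTF-8 (it decodes to 'è').
        String decoded = StandardCharsets.UTF_8.newDecoder()
            .onMalformedInput(CodingErrorAction.REPORT)
            .onUnmappableCharacter(CodingErrorAction.REPORT)
            .decode(ByteBuffer.wrap(corrupted))
            .toString();

        System.out.println(decoded); // prints "cafè" -- no error was raised
    }
}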

--
Lew
 