Input Character Set Handling

 
 
Kulgan
 
      11-10-2006
Hi

I am struggling to find definitive information on how IE 5.5, 6 and 7
handle character input (I am happy with the display of text).


I have two main questions:


1. Does IE automatically convert text input in HTML forms from the
native character set (e.g. SJIS, 8859-1 etc) to UTF-8 prior to sending
the input back to the server?

2. Does IE Javascript do the same? So if I write a Javascript function
that compares a UTF-8 string to a string that a user has inputted into
a text box, will IE convert the user's string into UTF-8 before doing
the comparison?


I think that the answer to question 1 is probably "YES", but I cannot
find any information on question 2!


Many thanks for your help


Kulgan.

 
Bart Van der Donck
 
      11-10-2006
Kulgan wrote:

> 1. Does IE automatically convert text input in HTML forms from the
> native character set (e.g. SJIS, 8859-1 etc) to UTF-8 prior to sending
> the input back to the server?


With <form method="get">, the browser tries to pass the characters
to the server in the character set of the page, but it will only
succeed if the characters in question can be represented in that
character set. If not, browsers calculate "their best bet" based on
what's available (old style) or use a Unicode set (new style).

Example: western browsers send 'é' as '%E9' by default (URL encoding).
But when the page is in UTF-8, the browser will first look up the
Unicode multibyte encoding of 'é'. In this case that is 2 bytes,
because 'é' (U+00E9) lies in the code point range that UTF-8 encodes
with two bytes. Those two bytes are 0xC3 and 0xA9, and will result in
'%C3%A9' (URL encoding) in the eventual query string.

<form method="post" enctype="application/x-www-form-urlencoded"> is
the same as <form method="post"> and uses the same general principle
as GET.

In <form method="post" enctype="multipart/form-data"> there is no
default encoding at all, because this encoding type needs to be able to
transfer non-base64-ed binaries. 'é' will be passed as the raw bytes
for 'é' and that's it.
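
As a rough javascript illustration of the two behaviours (only a sketch,
not exactly what the browser's form submission code does: the legacy
escape() function produces the single-byte Latin-1 style escapes, while
encodeURIComponent() always percent-encodes the UTF-8 octets):

// 'é' is Unicode code point U+00E9.
var s = '\u00E9';

// Legacy escape(): single-byte %XX form, comparable to a Latin-1 submission.
alert(escape(s));                // "%E9"

// encodeURIComponent(): percent-encoded UTF-8 octets, comparable to a
// submission from a UTF-8 page.
alert(encodeURIComponent(s));    // "%C3%A9"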

> 2. Does IE Javascript do the same? So if I write a Javascript function
> that compares a UTF-8 string to a string that a user has inputted into
> a text box, will IE convert the user's string into UTF-8 before doing
> the comparison?


Browsers only encode form values between the moment that the user
submits the form and the moment that the new POST/GET request is made.
You should have no problem using any Unicode characters in
javascript as long as you haven't sent the form.
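
For instance (a minimal sketch, with a hypothetical text box named "t"):
whatever charset the page is declared in, a value read from a form field
is simply a sequence of Unicode code units to javascript.

// Suppose the page contains <input name="t"> and the user typed 'é'.
var v = document.forms[0].t.value;

alert(v.length);             // 1   -- one Unicode code unit, not "bytes"
alert(v.charCodeAt(0));      // 233 -- 0x00E9, the code point of 'é'
alert(v == '\u00E9');        // true, regardless of the page's charset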

Hope this helps,

--
Bart

 
Kulgan
 
      11-10-2006

> Browsers only encode form values between the moment that the user
> submits the form and the moment that the new POST/GET request is made.
> You should have no problem using any Unicode characters in
> javascript as long as you haven't sent the form.
>

Thanks for the helpful info.

On the Javascript subject, if the user's input character set is not
UTF-8 (e.g. it is the Japanese SJIS set), but the page character set is
UTF-8, how does Javascript see the characters? Does the browser do an
SJIS to UTF-8 conversion on the characters before they are used (e.g.
to find the length of the string)?

Thanks,

Kulgan.

 
VK
 
      11-10-2006
Kulgan wrote:
> 2. Does IE Javascript do the same? So if I write a Javascript function
> that compares a UTF-8 string to a string that a user has inputted into
> a text box, will IE convert the user's string into UTF-8 before doing
> the comparison?


That is confusion inspired by Unicode, Inc. and W3C (I'm wondering
rather often if they have any clue at all about Unicode).

Unicode is a *charset* : a set of characters where each character unit
is represented by two bytes (taking the original Unicode 16-bit
encoding). At the same time the TCP/IP protocol is an 8-bit medium: its
atomic unit is one byte. This way one cannot directly send Unicode
entities over the Internet: the same way as you cannot place a 3D box on a
sheet of paper, you can only emulate it (by making its 2D projection). So
it is necessary to use some 8-bit *encoding* algorithm to split Unicode
characters into sequences of bytes, send them over the Internet and
glue them back together on the other end. Here the UTF-8 *encoding* (not
*charset*) comes into play. By some special algorithm it encodes
Unicode characters into base ASCII sequences and sends them to the
recipient. The recipient - informed in advance by the Content-Type header
of what is coming - uses a UTF-8 decoder to get back the original Unicode
characters.
The Fact Number One unknown to the majority of specialists, including
the absolute majority of W3C volunteers - so feel yourself a chosen
one -
Pragma <?xml version="1.0" encoding="utf-8"?> which one sees left and
right in XML and pseudo-XHTML documents *does not* mean that this
document is in UTF-8 encoding. It means that the document is in Unicode
charset and it must be transmitted (if needed) over an 8-bit medium
using the UTF-8 encoding algorithm. Respectively, if the document is not
using the Unicode charset then you are making a false statement, with
numerous nasty outcomes pending if it is ever used on the Internet.
Here is even more secret knowledge, shared between myself and Sir
Berners-Lee only -
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
*does not* mean that the characters you see on your screen are in a
"UTF-8 charset" (there is no such thing). It means: "The input stream was
declared as Unicode charset characters encoded using the UTF-8 transport
encoding. The result you are seeing (if seeing anything) is the result
of decoding the input stream with a UTF-8 decoder".
The "charset" term here is totally misleading - it remained from the
old times of charsets with at most 256 entities, where the encoding matched
the charset and vice versa. The proper header W3C should insist on is
...content="text/html; charset=Unicode; encoding=UTF-8"
As I said before, very few people on Earth know the truth, and the
Web has not collapsed so far for two main reasons:
1) The Content-Type header sent by the server takes precedence over the META
tag on the page. This HTTP standard is one of the most valuable ones left to
us by the fathers. They saw in advance the ignorance ruling, so they left
server admins the chance to save the world.
2) All modern UA's have special heuristics built in to sort out real
UTF-8 input streams from authors' mistakes. A note for the "Content-Type
in my heart" adepts: it means that over the last years a great amount
of viewer-dependent XML/XHTML documents has been produced.

Sorry for such an extremely long preface, but I considered it dangerous to
just keep giving "short fix" advice: that is fighting the symptoms
instead of the sickness. And the sickness is growing worldwide: our
helpdesk is flooded with requests like "my document is in UTF-8
encoding, why..." etc.

Coming back to your original question: the page will be either Unicode
or ISO-8859-1 or something else: but it *never* will be UTF-8: UTF-8
exists only during the transmission and parsing stages. The maximum one
can do is to have UTF-8 encoded characters right in the document, like
%D0%82... But in such a case it is just raw UTF-8 source represented
using the ASCII charset.
From the other side, JavaScript operates with Unicode only and it sees
the page content "through the window of Unicode" no matter what the
actual charset is. So to reliably compare user input / node values with
JavaScript strings you have two options:
1) The most reliable one for an average-to-small amount of non-ASCII
characters:
use \u Unicode escape sequences (see the sketch after this list).

2) Less reliable, as it can easily be smashed once opened in a non-Unicode
editor:
have the entire .js file in Unicode, with non-ASCII characters typed as
they are, and your server sending the file with the UTF-8 encoding.
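
For option 1, a minimal sketch (the function name and the sample word
"été" are placeholders of my own choosing):

// Option 1: keep the .js file pure ASCII; the non-ASCII characters exist
// only as \u escape sequences, so no editor or transport can mangle them.
function isEte(value) {
    // "été" written with Unicode escapes: U+00E9, 't', U+00E9
    return value == '\u00E9t\u00E9';
}

// e.g. isEte(document.forms[0].someField.value)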

P.S. There is a whole other issue which could be named "How do I handle
Unicode 32-bit characters, or How did Unicode, Inc. screw the whole
world". But your primary question is answered, and it's beer time
anyway.

 
Bart Van der Donck
 
      11-10-2006
Kulgan wrote:

> [...]
> On the Javascript subject, if the user's input character set is not
> UTF-8 (e.g. it is the Japanese SJIS set), but the page character set is
> UTF-8, how does Javascript see the characters?


Always the same, as their Unicode code points.

> Does the browser do an SJIS to UTF-8 conversion on the characters
> before they are used (e.g. to find the length of the string)?


No conversion/encoding is possible on that level. I think you're not
fully aware of the distinction between
(1) the user's (available) charsets
(2) the charset of the web page
(3) how javascript handles characters internally

Only (3) is of importance in your case:

Paste into input field:<br>
ヤツカ
<hr>
<form>
<input name="i">
<input type="button" value="check" onClick="
if (document.forms[0].i.value == '\uFF94\uFF82\uFF76') {
alert('equal') }
else {
alert('not equal')
}
">
</form>

Note that it doesn't matter whether the user has SJIS installed. It
also doesn't matter what the charset of the page is.

--
Bart

 
Bart Van der Donck
 
      11-10-2006
VK wrote:

> [...]
> Unicode is a *charset* : a set of characters where each character unit
> is represented by two bytes (taking the original Unicode 16-bit
> encoding).
> [...]


I wouldn't put it that way. Some Unicode characters consist of 2 bytes,
yes, but Unicode's primary idea is the multi-byte concept; characters
can also consist of 1 byte, or more than 2.
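
A small sketch of that multi-byte idea as it shows up in UTF-8 (the sample
characters are my own choice; encodeURIComponent exposes the UTF-8 octets
of a string):

// The number of UTF-8 octets grows with the code point:
alert(encodeURIComponent('A'));       // "A"         -> 1 octet  (U+0041)
alert(encodeURIComponent('\u00E9'));  // "%C3%A9"    -> 2 octets (U+00E9, 'é')
alert(encodeURIComponent('\uFF94'));  // "%EF%BE%94" -> 3 octets (U+FF94, 'ヤ')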

--
Bart

 
VK
 
      11-10-2006

Bart Van der Donck wrote:
> > [...]
> > Unicode is a *charset* : a set of characters where each character unit
> > is represented by two bytes (taking the original Unicode 16-bit
> > encoding).
> > [...]


> I wouldn't put it that way. Some Unicode characters consist of 2 bytes,
> yes, but Unicode's primary idea is the multi-byte concept; characters
> can also consist of 1 byte, or more than 2.


I humbly disagree: the very original Unicode idea is that 8 bits cannot
accommodate the character codes of all characters currently used in the
world. This way it was an obvious idea to use a two-byte encoding with
respectively 65,536 possible character units: to represent all
*currently used* systems of writing. While some Far East systems
(Hangul, Traditional Chinese) would be a space challenge - the majority
of other systems are based on the Phoenician phonetic alphabet (Greek >
Latin > others), so they are relatively very compact. This way 65,536
storage units were more than generous for the task.
From the other end, at the moment the project started, US English
(base ASCII) texts were absolutely prevailing in transmissions, so
the task was not to double the HTTP traffic with useless 0x00 bytes. To
avoid that it was decided that the bytes 0-127 would be treated
literally as base ASCII characters and anything 128-255 would be treated
as the beginning of a double-byte Unicode sequence. Alas, it meant that
0x8000 - 0xFFFF (a good half of the table) would be unusable. Luckily
Pike and Thompson found a way for an economic, unambiguous transmission
of any characters in the 0-65535 range meeting the core requirement not to
double the traffic with Unicode-encoded base-ASCII characters. This
algorithm - later called UTF-8 - went into wide production. It
doesn't mean that English "A" is represented with a single byte
in Unicode: it means that the Unicode double-byte character 0x0041 (Basic
Latin LATIN CAPITAL LETTER A) has a universally recognized single-byte
shortcut 0x41.
That would be a happy ending, but unfortunately Unicode, Inc. treated
the 65,536 storage places as a teenager would treat his first credit card
- rolling it on the first occasion without thinking of the
consequences. Any shyster coming with any kind of crap tables was
immediately welcomed and accounted for. This way Unicode, Inc. started to
work on a "first come - first served" basis, and the original idea of
"all currently used charsets" was seamlessly transformed into
"all symbolic systems ever used for any purposes by the human
civilization". Predictably for language specialists - but
surprisingly for the Unicode, Inc. amateurs - it appeared that
humanity has produced a countless amount of systems to denote sounds,
syllables, words, ideas, musical sounds, chemical elements and an
endless amount of other material and spiritual entities. This way they
spent all the available storage space on rarely used crap before even
fixing the place for such "minor" issues as Chinese or Japanese. As
a result they had to go from a 2-byte system to a 3-byte system, and now
they seem to be exploring the storage space of a 4-byte system. And this is
even without yet touching Egyptian hieratic/demotic and all variants of
Cuneiform. And there is no one so far to come, send the fn amateurs to
hell and bring the Unicode system into order.

Go and say "Unicode" to any Java team guy (unlike with
"Candyman", one time will suffice) and then run away quickly
before he starts beating you.

Yes, I am biased on the matter: I hate "volunteers" who are sure that
whatever they are doing is right just because they are doing it for
free (and seemingly for free).

 
Michael Winter
 
      11-10-2006
VK wrote:
> Kulgan wrote:
>> 2. Does IE Javascript do the same? So if I write a Javascript
>> function that compares a UTF-8 string to a string that a user has
>> inputted into a text box, will IE convert the user's string into
>> UTF-8 before doing the comparison?

>
> That is confusion inspired by Unicode, Inc. and W3C (I'm wondering
> rather often if they have any clue at all about Unicode).


Oh, here we go.

> Unicode is a *charset* ...


It's a character encoding: characters are encoded as an integer within a
certain "codespace", namely the range 0..10FFFF. There are then
"encoding forms" that transform values in this range to "code units",
specifically the three Unicode Transformation Formats, UTF-8, -16, and
-32. These code units can be used to store or transport sequences of
"encoded characters". The "encoding scheme" (which includes big- and
little-endian forms for UTF-16 and -32) defines precisely how each form
is serialised into octets.
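
As a rough sketch of those layers in javascript terms (using the half-width
katakana 'ヤ', U+FF94, from Bart's example; encodeURIComponent percent-encodes
the UTF-8 octets of a string):

var ch = '\uFF94';

// The code point, here also a single UTF-16 code unit (it is below U+10000):
alert(ch.charCodeAt(0).toString(16));   // "ff94"

// The same character serialised with the UTF-8 encoding scheme: three octets.
alert(encodeURIComponent(ch));          // "%EF%BE%94"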

[snip]

> Here UTF-8 *encoding* (not *charset*) comes into play. By some
> special algorithm it encodes Unicode characters into base ASCII
> sequences and sends them to the recipient.


Whilst some encoded characters will map directly to ASCII (specifically
the Unicode code points, 0..7F), most won't. For a start, ASCII is a
7-bit encoding (128 characters in the range 0..7F), whereas UTF-8 is an
8-bit, variable-width format.

The word you are looking for is "octet".

[snip]

> Pragma <?xml version="1.0" encoding="utf-8"?>


It is the XML declaration and takes the form of a processing instruction.

> ... *does not* mean that this document is in UTF-8 encoding.


That depends on what you mean by "in UTF-8 encoding". If you meant
"serialised using the UTF-8 encoding scheme", then that's precisely what
it means. However, it is unnecessary to include an XML declaration for
documents that use either the UTF-8 or -16 encoding form (see 4.3.3
Character Encoding in Entities).

> It means that the document is in Unicode charset ...


All XML documents (and HTML, for that matter) use the Unicode
repertoire. The issue is the form in which the document is transported.
Should a higher protocol not signal the encoding form in use (UTF-8,
ISO-8859-1, etc.) then the XML declaration serves that purpose.

[snip]

> Coming back to your original question: the page will be either Unicode
> or ISO-8859-1 or something else: but it *never* will be UTF-8: UTF-8
> exists only during the transmission and parsing stages.


UTF-8 can be used any time the document needs to be serialised into a
sequence of octets. Therefore, a document might be stored on disk using
UTF-8, and then transmitted verbatim across a network.

[snip]

Mike
 
Kulgan
 
      11-11-2006
> > On the Javascript subject, if the user's input character set is not
> > UTF-8 (e.g. it is the Japanese SJIS set), but the page character set is
> > UTF-8, how does Javascript see the characters?

>
> Always the same, as their Unicode code points.
>


Many thanks for the advice. I am starting to get an understanding of
what is going on now!! Are you saying that even if the user's Windows
character set is not Unicode, Javascript still sees characters entered
into text boxes as Unicode? Or are modern Windows (XP) installations
always Unicode for data input anyway??

Can of worms...!

Kulgan.

 
Bart Van der Donck
 
      11-11-2006
Jim Land (NO SPAM) wrote:

> "Bart Van der Donck" <(E-Mail Removed)> wrote in
> news:(E-Mail Removed) oups.com:


Posts like yours are dangerous; Google Groups displays html char/num
entities where you haven't typed them and vice versa. I can imagine
that most news readers will have trouble with it too; that's why I
put some work into restricting my previous post to ISO-8859-1 so everybody
sees it correctly.

> > Paste into input field:<br>
> > ヤツカ
> > <hr>
> > <form>
> > <input name="i">
> > <input type="button" value="check" onClick="
> > if (document.forms[0].i.value == '\uFF94\uFF82\uFF76') {
> > alert('equal') }
> > else {
> > alert('not equal')
> > }
> > ">
> > </form>

> Not equal.
>
> 2 Paste ヤ
> if (document.forms[0].i.value == '\uFF94;')
> Not equal
>
> 3 Paste ヤ
> if (document.forms[0].i.value == 'ヤ')
> Not equal
>
> 4 Paste &amp;
> if (document.forms[0].i.value == '&amp;')
> Not equal
>
> 5 Paste abc
> if (document.forms[0].i.value == 'abc')
> Equal
>
> 6 Paste &
> if (document.forms[0].i.value == '&')
> Equal
>
> 7 Paste &
> if (document.forms[0].i.value == '&') //ascii decimal
> Equal
>
> 8 Paste &
> if (document.forms[0].i.value == '\x26') //ascii hex
> Equal
>
> 9 Paste &
> if (document.forms[0].i.value == '\46') //ascii octal
> Equal
>
> 10 Paste &
> if (document.forms[0].i.value == '\u0026') //unicode
> Equal
>
> 11 Paste &
> if (document.forms[0].i.value == '&amp;') //html character entity
> Equal


I suppose your testing results are fine; two thoughts:
- beware of leading/trailing spaces when you copy/paste
- (document.forms[0].i.value == '\uFF94;') doesn't equal because the
semicolon shouldn't be there

> Are the following conclusions correct?
>
> 1. When a single character is typed in an input box, Javascript can
> correctly recognize it as itself,


Yes.

> as its ascii code (decimal, hex, or octal),


Yes, but only when it's an ASCII character (which is nowadays too
narrow to work with).

> as its unicode,


Yes.

> or as its html character entity.


I'd say this is a bridge too far; there might be browser dependencies
when it comes to num/char entity handling in forms. I would tend
not to rely too much on this kind of stuff.

> 2. However, Javascript does *not* correctly recognize a character entered
> by typing its ascii code, unicode, or html character entity into a text
> box.


Correct by definition; e.g. when you type "\x41", it will be treated as
the literal characters "\x41" and not as "A", because you typed "\x41"
and not "A". But it's possible to write a script to modify such behaviour
(see the sketch below).
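
One possible sketch of such a script (the helper name is my own, and it
only handles \uXXXX escapes typed literally into the box):

// Turn literally typed "\u0041"-style sequences in a form value into
// the characters they denote, so they can be compared against real strings.
function decodeTypedUnicodeEscapes(s) {
    return s.replace(/\\u([0-9A-Fa-f]{4})/g, function (m, hex) {
        return String.fromCharCode(parseInt(hex, 16));
    });
}

// Example: the user typed the six characters \u00E9 into the box.
alert(decodeTypedUnicodeEscapes('\\u00E9') == '\u00E9');   // true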

--
Bart

 