> I'm not highly skilled in either win32 or NSIS, so excuse me if I say
> something stupid but;  how can an additional charset imply so much work?
> NSIS supports a lot of charsets for different languages already.  It feeds
> the OS text in a given charset and tells it which charset it is.

> Perhaps the problem is not about Unicode but about wide chars?  (in which
> case, I wonder why they use utf-16 instead of utf-8).

Well, win32 has two sets of APIs: one for wide chars, which on Windows
means UTF-16, and one for the ANSI codepage (potentially multi-byte).
There are also two sets of functions in the standard C library, the
ANSI and the wide-char versions, which you are probably also aware of.
Why?  Because the two kinds of strings are arrays of two different
types: wchar_t (a 16-bit unsigned type on Windows) vs. char.  So you
can see the complexity there already.  In the Unix world wchar_t is a
32-bit type, but not so in the Windows world.  That's why UTF-16 is
the preferred encoding of Unicode in Unicode NSIS.

So simply put, it's not a matter of yet another charset (Unicode is
not an ANSI codepage); it's a matter of a different string type
altogether.
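To make that concrete, here's a small sketch of my own (not anything
taken from NSIS itself): every Win32 string API comes in an -A and a
-W flavour, and the C runtime mirrors that with str*/wcs* functions,
so a codebase has to commit to one string type throughout, or carry
both:

    /* Illustrative sketch only: the two parallel string worlds on Win32.
     * Needs a Windows toolchain; the A/W pairs and the 16-bit wchar_t
     * are Win32 specifics. */
    #include <windows.h>
    #include <string.h>   /* strlen - ANSI CRT */
    #include <wchar.h>    /* wcslen - wide CRT */

    int main(void)
    {
        /* ANSI world: char arrays, interpreted in the active codepage. */
        const char *ansi = "Hello";
        MessageBoxA(NULL, ansi, "ANSI build", MB_OK);
        size_t alen = strlen(ansi);        /* counts bytes */

        /* Wide world: wchar_t arrays; on Windows wchar_t is 16 bits,
         * so the string holds UTF-16 code units. */
        const wchar_t *wide = L"Hello";
        MessageBoxW(NULL, wide, L"Unicode build", MB_OK);
        size_t wlen = wcslen(wide);        /* counts 16-bit units */

        (void)alen;
        (void)wlen;
        return 0;
    }

Every string literal, buffer and length calculation changes type along
with the API, which is why adding Unicode support is much more work
than adding one more codepage.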

- Jim


