On 21/09/12 14:59, Steven D'Aprano wrote: <snip>
(Although in fairness, given the technical limitations back in 1963, the designers of ASCII did a reasonable job of making something that was usable for a subset of American English.)
<snip>
Therein lies the problem - in the 1960s there was damn-all chance of a computer having the memory needed to hold a full character set for what was, to the people working on them, a minor problem, and a similarly small chance of a TTY display being able to render it.
Fast forward 40 years and the limitations are all too obvious, hence the idea of Unicode to represent every character in one set.
What I hated about Unicode was the original idea of adopting 16-bit characters and thus breaking so much byte-oriented code that had been written, tested, and integrated over the history of computing. The move to UTF-8 is much better, though its variable-length coding has minor problems of its own.
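To make that concrete, here is a minimal Python sketch (the sample string is just illustrative) of why UTF-8 keeps old byte-oriented code working while a 16-bit encoding does not, and where the variable-length cost shows up:

    text = "Hello, naive cafe\u0301"      # ASCII plus an accented letter

    utf8 = text.encode("utf-8")
    utf16 = text.encode("utf-16-le")

    # ASCII characters stay single bytes in UTF-8, so old code that
    # scans for b"," or b"\n" still works on the UTF-8 bytes:
    print(utf8.split(b",")[0])           # b'Hello'

    # In a 16-bit encoding every character is at least two bytes, so the
    # same byte-oriented search returns data with interleaved NUL bytes:
    print(utf16.split(b",")[0])          # b'H\x00e\x00l\x00l\x00o\x00'

    # The price of UTF-8: characters are 1-4 bytes, so byte length and
    # character count disagree once you leave ASCII:
    print(len(text), len(utf8), len(utf16))

So plain-ASCII text is bit-for-bit identical in UTF-8, which is exactly why all that old byte-oriented code survives, while anything relying on "one character = one byte" only breaks for the non-ASCII parts.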
But yes to all who point out MS's deficiencies in following simple standards - it is almost as if they want to prevent interoperability...
Regards, Paul