Eli Zaretskii writes:

 > > Been discussing this elsewhere, and it's come to my attention that
 > > not only do all Unicode code points not fit into UTF-16, but all
 > > Unicode characters don't fit into Unicode code points :-).
 > > Presumably this is why Emacs expanded to 22 bits?
 >
 > Not sure what you mean here.  All Unicode characters do fit into the
 > Unicode codepoint space.  Emacs extends that codepoint space beyond
 > 22 bits because it needs to support cultures which don't want
 > unification yet.
I suppose he means grapheme clusters, such as various accented
characters that can be constructed from combining characters but have
no precomposed forms in Unicode.  As you say, that's not why Emacs
extended its code space.

 > > Did you consider leaving aref, char-code and code-char alone and
 > > writing unicode functions on top of these, i.e. unicode-length !=
 > > length, as opposed to making aref itself do this translation under
 > > the hood, thereby violating the expectation of O(1) access (which
 > > is certainly offered in other kinds of arrays, though it is
 > > questionable whether real users actually expect this for strings)?

Actually, originally Emacs allowed you to treat text (buffers and
strings) either as sequences of characters or as arrays of bytes, and
this was a real bug-breeder (and is why XEmacs chose the pain of
incompatibly separating the integer type from the character type).  I'm
not sure whether that feature is still present in modern Emacs, but at
the very least its use is so rare today that I'm unaware of any.

That's not what you asked, but it implies the answer "no, and you
shouldn't, either" to your question.  This is despite the fact that
yes, in many languages and applications users *do* expect O(1) access
to individual characters in text.

_______________________________________________
Gcl-devel mailing list
[email protected]
https://lists.gnu.org/mailman/listinfo/gcl-devel
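A small illustration of the grapheme-cluster point above, sketched in
Python (whose str type indexes Unicode code points) rather than Lisp:
"e" plus a combining acute can be normalized to a single precomposed
code point, but "x" plus a combining tilde cannot, because Unicode
defines no precomposed "x with tilde" -- so one visible character
remains two code points no matter what.

```python
import unicodedata

# "e" + COMBINING ACUTE ACCENT: two code points, one grapheme cluster.
# NFC normalization composes it, because precomposed U+00E9 exists.
e_acute = "e\u0301"
print(len(e_acute))                                # 2
print(len(unicodedata.normalize("NFC", e_acute)))  # 1

# "x" + COMBINING TILDE: also one grapheme cluster, but there is no
# precomposed "x with tilde", so NFC leaves it as two code points.
x_tilde = "x\u0303"
print(len(unicodedata.normalize("NFC", x_tilde)))  # 2
```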
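The characters-versus-bytes confusion described above can also be
sketched in Python, which (unlike old Emacs) keeps the two views in
separate types: str indexes code points in O(1), bytes indexes raw
octets, and the same position means different things in each.  This is
only an analogy for the Lisp aref question, not GCL's actual behavior.

```python
s = "naïve"              # 5 characters
b = s.encode("utf-8")    # "ï" occupies 2 bytes in UTF-8

print(len(s), len(b))    # 5 6 -- character count != byte count
print(s[2])              # 'ï' -- indexing by character
print(b[2])              # 195 -- only the first byte of "ï" (0xC3)
```

Conflating the two indexings is exactly the kind of bug-breeder the
post describes: code that works on ASCII silently corrupts text once
multibyte characters appear.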
