> My point in the original comment (and to some extent in the preceding one) was to emphasize that a lot of these issues are at the input method level - we should not have to think about encoding as long as it accurately and unambiguously represents whatever we want it to represent.
I might be sympathetic to this, except that keyboard layouts and input (esp. on mobile devices) are an even bigger mess and even more fragmented than character encoding. Furthermore, while keys are not a 1:1 mapping with Unicode codepoints, they are very strongly influenced by the defined codepoints.
It'd be nice to separate those two components cleanly, but since language is defined by how it's used, this abstraction is always going to be very porous.
> Just out of curiosity - I would be interested to know more about your learning experience that you feel is not well aligned with the representation of jophola as it is currently.
I have literally never once heard the jophola referred to as a viram and a য, except in contexts such as this one. Especially since the jophola creates a vowel sound (instead of the consonant য), and especially since the jophola isn't even pronounced like "y". (I understand why the jophola is pronounced the way it is, but arguing on the basis of phonetic Sanskrit is a poor representation of Bengali today - by that point, we might as well be arguing that the thorn[0] is equivalent to "th" today, or that æ should be a separate letter and not a diphthong[1])
I'm not going to claim that it's completely without pre-digital precedent, but it certainly is not universal, and it's inconsistent in one of the above ways no matter how one slices it, especially when looking at some incredibly obscure and/or antiquated modifiers that are given their own characters, despite undeniably being composed of other characters that are already Unicode codepoints[2].
Not necessarily disagreeing with your broader point, but I just want to point out that the examples are only obscure and antiquated in English. æ is common in modern Danish and unambiguously a separate letter, as is þ in Icelandic.
And Norwegian (for æ). We have æ/Æ and ø/Ø (distinct, in theory, from the symbol for the empty set, btw), while å/Å used to be written aa/AA a long time ago (but is obviously not a result of combining two a's). Swedish uses ö/Ö for essentially ø/Ø, and ä/Ä for æ/Æ. Both of those I can only easily type by combining the dot-dot with o/O or a/A, because my Norwegian keyboard layout has keys for/labelled ø, æ, and å, not ö, ä, å.
For Japanese (and Chinese and a few others) things are even more complicated. It's tricky to fit ~5000 symbols on a keyboard, so typically in Japan one types on either a phonetic layout or a Latin layout, and translates to kanji as needed (eg: "nihongo" or "にほんご" is transformed to "日本語" -- note also that "ご" itself is a compound character, "ko" (こ) modified by two dots to become "go" -- which may or may not be entered as a compound, with a modifier key).
As I currently don't have any Japanese input installed under xorg, that last bit I had to cut and paste.
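The "ご" case is easy to demonstrate: the precomposed character and the "two dots" compound really are different code point sequences that software should treat as equivalent. A minimal Java sketch (class name is just illustrative) using canonical normalization:

```java
import java.text.Normalizer;

public class DakutenDemo {
    public static void main(String[] args) {
        String composed = "\u3054";         // ご as a single precomposed code point
        String decomposed = "\u3053\u3099"; // こ followed by the combining dakuten (the two dots)

        // A naive comparison sees two different strings:
        System.out.println(composed.equals(decomposed)); // false

        // Canonical composition (NFC) folds the pair into the precomposed form:
        String nfc = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
        System.out.println(composed.equals(nfc)); // true
    }
}
```

Whether an input method hands your program the composed or decomposed form is up to the input method, which is exactly why normalization has to happen somewhere.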
It is entirely valid to view ø as a combination of o and a (short) slash, or å as a combination of "a" and "°" -- but if one does that while typing, it is important that software handles the compound correctly (and distinct from ligatures, as mentioned above). My brother's name, "Ståle" is five letters/symbols long, if reversed it becomes "elåtS", not "el°atS" (six symbols).
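The Ståle example can be shown concretely. A sketch in Java, assuming the å arrives in decomposed form (a + combining ring, U+030A): a per-char reversal moves the ring onto the wrong letter, while composing first (NFC) yields the expected five-symbol reversal. (NFC doesn't compose every possible mark, so a fully general reverse would need grapheme-cluster boundaries.)

```java
import java.text.Normalizer;

public class ReverseDemo {
    // Reverse without splitting base + combining mark pairs, by first
    // composing sequences like "a" + U+030A into single code points (NFC).
    static String reverseNfc(String s) {
        String composed = Normalizer.normalize(s, Normalizer.Form.NFC);
        return new StringBuilder(composed).reverse().toString();
    }

    public static void main(String[] args) {
        String stale = "Sta\u030Ale"; // "Ståle" with a decomposed å

        // Naive reversal: the combining ring ends up attached to the "l"
        System.out.println(new StringBuilder(stale).reverse());
        // NFC first: "elåtS", five symbols, as a human would expect
        System.out.println(reverseNfc(stale));
    }
}
```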
So, yeah, it's complicated. Remember that we've spent many years fighting the hack that was ascii and extended ascii (which may be (part of) why eg: Norwegian gets to have å rather than a+°). You still can't easily use utf8 with either C or, as I understand it, C++ (almost, but not quite -- AFAIK one easy workaround is to use Qt's strings if one can have Qt as a dependency -- and it's still a mess on Windows, due to their botched wide char hacks... etc).
All in all, while it's nice to think that one can take some modernized, English-centric ideas evolved from the Gutenberg press, and mash it together with what constitutes a "letter" (How hard is it to reverse a string!? How hard is it to count letters!? How hard is it to count words?!) -- that approach is simply wrong.
There will always be magic, and there'll be very few things that can be said with confidence to be valid across all locales. What are to_upper("日本語"), reverse("Ståle"), character_count("Ståle"), word_count("日本語"), etc.?
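To make the ambiguity of character_count concrete, here's what the Java standard library answers for "Ståle", depending on what you mean by "character" (again assuming a decomposed å; class and method names are just for the sketch):

```java
import java.text.BreakIterator;

public class CountDemo {
    // Count user-perceived characters (grapheme clusters).
    static int graphemes(String s) {
        BreakIterator it = BreakIterator.getCharacterInstance();
        it.setText(s);
        int n = 0;
        while (it.next() != BreakIterator.DONE) n++;
        return n;
    }

    public static void main(String[] args) {
        String stale = "Sta\u030Ale"; // "Ståle" with å as a + combining ring

        System.out.println(stale.length());                          // 6 UTF-16 units
        System.out.println(stale.codePointCount(0, stale.length())); // 6 code points
        System.out.println(graphemes(stale));                        // 5 perceived letters
    }
}
```

Three APIs, three defensible answers; which one is "the" character count depends entirely on the locale and the use case.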
This turned into a bit more of an essay than I intended, sorry about that :-)
To be fair, proper codepoint processing is a pain even in Java, which was created back when Unicode was a 16-bit code. Now that code points extend beyond 16 bits, proper Unicode string looping looks something like this:
    for (int i = 0; i < string.length(); ) {
        final int codepoint = string.codePointAt(i);
        i += Character.charCount(codepoint);
    }
Actually, that's not correct, and it's the exact same mistake I made when using that API. codePointAt returns the codepoint at index i, where i is measured in 16-bit chars, which means you could index into the middle of a surrogate pair.
The correct version is:
    for (int i = 0; i < string.length(); i = string.offsetByCodePoints(i, 1)) {
        int codepoint = string.codePointAt(i);
    }
Java 8 seems to have acquired a codePoints() method on the CharSequence interface, which does the same thing.
But this just reinforces the point: proper Unicode string processing is a pain :).
I think you missed the part where `i` is not incremented in the for statement, but inside the loop using `Character.charCount`, which returns the number of `char` necessary to represent the code point. If there's something wrong with this, my unit tests have never brought it up, and I am always sure to test with multi-`char` codepoints.
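Both loops do visit whole code points. A quick check with a non-BMP character (a sketch; U+1D11E, MUSICAL SYMBOL G CLEF, is just a convenient surrogate-pair example):

```java
public class CharCountDemo {
    public static void main(String[] args) {
        String s = "a\uD834\uDD1Eb"; // 'a', U+1D11E (a surrogate pair), 'b'
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            System.out.printf("U+%04X%n", cp);
            // charCount is 2 for supplementary code points, so i skips past
            // the low surrogate and never indexes into the middle of a pair.
            i += Character.charCount(cp);
        }
        // Prints U+0061, U+1D11E, U+0062 -- three code points, not four chars.
    }
}
```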
> except that keyboard layouts and input (esp. on mobile devices) is an even bigger mess and even more fragmented than character encoding.
Encoding was in a similar place 10-15 years ago. Almost every publisher in Bengali had their own encoding, font, and keyboard layout - the bigger ones built their own in-house systems, while the smaller ones used systems that were built or maintained by very small operators. To make things even more complicated, these systems needed a very specific combination of operating system and page layout software to work. Now the situation is much better, with most publishers switching to Unicode, at least for public-facing content.
With input methods, I expect to see at least some consolidation - I don't necessarily think we need standards here, but there will be clear leaders that emerge. Yes, keyboard layouts are influenced by Unicode code-points, but only in a specific context. Usually when people who already have experience with computers start to type in Bengali (or any other Indic language), they use a phonetic keyboard, which is influenced mostly by the QWERTY layout. Then, if they write a significant amount, they find that the phonetic input is not very efficient (typing kha every time to get খ is painful), and they switch to a system where there's a one-to-one mapping between commonly used characters and keys. This does tend to have a relationship between defined codepoints and keys, but that's probably because the defined codepoints cover the basic characters in the script (so in your case, ্য would need to have a separate key, which I think is fine). There will still be awkward gestures, but that's, again, part of adjusting to the new medium. No one bats an eyelid when hitting "enter" to get a newline - but when we learn to write on paper, we never encounter the concept of a carriage return.
> I have literally never once heard the jophola referred to as a viram and a য
Interesting - I guess we have somewhat different mental models. For me, I did think of jophola as a "hoshonto + jo", possibly because of the "jo" connection, and this was true even before I started to mess around with computers or Unicode. I always thought about jophola as a "yuktakshar", and if it's a "yuktakshar", I always mentally broke it down to its constituents.
> [...] especially when looking at some incredibly obscure and/or antiquated modifiers that are given their own characters
I think those exist because of backwards compatibility reasons. For Bengali I think Unicode made the right choice to start with the minimum number of code points (based on what ISCII had at that time). As others have pointed out elsewhere in the thread - it is an evolving standard, and additions are possible. Khanda-ta did get accepted, and contrary to what many think, non-consortium members can provide their input (for example, I am acknowledged in the khanda-ta document I linked to earlier, and all I did was participate in the mailing list and provide my suggestions and some evidence).
(Out of curiosity, where in Bengal are you from?)
[0] https://en.wikipedia.org/wiki/Thorn_%28letter%29
[1] As it was in Old English.
[2] Such as the ash (æ)!