Looks like a nice project. I'm currently searching for a Unicode library, and ICU appears to be the de facto standard here, with the benefit of coming pre-installed on pretty much any Linux distribution. Any reason why I should use Unicorn instead? I couldn't find information on how it compares to ICU in the documentation (well, except for the most welcome use of modern C++).
It looks like Unicorn can apply operations (such as regexes) to text that is natively in UTF-8, giving it a distinct advantage over ICU, which was written back when UTF-16 seemed like a good idea and has to convert everything into UTF-16.
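For comparison, this is roughly what the round trip looks like with ICU's C++ API (a minimal sketch; error handling is mostly trimmed, and the helper function name is mine): you have to lift your UTF-8 into UTF-16 before the regex engine will touch it, and convert back afterwards.

    #include <unicode/regex.h>
    #include <unicode/unistr.h>
    #include <string>

    // Sketch: running an ICU regex over text that lives in UTF-8.
    std::string first_match(const std::string& utf8_text,
                            const std::string& utf8_pattern) {
        UErrorCode status = U_ZERO_ERROR;

        // Step 1: convert pattern and text from UTF-8 to UTF-16 UnicodeStrings.
        icu::UnicodeString pattern = icu::UnicodeString::fromUTF8(utf8_pattern);
        icu::UnicodeString text    = icu::UnicodeString::fromUTF8(utf8_text);

        // Step 2: run the regex in UTF-16.
        icu::RegexMatcher matcher(pattern, 0, status);
        if (U_FAILURE(status))
            return {};
        matcher.reset(text);

        // Step 3: convert any match back to UTF-8.
        std::string result;
        if (matcher.find())
            matcher.group(status).toUTF8String(result);
        return result;
    }

A UTF-8-native engine skips steps 1 and 3 entirely.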
It's hard but necessary to differentiate between UTF-16 and a raw UChar array: a UChar array is not necessarily a well-formed UTF-16 string. Besides, why not just use UnicodeString? It's fairly easy to use, and it hides those details from you.
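To make the distinction concrete (a tiny made-up buffer, not something ICU recommends): a UChar array is just a sequence of 16-bit code units, so nothing stops it from containing an unpaired surrogate, which makes it ill-formed UTF-16 even though UnicodeString will wrap it without complaint.

    #include <unicode/unistr.h>

    // A UChar buffer that is NOT well-formed UTF-16: it contains an
    // unpaired high surrogate (0xD800) between 'A' and 'B'.
    const UChar buffer[] = { 0x0041, 0xD800, 0x0042 };

    // UnicodeString accepts it as-is; well-formedness is the caller's problem.
    icu::UnicodeString s(buffer, 3);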
It's indeed super cool to see a modern Unicode C++ library. But is it really useful for production? The answer may well be no. ICU, by contrast, is old, compact, and battle-tested.
I'm talking about using UTF-8 as the string representation, not UChars. UChars are an artifact of UTF-16, and thus require converting all text on input and output, unless you work in a Windows API world where I/O is UTF-16.
Modern programming languages such as Rust gain efficiency by working with unmodified UTF-8. All you lose is constant-time arbitrary indexing, which is a bad idea in most cases anyway.
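To spell out what losing constant-time indexing means (a plain C++ sketch, no library assumed, and it assumes the input is already valid UTF-8): reaching the Nth code point means scanning past all the variable-length sequences before it, so access by code point index is O(n) instead of O(1).

    #include <cstddef>
    #include <string>

    // Sketch: byte offset of the Nth code point in a UTF-8 string, found by
    // skipping continuation bytes (those matching 10xxxxxx). O(n) in the prefix.
    std::size_t offset_of_code_point(const std::string& utf8, std::size_t n) {
        std::size_t i = 0;
        while (n > 0 && i < utf8.size()) {
            ++i;  // step past the lead byte of the current code point
            while (i < utf8.size() &&
                   (static_cast<unsigned char>(utf8[i]) & 0xC0) == 0x80)
                ++i;  // step past its continuation bytes
            --n;
        }
        return i;  // offset of the Nth code point, or utf8.size() if past the end
    }

In practice you rarely need "the Nth code point"; you iterate, search, or slice at positions you already found, which is why the trade-off is usually a win.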
Both utf8 and utf16 encode some characters as sequences of more than one code unit. If you split a string at an arbitrary point you risk splitting inside one of those sequences.
That will be very common in utf8 text containing non-ASCII characters, and very rare with utf16 (it only happens with characters outside the BMP).
Neither is something you want in your code, unless you think it's a good idea to corrupt your users' data.
Edit: It's not too difficult to handle these cases and make sure you only split at valid positions, but you do need to be careful, and there are a number of edge cases you might not think through, or even encounter, unless you have the right sort of data to test with. That leads to lots of faulty implementations: for years, for example, MySQL couldn't handle utf8 characters outside the BMP.
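A sketch of what "only split at valid positions" can look like at the code-unit level (plain C++, assumes otherwise valid UTF-8; grapheme clusters and combining marks are a separate, harder problem): back up from the proposed split point until you're no longer on a continuation byte.

    #include <cstddef>
    #include <string>

    // Sketch: move a proposed split position backwards until it no longer
    // falls inside a multi-byte UTF-8 sequence.
    std::size_t safe_split_point(const std::string& utf8, std::size_t pos) {
        if (pos >= utf8.size())
            return utf8.size();
        // 10xxxxxx bytes are continuations; a split there cuts a code point in half.
        while (pos > 0 && (static_cast<unsigned char>(utf8[pos]) & 0xC0) == 0x80)
            --pos;
        return pos;
    }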
My parent was talking about indexing at the code point level, not at the encoding (byte / code unit) level.
I do know that Unicode has combining code points (confusingly called combining characters) and nasty things like rtl switching code points. I guess it's turtles all the way down.
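For instance (standard C++ only; the strings are just an illustration): "é" can be a single precomposed code point or an "e" followed by U+0301 COMBINING ACUTE ACCENT, so even splitting cleanly between code points can still split what the user sees as one character.

    #include <string>

    // Two spellings a user would consider the same word:
    std::u32string precomposed = U"caf\u00E9";    // 4 code points, ends in U+00E9
    std::u32string decomposed  = U"cafe\u0301";   // 5 code points: 'e' + U+0301

    // Splitting the decomposed form after 4 code points strands the accent:
    std::u32string left  = decomposed.substr(0, 4);  // "cafe"
    std::u32string right = decomposed.substr(4);     // a lone combining accent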
Again, my original parent's statement was not about encoding or memory savings. The statement was that it was a bad idea to index into an (abstract) unicode string (of unicode code points -- not compositions thereof whatsoever).
I didn't question that, but I hoped to get some inspiration for sane unicode handling (which I'm not sure is humanly possible, except by treating it as a rather black box and making no promises).
Your original parent was all about encodings, and mentioned it was a bad idea to arbitrarily index into utf8 strings (no mention of abstract strings of unicode codepoints).
> languages such as Rust gain efficiency by working with unmodified UTF-8. All you lose is constant-time arbitrary indexing
So it's saying Rust mostly benefits from using utf8, but in doing so, it loses the ability to arbitrarily index a character in a string (in constant time).
If it was abstract strings of unicode codepoints then there would be no problem - except you'd then be paying 32 bits per codepoint.
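To make that trade-off concrete (standard C++ only; the byte counts are for the payload, not the string objects): UTF-32 buys you constant-time indexing by code point at the cost of four bytes per code point, where UTF-8 would often use one or two.

    #include <string>

    // UTF-8: variable width, compact, but O(n) to reach the Nth code point.
    std::string utf8 = "h\xC3\xA9llo";        // "héllo" in 6 bytes ('é' takes two)

    // UTF-32: one element per code point, O(1) indexing, 4 bytes per code point.
    std::u32string utf32 = U"h\u00E9llo";     // 5 elements, 20 bytes of payload
    char32_t third = utf32[2];                // constant time: U'l'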
A comparison with ICU would be interesting, but probably unfair given the size and age of ICU. Personally I'd like to see it compared to utf8rewind (previously discussed on HN [1]).
The unicode portion looks reasonable, but why does it need to include its own flags, file I/O, file management, and environment classes?
Why is it that so many C++ libraries fall into this habit of trying to build one big framework? I'm perfectly happy with gflags -- a unicode library would be nice for my project, but now I won't consider this one.
Because the whole point is to handle anything that needs Unicode support. A library that only manipulated Unicode strings would be incomplete if you still couldn't use Unicode in command line options, file names, etc.
I would recommend breaking them off into separate additional libraries. I don't need unicode for flags, so paying for it at compile and link time seems unwise. Or provide adapter classes that can be used over other frameworks. Just a suggestion.
That's what will happen until there's a de facto / standard library for this stuff. Languages like Python and Go have a wider base in the standard library. C++14 still only gives you platform-dependent 'wide' strings, UTF-8 string literals, and UTF-8 conversion... which makes things awkward.
I just tried that on several browsers; Safari and Chrome are fine, it seems to be only Firefox that has a problem with that. I have no idea whether that's a bug in Firefox or Github, and either way there's nothing I can do about it, sorry.
I guess he should have said that there's nothing reasonable he can do about it. Creating an entirely separate set of HTML pages would require a new publishing flow, add a new step every time the docs update, and generally encourage the docs to fall out of sync with the repo. He could do all of that, or he could do the sensible thing and leave the docs exactly as they are.
That's not fair. It's pretty well known that Github uses JS to hijack page navigation and make it "smoother" for people. Of course that's going to be faulty; I emailed them years ago when they made the switch and asked them to make it an optional behavior, because I hate it. But that has nothing to do with the OP or the OP's link or content. It's like judging a book by the bookstore.