The craziest issue I had was that I couldn't predict what character encoding the text in my database was in. Most users entered Windows-1252, some text blobs were UTF-16, others were assorted European character sets, and some were UTF-8. Some were even Japanese SHIFT_JIS. Don't ask me how any of this happened. In retrospect, I should have dumped all the tables from MySQL, used the excellent Python chardet [1] library to see what I was dealing with, done the conversions, and then re-imported the data. But even then, someone could copy UTF-16 from a Windows document and paste it in, so you have to convert on the way into the database too.
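That cleanup pass would have looked roughly like this (a minimal sketch in Python with chardet; the sample blobs, the windows-1252 fallback, and the to_utf8 helper are all illustrative, not code from my actual system):

    import chardet

    def to_utf8(blob: bytes) -> str:
        # Guess the blob's encoding, then decode to a proper str.
        guess = chardet.detect(blob)  # e.g. {'encoding': 'SHIFT_JIS', 'confidence': 0.73, ...}
        encoding = guess["encoding"] or "windows-1252"  # fallback for empty/ambiguous blobs
        return blob.decode(encoding, errors="replace")

    # The kinds of rows I was seeing, reproduced synthetically:
    samples = [
        "déjà vu".encode("windows-1252"),  # typical user input
        "テスト".encode("shift_jis"),       # the mystery Japanese rows
        "hello".encode("utf-16"),          # pasted from a Windows document (BOM included)
    ]
    for blob in samples:
        print(to_utf8(blob))

Short blobs are exactly where chardet has the least data to work with, which is part of why per-row detection is so unreliable.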
You have set Apache to UTF-8, PHP to UTF-8, MySQL to UTF-8, and the MySQL driver you are using to UTF-8. It's not clear how these settings interact. Are there silent conversions happening, or do you always have to detect the encoding of data coming from the server? HTML pages have a character encoding specifier, but the BOM at the start of the file takes precedence (I think). I got it to work by always detecting the encoding of any text coming from the database and converting it with iconv, but this turned out to be really slow and unreliable. It was truly the biggest mess I faced in my career, worse by an order of magnitude than any other programming problem.
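If I were untangling the silent-conversion question today, the first thing I'd check is what the server thinks the connection charset is, since a client/connection/results mismatch is where MySQL quietly transcodes. A sketch (in Python with PyMySQL rather than PHP, since [1] is Python anyway; host and credentials are placeholders, and the PHP analogue of the charset parameter is mysqli_set_charset):

    import pymysql

    # Pin the connection charset explicitly instead of trusting defaults.
    conn = pymysql.connect(
        host="localhost",
        user="app",
        password="...",
        database="legacy",
        charset="utf8mb4",  # request utf8mb4 on the wire
    )
    try:
        with conn.cursor() as cur:
            # If character_set_client/connection/results don't all report
            # utf8mb4 here, the server is converting somewhere.
            cur.execute("SHOW VARIABLES LIKE 'character_set%'")
            for name, value in cur.fetchall():
                print(name, value)
    finally:
        conn.close()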
Would not attempt again.
[1] https://github.com/chardet/chardet