You show a bunch of English speakers some text that’s cut off, and ask them to predict the next letter. Their success rate at prediction gives you an estimate of the information content of the text. Shannon ran this experiment and got a result of about 1 bit per letter: https://archive.org/details/bstj30-1-50/page/n5/mode/1up
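A crude mechanical version of the same idea can be sketched by swapping the human guesser for a statistical predictor and measuring its cross-entropy in bits per character. This is only an illustration: the training text, the bigram model, and the smoothing constant are all arbitrary choices, and a bigram model is a far worse predictor than the humans in Shannon's experiment.

```python
import math
from collections import Counter

# Toy stand-in for Shannon's guessing game: estimate bits per letter
# with a character bigram model instead of a human predictor.
text = ("the quick brown fox jumps over the lazy dog " * 50).lower()
train, test = text[: len(text) // 2], text[len(text) // 2 :]

# Count bigram and preceding-character frequencies on the training half.
bigrams = Counter(zip(train, train[1:]))
prev_counts = Counter(train[:-1])
alphabet = set(text)

def prob(prev, ch, alpha=1.0):
    # Laplace-smoothed estimate of P(ch | prev)
    return (bigrams[(prev, ch)] + alpha) / (prev_counts[prev] + alpha * len(alphabet))

# Cross-entropy in bits per character on the held-out half:
# the model's analogue of "how well can you predict the next letter?"
total_bits = -sum(math.log2(prob(p, c)) for p, c in zip(test, test[1:]))
print(total_bits / (len(test) - 1))
```

A better predictor (a human, or a modern language model) drives this number down toward the true entropy of the text; the printed value is always an upper bound on it.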
OK. When talking about language I find it's always good to be explicit about what level you're talking about, especially when you're using terms as overloaded as "information". I'm not really sure how to connect this finding to semantics.
If the text can be reproduced with one bit per letter, then the semantic information content is necessarily at most N bits, where N is the length of the text in letters. Presumably it will normally be much less, since there are things like synonyms and equivalent word orderings which don’t change the meaning, but this gives a solid upper bound.
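As a back-of-envelope check on that bound (the essay length below is a made-up figure; only the ~1 bit/letter estimate comes from Shannon):

```python
# Upper bound on semantic content at Shannon's ~1 bit per letter,
# compared with the raw ASCII size of the same text.
n_letters = 5000                            # hypothetical short essay
semantic_upper_bound_bits = n_letters * 1   # at most 1 bit of meaning per letter
raw_ascii_bits = n_letters * 8              # 8 bits per character as stored
print(semantic_upper_bound_bits, raw_ascii_bits)
```

So whatever the text means, it can be encoded in roughly an eighth of its on-disk size, and the semantic content specifically is bounded by that smaller figure.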