To go on a tangent: lots of people even like to think back nostalgically to the time when bankers were at the golf course by 3pm. See https://en.wikipedia.org/wiki/3-6-3_Rule
(I don't agree, but it's a popular enough sentiment.)
You would think, but not in a way that matters. Everyone still compresses their mixes. People try to get around the normalization algorithms with clever hacks. The dynamics still suffer, and bad mixes still clip. So no, I don't think streaming services fixed the loudness wars.
What's the history of the end of the loudness war? Do streaming services renormalize super-compressed music to be quieter than the peaks of higher-dynamic-range music?
Yes. Basically the streaming services started using a decent model of perceived loudness, and normalise tracks to roughly the same perceived level. I seem to remember that Apple (the computer company, not the music company) was involved as well, but I'd need to re-read the history. Their music service and MP3 players were popular back in the day.
So all music producers got out of compressing their music was clipping, not extra loudness on playback.
It hasn't really changed the mastering process much; they're still doing the same old compression. Maybe not to the same extremes, but dynamic range is still usually terrible. They master to a higher LUFS target than the streaming platforms normalize to, because each platform has a different limit and could change it at any time, so it's better to be on the safe side. Also, the majority of music listening doesn't happen on good speakers or in good environments.
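To make that concrete, here's a minimal sketch of platform-style loudness normalization using the pyloudnorm library (an ITU-R BS.1770 meter); the file name and the -14 LUFS target are illustrative assumptions, since every service picks its own target:

    import soundfile as sf
    import pyloudnorm as pyln

    data, rate = sf.read("track.wav")       # hypothetical input file
    meter = pyln.Meter(rate)                # ITU-R BS.1770 loudness meter
    loudness = meter.integrated_loudness(data)

    TARGET_LUFS = -14.0                     # assumed target; varies by platform
    gain_db = TARGET_LUFS - loudness        # gain the platform would apply
    normalized = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
    print(f"track: {loudness:.1f} LUFS, platform gain: {gain_db:+.1f} dB")

A track smashed to -8 LUFS just gets turned down about 6 dB at playback, so the producer keeps the clipping and loses the loudness advantage.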
> Also, the majority of music listening doesn't happen on good speakers or in good environments.
Exactly this. I usually don't want high-dynamic-range audio, because that means it's either too quiet at times, or loud enough to annoy the neighbors at other times, or both.
I hope they end up removing HDR from videos with HDR text.
Recording video in sunlight etc. is OK; that can be tone-mapped down to a sort of "normalized brightness". But HDR text on top is always terrible.
If you're on a mobile device, decoding without hardware assistance might not overwhelm the processor directly, but it might drain your battery unnecessarily fast?
In 90% of the cases. And if you don't know how to spot the other 10%, you're still screwed, because someone else will find it (and they don't even need to be an elite black hat to find it).
But money is a way to motivate the people who created AI to create better AI. Because if it doesn't perform as expected, either people won't use it or they'll turn to a competitor next time they need to do something. And these companies need recurring revenue.
If we're saying the way to ensure competency is to instill fear of not getting money tomorrow as a consequence of failure, then AI companies and humans are on equal footing.
You can run multiple chatbots in parallel. Use different models and different setups.
It's like having multiple people audit your systems. Even if each one only catches 90%, as long as they don't catch exactly the same 90%, the parallel effort helps.
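Back-of-envelope, and assuming the misses are independent (a big assumption, since models trained on similar data tend to miss similar things), the combined miss rate shrinks geometrically with each added auditor:

    # Chance that every auditor misses the same issue, assuming
    # independent 90% catch rates. Real models are correlated, so
    # treat this as an optimistic upper bound on the benefit.
    def combined_miss_rate(catch_rate: float, n_auditors: int) -> float:
        return (1 - catch_rate) ** n_auditors

    for n in (1, 2, 3):
        print(n, combined_miss_rate(0.9, n))  # ~0.1, ~0.01, ~0.001

Three imperfect but uncorrelated auditors would miss only about one issue in a thousand; correlation between them is what eats that benefit.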