
To go on a tangent: lots of people even like to think back nostalgically to the time when bankers were at the golf course by 3pm. See https://en.wikipedia.org/wiki/3-6-3_Rule

(I don't agree, but it's a popular enough sentiment.)


Coca-Cola already has different ingredients in different markets.

> It's not obvious whether there's any automated way to reliably detect the difference between "use of HDR" and "abuse of HDR".

That sounds like a job our new AI overlords could probably handle. (But that might be overkill.)


Interestingly, the loudness war was essentially fixed by the streaming services. They were in a similar situation to the one TikTok is in now.

You would think, but not in a way that matters. Everyone still compresses their mixes. People try to get around normalization algorithms by clever hacks. The dynamics still suffer, and bad mixes still clip. So no, I don’t think streaming services fixed the loudness wars.

What's the history on the end to the loudness war? Do streaming services renormalize super compressed music to be quieter than the peaks of higher dynamic range music?

Yes. Basically the streaming services started using a decent model of perceived loudness, and normalise tracks to roughly the same perceived level. I seem to remember that Apple (the computer company, not the music company) was involved as well, but I need to re-read the history here. Their music service and mp3 players were popular back in the day.

So all music producers got out of compressing their music was clipping, and not extra loudness when played back.
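The normalization scheme described above can be sketched in a few lines. This is a hedged illustration, not any service's actual pipeline: real services measure integrated loudness per ITU-R BS.1770, and the -14 LUFS target used here is just a commonly cited figure, assumed for the example.

```python
def gain_to_target(measured_lufs: float, target_lufs: float = -14.0) -> float:
    """Linear gain needed to move a track's measured loudness to the target.

    Streaming services measure each track's integrated loudness and turn
    it down (or up) so everything plays back at roughly the same perceived
    level. The -14 LUFS target is an illustrative assumption.
    """
    delta_db = target_lufs - measured_lufs
    return 10 ** (delta_db / 20.0)

# A heavily compressed "loud" master at -6 LUFS gets turned down by ~8 dB...
loud_gain = gain_to_target(-6.0)     # ~0.398
# ...while a dynamic master at -16 LUFS gets a small ~2 dB boost.
quiet_gain = gain_to_target(-16.0)   # ~1.259
```

This is why compressing harder stopped paying off: any extra loudness in the master is cancelled by the playback gain, and only the clipping remains.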


It hasn't really changed much in the mastering process: engineers are still doing the same old compression. Maybe not to the same extremes, but dynamic range is still usually terrible. They master to a higher LUFS target than the streaming platforms normalize to, because each platform has a different limit and could change it at any time, so it's safer to err on the loud side. There's also the fact that the majority of music listening doesn't happen on good speakers or in a good environment.

> Also the fact that majority of music listening doesn't happen on good speakers/environment.

Exactly this. I usually do not want high-dynamic-range audio, because that means it's either too quiet at some times, or loud enough to annoy the neighbors at others, or both.


I hope they end up removing HDR from videos with HDR text. Recording video in sunlight etc is OK, it can be sort of "normalized brightness" or something. But HDR text on top is terrible always.

If you are on a mobile device, decoding without hardware assistance might not overwhelm the processors directly, but it might drain your battery unnecessarily fast?

Whatever Google does internally would be a much stricter standard, but I'm not sure they've written it up for outsiders to use, alas.

Sometimes scandals affect these things. But it's hard to predict.

> it's almost as if there's more stuff we do than just write code..

Yes, but adding these common sense considerations is actually something LLMs can already do reasonably well.


In 90% of the cases. And if you don't know how to spot the other 10%, you are still screwed, because someone else will find it (and they don't even need to be an elite black hat to find it).

What’s to say a human would catch this 10% either?

The salary you pay them, typically

Salaries make humans infallible?

No, but it makes them motivated to be thorough. There is no way to motivate a chatbot (to do better or to any end).

But money is a way to motivate the people who created AI to create better AI. Because if it doesn't perform as expected, either people won't use it or they'll turn to a competitor next time they need to do something. And these companies need recurring revenue.

If we're saying the way to ensure competency is to instill fear of not getting money tomorrow as a consequence of failure, then AI companies and humans are on equal footing.


You can run multiple chatbots in parallel. Use different models and different setups.

It's like having multiple people audit your systems. Even if everyone only catches 90%, as long as they don't catch exactly the same 90%, this parallel effort helps.
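The arithmetic behind this claim is worth making explicit. Under the (optimistic, and explicitly assumed) model that each auditor misses bugs independently, the residual miss rate shrinks geometrically with the number of auditors:

```python
def residual_miss_rate(catch_rate: float, n_auditors: int) -> float:
    """Chance a given bug slips past all n auditors.

    Assumes each auditor misses independently -- exactly the caveat in the
    comment: this only works if they don't all miss the same 10%.
    """
    return (1 - catch_rate) ** n_auditors

residual_miss_rate(0.9, 1)  # 0.10  -> 10% of bugs slip through one auditor
residual_miss_rate(0.9, 3)  # 0.001 -> 0.1% slip through three independent ones
```

In practice different models and setups share training data and blind spots, so their misses are correlated and the real improvement is smaller, but the direction of the effect holds.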


Humans are pretty good at edge cases.

If you explicitly request it, which means you need to know about it.

OpenAI can put that in the system prompt for their CTO-as-a-service once, and then forget about it.

Or you need to guess that it exists, or you need to scan for places it exists.

Clearly not

Perhaps some tariff shenanigans?

When you adopt the probability distribution point of view, this is often called 'burn-in'. See eg the usage in https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_al...
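A minimal sketch of what burn-in means in that setting: a random-walk Metropolis sampler started far from the target distribution's mode, where the first chunk of the chain is discarded because it reflects the arbitrary starting point rather than the stationary distribution. The target (a standard normal) and all parameter values here are illustrative assumptions.

```python
import math
import random

def metropolis(log_density, x0, steps, burn_in, step_size=1.0):
    """Random-walk Metropolis sampler.

    The first `burn_in` iterations are discarded so the chain has time to
    forget its (possibly atypical) starting point before we keep samples.
    """
    x, samples = x0, []
    log_p = log_density(x)
    for i in range(steps):
        proposal = x + random.gauss(0.0, step_size)
        log_p_prop = log_density(proposal)
        # Accept with probability min(1, p(proposal)/p(x)).
        if math.log(random.random()) < log_p_prop - log_p:
            x, log_p = proposal, log_p_prop
        if i >= burn_in:
            samples.append(x)
    return samples

# Target: standard normal, log-density -x^2/2 (up to a constant).
# Start way out at x0=10; burn-in drops the samples taken while the
# chain is still wandering back toward the mode at 0.
random.seed(0)
draws = metropolis(lambda x: -0.5 * x * x, x0=10.0, steps=5000, burn_in=1000)
```

Without the burn-in, the early samples near x=10 would bias any averages computed from the chain.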
