Hacker News | masspro's comments

Also they are horrifically broken if you use an OS-level magnifier (Ctrl+scroll, etc.). I don't know if this is the application devs' fault or not; I haven't investigated OS mouse-warping APIs. Warping the mouse back to the center of the knob goes into a feedback loop with the magnifier and spams crazy mouse events, such that every knob immediately jumps to min or max. Really shameful accessibility fail that no one cares about.

I read that whole (single) paragraph as “I made really, really, really sure I didn’t violate any NDAs by doing these things to confirm everything had a public source”


This is literally the second paragraph in the article. There is no need for interpretation here.

Unless the link of the article has changed since your comment?


macOS does wash out SDR content in HDR mode, specifically on non-Apple monitors. An HDR video playing in windowed mode will look fine, but all the UI around it has black and white levels very close to grey.

Edit: to be clear, macOS itself (Cocoa elements) is all SDR content and thus washed out.


Define "washed out"?

The white and black levels of the UX are supposed to stay in SDR. That's a feature, not a bug.

If you mean the interface isn't bright enough, that's intended behavior.

If the black point is somehow raised, then that's bizarre and definitely unintended behavior. And I honestly can't even imagine what could be causing that to happen. It does seem like it would have to be a serious macOS bug.

You should post a photo of your monitor, comparing a black #000 image in Preview with a pitch-black frame from a video. People edit HDR video on Macs, and I've never heard of this happening before.


That's intended behavior for monitors limited in peak brightness


I don't think so. Windows 11 has an HDR calibration utility that lets you adjust brightness and HDR, and it keeps blacks perfectly black (especially on my OLED). When I enable HDR on macOS, whatever settings I try, including adjusting brightness and contrast on the monitor itself, the blacks look completely washed out and grey. HDR DOES seem to work correctly on macOS, but only if you use Mac displays.


That’s the statement I found last time I went down this rabbit hole, that they don’t have physical brightness info for third-party displays so it just can’t be done any better. But I don’t understand how this can lead to making the black point terrible. Black should be the one color every emissive colorspace agrees on.


Actually, it's intended behavior in general. Even on their own displays the UI looks grey while HDR content is playing.

Which, personally, I find to be extremely ugly and gross and I do not understand why they thought this was a good idea.


Oh, that explains why it looked so odd when I enabled HDR on my Studio.


Huh, so that’s why HDR looks like shit on my Mac Studio.


* “which of the 3 big data structures in this part of the program/function/etc is this int/string key an index into?”

* some arithmetic/geometry problems, for example 2D layout, where several different quantities are each “an integer number of pixels” but mean wildly different things

In either case it can help pick apart dense code or help stay on track while writing it. It can instead become anti-helpful by causing distraction or increasing the friction to make changes.
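The first bullet can be sketched in Python with `typing.NewType` (all names here are invented for illustration, and a static checker like mypy is assumed to do the actual enforcement):

```python
from typing import NewType

# Hypothetical example: two int-based index types for two different
# data structures. A checker like mypy treats them as incompatible;
# at runtime they are ordinary ints.
UserId = NewType("UserId", int)
OrderIdx = NewType("OrderIdx", int)

users: dict[UserId, str] = {UserId(1): "alice"}
orders: list[str] = ["book", "pen"]

def lookup_user(uid: UserId) -> str:
    return users[uid]

def lookup_order(i: OrderIdx) -> str:
    return orders[i]

print(lookup_user(UserId(1)))   # alice
# lookup_user(OrderIdx(1))      # mypy error: OrderIdx is not UserId
```

At runtime both wrappers are plain ints with zero overhead; the mix-up is caught only by the type checker, which is exactly the "which structure is this key an index into?" question above.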


I don’t think I can take that claim by itself as necessarily implying the cause is hardware. Consumer OSes were on the verge of getting protected memory at that time, as an example of where things were, so if I imagine “take an old application and try to run it” then I am immediately imagining software problems, and software bit rot is a well-known thing. If the claim is “try to run Windows 95 on bare metal”, then…well actually I installed win98 on a new PC about 10 years ago and it worked. When I try to imagine hardware changes since then that a kernel would have to worry about, I’m mostly coming up with PCI Express and some brave OEMs finally removing BIOS compatibility and leaving only UEFI. I’m not counting lack of drivers for modern hardware as “hardware still changes” because that feels like a natural consequence of having multiple vendors in the market, but maybe I could be convinced that is a fundamental change in and of itself…however even then, that state of things was extremely normalized by the 2000s.


Drivers make up a tiny portion of the software on our computers by any measure (memory or compute time), and they're far longer-lived than your average GUI app.


Thread is talking about kids knowing how to request emergency services with a nearby phone in case something happens to their parent(s). Nothing to do with giving kids their own phones.


A nearby phone implies a nearby phone user, who would presumably understand how to place an emergency call, especially if asked by a frantic five-year-old.


If it’s only the kid and the nearby phone user, and the nearby phone user is having an emergency (one that also prevents them from calling themselves), then the kid is able to do it.


Do you have an M1? I’m really hoping this is a USB-chipset-specific problem that got fixed. That hope is supported by…one random Reddit comment.


Can we collectively retcon an unencumbered replacement name for such things? Odoromop?


Sounds like a naming idea for a cleaning implement that came up in a Mr. Clean marketing meeting and was immediately dismissed.


I think it's probably a mop that spreads odors instead of getting rid of them.


Actually "mop" is just a metaphor cooked up by the marketing department. Odoromop uses advanced catalytic technology to convert nasty odors into pleasant ones.


I don't think I can trust TTS for language learning. I could be internalizing wrong pronunciation and wouldn't know it. One time I tried Duolingo for Japanese, already knowing a bit. To their credit, I assumed it was recorded clips, until it read 'oyogu' as something like 'oyNHYAOgu', as if it had concatenated two syllable clips that don't go together. If I didn't already know better, would I be trying to study and replicate that nonsense? So I don't know if I could trust TTS audio for language study regardless of what kind of tech it is. Sure, mistakes can be unlearned over time spent immersing, but with much more effort than just not internalizing them in the first place.

Also, Japanese specifically has this meme where it literally is a pitch-accent language but many people say it's not, and teaching resources ignore it. E.g. 'ima' means either 'now' or 'living room' depending on whether syllable #2 is higher or lower. This clearly only applies to some languages, but it's another dimension where it's even harder for a learner to know there's a mistake. I have to imagine even other Latin-script languages have reading quirks where this could happen to me.


Also a Japanese learner here, albeit a beginner. As I understand it, pitch accent is about stress: languages can stress a syllable with length, volume, pitch, etc. Spanish uses vowel length, Icelandic uses volume, English uses a combination of length and volume, and Swedish (just like Japanese) uses pitch. Just like in English, if you put the wrong stress on a word it can range from sounding foreign to being incomprehensible. (Aside: I always remember trying to say the name of the band Duran Duran to an English speaker while putting the stress on the first syllable, as is normal in Icelandic; my listener had no idea what I was saying, and it took probably 30 attempts before I was corrected with the right stress.)

I think Japanese is somewhat special, though, in its large number of homophones (words that sound the same), so speaking with the correct pitch becomes somewhat more important.


Somewhat more important, yes. But as someone with decent Japanese who knows about pitch accent, can barely hear the difference in real time, and never actively learned it except for a few well-known examples like bridge/chopsticks, I don't think it matters all that much. Yes, you'll sound foreign. But you'll be understood nevertheless, in the vast majority of cases.


Speaking of bridge/chopsticks, I created a video to try to spot the difference myself a couple of months ago:

https://imgur.com/KJXanqc


Here's the problem: pitch accent is easy to hear in isolation and/or in comparison. Under real life conditions, in the middle of a sentence, it's a completely different experience. But then you're saved by context. Because candy is most likely not falling from the sky. Homophones that are still ambiguous in context are possible, but a rare occurrence in my experience.


Minimax's new model is quite good. We use their voices for some of our Japanese tutors. The pitch accent is almost perfect.

There are occasional incorrect readings or Chinese readings, but you can tell when that happens because the furigana are different


If you have the correct furigana, you could even detect when the TTS model picked the wrong reading and regenerate.
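A minimal sketch of that check in Python, assuming the TTS engine reports back the reading it used (nothing here is a real TTS API; only the comparison step is shown):

```python
# Hypothetical sketch: flag TTS output for regeneration when the reading
# the engine used does not match the expected furigana.

def katakana_to_hiragana(text: str) -> str:
    """Normalize katakana to hiragana so readings compare as equal."""
    return "".join(
        chr(ord(ch) - 0x60) if "\u30a1" <= ch <= "\u30f6" else ch
        for ch in text
    )

def reading_matches(expected_furigana: str, tts_reading: str) -> bool:
    """True if the engine's reported reading agrees with the furigana."""
    return katakana_to_hiragana(expected_furigana) == katakana_to_hiragana(tts_reading)

# 今日 can be read きょう or こんにち; only one matches the furigana.
print(reading_matches("きょう", "キョウ"))    # True
print(reading_matches("きょう", "こんにち"))  # False
```

Normalizing katakana to hiragana matters because engines and annotators differ on which kana script they report readings in.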

But how do you know the furigana are correct? Unless you start with fully human-annotated text, you need some automated procedure to add furigana, which pushes the problem from "TTS AI picked the wrong reading" to "furigana AI picked the wrong reading."


Yes, it pushes the problem, but it's a much easier problem, and models like Gemini 2.5 Flash do very well at it.


Yeah, Japanese TTS is a lot harder than it looks. I’m also building a language-learning application and constantly ran into incorrect readings. ElevenLabs, ElevenLabs v3, OpenAI, play.ht, Azure, Google, Polly — I’ve tried them all. They are all really bad (more than 1/3 of the expressions had an error somewhere).

It _is_ fixable, though. It took me about a week, but I have yet to find a mistaken reading now. This also seems to be specific to Japanese: most tonal languages seem to get the correct tones (I’m not qualified to comment on how natural the tones sound, but I have yet to find a mismatch like in Japanese).


Yes. AI transcription is great, AI translation is OK (depending on language pair), but TTS is still pretty awful for most languages.


In this analogy, they are only retesting things reported by a previous red team who did that target.


Not really. More like testing a list of known exploits. That can also be greenfield.


Game exploits are extremely game-specific


I was talking about red-teaming a system. Red-teaming a system is comparable to a speedrunner attempting a run using known exploits. A security researcher is a speedrunner attempting to find new exploits.

