Hacker News

>There's many stories from workers at these third parties about how much intimate detail they've heard when listening to these clips.

Assuming those internet stories are even true - can anyone show me actual harm occurring to anyone based on Apple's use of Siri training? I've heard some Alexa stories, but frankly, Apple seems to do a really good job of protecting that information, at least so far, at least as far as the public knows.



A lack of reported harm does not imply a lack of the ability to harm.

"Siri, order medicine X and deliver to address Y" is a simple example of how a simple command - whether valid or not - can expose someone's medical history, and while an ethical reviewer (probably 99%+ of reviewers) would do nothing with it, an unethical reviewer could.


> "Siri, order medicine X and deliver to address Y" is a simple example of how a simple command…

FWIW, Siri isn't capable of anything resembling this. For fun, ask "Siri, how many days are left in this year?"


IME, using "virtual assistants" is as much about training you as it is about training them.

If you ask the question as "Hey Siri: how many days until the end of the year?", you get a valid response. I intuitively guessed that the word "until" was a trigger that would make Siri understand what I meant.

In other words - it's not that Siri isn't _capable_ of doing things like ordering medication, but that the syntax for doing so is still obscure and specific enough that most people aren't using it for purposes like that.

I've set up a number of shortcuts that I regularly use. For instance: "Tell my wife I made it" will send her a text message letting her know that I've arrived wherever I was going. "Tell _them_ I made it" will send the same message to my wife, daughter, and mom.

Until the NLP algorithms get better at inferring intent, a combination of user training and custom shortcuts will be needed to do things like this.
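The difference between these custom shortcuts and real intent inference can be sketched in a toy example (this is just an illustration of exact-phrase matching, not how Siri or the Shortcuts app is actually implemented; the phrases and recipient names are taken from the comment above):

```python
# Toy sketch of exact-phrase shortcut matching. A custom shortcut fires
# only on its registered phrase, so the user has to learn the trigger
# wording; there is no NLP fallback that infers intent from a paraphrase.
SHORTCUTS = {
    "tell my wife i made it": ["wife"],
    "tell them i made it": ["wife", "daughter", "mom"],
}

def recipients(phrase: str) -> list[str]:
    """Return the recipients for a registered phrase, or [] if unrecognized."""
    return SHORTCUTS.get(phrase.strip().lower(), [])

print(recipients("Tell them I made it"))         # -> ['wife', 'daughter', 'mom']
print(recipients("Let everyone know I arrived")) # -> [] (unregistered paraphrase)
```

Note that the second phrase means the same thing to a human but matches nothing, which is exactly the "user training" problem: the person adapts to the system's fixed syntax, not the other way around.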


No, Siri isn't capable of doing things like ordering medication, because that requires more than just improving NLP capabilities.

It requires there being some specific facility for Siri to place orders. It requires some mechanism for conveying payment. It requires being able to determine where to order from in any given case (because Apple, unlike Amazon, does not attempt to be the one-stop-shop for everything under the sun).

This really seems to be just perpetuating the popular, but dangerous, fallacy that "AI" is all basically the same thing, and if you keep feeding enough data to your NLP algorithms, at some point they'll become able to do things that affect the real world purely by some kind of handwaved "ability to connect to the network".


Which I acknowledged in the following four words.

Not all commands are valid, but the invalid commands are reviewed as much as (perhaps even more than) the valid ones. "Was it incorrectly marked invalid?" is a question I, as a developer, would want answered.


>A lack of reported harm does not imply a lack of the ability to harm.

The assertion wasn't that there is a possibility that Siri could cause privacy harm. The claim was that it DOES cause privacy harm. So yes, a lack of reported harm is actually evidence here.



