Alexa listens to more than the wake word. I've had it recommend products on Amazon while my wife and I were having a conversation at the dinner table. It also recommended calling 911 while I was talking about fire.
It's a spying device people willingly put in their homes for the convenience of a timer you can activate with your voice.
Edit: It could be that it activated after mishearing "Alexa". I don't have hard evidence of mass spying. I think mass spying wouldn't be hard to prove by intercepting the data with something like Wireshark; even if it's encrypted, you could tell by the data size. The recommending-products-while-chatting-with-my-wife anecdote happened multiple times though, which convinced me to relocate the Alexa device to the garbage. It seems unlikely to me that they're not mining the voice data to generate ads, or for law enforcement.
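A rough sketch of the data-size idea: export per-second upload byte counts from a Wireshark capture, then flag seconds whose volume jumps far above the idle baseline. The numbers below are made up for illustration; a median/MAD baseline is used so a couple of large bursts don't inflate the threshold.

```python
from statistics import median

def flag_upload_bursts(bytes_per_second, threshold=10.0):
    """Return indices of seconds whose upload volume is far above the
    robust (median/MAD) baseline -- candidate audio-upload events."""
    med = median(bytes_per_second)
    deviations = [abs(b - med) for b in bytes_per_second]
    mad = median(deviations) or 1.0  # guard against all-identical traffic
    return [i for i, b in enumerate(bytes_per_second)
            if (b - med) / mad > threshold]

# Hypothetical per-second byte counts: mostly idle keepalive chatter,
# with a burst at t=5..6 that could be an audio upload.
traffic = [200, 180, 220, 190, 210, 48000, 52000, 200, 190, 205]
print(flag_upload_bursts(traffic))  # → [5, 6]
```

Even with TLS you can't read the payload, but burst timing and size like this is exactly the side channel the comment is pointing at.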
Let me float an idea based on this study: it doesn't need to listen for more than the wake word (and some variations of same) in order to activate pretty regularly. The study indicates that just watching TV in a room with an Echo device will cause it to wake 1-4 times per hour, with half of all wakes resulting in at least 4 seconds of recording and virtually all recordings being sent to the cloud for processing. Even absent any "secret wake words", the device activates regularly enough that it will occasionally react to things you say in conversation as though it's secretly listening.
Also, just thinking like someone who is simultaneously evil and competent, if I were building a device like this that listened for secret keywords I wouldn't have it announce the fact that it heard one.
I have Alexa and a few Siri devices next to me and I just said a bunch of phrases indicating fire, choking, that we should call 911 etc and nothing triggered. So yeah - this is just internet bullshit until proven otherwise.
it both is and isn't bullshit. there's no evidence that there's a list of secret keywords Lord Bezos is listening for, but there's plenty of evidence that these devices activate unintentionally all the time, and that those unintentional activations lead to you being recorded and that recording being sent off into the cloud
I don't think it's any secret that the device can unintentionally activate in certain circumstances (and whether or not that's due to it thinking it heard its name is another debate)... but my problem with OP's statement is that they seem to frame it as if it's intentionally and maliciously listening more often than it should, and I just don't see any evidence to support that claim.
What I'm saying is that intentionality doesn't have to be relevant to this discussion. All you need to do in order to be maliciously spying on someone, given that you have this bug in the first place, is to
1) not fix the bug
2) quietly remove the option to opt out of remote processing
and then all of a sudden you've got a situation where of course no one is actively spying because We Would Never(tm)(c)(r), but there's a really reliable pipeline by which recordings of me talking to my family in my home end up on a remote server somewhere, where they're used to train AI and maybe even automatically scanned for certain keywords that might indicate that I'm some sort of troublemaker and need to be flagged for additional "attention". It's a plausibly-deniable panopticon. In fact, having it activate by purposefully unremediated mistake rather than by keyword makes it a better spy. You can discover a list of keywords and avoid them, but ambient noise causing the device to randomly sample and exfiltrate recordings means you can never know when you're being recorded, and thus have no choice but to always act like you're being recorded, just in case.
I'm not sure whether it's listening to more than the wake word, but I've seen Siri wake up quite often when I very definitely haven't said anything approaching "Siri", and see it occasionally on other people's devices too. I remember listening to a BBC podcast in the car once, and there was one piece of audio from it that would reliably activate Siri. I was a bit nonplussed by it and rewound it four or five times to check, and it activated every time. I think accidental activation is a much more likely explanation, which is still dreadful from a privacy perspective.
Corpos can't resist spying when it's at their fingertips; it's too irresistible to them, they just can't help it. That's why we should take our privacy back and offer no benefit of the doubt.
That's always been my perspective. The incentive for busting Amazon on this is so high, if it were provable then someone would have done it and the press would love to share that.
Everyone carries a little snitch on them. Even if you opt out of using a mobile device, the chances of the person you are talking to having it on them is effectively 100%. And I am nearly certain that one way or another 'they' have voice biometrics on all of us. Thank god we live in a country with strong checks & balances...
Every Starlink station (and probably every Tesla) scoops up every MAC address it ever sees. This one is unique in that it puts all that data into a single actor's hands.
Of course Starlink stations scoop up MAC addresses. In this respect they are equivalent to, and on par with, every other wifi router.
A Tesla vehicle could also scoop up visible mac addresses, and is equally as capable of doing so as every other wifi-enabled device with closed source firmware.
Privacy-wise, Tesla is shitty but not extraordinarily shitty. Their surveillance capabilities do not differentiate them from among the multitudes. Let's assume maximum maliciousness. Assuming you don't own one, could Tesla track you particularly better than, say, Square? Or Google? Or Palantir? Or Comcast? Or any cell phone company? Or whoever it is that owns the cameras at each traffic light intersection?
The person in charge is irrelevant. If you think that the other companies I mentioned aren't in the business of selling surveillance on you as well, your head is in the sand. It's the primary business model of several.
if people really think Musk is a Nazi, this would be like literally putting mindless order-following gestapo right in your house.
Surveillance? Shit they could just kill you the moment you were discovered to be some undesirable. We're talking about a humanoid-ish robot, after all. If it can help you with the laundry it can bash your head in, too.
If there’s one thing about AI, it’s that you cannot avoid it. The idea that individuals can just “opt out” of plastic, sugar, artificial ingredients, factory farms, social media and all the other negative externalities the corporations push on us is a fantasy that governments and industry push on individuals to keep us distracted: https://magarshak.com/blog/?p=362
On HN, people hate on Web3 because of its limited upside. But really look at the downside dynamics of a technology! With Web3, you can only ever lose what you voluntarily put in (at great effort and slippage LOL). So that caps the downside. Millions of people who never got a crypto wallet and never sent their money to some shady exchange never lost a penny.
Now compare that to AI. No matter what you do, no matter how far you try to avoid it, millions will lose their jobs, get denied loans, be surveilled, possibly arrested for precrime, micromanaged and controlled, practically enslaved in order to survive and reproduce, etc.
It won’t even work to retreat into gated communities or grandfathered human-verified accounts because defectors will run bots in their accounts and their neuralink cyborg hookups and meta glasses, to gain an advantage and approach at least some of the advantages of the bots. Not to mention of course that the economic power and efficiency of botless communities will be laughably uncompetitive.
You won’t even be able to move away anywhere to escape it. You can see an early preview of that with the story of Ted Kaczynski — the Unabomber (google it). While the guy was clearly a disturbed maniac who sent explosives to people, as a mathematician following things to their logical conclusion he did sort of predict what would happen to everyone when technology reaches a certain point. AI just makes it so that you can’t escape.
If HN cared about AI’s unlimited downside like it cared about Web3’s lack of large upside, the sentiment here would be very different. But the time has not come yet. Set an alarm to check back on this comment in exactly 7 years.
Never mind "proving", there are plenty of low-effort steps they could take to foster trust (as outlined elsewhere in this thread) that they choose not to do. They choose not to meet even the bare minimum.
We are in a thread that is literally about how Amazon plans to disable the option to not send voice recordings. I get playing devil's advocate, but at some point logic has to prevail, eh?
Should not the burden of proof be on Amazon to prove it's not always recording?
In 2025, it feels like we're 5 to 10 years past the time a consumer should default to assuming their cloud-connected device isn't extracting the maximum possible revenue from them.
Assume all companies are amoral, and you'll never be disappointed.
They have a lot of ways they could’ve built trust without a full negative burden: which of them, if any, are they doing?
Open sourcing of their watch word and recording features specifically, so people can self-verify it does what it says and that it’s not doing sketchy things?
Hardware lights such that any record functionality past the watch words is visible and verifiable by the end user and it can’t record when not lit?
Local streaming and auditable downloads of the last N hours of input as heard by amazon after watchwords, so you can check for misrecordings and also compare “intended usage” times to observed times, such that you can see that you and Amazon get the same stuff?
If you really wanna go all out, putting in their TOS protections like explicit no-train permissions on passing utterances without intent, or adding an SLA into their subscription to refund subscription and legal costs and to provide explicit legal cause of action, if they were recording when they said they weren’t?
If you explicitly want to promote trust, there are actually a ton of ways to do it, one of them isn’t “remove even more of your existing privacy guardrails”.
On the first two, if you already think they're blatantly lying about functionality, why would you think the software in the device is the same as the source you got, or that it can't record with the light off?
It's not at all unreasonable for consumers to demand vendors--especially those with as much market power as Amazon--to take steps to foster trust that, though they may not rise to the level of "proving a negative," still go some ways towards assuring us they are not violating our privacy.
The fact that they don't take any of those steps (and the fact that we are in a thread about they're disabling this privacy feature in the first place!) goes to show that consumers have every right to be skeptical and indeed to refuse to bring these products into our lives.
I think it's inane to complain that consumers are placing an impossibly high standard on Amazon when Amazon themselves choose not to meet even the lowest of standards.
It's their product and their code; there is no reasonable way I can be held responsible for knowing what it does, as opposed to Amazon, who is in complete control of the device and system. I can't even believe I have to explain this.
At the very least, they can provide a full log of all interactions and recordings in an audit log. Have that verified by researchers conducting their own analysis of dial-home activity, and I think we'll be significantly closer to a good answer here about generalized mass capture of customer-sensitive data. This still wouldn't be enough if you're worried about targeted spying, because we can't know when bad actors flip your device into aggressive spy mode unless you're auditing the device while targeted.
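If such an audit log existed, the check itself would be trivial: diff the vendor's claimed recording times against the activations you observed locally (e.g. when the light came on). A minimal sketch, where the log format and timestamps are entirely hypothetical:

```python
def unexplained_recordings(audit_log, observed_wakes, tolerance_s=5):
    """Return audit-log recording timestamps (seconds) that don't match
    any wake the user observed locally, within a small tolerance."""
    return [t for t in audit_log
            if not any(abs(t - w) <= tolerance_s for w in observed_wakes)]

# Hypothetical data: the vendor's log claims recordings at these times,
# but the user only saw the light come on at t=100 and t=300.
audit = [100, 205, 300]
observed = [100, 300]
print(unexplained_recordings(audit, observed))  # → [205]
```

Any nonempty result would be exactly the kind of discrepancy researchers could then dig into.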
Okay..but then why should I trust that Alexa isn't listening? That's clearly a pretty valuable thing for Amazon to provide to their customers. Is it impossible? If it is..then yeah people should just light these things on fire or have a hard switch on them at least.
Only in circles that don’t understand technology and, frankly, logic. To prove that it’s happening, _one_ hacker needs to show that there’s constant flash-storage / network traffic while the mic is enabled that also correlates with the entropy in the audio.
I have personally verified that my device most certainly does not send constant internet traffic... however I think we can't rule out the possibility that it might buffer the data and send it later.
We can, in fact, rule it out by dissecting the device and monitoring chip traffic. That’s my whole point - people who understand technology know that it’s nearly impossible for Amazon devices to routinely spy on conversations in people’s homes without detection.
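For what it's worth, the correlation test mentioned here is simple to sketch: estimate the entropy of each audio window and Pearson-correlate it against the bytes the device emitted in the same window. Everything below (window data, traffic counts) is hypothetical stand-in data, not a real capture:

```python
import math

def shannon_entropy(samples, bins=16):
    """Entropy (bits) of a window of audio samples in [-1, 1]."""
    counts = [0] * bins
    for s in samples:
        counts[min(int((s + 1) / 2 * bins), bins - 1)] += 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical audio windows: dead silence vs. speech-like signal.
silence = [0.0] * 100
speech = [math.sin(0.7 * i) for i in range(100)]
ent = [shannon_entropy(w) for w in (silence, speech, silence, speech)]

# Hypothetical bytes sent by the device per window.
traffic = [210, 44000, 195, 43000]
print(round(pearson(ent, traffic), 2))
```

A correlation near 1 would mean the device's output tracks what the room sounds like — strong evidence of audio-dependent uploads; a correlation near 0 under varied audio is evidence against routine spying.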