
Alexa listens to more than the wake word. I've had it recommend products on Amazon while my wife and I were having a conversation at the dinner table. It also recommended calling 911 while I was talking about fire.

It's a spying device people willingly put in their homes for the convenience of a timer you can activate with your voice.

Edit: It could be that it activated after mishearing "Alexa". I don't have hard evidence of mass spying. I think this wouldn't be hard to prove by intercepting the data with something like Wireshark; even if it's encrypted, you could tell by the data size. The product-recommendation anecdote while chatting with my wife happened multiple times though, which convinced me to relocate the Alexa device to the garbage. It seems unlikely to me that they wouldn't mine the voice data to generate ads, or hand it over to law enforcement.
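A minimal sketch of the data-size check described above: export a capture from Wireshark as a classic pcap file, then bucket captured bytes per second. Sustained spikes well above the idle baseline would suggest audio uploads, even with TLS hiding the content. The function name and approach here are illustrative, not an actual test anyone has published; it handles only the classic microsecond pcap format, not pcapng or nanosecond variants.

```python
import struct
from collections import defaultdict

def bytes_per_second(pcap_path):
    """Sum captured bytes into one-second buckets from a classic pcap file.

    Even with encrypted payloads, traffic *volume* is visible on the
    wire, so spikes above the idle baseline stand out.
    """
    buckets = defaultdict(int)
    with open(pcap_path, "rb") as f:
        global_hdr = f.read(24)  # pcap global header is 24 bytes
        if len(global_hdr) < 24:
            raise ValueError("not a pcap file")
        magic = struct.unpack("<I", global_hdr[:4])[0]
        # 0xa1b2c3d4 little-endian capture; otherwise assume big-endian
        endian = "<" if magic == 0xA1B2C3D4 else ">"
        while True:
            hdr = f.read(16)  # per-packet header: ts_sec, ts_usec, incl_len, orig_len
            if len(hdr) < 16:
                break
            ts_sec, _ts_usec, incl_len, _orig_len = struct.unpack(endian + "IIII", hdr)
            f.read(incl_len)  # skip the packet payload itself
            buckets[ts_sec] += incl_len
    return dict(buckets)
```

To test the "buffer and send later" theory further down this thread, the same buckets can be aggregated over long windows and compared against times you know you weren't speaking.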



https://moniotrlab.khoury.northeastern.edu/publications/smar...

Let me float an idea based on this study: it doesn't need to listen for more than the wake word (and some variations of it) in order to activate pretty regularly. The study indicates that just watching TV in a room with an Echo device will cause it to wake 1-4 times per hour, with half of all wakes resulting in at least 4 seconds of recording, and virtually all recordings being sent to the cloud for processing. Even absent any "secret wake words", the device activates regularly enough that it will occasionally react to things you say in conversation as though it's secretly listening.

Also, just thinking like someone who is simultaneously evil and competent, if I were building a device like this that listened for secret keywords I wouldn't have it announce the fact that it heard one.


> wouldn't have it announce the fact that it heard one

Alexa has an option to play a sound for "Start of Request", i.e. wake word heard.

There is a ring light on the device, blue when it has heard a wake word, red when the hardware mute button is pressed.


I have Alexa and a few Siri devices next to me and I just said a bunch of phrases indicating fire, choking, that we should call 911 etc and nothing triggered. So yeah - this is just internet bullshit until proven otherwise.


It both is and isn't internet bullshit. There's no evidence that there's a list of secret keywords Lord Bezos is listening for, but there's plenty of evidence that these devices activate unintentionally all the time, and that those unintentional activations lead to you being recorded and that recording being sent off into the cloud.

https://moniotrlab.khoury.northeastern.edu/publications/smar...


I don't think it's any secret that the device can unintentionally activate in certain circumstances (and whether or not that's due to it thinking it heard its name is another debate)... but my problem with OP's statement is that they seem to frame it as if it's intentionally and maliciously listening more often than it should, and I just don't see any evidence to support that claim.


What I'm saying is that intentionality doesn't have to be relevant to this discussion. All you need to do in order to be maliciously spying on someone, given that you have this bug in the first place, is to

1) not fix the bug

2) quietly remove the option to opt out of remote processing

and then all of a sudden you've got a situation where of course no one is actively spying, because We Would Never(tm)(c)(r), but there's a really reliable pipeline by which recordings of me talking to my family in my home end up on a remote server somewhere, where they're used to train AI and maybe even automatically scanned for certain keywords that might indicate I'm some sort of troublemaker who needs to be flagged for additional "attention". It's a plausibly-deniable panopticon. In fact, having it activate by purposefully unremediated mistake rather than by keyword makes it a better spy. You can discover a list of keywords and avoid them, but ambient noise causing the device to randomly sample and exfiltrate recordings means you can never know when you're being recorded, and thus have no choice but to always act like you're being recorded, just in case.


Alexa's option to "recognize sounds" (e.g. baby crying, fire alarm, appliance beeping) might increase the risk of false positives.


As always, there's a hilariously apropos "imagine if technology were this bad" that turns out to just be reality.

Amazon: We're listening.

https://m.youtube.com/watch?v=JCQhygMdFr4&t=57s



I'm not sure whether it's listening for more than the wake word, but I've seen Siri wake up quite often when I very definitely haven't said anything approaching "Siri", and I see it occasionally on other people's devices too. I remember listening to a BBC podcast in the car once, and there was one piece of audio from it that would reliably activate Siri. I was a bit nonplussed by it and rewound it four or five times to check; it triggered every time. I think accidental activation is a much more likely explanation, which is still dreadful from a privacy perspective.


Corpos can't resist spying when it's at their fingertips; it's too irresistible, they just can't help it. That's why we should take our privacy back and offer no benefit of the doubt.


IoT WiFi baseband firmware is also an attack surface.


Yeah, what’s the best thing that can happen? It's just an Alexa device. And the worst? It leaks your data and spies on you. No thanks.


And if that’s really all you’re using it for there’s no reason it needs Internet at all. All of the intelligence can be run on-device.
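To make the "on-device" point concrete, here's a toy sketch of the timer use case. It assumes a hypothetical offline speech-to-text engine has already produced a transcript locally; everything below it is plain parsing, with no network involved. The function name and the tiny number-word table are illustrative.

```python
import re

# Hypothetical: an on-device STT model yields `transcript`; nothing
# below needs the internet.
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5, "ten": 10}

def parse_timer_command(transcript):
    """Parse 'set a timer for five minutes' style commands locally.

    Returns the timer duration in seconds, or None if the transcript
    isn't a timer command.
    """
    m = re.search(r"timer for (\w+) (second|minute|hour)s?", transcript.lower())
    if not m:
        return None
    qty = NUMBER_WORDS.get(m.group(1))
    if qty is None and m.group(1).isdigit():
        qty = int(m.group(1))
    if qty is None:
        return None
    unit_seconds = {"second": 1, "minute": 60, "hour": 3600}[m.group(2)]
    return qty * unit_seconds
```

Wake-word detection already runs on-device on these products; the cloud round trip exists for the open-ended assistant features, not for a use case this simple.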


This is just an anecdote from an anonymous internet account.

I don't necessarily trust Amazon, but I trust anonymous internet accounts far less.

If Amazon does spy this blatantly, it should be easy to verify and reproduce empirically. Has anyone serious done that?


That's always been my perspective. The incentive for busting Amazon on this is so high that if it were provable, someone would have done it, and the press would love to share that.

My point is it no longer matters. We're out.


The fact you have two such anecdotes means you kept the spying device in your home.


Just wait until people start buying Tesla’s Optimus robots!

Cheap and ubiquitous surveillance will be the norm in the future.


Cheap and ubiquitous surveillance is the norm in the present.


Everyone carries a little snitch on them. Even if you opt out of using a mobile device, the chances of the person you are talking to having it on them is effectively 100%. And I am nearly certain that one way or another 'they' have voice biometrics on all of us. Thank god we live in a country with strong checks & balances...


Every Tesla is constantly recording, so if you've been within 50 feet of one, they already have your picture.


Between cell phones, IOT doorbells, and incredibly inexpensive security cameras, even a world free from Tesla has cheap and ubiquitous surveillance.

But you're correct, Tesla is yet another among the multitudes with a robust surveillance network


Every Starlink station (and probably every Tesla) scoops up every MAC address it ever sees. This one is unique in that it puts all that data into a single actor's hands.


Of course Starlink stations scoop up MAC addresses. They are in this way equivalent to, and on par with, every other wifi router.

A Tesla vehicle could also scoop up visible MAC addresses, and is as capable of doing so as every other wifi-enabled device with closed-source firmware.

Android phones have been scooping up wifi MAC addresses, pairing them with GPS data, and sending that to Google for at least 7 years: https://slate.com/technology/2018/06/how-google-uses-wi-fi-n...

Apple probably has an equivalent system.

Privacy-wise, Tesla is shitty but not extraordinarily shitty. Their surveillance capabilities do not differentiate them from among the multitudes. Let's assume maximum maliciousness. Assuming you don't own one, could Tesla track you particularly better than, say, Square? Or Google? Or Palantir? Or Comcast? Or any cell phone company? Or whomever it is that owns the cameras at each traffic light intersection?


Given who is in charge and how much power they have shown to wield over those systems, yes, definitely.

Nothing I have said makes light of those other systems and the grotesque data gathering that they do.


The person in charge is irrelevant. If you think that the other companies I mentioned aren't in the business of selling surveillance on you as well, your head is in the sand. It's the primary business model of several.


Totally disagree on the first part.

Totally agree on the second part.

They were all on stage together, regardless of how they got there. They are all there.


As opposed to every Android phone doing the same?




Tesla has an uncertain future, much less the capability to actually pull off robots.


That’s how Skynet wins!


If people really think Musk is a Nazi, this would be like literally putting mindless, order-following Gestapo right in your house.

Surveillance? Shit they could just kill you the moment you were discovered to be some undesirable. We're talking about a humanoid-ish robot, after all. If it can help you with the laundry it can bash your head in, too.


To be fair, we don’t need a new product; a Tesla can do the same[1].

With AIs becoming more powerful and expanding to new areas, it makes even more sense to avoid businesses that are consistently user hostile.

I wonder if anti-Tesla protests and related bad PR will contribute to increased consumer awareness around the topic.

[1]: e.g. it can crash into a Wile E. Coyote-style wall on Autopilot: https://www.youtube.com/watch?v=IQJL3htsDyQ&t=899s


If there’s one thing about AI, it’s that you cannot avoid it. The idea that individuals can just “opt out” of plastic, sugar, artificial ingredients, factory farms, social media and all the other negative externalities the corporations push on us is a fantasy that governments and industry push on individuals to keep us distracted: https://magarshak.com/blog/?p=362

On HN, people hate on Web3 because of its limited upside. But really look at the downside dynamics of a technology! With Web3, you can only ever lose what you voluntarily put in (at great effort and slippage LOL). So that caps the downside. Millions of people who never got a crypto wallet and never sent their money to some shady exchange never lost a penny.

Now compare that to AI. No matter what you do, no matter how far you try to avoid it, millions will lose their jobs, get denied loans, be surveilled, possibly arrested for precrime, micromanaged and controlled, practically enslaved in order to survive and reproduce, etc.

It won’t even work to retreat into gated communities or grandfathered human-verified accounts because defectors will run bots in their accounts and their neuralink cyborg hookups and meta glasses, to gain an advantage and approach at least some of the advantages of the bots. Not to mention of course that the economic power and efficiency of botless communities will be laughably uncompetitive.

You won’t even be able to move away anywhere to escape it. You can see an early preview of that with the story of Ted Kaczynski, the Unabomber (google it). While the guy was clearly a disturbed maniac who sent explosives to people, as a mathematician following things to their logical conclusion he did sort of predict what would happen to everyone when technology reaches a certain point. AI just makes it so that you can’t escape.

If HN cared about AI's unlimited downsides like it cares about Web3's lack of large upsides, the sentiment here would be very different. But the time has not come yet. Set an alarm to check back on this comment in exactly 7 years.


> With Web3, you can only ever lose what you voluntarily put in (at great effort and slippage LOL). So that caps the downside.

Nitpick: That's not considering how it has turbocharged and even commodified certain types of crime, such as ransomware.


Pretty sure the “deletion” of undesirables is part of the plan, if a little further down the line.

I thought they were waiting for ubiquitous AI micro drone technology. Maybe not.

https://www.vcinfodocs.com/weapons-startups


On the bright side, at least I'll die with a clean shirt on.


It might even clean up the blood stains. Letting the blood dry would ruin the carpet and furniture


Extraordinary claims require extraordinary evidence.


The extraordinary claim requiring extraordinary evidence is the idea that Big Tech companies are not spying on you any time they can.


Ah, so Amazon needs to prove a negative, now. I get the lack of trust, but at some point logic has to prevail.


Never mind "proving", there are plenty of low-effort steps they could take to foster trust (as outlined elsewhere in this thread) that they choose not to do. They choose not to meet even the bare minimum.

We are in a thread that is literally about how Amazon plans to disable the option to not send voice recordings. I get playing devil's advocate, but at some point logic has to prevail, eh?


That's pretty easy, just open source it.


Should not the burden of proof be on Amazon to prove it's not always recording?

In 2025, it feels like we're 5 to 10 years past the point where a consumer should default to assuming their cloud-connected device is extracting the maximum possible revenue from them.

Assume all companies are amoral, and you'll never be disappointed.


> Should not the burden of proof be on Amazon to prove it's not always recording?

What's a satisfactory burden of proof for Amazon to meet here? There's only so much you can do to prove that something is NOT happening.


They have a lot of ways they could’ve built trust without a full negative burden: which of them, if any, are they doing?

Open-sourcing their wake word and recording features specifically, so people can self-verify that it does what it says and that it’s not doing sketchy things?

Hardware lights, such that any recording functionality past the wake word is visible and verifiable by the end user, and it can’t record when not lit?

Local streaming and auditable downloads of the last N hours of input as heard by Amazon after wake words, so you can check for misrecordings and also compare “intended usage” times to observed times, such that you can see that you and Amazon get the same stuff?

If you really wanna go all out, putting protections in their TOS, like explicit no-train permissions on passing utterances without intent, or adding an SLA into their subscription to refund subscription and legal costs and to provide an explicit legal cause of action if they were recording when they said they weren’t?

If you explicitly want to promote trust, there are actually a ton of ways to do it; one of them isn’t “remove even more of your existing privacy guardrails”.


These are great tangible suggestions for a standard to hold these recording devices to.


They have the third thing -- you can see the recordings in your history. [0]

[0] https://www.amazon.com/gp/help/customer/display.html?nodeId=...

On the first two, if you already think they're blatantly lying about functionality, why would you think the software in the device is the same as the source you got, or that it can't record with the light off?


Well, for starters, I guess keeping the "do not send voice recordings" toggle would be a good idea.


It's not at all unreasonable for consumers to demand that vendors--especially those with as much market power as Amazon--take steps to foster trust that, though they may not rise to the level of "proving a negative," still go some way towards assuring us they are not violating our privacy.

The fact that they don't take any of those steps (and the fact that we are in a thread about them disabling this privacy feature in the first place!) goes to show that consumers have every right to be skeptical, and indeed to refuse to bring these products into our lives.

I think it's inane to complain that consumers are placing an impossibly high standard on Amazon when Amazon themselves choose not to meet even the lowest of standards.


It's their product and their code; there is no reasonable way I can be responsible for knowing what it does, as opposed to Amazon, who is in complete control of the device and system. I can't even believe I have to explain this.


At the very least, they can provide a full log of all interactions and recordings in an audit log. Have that verified by researchers conducting their own analysis of dial-home activity, and I think we'll be significantly closer to a good answer about generalized mass capture of customer-sensitive data. This still wouldn't be enough if you're worried about targeted spying, because we can't know when bad actors flip a device into aggressive-spy mode unless you're auditing the device while targeted.
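One way an audit log like the one suggested above could be made tamper-evident is hash chaining: each entry commits to the previous entry's hash, so omitting or rewriting a recording event after the fact breaks verification. This is a generic sketch of the technique, not anything Amazon ships; the `AuditLog` class and field names are made up for illustration.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

class AuditLog:
    """Tamper-evident, hash-chained log of device interactions."""

    def __init__(self):
        self.entries = []
        self._prev = GENESIS

    def append(self, event):
        """Record a JSON-serializable event, chained to the prior entry."""
        entry = {"event": event, "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev = digest
        self.entries.append(entry)
        return digest

    def verify(self):
        """Recompute every hash; any edit or omission breaks the chain."""
        prev = GENESIS
        for e in self.entries:
            body = {"event": e["event"], "prev": e["prev"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A vendor could publish the chain head periodically (or anchor it with a third party) so researchers comparing it against observed dial-home traffic could confirm nothing was silently dropped.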


Okay... but then why should I trust that Alexa isn't listening? That's clearly a pretty valuable thing for Amazon to provide to their customers. Is it impossible? If it is... then yeah, people should just light these things on fire, or at least put a hard switch on them.


Semper necessitas probandi incumbit ei qui agit. ("The burden of proof always lies with the one who brings the claim.")


Victori spolia. ("To the victor go the spoils.")


This claim is not extraordinary.

The anecdotes have added up, and it is now an ordinary claim.


Only in circles that don’t understand technology and, frankly, logic. To prove that it’s happening, _one_ hacker needs to show that there’s constant flash-drive/network traffic while the mic is enabled that also correlates with the entropy of the audio.
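The correlation test described above can be sketched in a few lines: compute per-window Shannon entropy of the audio the room actually produced, line it up with per-window outbound byte counts, and check the correlation. A strong positive correlation over a long capture would be the smoking gun; near-zero is consistent with "only explicit wakes upload". The function names are illustrative, and real analysis would need far more care (alignment, lag, baselines).

```python
import math

def shannon_entropy(samples, bins=32):
    """Shannon entropy in bits of a window of audio samples in [-1, 1]."""
    counts = [0] * bins
    for s in samples:
        i = min(bins - 1, int((s + 1) / 2 * bins))
        counts[i] += 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Usage sketch: entropy_series[i] is the entropy of second i of room
# audio, byte_series[i] the device's outbound bytes in that second.
# pearson(entropy_series, byte_series) near +1 over hours of capture
# would indicate traffic tracking what's being said in the room.
```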


I have personally verified that my device most certainly does not send constant internet traffic; however, I think we can't rule out the possibility that it might buffer the data and send it later.


We can, in fact, rule it out by dissecting the device and monitoring chip traffic. That’s my whole point: people who understand technology know that it’s nearly impossible for Amazon devices to routinely spy on conversations in people’s homes without detection.


> We can, in fact, rule it out by dissecting the device and monitoring chip traffic

Has this been done?


LOGIC he screamed as he threw his hands up in the air, LOGIC!!



