Hi, author here. This article was many years in the making. It's ostensibly a story about reading minds but really it's about the unreasonable power of high-dimensional vector spaces.
That made it pretty tough to write: how do you explain dimensionality reduction, PCA, word2vec, etc., and the wonders of high-dimensional "embeddings" (of the sort you find in deep neural nets) when a lot—or all—of these ideas might be new to the reader? I'm not sure—but this was my attempt!
Thought you might like this one: A Geometric Analysis of Five Moral Principles (OUP 2017)
Ethics using vectors! From a description of the technique: The geometric approach derives its normative force from the Aristotelian dictum that we should “treat like cases alike.” The more similar a pair of cases are, the more reason do we have to treat the cases alike. These similarity relations can be analyzed and represented geometrically. In such a geometric representation, the distance in moral space between cases reflects their degree of similarity. The more similar a pair of cases are from a moral point of view, the shorter is the distance between them.
That was a fascinating article. I liked how you covered the human element, how it helps the paralyzed, and the intuitions and visuals of the researchers in the field.
Future applications can be good or bad, but of course that makes it even more important to record the early history of the field, and these kinds of articles will also help start the ethics discussion at an earlier stage.
Great article! I think you’ve done a great job of introducing these difficult concepts in simple language. I saved it to Pocket and it’s already got a Best Of label there.
PCA (KLT) can be introduced as a generalization of the Fourier Transform. This can follow from using a cocktail mix analogy to Fourier Series. When I was a TA this was the approach I took with students, which seemed to make things easier for them.
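To make that connection concrete, here is a minimal sketch (my own illustration, not the actual teaching materials described above): both a Fourier expansion and PCA/KLT express a signal as coefficients in an orthonormal basis. The Fourier basis is a fixed set of sinusoids, while PCA derives its basis from the data's covariance, which is the sense in which the KLT can be seen as generalizing the fixed-basis transform.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
t = np.arange(n)

# A batch of signals: random mixtures of two sinusoids plus a little noise,
# like a "cocktail mix" of pure tones.
signals = np.array([
    rng.normal() * np.sin(2 * np.pi * 3 * t / n)
    + rng.normal() * np.sin(2 * np.pi * 7 * t / n)
    + 0.05 * rng.normal(size=n)
    for _ in range(200)
])

# Fourier view: project one signal onto fixed sinusoidal basis vectors.
fourier_coeffs = np.fft.rfft(signals[0]) / n

# PCA/KLT view: an orthonormal basis learned from the data's covariance.
cov = np.cov(signals.T)
eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
pca_coeffs = signals[0] @ eigvecs

# Both are changes of orthonormal basis; for this ensemble the top PCA
# components capture nearly all the variance, because the data really is
# built from two underlying "tones".
top2 = eigvals[-2:].sum() / eigvals.sum()
print(f"variance captured by top 2 PCA components: {top2:.2f}")
```

For this toy ensemble the learned eigenvectors end up closely resembling the sinusoids themselves, which is the intuition behind introducing one via the other.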
Personal note: Susan Dumais, mentioned in the article, also did great early work in text summarization, just after she joined Microsoft. I tried using some of her approaches for video summarization in my PhD in the early 2000s. How time flies.
PCA applies an orthogonal linear transformation, while FA uses a series of coefficients to scale a sequence of functions, which are then integrated. They are similar in use but very different in method. Calling one a generalization of the other seems misguided?
Great article! I’m developing machine learning systems and my partner is working on psychiatric uses of deep-brain stimulation, so this is a rare moment we can share.
Very minor point: the King − male + female = Queen example is a good one, but it is widely decried as not quite true by specialists. I don’t have a much better example (I haven’t been able to tell whether Paris − France + England = London, for instance), but if you reuse that story, it makes sense to investigate that myth. There’s a lot there too.
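For readers who want to poke at this themselves, here is a minimal sketch of how the analogy query is usually run. The vectors below are made up purely for illustration; real experiments load pretrained embeddings (word2vec, GloVe) with hundreds of dimensions. Note that the standard evaluation excludes the query words themselves, which is part of the specialists' critique: without that exclusion, the nearest vector to king − man + woman is often just "king".

```python
import numpy as np

# Toy 4-d "embeddings", invented for illustration only.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
    "paris": np.array([0.2, 0.2, 0.2, 0.9]),
}

def nearest(target, exclude):
    """Return the word (not in `exclude`) most cosine-similar to `target`."""
    best, best_sim = None, -1.0
    for word, v in vecs.items():
        if word in exclude:
            continue
        sim = v @ target / (np.linalg.norm(v) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# The classic analogy query: king - man + woman ≈ ?
result = nearest(vecs["king"] - vecs["man"] + vecs["woman"],
                 exclude={"king", "man", "woman"})
print(result)  # "queen" with these toy vectors
```

With real embeddings the result is much less clean than the toy setup suggests, which is exactly the point of the critique above.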
I think you did a very good job - it captured the feeling that it's almost sorcery, which still hits me any time I successfully apply it, without getting bogged down in technicalities. I think it's OK to be superficial as long as you give people enough information to look up and learn more about it. Mentioning word2vec will certainly give interested readers a head start.
I find the simplest way to explain PCA to a general audience is to draw an ellipse of points, off-center and tilted in 3-D space, and then draw plots for x, y, and z. Then center the ellipse, rotate the axes to match its major and minor axes, and show how it can be drawn in just x and y, and that those x and y plots are far easier to interpret. Done.
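That demo can be sketched in a few lines of NumPy (a minimal illustration of the recipe above, assuming synthetic data): generate a tilted, off-center ellipse in 3-D, then center the points and rotate them onto the covariance eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Points on an ellipse (major axis 5, minor axis 1), flat in its own plane.
t = rng.uniform(0, 2 * np.pi, 500)
ellipse = np.column_stack([5 * np.cos(t), np.sin(t), np.zeros_like(t)])

# Tilt it in 3-D and shift it off center.
theta = np.radians(30)
rot = np.array([[np.cos(theta), 0, -np.sin(theta)],
                [0, 1, 0],
                [np.sin(theta), 0, np.cos(theta)]])
points = ellipse @ rot.T + np.array([10.0, -3.0, 7.0])

# PCA: center, then rotate onto the covariance eigenvectors.
centered = points - points.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
order = np.argsort(eigvals)[::-1]            # largest variance first
projected = centered @ eigvecs[:, order]

# Nearly all the variance lives in the first two new coordinates,
# so the ellipse can be drawn in just x and y.
var = projected.var(axis=0)
print(var / var.sum())   # roughly [0.96, 0.04, ~0]
```

The third component's variance is essentially zero because the points are exactly planar, which is the punchline of the drawing: two axes now suffice.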
I'm not sure if you're reading the same Hacker News that I've been, but mine's mostly been about the confluence of surveillance capitalism, laissez-faire treatment of vulnerabilities in the tech stacks that power it (or even eg absentmindedly putting customer data into an unsecured S3 bucket), and the inability/unwillingness of governments or regulatory bodies to do anything about any of it. In light of these modern realities, I have difficulty believing in a positive final form of this technology. "Mobile pocket telephones" have evolved into "expensive powerful swiftly-obsolescent general-purpose computers mostly used for providing telemetry on the user to unaccountable corporations". Even if the HN crowd end up being able to opt out of the worst aspects of this, like one can with a modern smartphone via GrapheneOS or whatever, we still have to live alongside everyone else who can't.
I can think of lots of nefarious uses for this sort of thing, and I'm just some asshole who's read some science fiction. The real nefarious uses will be architected by people much smarter than me, whose moral difficulties will be dismissed by The Profit Motive, psychopathy, or both.
And here I am thinking about the potential benefits to para- and quadriplegics of circumventing a damaged spinal cord if only we could reliably interpret signal from the brain.
For all the bad you've listed, there's a reason people voluntarily choose to carry those surveillance devices in their pockets: the boons outweigh the ills by an order of magnitude. They're rarely dwelt upon because they're ubiquitous... much like nobody bothers to extol the virtues of fire.
We talk here about what's wrong because there is room for improvement, not because we should halt progress.
"unsubstantive"? This technology will be used against people. I just wanted to point it out in the most unflattering way possible. Like graphic warning and danger signs, it makes people stop and think.
And strange to hear from you for a second time in the last week or so.
Of course it's unsubstantive. It's not just an internet cliché, it's nowhere near the top of the internet cliché barrel. If you have something interesting to say, please say it explicitly, without tedious tropes, and without flamebait.
> And strange to hear from you for a second time in the last week or so.
I had no idea I'd replied to you repeatedly, but if so, the simplest explanation is that you've been breaking the site guidelines repeatedly. It would be helpful if you'd review them and stick to them from now on: https://news.ycombinator.com/newsguidelines.html.
We should simply make mind reading technology illegal, full-stop.
If we do anything less than criminalizing it outright, it will turn into something “voluntary”, but opting out will exclude you from certain events, or you’ll have to pay some sort of premium to maintain your privacy. This will have the side effect of making you seem suspicious.
I simply don’t want any entity, public or private, knowing my thoughts, at all, for any reason whatsoever.
I don’t think that it will ever be widely used the way you describe, just like the "truth serum" sodium pentothal wasn’t outside of counter-intelligence agencies. I would also be surprised if such a system worked well in an adversarial context: the polygraph is another example of a technique that has proven a lot less effective than its nickname of "lie detector" implies.
Sure, we can _imagine_ issues, but long before we get to anything ominous, there are countless applications that could give people with major health issues back their mobility and their ability to communicate. For that, they would need to focus voluntarily and intensely on one specific activity for seconds at a time: a miracle today, but a frustrating practice when it takes 30 seconds to tell your nurse that you need the bedpan. I feel like there are a couple of years between that and anything Orwellian.
We’ll worry about AI being super-intelligent _after_ Amazon’s recommendation engine stops assuming that I’ve started a vacuum cleaner collection.
What if instead of completely banning it, we set a cap and trade market with a fairly small maximum number of people (say, a few hundred within all of the US) that it can be applied to per year?
Banning it all but guarantees human obsolescence. We certainly need to tread very carefully here, but foregoing this advancement would be a grave mistake.
I interviewed at a BCI startup that wants to implant on the order of 10k light sensors around the brain in order to directly gather data. "Reading" thoughts is just their first goal, eventually they want to communicate bidirectionally.
I don't see how fMRI or any external device can approach the sensitivity, specificity, or resolution to "read" thoughts beyond the level of gross guesstimates, e.g., deception, sexual attraction, arousal, etc.
Furthermore, it seems that understanding the peculiarities of a particular individual's brain would be a prerequisite for mapping functional observations onto approximate thoughts.
Now all I can think about is putting a "thinking hat" on my dog and recording everything he does, and then playing the process in reverse when he's dreaming!
Are you assuming a brain-scanning technology wouldn't be opt-in? I figure the mechanical nature of it would make it difficult to implement in any other way... If someone doesn't want to be scanned, they can take the helmet off.
Maybe in court we can make it like discovery or cross-examination...
"Ok, I'll agree to using the mind-reader, but only if I can use the mind-reader on you as well..."
Daniel Suarez touched on a proto-society developing with fMRI mind-reading in his Daemon series of books. Higher levels of authority required more frequent rounds of time in the mind-reader taking loyalty/polygraph tests.
These are machines that weigh more than a train; a country's health budget has to be provisioned around a single one of them; they require special architectural accommodations because of how heavy they are, need a dedicated supply chain for liquid helium, and will attract any ferromagnetic object within meters, even through walls… and you are worried Meta is going to sneak one into the millimetre-thin strap of a $250 headset?
Look, all I want to say is: your confidence in and enthusiasm for technology and innovation are remarkable. Please, write science fiction. Sincerely.