When the yogurt took over (2010) (scalzi.com)
43 points by abrax3141 on Aug 1, 2023 | hide | past | favorite | 8 comments


Most of "The Road" absolutely kills me. But Scalzi wrote something that's etched forever into my brain:

“For as much as I hate the cemetery, I’ve been grateful it’s here, too. I miss my wife. It’s easier to miss her at a cemetery, where she’s never been anything but dead, than to miss her in all the places where she was alive.” ― John Scalzi, Old Man's War


Very poignant


Also check out the excellent animation of this story in Love, Death & Robots. There are some other good Scalzi stories in there too, and other great stories besides (don't miss the episode Helping Hand[1] either).

[1] https://www.lightspeedmagazine.com/fiction/helping-hand/


Direct link for Netflix subscribers: https://www.netflix.com/watch/80223954?trackId=14170068


If you like this, it might be worth reading the posts that led to him writing it:

First, the only slightly related Tax Frenzies and How to Hose Them Down [0], in which he coins the term Objectivist Jerky. That led someone to request that he write What I Think About Atlas Shrugged [1], which he summed up with: "Indeed, if John Galt were portrayed as an intelligent cup of yogurt ... this would be obvious. Oh my god, that cup of yogurt wants to kill most of humanity to make a philosophical point! Somebody eat him quick! And that would be that." Then came the brief My New Problem [2].

[0] https://whatever.scalzi.com/2010/09/26/tax-frenzies-and-how-...

[1] https://whatever.scalzi.com/2010/10/01/what-i-think-about-at...

[2] https://whatever.scalzi.com/2010/10/01/my-new-problem/


A potent metaphor for techno-"utopia", especially around AI, and for what might happen when we blindly hand power to forces we don't understand.

I experienced this first via Love, Death, & Robots but honestly the short story hits much harder just as it is.


> YOUR ECONOMISTS ARE TOO CLOSE TO THE PROBLEM TO SOLVE IT, the yogurt said. ANY HUMAN IS.

I have heard people express similar ideas about AI: that it is somehow objective and correct, free from human folly. These people often object strongly to installing guardrails, on the grounds that the AI knows the truth better than we do, almost suggesting that we should align to it.

This is problematic for a number of reasons. To pick just one, it makes you vulnerable to manipulation through what I call "the unsupervised loophole," whereby you get the machine to produce the results you're after without ever labeling the data. Instead, you shape the training set so that you get the result you want anyway.

E.g., you could implement an illegal redlining scheme simply by neglecting to balance your dataset beforehand, allowing systemic biases to flow into your training set. This is a rookie mistake (you can tell because I'm an ML rookie and I've already figured it out the hard way), but it might give you enough plausible deniability to survive a lawsuit.
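A toy sketch of that loophole, with entirely synthetic data and hypothetical names (zip codes "A" and "B"; a per-zip frequency estimate stands in for a trained model). In the simulated population, repayment is independent of zip code, but the sampling step quietly over-retains defaulters from zip B, so anything fit to the resulting training set "learns" the bias without a single label being touched:

```python
import random

random.seed(0)

def draw_population(n):
    # In the true population, repayment is independent of zip code:
    # every applicant repays with probability 0.5 regardless of zip.
    return [(random.choice(["A", "B"]), random.random() < 0.5) for _ in range(n)]

def biased_sample(population):
    # The "unsupervised loophole": no one labels zip B as undesirable,
    # but defaulters from zip B are retained 3x more often than anyone
    # else, so the skew is baked into the training set itself.
    sample = []
    for zip_code, repaid in population:
        weight = 3.0 if (zip_code == "B" and not repaid) else 1.0
        if random.random() < weight / 3.0:
            sample.append((zip_code, repaid))
    return sample

def repay_rate(data, zip_code):
    rows = [repaid for z, repaid in data if z == zip_code]
    return sum(rows) / len(rows)

population = draw_population(20000)
train = biased_sample(population)

# Any model fit to `train` (here, just the empirical repayment rate per
# zip) concludes that zip B applicants are worse risks, even though the
# underlying population is perfectly fair.
rate_a = repay_rate(train, "A")
rate_b = repay_rate(train, "B")
print(f"estimated repayment rate, zip A: {rate_a:.2f}")  # near 0.50
print(f"estimated repayment rate, zip B: {rate_b:.2f}")  # near 0.25
```

The point is that the discriminatory signal comes entirely from how the sample was assembled; auditing the labels or the model code would turn up nothing.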


Should be tagged [2010]



