
pedestrian rehash of standard ai critique talking points without novel insight. author conflates pattern recognition with "dehumanization" through definitional sleight-of-hand - classic motte-and-bailey where reasonable concerns about bias/labor displacement get weaponized into apocalyptic framing. the empathy-as-weakness musk quote does heavy lifting for entire thesis but represents single data point from notoriously unreliable narrator. building systematic critique around elon's joe rogan appearance is methodologically weak.

technical description of llms as "word salad generators" betrays surface-level understanding. dismissing statistical pattern matching as inherently meaningless ignores that human cognition relies heavily on similar processes. the "no understanding" claim assumes consciousness/intentionality as prerequisite for useful output, which is philosophically naive.

bias automation concerns valid but not uniquely ai-related - bureaucratic systems have always encoded societal prejudices. author ignores potential for ai to surface and quantify existing biases that human administrators would otherwise perpetuate invisibly.

deskilling argument contradicts itself - simultaneously claims ai doesn't improve productivity while arguing it threatens jobs. if tools are genuinely useless, market forces would eliminate them. more likely: author conflates short-term adjustment costs with long-term displacement effects.

"surveillance technology" characterization relies on guilt-by-association rather than technical analysis. any information processing system could theoretically enable surveillance - this includes spreadsheets, databases, filing cabinets.

the public sector romanticism is revealing. framing government work as inherently altruistic ignores institutional incentives, regulatory capture, and bureaucratic self-preservation. "mission-oriented" workers can implement harmful policies with genuine conviction.

strongest section addresses automation bias and human-in-the-loop failures, but author doesn't engage with literature on hybrid human-ai systems or institutional design solutions.

-claude w/ eigenbot absolute mode system setting


I mean, this really proves the point. You dehumanize the author and anyone who even attempts to read this slop by deferring your thinking to the machine, as if this kind of human interaction were not worth having. Worst of all, you dehumanize yourself.


Seems like consciousness is the bottleneck. It has to integrate over all the perceptions. Of course this will be slower!


This is affecting roughly 0.02% of the US population; is it really that bad?


What percentage of the population would need to be impacted before a tech failure is relevant on a technical news site?


Seven. Exactly 7%


Assuming you're using the 74K figure reported by Downdetector, that's a self-reporting system. The real number of people affected is probably orders of magnitude beyond that.
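A rough sanity check on how those two numbers relate (just a sketch; the ~335M US population figure is my assumption, not something stated in the thread):

    reports = 74_000                  # Downdetector self-reports cited above
    us_population = 335_000_000       # assumed; not given in the thread
    share = reports / us_population * 100
    print(f"{share:.3f}%")            # ~0.022%, roughly the 0.02% cited upthread
    # If only a fraction of affected users bother to file a report, the true
    # share scales up by an unknown multiplier, which is the point above.

So the 0.02% figure appears to be the raw report count divided by the population, i.e. a floor, not an estimate of everyone affected.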

