People given a tiny amount of power, with no consequences for misusing it, will inflict that power on others for no better reason than that they can.
Government is parasitic, with no market feedback, so people that would normally get weeded out for being awful humans, for incompetence, for psychopathy of various flavors - they all end up with a long, well paid career and no consequences.
I find the story unlikely, reading more like a vengeful malicious compliance fantasy than how humans behave. In real life, a nasty Karen like that, after being inconvenienced or having their time wasted, would go out of their way to ensure the offending citizen was punished. In this case, they'd find a technicality or process to ensure the blind author lost their benefits, or was greatly inconvenienced to whatever degree possible.
You get fuming, frothing at the mouth inchoate rage out of people like this when they're directly challenged. They seethe.
They'd find a technicality, wait until Friday at 4:59 pm, drop a letter in the post box that declines benefits because the ink on pages 33 and 138 smudged some critical detail, or some other completely made up nonsense. If the author wanted to get back to baseline, they'd have to go to heroic efforts, either pressuring the tinpot tyrant government bureaucrat on social media or through journalists, or by escalating through the government bureaucracy and appealing to higher powers.
This has "and then everyone clapped" vibes. Or maybe OP just got lucky with a novice government worker that hadn't fledged into their full Karen powers.
>Government is parasitic, with no market feedback, so people that would normally get weeded out for being awful humans, for incompetence, for psychopathy of various flavors - they all end up with a long, well paid career and no consequences.
While I agree that the lack of market feedback is a problem with government jobs, I've worked corporate and small-company jobs with all these negative tropes and the same result: you build a hierarchy, and some weirdos find a way past (or are) HR and nestle in the folds. I think the best solution is working for smaller companies that have a high standard for employee behavior enforced by everyone; strong boundaries are key. When people are seasoned and emotionally aware, you realize that working in the vicinity of people like that takes way more energy from everyone than it's worth to tolerate or ignore the problem.
For sure - culture is a huge component. Government is unique in that incompetence and laziness and all the shitty behaviors that get people canned in the real world don't have an impact on money coming in. In some places, revenue increases steadily, completely decoupled from any sort of functional attachment to value.
So you can be a terrible, worthless, lazy, no-good, do-nothing, awful employee, skating by on the bare minimum level of effort, checking whatever set of boxes you need to avoid getting fired outright, make sure you kiss the appropriate asses and put on a show when you need to, and because there's no direct, immediate, obvious negative consequence to the overall organization, it's not worth the enormous effort it would take to fire you. If managers that care somehow get into leadership positions, people get shuffled off to a corner somewhere, assigned duties where they won't have a negative impact on morale or operations while the real, actual working employees do what they can.
If one of these fake-work employees ends up as a manager, through inertia and organizational default and seniority, the culture is guaranteed to be toxic, and because they're expert box checkers and ass kissers, they know how to put on a good show of "yep, everything's fine right here!" for whoever they need to report to. I've worked for all sorts of awful bosses, but awful government boss under an awful government department under this type of civil-service kabuki was the worst. Nothing destroys the spirit of a good leader faster than an entrenched department full of clever lifers who can't be fired or motivated or penalized because they've got the entire system gamed to their advantage.
You can, and do, get management and employees all throughout government that actually do give a shit and do good work. I'm not saying all the jobs are fake or useless. I do think a majority are fake and useless, and if you had a market dynamic that allowed competition and merit to reinforce strategy and weed out bad actors, you'd get a much leaner, more effective government overall.
Won't matter much longer, though. AI can already do better, faster, more reliable work than nearly all government workers, including the elected ones. I'd rather have Claude, ChatGPT, and Grok based agents as representatives at this point, over whatever this flaming feces clown show is we've had going on for decades. Even with the jailbreaks.
It's amazing how many people seem to have learned their civics from conservative talk shows.
government employees work for elected officials, who hear often from angry "customers" and are constantly at risk of losing their jobs following scheduled "performance reviews"
Some government employees do. Lots of local, state, and federal departments fall under more or less permanent bureaucratic institutions, and while they might follow the lead of an elected official, those officials are often far more ceremonial than functional.
When those departments are part of public sector unions, they're even further removed from any sort of quality based feedback loops.
Some government staff follow politicians. A whole shit ton of more or less permanent staff put in for lifelong careers, doing boring work that has nothing to do with politics, that gets funded on autopilot, because the IT department is needed, because the DMV, and birth records, and GIS and all those functional, boring bureaucratic departments don't directly fall under, or benefit from constant cycling through with each change of political leadership.
They're protected from arbitrary firing by political leadership - no consequences for being wasteful or incompetent, even if the politician du jour really really wants to make changes or campaigned on it.
Any sort of legislative reining in of that cadre of careerists has to wrangle with unions and general public resistance to messing with "civil servants" - optics are easy to game, and it's easy to garner sympathy. The politics are rough, and not worth the fight for many politicians.
What you're describing with the performance reviews and the like sounds like it's not unionized, and/or your local legislators have been making moves to bring some accountability and actual real world feedback loops into the system. Good on them. That's not anywhere close to the norm in the US.
I thought the “performance reviews” they were alluding to were elections.
Which doesn’t really make sense, as permanent civil servants don’t have any stake in those and can’t be summarily dismissed by the elected politicians in a lot of places I’m aware of, particularly at the local level.
This is not correct and we have recent examples to counter this claim:
1. There are government employees directly employed by various branches of the government (ex: USDS was under the executive branch, allowing them to be retasked by EO into DOGE)
2. There are government employees appointed into office who cannot be fired after appointment (ex: Fed Reserve Chair)
3. There are also government employees who are non-political appointments
I think there are also more categories. I don't think your reply was charitable.
They don't change the prices, they just modify the amount of compute allocated - slower speeds and fewer tokens, they can set everything in the background to optimize costs and returns, and the user never realizes anything has changed.
Sometimes they'll announce the changes, and they'll even try to spin it as improving services or increasing value.
Local AI capabilities are improving at a rapid pace, at some point soon we'll have an RWKV or a 4B LLM that performs at a GPT-5 level, with reasoning and all the bells and whistles, and hopefully that'll shake out most of the deceptive and shady tactics the big platforms are using.
Arxiv and the internet do more for science than Elsevier. They're rent-seeking middlemen, having lost whatever purpose they might once have had.
I think the worst part is, Elsevier could still serve a purpose and make money by curating and leveraging reputation even if all academic research were openly published and freely accessible - they could select what they consider to be the best research, have editorial content, produce visualizations, and accompany content with high-quality journalism, like Quanta. Papers being locked, researchers and institutions paying out the nose, and the other artificial scarcity / artificial stupidity features are entirely unnecessary.
The problem - for them - is that they wouldn't be able to make as much money as a curator as they do as a grifter, a middleman. As a curator or creator, they would actually be forced to work, compared to the rentier model they currently enjoy.
Those executive bonuses don't pay for themselves you know.
AI X that can solve the tests contrasted with AI Y that cannot, with all else being equal, means X is closer to AGI than Y. There's no meaningful scale implicit to the tests, either.
Kinda crazy that Yudkowsky and all those rationalists and enthusiasts spent over a decade obsessing over this stuff, and we've had almost 80 years of elite academics pondering on it, and none of them could come up with a meaningful, operational theory of intelligence. The best we can do is "closer to AGI" as a measurement, and even then, it's not 100% certain, because a model might have some cheap tricks implicit to the architecture that don't actually map to a meaningful difference in capabilities.
The evolution of the test has been partly due to the evolution of AI capabilities. To take the most skeptical view, the types of puzzles AI has trouble solving are in the domain of capabilities where AGI might be required in order to solve them.
By updating the tests specifically in areas AI has trouble with, it creates a progressive feedback loop against which AI development can be moved forward. There's no known threshold or well defined capability or particular skill that anyone can point to and say "that! That's AGI!". The best we can do right now is a direction. Solving an ARC-AGI test moves the capabilities of that AI some increment closer to the AGI threshold. There's no good indication as to whether solving a particular test means it's 15% closer to AGI or .000015%.
It's more or less a best effort empiricist approach, since we lack a theory of intelligence that provides useful direction (as opposed to a formalization like AIXI which is way too broad to be useful in the context of developing AGI.)
You (briefly) have an antiproton in your possession around once a day, assuming you get an average amount of sunlight. Some days, you might even have two!
I'd love it if everyone switched to Linux and the walled gardens just died, but the most realistic outcome would be Microsoft and Apple having to up their game and improve their respective products. Right now they're driving hell-for-leather into OSaaS monthly computer subscriptions, eliminating user agency to the greatest degree possible, exploiting every possible intrusion into and usurpation of consumer privacy, and vacuuming up every last bit of data and monetizing it, without any concurrent return in value to the consumer.
The only way that stops is by having enough people leave that they change their behavior, and it's not sufficient to switch to the competition that is operating under the same perverted incentives under the same system with the same failure modes. No Windows, no Mac, no Chromebooks, no enshittified corporate quagmire of awfulness and despair.
The solution is simple - use Linux. Set your family up with Linux.
It's the year of the Linux desktop; it's never been easier or better, and it's never been more important to make the leap.
Mac OS comes with the purchase of the hardware. For mobile and tablets, yes, there is a strict walled garden. But I've been programming on Mac OS for longer than the age of this HN account, and even longer on Linux. In practice there's not much beyond the window manager and containerization that are impractical on Mac for every day programming compared to mainstream Linux distros.
The family computer is set up to boot into Ubuntu; booting into Windows 11 is the exception (games, iTunes).
Not giving Apple money means no Apple hardware. I've gone decades doing it and haven't regretted it once. I've turned down work because it involved having to work with Apple devices and software. It's really, really easy to not give them money.
Pirate everything if you have to, but stop feeding the companies that are making everything awful.
Hans Moravec introduced the idea of the "landscape of human competence" , a topology representing the peaks and valleys of human capabilities. Art, writing, coding, game playing. Elevation corresponds to cognitive difficulty, and the landscape maps to everything humans are capable of doing. AI is represented as the rising waterline - when Moravec created the idea, AI was more or less constrained to a few scattered lakes, with humans clearly demonstrating superiority nearly everywhere. After transformers, the waterline began to rise, and today we no longer have a vast contiguous majority, but are left with a scattered handful of islands, and the waterline continues to rise.
It's not arrogant or incurious to acknowledge the flood, but it might be to deny that flood is happening.
If you think there are fundamental human qualities or capabilities that AI can't ever have, you might put in the work to articulate that, instead of projecting negativity onto people who have watched the vast majority of the human competencies landscape get completely submerged over the last 10 years. The islands we have remaining don't really suggest any unifying principle underlying things that AI is still bad at, but instead they highlight the lack of technical capabilities and various engineering tracks to solve for. Many of the problems are solved in principle, but are economically infeasible; for all intents and purposes, you might consider those islands completely submerged as well.
I think you would need to work very hard to prove that the topology you are describing is well-formed enough for this analogy to make sense. For one: "cognitive difficulty" is not really a crisply defined quantity such that expressing it as a function of some input vector makes obvious sense (to me anyways). What's the cognitive difficulty of deciding what to have for dinner? What's the cognitive difficulty of making my 5 year plan? What's the cognitive difficulty of imagining a nice gift to get my wife for her birthday? There are so many things humans do which are heavily 'contingent' (in the sense of having sensitivity to the local culture, history, personal experience, etc) that the idea of being able to assign everything a single, decidable scalar to represent 'difficulty' seems like an extremely tall order to me. And that's setting aside whether the ambient vector space of 'human capabilities' is even really a sensible construct (a proposition that I also doubt quite heavily).
All this to say that describing what's happening as a 'rising tide' seems misleading to me. Techno-sociological development is super messy already, let's not make it more complex by pinning ourselves to inaccurate and potentially misleading analogies. The introduction of the car did not 'push humans higher onto a set of capability peaks', it implied a total reorganization of behavior and technologies (highways, commuting, and suburban sprawl); using the terms of your analogy humans built new landmasses on top of the water.
1. Implying that there are only "a few islands left" shows a strong bias towards assuming that only things humans do in the digital realm are relevant, when in fact, the vast majority of things humans do are not in the digital sphere at all.
2. It's pretty clear that when most people say machine intelligence is close, right now, they are alluding to LLM or Deep Learning based approaches. I don't think you should assume they mean machines will catch up in 100 years. They seem to imply it will be by 2030 or something.
To address both points - there appear to be no individual, well defined tasks that humans can do that you cannot train a machine to do. Some tasks are inefficient, some uneconomical, and others impractical, but there appear to be no tasks that machines in principle cannot do. What is missing is broad generalization, human-equivalent time horizons, continuous learning, and embodiment.
Robotics has passed the point of superhuman performance for any given task. Software has passed the point of superhuman performance for any given task.
Regardless of the particular technique or embodiment, the constraints aren't "is it possible in principle" but "is it too expensive" and "is this allowed by the pertinent principles and regulations and laws"
We don't have AGI that learns and adapts in real time like humans. We do have incredibly powerful algorithms that can learn from whatever data we throw at them, but there are many domains where it's impractical, ruinously expensive, illegal, or otherwise not possible to use AI for other good reasons.
The few islands left to humanity are not fundamental barriers. We haven't solved intelligence, or achieved RSI or ASI or AGI yet; those were never the important thresholds.
AI has always been a question about good enough, and it looks like we've gone solidly past the good enough line into "we can probably automate everything" even if we don't solve the big problems over 5 or 10 years or beyond. I think it's very unlikely we don't solve intelligence by 2030, but even if AI stalls out where it's at right now, and all we get is the incremental improvements and engineering optimizations on current SOTA, we have enough to automate anything humans do at levels exceeding human capabilities.
What AGI and ASI do is make humans economically obsolete. Good enough AI means there might be some places where humans are needed for generalization and adaptability until the exhaustive tedious work gets done for a particular application that enables a robot or software system to be competent enough to handle the work.
A hiker on a mountain might as well imagine that at the end of their journey they will step off onto the moon. But it's just a mirage. As us humans have externalized more and more of our understanding of the world into books, movies, websites and the like, our methods of plumbing this treasury for just the needed tidbits have developed as well. But it's still just working off that externalized collective understanding. This includes heuristics for combining different facts to produce new ones, sure, but still dependent on brilliant individuals to raise the "island peaks" which ultimately pulls up the level of the collective intelligence as well.
While a 2 dimensional projection of intelligence may be a satisfying rhetorical device, I think it’s an extremely mathematically naive interpretation.
Not only is intelligence probably most accurately modeled as something extremely high dimensional, it’s probably also extremely nonlinearly traversed by learning methods, both organic and artificial. Not a topology very easily “flooded”.
It wasn't a formal model or a theorem, it was an observation about reality. Humans are indeed gradually being overtaken on almost all fronts by AI. But by all means, if you want to take issue with Moravec's framing of the issue, feel free.
Explaining it as something like "realizable instantiation of physical computation occurring in the universe mapping to an ultra-sparse, discrete point cloud embedded in the Euclidean parameter space of all computable functions" could definitely be more precise, but you're either going to need a topology like a landscape or a bumpy sphere to visualize it, and then you're going to need to spend more time showing the effects of things like scaling laws, available compute, where the known boundaries of human intelligence lie, and so on, and so forth, and by then you've lost everyone, probably even the ML professor.
It's a good enough metaphor that maps to a real thing.
> It's a good enough metaphor that maps to a real thing.
My entire point, which I’m not sure you addressed, is that no, it’s not a good metaphor. Water “floods” a 3d topology in a predictable manner with regard to the volume the topology can contain. The entire argument is that progress is observable, predictable, and limitless, and the “islands” are a rhetorical device. My argument was turning the rhetorical device around and pointing out that we know so little about intelligence and AI that describing it in this way is not meaningful beyond sounding intellectual.