So, to respond to the snark seriously, the problem it is trying to solve assumes that we have encountered a particular economic failure case - that AI has outcompeted humans in cost and quality of work, such that most humans have zero economic value.
Without some such scheme, there will be no wealth distribution whatsoever, and potentially grim actions by national leadership.
It's a "utility-based" wealth distribution scheme.
I prefer to find a distribution scheme that is not isomorphic to communism.
The social graph described in the article is what you see in a high school, a small community, or anywhere material productivity is not important. Relationships and attention are the source of value.
Ironically, those places are the most communist. Everybody enters on a relatively even footing, and there are strong norms against directly profiting off your peers.
Have you ever been to a high school? Nobody is on equal footing, socially speaking. The authoritarian nature of high school is about the only communist thing about it.
Kids are not materially the same. Some kids are on foot, some have cars, some have nice cars. Same for clothes, vacations, basically everything. I think uniforms may be common these days but even with uniforms you can tell who has money, drugs, popularity. Their bodies are radically different from each other.
I get what you're driving at, I guess (and I initially misunderstood and thought you were pro-communist). All that happens if you take money and talent out of the equation is that the most attractive and connected people get ahead. Being unattractive is what makes a lot of people develop other positive qualities within themselves, such as talent. If we are entering a talent-free world, then there will be a lot of people with no prospect of a satisfying life, because there is nothing else they could systematically do to increase their value to society.
> I prefer to find a distribution scheme that is not isomorphic to communism.
Communism as it was practiced, or communism as Marx envisioned it?
Because what Marx was envisioning was communal ownership of the means of production, which in the case of AI would mean communal ownership over the very businesses that necessarily have to replace humans with AI for cold hard business reasons.
I don't think Marx's idea of utopia is stable, game-theoretically, but then very little that's utopian ever is.
What you've got certainly has enough merit to be interesting! But, as Communism demonstrated, the difference between theory and practice can have a genocide or three between them.
Unfortunately, communism "...then was replaced by a <insert ruling class> after the masses were sufficiently bamboozled by a strong leader" seems to be the kinetic outcome of every attempt at nation-scale communism.
This is an example of the "No true Scotsman" fallacy. These arguments for communism have been refuted thousands of times before, no need to waste electrons doing it here again.
Communism is always and everywhere a violent ideology, because the core tenets of communism are an exercise in theft, repression (economic and otherwise), and envy. People naturally don't want to be repressed - the communist answer is that those people, then, need to be murdered.
I resent the fact that communists created a history where terms like Lesser Megamurderer and Deka-Megamurderer[1] (sic) are simply factual descriptions, rather than works of dark fiction.
I'm ever hopeful, though, that good people like yourself, who have unintentionally absorbed lies about what communism is, can be given an opportunity to read broadly, learn about history, meet victims of these regimes, and look back with discomfort at what you advocated in the past.
To the extent that one may argue that Communism has to be anarchic, I think that it is doomed to follow exactly that path — but this is a flaw with anarchy, not with Communism, and would also affect e.g. anarcho-Capitalism.
I would also argue that the flavour of Communism seen in China, from Mao Zedong to wherever you'd like to say it's become "state Capitalism", was overall a success despite the Great Chinese Famine that was unnecessarily severe due to people wanting to win favour and avoid getting purged.
You may be surprised by my position on such a severely bad thing, which I am openly calling out as severely bad. This is not to minimise it, but because I see similar failures in very non-Communist systems — the UK had, for different reasons, a similar failure of governance with the Irish Potato Famine, which was also made unnecessarily severe. Given the timings, I suspect the Irish Potato Famine formed part of Marx' reasoning when writing the Communist Manifesto in the first place.
I do not think we need to repeat the mistakes of the past, but for me that means looking into the causes of bad governance everywhere rather than picking between the biggest teams of the mid 19th to late 20th century.
There's a place for something new — just as Marx' Communist Manifesto was written in a world where Capitalism seemed to be enriching a minority in a "meet the new boss, same as the old boss" kind of a way; and just as Smith's Wealth of Nations was written in a world where local market information could not possibly be collected let alone computed by a central government; so now do we find ourselves in a world where we have some ideas about how a Nash equilibrium can show us what outcomes are incentivised by the legal and technical things we consider building.
It's OK to ask if we can figure out how to do the (next) Industrial Revolution without Sabotage[0] from Luddites afraid of losing their jobs to automation, it's also OK to ask if we can figure out how to get public ownership of the means of production without dictatorships, and it's OK to ask how we might get government of any kind without groupthink[1].
Isn't this the plot to the episode "Nosedive" of Black Mirror? Isn't the outcome of such a system obvious? It's very clear that allowing people to rank one another in a way that results in dire economic outcomes results in a deeply bifurcated society, and in this scenario there's no out.
On the upside, the ones who opt into the system would likely be mentally destroyed. This includes the popular ones due to having to be slaves to popularity.
Calling it “HumaneRank” already gets this off to a shaky start.
> Imagine a system in which every month, every human in the nation state or other political polity participates in an exercise of the grant of endorsements to other humans
My guy, we’ve literally written dystopian sci-fi stories about this. The most 1:1 identical one being that Black Mirror episode. How can you possibly think this is a good idea?
Might be better to have hub and authority scores. It could be “being a good producer” and “being a good judge” are different things. The problem is you can clone somebody's hub score by stealing their judgements.
> It could be “being a good producer” and “being a good judge” are different things
Yes, I agree with this in principle. Although in this case what "good" means is ill-defined - or rather, different for everyone. The scoring each person performs is actually most directly a statement of utility to the scorer. That should probably go into the piece as an edit :-)
btw I also agree that pure PageRank on the endorsement links might benefit from tweaking, although some of the common failure cases for PR on the web don't apply here - you can't generate piles of extra humans to endorse-link to yourself.
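For concreteness, here's a minimal sketch of both scorings over a toy endorsement graph: a PageRank-style power iteration and the hubs/authorities split suggested above. Everything in it (graph contents, damping factor, iteration counts, function names) is an illustrative assumption of mine, not anything from the piece.

```python
# Toy sketch (not from the article): PageRank-style power iteration over an
# endorsement graph, plus HITS-style hub/authority scores for comparison.

def pagerank(endorsements, damping=0.85, iters=50):
    """endorsements: dict mapping each person to the list of people they endorse."""
    people = set(endorsements) | {p for ts in endorsements.values() for p in ts}
    n = len(people)
    rank = {p: 1.0 / n for p in people}
    for _ in range(iters):
        # Base share from the damping term, then redistribute endorsed rank.
        new = {p: (1.0 - damping) / n for p in people}
        for src, targets in endorsements.items():
            if not targets:
                continue  # sketch only: rank held by non-endorsers is simply dropped
            share = damping * rank[src] / len(targets)
            for dst in targets:
                new[dst] += share
        rank = new
    return rank

def hits(endorsements, iters=50):
    """Hub score ~ quality as an endorser; authority score ~ quality as an endorsee."""
    people = set(endorsements) | {p for ts in endorsements.values() for p in ts}
    hub = {p: 1.0 for p in people}
    auth = {p: 1.0 for p in people}
    for _ in range(iters):
        # Authority: sum of hub scores of everyone who endorses you.
        auth = {p: 0.0 for p in people}
        for src, targets in endorsements.items():
            for dst in targets:
                auth[dst] += hub[src]
        # Hub: sum of authority scores of everyone you endorse.
        hub = {p: sum(auth[d] for d in endorsements.get(p, ())) for p in people}
        # Normalise so the scores stay bounded.
        for scores in (hub, auth):
            norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
            for p in scores:
                scores[p] /= norm
    return hub, auth

if __name__ == "__main__":
    endorsements = {"alice": ["bob", "carol"], "bob": ["carol"], "carol": ["alice"]}
    print(pagerank(endorsements))
    print(hits(endorsements))
```

In HITS terms the hub score would track "being a good judge" and the authority score "being a good producer" - though, as noted above, copying someone's endorsements copies their hub score too.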
This is a terrible idea that will be aggressively rejected by humanity.
This is exactly the type of "tech bros are completely out of touch with how people work" vision of the future I guess we should expect from the guy who wrote this article.
Humanity has never lived in a world where leisure and idle time are not constantly being invaded by oppressive survival demands or a hierarchical, authoritarian mandate from a ruling class. If AI does provide a mechanism that allows humans to spend their free time in true leisure, like most of humanity living in the Star Trek future, then we should embrace it. Not saddle humans with artificial subordination.
The possibility of "humans to spend their free time in true leisure" is actually cited and specifically not rejected in the piece. It's addressing a very specific, and very dangerous, failure mode of a post-AI society.
Read the piece and be enlightened.
Also, jeez, I pine for the day when silly randomly-applied ad-hominems like "tech bros" just get dogpiled. For your own sake, elevate your discourse, man (or woman).
Interesting idea. I bet it can still be gamed, because this is fancy voting and voting gets gamed a lot - the devil is in the details rather than in the grand vision.
If superintelligence means a robot that looks and acts like a human in every way, plus being infinitely smarter, that does not mean the human is of lower value. There are people smarter and richer than me, but so what? Maybe they got lucky, had a better upbringing, etc. Same with machines/AGI.
2) that the robot has higher total efficiency - not the case, due to expensive resources and high-density fuel
3) that the human is not degraded by the economic system to below the pure machine - also not the case
The current winning mode is one of a cyborg - a human empowered with powerful tools, which may include robots and AI. And one human can only control so much.
Reminds me of https://community-sitcom.fandom.com/wiki/MeowMeowBeenz