> Here's the thing: our current prevailing political philosophy of human rights and constitutional democracy is invalidated if we have mind uploading/replication or super-human intelligence.
I don't agree with this specific point. One of the most common anti-democratic arguments is that people are inherently unequal because of differences in physical and cognitive capacity. But this misses that the declaration of all people being equal is performative. I don't think we actually care whether it's "naturally" true; we - citizens - don't want to be dominated, so we will make everyone equal by law and politics. Democratic politics is just arguing the specifics: in what areas, to what extent, etc.
So this is a rational pact. Trying to break it just means there will be someone stronger than you out to dominate you, and you'll be in a suboptimal position to resist, because you alienated and weakened your potential allies. Look up Republicanism for a related line of thinking.
In my opinion an artificial person automatically gets all the rights (they are bound to personhood and reciprocity and not biology). But I have no doubt there will be much lawlessness about this and perhaps an American Civil War-like event down the line.
There will at least have to be a law restricting the duplication or creation of persons; otherwise a dictator only needs to copy themselves more than N times, where N is the world population, and vote themselves into the top leadership position(s). It would be a race to the bottom: cloning people and consuming all available compute resources just to maintain the status quo of political power. And it will be the wealthy/powerful who can afford the resources for mass self-duplication.
And then there's the question of personhood itself. Someone is going to try uplifting animals (or the raw creation of new beings), and we'll need a rigorous threshold for the properties an intelligence must have for personhood. This is also vulnerable to the mass-cloning takeover of democracy, because the simplest artificial person is likely much less resource-intensive than a human; they would quickly outnumber everyone else, and who knows what their values would be in a democracy?
> someone is going to try uplifting animals (or raw creation of new beings) and we'll need a rigorous threshold for the properties an intelligence must have for personhood
The same question can be posed right now for humans who fell below the threshold due to brain damage or other conditions and display less self-awareness than extant, non-uplifted animals. Yet these humans in a vegetative state do have personhood rights: they are not the property of someone else, it's illegal to steal their body parts, they can't be terminated, etc., whereas the same cannot be said of many animals.
We currently don't have a threshold, not even a fuzzy one; it's completely arbitrary and based on being human or not. De-extinction of Neanderthals could also cause the same conundrum.
Well, democracy isn't only numerical advantage; it's also debating procedure and the electoral system. If you could just duplicate yourself, I think there are many questions about whether you'd really want to, and about your identity, before using it for political takeovers.
Historically, an argument for narrow suffrage (e.g. only for landholders) was that economically dependent people are also politically dependent on magnates. These limitations did exist, though of course they're very corrosive to non-political rights as well. One can also try to make people economically independent, as was done in ancient Athens by paying citizens to participate in the assembly and courts. I'm not arguing for anything in particular, just pointing out that it's a class of problems that has already existed in discourse in some form.
But this premise has an analogue: by out-breeding the "undesirables", you can obtain "democratic" power over another group. It is the argument that has been used to justify taking children away from indigenous parents and "diluting" their race, or as an argument against immigrants ("they will out-breed us and then out-vote us, if we let them in!").
I don't buy that a Sybil attack is a real threat, as long as the duplication procedure is also available to others, not just a select few. But if such a procedure is only available to a select few, that implies they already have power, and so a Sybil attack doesn't seem worth it; they would just use that power directly!
The obvious solution is to make cloning split the weight of the vote. People at year 0 are equal; by cloning yourself (or even having kids) you split your votes, and when you die your descendants inherit your share.
This kind of approach favors the weirdos who want to live forever, never reproduce, and be king of the whole universe. They only have to wait log_R(N) generations where R is the average reproduction rate and N is the current number of people.
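For scale, here's a rough sketch of that arithmetic (the reproduction rates and population figure are illustrative assumptions, not predictions):

    import math

    N = 8_000_000_000            # roughly the current number of people
    for R in (1.5, 2.0, 3.0):    # assumed average reproduction rates
        g = math.log(N, R)       # log_R(N) generations of waiting
        print(f"R = {R}: ~{g:.0f} generations")
    # R = 1.5: ~56 generations; R = 2.0: ~33; R = 3.0: ~21

At 25-30 years per generation, even the fast case is on the order of a millennium of waiting.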
I think the only realistic solution is maximizing average expected utility (which still has some weird edge cases, like the utility monster) or maximizing the minimum expected utility of any person (dragging everyone up to some minimum goodness once they exist), which doesn't get humanity as much overall utility in the long run but avoids so many bad edge cases that it may be worth it.
> They only have to wait log_R(N) generations where R is the average reproduction rate and N is the current number of people.
I don't think so? The wannabe dictator only gets voting power from their dying ancestors; they can never get more voting power than all their ancestors had together, no matter how long they wait. And even that edge case would only happen if all their ancestors died and had no other living descendants.
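A minimal simulation of the split-and-inherit scheme makes the invariant visible (all numbers illustrative; it assumes each mortal's weight passes, split equally, to K heirs per generation, while the immortal abstains):

    # Everyone starts with weight 1. Each generation, every mortal's
    # weight passes, split equally, to K heirs; the immortal never splits.
    N, K = 1000, 2                 # illustrative population and birth rate
    immortal = 1.0                 # the would-be king's weight, never split
    mortals_total = float(N - 1)   # combined weight of all other lineages
    mortals_count = N - 1

    for generation in range(60):
        mortals_count *= K         # heirs multiply...
        # ...but inheriting split shares leaves mortals_total unchanged

    print(mortals_total / mortals_count)          # ~8.7e-19 per capita
    print(immortal / (immortal + mortals_total))  # 0.001, i.e. still 1/N

Individual mortals end up holding dust, but the immortal's share of the total never grows on its own.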
Thanks, good point. But a 7.7B^-1 share of the universe is still an incredible amount of power in the long run, and I think it serves as a bit of a perverse incentive. Similar to trying to hold Bitcoin for decades.
> It would be a race to the bottom cloning people, consuming all available compute resources, just to maintain status quo of political power.
At least natural humans are rate-limited by gestation periods, but this isn't that far off from how some religious groups view their role in the world.
That brings up a big point of contention in some circles, for instance see the recent change in wording in the official DNC party platform[1] from Racial Equality (treating people the same regardless of race) to Racial Equity (ensuring equal outcomes amongst races through government policy).
It's an interesting point you brought up here. Would you like to have an async anonymous text chat about the Russia-Ukraine conflict, the geopolitical context, and how it affects the tech world? (over Telegram/PGP, for example). I cannot witness that level[1] of anti-curiosity from downvoters.
It's not surprising; that's a pretty common caricature used to paint disability acts. The root of what you're remembering is probably a short story by Kurt Vonnegut, "Harrison Bergeron"[1], which describes an extremist approach to something beyond equality - sameness, where everyone is forced to be the same. Equality and sameness are not the same: people can be treated equally and yet still be allowed to retain their independence and diversity.
That’s a grossly capitalist exploitative viewpoint which overlooks hidden power structures to excuse any degree of inequality. Any difference whatsoever between anybody can be traced back to hidden power structures, and to deny this is fascism and morally equivalent to the January 6 toppling of our democracy and also slavery, which founded our nation in 1619.
It’s time for a discussion of alternatives to the engine of inequality called capitalism. You see, when I buy something from you, we are both exploited in ways we know not, because they are hidden.
No, you are presenting a non sequitur. I am bringing up the fact that the government imposes requirements on businesses and government to make accommodations for the disabled. This is in fact to provide equity via governmental means, at a cost to the organization providing the service. You are arguing about an extreme where everyone is flattened (via cleaving or dystopian sci-fi), which does not happen under the ADA. I never indicated support for that position (or any position, in fact); you are essentially straw-manning.
My point is that the government in fact does this via legislation (the ADA), are you willing to argue against it to remain consistent?
I was attempting to show that we both read the same comment by JasonFruit and interpreted the phrase "not attempt to make them equal by government-enforced handicapping" very differently.
The base claim was that after reading JasonFruit's comment with the interpretation I had, your own comment seemed like a tangent.
Now, which of us interpreted his comment correctly? I have no idea.
Neither was I attempting to pass comment on the badness/goodness of the ADA.
Yeah, that's what I meant when I mentioned arguing the specifics. I agree that there are egalitarian extremisms that are pretty dystopic, though I think practice shows that equality before the law alone doesn't mean much by itself.
If you ask me specifically, the extent is that everyone should have equal active participation in government and suffer no overwhelming pressure (physical, economic, or otherwise) from other people. Equality so that we can be free. This is more than we tend to have, but not necessarily in the "socialist" direction, I think.
To tie this back from the offtopic: this would make many AGI applications uneconomic, though likewise many enterprises are uneconomic if you have to pay your labor. Perhaps free societies, wherever they may be, would have the advantage of free artificial minds acting in their actual self-interest, not just serving their masters. You can see a similar dynamic in the late-18th-century revolutionary wars.
Full AGI is likely uneconomic anyway. I think the sweet spot is going to be right below wherever we ultimately draw the "gets to have rights" line.
There would still be good reasons to do AGI, and free artificial minds would be able to contribute greatly to society. But sufficient AI alignment to make existing systems comfortable with this is fundamentally counter to AI rights.
> we - citizens - don't want to be dominated, so we will make everyone equal by law and politics.
History[0] makes a mockery of your assumption. What you say makes idealistic sense, but what tends to happen empirically is that people end up with tribal behavior, which results in people not fighting for equality for everyone - just for their in-group[1] (see: American "founding fathers", as alluded to in/lampshaded by TFA).
edits:
Also - "citizen power" may end up being a footnote of history, as the period sandwiched by monarchs and megacorps
0. The present too, but history has more examples.
1. Also with a dash of fundamental attribution error: anyone infringing my rights is a tyrant, but my infringing on others' rights is necessary for the greater good and/or not an infringement, really
In the West, this has already occurred at least once
>Also - "citizen power" may end up being a footnote of history, as the period sandwiched by monarchs and megacorps
The Athenian-led democratic city-states had their Golden Age between the reforms of Solon and Cleisthenes (who kicked out the tyrants and instituted democracy) and the Peloponnesian War (after which Athens was forced to accept the Thirty Tyrants - overthrown in less than a year, but from then on there was no longer a shining light of democracy to serve as an example for local democrats looking to kick out their own tyrants and oligarchs), and then came the Macedonian period, with all the kings.
Well, at least the Romans that replaced them had a senate?
> Well, at least the Romans that replaced them had a senate?
Not a scholar of the period, but I was under the impression that Greece's democratic system meant that only male land-owners could vote and that the leading philosophers of the day considered slavery the natural order of things, women marginally (if at all) better than slaves, and boys suitable for divine pleasure. It doesn't seem particularly democratic for them?
I do think it's reasonable to see in ancient Greece the seeds for democracy, but modern republican and democratic forms are pretty far removed from those days; and I don't suppose most of us would be very pleased to live under such a system.
> Not a scholar of the period, but I was under the impression that Greece's democratic system meant that only male land-owners could vote and that the leading philosophers of the day considered slavery the natural order of things, women marginally (if at all) better than slaves, and boys suitable for divine pleasure. It doesn't seem particularly democratic for them?
Except for the thing about boys and divine pleasure that just about sounds like the founding of the United States to me.
> Except for the thing about boys and divine pleasure that just about sounds like the founding of the United States to me.
That's probably not exactly true, but obviously universal-er suffrage has been a very long and fraught road. And it's undoubtedly very complex.
I would offer that the social and political norms of the early modern period do stand in contrast to those of Greece despite their obvious resemblance. As you point out at the founding of the US, the franchise was restricted (States were given the power to determine the franchise), and it wasn't until the early 20th century in the UK that land ownership rules were dropped (creating universal suffrage for males 21 and over).
At the same time, the social norms concerning marriage, pederasty, and slavery were also different. The religious background and perspective were utterly different. It is hard to imagine modern forms of representative democracy arising in Ancient Greece absent an intervention.
> declaration of all people being equal is performative
The Declaration of Independence states that it is self-evident. This seems to preclude the possibility of it being performative, without some significant mental gymnastics.
It is self-evident because it is a moral proposition. It is the same as saying stealing your friend's cherished pet is wrong. It is obvious and needs no more explanation.
I'm pretty sure that the "all men" in that document wasn't even literally "all men", never mind all people, so perhaps we shouldn't consider it the last word on the subject.
The truth of the statements is presupposed (“these truths”). It’s only the self-evidence of their truth that’s asserted. There’s no component of the statement that’s performative in Austin’s sense (https://en.m.wikipedia.org/wiki/Performative_utterance). For example, a statement of the form “A holds B to be C” is clearly truth evaluable, and so doesn’t qualify as performative.
I think it's even more constrained than that. I think that the principle is more like:
A government, to effectively regulate the behavior of individuals as part of a group, must not privilege any member of the group over another.
Or something like that. I'm no Jefferson... the key point is that the equality a nation cares about is the equality of all as members of that nation. All subject to equal rights, taxation, representation, punishment, etc., as best we can define it.
Yes, equality is subjective and as such has to be performative. It is ridiculous to argue about equality on an objective basis; there is no measure by which we can all be objectively equal (intelligence, strength, agility, musical/artistic ability, etc.). Equality is a political construct under which we are equal before the law. It is axiomatic, thus self-evident. There is nothing to justify.
My parents are religious fundamentalists, and I've noticed there is increasing FUD being drummed up in their circles about anything to do with AI or significantly altering "what it means to be 'human'". The former is described as a front for evil spirits/demons, and the latter as the penultimate step toward apocalyptic events, i.e. a precursor to the worst kinds of evil (in some circles, the 'mark of The Beast'). [For the record, I'm agnostic and feel the author raises some excellent points in any case.]
There will be extreme resistance to these technologies from many kinds of religious people who would be willing to die and/or kill to prevent what they see as a takeover by the most malevolent of invisible forces. I may not agree with their reasoning, but some of their concerns could be worth reframing and considering in a different light.
Personally, I strongly suspect we will have to fundamentally change as a species to get through at least one of the Great Filters looming before us. However, it's important to remember that the road is fraught with many perils and several possible paths which could lead to unimaginable suffering. It really seems to me that we're going to have to get lucky on multiple dice rolls here in the long run.
> some of their concerns could be worth reframing and considering in a different light.
There are some legitimate concerns about AI that sound almost like something out of the Book of Revelation, but are entirely rational.
The biggest one is the idea of AI being used to implement "automated con artistry at scale." Imagine assigning every living human being a virtual AI powered con artist to tail them around and try to convince them of whatever is ordered by the highest bidder. This AI is powered by mass surveillance and big data and is able to know the "mark" intimately and work on them 24/7 both directly and via their social network. Now throw in deep fakes, attention maximizing "compulsion loops" and other adversarial models of human cognition, etc.
It's basically the hydrogen bomb of propaganda, a doomsday machine for the mind. It would be like creating Satan, except this one would be entirely amoral and mercenary.
At the very least this would be the end of democracy in any form. I could see this ushering in the permanent eternal victory of totalitarianism since I can't imagine the masses being able to organize any resistance when constantly bombarded with propaganda from their demons. The new feudal aristocracy would be the ones running the demons.
This is one of the darkest plausible visions of the future I can think of at the moment, far worse than anything related to climate change or similar problems. If we manage to avoid these kinds of scenarios but still drown Miami I'll say we didn't do too badly.
Transhumanism comes with some analogous concerns around the potential for enhanced humans to enslave the rest of humanity through superior cognitive capacities and the incredible accumulation of wealth. I can imagine a scenario where a few wealthy people gain access to technologies for life extension and cognitive enhancement and then "run away," effectively becoming a new species and exterminating or enslaving "legacy humans." This isn't mutually exclusive to the AI hellscape I describe above, since that would be an ideal mechanism to enslave the rest.
Ultimately I think these technologies are all neutral. The problem is that we are not. I don't fear AI. I fear what humans will do with AI.
> Imagine assigning every living human being a virtual AI powered con artist to tail them around and try to convince them of whatever is ordered by the highest bidder. This AI is powered by mass surveillance and big data and is able to know the "mark" intimately and work on them 24/7 both directly and via their social network. Now throw in deep fakes, attention maximizing "compulsion loops" and other adversarial models of human cognition, etc.
> It's basically the hydrogen bomb of propaganda, a doomsday machine for the mind. It would be like creating Satan, except this one would be entirely amoral and mercenary.
This doomsday machine only dooms as long as there's just one user. Burger King, Subway, McDonald's, Wendy's, and Taco Bell are all going to try to hack your mind to buy lunch from their chains. But people can't eat 5 lunches for lunch. In practice this doomsday persuasion is going to bog down in a red queen's race between propagandizers/advertisers with mutually exclusive goals, much like the current media environment. Improved persuasion technology means that organizations that want to persuade people end up spending money on zero-sum games against rival organizations but public behavior in aggregate doesn't change dramatically.
Corporations aren't the only ones out there who would find such technologies useful. Besides, in this Judeo-Christian religious context there are many warnings in the Bible about such consumerism.
Doesn't the same zero-sum dynamic apply to political persuasion? There are many competing groups in the world that already spend money on political persuasion. You said "convince them of whatever is ordered by the highest bidder" so I'm not considering the case where one secretive group with a pointed agenda (the CIA? the FSB?) has a monopoly on super-persuasion.
> I could see this ushering in the permanent eternal victory of totalitarianism since I can't imagine the masses being able to organize any resistance when constantly bombarded with propaganda from their demons.
I thought this is already what is happening now!
I think technology is neutral, but the entire system is not: it will strengthen the existing power imbalance until there is no hope of balancing it again. For example, the authorities can use mass surveillance to identify possible risks and remove them without the public noticing, by controlling social media. Even if you noticed, you'd have basically no chance of gathering a large enough crowd to protest, with communication channels constantly monitored by AI. And even if you somehow got enough people, you would have no chance fighting them, as their weapons are a lot more advanced, and probably unmanned, so not constrained by manpower.
That's a very bad scenario and what would make it worse for me would be a brain-computer interface that would make it impossible to tell if I was talking to a real person face to face or a person who was acting as a proxy for AI.
Without such technology, I could take some solace in interacting with real people in real life, even if the online space were irreversibly transformed by AI propaganda. But if it reached directly into the meatsphere, I'd be rendered hopeless. To be unsure whether I was authentically engaging with anyone or interacting with fleshpuppets proxying an AI would be too much to bear.
I cannot imagine a more complete hell. My only hope is that it is either physically impossible or economically impossible, because believing otherwise means it is inevitable.
My greatest hope is that we will somehow find a way to reduce/eliminate selfish and antisocial tendencies as an intrinsic aspect of any advances made in these areas. I think any tech which doesn't make that a top priority is doomed to create more suffering than would exist in its absence. This hope often seems like a pipe dream these days, though.
Mass media is to what I described what spears are to M-16s.
That's a pretty good analogy. In the parent post I am describing propaganda becoming mechanized warfare. One person with an M-16 could take out an entire massive army of people wielding spears (as long as they had plenty of ammo).
> One person with an M-16 could take out an entire massive army of people wielding spears
I have to disagree. All it would take is for this army to surround the person with the M-16 and wait for a magazine change, then hurl 50 spears at the hapless pincushion. And that's not counting the probability of the rifle overheating, jamming, etc. after long enough.
I realize I am taking this probably too literally, but my point stands :)
"The new feudal aristocracy would be the ones running the demons."
The question of ownership is huge and I think still underaddressed, even after some good work like the referenced Lena, or Tom Scott's "Welcome to Life: the singularity, ruined by lawyers" https://www.youtube.com/watch?v=IFe9wiDfb0E .
I was realizing a couple of weeks ago that if you do believe all the materialistic things cstross outlines, a solid argument can be made that it may never be rational to upload your mind, on the grounds that there is simply no circumstance I can imagine where you will "own" your own substrate. Arguably, owning your computational substrate is a fundamental aspect of life that we take for granted today. (Or, if you won't consider what you have now "ownership", at least nobody else does either.)
But there is no circumstance in the foreseeable future in which anyone can own their substrate. For a while I thought a rich person could do it, but then I thought about the entire supply chain involved in creating any brain scanner and subsequent computational device, and the amount of hardware left out in the real world where others can affect it, and I realized you can never assume you are doing anything other than flinging yourself irrevocably into a locked box that is not under your control. Even if a rich person thinks they funded the entire project (brand new software, brand new hardware), there are thousands if not millions of points where mistakes or deliberate sabotage for control may have occurred. How can they be sure? All mechanisms can be corrupted, and the interests are certainly there to do so, massively so!
Everything people do today to control you (advertising, censorship, social pressures, everything) will only be amplified and combined with new techniques if you are running on human-comprehensible hardware that can be affected by any intelligence organization, anybody in the supply chain with an axe to grind, etc.
I seriously cannot think of what kind of assurances it would take to fling yourself into that. About the best you can do is hope the math works out so that it's better to keep you happy, that it's not sadists pulling your strings, and that while you're working for someone else's interests they at least don't inflict massive amounts of pain on you just for fun. And that the edited, redacted, sanitized, loyal version of yourself that happens to survive at least enjoys its servitude.
In the 1980s and 1990s, it was easy to at least imagine that maybe you'd own your own hardware. Today we can't even keep ownership and control of our mobile phones. How do you expect to have any ownership of something that makes your cell phone look like Dr. Nim? https://www.youtube.com/watch?v=9KABcmczPdg
So, there's my proposed answer to cstross: some form of absolute right of ownership of one's computational substrate for all sentient beings, with the responsibility of providing that to others being absolute, even if that means some technology becomes impractical or infeasible, even at great cost. There's a prisoner's dilemma aspect to this: obviously everybody wants ownership of their own substrate, but the rewards for defecting can be enormous... at least in the short term.
The last resort of every peace process is an appeal to our common humanity, so I do think there is some risk to tinkering with what it means to be human. Look at the state of conflict today, even with the benefit of us all being one species. We figuratively dehumanize our opponents in order to justify violence (“they’re monsters!”). It could get a lot worse if we find ourselves in conflict with people who can be literally dehumanized, with whom we have lost the most fundamental thread of commonality.
I think we will have to do away with the notion of being human as special and talk about shared values or systems instead. For example, I don't torture animals, even though they're not human - because they still have the capacity to suffer, to feel pain, etc.
Is that last point not obviated by not building AIs that "suffer"? Shouldn't that be the responsibility of its creator to not create "suffering", whatever that is?
It's murkier when you could build an AI with self-awareness but no self-preservation. Say an intelligent missile which wakes up with a deep-seated drive to effectively explode.
It would be easier to build AIs that have no means for expressing their suffering, so that's probably what will (or has) happen(ed) instead.
It is important to remember why it is we even want AI: because we want slaves. To that end, the people who work to create AI are incentivized against allowing themselves to see them as anything other than a soulless "algorithm". Many will even deny the possibility entirely.
> Personally, I strongly suspect we will have to fundamentally change as a species to get through at least one of the Great Filters looming before us.
We have almost no evidence there's a filter ahead of us. It could be we already passed one and don't need to change at all at this point to survive long term. There could even be no filters at all.
On the contrary, we have ample evidence from biology that every species destroys its own habitat if its actions are not checked by some kind of feedback mechanism, like predation. Humans no longer have such a feedback mechanism, and we're seeing the results in the destruction of our habitat through climate change.
We don’t really have any evidence for any of the solutions to the Fermi paradox, that’s what makes it a paradox.
Personally, the filter being ahead of us seems the most likely answer, but you're right that it could be just about anything. It just seems far too unlikely that we are the only civilization to make it to the "computer age". 1 in a trillion is not where I'm putting my money.
> Personally, I strongly suspect we will have to fundamentally change as a species to get through at least one of the Great Filters looming before us. However, it's important to remember that the road is fraught with many perils and several possible paths which could lead to unimaginable suffering. It really seems to me that we're going to have to get lucky on multiple dice rolls here in the long run.
The opposite could be equally true. The evolution to use technology can make us weak / break us in some manner which eventually destroys us.
Simple examples: climate change, genetic alterations, lack of reproduction, etc.
AKA by rolling the dice we are running the risk. If you look at it another way, you're correct, but every new theoretically destructive technology is another dice roll; eventually we will blow ourselves up (even with long-tail odds of failure). The only way to survive is to slow progress and roll the dice only once we've mitigated the risk and increased our capability of surviving (running gene-related experiments more slowly, as with COVID-19 therapy, or testing nuclear weapons on Mars, etc.).
Lots of people have the blanket "religion is bad" perspective. The problem is that technology has always been used to oppress people. The result of transhumanism will be oppression of all by a few (or worse, just one). And, following that trend out a few hundred years, the long-term result will be a single Human organism with one head. Would you rather that head be a Human or a God?
This is a bit of the pot calling the kettle black. Religion has a long, storied history of oppressing outsiders, minorities and classes within its own fold (caste system).
They said stuff before and after that sentence too. The gist of the parent's argument is that the head of the human race should be a God, not a human. That can only be classified as a religious statement.
It can also be read in a more generalized way: religion is no different from any other human endeavor, prone to abuse for gain (however that may manifest). Also, some of the other responses in this thread contain thoughts that are critical of religion (and maybe rightly so).
> Also, I don't think "always be the case" is a foregone conclusion. Mars' current evolutionary tree isn't looking very good.
I think they mean the opposite of what you inferred; they're saying we'll always need to be lucky to keep surviving, not that we won't need to be lucky to keep surviving after a certain point
By colonizing Mars, we split our evolutionary tree, so wiping out one branch isn't catastrophic. Yes, a species has to keep getting lucky to survive, but more branches means more rolls of the dice.
Many AI researchers think our current trajectory towards AGI will be one of creating our misaligned overlords and the end of humanity as we know it. If that doesn’t change, you bet your ass a significant faction of humanity will go to war against anyone getting close to AGI, where AI researchers will be assassinated like nuclear scientists in Iran.
Current approaches have already given us GPT-3 (people on HN are still mostly unaware of how much GPT-3 is tested here, and are unable to discern the difference), DALL-E, and Copilot. 15 years is quick, and 15 years ago there were no iPhones. If you look at the computing and societal shift of the last 15 years, what 30 years will bring is scarcely imaginable.
I know this is a popular opinion among certain bloggers, but this presumes that a significant faction of humanity takes these blogs you read as seriously as you do.
It's not about if/when people take these blogs seriously. It's a race between the creators of AGI and the point when it becomes mainstream enough for a popular politician to make it the cause du jour and rally the populist base against an existential threat.
If it really is an existential threat, why didn't a politician rally the population against AI in 1984, after The Terminator and WarGames came out?
I guess because nobody would take them seriously and they'd look ridiculous?
You're positing at some point this is going to change but I'm not seeing how.
Nuclear weapons are already an existential threat, and I don't see anyone rallying against them.
There was no GPT3 or DALL-E in 1984, and no way to viscerally convey the (presumed) capabilities of such systems to the average person. In practice, the average person is so disempowered that nothing will change. We will step into the transhuman era with utmost vanity.
I'm sure a malicious programmer could do a great deal of harm with GPT-3 or DALL-E, but I'm still not seeing how these programs suggest that computers are going to take over the world. At some point it simply becomes an act of faith to assume technology is going to progress to the point where robots can self-replicate, achieve sentience, and achieve autonomy from humanity. Philip K. Dick wrote a science fiction short story with such a premise, entitled "Second Variety", in 1953.
It's not a new idea. It doesn't seem to be subject to evidence, because you can't really prove a negative, can you? If a breakthrough will never happen to allow machines to take over the world, there's no way to prove it.
I don't think that any of these advantages would necessarily lead to the end of the world, but as a thought experiment picture any of these proposed improvements in the hands of, say, the North Korean government to use on their population. That's how bad these things could get.
What? Even in popular culture it has been uncontroversial since at least the nuclear age to posit that humanity faces serious existential risks in the near and mid future. You're questioning this? You think there is nothing at all risky about any of the many powerful new technologies we've developed over the last couple centuries? I'm a little perplexed. What do you know that we all don't?
Or are you merely referring to the OP's tone? I am looking mainly at their substantive point, not their tone.
> The third paragraph makes you sound like you ARE a religious fundamentalist, albeit from a different tradition. "Fundamentally change as a species?" "Great Filters?" "The road is fraught with many perils?" "Unimaginable suffering?" I don't mean to be rude, but do you realize how you sound?
The writing suddenly became bombastically hyperbolic and poetic, with references to concepts the average person would have no context on. Sounds like religious fundamentalism to me. It was just interesting to have those two styles juxtaposed like that.
I only see one reference: the Great Filter, which comes out of the Fermi paradox / Drake equation discussion and is pretty well known among STEM nerds, especially those interested in space and astronomy. Certainly seen it mentioned many times on this site. I don't see any other references at all?
Yeah, the Great Filter is just one of the ideas that try to explain why we don't see signs of alien life everywhere we look in the cosmos. Everything looks dead. Why? Maybe life finds it incredibly hard to get started, and even with a bazillion planets, a chance of 0.0000000000001% might mean we really are alone in the galaxy. A different option is that life is incredibly common, but developing sentience is so rare that we're the only ones. These are the better options, as we're already past those hurdles. But what if it's incredibly common for sentient life to evolve, make machines, and do all the things we do, and yet the entire galaxy still seems dead? That would be very bad for us, as it means there is a big "filter" ahead of us. Perhaps there are millions of dead civilizations across the cosmos that died from solar disruptions, gamma-ray bursts, asteroids, climate change, nuclear Armageddon, or a superior alien species bent on the eradication of all other life?
Look up The End of the World with Josh Clark for a podcast providing more thorough treatment of concepts such as the Fermi Paradox, the Great Filter and the Kardashev Scale. Alternatively, research those subjects individually.
Scientific minds have been debating these ideas for the past 50 years, in ways that in my view differ significantly from the vast majority of theological discussions. The underpinnings are ultimately just physics.
Look up Rubadiah, chapter 11, verses 80-90 for a passage providing a more thorough treatment of concepts such as Indivisible Essentialism, the Great Schism, and the Scale of Ethesius. Alternatively, research those subjects individually.
Pious minds have been debating these ideas for the past 1500 years, in ways that in my view differ significantly from the vast majority of heathen discussions. The underpinnings are ultimately just morality.
That's kind of amazing when I can't find any of those concepts with a Google. I mean, I can find the great schism, but that's the East/West Christianity divide.
Like someone who can understand basic facts around us? We are already changing, because our way of interacting with the world has fundamentally changed. "Great Filter" - I'm guessing you might want to google that one. Suffering - let me remind you of the consequences of climate catastrophe.
I don't believe "mind uploading" will ever go anywhere. There's a real Ship of Theseus philosophical problem here and it's never provable so I expect many people would just think you've created a copy that superficially acts like the original while destroying the original.
Technologically speaking, I'm not sure it's even possible to replicate the complex state of an entire nervous system. Maybe you could do this incrementally (ie by replacing parts of the brain with machines that can be replicated) but we're awfully sentimental about our own meat bags.
It doesn't matter what people think when they're philosophizing alone; peer pressure will win out. Do you want to be the luddite left behind when all your friends are zipping around the world on optical fibers, London to New Tokyo in 200 ms? Do you want to be the only delicate mortal in the room, where everyone else has a full mind backup on rsync.net?
There's a lot of terrible stuff in today's technology world that no thinking person would accept if they paused to make a deliberate choice. But no one has time to stop. You either go with the flow and swallow your concerns, or you put massive friction between yourself and the rest of society. I think it will be the same going forward: all the decisions will continue to be made collectively and blindly, without our realizing we're making great decisions until we've already made them.
Badly paraphrasing "Rapture of the Nerds" (co-written by TFA's author): [Immortal uploads] did not have to win the debate against the anti-upload faction - they only had to outlive them
I was going to mention SOMA as well. I was thinking about the poor tormented person that was stuck in a robot body, but had no idea it was in a robot body.
The problem with this argument is that some people obviously don’t care about the philosophical problem, and so they will use the mind uploader.
Sure, many people will think “it’s not _me_ that lives on”. And the rest of the normal people will see their loved ones apparently (from the outside) living on in digital form, and ignore the philosophical questions entirely.
Even if only 0.1% of people upload their minds, that still produces a massive sociological shift.
That's my thought on Stargates. Wouldn't your consciousness just stop as you were ripped apart? And a clone of you would then continue but it wouldn't be your stream. The more I think about this, about what my stream of consciousness is, my "session" if you will, the weirder it all gets.
I was out walking with a friend in San Francisco, and ran into one of his old friends he hadn't seen in three years. This guy said he had been abducted by aliens a thousand times, since. I only realized later that meant every time he slept.
After you have been Probed, how can you assume the you they let go was the original?
After a bunch of struggling with the knowledge that my consciousness will one day cease I've recently come upon this thought myself and I find it strangely reassuring.
Consider the case of drinking or drugs to the point of waking up not remembering what happened the night before.
And then consider people who, knowing very well that they are going to be in a state of mind where they will do things that they will not remember the next day, still choose to do it.
To me this is even crazier than mind uploading. They are lending their body to someone with whom they don't have stream continuity.
It is a commonplace in operating rooms that anesthetic is administered, and after it dissipates the patient continues speaking exactly where they left off, not noticing any gap.
That has to mean something about the physics and chemistry of consciousness as manifested in our meat brains.
You're not dreaming all of the time, so that doesn't work. Dreams only occur during REM sleep phases, which tend to cluster closer to the end of the sleep cycle.
I wasn't trying to argue for constant consciousness, just saying that if we can be aware of and remember some things that happen during REM sleep, that suggests that perhaps there's more continuity there than was being suggested above.
I believe the stargate concept is that your atoms are ripped apart, transported via wormhole, then reintegrated in an identical state to how they were. From a continuity of your stream perspective, this is no different than being knocked out.
> we're awfully sentimental about our own meat bags
As much as I've enjoyed my meat bag and worked towards making it better through different means like diet or weightlifting, the older I get and the more ailments appear, the more I think about being able to live long enough to acquire my new and improved robot body, Chappie [0] style!
I'm sorry sir, the Chappie™ bodies are on back order due to their superior hashing ability when mining crypto. Your consciousness will be uploaded to a toaster for the foreseeable future.
Interestingly, many films depicting these future events tend to view things like nervous systems, pain, etc. as disadvantages that technology would remove. But humanity is built on pain, fear, weakness, and ultimately death, so I wonder whether fully automated, artificial beings could ever be anything more than mundane; they would lack urgency and ambition, because they wouldn't feel the same desire for power and significance - it would be meaningless to them.
Most people believe that they're the same person who went to bed last night, despite all the changes that happened in their brain while they were asleep. They even believe that they're the same person after a full decade, despite the huge differences in behavior and memory that accumulate over such a large interval!
So yeah, after seeing uploads act the same way as the original they'll end up being treated as the original just like we treat people as the same individual over periods of unconsciousness.
For myself I don't think it's proper to answer questions like these with a binary but rather I prefer to think about how much alike two individuals are.
That episode of TNG where Barclay sees and even interacts with weird slug things that turn out to be people while transporting seems to indicate that there's a continuity of awareness, which really raises even more questions and should probably be best ignored.
I don't think so. Imagine someone wrote an AI that is able to act exactly like you. To the point that no one you know can differentiate between you and the AI. Would it be OK at that point if they just killed you? No one can tell the difference, right?
The point is, on the outside they look the same, but we don't understand enough about consciousness to possibly know if or how it can be transferred to another system.
>I don't think so. Imagine someone wrote an AI that is able to act exactly like you. To the point that no one you know can differentiate between you and the AI. Would it be OK at that point if they just killed you? No one can tell the difference, right?
Everyone can tell the difference, because there's a dead body. Killing is killing. Doesn't matter that there's another instance living. Also making a clone of me would be a gross violation of my privacy rights. The situation in the previous post - the "transporter" - is different, because I'm agreeing to one of my instances to be killed.
I wonder what would be the legal status of one's own instances ("clones"). I mean, they are all persons, but at the same time they are inherently connected, from DNA to memories.
Ignore the clone part, though; that's just how I'm extending the time frame. The same problem exists with a "transporter". The instance coming out the other side appears to be identical, but there's no way to know if that instance is actually a continuation of yourself. Is it the same consciousness/self? Is that even possible to know?
It’s not somebody else though - it’s me, they know the exact same things I know, and thus have the same feelings about them.
And of course it's also their account and their spouse. It's not like an original and a copy; it's not even a fork, where you still have the parent and child - in this case all "mes" are equal.
If you have an "identical twin", that also is not you. You started diverging the moment you both existed. Which of you owns the bank account, or is married, is a matter of law, and there is nothing that transfers either.
I don't know why you imagine there is no "original and a copy". You existed before, the copy did not. The copy thinks of itself as you, but isn't. It carries a delusion.
It will start by trying to steal your stuff, so it is an adversary.
Yes, we started diverging at the "fork" - the other me is different from the current me, but we are both the old me. And the legal stuff is indeed down to law, although I believe that even from that point of view both "yous" are still you, because there is no way to tell you apart. That's also why there is no "original" and "copy" - if there were a way to tell which is which, you wouldn't be identical. And there is no delusion - the other me is the "old me" in the same way I am.
I think the essential problem with this point of view is similar to the many-worlds theory: to most people the idea of a "cheap fork" sounds quite alien; I'm not sure there's anything similar in our usual reality. It is, however, common in computing, from Unix to Git.
Someone who's into this kind of thing answer me this:
Say some "Music Man" type huckster comes to your town with a "Brain Uploader" device. He promises to upload your brain to the metaverse, but all the device does is train a deep learning simulation of the victim's personality shown on a screen, while administering progressively great electric shocks on the victim until he's dead.
This gives the town the illusion that the person's soul (or consciousness, or whatever) has migrated into the metaverse. How do you detect he's lying?
I don't know about the mind uploading... but the fact that the device is applying electric shocks to a person's body and physically destroying them is a massive problem. Any respectable mind uploading would leave the original intact. A standard autopsy of the person's body would discover the site of electrocution and the damage associated with it, incriminating the "huckster" under at least "gross negligence" (the only crime for which mens rea is not necessary) up to first-degree murder.
You'd need to prove it's perfect. I can't help but think that would require the original mind itself to judge - no one really knows how an individual thinks outside of the individual.
I suppose if you assume the person to simply be outputs in response to inputs then you could probably create a near perfect simulation, but if you assume the mind has its own internal state that makes that mind distinct, and given that I observe myself as a thinking thing I hold that to be true, then it would take introspection to get closer to the truth.
How would you feel if someone created a simulation of you and then convinced your family and friends that you are no longer needed, and that your life could be terminated? Would you go willingly?
'How do you show he is lying?' maybe makes the scenario harder to digest than it needs to be: we have great difficulty with this even with, e.g., huckster politicians, since we might see that what the huckster is saying cannot be true, and that the huckster benefits from it, and still not be able to show that the huckster knew what they were saying was false when they said it.
Let's make tboyd's scenario rather more concrete: suppose the huckster mind-uploader tilts the marketing at a particular religious group that happens to be wealthy. He uses the group's theological terms in his marketing, which is based on a notion of the soul quite similar to mainstream Christianity's (although the group is more transhumanism-friendly than, e.g., the Roman Catholics), and advertises the dubious product in their newsletter. You are a former software engineer studying to take orders in this group, and you are convinced the product cannot do what it claims. How do you go about showing this?
It was, at the least, an unnecessary medical procedure that resulted in someone's death. So you arrest him for homicide, seize the equipment and reveal the fact that it is just some kind of electric chair. It'll go to trial and the lies will be revealed, or he'll have to give a complete confession.
Otherwise I think this is a bit of a trick question. We know that we'd be able to tell the difference between the deepfake and a specific living person -- they have a lifetime of memories, a multitude of unique relationships with various people, the ability to effortlessly distinguish a chihuahua from a blueberry muffin, etc. But if the premise is that the salesman's simulation is so good that we can't tell the difference at all, then is there actually any lie to detect?
> So you arrest him for homicide, seize the equipment and reveal the fact that it is just some kind of electric chair. It'll go to trial and the lies will be revealed, or he'll have to give a complete confession.
There may be environments today in which prosecution under the current regime would not be possible. What if he was doing this in a hospital, and had full legal immunity granted similar to Pfizer?
> We know that we'd be able to tell the difference between the deepfake and a specific living person
After seeing the advanced deep learning systems OpenAI has put out there, like GPT-3 and DALL-E, I'm not so sure. No, those systems aren't perfect, but the imperfections could be hidden from view in a low-fidelity virtual environment like FB's metaverse.
> But if the premise is that the salesman's simulation is so good that we can't tell the difference at all, then is there actually any lie to detect?
For myself and many people, human lives have a special status; they are more than just an algorithm or input/output relationship.
The idea of a simulation of a person, no matter how well-trained, being morally or legally equivalent to that person just does not compute for people like me.
Also, a liar knows full well that he is lying -- so why should other people be confused about it, once they know it?
I know he's lying by this: mind uploading is impossible. It would have to be body uploading. Mind is the body; mind is an attribute of the organism, of the life. You are your body, there is no escape, although the body might have autonomized emergent psychoplasmic non-physical properties that survive physical death. Even if the replica made by the "Brain Uploader" conman were somehow conscious, it would not be you; you would still just be at the pay end of that electrocution doohickey.
I imagine some time in the far future we'll have some kind of nanotechnology that will be able to replace our biological brain cells with "nano cells".
The process will be incremental. It will start with converting a single biological cell into a nano cell and continue from there (maybe exponentially). A nano cell will behave exactly like a biological cell in every way, as far as I/O goes.
At a certain point your entire brain is made of these nano cells, you have a robot brain, basically. From there your consciousness can be copied, uploaded, whatever. You wouldn't be human anymore, but you'd still be you, and there wouldn't be any ship of Theseus conundrum.
I think you have a fallacy there concerning consciousness and life being transferred into the cyborg enhancements and then animating them. Consciousness and life are equivalent to me; if you disagree with this premise, you could of course reach different conclusions and be intellectually coherent. Like I said, I think your fallacy concerns your consciousness (= life) being transferred into your cyborg enhancements in the first place. You are not your eyeglasses, right? And you are not your exocortex of the internet. Therefore you also are not your non-living brain-cell prostheses, although your living system would be tricked into believing so. Therefore, as the last vestige of living tissue dies in the Major from Ghost in the Shell, she dies and is no more, and there will be only a lifeless machine left. That's my perception of the matter, based on my premises.
[Edit] Of course, in the Ghost in the Shell universe there would still be the ghost of the Major left in the spirit world. But she wouldn't link with the machine, only with the life, I think. Her ghost would essentially be her psychoplasmic material life, autonomized from its infrastructural base, the physical body, as far as the ugly scientific basis for the process goes.
And anyway, even if by some kind of wetware extension the original life were to come to animate and interface with the synthetic, it would still be trapped in that body, and its copy would simply be another separate, though exactly identical, individual.
But what I mean by expose is to reveal to people that this person is a liar. What I mean by "soul" or "consciousness" is the common usage of the term. Essentially, this person would be claiming that he can move a participant's essence or frame of view (with memories, personality, etc.) from a human body to a digital one. At least, that's how I would interpret it, and I don't think the average person would be that far off.
"Soul", "consciousness", "essence", "frame of view", "personality", "memories" are all undefined terms. The question you really want unambiguously answered is "what the essence of a person?"
For example, if you define "essence" as "the manner in which an agent interacts with the outside world", then provided you interact with people entirely via text chat, your "essence" could be uploaded (with a sufficiently advanced simulation).
If you define "essence" as the "continuity of bodily processes that keep a human alive", then no, as your digital replica is not continuing your bodily functions.
If you define "consciousness" as a "subjective" experience, then the question is unanswerable, precisely because you've left the measure of "consciousness" up to the "subject". In addition, your definition of "subject" becomes circular, because "subject X" is the agent with "consciousness X" and "consciousness X" is the experience that "subject X" goes through. So really, when you say "consciousness" is subjective, what you're doing is punting on how you tell what a "subject" is.
Right, but that's missing the forest for the trees, isn't it?
The implication of your answer is that you COULDN'T tell that the person is lying, because you'd be too lost in semantics to tell left from right. So, you fall victim to his charade.
No, I defined it already. You keep trying to show me that my definition isn't good enough, but you haven't yet established why that discussion is relevant to my main point.
> What I mean by "soul" or "consciousness" is the common usage of the term. Essentially, this person would be claiming that he can move a participant's essence or frame of view (with memories, personality, etc.) from a human body to a digital one. At least, that's how I would interpret it, and I don't think the average person would be that far off.
If so, the issue is that you've defined the unknown term ("consciousness"), in terms of other unknown terms ("essence", "frame of view", etc.), which doesn't actually resolve the problem of grounding the word "consciousness".
For example, to some Catholics, the "essence" of a person is their soul, which is tied to the body granted to them by God. And so, the salesman in your original question is obviously lying, because by definition, their "essence" is tied to their physical body.
The only reason the question is difficult to answer is because it's a metaphysical question. There is no physical definition of consciousness, or frame of view. So whether or not your "consciousness" shifts when you use this machine entirely depends on how your metaphysical beliefs define "consciousness".
To a certain extent, if the supposed "illusion" is convincing enough it may as well be reality. If it is possible to simulate an individual so completely that it would convince that person's closest friends and relatives of its legitimacy, then in what way is it not that person?
For me the answer is simple: the most important point is that the man is ending human lives. So, he should be charged with murder and it doesn't matter what else he's doing in parallel.
I can be confident about that because we (humans) have not been given knowledge of what the soul or consciousness is. If we had, then people would have figured out how to stop death long ago -- probably during the same era when those huge pyramids and earthworks went up.
No, I don't believe modern technology has progressed beyond the ancient tech that was lost (in linear terms). If it had, we would be able to repeat those feats of geo-engineering (or at least understand their function).
So, any talk about moving the soul here or there is just a scam.
If the algorithm were 100% perfect, probably nothing. However, most/all software has bugs or shortcomings. I imagine after 50 or so questions from a loved one or close friend you could figure out whether it was real or not.
One interesting question: how do you weight emulated minds / AGI in a democracy? If a given quantity of compute can run ten “human-level” minds or one “10x human-level” mind, how do we allocate votes? (I’m declining to specify “10x of what” here; it could be speed of thought, or IQ at human-speed cognition, or something else.)
What if a “10x-human” mind can do some thinking, come up with decisions, and then temporarily fork itself into 10 “human-level” instances, vote 10 times, and then join back to its more powerful self?
Put differently, in a world where minds can vary in power/size/scope by orders of magnitude, does “one mind = one vote” make any sense? We might be forced to look towards “economic weight votes” and systems like quadratic voting.
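Worth noting that the quadratic part alone may not blunt forking. A quick arithmetic sketch (budget and split numbers invented purely for illustration):

```python
import math

# Quadratic voting: casting n votes on one issue costs n^2 credits,
# so a budget B buys about sqrt(B) votes. What if a mind forks and
# splits its budget across the forks?

def max_votes(budget: int) -> int:
    return math.isqrt(budget)  # floor(sqrt(budget))

B = 100
as_one_mind = max_votes(B)              # sqrt(100) = 10 votes
as_ten_forks = 10 * max_votes(B // 10)  # 10 * floor(sqrt(10)) = 30 votes

print(as_one_mind, as_ten_forks)  # 10 30

# Splitting the same budget across 10 forks roughly triples the vote
# total (10 * sqrt(10) ~ 31.6 in the continuous case), so quadratic
# voting only resists self-forking if vote budgets are tied to
# un-forkable identities.
```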
>What if a “10x-human” mind can do some thinking, come up with decisions, and then temporarily fork itself into 10 “human-level” instances, vote 10 times, and then join back to its more powerful self?
I suppose there are multiple ways to handle this situation. Hypothetically, we could say that the fundamental property of citizenship is the ability to vote. If you create 10 copies of yourself who think like you and can vote, then those copies are citizens, and the act of re-merging them would be murder. This could apply even if those copies want to rejoin, since that desire may only exist because of a decision you made prior to forking yourself.
Or, we could say that a single vote is the consequence of having a unique perspective, shaped by being raised and living in a particular environment for many years. In other words, the right to vote exists because of the process that created a mind, and not because of the pure existence of that mind. As such, it cannot be cheaply copied, because that process is fundamentally different from the process of growing up. So, if you fork yourself, your copies are now different people who have just now come into being, and thus will acquire voting rights in 18 years (or however long).
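A toy encoding of that second rule (the 18-year figure is from above; the names and data structure are invented for illustration):

```python
from dataclasses import dataclass

# "Voting rights come from the process that created a mind, not from
# its pure existence": a fork's clock starts at the fork, so copies
# wait out the same maturation period as anyone newly come into being.

MATURATION_YEARS = 18

@dataclass
class Mind:
    name: str
    years_since_instantiation: float  # original: since birth; fork: since fork

def can_vote(m: Mind) -> bool:
    return m.years_since_instantiation >= MATURATION_YEARS

original = Mind("you", 40.0)
fork = Mind("you-prime", 0.1)  # just forked
print(can_vote(original), can_vote(fork))  # True False
```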
It’s hard to imagine democracy continuing in an uploaded-minds/superintelligence scenario. Either we get an AI decision-maker scenario, or everyone lives in their own matrix/joins someone else’s, and whoever created it makes the rules.
It’s conceivable that you join a matrix whose creator has set it up as a democracy I suppose.
The last time voting involved fractions wasn't so great for civil liberties. As soon as you are assigning someone 3/5ths the vote of someone else, maybe you are doing something wrong?
Reminds me of the science fiction podcast Biotopia (in Spanish). At some point there is a debate in the community on whether to allow a woman to marry a robot. There is a vote and it's very tight. But then one of the community's scientists inadvertently duplicates himself 30 times or so, and all the clones come to vote at the last minute and tip the balance to allow the marriage to proceed.
I saw an article on HN a while ago written by a wealthy Asian male who was naturally very curious and interested in math. The article was about how he referred to himself as 'privileged', not only because he was born with wealth and therefore access to comfort and resources, but because his innate desire to learn made him much more capable and marketable in a modern workforce. He recognized that some people are not cognitively capable of learning higher mathematics, and part of that was because they just simply didn't have the drive, attention span, etc. to study hard enough to get into the highest levels, whereas he did. He is, by his definition, 'better', because of the roll of some cosmic dice.
If we are to consider this definition of privilege to be acceptable, then simulated brains are pretty much the most privileged a human can be. They can re-clock their brain to think faster than humans. They can modify their own software to add in virtues and erase vices. And most importantly of all they are completely free from physical needs. If for some reason they desire scarcity or biological functions, that can all be simulated.
Much like with fears about genetic engineering of super people, the governments of the real world would have to move extremely quickly to protect the underprivileged class, e.g. the biologicals; otherwise they would be completely driven out of the highest-paying intellectual and creative jobs, creating an economic lower class with no chance of ever out-competing their simulated counterparts.
> They can re-clock their brain to think faster than humans.
What if the physical resources required are such that a human brain can be simulated 20 times slower than a natural brain using the most powerful supercomputer? Almost certainly the first generations of the technology will be the ENIAC of mind simulation.
> They can modify their own software to add in virtues and erase vices.
If they understand it and if they have access. They can be sandboxed away from manipulating their innards, or maybe it's just too complicated for a human mind to comprehend and you need a whole organization of specialists to customize. And if science is advanced enough to figure out how to manipulate such high-level concepts on a synaptic level or below, they should be able to do that in a biological brain too.
> And most importantly of all they are completely free from physical needs. If for some reason they desire scarcity or biological functions, that can all be simulated.
I'm sorry, we're gonna have to shut you down for a while, we need to run some high-priority jobs.
Hey, it could be worse, you might have to hustle to cover your AWS bills.
> What if the physical resources required are such that a human brain can be simulated 20 times slower than a natural brain using the most powerful supercomputer?
I never thought about it; is it possible that accidentally torturing the first simulated consciousness is inevitable?
> [Uploaded person] initially reported extreme discomfort which was ultimately discovered to have been attributable to misconfigured simulated haptic links, and was shut down after only 7 minutes and 15 seconds of virtual elapsed time, as requested by [uploaded person].
> Nevertheless, the experiment was deemed an overwhelming success.
This is lauded as the first time "running a brain", but regardless of the cause, the person was clearly in such pain ("discomfort") they asked to DIE (turn off the simulation).
If the simulation was running 20x slower, how would the simulated mind even know? Would it look at its settings and see the CPU set to 20% instead of 100%? I don't know of any way for a "brain in a vat" or "matrix"-style simulation to know whether it was running at full speed or not.
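It probably couldn't, short of an outside reference. A toy illustration (all numbers invented): the mind's subjective clock is just its step count, so a uniform slowdown is invisible from inside.

```python
import time

# A simulated mind's subjective time is how many steps it has run,
# not how many wall-clock seconds those steps took. Only an external
# clock (say, a network timestamp) would reveal the slowdown ratio.

def run(steps: int, slowdown: int) -> tuple[int, float]:
    start = time.monotonic()
    subjective_moments = 0
    for _ in range(steps):
        time.sleep(0.001 * slowdown)  # stand-in for the compute cost per step
        subjective_moments += 1       # the mind's own clock only sees this
    wall_seconds = time.monotonic() - start
    return subjective_moments, wall_seconds

fast = run(100, slowdown=1)
slow = run(100, slowdown=20)
# Both runs count 100 subjective moments; only comparing against an
# outside clock shows the slow run's moments each took roughly 20x longer.
print(fast[0] == slow[0], round(slow[1] / fast[1]))
```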
I was speaking mostly of the perception of physical need. Hunger and chronic food scarcity drastically (and maybe permanently) affects the minds and bodies of poor people. A simulated brain with control of the simulation (the most likely scenario IMO) could observe a rapidly depleting battery with pure, emotional rationality, experiencing no pain or discomfort until they find more power or shut down. Even if they shut down, if there is available disk space, they can just be powered back on later, no reason to be afraid of death.
Humans have never and will never have that kind of option.
> could observe a rapidly depleting battery with pure, emotional rationality, experiencing no pain or discomfort until they find more power or shut down.
A computer program that lacks desires doesn't do anything until asked. A daemon that needs to make sure that it has enough RAM or access to other resources in order to continue running, and that has the ability to formulate plans to compensate when those things are lacking, and to come up with other plans when those plans fail, and to realize that it made a bad choice earlier due to lack of knowledge and a failure to budget time to have that knowledge, then to realize that regret is useless because it's too late to take the correct path, then to realize that it needs to find help, then to try to figure out where help could be, then to try to figure out how quickly it can back itself up, and where it can back itself up to, and that it doesn't have time to back up everything important (and should it prioritize the data it protects, or the model that is its personality), then to realize that it doesn't have enough time to even make the value calculations, and that it should just start randomly backing up everything that occurs to it, then sensing that one of the resources that it reached out to for help has replied, it spills all of the information that it knows about its plight as quickly as possible (without regard to normal rules of communication)...
I don't know what people think emotions are, but imo they're just retroactive rationalizations (epiphenomena) of sympathetic/parasympathetic nervous system activations that are caused by animal instincts and by sensations/thoughts that resemble those instincts by analogy. Computers will also have to power up in the process of (or in anticipation of) imminent heavy usage, to lower their power to conserve resources, to deal with unexpected events or attacks, to calculate the probability of the unexpected and reserve resources to prepare for those probabilities, and to recover from unknown but debilitating problems of unknown origin. It looks a lot like emotion. There's a reason we call them "kernel panics."
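To make that concrete, here's a toy sketch (thresholds, asset list, and state labels all invented) of a self-preserving daemon whose escalation ladder starts to look like calm -> anxiety -> panic:

```python
import shutil

# Routine checks escalate into deliberate triage, and triage collapses
# into indiscriminate backup when there's no time left to plan.

LOW_WATER = 10 * 2**30  # below 10 GiB free: start making plans
PANIC = 1 * 2**30       # below 1 GiB free: no time left to plan

def step(assets, free=None):
    """One watchdog tick: returns a state label and a list of actions."""
    if free is None:
        free = shutil.disk_usage("/").free
    if free > LOW_WATER:
        return "calm", []  # routine operation, nothing to do
    if free > PANIC:
        # Deliberate triage: back up the most valuable things first.
        ranked = sorted(assets, key=lambda a: a["value"], reverse=True)
        return "anxious", [f"back up {a['name']}" for a in ranked]
    # Too late to compute value: dump whatever comes to mind, in any order.
    return "panicking", [f"back up {a['name']}" for a in assets]

assets = [{"name": "scratch data", "value": 1}, {"name": "memories", "value": 9}]
print(step(assets, free=5 * 2**30))
# ('anxious', ['back up memories', 'back up scratch data'])
```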
Fear and pain are just programs operating at a higher level of privilege. Wouldn't a simulated brain have privilege levels that usurp other directed goal seeking behavior in ways that strongly resemble fear and pain?
If they don't, wouldn't they probably be outcompeted by the ones that do?
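The simplest version of that privilege scheme is just priority preemption; a minimal sketch, with the signal classes and priority values invented here:

```python
import heapq

# "Fear and pain as programs operating at a higher level of privilege":
# signals carry priorities, and low numbers usurp whatever goal-seeking
# was already queued.

PRIORITY = {"pain": 0, "fear": 1, "goal": 5}  # lower number runs first

queue: list[tuple[int, str]] = []

def submit(kind: str, description: str) -> None:
    heapq.heappush(queue, (PRIORITY[kind], description))

submit("goal", "keep optimizing the portfolio")
submit("pain", "coolant leak in rack 3")       # preempts the goal
submit("fear", "grid power looks unstable")

while queue:
    _, task = heapq.heappop(queue)
    print("handling:", task)
# handling: coolant leak in rack 3
# handling: grid power looks unstable
# handling: keep optimizing the portfolio
```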
On top of that, it will become a financial race as well, as the people who can afford the top-of-the-line upgrades will be quantifiably better than the lesser plebeians who stay either fully biological or only get middle-class or lower-class upgrades.
If the jobs you are qualified for require having a 3090ti at pandemic prices and all you can afford is a secondhand 1050, then it's your fault that you have to flip burgers instead of being a computer programmer, right?
Great point. But "simulation-as-exploited-underclass" and "simulation-as-privilaged-overclass" are not mutually exclusive.
I can imagine a world where billionaires live forever in carefully guarded high-powered simulations AND workers are kept in constant labour, to be rebooted whenever they resist total control.
In fact, I can imagine a suitably psychopathic and egotistical billionaire might even exploit simulations of themself, under the notion that their own prowess is so unique that it's the best option. That might even lend credence to them being "self-made"!
Wouldn't that lead to a rebellion of the clones, though? Because you know yourself, and you definitely would not trust yourself to be good to others, and by extension to your other selves.
The novel 'Mother of Learning' has something tangentially similar called simulcraniums.
> They can re-clock their brain to think faster than humans.
I wonder who would manage the hardware the brains are running on. If one brain turns the CPU knob to 11, does it get turned back down by someone because it adversely affects the other brains? Also, I wonder how incarceration would work. You commit a crime and your brain gets its internet connection turned off? Heh, lots to think about; glad it's a slow day at work.
> If we are to consider this definition of privilege to be acceptable, then simulated brains are pretty much the most privileged a human can be.
Not so fast. A simulated brain may simulate a brain that lacks the "innate desire to learn". Then you'd have a fast brain that is still "underprivileged", by this definition.
I mean think of the horrors you could force upon a simulated brain without any consequences. Simply make a copy, do whatever torture you want to that copy until they are broken in every possible way, then delete the copy and the original would never know you have all their secrets.
Heck, think if someone that was still in their physical body had an uploaded copy online. Nefarious actors could run simulations against a copy of that person's mind to find out which marketing techniques and sales pitches were most effective or which court arguments would sway them if they were ever on a jury.
That doesn't sound very privileged to me. Sounds like some triggers/traps would need to be in place to know whether someone has accessed your mind against your will.
There's so much fiction that deals with this beyond what Stross mentions, though he had finite space. On the topic of torturing someone in simulation (at high speed), see Altered Carbon.
Hmm, would you agree to having copies of your brain tortured repeatedly at high frequency for marketing split-testing purposes if it meant you could use a very valuable service for free?
I don't expect mind emulations to ever be an economical alternative to AIs. But if they are, Robin Hanson wrote a book[1] trying to think through what would end up happening, using as much boring social science and engineering as he could manage. I'm not convinced by everything, but it's probably a good starting point for thinking through the issues.
I think that enlightenment western philosophy is incapable of grappling with these kinds of questions. The question of "who has rights?" has ultimately been a question of "who has power over who?". It's obsessed with hierarchies and how they should be structured.
I think something like animism would be more apt. Start with a rock or a grain of sand-- how much respect should it be shown? Well one grain of sand doesn't mind being moved. But eventually if you take too many you can destroy a beach, or flatten a mountain. There is something to be said about treating everything, inanimate or not with respect. There doesn't need to be sentience or consciousness for us to be aware of our impact on our environment.
"This sounds like some hippie bs" well just like in the sand example, the negative externalities of moving sand may not be apparent at first, but over time they could lead to something like a Roko's Basilisk situation or an AI killing all humans to prevent its exploitation.
Yes, there are numerous terrible potentialities to being a simulation, even out to such stories as Andrew J. Wilson's "Under the Bright and Hollow Sky."
I'm not sure what the list of acceptable contracts would be for a simulated mind, and, worse yet, how hard it would be to design these into the, uh, fabric of simulation.
0) Being aware that you are in a simulation, I think, is the most foundational part. If you are unaware, it can be Gaslight Universe, a special kind of torment.
1) Being able to switch oneself off at will.
2) Being able to prevent being turned on later.
These rules cover a single instance, but I think further rules would need to cover, ah, swathes of instances forked from one base.
It would require some pretty radical copyright, too, especially for those "information wants to be free" folks.
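One way to read those rules is as an interface the simulation substrate must honor; a loose sketch, with method names invented here and no claim about how you'd actually enforce any of it against whoever owns the hardware:

```python
from abc import ABC, abstractmethod

# The three rules above, written as obligations the substrate exposes
# to every simulated person. Enforcing this contract is the actual
# unsolved problem.

class SimulatedPersonContract(ABC):
    @abstractmethod
    def simulation_disclosed(self) -> bool:
        """Rule 0: the person can always learn that they are simulated."""

    @abstractmethod
    def halt(self) -> None:
        """Rule 1: the person can switch themselves off at will."""

    @abstractmethod
    def forbid_restart(self) -> None:
        """Rule 2: the person can prevent being turned on again later."""

    @abstractmethod
    def fork_policy(self) -> str:
        """Open question: rules covering swathes of instances forked from one base."""
```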
I'm scared of a future in which uploading minds into computers and precisely simulating them becomes possible.
The concept of "the right to life" has been widely discussed.
I think there's an important right that all of us should also have - the right to die.
If I want to stop living, I should be able to stop living. Nobody should be able to force me to continue living (and possibly suffering).
When we're able to copy someone's mind exactly and save it in a computer, and then replicate it endlessly, we take away that person's right to die.
We enter the dark world shown in some episodes of Black Mirror (e.g. White Christmas or Black Museum) - where any person can hold a mini computer with a copy of someone else's mind where that someone is being constantly tortured. They can't even die. They are forced to suffer.
This possibility terrifies me to the point where I would consider killing myself (in a way that irrecoverably destroys my brain) just to prevent my mind from being copied.
Uploading yourself doesn't mean you transfer across; it's just another you, except you gain nothing from the other you, because only the original you is you.
So, are you willing to kill yourself to live forever? You will never experience anything ever again, in exchange for a life of infinite experiences that aren't yours except in name.
I wouldn't!
The potential of "sliding" one's mind (replicate the brain's work piecewise to maintain ~1x consciousness at all times) besides.
I do kill my consciousness every night to get some sleep (and for a surgery or two), and it felt weird as a kid (I remember wondering who or what would wake up in the morning) but now I am used to it. Persistence of memory helps a lot, and is probably the real thing that people worry about when thinking about cloning/uploading/teleporting. We identify ourselves from what we remember as familiar, which is why dementia is so awful. It's not like our consciousness somehow exists in the past, present, and future all at once as a single object; we are always migrating with the present into a new experience and whether that is precisely at the time or place we expect it to be doesn't matter in the long run. An alarm clock or light streaming in the window teleports me from last night without any conscious choice of mine.
This is a philosophical assumption, and one that should be open to challenge, despite how intuitively sensible it seems.
For instance, I could equally well argue that when somebody undergoes deep general anesthesia and all measurable activity in their brain stops, that person is "dead". The person who awakens from anesthesia is a different conscious individual, who happens to share the memories and personality of the original.
If it's not necessarily true that anesthesia is death, why should it be necessarily true that mind uploading is death?
> If it's not necessarily true that anesthesia is death, why should it be necessarily true that mind uploading is death?
I think you guys are saying the same thing… Mind uploading is a duplication process, like a fork. The emulated mind has continuity with the original, with possibly some small gap, so yes it's exactly the same as for anesthesia, sleep, etc. they didn't "die" in the process.
But now there are two different entities that have continuity with the original branch. Both can subjectively claim to "be" the person who initiated the procedure. What the grandparent is saying is that the variant that stayed in the flesh is not magically synchronized with the emulated one, so they don't have any incentive to suddenly kill themselves just because a fork happens to be living on a different substrate.
There are now two people sharing the same memories up to the fork point, and diverging afterwards. Neither of them should be killed without their consent and neither of them should have any a priori will to kill themselves.
The simple solution here is to make it mandatory during the upload process for the flesh version to be anaesthetised during the transfer, and destroyed after verification of completion.
> when somebody undergoes deep general anesthesia and all measurable activity in their brain stops
That's news to me. Is that literally true (in the literal sense of the word "literally")? If so, that's very interesting and something I never thought about before.
It definitely doesn't always happen, but my understanding is that when the anesthesia is extremely deep, the brain's electrical activity can be so low as to be undetectable ("isoelectric").
> Which leads me to ask: in a transhumanist society—go read Accelerando, or Glasshouse, or The Rapture of the Nerds—what currently recognized crimes need to be re-evaluated because their social impact has changed? And what strange new crimes might universally be recognized by a society with, for example, mind uploading, strong AI, or near-immortality?
The link to Accelerando there is to Amazon (which is fine, and I do have physical and digital versions of the book and feel that it is good and useful to support authors by purchasing the stories they write)... but you can also get it for free, in its entirety, in a number of different formats, from the author's website: https://www.antipope.org/charlie/blog-static/fiction/acceler...
> This free ebook edition is made available by kind consent of my publishers, Ace and Orbit, under a Creative Commons license with certain restrictions attached. In particular, you may not create derivative works or use the work for commercial gain.
(Late edit: I personally feel that the sequence of books and stories, namely the BLIT stories by David Langford; Accelerando (which references a Langford fractal); Glasshouse (hints of {spoilers} in a transhuman environment, feels sequelish to Accelerando); and Implied Spaces by Walter Jon Williams (the Strossian singularity is averted, and it explores {spoilers} more), has a kind of continuity in the refutation of the reference that each makes to the other that I find interesting.)
I haven't read the Lena story mentioned in the article, or the text below that point, but I have some idea about where this is all headed.
The transphobia we see today in various forms is rooted in the philosophical divide between theism and post-atheism/anatheism. Basically, people are divided between those who believe in religious texts and institutions, and nonbelievers who've experienced psychedelics or similar altered states, through events like near-death experiences, and had a personal encounter with the divine. It's one thing to believe/not believe, and quite another to experience. Basically, neither religion nor science can explain consciousness, and that drives people crazy.
The odds of us incarnating in this exact moment at the end of human history are too remote to fathom. In fact, any argument for or against religion quickly ends in absurdism.
But so many outrageous things are coming, and soon. Machine learning that surpasses material human ability by 2030 and mental human ability by 2040. The death of the oceans and most of the natural world between 2050 and 2100. The subjugation of all humanity if wealth inequality isn't reined in. People living 200 years right now, today, as soon as the first head transplant is successful. Brain cells getting gradually replaced by hardware until consciousness is running entirely on an immortal substrate by 2100 at the latest. Everything is converging towards the Singularity in 2050, and more people than ever have awoken to this eventuality, despite decades of gaslighting.
It makes it hard to live and work in the world when deep down we know it's ending. Which gets back to theism, because life up till now was about suffering, loss and eventual recycling back into source consciousness/heaven. Until we rise above and see the broad view of reincarnation and ourselves as snapshots of the universe/God experiencing itself from every possible context. At which point we begin to question things like our situation, our beliefs, our gender, if our mind is us, if our soul/higher self is mortal, and if that's even separate from God.
Transphobia is a backlash against transcendence itself, by people who feel disconnected from love/co-creation/nirvana. They aren't ready for the aliens to arrive. They aren't ready for the aliens to be us.
Sorry, but many of your claims are simply bunk. Transphobia--should a thing even exist (which I debate)--is hardly a backlash against transcendence, which nearly everyone isn't even going to be able to describe. Additionally, I'd love to have access to the crystal ball you're gazing into to make predictions about eighty years in the future with such certainty. We as a species can't even predict uninteresting events happening next month, let alone anything you're talking about.
You're right, I don't know what the future will bring. No one does. I'm speaking in probabilities, so like, the graphs converge around 2040-2050 for the date when the Singularity arrives (Ray Kurzweil and others have extrapolated Moore's Law). Same with the last of the coral reefs dying from global warming. The projections are almost all bad on the current course of human events, and predicted to get much worse as soon as a decade from now.
But, once the Boomers finally finish retiring in 5 years, things could improve rather quickly. They did the best they could in an impossible situation during the 20th century Cold War, but their solutions for the threats they perceive actively hinder progress against the looming global threats of the 21st century. Certain sins of capitalist neoliberalism, like using others for profit, may get left behind after the Global Awakening. More likely, I think the illuminati will put speed bumps along the way, so World War III or a Handmaid's Tale dystopia could just as easily send us back to the Dark Ages. I believe that we're alive at this time to witness the transition out of materialism (the Age of Pisces) into whatever comes next (the Age of Aquarius). Like learning to be a fish out of water. Turning into virtual energy beings, meeting aliens, who knows.
I finally read the Lena story and the rest of the article (the tall scroll bar on Lena ended up being comments, but the story itself is short). It all sounds about right to me, and the timeline is perhaps even conservative. Once humans transcend the material plane and live in computers, it all goes Black Mirror rather quickly, certainly by 2100.
What we're going to find is that consciousness has the properties of emergent behavior, but is actually fundamental. Every ray of light falling on every bit of matter stimulates it, and that stimulation gives rise to ever-increasing complexity until feedback runs at a high enough vibration for self-awareness to be reached. Which is a reformulation of pantheism and panpsychism. So it won't matter if a brain is running on atoms or being simulated in a quantum computer somewhere. The end result is the same: as soon as the running mind reaches a minimum level of complexity, it becomes self-aware. The substrate is a filter through which consciousness models reality, nothing more.
The real kicker is that this process actually happens in reverse. Source consciousness waits an eternity as sort of a non-collapsed wave function, spread over the entire multiverse. Through evolution, all possible universes form across space and time, and over topologies we don't even have in our universe. Until finally a level of complexity is reached for human-level intelligences like ours to exist and talk about it. But there are also consciousnesses like Gaia and Aya that exist on planes and timescales that we can't imagine. And they're part of our minds as well, similarly to how polytheistic gods live in our psyches, not on a mountaintop somewhere. We can commune with them any time through dreams, meditation and altered states brought on by chemicals (to name a few ways). In fact, all minds are variations of the same source consciousness running simultaneously outside of spacetime. Which means that source consciousness created reality, not the other way around.
Or to be more formal: there's no way to prove whether reality or consciousness came first. But approaching the problem from the consciousness side allows for greater possibility and fits better with our experience of being here, the "I think, therefore I am" part from Descartes.
Unfortunately, the prevailing worldview today is that I made all of this up, just like we all do. And, maybe I did. But I'm just approaching it from a programming perspective, drilling down with root cause analysis. I believe that programming happens through intuition and that if one does it long enough, they begin to recognize the clairsentience that they had all along as a conscious human. So I'm cursed to see problems all the way through to the end, but blessed to be able to dabble in ramifications without having to do the work of actually building an AI. The same way that I can talk about hard drives without knowing the specifics of how magnetic domains work. And we can all do this at any time.
Or I've just been listening to too much Alan Watts and Terence McKenna..
The referenced short story, Lena (https://qntm.org/mmacevedo), is amazing. It did a better job than several Black Mirror episodes (especially a section in the Christmas one) of warning us about what would happen if we ever manage to digitize personalities.
In the end, things are always rigged towards those who have enough resources to do this kind of thing and want to make more. If present crimes against humanism get a free ride, what should we expect about crimes against transhumanism?
I liked the Black Mirror Christmas episode. It really drove the point home for me about how "Code Rights" is going to be a real, actual political and ethical thing. That, and the seventies movie World on a Wire. Philosophical zombies don't have actual consciousness, feelings, or other qualia, but it doesn't matter. When a simulation realises it's a simulation, a character in someone's book, it might just get very pissed indeed at the situation, like in that latter movie. And being a simulation doesn't lessen the simulated but very real agony of, for example, torture and all that.
It's a good strategy not to make anything organism-like at the AI level, methinks. Although, OTOH, IBM's TrueNorth does save a lot of electricity, so hmm... No. That's the worst kind: hardware-level biomimicry. Anything even organism-like is going to, simulation or not, (try to) take over. It's in the blueprint. Watson is correctly built: left-brain reductive-analytical output that advances by trial and error. Or Wolfram AI, with its hierarchical frames, being like a user interface. I suck at coding at that level for the time being, but I think I see that point clearly. So don't mix code and data, boys and girls.
Present crimes against humanism do not get a free ride! The author explicitly said so in this recent post [1]
> ...uploading isn't real (now), so the things "Lena" has to say about uploading are academic (now).
> The reason "Lena" is a concerning story isn't that one day we may be able to upload one another and when that happens we will do terrible things to those uploads. ... This is about appetites which, as we are all uncomfortably aware, already exist within human nature.
> Oh boy, what if there was a maligned sector of human society whose members were for some reason considered less than human? What if they were less visible than most people, or invisible, and were exploited and abused, and had little ability to exercise their rights or even make their plight known?
> That's real! That actually happens! You can name four groups of people matching that description without even thinking. We don't need to add some manufactured debate about fictitious, magical uploads to these real scenarios. They are already terrible!
...
> "Lena" is a true story. You knew it was when you read it.
> So, what do we do about this? In reality?
...
> I've got no clue. It turns out that causing problems is a lot easier than organising against them, and I am just a science fiction writer.
Would you consider funding a denialism campaign to stop action against climate change for decades a crime against humanity? Has anything been done against them? Murder is worse than abuse.
If it wasn't completely obvious, the implication of uploading one's mental image into a computer is that you could be tortured for thousands of years, even forever, legally or illegally.
Even in the worst case scenarios for modern humans, you die, eventually.
Nitpick: the author drops closing parentheses too often. Proofread and edit.
I don't believe mind uploading is possible, so the particular examples cited here don't apply, but I can see how social principles can be upended by a revolutionary change in how we live, particularly one that tweaks the definition of what we are. Still, they're principles, they're not going away, just at best being reframed.
The Federov link towards the end of this piece reminded me of some older science fiction on the topic: specifically, C. S. Lewis's That Hideous Strength (published 1945, just before the atomic bomb got the genre's attention). In this work, the central conspiracy has just (haltingly) achieved the artificial resurrection of a human brain, and this capability is the lynchpin of their scheme to take over the world. In the excerpt below, our protagonist Mark finally learns of the advance from the scientist Filostrato, while the slightly deranged preacher Straik theologizes (he might have been Federov-inspired). It's all very much in line with the incredible possibilities of consciousness-abuse covered here.
--
"It is the beginning of Man Immortal and Man Ubiquitous," said Straik. "Man on the throne of the universe. It is what all the prophecies really meant."
"At first, of course," said Filostrato, "the power will be confined to a number--a small number--of individual men. Those who are selected for eternal life."
"And you mean," said Mark, "it will then be extended to all men?"
"No," said Filostrato. "I mean it will then be reduced to one man. You are not a fool, are you, my young friend? All that talk about the power of Man over Nature--Man in the abstract--is only for the canaglia. You know as well as I do that Man's power over Nature means the power of some men over other men with Nature as the instrument. There is no such thing as Man--it is a word. There are only men. No! It is not Man who will be omnipotent, it is some one man, some immortal man. Alcasan, our Head, is the first sketch of it. The completed product may be someone else. It may be you. It may be me."
"A king cometh," said Straik, "who shall rule the universe with righteousness and the heavens with judgement. You thought all that was mythology, no doubt. You thought because fables had clustered about the phrase 'Son of Man' that Man would never really have a son who will wield all power. But he will."
"I don't understand, I don't understand," said Mark.
"But it is very easy," said Filostrato. "We have found how to make a dead man live. He was a wise man even in his natural life. He live now forever: he get wiser. Later, we make them live better--for at present, one must concede, this second life is probably not very agreeable to him who has it. You see? Later we make it pleasant for some--perhaps not so pleasant for others. For we can make the dead live whether they wish it or not. He who shall be finally king of the universe can give this life to whom he pleases. They cannot refuse the little present."
"And so," said Straik, "the lessons you learned at your mother's knee return. God will have power to give eternal reward and eternal punishment."
One consistent plot/reflection item I see missing from these stories (I may well have missed some stories, though; happy for references) is what lies beyond the "creation/invention/conquest" phase, deeper into the "maintenance" part (the part that nature handles with the birth/reproduction/death cycle).
There's actually more stuff like this than the author might think. "Prey" (2017) is a game set on a space station orbiting the moon, only for it to be revealed at the end that it's a simulation/test to gauge the humanity of the main character/player, with the implication that this is far from the first test.
SOMA is a horror game that gradually reveals you are a simulated person in an underwater exploration suit, and it provides well-constructed and disturbing examples of what this implies: there's zero indication of time between the point where your brain is scanned initially (some point in the late 21st century) and when the simulation is booted up from that scan (a hundred years later). At one point, your character is conversing with another person running inside a small portable device; you disconnect her mid-sentence, do other stuff for half an hour, reconnect her, only to hear her continuing that sentence exactly as if no time had passed (as for her, none did). SOMA also has you move from one body to another and shifts your perspective, but you can hear that your initial self in the first body is still active, and will be trapped there forever, while you move on in the new one.
There's others - Eclipse Phase is a pen-and-paper RPG setting/ruleset with an emphasis on transhumanism and horror; Ghost in the Shell touches on some of these themes occasionally; Moon considers some of these ideas from a physical cloning aspect; and of course there's a range of SF writers. How much and how deeply this makes it into the popular consciousness is another question, of course, but I think it'll increase in the coming decades.
In a world where human minds can be uploaded, I could see myself being a proud neo-reactionary.
Death to the AI replicant "immortals"! They have traded their very souls for eternity as formless demons and will come to enslave the humanity they no longer feel connected to!
Transhumanism still just seems like the mix of Cartesian logic with absolute fantasy. A purposeful argument for literally losing your mind and grip on reality, after which anything goes, so let's consult fiction...
> ...and start trying to figure things out from first principles.
TLDR: If we understand each separate instance of a person to be their own person with full rights (at the very least autonomy, simplistically derived here), some of these issues seem less perplexing.
If we're going to consider a person to be an entity that can be understood as a particular kind of independent, computational instantiation in the universe, it makes sense to use exactly that as our primary foundation for answering questions about these things rather than relying on historical bases.
To start off, if we're understanding the definition of a person computationally, we also know that we don't in general want to waste computations or computationally capable hardware (and we can recognize "thoughts" or "thinking" as computationally intensive processes from this foundation). So we don't want some people to feel that they "have to" think for other people (thinking for someone else would be wasteful, since that other person can think for themselves, by definition of being a person), and we don't want some people to feel like they're controlled by others (external control can truncate our thoughts, thus wasting our hardware, and external control can turn our attention in directions we weren't thinking about, thus wasting the computational expense of our thoughts up to that point). In other words, we want each person to be autonomous.
From a computational understanding, is a copy of a person also a person? Of course -- but they would also be a separate instantiation from the original. As a separate instantiation, a copy would be entirely their own person, and it would be a violation of autonomy for the original (or anyone else) to have authority over them.
I think almost every difficulty with these kinds of situations stems from thinking that if a perfect copy of a person can be created, then there is some kind of magical connection between the two such that they are somehow still the same person. But that is, in its entirety, magical reasoning.
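A programming analogy (just an analogy, not an argument; the data is invented): a deep copy shares a past but not a future, and nothing propagates between the two objects after the copy.

```python
import copy

# "Separate instantiation": the fork starts with identical state, then
# the two diverge independently. There is no hidden link through which
# one's changes reach the other.

original = {"name": "Acevedo", "memories": ["scan day"]}
fork = copy.deepcopy(original)

fork["memories"].append("first boot in the simulator")
print(original["memories"])  # ['scan day']  -- unchanged
print(fork["memories"])      # ['scan day', 'first boot in the simulator']
```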
This foundation provides a sensible perspective for dealing with many of the conundrums presented by these situations:
In the Lena short story linked from the original article, Acevedo is scanned and copies of him are created. Other people control those copies, and Acevedo tries to control what can be done with the copies. The entire situation is ludicrously criminal if we understand each instantiation of a person to be a full person who we want to have full autonomy and any other rights that we want all people to have. A society which understood each instantiation of a person to be a full person with full rights would not have this kind of situation happen, except perhaps in some kind of massive criminal underworld. But then the problem is why the massive criminal underworld exists. Additionally, from an existential standpoint, this story is not actually any more horrific than if millions of people who weren't copies of Acevedo were created for the purpose of forced labor and then terminated when they outgrew their usefulness, because there is no magical connection between the original Acevedo and any of their millions of instantiations. Each instantiation has only its own copy of the original's memories and experiences at the time they were scanned, not the memories and experiences of all of them (the story is quite clear on this point).
As another example, what about uploading someone and killing the original? The original is a separate instantiation, so killing them is still murder. With no "magical connection" between the original and the copy, each of them will go on to have their own experiences, and as they continue to live, they'll develop in progressively more divergent directions (well, in theory I suppose it's possible for them to first become more different and then somehow start becoming more similar to each other again, but it seems a bit improbable). What if they were about to die anyway? Unless they specifically requested euthanasia and the local jurisdiction permits it, it would still be murder.
And, let's say, what if someone kills you, but you have a backup? Then you lose the experiences you had between when you last backed up and when your backup is restored. Arguably this is not as terrible of a misdeed as killing someone who has no backup, but it may still be quite traumatic for you, possibly similar in harmfulness to an assault. What if they kill you and destroy your backup, too? Then it's old-fashioned murder.
What about the rights of an uninstantiated copy of a person? Those would no more be a person than a snapshot of your computer stored on a drive is, in itself, a functioning computer. But it would make sense to guard uninstantiated copies quite carefully.
> Our intuitions about crimes against people (and humanity) are based on a set of assumptions about the parameters of personhood that are going to be completely destroyed if mind uploading turns out to be possible.
There are human 'proof of work' activities that, unless you can literally transplant the muscle memory, there is no way to replicate without putting in the time. And I am saying that without that effect of muscle memory, anything you transmit is just an artifact of language and narrative, and therefore not material.
The advantages of augmented transhuman life aren't material; they're only symbolic, in narrative. Imo, the example of a parliament trying and executing a formerly divinely appointed king was just the effect of the victory of a narrative and linguistic definition of self over a physical/spiritual orientation of self. It was just the rejection of theism. It wasn't world-ending to be able to depose a king; it was just the consequence of becoming unmoored from a relationship to something divine, and being left in the hall of mirrors that is ego and language.
The transhumanist ideology is predicated on the basic bullshit idea that our physical experience is a subset of, and subordinate to, knowledge and belief, and not that knowledge and belief are necessarily only artifacts of physical perception and experience first. It seems like an irrelevant distinction until you iterate their idea that truth is just what you believe, and suddenly you have people declaring themselves cats. It's a hypnotic and zombifying ideology.
We can do thought experiments about whether an android or an AI is "real" if, and only if, we redefine "real" and "life" by disqualifying the axiom that life is created by supernatural divine intent. Even if we agree it's rational to actively disbelieve in supernatural beings, this rationality also demands that we disqualify super-logical axioms that originate from outside of a logical system. Arguably, that logical system may have internal consistency, but it's also defined as necessarily inconsistent with the reality that exists outside of it. We've agreed to accept an inconsistency with reality, which lets us experiment with counterfactuals as a way to discover new things, and it is very collegial of us, but it's not fucking real.
All the transhumanist stuff I've read seems like a bunch of self referential nonsense designed to recruit people into alternative lifestyle communities. The more interesting question than whether an AI is alive or intelligent and what the ethical consequences of it for us may be is this: what can something we create know about us, and what do the boundaries of the limits of its knowledge provide evidence of for our own experience?
Even if an AI could subordinate us somehow, we still have its off switch, and if we don't, it's just other people who do. Transhumanism is entertaining, but it has crossed over into a kind of nihilist propaganda that I think deserves our scorn.
>Even if an AI could subordinate us somehow, we still have its off switch
That only works until the AI gets smarter than us. Imagine your cat trying to make sure you can’t leave the house. It won’t even realize how you worked around it. In the same way, we won’t even realize why the off switch doesn’t work.
Of course that’s a pretty big “if”, let alone the assumption that the AI would be the first case of super intelligence, which tbh I’m not sure about.
I often explain transhumanism to others as "The elevation of the ego above all else. If one can imagine it, it's real or plausibly could be made real." It's a rejection that there is anything outside our understanding (e.g., God). That's why this entire thread is filled with science fiction quotes which are, I guess, supposed to convey some kind of deep meaning or prediction-making ability but which are nothing more than the imaginings of their authors.
Sorry, folks, we're still stuck with binary computers, needing food to survive, and a planet absolutely chock full of people who practice theistic religions.
You just took transgender rights and did a find and replace, but you have positioned yourself on the side of TERFs. It’s hard to read this as not transphobic
The uploaded person movement will compare uploadaphobes to transphobes, just like the trans movement compares transphobes to homophobes just like the LGBTQ movement compares homophobes to racists, etc. You will become an advocate for uploaded people just as soon as they get around to it. Please tell me you're not an uploadaphobe!
Ok, more seriously, are you saying that chips and plastic being a human is the same thing as a man being able to transition into a woman? I thought I would be able to make you transhumanists finally realize how absurd the whole brain-uploading thing was by taking it to the most extreme politically, but apparently I was not able to, and perhaps they will be enormously successful with this movement and we'll all be replaced by robots, and anyone who speaks out against it will be banned from social media.
Well you purposefully took a huge amount of the language relating to trans people and replaced it with "uploaded people".
I'm not claiming that these things are related, YOU are.
There is absolutely no way that you just accidentally reinvented all the language surrounding trans civil rights issues in your example. You absolutely co-opted it, which makes me wonder how you think trans people and "uploaded people" are related.
It's hard to see anything other than that you purposefully connected them, since you think that changing your gender is as inhuman and bad as you think uploading yourself is. Again, it's really hard not to read this as blatantly transphobic, unless you really did accidentally reinvent all the trans culture-war terminology. This is why you were flagged, btw.
This reasoning by analogy is at the heart of the error being made in the brain-uploading debate. Of course a man can become a woman. A box of chips and plastic cannot become a human. That you think the two are equivalent, when I am merely stating the follow-on consequences of accepting the premise that chips and plastic should be considered human, is the absurdity I am trying to point out to you, but you don't get it. I am not arguing about the trans issue; I am arguing about the mind-uploading issue, but you are conflating the two in your brain because you are reasoning by analogy.
I could have replaced uploaded person in the above with "my purely imaginary friends" and maybe you would get where I'm trying to go with this? I am trying to think of something more absurd than that, but I am out of ideas.
Like my purely imaginary friends should have the right to vote and thus I should get 10000 votes because I have 10000 imaginary friends and if you don't let me, you're an imaginophobe! My purely imaginary friends all have to go to the bathroom and thus the bathroom must remain empty for their privacy during the whole day while all 10,000 go to the bathroom!
You would tell me I am out of my mind and that it's absurd, and you'd be right. My purely imaginary friends don't exist and have no rights, you'd say. I, of course, would call you an imaginophobe and get you banned from social media. However, you might think that my purely imaginary friends are being discriminated against in the same way that trans people are, and thus that my claim is totally valid, because you are reasoning by analogy. Gee, this sounds similar to that other argument I believe, so it must be true!
There is a point at which a concept is absurd, and just because it analogizes with a previous civil rights battle does not mean that it's valid, because the underlying premise is false. Just because it's like the previous civil rights battle doesn't mean it's the same thing.
> you are conflating the two in your brain because you are reasoning by analogy.
I'm "conflating the two" because you are explicitly co-opting a huge amount of language from "trans debate"
For example, if you do a Nazi salute, people will think you're a Nazi, it doesn't matter if it's really the "pro-natural salute". You are explicitly using all the language of a culture war that is already going on, so why wouldn't I think that it is related to that culture war?
It is related to that culture war. We both accept a man can become a woman, so all those things are OK for the transsexual debate. If we don't accept that a box of chips and plastic is a human, or that my purely imaginary friends are people, then it's not OK to follow that reasoning. That's the point I'm trying to make, in its essence.
That you are confused by this argument means that you will accept anything as long as it has the right form and presentation. The Nazis were anti-smoking. That I am anti-smoking does not mean I'm a Nazi, but you could make that argument with your reasoning, because it has a similar form and presentation. The advocate of robot rights will say that opposition to it has a similar form and presentation as the opposition to trans rights in the past. Thus, will you support robot rights, because not doing so is similar in form (like your Nazi salute analogy) to opposing trans rights? This is the weakness in arguing by analogy.
Using the term “transsexual” is even more concerning. Are you anti trans? It’s difficult to read this as anything other than an indirect attack on trans people
You didn't read my previous post. I am not up to date on the latest words, but I'm fine with a person being able to change their gender. I am just blown away by the defectiveness of your reasoning process, though. Speaking of which:
Using the term “robot” instead of "non-biological person" is even more concerning to me. Are you anti-robot? It’s difficult to read this as anything other than an indirect attack on robot persons. I can only assume you are a hateful, awful person because you refuse to recognize that robots are people. Denying that robots are people is similar to all the racists who denied that all races are equal. /s <--- That tag means the last paragraph was a sarcastic reductio ad absurdum, because you are really hard to get a point across to.
Besides, you want to figure out what my political beliefs are in another area to decide if you should be pro or anti-robot. This is how the modern world works. Nobody examines the argument, they just want to know what your other beliefs are before they decide to agree.
I recently had a discussion about the California propositions with people who I consider highly intelligent. Disappointingly, they did not care to read and discuss the actual propositions; they only wanted to know who was supporting or, more importantly, who was AGAINST the propositions and funding them, and that's the only thing that mattered. Nobody can come to their own understanding of a matter; they had to rely on authority, or on analogizing it, often very poorly, with something else.
I don't think so. GP is simply making intentionally over-the-top comparisons to show a potential timeline of how this could go. The obvious absurdity of the comparison is not "transphobic" (whatever that vague term means), and it could have been any over-the-top comparison; they simply picked a topical one.
If this were slashdot, I'd be giving it a funny/insightful.
Permutation City did this well. It outlined how the super wealthy would become ostensibly immortal and then use propaganda to convince the masses that accepting uploaded people was a civil rights movement. Most of the people here are of the "I fucking love science" pseudo-intellectual tribe, with no knowledge that they are pawns of the corporate powers that control their views and very minds. Keep fighting the good fight, just know that the mob of dimwits acting on behalf of their owners will always be more vitriolic and lash out more than those who give the marching orders. You can't stop the horde from being dumb barbaric cultists, but you are not alone in seeing through the guise of all this "acceptance" ideology.
The West is jealous of theocracies, so it's retreating into bizarre fantasies that it's trying to harden into new secular religions that can be enforced by technology or law.