I can't believe he said that with a straight face. He actually thinks you can write a computer program that can give humans permission to kill themselves. I don't think he consulted with a single engineer before making that statement.
Baudrillard's going to laugh in his grave for this one.
Our modern society has tried so hard to defeat death that we have essentially surrounded ourselves with it. Once we've made machines that try to defeat death, the only natural extension is to make machines that do the exact opposite, since there is no longer a distinction between the two.
(I highly recommend reading Chapter 5 of Baudrillard's Symbolic Exchange and Death, which elaborates on how modern society has ripped apart the symbolic exchange between life and death and neatly partitioned it, with the consequence that death becomes an immortal force we cannot deal with.)
“Permission” was just a bad way to put it. I'm sure what they are actually talking about is making sure the person is making the decision with a clear mind, not under the influence of drugs or strong momentary emotions, and is not being coerced by someone else into doing this.
That's a pretty loaded statement. You're probably thinking of those ostentatious cases that end up in the news, like "man prevented from jumping off bridge". Not everyone who commits suicide does it on a whim; some think about the decision for many years.
As a general trend, when you make effective suicide less convenient (e.g. the switch in ovens from coal gas, which contains carbon monoxide, to carbon-monoxide-free natural gas), suicides happen less often. Not necessarily because of fewer attempts, though.
Some do. Some don't. It is very hard to generalize here.
I personally don't really think that a person who makes that leap and decides to stop existing has a 'clear' mind. It may be an objectively rational decision (and frankly, I believe it is up to each of us to make that decision), but I would be hard-pressed to argue that 'this individual, whom I find of sound mind, opted for the chair'. There is a reason society has a certain level of concern for those who try and fail.
It is possible that I do not have a large enough population sample, but I personally see it as part of an effort to keep the world population at a certain level. Before anyone accuses me of tin-foiling, I mean it in the same sense that there are efforts to prevent suicides by means of suicide prevention hotlines.
I guess what I am saying is that, as a society, we are grappling with two competing interests:
1. We care about certain individuals and we don't want them gone from our life.
2. We care about certain individuals and we want them to have control over their own body.
I don't find it very hard to think of a situation where it is likely that a person has a clear mind and decides to stop existing, in particular in the case of a disease where the person knows that the only thing life has in store for them is either more pain or palliative sedation.
Why can't choosing not to exist be a clear-minded decision? Many are suffering and struggling yet choose to continue that trauma, and that's considered a clear mind. But as soon as one does not want to continue the trauma, they are labeled mentally ill.
We are all going to die in the end. Why not bring that closer so the useless suffering stops? I think the people who want to continue the struggle are the mentally ill ones. Working 9-5 your whole life and still not being able to afford healthcare or an acceptable living standard is somehow okay to continue. People are delusional and on too much copium.
Lots of people here have a good job in a first-world country and think everyone is living like that. When someone says otherwise, it's "why aren't you enjoying life like me? You must be mentally ill."
These supposedly normal people also bring a kid into the world without its consent, by giving birth, to study, work, get sick, and die a horrible death. All of this just to satisfy their own selfish wish to have a child with their genes, not for the benefit of the kid.
It’s not about permission in the idealistic sense. It’s about issues like people being pressured or manipulated into it so someone can have inheritance, so the healthcare system can save money, etc.
These would be even larger concerns in the US than Switzerland for cultural and economic reasons.
I am personally against assisted suicide for the same reason I am against the death penalty: the logic works but only if you ignore the ugliness and messiness of real human behavior.
So you think several cases of unintended death are more important than the suffering of all who really want to die?
Because I can't imagine that the percentage of wrongful deaths is more than a few percent.
That must be traded off against a “prolonged suffering rate of X%” as the overwhelmingly likely alternative.
We chose to euthanize our dog this summer. No matter how obvious her medical condition was, I still questioned whether we did it too early, too late, or just right. (Upon reflection, I think very slightly too late [by days or maybe a week].) I also couldn’t help but compare that experience to that of people. In many ways, I think we treat our family pets with more compassion.
> It’s not about permission in the idealistic sense
Then you proceed to give an idealistic reason to oppose it... A person choosing to end their life to preserve a family inheritance would, IMO, be a valid reason; your opposition to such a choice is itself idealistic.
>I am personally against assisted suicide for the same reason I am against the death penalty
There are no logical or idealistic similarities between the two, as there is a difference between actions forced upon you and voluntary actions.
This has become a common trend in the modern era, where we attempt to expand the idea of coercion to include scenarios where people have only poor choices. Examples include people taking a poorly paying job being "coerced" into it because they did not have "good" choices.
It is very dangerous to equate circumstances where there are no good options with coercion.
I suspect I’m generally aligned with you on the topic, but I agree that “choosing suicide for grandma to preserve a family inheritance” is perfectly valid if grandma is choosing it, but acknowledge that it’s terribly problematic if the kids or grandkids are behind it. Being in the middle part of my life, I’ve seen the pressures that arise here, the concerns over finances and quality of life, and the diminishment of mental capacity of many elderly folks.
I had a close family member express repeatedly and regularly that “they were done” and “are looking forward to finally dying”. That’s what makes me strongly support individual choice here, but I’m not blind to the possibility of abuse here (and near certainty that it will happen in some cases).
Permission is probably correct here. Only if you could live in a closed box in isolation, without ever needing to depend on another living being or having someone depend on you, could you live/die purely on your own terms.
You live by the rules of whatever society you live in. You don't live in isolation; you depend on countless other living beings to be where you are at this point in your life. Life is an interconnected web, not an isolated event.
Your life has value for other people too. No one can force you to live, and "permission" does not mean being forced. But unless you are physically unable to have a life, you should need permission.
We already give permission in courts and write rules on how a person should (or should not) live their lives, for many reasons we think are beneficial or harmful for the rest of us. How is this any different?
They will not let you have a reliable and painless death. I bet some people are stopped by the pain that traditional suicide methods might bring, as well as the risk of surviving but being mutilated for the rest of their lives.
^ This. The risk of becoming a vegetable from a botched suicide is pretty high. The last thing a suicidal person wants is to make their life even worse.
That's the kind of suicide someone is going to commit anyway. I am talking here about the kind where someone is sane enough to seek assistance or buy a device like the one in the article.
We already put people with mental issues in mental health facilities instead of killing them. We could probably do something similar (not the same) for people who decide to take their own lives and reach out before doing it.
So the plan is to cause bureaucratic headaches and forced treatment options for people who openly and sanely admit they don't like being here, leaving only the messy and less effective methods easily available?
This is one of those, "in theory, there's no difference between theory and practice. In practice, there are" situations.
Nobody owes society anything. In fact, the reverse is true: society owes things to those brought into this world without being asked: clean water, safety, clean environment, reasonable standards of living. I just don't see how it could cross anyone's mind to try to prevent people from ending their life, since they didn't ask to be here in the first place, to support a society that has clearly failed that individual.
If a society is good and just and receives conscious support from people, that's acceptable. But I don't see how it could possibly justify interfering with a right of self determination w.r.t. ending the ride early.
I guess I just don't get it. It seems cruel and Kafkaesque.
Forced treatment is opposite of how a person already sick of this world needs to be treated. That's not what I was suggesting. Something like providing a way to live a totally different kind of life in a totally different environment might actually help. Psychiatrists etc can suggest better ways.
Society usually tries to help, not deny rights, in the larger scheme of things. If you find someone on the verge of ending their life, will you judge their sanity based on their age (what age, if so, and why) before deciding whether to stop them, or just let them do it because they must have a good reason?
I always wonder how we decide that at a certain age a person is sane enough to start making decisions about their own life. In one way or another, we always need assistance from other people no matter how adult we become. This is just one of those ways. Society thinks it can keep people from killing themselves, just as we keep kids from unknowingly killing themselves for the first many years of their lives.
I will add more to this. What is really fascinating is how successfully 'AI' has been sold as a solution to just about any problem out there. I am genuinely trying not to add 'using a novel blockchain protocol for full transparency' (while naturally keeping all transparency out of the AI black box).
A Dutch engineer, maybe. The views on this topic are so extreme in my country I find it a bit creepy. I'm sure everyone involved thinks people should be able to die at the touch of a button. The rest is just there for compliance reasons.
But going beyond that, Kahneman in his latest book summarises the recent research in the area as such:
- Simple linear models generally outperform experts.
- Simple linear models generally outperform experts when the experts get additional information to base their decisions on, that the model does not get.
- Simple linear models generally outperform experts when the experts get to know and use the outcome of the linear model.
- Simple linear models trained only on an expert's judgments and not the actual outcome outperform the very expert they were trained on.
- Simple linear models with random weights (!) outperform experts.
- Simple linear models with equal weights (i.e. transform the predictors to the same scale and then just sum them) outperform experts.
- Simple linear models with equal weights and almost all predictors removed except the best 1--3 outperform experts.
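The "equal weights" idea in that list is simple enough to sketch directly. This is a minimal illustration, not Kahneman's or Meehl's actual procedure, and the predictor names and data are made up: transform each predictor to the same scale (z-scores) and just sum them, then rank cases by the composite instead of by an expert's holistic judgment.

```python
def zscore(xs):
    """Standardize a list of numbers to mean 0, stddev 1."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = var ** 0.5 or 1.0  # avoid division by zero for constant columns
    return [(x - mean) / sd for x in xs]

def equal_weight_score(predictors):
    """predictors: one list per predictor (columns).
    Returns one composite score per row: the sum of z-scored predictors."""
    cols = [zscore(col) for col in predictors]
    return [sum(vals) for vals in zip(*cols)]

# Two hypothetical predictors for five cases (higher = worse prognosis).
severity = [1.0, 3.0, 2.0, 5.0, 4.0]
duration = [2.0, 1.0, 4.0, 5.0, 3.0]

scores = equal_weight_score([severity, duration])
# Rank cases by the composite score, highest first.
ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
print(ranking)  # -> [3, 4, 2, 1, 0]; case 3 is high on both predictors
```

The point of the research is that even this crude rule, with no fitted weights at all, tends to beat expert judgment, because it applies the same criteria consistently to every case.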
> Simple linear models generally outperform experts
At what? At correctly diagnosing patients or predicting prognosis once an issue has already been identified, as in Meehl's paper? That is not the same thing as determining whether a patient has the "mental capacity" to commit suicide.
At correctly diagnosing patients, too -- at least for some ailments.
The common objection against this evidence is "sure, it happened to work there, but it won't work in this case!" And of course, it's very hard to defend this research against an accusation like that.
However, it's been a remarkably stable result whenever it is tried on a new task, so I guess what I'm saying is that I like to stay open to simple algorithms making better decisions than humans. Not because the algorithms are good (linear models with random weights are not very good) but because humans are incredibly fallible.
> The second turned out not to be aesthetically pleasing. For that and various other reasons it’s not the best one to use.
Also on the list of concerning statements. Why do I get the feeling that “various other reasons” play a bit of a more important role than he’s letting on there?
My point is that the developers of the device may not believe that anyone needs such a "permission" either and thus the whole AI/computer program is merely there to fulfil some legal obligation - they don't actually care whether it's good or not.
The thing we are talking about is verifying that the person is in the mental shape to make the decision to end their life. The word "permission" is an odd choice here.
As an engineer, I wouldn't necessarily be against it if it made EOL decisions more available to humans. I would prefer no program at all, just a waiting period of 14 days to answer those who insist that all suicidal thoughts are spur-of-the-moment things.
But given access to data and an appeals process for those it turned down, I would be morally okay with writing the software.
We let parents have as many children as they want, why shouldn't those children get to say if the life they have is worth living?
> I would prefer no program at all, and just have a waiting period of 14 days to nullify those who insist that all suicidal thoughts are just spur of the moment things
I've not posted about this before, for a variety of reasons:
A close family friend of mine - in his mid 20s - took his own life a week or so before Christmas a couple of years ago.
I was with him and his father in a pub the evening before, he was telling jokes and buying drinks, he was dead a few hours later.
Unfortunately taking your own life doesn't have a cooling-off period :(
While it's not in the article, I could see them adding some in-person contact (i.e. an interview/screen with a human, though not a doctor) to get a rough idea of the person, rather than accepting or denying them 100% according to the AI.
Most hold that life has value, and the desire to continue living to be axiomatic. There are very few cases where suicide may be considered a rational choice, and in those cases it is difficult to determine that the choice was made of the person's own free will, and not pressured by lazy family, doctors, or government (dictating available treatments, failing to provide appropriate palliative care, etc.).
Only if you hold the desire to die to be rational. I can believe that 2 + 2 = 5, but it doesn't. The desire to die may derive not from an actual desire but from mental or physical illness, which could be treated in such a way that I (rationally) no longer desire to die (right now).
Whether there are valid exceptions or mere failures of treatment is what most people are inclined to debate (plus the difficulty in discerning rational consent, free of pressure or other external failures).
Holding someone who is suffering alive would be considered torture.
When you strip away the selfishness of keeping others alive, suicide might be the only truly selfless act.
I do not think you need a complicated AI to allow a suicide machine. You treat it the way we sane people regulate guns: cooling-off periods, background checks, and coercion checks.