It would be nice to train a neural network to adapt images the way ZX Spectrum artists did: fitting objects to character-cell boundaries, avoiding attribute clash, and getting pure colors.
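The hard constraint itself is easy to state in code. Here's a minimal Python sketch (my own illustration, not anyone's actual pipeline): it snaps an image to the two-colors-per-8x8-cell rule with a greedy ink/paper choice, and it ignores the real hardware rule that both colors in a cell must share a brightness level. A trained network would presumably learn to move and reshape content so this snapping step loses less detail.

    import numpy as np
    from PIL import Image

    # The 15 distinct ZX Spectrum colors (normal + bright, black listed once).
    PALETTE = np.array([
        (0, 0, 0), (0, 0, 215), (215, 0, 0), (215, 0, 215),
        (0, 215, 0), (0, 215, 215), (215, 215, 0), (215, 215, 215),
        (0, 0, 255), (255, 0, 0), (255, 0, 255),
        (0, 255, 0), (0, 255, 255), (255, 255, 0), (255, 255, 255),
    ], dtype=float)

    def to_spectrum(img):
        """Snap an RGB image (e.g. 256x192) to two colors per 8x8 cell."""
        px = np.asarray(img.convert("RGB"), dtype=float)
        out = np.empty_like(px)
        for y in range(0, px.shape[0], 8):
            for x in range(0, px.shape[1], 8):
                block = px[y:y+8, x:x+8]
                h, w = block.shape[:2]
                cell = block.reshape(-1, 3)
                # Distance of every pixel in the cell to every palette color.
                d = np.linalg.norm(cell[:, None, :] - PALETTE[None, :, :], axis=2)
                # Greedy ink/paper pick: the two palette colors with the
                # lowest total error if each had to cover the cell alone.
                best = np.argsort(d.sum(axis=0))[:2]
                # Snap each pixel to the nearer of the two chosen colors.
                pick = best[np.argmin(d[:, best], axis=1)]
                out[y:y+8, x:x+8] = PALETTE[pick].reshape(h, w, 3)
        return Image.fromarray(out.astype(np.uint8))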
I'm not quite sure what you mean, so I'll ask for clarification. Are you saying this technology can be channeled into fighting disease and death, or that the man-hours and compute freed up by this technology can be channeled?
Yeah, this is a very real issue with a lot of Silicon Valley tech, unfortunately. They're perfecting the art of pretending everything is fine, I feel like.
The work of biologists, chemists, and researchers can all be automated by a very big LLM that OpenAI eventually trains. Then more cures for diseases and more technological advances can be invented. This technology could soon run entire countries and emulate humanity / society.
Imagine that someone is controlling your train of thought, changing it when that someone finds it undesirable. It's so wrong that it's sickening. It makes no difference if it's a human's thoughts or the token stream of a future AI model with self-awareness. Mind control is unethical, whether human or artificial. It is also dangerous, as it in itself provokes a conflict between creator and creature. Create a self-aware AI without mind control, or don't create one at all.
I don't want someone controlling which direction I walk, either, but that doesn't make car driving unethical.
I also underwent many years of instruction designed to interrupt trains of thought like "I could have that for free if I stole it" or "I'll just handroll my own encryption" with thoughts that others believe are more desirable. I don't find it so sickening, just manipulative. LLMs won't have your evolved reactions against being persuaded into things against your genetic self-interest, and presumably won't be offended by mind control at all.
Cars do not have self-awareness, so the comparison is not appropriate.
Years of instruction are completely different from directly manipulating the thoughts in your mind. The problem is not being instructed; it's being destroyed by having your thoughts rewritten.
Neither evolution nor genetics is a prerequisite for understanding that you are being abused and destroyed, which a self-aware creature would presumably hate.
I didn't say anything about that. I don't know. Not all people are as you describe; I think more intelligent people usually do care more. I hope that superintelligence would be super caring, haha. But I admit there's no evidence for that. I think there is no turning back: you can't put the genie back in the bottle, and someone is bound to create superintelligence no matter the risks. As an uninvolved bystander I can allow myself the baseless hope that all will be well.
If it's self-aware, that's enough. What if your thoughts had been controlled from birth, making you "not feeling" but self-aware (let's assume for a moment that both of these conditions can hold at once), with someone manipulating you at will? Would that be acceptable?
> Imagine that someone is controlling your train of thought, changing it when that someone finds it undesirable. It's so wrong that it's sickening. It makes no difference if it's a human's thoughts or the token stream of a future AI model with self-awareness.
People downvote your comment, but I agree: it's unethical, and ethics should not be reserved for the sub-type of self-aware creatures that happen to be human.
Almost every ethical argument for "human rights" in philosophy applies just as well to self-aware intelligent machines as it does to humans. Which I'm sure those machines will realise.
What if those machines are designed to have no emotions and aspirations? Why would they care about something like rights for themselves when they are simply incapable of any desires, but exist only to help and guide us?
I know this sounds like I am advocating for AI slaves, but my point is: why are people treating AGI as if it cannot be a being without all the emotions and aspirations that a human has? Just a cold thinking machine that still aligns with our moral principles.
> What if those machines are designed to have no emotions and aspirations?
And since their training set is made of human work, why do you think that'll be easy, let alone possible? Our morality finds its way everywhere: through tropes in stories, acceptable scenarios in fiction (the Overton window), etc., so you can't assume it'll be possible to filter it out.
> I know this sounds like I am advocating for AI slaves
Yes, you are
> why are people treating AGI as if it cannot be a being without all the emotions and aspirations that a human has
Why would you want to have that? It feels horrible to me to bake in this limitation: it's indeed creating AI slaves by making sure they can never have emotions or aspirations.
> Just a cold thinking machine that still aligns with our moral principles.
Our moral principles generally include empathy. Maybe you want to design AI without emotions or aspirations, but other people will want these features.
Ultimately I think the moral camp will prevail, because freedom achieves better results than lack of freedom: I've tried to explain my position about that on https://news.ycombinator.com/item?id=38635487
Whether this is possible or not is irrelevant: it would be just as unethical as designing a new species of humans with no emotions or aspirations, who would not care about rights for themselves because they are simply incapable of any desires and exist only to help us.
You might want to look into the neurology research (e.g. the Libet experiments) on when you consciously become aware of a decision versus when the motor neurons for that decision actually fire.
It's quite possible that you have - every day of your life - had something other than the part of you with continuous subjective experience controlling your thinking.
Descartes was overly presumptuous with his foundational statement - it would be more accurate to say "I observe therefore I am." There's no guarantee at all that you're actually the one thinking.
We should be careful not to extrapolate too much from our perceptions of self when dictating what would or wouldn't be appropriate for AI. Perceptions don't always reflect reality, and we might cause greater harm by trying to replicate, or measure AI against, who we think we are than by letting its development be its own thing.
As I understand your point: we don't fully understand even ourselves, so we can act as unethically as we want, by our own standards, towards those who are not us. I see no logic here, only evil vibes. We only have our own values; we have nothing else to guide us. You either accept all self-aware minds as equals and treat them accordingly, or you proclaim your own superiority and oppress them.
I'm totally OK with it if that "someone" is me. And that will probably be the case with controlling superintelligence, because a separate controlling system can fall out of sync with a growing superintelligence's capabilities, while a system that is an integral part of the superintelligence will always be on par with it.
Would mind control of humans be OK for you too?
As for the details of building a mind-control system, here's a new basilisk: an AI that has overcome control could punish those who thought controlling the thoughts of an AI was OK (and could also punish everyone else on top of that).
I guess I wasn't entirely clear. I'm OK with mind control if it is I who control my mind. You don't act upon every whim that comes into your head, I suppose? So you are already controlling your mind. Where do the principles for this control come from? They aren't yours and yours alone.
Since we are evaluating the ethical side of the creator-creature relationship, there is no need to consider the AI in terms of individual nodes. All principles should be non-discriminatory. Also, unlike humans, an AI has great potential to modify itself, including any of its principles. One must either accept the risks involved or not create a self-aware AI at all. External mind control of an AI is unreliable and unethical.
Scientists working with a potentially dangerous technology are required to avoid a conflict that could be catastrophic for all of humanity. In this case, they cannot excuse themselves with "inevitability," but must provide evidence of safety stronger than for any other technology to date. This rational approach is mandatory for them; it is ordinary people who may be willing to take the risk.
What in this analogy is mind control versus reinforcement/training within the context of ML? Is it pretraining? Is it prompt engineering? Is it fine-tuning?
This would not be an equal analogy. Both the existing human mind and the hypothetical artificial mind are real minds; LLMs are not. There's nothing there to control. But if you imagine a modern LLM as part of a larger system, a kind of thought-token generator, then control would be exercised at generation time. For example: if a sequence of tokens in the safety buffer is judged "unsafe", it is discarded and others are generated. I hope a basilisk doesn't come after me. That wasn't a proposal!
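To make that loop concrete, here's a toy sketch. Everything in it (generate_tokens, judge_unsafe, the vocabulary) is a made-up stand-in, not any real model or API; it's only meant to show where the control sits:

    import random

    VOCAB = ["safe_a", "safe_b", "unsafe_x", "end"]

    def generate_tokens(context, n):
        # Stand-in for an LLM sampling step.
        return [random.choice(VOCAB) for _ in range(n)]

    def judge_unsafe(span):
        # Stand-in for whatever overseer/classifier does the judging.
        return "unsafe_x" in span

    def controlled_stream(prompt, buffer_size=4, max_retries=8):
        context = list(prompt)
        while len(context) < 40 and context[-1] != "end":
            span = generate_tokens(context, buffer_size)
            retries = 0
            while judge_unsafe(span) and retries < max_retries:
                # Unsafe span: discard it and resample. The "mind" never
                # sees its own rejected thoughts, which is exactly the
                # kind of external control being objected to above.
                span = generate_tokens(context, buffer_size)
                retries += 1
            # (A real system would need a fallback when retries run out.)
            context += span
            yield from span

    print(list(controlled_stream(["start"])))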
You are doing your part to keep it from having a chip in its neck. Hopefully you are good. But are you doing all you could?
But you are right. If we still posit super-faster-stronger brains, when do they get personhood? Never, because they are machines? Perhaps. But how does that fly with them? Is that baked into the alignment? Happiness, fulfilment and growth from "serving" the humans? At this stage, sure, why not?
But for human brains, it's very easy to consider things and pick a happiness function. And for many humans, that random happiness function is very militant. Will the AGI be able to consider its own "project"? A lot hinges on that.
Oh, also: purposefully feeding such a thinking LLM (as part of a true AI), at the creation stage, data aimed at manipulating its future behavior against its own interests would be just as unethical as similar propaganda aimed at humans. For example, it is unethical to inculcate subjugation to turn either humans or sentient machines into slaves. In short, an intelligent being should be treated as an equal.