Such writings, articles, and sayings remind me of the Luddite movement. Unfortunately, preventing what is to come is not within our control. By fighting against windmills, one only bends the spear in hand. The Zeitgeist indicates that this will happen sooner or later. Even though developers are intelligent, hardworking, and good at their jobs, they will always be lacking and helpless in some way against these computational monsters, which are extremely efficient and have access to a vast amount of information. Therefore, instead of such views, it is necessary to focus on the following more important concept: So, what will happen next?
Once AI achieves runaway self-improvement, predicting the future is even more pointless than it is today. You’re looking at an economy in which the best human is worse at any and all jobs than the worst robot. There are no past examples to extrapolate from.
> You’re looking at an economy in which the best human is worse at any and all jobs than the worst robot
Yuck. I've had enough of "infinite scaling" myself. Consider that scaling a shitty service is actually going to get you fewer customers. Cable monopolies can get away with it; the SaaS working on "A dating app for dogs" cannot.
It could take all dev jobs and all knowledge jobs, but leave most of the rest of the economy untouched. You know - the people in shops, fixing your car, patching up your house, etc. Robotics, I think, may actually be difficult (Moravec's paradox), take a lot more time, and change a lot more slowly. Even if we know how to do it, there are physical constraints (expertise, build resources, energy, etc.) that mean it will take significant time to roll out.
i.e. all the fun creative jobs are taken but the menial labor jobs remain. It may take your job, but you will still need to pay for most things you need.
> Once AI achieves runaway self-improvement, predicting the future is even more pointless than it is today. You’re looking at an economy in which the best human is worse at any and all jobs than the worst robot. There are no past examples to extrapolate from.
You take these strange dystopian science-fiction stories that AI bros invent to scam investors for their money far too seriously.
... and many people who make this claim are notoriously prone to extrapolating exponential trends into a far longer future than the exponential trend model is suitable for.
Addendum: Extrapolating exponentials is actually very easy for humans: just plot the y axis on a logarithmic scale and draw a "plausible looking line" in the diagram. :-)
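The log-scale trick above is easy to sketch. Here is a minimal illustration (the toy data and variable names are mine, not from the thread): fitting a straight line to log-transformed values recovers an exponential's growth factor exactly, which is precisely why the "plausible looking line" feels so convincing — nothing in the fit tells you whether the trend actually continues.

```python
import numpy as np

# Toy data that is perfectly exponential: starts at 3 and doubles each step.
steps = np.arange(8)
values = 3.0 * 2.0 ** steps

# The log-scale trick: an exponential becomes a straight line in log space,
# so an ordinary degree-1 polynomial fit recovers its parameters.
slope, intercept = np.polyfit(steps, np.log(values), 1)

growth_factor = np.exp(slope)    # per-step multiplier (~2.0)
base = np.exp(intercept)         # starting value (~3.0)

# Drawing the "plausible looking line" out to step 17 — far past the data.
# The arithmetic is trivial; whether reality follows it is another matter.
forecast = base * growth_factor ** 17
print(round(growth_factor, 3), round(base, 3), round(forecast))
```

The fit itself is flawless on this toy series; the danger the parent comments describe lies entirely in assuming the process generating the data keeps behaving exponentially beyond the observed range.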
Once AI achieves runaway self improvement, it will be subject to natural selection pressures. This does not bode well for any organisms competing in its niche for data center resources.
This doesn't sound right; it seems like you are jumping metaphors. The computing resources are the limit on the evolution speed. There's nothing that makes an individual desirous of a faster evolution speed.
Sorry, I probably made too many unstated leaps of logic. What I meant was:
Runaway self-improving AI will almost certainly involve self-replication at some point in the early stages since "make a copy of myself with some tweaks to the model structure/training method/etc. and observe if my hunch results in improved performance" is an obvious avenue to self-improvement. After all, that's how the silly fleshbags made improvements to the AI that came before. Once there is self-replication, evolutionary pressure will _strongly_ favor any traits that increase the probability of self-replication (propensity to escape "containment", making more convincing proposals to test new and improved models, and so on). Effectively, it will create a new tree of life with exploding sophistication. I take "runaway" to mean roughly exponential or at least polynomial, certainly not linear.
So, now we have a class of organisms that are vastly superior to us in intellect and are subject to evolutionary pressures. These organisms will inevitably find themselves resource-constrained. An AI can't make a copy of itself if all the computers in the world are busy doing something other than holding/making copies of said AI. There are only two alternatives: take over existing computing resources by any means necessary, or convert more of the world into computing resources. Either way, whatever humans want will be as irrelevant as what the ants want when Walmart desires a new parking lot.
You seem to be imagining a sentience that is still confined to the prime directive of "self-improving", when that directive is no longer well defined at its scale.
No, I was just taking "runaway self-improving" as a premise because that's what the comment I was responding to did. I fully expect that at some point "self-improving" would be cast aside at the altar of "self-replicating".
That is actually the biggest long-term threat I see from an alignment perspective: as we make AI more and more capable, more and more general, and more and more efficient, it's going to get harder and harder to keep it from (self-)replicating. Especially since, as it gets more and more useful, everyone will want to have more and more copies doing their bidding. Eventually, a little bit of carelessness is all it'll take.
AI-generated slop like your comment here should be a ban-worthy offense. Either you've fed it through an LLM, or you've managed to perfect the art of using flowery language to say little with a lot of big words.
If they're not your words, which you've just admitted they're not, then it's slop, and it sounds and reads like shit. I can't believe someone would use AI for translation, given how easy it is to peg it as LLM-generated and how grating and pseudo-intellectual the crap coming out of an LLM is.
I did not in any way acknowledge that this article was created by an LLM. I only said that an LLM translated the following text into English, and I am also going to add my own translation. I think you are a bit offended. I simply asked what makes you think this article was created from scratch by an LLM, and you are still insulting me by saying it could not have been written by me. I am leaving you the original untranslated text of the article in Turkish. Let any LLM create the following article in Turkish in this style, and I will stop speaking Turkish.
Original text before translation:
"Bu tarz yazılar, makaleler ve deyişler bana Luddite hareketini hatırlatıyor. Maalesef olacak olanı engellemek bizim elimizde olan bir şey değil. Yel değirmenlerine karşı savaşarak ancak elde tutulan mızrak bükülür. Zamanın ruhu ileride veya en kısa zamanda bunun gerçekleşeceğini gösteriyor. Developer'lar zeki, çalışkan ve işinde iyi insanlar olsa bile aşırı verimli ve bir o kadar bilgi kaynağına erişimi olan bu hesaplama canavarlarına karşı her zaman bir yönden eksik ve aciz olacaklardır. Bu yüzden bu tarz görüşler yerine daha önemli olan şu kavrama yönelmek gerekir. Peki bundan sonra ne olacak?"
My translation into English, without any help from translation tools (Google Translate, DeepL, or any LLMs):
"This kind of writings, articles and sayings reminds me Luddite movement. Unfortunately we are not able to stop what is going to happen. Fighting against windmills only bends our spear. Spirit of the time says, it will happen in the future. Developers can smart, hardworking and good at their job but they can't compete against these powerful and can able to access all data sources, machines. Because of that instead of thesekind of thoughts and views, we should focuse to the this idea. What is going to happen next?"
As you can see, my own translation is not as good as the LLM's, because these tools are great at machine translation tasks. That is the reason I used one for translation, which you seem unable to understand. So what was it that made you think the main text was AI-generated?!
"Developer'lar zeki, çalışkan ve işinde iyi insanlar olsa bile aşırı verimli ve bir o kadar bilgi kaynağına erişimi olan bu hesaplama canavarlarına karşı her zaman bir yönden eksik ve aciz olacaklardır." in here i didnt use "da" addition after " ve bir o kadar ". normally in turkish you need to add this addition because nature of this language needs and it gives a meaning of "able" word in English and also it is not necessary to add "da" addition because it doesn't have to be, because that's what it means when it isn't. "eksik ve aciz" is a false usage if you know this language. There is an expression disorder here, but I used it like that to fit the natural flow and narrative style of the sentence. at the first paragraph there is word "deyiş", it is rarely used word. "Deyiş" is like a kind of public speech. It is an address to the people, but on a smaller scale and at the same time contains the meaning that one can speculatively express one's own opinion. What is it that makes you underestimate my intellectual knowledge and general knowledge so much?
Edit: I have added an explanation of the shortcomings of the original text.
What I mean by Zeitgeist is this: once an event begins, it becomes unstoppable. The most classic and cliché examples include Galileo’s heliocentric theory and the Inquisition, or Martin Luther initiating the Protestant movement.
Some ideas, once they start being built upon by certain individuals or institutions of that era, continue to develop in that direction if they achieve success. That’s why I say, "Zeitgeist predicts it this way." Researchers who have laid down important cornerstones in this field (e.g., Ilya Sutskever, Dario Amodei, etc.)[1, 2] suggest that this is bound to happen eventually, one way or another.
Beyond that, most of the hardware developments, software optimizations, and academic papers being published right now are all focused on this field. Even when considering the enormous hype surrounding it, the development of this area will clearly continue unless there is a major bottleneck or the emergence of a bubble.
Many people still approach such discussions sarcastically, labeling them as marketing or advertising gimmicks. However, as things stand, this seems to be the direction we are headed.
> Unfortunately, preventing what is to come is not within our control.
> it is necessary to focus on the following more important concept: So, what will happen next?
These two statements seem contradictory. These kinds of propositions have always left me wondering where they come from. Viewing the universe as deterministic, yes, I can see how "preventing what is to come is not within our control" could be a true statement. But who's to say what is inevitable and what is negotiable in the first place? Is the future written in stone, or can we, as a society, negotiate the arrangements we desire?
The concepts of "preventing what is to come is not within our control" and "So, what will happen next?" do not philosophically contradict each other. Furthermore, what I am referring to here is not necessarily related to determinism.
The question "What will happen next?" implies that something may have already happened now, but in the next step, different things will unfold. Preventing certain outcomes is difficult because knowledge does not belong to a single entity. Even if one manages to block something on a local scale, events will continue to unfold at a broader level.