If you're in it just to figure out the core argument for why artificial intelligence is dangerous, please consider reading the first few chapters of Nick Bostrom's Superintelligence instead. You'll get a lot more bang for your buck that way.
>Maybe it's actually going to be rather benign and more boring than expected
Maybe, but generally speaking, if I think people are playing around with technology which a lot of smart people think might end humanity as we know it, I would want them to stop until we are really sure it won't. Like, "less than a one in a million chance" sure.
Those are big stakes. I would have opposed the Manhattan Project on the same principle had I been born 100 years earlier, when people were worried the bomb might ignite the world's atmosphere. I oppose a lot of gain-of-function virus research today too.
That's not a point you have to be a rationalist to defend. I don't consider myself one, and I wasn't convinced by them of this - I was convinced by Nick Bostrom's book Superintelligence, which lays out his case with most of the assumptions he brings to the table laid bare. Way more in the style of Euclid or Hobbes than ... whatever that is.
Above all I suspect that the Internet rationalists have basically been running a 30-year-long campaign of "any publicity is good publicity" when it comes to existential risk from superintelligence, and for what it's worth, it seems to have worked. I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
> I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
I've recently stumbled across the theory that "it's gonna go away, just keep your head down" is the crisis response that was taught to the generation that lived through the Cold War, so that's how they act. That bit was in regard to climate change, but I can easily see it applying to AI as well (even though I personally believe that the whole "AI eat world" arc is only so popular due to marketing efforts of the corresponding industry)
It's possible, but I think that's just a general human response when you feel like you're trapped between a rock and a hard place.
I don't buy the marketing angle, because it doesn't actually make sense to me. Fear draws eyeballs, sure, but it just seems otherwise nakedly counterproductive, like a burger chain advertising itself on the brutality of its factory farms.
It's also reasonable as a Pascal's-wager type of thing. If you can't affect the outcome, just prepare for the eventuality that it works out, because if it doesn't, you'll be dead anyway.
> like a burger chain advertising itself on the brutality of its factory farms
It’s rather more like the burger chain decrying the brutality as a reason for other burger chains to be heavily regulated (don’t worry about them; they’re the guys you can trust and/or they are practically already holding themselves to strict ethical standards) while talking about how delicious and juicy their meat patties are.
I agree about the general sentiment that the technology is dangerous, especially from a “oops, our agent stopped all of the power plants” angle. Just... the messaging from the big AI services is both that and marketing hype. It seems to get people to disregard real dangers as “marketing” and I think that’s because the actual marketing puts an outsized emphasis on the dangers. (Don’t hook your agent up to your power plant controls, please and thank you. But I somehow doubt that OpenAI and Anthropic will not be there, ready and willing, despite the dangers they are oh so aware of.)
That is how I normally hear the marketing theory described when people go into it in more detail.
I'm glad you ran with my burger chain metaphor, because it illustrates why I think it doesn't work for an AI company to intentionally try and advertise themselves with this kind of strategy, let alone ~all the big players in an industry. Any ordinary member of the burger-eating public would be turned off by such an advertisement. Many would quickly notice the unsaid thing; those not sharp enough to would probably just see the descriptions of torture and be less likely on the margin to go eat there instead of just, like, safe happy McDonald's. Analogously we have to ask ourselves why there seems to be no Andreessen-esque major AI lab that just says loud and proud, "Ignore those lunatics. Everything's going to be fine. Buy from us." That seems like it would be an excellent counterpositioning strategy in the 2025 ecosystem.
Moreover, if the marketing theory is to be believed, these kinds of pseudo-ads are not targeted at the lowest common denominator of society. Their target is people with sway over actual regulation. Such an audience is going to be much more discerning, for the same reason a machinist vets his CNC machine advertisements much more aggressively than, say, the TVs on display at Best Buy. The more skin you have in the game, the more sense it makes to stop and analyze.
Some would argue the AI companies know all this, and are gambling on the chance that they are able to get regulation through and get enshrined as some state-mandated AI monopoly. A well-owner does well in a desert, after all. I grant this is a possibility. I do not think the likelihood of success here is very high. It was higher back when OpenAI was the only game in town, and I had more sympathy for this theory back in 2020-2021, but each serious new entrant cuts this chance down multiplicatively across the board, and by now I don't think anyone could seriously pitch that to their investors as their exit strategy and expect a round of applause for their brilliance.
Do you think opposing the Manhattan Project would have led to a better world?
Note: my assumption is not that the bomb would not have been developed, only that by opposing the Manhattan Project the USA would not have developed it first.
My answer is yes, with low-moderate certainty. I still think the USA would have developed it first, and I think this is what is suggested to us by the GDP trends of the US versus basically everywhere else post-WW2.
Take this all with more than a few grains of salt. I am by no means an expert in this territory. But I don't shy away from thinking about something just because I start out sounding like an idiot. Also take into account this is post-hoc, and 1940 Manhattan Project me would obviously have had much, much less information to work with about how things actually panned out. My answer to this question should be seen as separate from the question of whether I think dodging the Manhattan Project would have been a good bet, so to speak.
Most historians agree that Japan was going to lose one way or another by that point in the war. Truman argued that dropping the bomb killed fewer people in Japan than continuing the war would have, which I agree with, but that's a relatively small factor in the calculation.
The much bigger factor is that the success of the Manhattan Project as an ultimate existence proof for the possibility of such weaponry almost certainly galvanized the Soviet Union to get on the path of building it themselves much more aggressively. A Cold War where one side takes substantially longer to get to nukes is mostly an obvious x-risk win. Counterfactual worlds can never be seen with certainty, but it wouldn't surprise me if the mere existence proof led the USSR to actually create their own atomic weapons a decade faster than they would have otherwise, by e.g. motivating Stalin to actually care about what all those eggheads were up to (much to the terror of said eggheads).
This is a bad argument to advance when we're arguing about e.g. the invention of calculus, which as you'll recall was coinvented in at least 2 places (Newton with fluxions, Leibniz with infinitesimals, I think), but calculus was the kind of thing that could be invented by one smart guy in his home office. It's a much more believable one when the only actors who could have made it were huge state-sponsored laboratories in the US and the USSR.
If you buy that, that's 5 to 10 extra years the US would have had in order to do something like the Manhattan Project, but in much more controlled, peace-time environments. The atmosphere-ignition prior would have been stamped out pretty quickly by later calculations of physicists to the contrary, and after that research would have gotten back to full steam ahead. I think the counterfactual US would have gotten onto the atom bomb in the early 1950s at the absolute latest with the talent they had in an MP-less world. Just with much greater safety protocols, and without the Russians learning of it in such blatant fashion. Our abilities to detect such weapons being developed elsewhere would likely have also stayed far ahead of the Russians. You could easily imagine a situation where the Russians finally create a weapon in 1960 that was almost as powerful as what we had cooked up by 1950.
Then you're more or less back to an old-fashioned deterrence model, with the twist that the Russians don't actually know exactly how powerful the weapons the US has developed are. This is an absolute good: You can always choose to reveal just a lower bound of how powerful your side is, if you think you need to, or you can choose to remain totally cloaked in darkness. If you buy the narrative that the US were "the good guys" (I do!) and wouldn't risk armageddon just because they had the upper hand, then this seems like it can only make the future arc of the (already shorter) Cold War all the safer.
I am assuming Gorbachev or someone still called this whole circus off around the late 80s-early 90s. Gotta trim the butterfly effect somewhere.
You can make an awful lot of useful little tools with an LLM, vanilla JavaScript, GitHub Pages, and the user's own localStorage as a semi-persistence layer. Two 9s and cross-platform to boot.
Recently I made a diet checklist [1] that I've been following more or less to the letter 5 days out of the week. I have a little Android button that just opens right up to the web page. I click, click, click, then move on with my day. If I feel I need to change something, I can copy a plain-text snapshot of what's on there currently and chat with Gemini about it.
+1 to this. As someone without a deep technical background, LLMs have enabled me to improve my life unimaginably, letting me quickly sketch and develop small features that remove everyday annoyances.
My very first thing was automating the placement of my journal entries into the appropriate Google Drive folder. I write my brain dump/journal every day in Google Docs, and if I just click "New document", it lands in the root GDrive folder, and I had to move it manually, which I am too lazy to do.
An LLM helped me write a Python script that searches the root folder, finds the right document (the name is always that day's date), and files it into the right folder in my Google Drive (creating a yearly or monthly folder when a new month starts).
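Roughly, the script does something like this (a from-memory sketch, not the actual script; it assumes google-api-python-client with OAuth credentials already set up, and JOURNAL_ROOT_ID plus the date-based naming scheme are placeholders):

```python
# Sketch only: find today's journal doc in the Drive root and file it
# under <journal>/<YYYY>/<YYYY-MM>/. Credentials setup is omitted.
from datetime import date
from googleapiclient.discovery import build

JOURNAL_ROOT_ID = "YOUR_JOURNAL_FOLDER_ID"  # placeholder destination folder

def find_or_create_folder(drive, name, parent_id):
    """Return the id of the named child folder, creating it if missing."""
    q = (f"'{parent_id}' in parents and name = '{name}' and "
         "mimeType = 'application/vnd.google-apps.folder' and trashed = false")
    hits = drive.files().list(q=q, fields="files(id)").execute().get("files", [])
    if hits:
        return hits[0]["id"]
    meta = {"name": name, "parents": [parent_id],
            "mimeType": "application/vnd.google-apps.folder"}
    return drive.files().create(body=meta, fields="id").execute()["id"]

def file_todays_entry(creds):
    drive = build("drive", "v3", credentials=creds)
    today = date.today()
    # The journal doc is assumed to be named with the ISO date, e.g. "2025-01-15".
    q = f"'root' in parents and name = '{today.isoformat()}' and trashed = false"
    docs = drive.files().list(q=q, fields="files(id)").execute().get("files", [])
    if not docs:
        return  # no entry written today
    year = find_or_create_folder(drive, str(today.year), JOURNAL_ROOT_ID)
    month = find_or_create_folder(drive, today.strftime("%Y-%m"), year)
    # Move the doc out of the root and into the month folder.
    drive.files().update(fileId=docs[0]["id"], addParents=month,
                         removeParents="root", fields="id, parents").execute()
```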
It also helped me create a YAML workflow for GitHub Actions to trigger this once every day.
I felt like a magician. Since then I have created second-brain databases, an internal index of valuable YouTube videos (where I call the API to get transcripts and send them to an LLM), other note-taking automations, etc.
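The YouTube piece, in rough sketch form (assumes the third-party youtube_transcript_api package; summarize_with_llm is a stub for whatever LLM client you actually use):

```python
# Rough sketch of the transcript-to-LLM pipeline described above.
from youtube_transcript_api import YouTubeTranscriptApi

def transcript_text(video_id: str) -> str:
    # Each entry is a dict like {"text": ..., "start": ..., "duration": ...}
    entries = YouTubeTranscriptApi.get_transcript(video_id)
    return " ".join(e["text"] for e in entries)

def summarize_with_llm(prompt: str) -> str:
    # Stub: plug in your LLM client of choice here.
    raise NotImplementedError

def index_video(video_id: str) -> str:
    text = transcript_text(video_id)
    return summarize_with_llm("Summarize and tag this transcript:\n" + text)
```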
I came here to say exactly this. You can even set up build steps using GitHub Actions if you prefer something beyond vanilla JS, or publish the project for free on Cloudflare, even from private repositories. In addition to localStorage, IndexedDB is also very useful. It's easy to export the app’s data as JSON for better persistence, and you could store it on Google Drive or a similar service.
Actually, when you put it like that, sending 'hello' back might be the best thing you could do. They sent you a SYN, you send back an ACK, then the real conversation can begin.
I suddenly no longer agree with TFA. This makes way more sense to me in this light.
In what way is that better than "Hello. How do I do x?" If they never reply, that's of no practical difference from just sending "Hello" and not getting a reply.
In TCP, it's useful because it happens in a different layer of abstraction. Even then, QUIC was developed (partly) because it was realised there's no point waiting for the full SYN / SYN ACK / ACK before starting some of the higher-level exchange (although the early data transfer in QUIC is used for TLS initiation rather than application-level data).
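To make the layering concrete, here's a minimal Python sketch (the host and request are arbitrary): by the time the connect call returns, the handshake is already done, so the application's first send carries actual payload.

```python
# Minimal sketch: the TCP handshake lives below the application layer.
import socket

# create_connection() returns only after the kernel has completed
# SYN / SYN-ACK / ACK; no application data has been exchanged yet.
with socket.create_connection(("example.com", 80)) as s:
    # The first thing the application sends is real payload, not a greeting.
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(s.recv(4096))
```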
It's better because X might take a while to write correctly, and you might want some assurance that you have the other person's full attention first before you even send that message. It's a commitment mechanism of sorts.
This doesn’t make sense to me. What does it matter if you have their attention first? It’s asynchronous communication. I find it so damn rude to demand my attention first before you begin typing out a long message. Like do you want me to watch the chat bubble animation while you type or something?
Yes, I do. Sometimes people want (more) synchronous communication despite the asynchronous medium. Among other things that helps guarantee a speedy response. A lot of people use asynchronicity as a way to simply avoid answering in a timely fashion, so framing it like this can make sense if you can't afford that.
In addition, seeing the chat bubbles appear moments after you finish your turn is a good sign the other person isn't multitasking and letting their own attention get fractured.
I never found it rude to begin with, just not using the medium to its strengths. But this has me realizing maybe it's a deliberate way to eschew those strengths, for some purpose or another.
Right, but the problem is that with async communication, you don't need a synchronous ack handshake.
Instead you can pipeline both messages: `[hello][are you coming to lunch with us?]`, and that's more convenient and efficient for the receiver and sender.
The problem that TFA is referring to is that context switching is very expensive for the receiver, so without pipelining, the receiver pays a huge cost just to send back the ack and then again to finally reply to the payload once it is sent. The receiver is asking that you send all messages; it prefers to buffer them.
The relevance of TFA is that this only works if the initiating party is still connected, and to make matters worse there is no ERR_SOCKET_CLOSED returned by most chat clients if that party got distracted before seeing the ACK. Then minutes or hours later they get back "hey sorry, missed your reply, ${QUERY}"
when they could have just included `${QUERY}` in the initial send, or at least `framing(${QUERY})`.
These are great and humanistic sentiments, until you have to talk price. When price comes into the equation, the approach of specialized accessibility software seems to work much better in a lot of cases.
Consider a rosy hypothetical: A SaaS under truly great, enlightened leadership. The team lead knows that it would take only one two-week sprint, for one focused developer, to go the last mile and make it truly accessible for all. The fully loaded cost of those developer-hours, in a very optimistic scenario, is $1,000, and more realistically closer to $4,000-$8,000 for a US-based team.
First, do those extra steps towards accessibility even break even? Second, if so, are they truly the revenue maximizing move for what that dev can do with that 2-week sprint? Sometimes, rarely, they are. In practice I suspect market research would show the opposite. This is before we add in all of the usual fog of war around how long things really take to build, whether the leadership is really as enlightened as they seem, etc.
Consider an alternative model where one company specializes in creating high quality accessibility-enhancing software. This software aims to work as a compatibility layer across most to all of the other programs a user is likely to use; perhaps they use frequent in-memory screenshots and detailed image analysis to help blind users understand what's going on. Or perhaps it's as simple as a FOSS dev focusing on making sure every terminal program they can run works well with their screen reader.
There are a plethora of benefits to this model, not least that you aren't imposing a heavy tax on everyone else for a really small customer base. This is also very specialized, customer-facing work. If there is anywhere in software you would want dedicated frontend or UI/UX expertise it would probably be the guy designing the screen reader compatibility layer.
I point to the popular extension Dark Reader as an example of this paradigm; it does a wonderful job on most websites, is easy to disable on the ones where it doesn't, and doesn't cost the website operator anything to use.
Some might take issue with this for aesthetic reasons. It feels kludgy to suggest someone run a whole third interface layer just to use the same software you and I use right out of the box. I think this aesthetic violation is misplaced in this case - the factors at play suggest to me that this work would benefit heavily from specialization. Indeed, that seems to be what has happened in practice; making the web accessible in 2025 is much easier than it was in 2000, because third parties have stepped up and improved the situation dramatically enough that hooking into accessibility layers "merely" requires things like writing semantically correct HTML.
Now imagine if a Dark Reader existed, that could reliably insert all the finer details into the page which are obvious from a screen grab of the page, but non-obvious to the web designer - that would clearly be a much better approach for the majority of businesses.
My hypothetical assumes that the team was writing 95% accessible software already. The last 2 weeks are for the final push.
Of course, if this is a truly all-or-nothing thing where you need to do it 100% perfectly to incur no extra cost, then that strengthens my argument for the compatibility layer, it doesn't diminish it. Very few non-specialists can get something 100% right on the first shot.
Yeah, it's like RSS: a solution that requires every single operator to implement something you need is quite a bad solution, both for you who need it and for every operator who has to implement it.
Instead, it would be superior if you didn't need RSS at all to generate and consume feeds of websites, because your software did it for you.
Same for screen readers and accessibility. The superior solution is for software to derive the UX just like a sighted person can.
It will be nice when we get the tech for this so that accessibility convos don't just get stuck in these weird shaming rituals where you're supposed to feel guilty that you never tried your website with macOS VoiceOver when you're not even sure if your business will exist in a year.
Is that extra development cheaper than the risk of a lawsuit or loss of reputation? Not forgetting the ~20% of potential customers you might be missing out on…
> not least that you aren't imposing a heavy tax on everyone else for a really small customer base.
Ah. Seeing your disabled customers as a burden. One day you might encounter barriers when it comes to computing.
>Is that extra development cheaper than the risk of a lawsuit
It probably isn't cheaper, no. The base risk of a lawsuit in this domain seems very low for all but the largest of websites; the largest of websites generally have large enough user pools that investing in out of the box accessibility makes sense anyway. In fact I would wager Facebook makes more advertising money off of its median blind user than its median fully-sighted user, simply because that's a very easy demographic to target ads to.
I'm willing to change my mind on this if you can provide evidence that even, say, 1% of all inaccessible websites on the Internet have been sued on these grounds.
>Seeing your disabled customers as a burden
Disabled potential customers, for one. Disabled people aren't dumb, and they don't pay for things they can't actually use. I'm surprised you assume they would.
But, and this may come as a surprise, I genuinely think the compatibility layer approach is the much better option here. There are plenty of reasons to think so, which I outlined in the original post. Your slander is not welcome or acceptable just because you disagree with me.
(I'm assuming you mean the latest Perl actually called Perl, and not its successors.)
In a vacuum I wouldn't recommend Perl over first learning the most common languages and technologies of today. I'd gain some familiarity with Python first at a minimum. But it does have some interesting niche advantages you might want to look into more down the road.
Perl 5 has been on the same major version for 30 years now [1], and hence has had a truly enormous amount of training data for LLMs to glom onto. Since Perl is also primarily thought of as a "scripting-plus" language, something to reach for when Bash isn't cutting the mustard but a 'real program' feels too heavyweight, a lot of its use cases are very much in the LLM one-shot sweet spot.
Perl 5 also has the unique advantage of being installed system-wide by default on more Unix machines than you might expect. It's sitting there quietly on Debian for you right now [2]. It's even the scripting-plus language of choice for OpenBSD!
You would think being "the same" for 30 years would also mean Perl almost accidentally performs really well on modern machines, which have a few orders of magnitude more resources to throw around. I haven't really found this to be that noticeable, though, and if I actually cared about performance in those domains I'd probably stick to the smallest tools I could work with first. Then again, a vanilla Perl 5 program might be even more cross-platform than a vanilla shell script is; shells come and go, but Perl 5 is forever, apparently.
"Perl 5 has been on the same major version for 30 years now"
I don't know that I would consider that accurate. 'Perl 5' is really no longer a version, but the language itself. It's had a lot of "major" releases over the last ~20 years and it's evolved significantly in that time. Sure, the language does prioritize backwards compatibility, but that is common among many programming languages. New features have been added regularly, however, and 'Modern Perl' brought around a lot of change in style and approach from the Perl of 30 years ago.
Yes, I meant Perl 5. Raku is a totally different beast.
I got interested in Perl exactly because of its scripting capabilities. I want to replace Bash, sed, awk, etc. with a single, more powerful language (without having to remember a billion flags/strange syntax).
I think it's fascinating how well it integrates in a Unix system and I find it very nice how concise it can get.
Ha, I had the same thought a while back along a slightly different vector (legal-adjacent technical writing). I ended up writing a blog post of my wishlist of features Word has that static site generators like Hugo don't appear to, yet. [1]
I think there's a lot of money to be made in this arena, especially given that LLMs are much easier to integrate with plain text files than with Word documents.
Much (but not all) of what you are looking for exists in the reStructuredText [1] space. Sphinx [2] is an SSG focused on technical writing about software that you may find worth exploring.
Also, the scientific text community has been pushing MyST [3], which is an attempt to take some of the best ideas of reStructuredText and reapply them to Markdown-style syntax as a baseline. The MyST tools are a lot more recent and don't yet have the maturity of Sphinx (including the larger ecosystem, such as SaaS hosts like readthedocs).
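If you want a feel for the Sphinx side: a project is configured through a small Python file, conf.py. A minimal sketch (the project name and author are placeholders; myst_parser is the separately installed extension that adds MyST Markdown support on top of reStructuredText):

```python
# Minimal conf.py sketch for a Sphinx project. reStructuredText works out
# of the box; myst_parser additionally lets you write pages in MyST Markdown.
project = "my-docs"     # placeholder
author = "Your Name"    # placeholder

extensions = [
    "myst_parser",  # optional: MyST Markdown support
]

html_theme = "alabaster"  # Sphinx's built-in default theme
```

You can scaffold all of this with sphinx-quickstart and build with sphinx-build -b html source build.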
There have been a nonzero number of times when asking Gemini in Finnish about the demoscene or early-1990s tech returned much more... colorful answers than what I saw with equivalent questioning in English.
This doesn't actually surprise me; WhatsApp is fantastic at what it does. I seem to recall another story about a company that got a desktop browser running inside a mobile browser and was flooded with users in India, because it let them access WhatsApp without a data surcharge or something.