And my take! A fork of fish where any command that starts with > or a capital letter is fed to $fish_llm_command: https://github.com/breuleux/fish-shell. With Claude's help, that took all of 30 minutes to make.
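For the curious, the routing rule amounts to something like this, sketched in Python rather than as the actual fish patch (the "llm" fallback command name here is made up):

    import os, re, shlex, subprocess

    def dispatch(line):
        # The fork's rule, per the comment above: lines starting with '>' or a
        # capital letter go to $fish_llm_command; everything else runs normally.
        if re.match(r"^\s*(>|[A-Z])", line):
            llm = os.environ.get("fish_llm_command", "llm")  # fallback is assumed
            subprocess.run(shlex.split(llm) + [line.lstrip("> ").strip()])
        else:
            subprocess.run(line, shell=True)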
I don’t tidy up very often, but when I do, it doesn’t take much time or energy. I just dump everything that isn’t version controlled into a junk folder, and it feels great.
> It's easier for a small number of people to coordinate, than a large number.
That's basically my main argument for replacing election-based democracy with lottery-based democracy. Electing the right representatives is a coordination problem in and of itself, and one the wealthy are already quite adept at manipulating, so we might as well cut out the middleman and pick a random, representative sample of the population instead, who can then coordinate properly.
It's generally easier to make such a process tamper-proof than an election. You can pick a cryptographically secure, open source PRNG and determine the seed in a decentralized way by allowing anyone to contribute a salt to a list that is made public at the deciding moment. Then anyone can verify the integrity of the process by checking that the seed includes their contribution and recomputing the selection themselves.
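A minimal sketch of the verification side, in Python (the salt format, candidate roll, and sample size are all made up, and a real design would commit to salts first to prevent last-mover seed grinding):

    import hashlib
    import random

    def select_representatives(public_salts, roll, k):
        # Derive one seed from every published salt, then sample k names.
        # Anyone with the public list can recompute and verify the draw.
        seed = hashlib.sha256("\n".join(public_salts).encode()).digest()
        rng = random.Random(seed)           # seeded, reproducible PRNG
        return rng.sample(sorted(roll), k)  # sorting makes input order canonical

    salts = ["alice:7f3a9c", "bob:20e1d4", "carol:98b2aa"]  # published list
    roll = [f"citizen-{i:05d}" for i in range(100_000)]
    print(select_representatives(salts, roll, 3))  # identical for every verifier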
>You can pick a cryptographically secure open source PRNG and determine the seed in a decentralized way by allowing anyone to contribute a salt into a list which is made public at the deciding moment.
If that were a viable model for the real world, we could make existing elections just as tamper-proof.
If the government doesn't have enough power, the wealthy won't need to bribe politicians to do their bidding. They will do their own bidding directly, and there will be nobody to stop them.
It's like, if you want to sell your cyanide penis pills under big government, you need to bribe someone. If you want to sell them under small government, you just... you just sell them, that's what.
There may be ways to design a government where power is better distributed, e.g. using sortition, but ultimately it needs to be richer and more powerful than its wealthiest citizens, otherwise these wealthy citizens will assess, correctly, that when push comes to shove, the laws won't apply to them, and they do not need the government's permission to do what they want.
Even a small government still has courts; in fact, they would be a far more sizeable fraction of the government, and thus a lot more effective. So if people like Epstein engage in criminal behavior, or even just unlawful behavior they would be liable for, they can definitely be held accountable.
But suppose you have egalitarian nation N -- what stops the billionaire from non-egalitarian nation B from influencing your politicians? Especially if nation N is small and nation B is large.
Moreover -- why would low-level elites (think: entrepreneurs, small business owners, etc.) stay in nation N if it were more profitable to do business in nation B? Recall that this is precisely the type of person that is often the most mobile and internationalized.
> These feel like they involve something beyond "predict the next token really well, with a reasoning trace."
I don't think there's anything you can't do by "predicting the next token really well". It's an extremely powerful and extremely general mechanism. Saying there must be "something beyond that" is a bit like saying physical atoms can't be enough to implement thought and there must be something beyond the physical. It underestimates the nearly unlimited power of the paradigm.
Besides, what is the human brain if not a machine that generates "tokens" which the body propagates through nerves to produce physical actions? What would a machine have to produce in response to its environment and memory, if not a sequence of such tokens?
The point is that "predicting the next token" is such a general mechanism as to be meaningless. We say that LLMs are "just" predicting the next token, as if this somehow explained all there was to them. It doesn't, not any more than "the brain is made out of atoms" explains the brain, or "it's a list of lists" explains a Lisp program. It's a platitude.
In the case of LLMs, "prediction" is overselling it somewhat. They are token sequence generators. Calling these sequences "predictions" vaguely corresponds to our own intent in training these machines, because we use the value of the next token as a signal to either reinforce or move away from the current behavior. But there's nothing intrinsic in the inference math that says they are predictors, and we typically run inference at a high enough temperature that we don't actually generate the maximum-likelihood tokens anyway.
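To make the temperature point concrete, here is roughly what one decoding step looks like (an illustrative numpy sketch, not any particular model's decoder):

    import numpy as np

    def next_token(logits, temperature=1.0):
        # temperature ~ 0: greedy argmax, the "max likelihood" reading.
        # typical temperatures: a stochastic generator that routinely
        # emits tokens other than the most likely one.
        if temperature == 0:
            return int(np.argmax(logits))
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())   # numerically stable softmax
        probs /= probs.sum()
        return int(np.random.default_rng().choice(len(probs), p=probs))

    logits = [2.0, 1.5, 0.3, -1.0]
    print(next_token(logits, temperature=0))    # always token 0
    print(next_token(logits, temperature=0.8))  # usually, not always, token 0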
The whole terminology around these things is hopelessly confused.
I mean... I don't think that statement is far off. Much of what we do is entirely about predicting the world around us, no? From physics (where the ball will land) to the emotional states of others based on our actions (theory of mind), we operate very heavily on a predictive model of the world around us.
Couple that with all the automatic processes in our minds (blanks filled in that we never actually observed, yet are convinced we did), hormonal states that drastically affect our thoughts and actions...
And the result? I'm not a big believer in the uniqueness or level of autonomy so many think we have.
With that said, I am in no way saying LLMs are even close to us, or even remotely close to the right implementation to get close to us. The level of complexity in our "stack" alone dwarfs LLMs. I'm not even sure LLMs are up to a worm's brain yet.
Sure, you can put it this way, with the caveat that reality at large isn't strongly definable.
You can sort of see this with good engineering: half of it is strongly defining a system simple enough to be reasoned about and built up, the other half is making damn sure that the rest of reality can't intrude, violate your assumptions and ruin it all.
It is also a courtesy that free countries respect US copyright. I wouldn't be surprised if EU countries have already started ramping up corporate espionage and are making contingency plans to seize all data and assets on their territory. If they manage to get ahold of source code and data, they may be able to keep some services running without US involvement.
Netflix is a good example: the functionality isn't difficult to reproduce, and the only thing that restricts its library is copyright, which the EU could just stop enforcing for American companies.
> It is also a courtesy that free countries respect US copyright
Which, itself, is downstream of the US signing onto the Berne Convention. American copyright actually used to be reasonable, and (western) Europe was the insane one with life terms. All that is ugly about the US was buried so deeply in Europe that it is outside, here, with us.
Then America had the extremely short-sighted idea to extend copyright to software, then use software to enforce copyright, and then make it independently illegal to tell anyone how to bypass that enforcement software. This was all then foisted back onto Europe, whose creative industries begged for it, not knowing that it basically meant surrendering to the US before the war had even started.
Seizing American copyright would be a good start, but what you really want is to drop anti-circumvention law. Because that's the first domino[0] in the chain. Europe is chock full of businesses that would absolutely fall in line around a tyrant king just like American businesses have, and laws like that enable such businesses to exist.
What we observe is also consistent with the idea that when humans have no idea what they're talking about, it's usually more obvious than when LLMs have no idea what they're talking about. In which case the author is lulling themselves into a false sense of confidence chatting with AI instead of humans, merely trading one form of incompetence for another.
I think so, yes. We rely a lot on eloquence and general knowledge as signals of competence, and LLMs beat most people at these. That's the "usually" -- I don't think good human bullshitters are more obvious than LLMs.
This may not apply to you if you regard LLMs, including their established rhetorical patterns, with greater suspicion or scrutiny (and you should!). It also does not apply when talking about subjects in which you are knowledgeable. But if you're chatting about things you are not knowledgeable about, and you treat the LLM just like any human, I think it applies. There's a reason LLM psychosis is a thing: rhetorically, these things can simulate the ability of a cult leader.
I think I'm going to have to disagree. When people tell you something incorrect, they usually believe it's correct and that they're trying to help. So it comes across with full confidence, helpfulness, and a trustworthy attitude. Plus, people often come with credentials -- PhDs, medical degrees, etc. -- so we're even more caught off guard when they turn out to be totally and completely wrong about something.
On the other hand, LLMs are just text on a screen. There are zero of the human signals that tell us someone is confident or trustworthy or being helpful. It "feels" like any random blog post from someone I don't know. So it makes you want to verify it.
There is a relatively hard upper bound on streaming video, though. It can't grow past everyone watching video 24/7. Use of genAI doesn't have a clear upper bound and could increase the environmental impact of anything it is used for (which, eventually, may be basically everything). So it could easily grow to orders of magnitude more than streaming, especially if it eventually starts being used to generate movies or shows on demand (and god knows what else).
Perhaps you are right in principle, but I think advocating for degrowth is entirely hopeless. 99% of people will simply not choose to decrease their energy usage if it lowers their quality of life even a bit (including things you might consider luxuries, not necessities). We also tend to have wars, and any idea of degrowth goes out the window the moment there is a foreign military threat with an ideology that is not limited by such ways of thinking.
The only realistic way forward is trying to make energy generation greener (renewables, nuclear, better efficiency), not fighting to decrease human consumption.
This being said, I think the alternatives are wishful thinking. Better efficiency is often counterproductive: reducing the energy cost of something by, say, half can lead to its use more than doubling (the Jevons paradox). Increasing efficiency only helps for things with no latent demand, basically.
And renewables and nuclear are certainly nicer than coal, but every energy source can lead to massive problems if it is overexploited. For instance, unfettered production of fusion energy would eventually create enough waste heat to cause climate change directly. Overexploitation of renewables such as solar would also cause climate change, by redirecting the energy that normally heats the planet. These may seem like ridiculous concerns, but you have to look at the pattern here. There is no upper bound whatsoever to the energy we would consume if it were free. If energy is cheap enough, we will overexploit it, and ludicrous things will happen as a result.
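To put very rough numbers on the waste-heat point (every figure here is my own ballpark assumption):

    import math

    EARTH_SURFACE_M2 = 5.1e14     # Earth's surface area
    GHG_FORCING_W_M2 = 2.5        # ~current anthropogenic forcing, W/m^2
    WORLD_PRIMARY_POWER_W = 2e13  # ~20 TW, rough current primary energy use

    # Waste heat comparable to today's greenhouse forcing:
    target_w = EARTH_SURFACE_M2 * GHG_FORCING_W_M2      # ~1.3e15 W
    growth_needed = target_w / WORLD_PRIMARY_POWER_W    # ~64x current use
    years_at_2pct = math.log(growth_needed) / math.log(1.02)
    print(f"~{growth_needed:.0f}x today's use, ~{years_at_2pct:.0f} years at 2%/yr")

So on these assumptions, a couple of centuries of ordinary growth gets you there; energy that is cheap enough gets you there much faster.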
Again, I actually agree with you that advocating for degrowth is hopeless. But I don't think alternative ways forward such as what you propose will actually work.
If humanity's energy consumption is so high that there is an actual threat of causing climate change purely with waste heat, I think our technological development would be so advanced that we will be essentially immortal post-humans and most of the solar system will be colonized. By that time any climate change on Earth would no longer be a threat to humanity, simply because we will not have all our eggs in one basket.
But why do you think that? Energy use is a matter of availability, not purely of technological advancement. For sure, technological advancement can unlock better ways to produce it, but if people in the 50s somehow had an infinite source of free energy at their disposal, we would have boiled off the oceans before we got the Internet.
So the question is: at what point would the aggregate production of enough energy to cause climate change through waste heat become economically feasible? I see no reason to think this would come after we become "immortal post-humans." The current climate crisis is just one example of a scale-induced threat arriving prior to post-humanity. What makes this one so special or unique? I suspect there are many others down the line; it's just very difficult to understand the ramifications of scaling a technology before they unfold.
And that's the crux of the issue, isn't it? It's extremely difficult to predict what will happen once you deploy a technology at scale. There are countless examples of unintended consequences. If we keep going forward at maximal speed every time we make something new, we'll keep running headfirst into these unintended consequences. That's basically a gambling addiction. Mostly it's going to be fine, but...
I think it ultimately comes down to whether you care more about the what, or more about the how. A lot of coders love the craft: making code that is elegant, terse, extensible, maintainable, efficient and/or provably correct, and so on. These are the kind of people who write programming languages, database engines, web frameworks, operating systems, or small but nifty utilities. They don't want to simply solve a problem, they want to solve a problem in the "best" possible way (sometimes at the expense of the problem itself).
It's typically been productive to care about the how, because it leads to better maintainability and a better ability to adapt or pivot to new problems. I suppose that's getting less true by the minute, though.
Crafting code can be self-indulgent, since most common patterns have been implemented multiple times in multiple languages. A lot of the time, the craft-oriented developer will reject an existing implementation because it doesn't match their sensibilities. There is absolutely a role for craft; however, the amount of craft truly needed in modern development is not as large as people would like. There are lots of well-crafted libraries and frameworks that can be adopted if you are willing to accommodate their worldview.
As someone who does that a lot... I agree. Self-indulgent is the word. It just feels great when the implementation is a perfect fit for your brain, but sometimes that's just not a good use of your time.
I kind of struggle with this. I basically hate everyone else's code, and by that I mean I hate most people's code. A lot of people write awesome code, but most people write what I'd call trash code.
And I do think there's more to it than preference. There are actual bugs in the code; it's confusing, and because it's confusing there are more bugs. It solves a simple problem, but in an unnecessarily convoluted way. I can solve the same problem in a much simpler way. But because everything is like this, I can't just fix it: there are layers and layers of this convolution that can't just be fixed, and of course there's no proper decoupling, so a refactor is all or nothing. If you start, it's like pulling on a thread and everything just unravels.
This is going to sound pompous and terrible, but honestly sometimes I feel like I'm too much better than other developers. I have a hard time collaborating, because the only thing I really want to do with other people's code is delete it and rewrite it. I can't fix it because it isn't fixable; it's just trash. I wish they had talked to me before writing it, I could have helped then.
Obviously, in order to function in a professional environment, I have to suppress this stuff and just let the code be ass, but it really irks me. Especially if I need to build on something someone else made: it's almost always ass, and I don't want to build on a crooked foundation. I want to fix the foundation so the rest of the building can be good too. But there's no time, and it's exhausting fixing everyone else's messes all the time.
I can guarantee you that if you were to write a completely new program and continued to work on it for more than 5 years, you'd eventually feel the same way about your own code. It's just unavoidable at some point. The only thing left then is degrees of badness. And nothing is more humbling than realizing that the only person who got you there is yourself.
No, I wouldn't. I have been working for years on the same codebase, it's not that hard to keep it clean and simple. I just refactor/redesign when necessary instead of adding hacky workarounds on top of hacky workarounds for years until the codebase is nothing but a collection of workarounds.
And most importantly I just design it well from the start, it's not that hard to do. At least for me.
Of course we all make mistakes; there are bugs in my code too. I have made choices I regret. But not on the level that I'm talking about.
I can guarantee you that I have been doing just that for 20 years, creating and working on the same codebase, and that it only got better with time (cleaner code and more robust execution), though more complex because the domain itself did.
We would have been stuck in the accidental complexity of messy hacks and their buggy side effects if we had not continuously adapted and improved things.
I feel this too. And the very worst code always seems to come from the people who otherwise seem the smartest. I've worked for a couple of people who are ACM alums and/or have their own Wikipedia page, multiple patents to their name, and leadership roles in business, and beyond anyone else I have ever worked with, their code has been the worst.
Which is part of what I find so motivating with AI. It is much better at making sense of that muck, and with some guidance it can churn out code very quickly with a high degree of readability.
I don't know how a "good" programmer opens the same gig+ file for writing in multiple threads (dozens sometimes) without any kind of concurrency management.
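(What was missing is something as basic as this, sketched in Python with illustrative names:)

    import threading

    _write_lock = threading.Lock()  # one lock shared by all writer threads

    def append_record(path, record):
        # Serialize appends so concurrent threads can't interleave writes.
        with _write_lock:
            with open(path, "a") as f:
                f.write(record + "\n")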
A "good" programmer doesn't give you a 2000+-line python script where every variable has no more than two characters in its name, with 0 comments or explanatory info.
A "good" programmer doesn't write a cluster that checks an "OK" REST endpoint on a regular interval, and then have that same cluster freak the fuck out and check 10-100x as often if that "OK" does not arrive exactly as it should.
I usually attribute it to people being lazy, not caring, or not using their brain.
It's quite frustrating when something is *so obviously* wrong, to the point that anyone with a modicum of experience should be able to realize that what was implemented is totally whack. Please, spend at least a few minutes reviewing your work so that I don't have to waste my time on nonsense.
I enjoyed that, but honestly it kind of doesn't resonate, because it amounts to "this stuff is really complicated, nobody knows how anything works, and that's why everything is shit."
I'm talking about simple stuff that people just can't do right. Not complex stuff. Imagine some perfect little example code in the React docs or whatever: good code. Exemplary code. Trivial code that does a simple little thing. Now imagine some idiot wrote code to do exactly the same thing but made it 8 times longer and incredibly convoluted for absolutely no reason, and that's basically what most "developers" do. Everyone's a bunch of stupid amateurs who can't do simple stuff right; that's my problem. It's not understandable, it's not justifiable, it's not trading off quality for speed. It's stupidity, ignorance, and laziness.
That's why we have coding interviews that are basically "write FizzBuzz while we watch," and when I solve their trivial task easily, everyone acts like I'm Jesus, because most of my peers can't fucking code. I literally have colleagues with years of experience who are barely at a first-year CS level. They don't know the basics of the language they've been working with for years. They're amateurs.
Then it’s quite possible that you’re working in an environment that naturally leads to people like that getting hired. If that’s something you see repeatedly, then the environment isn’t a good fit for you and you aren’t a good fit for it. So you’d be better served by finding a place where the standards are as high as you want, from the very first moment in the hiring process.
Obviously that’s easier said than done but there are quite a few orgs out there like that. If everyone around you doesn’t care about something or can’t do it, it’s probably a systemic problem with the environment.
Yeah, a bridge has a plan that it's built and verified against. It's the picture-book waterfall implementation. The software industry has moved away from this approach because software is not like bridges.
One of my better experiences with software development was actually with something waterfall-adjacent. The people I was developing software for produced a 50 page spec ahead of any code being written.
That gave me a complete picture of the business domain, and it let me point out parts of the spec that were just wrong with respect to the domain model, as well as things that could be simplified. Implementation became way more straightforward, and I still opted for a more iterative approach than just one deliverable at the end. About 75% of the spec got built and 25% was found to be unnecessary. It was a massive success: on time and with fewer bugs than the typical two-week "we don't know the big picture" slop that's easy to slide into with indecisive clients.
Obviously it wasn't "proper" waterfall, and it also didn't try to do a bunch of "agile" Scrum ceremonies, but it borrowed whatever I found useful. Getting a complete spec of the business needs, domain, and desired functionality (especially one without prescriptive bullshit like pixel-perfect wireframes and API docs written by people who won't write the API) was really good.
If you can't get a complete spec, it's better to start with something small that you can get detailed info on, and then iterate on that. It will involve refactoring, but that's better than badly designing the whole thing from the get-go.