By devs, you mean those two guys on Twitter who brag about vibe coding with 100 agents running simultaneously? Meanwhile, Claude Code still can't display images. I wonder what they're doing with those 100 agents.
Grindr is the next Google. Sometimes when I need to search for something, I don't even open Google or ChatGPT; I just type into someone's DMs and they refuse to answer.
This is actually a genius-level move from Grindr. Anthropic/OpenAI can only dream of a userbase this stable.
A few years ago I even suggested they should add actual brainstorming and prototyping features, like Framer, for bros to discuss their business ideas. The dopamine rush of a Grindr notification in my AI chats would be insane levels of productivity.
This is cope. Managers are not magicians who will finally understand who is good and who is just vibe coding demos. If anything, it's going to become even harder for managers to tell the difference. More likely, the managers are at the same risk, because without a clique of software engineers they would have nothing to manage.
I am becoming more and more convinced that AI can't be used to make something better than what could have been built before AI.
You never needed 1000s of engineers to build software anyway; Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product. And now, with AI, that might be even harder to avoid. In the best case this means there would be 1000s of do-everything websites in the future; in the worst case, billions of apps doing one thing terribly.
The percentage of good, well-planned, consistent, and coherent software is going to approach zero in both cases.
I’m finding that the code LLMs produce is just average. Not great, not terrible. Which makes sense: the model is basically a complex representation of the average of its training data, right? If I want what I consider ‘good code’ I have to steer it.
So I wouldn’t use LLMs to produce significant chunks of code for something I care about. And publishing vibe coded projects under my own GitHub user feels like it devalues my own work, so for now I’m just not publishing vibe coded projects. Maybe I will eventually, under a ‘pen name.’
We've gone from "it's glorified auto-complete" to "the quality of working, end-to-end features is average" in just ~2 years.
I think it goes without saying that they will be writing "good code" in short order.
I also wonder how much of this "I don't trust them yet" viewpoint is coming from people who are using agents the least.
Is it rare that AI one-shots code that I would be willing to raise as a PR with my name on it? Yes, extremely so (almost never).
Can I write a more-specified prompt that improves the AI's output? Also yes. And the amount of time/effort I spend iterating on a prompt, to shape the feature I want, is decreasing as I learn to use the tools better.
I think the term prompt-engineering became loaded to mean "folks who can write very good one-shot prompts". But that's a silly way of thinking about it imo. Any feature with moderate complexity involves discovery. "Prompt iteration" is more descriptive/accurate imo.
First you have to classify what “good code” is, something that programmers have still not settled on in the over half a century that the field has existed.
I also think what the other reply said is true, going from average to “good code” is way harder because it implies a need for LLMs to self critique beyond what they do today. I don’t think just training on a set of hand picked samples is enough.
There’s also the knowledge cutoff aspect. I’ve found that LLMs often produce outdated Go code that doesn’t utilise the modern language features. Or for cases where it knows about a commonly used library, it uses deprecated methods. RAG/MCP can kind of paper over this problem but it’s still fundamental to LLMs until we have some kind of continuous training.
AIs can self-critique via mechanisms like chain of thought, or user-specified guard rails like a hook that requires the test suite to pass before a task can be considered complete/ready for human review. These can and do result in higher-quality code.
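As a concrete sketch of such a guard rail (the function and command names here are illustrative, not any particular agent framework's API), it can be as small as a check that refuses to mark a task done until the suite passes:

```python
import subprocess

def task_is_complete(test_command):
    """Guard rail: a task only counts as done when the test suite passes.

    `test_command` is the suite invocation, e.g. ("python", "-m", "pytest", "-q").
    An agent hook can call this before marking work ready for human review.
    """
    result = subprocess.run(list(test_command), capture_output=True, text=True)
    # Exit code 0 means the whole suite passed; anything else blocks completion.
    return result.returncode == 0
```

Wired into an agent loop, a failing suite sends the model back to fix its own output instead of handing broken code to the reviewer.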
Agree that "good code" is vague - it probably always will be. But we can still agree that code quality is going up over time without having a complete specification for what defines "good".
Unfortunately I can only give anecdotes, but in my experience the LLM's 'thinking' does not lead to code quality improvements in the same way that a programmer thinking for a while would.
In my experience having LLMs write Go, they tend to factor code in a not-so-great way from the start, probably due to lacking a mental model of pieces composing together. Furthermore, once a structure is in place, there doesn't seem to be a trigger point that causes the LLM to step back and think about reorganising the code, or about how the code it wants to write could be better integrated into what's already there. It tends to be very biased by the structures that already exist and doesn't really question them.
A programmer might write a function, notice it becoming too long or doing too much, and then decide to break it down into smaller subroutines. I've never seen an LLM really do this; they seem biased towards being additive.
I believe good code comes from an intuition which is very hard to convey. Imprinting hard rules into the LLM like 'refactor long functions' will probably just lead to overcorrection and poor results. It needs to build its own taste for good code, and I'm not sure if that's possible with current technology.
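For illustration, here's the kind of restructuring described above, as a made-up example (not LLM output): one function doing everything, then the same behavior split into small, named subroutines.

```python
def report_before(orders):
    # One function doing everything: filtering, summing, and formatting.
    total = 0
    for o in orders:
        if o["status"] == "paid":
            total += o["amount"]
    return f"Revenue: {total}"

# The refactor a human might reach for: each concern gets its own name.
def paid_orders(orders):
    return [o for o in orders if o["status"] == "paid"]

def revenue(orders):
    return sum(o["amount"] for o in paid_orders(orders))

def report_after(orders):
    return f"Revenue: {revenue(orders)}"
```

The additive bias means an LLM will happily keep extending `report_before`; noticing that the second shape is worth the churn is the taste that's hard to imprint.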
> Furthermore, once a structure is in place, there doesn't seem to be a trigger point that causes the LLM to step back and think about reorganising the code, or how the code it wants to write could be better integrated into what's already there.
Older models did do this, and it sucked. You'd ask for a change to your codebase and they would refactor a chunk of it and make a bunch of other unrelated "improvements" at the same time.
This was frustrating and made for code that was harder to review.
The latest generation of models appear to have been trained not to do that. You ask for a feature, they'll build that feature with the least changes possible to the code.
I much prefer this. If I want the code refactored I'll say to the model "look for opportunities to refactor this" and then it will start suggesting larger changes.
> A programmer might write a function, notice it becoming too long or doing too much, and then decide break it down into smaller subroutines. I've never seen an LLM really do this, they seem biased towards being additive.
The nice thing is a programmer with an LLM just steps in here, and course-corrects, and still has that value add, without taking all the time to write the boilerplate in between.
And in general, the cleaner your codebase the cleaner LLM modifications will be, it does pick up on coding style.
>The nice thing is a programmer with an LLM just steps in here, and course-corrects
This does not seem to be the direction things are going. People are talking about shipping code they haven't edited, most notably the author of Claude Code. Sometimes they haven't even read the code at all. With LLMs the path of least resistance is to take your hands off the wheel completely. Only programmers taking particular care are still playing an editorial role.
When the code is constructed by an LLM, the human in the driving seat doesn't get a chance to build the mental models that they usually would writing it manually. This stifles the ability to see opportunities to refactor. It is widely considered to be harder to read code than to write it.
>And in general, the cleaner your codebase the cleaner LLM modifications will be
Whilst true, this is a kind of "you're holding it wrong" argument. If LLMs had a model of what differentiates good code from bad code, whatever they pull into their context should make no difference.
> Whilst true, this is a kind of "you're holding it wrong" argument. If LLMs had a model of what differentiates good code from bad code, whatever they pull into their context should make no difference.
Good code is in the eye of the beholder. What reviewers in one shop would consider good code is dramatically different than another.
Conforming to the existing code base style is good in and of itself, if the context it pulls in makes no difference that makes it useless.
> When the code is constructed by an LLM, the human in the driving seat doesn't get a chance to build the mental models that they usually would writing it manually
I'm asking the LLM for alternatives and options constantly, to test different models. It can give me a write-up description of options, or go spin up subagents to go try 4 different things at once.
> It is widely considered to be harder to read code than to write it
Even more than writing code, I think LLMs are exceptional at reading code. They can review huge amounts of code incredibly fast, to understand very complex systems. And then you can just ask it questions! Don't understand? Ask more questions!
I have mcp-neovim-server open, so I just ask it to open the relevant pieces of code at those lines, and it can then show me. CodeCompanion makes it easy to ask questions about a line. It's amazing.
Reading code was one of the extremely hard parts of programming, and the machine is far far better at it than us!
> When the code is constructed by an LLM, the human in the driving seat doesn't get a chance to build the mental models that they usually would writing it manually.
Here's one way to tell me you haven't tried the thing without saying you haven't tried the thing. The ability to do deep inquiry into topics and to test and try different models is far, far better than it has ever been. We aren't stuck with what we write; we can keep iterating and trying at vastly lower cost, to do the hard work to discover what is a good model. Programmers have rarely had the luxury of time and space to keep working on a problem again and again, to adjust and change and tweak until the architecture truly sings. Now you can try a week's worth of architectures in an afternoon. There is no better time for those who want to understand to do so.
I feel like one thing missing from this thread is that most people adopting AI at a serious level are building really strong AGENTS.md files that refine tastes and practices and forms. The AI is pretty tasteless, isn't deliberate. It is up to us to explore the possibility space when working on problems, and to create good context that steers towards good solutions. And our ability to get information out, to probe into systems, to assess, to test hypotheses, is vastly higher, which we can keep using to become far better steersfolk.
Isn't it more likely that we are 80% of the way to maximum performance by doing 20% of the work, and that the remaining tiny performance increase will require a multiple of the work we have done so far, leaving us with performance that "isn't good enough"? Seems way more likely to me than a linear progression to AGI from here.
> They did run out of human-authored training data (depending on who you ask), in 2024/2025. And they still improve.
It seemed to me that improvements due to training (i.e. the model) in 2025 were marginal. The biggest gains were in structuring how the conversation with the LLM goes.
I'd argue that "good", or at least "good enough", is when they reach a point where it becomes preferable to spend your time prompting rather than reading and writing code. That the final output meets the feature specifications is more or less the goal.
A lot of developers are having a difficult time accepting that the code doesn't matter nearly as much anymore, myself included. The feedback cycles that made hot fixing, bug fixing, customer support, etc. so expensive, have shrunk by orders of magnitude. A codebase that can be maintained by humans is perhaps not a goal worth pursuing anymore.
To really see this and feel this, I think it's worthwhile to spend at least a weekend or two seeing what you can build without writing or reviewing any of the code. Use a frontier model. Opus 4.6 or Codex 5.3. Probably doesn't matter which one you choose.
If you give it an honest try, you'll see that a lot of the limitations are self-imposed. Said another way: the root problem is some flavor of the user under specifying a prompt, having inconsistent design docs, and not implementing guard rails to prevent the AI from reintroducing bugs you previously squashed.
It's a very new way of working and it feels foreign. But there are a lot of very smart, very successful people doing this. People who have written millions of lines of code over their lifetime, and who enjoyed doing it, are now fully delegating the task.
Think about it from a resource (calorie) expenditure standpoint.
Are you expending more resources writing the prompts than you would just doing it without them? That's the real question.
If you are expending more, which is what Simon is hinting at - are you really better off? I'd argue not, given that this can't be sustained for hours on end. Yet the expectation from management might be that you should be able to sustain this for 8 hours.
So again, are you better off? Not in the slightest.
Many things in life are counter-intuitive and not so simple.
P.S. You're not getting paid more for increasing productivity if you are still expected to work 8 hrs a day... lmao. Thankfully I'm not a SWE.
I think something a lot of people miss is that we're not all the same. We all have different internal thought models, whether from biological differences (ADHD brain?), educational differences, or overall abilities. And it seems a lot of people have this idea that everyone uses "AI" the same way. That's a lack of lateral thinking. Assuming we're all burning "calories" in the same way implies we all think, and work, alike.
Simon: "I'm frequently finding myself with work on two or three projects running parallel. I can get so much done, but after just an hour or two my mental energy for the day feels almost entirely depleted."
You're a time-waster; stop posting and creating noise.
People often describe the models as averaging their training data, but even for base models predicting the most likely next token this is imprecise and even misleading, because what is most likely is conditional on the input as well as what has been generated so far. So a strange input will produce a strange output — hardly an average or a reversion to the mean.
On top of that, the models people use have been heavily shaped by reinforcement learning, which rewards something quite different from the most likely next token. So I don’t think it’s clarifying to say “the model is basically a complex representation of the average of its training data.”
The average thing points to the real phenomenon of underspecified inputs leading to generic outputs, but modern agentic coding tools don’t have this problem the way the chat UIs did because they can take arbitrary input from the codebase.
And Unix was mainly made by two people. It's astounding that, as I get older, I keep finding that even tech managers don't know "The Mythical Man-Month" and how software production generally scales.
Thanks, I learned something, but the original point stands: 5 people is still not a lot, and well within the scale where you could manage things within the team yourself, without dedicated management, and have first-hand information flow.
I do agree with this idea in the sense that companies keep trying to add people to projects to do more things or complete projects sooner, which ends up wasting a lot of effort. A more cost-conscious way is to have smaller teams and give them more time to explore better approaches.
> Sorry but a 99.999% of developers could not have built Unix. Or Winamp.
> Managers are crossing their fingers that devs they hire are no worse than average, and average isn't very good.
The problem is that that's the same skill required to safely use AI tools. You need to essentially audit its output, ensure that you have a sensible and consistent design (either supplied as input or created by the AI itself), and 'refine' the prompts as needed.
AI does not make poor engineers produce better code. It does make poor engineers produce better-looking code, which is incredibly dangerous. But ultimately, considering the amount of code written by average engineers out there, it actually makes perfect sense for AI to be an average engineer — after all, that's the bulk of what it was trained on! Luckily, there's some selection effect there since good work propagates more, but that's a limited bias at best.
Agree completely. Where I'm optimistic about AI is that it can also help identify poorly written code (even its own code), and it can help rewrite it to be better quality. Average developers can't do this part.
From what I've found it's very easy to ask the AI to look at code and suggest how to make the code maintainable (look for SRP violations, etc, etc). And it will go to work. Which means that we can already build this "quality" into the initial output via agent workflows.
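A minimal sketch of such a two-pass workflow, assuming only a generic `llm` callable (prompt in, text out) as a stand-in for a real model client - the function name and check list are illustrative, not any specific tool's API:

```python
def quality_pass(llm, code, checks=("SRP violations", "dead code", "unclear names")):
    """Second agent pass: have the model critique the code, then rewrite it.

    `llm` is any callable prompt -> text; swap in a real client in practice.
    """
    # Pass 1: ask for a review against the maintainability checklist.
    critique = llm(f"Review this code for {', '.join(checks)}:\n{code}")
    # Pass 2: ask for a rewrite that addresses the review.
    return llm(f"Rewrite the code to address this review:\n{critique}\n---\n{code}")
```

The point is just that "quality" becomes a step in the pipeline: generate, critique, rewrite, rather than hoping the first draft is clean.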
> Winamp & VLC were build by less than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product.
Many types of software have essential complexity and minimal features that still require hundreds/thousands of software engineers. Having just 4 people is simply not enough man-hours to build the capabilities customers desire.
Complex software like 3D materials modeling and simulation, logistics software like factory and warehouse planning. Even the Linux kernel and userspace has thousands of contributors and the baseline features (drivers, sandbox, GUI, etc) that users want from a modern operating system cannot be done by a 4-person team.
All that said, there are lots of great projects with tiny teams. SQLite is 3 people. Foobar2000 is one person. ShareX, the screen-capture tool, is I think 1 developer in Turkey.
I will use my right to disagree. Maybe not 4 people everywhere, but if you have a product with a well-thought-out feature set, you create those features, and then you really don't need 1000s of people just to keep it alive and add features one by one.
I - of course - am talking about the perfect approach, with everyone focused on not f**ing it up ;)
Big projects can still be highly modular, and projects built by "1000s of devs" typically are. If your desired change can be described clearly without needing too much unrelated context, the LLM will probably get it right.
“You never needed 1000s of engineers to build software anyway”
What is the point of even mentioning this? We live in reality. In reality, there are countless companies with thousands of engineers making each piece of software. Outside of reality, yes you can talk about a million hypothetical situations. Cherry picking rare examples like Winamp does nothing but provide an example of an exception, which yes, also exists in the real world.
You need 1000 engineers because you have poor engineering leadership, or no engineering leadership, and engineering is a black hole that management shovels money into where it falls directly onto a huge plane of middle managers who do the best they can with their limited power and understanding. Meanwhile your sales team is writing specifications for the next version of the product, which they already promised to customers, and they hired an outside consultant to transform it into 500 spec documents written in damn near legalese, which will appear one day on the lead engineer's desk with no foreshadowing. It turns out that throwing more engineers at the problem helps here because you'll run out of tasks to assign to all of them and some will roam the halls and accidentally connect distributed knowledge back together.
I get this comment every time I say this, but there are levels to this. What you think is bad today could be considered artisan when things become worse than they are today.
I mean, you've never used the desktop version of Deltek Maconomy, have you? Somehow I can tell.
My point here is not to roast Deltek, although that's certainly fun (and 100% deserved), but to point out that the bar for how bad software can be and still, somehow, be commercially viable is already so low it basically intersects the Earth's centre of gravity.
The internet has always been a machine that allows for the ever-accelerated publishing of complete garbage of all varieties, but it's also meant that in absolute terms more good stuff also gets published.
The problem is one of volume not, I suspect, that the percentages of good versus crap change that much.
So we'll need better tools to search and filter but, again, I suspect AI can help here too.
No, wealth gets more concentrated. Fewer people on the team will be able to afford a comfortable lifestyle and save for retirement. More will edge infinitesimally closer to "barely scraping by".
Underrated comment. The reason that everyone complains about code all the time is because most code is bad, and it’s written by humans. I think this can only be a step up. Nailing validation is the trick now.
Validation was always the hard part, outside of truly novel areas - think edges of computer science (which generally happen very rarely and only need to be explored once or a handful of times).
Validation was always the hard part because great validation requires great design. You can't validate garbage.
Do not insult P-II w/256 MB of RAM. That thing used to run this demo[0] at full speed without even getting overwhelmed.
Except some very well maintained software, some of the mundane things we do today waste so much resources it makes me sad.
Heck, the memory use of my IDE peaks at VSCode's initial memory consumption, and I'd argue that my IDE will draw circles around VSCode while sipping coffee and compiling code.
> for no reason other than our own arrogance and apathy.
I'll add greed and apparent cost reduction to this list. People think they win because they reduce time to market, but that time penalty is delegated to users. Developers gain a couple of hours once; we lose the same time every couple of days while waiting for our computers.
I once read a comment by a developer which can be paraphrased as "I won't implement this. It'll take 8 hours. That's too much." I wanted to plant my face into my keyboard full-force, not kidding.
Heck, I tuned/optimized an algorithm for two weeks, which resulted in 2x-3x speedups and enormous memory savings.
We should understand that we don't own the whole machine while running our code.
Haha, I know. Just worded like that to mean that even a P-II can do many things if software is written well enough.
You're welcome. That demo single-handedly threw me down the high-performance computing path. I thought, if making things this efficient is possible, all the code I write will be as optimized as the constraints allow.
Another amazing demo is Elevated [1]. I show its video to someone and ask them to guess the binary and resource sizes. When they hear the real value, they generally can't believe it!
I do feel things in general are more "snappy" at the OS level, but once you get into apps (local or web), things don't feel much better than 30 years ago.
The two big exceptions for me are video and gaming.
I wonder how people who work in CAD, media editing, or other "heavy" workloads etc, feel.
It's usually pretty stable for a while. It's when you get into very complex parts and assemblies that it starts to really show problems. (You'll still see some crashes while learning, though.)
> I wonder how people who work in CAD, media editing, or other "heavy" workloads etc, feel.
I would assume (generally speaking) that CAD and video editing applications are carefully designed for efficiency because it's an important differentiator between different applications in the same class.
In my experience, these applications are some of the most exciting to use, because I feel like I'm actually able to leverage the power of my hardware.
IMO the real issue are bloated desktop apps like Slack, Discord, Spotify, or Claude's TUI, which consume massive amounts of resources without doing much beyond displaying text or streaming audio files.
The problem with CAD is that mechanical engineering is still deeply proprietary, up to and including the software stacks.
There is basically no "open source" in mechanical engineering. So you are relegated to super-heavy legacy applications that coast by through their integrations with other proprietary tools. SolidWorks is much heavier than FreeCAD, but FreeCAD didn't have integrations with simulation tools or CAM software, used a different geometry engine than the industry standard, etc., so when a company tried to turn FreeCAD into a product, they failed.
The only open source one sees in mechanical engineering comes out of academia, which, while interesting, faces the problem that once the research funds dry up or the project finishes, the software is dumped into the open in hard-to-find places and is not developed further.
I remain hopeful about the potential for open source. I believe that to have a truly accessible and innovative industry, a greater level of openness is needed, but it is yet to come.
I think CAD is a good place to start, as it is not a space where lots of hidden and closely guarded tricks are needed, like in Finite Element Analysis. For personal use, FreeCAD is getting there. Snappier than SolidWorks, but the workflow layout needs some work.
I am also looking at projects such as https://zoo.dev. By mapping the design 1-to-1 to code (while keeping a GUI workflow as well), I think they have a real chance of offering enough value that new companies will be interested in trying their approach. It opens the doors to automation, analysis, and generation that, while possible with something like SolidWorks, is cumbersome and not well documented.
Much of the new datacenter capacity is for GPU-based training or inference, which are highly optimized already. But there's plenty of scope for optimizing other, more general workloads with some help from AI. DRAM has become far more expensive and a lot of DRAM use on the server is just plain waste that can be optimized away. Same for high-performance SSDs.
Completely agree. There is a common misunderstanding/misconception in product development, that more features = better product.
I’ve never seen a product/project manager asking themselves: does this feature add any value? Should we remove it?
In agile methodologies we measure the output of the developers. But we don’t care whether that output carries any meaningful value to the end user/business.
> I’ve never seen a product/project manager questioning themselves: does this feature add any value? Should we remove it?
To be fair, it is a hard question to contend with. It is easier to keep users who don't know what they're missing happier than users who lost something they now know they want. Even fixing bugs can sometimes upset users who have come to depend on the bug as a feature.
> In agile methodologies we measure the output of the developers.
No we don't. "Individuals and interactions over processes and tools". You are bound to notice a developer with poor output as you interact with them, but explicitly measure them you will not. Remember, agile is all about removing managers from the picture. Without managers, who is even going to do the measuring?
There are quite a few pre-agile methodologies out there that try to prepare a development team to operate without managers. It is possible you will find measurement in there - measuring to ensure that the people can handle working without managers. Even agile itself recognizes in the 12 principles that it requires a team of special people to be able to handle agile.
I didn’t mean the Agile Manifesto prescribes individual productivity measurement. I meant what often happens in “agile in the wild”: we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success, while the harder question (“did this deliver user/business value?”) is weakly measured or ignored.
Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments. Even in Scrum, you still have roles with accountability, and teams still need some form of prioritization and product decision-making (otherwise you just get activity without direction).
So yeah: agile ideals don’t say “measure dev output.” But many implementations incentivize output/throughput, and that’s the misconception I was pointing at.
> we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success
That sounds more like scrum or something in that wheelhouse, which isn't agile, but what I earlier called pre-agile. They are associated with agile as they are intended to be used as a temporary transitionary tool. One day up and telling your developers "Good news, developers. We fired all the managers. Go nuts!" obviously would be a recipe for disaster. An organization wanting to adopt agile needs to slowly work into it and prove that the people involved can handle it. Not everyone can.
> Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments.
That's the pre-agile step. You don't get rid of managers immediately, you put them to work stepping in when necessary and helping developers learn how to manage without a guiding hand. "Business people" remain involved in agile. Perhaps you were thinking of that instead? Under agile they aren't managers, though, they are partners who work together with the developers.
It’s more about operational resilience and serving customers than product development. If you run an early-WhatsApp-like organisation, just 1 person leaving can create awful problems. Same for serving customers: especially big clients need all kinds of reports and resources that a skeleton organisation cannot provide.
Yeah, that’s a misconception too based on my experience.
I’ve seen many people (even myself) thinking the same: if I quit/something happens to me, there will be no one who knows how this works/how to do this. It turned out the businesses always survived. There was a tiny inconvenience, but other than that: nothing. There is always someone willing to pick up/take over the task in no time.
I mean I agree with you, in theory. But that’s not what I’ve seen in practice.
What you're pointing at is the trade-off between concentration of understanding vs. fragmented understanding across more people.
The former is always preferred in the context of product development, but it poses a key-person risk. Apple in its current form is a representation of this - Steve did enough work to keep the company going for a decade after his death. Now it's sort of lost on where to go next. But on the flip side, look at its market cap today vs. 2000.
Wait, surely adding 10x more agents to my project will speed up development, improve the end product, and make me more productive by that same proportion, right?
I will task a few of them to write a perfectly detailed spec up front, break up the project into actionable chunks, and then manage the other workers into producing, reviewing, and deploying the code. Agents can communicate and cooperate now, and hallucination is a solved problem. What could go wrong?
Meanwhile, I can cook or watch a movie, and occasionally steer them in the right direction. Now I can finally focus on the big picture, instead of getting bogged down by minutiae. My work is so valuable that no AI could ever replace me.
I just built a programming language in a couple of hours, complete with an interpreter, using Claude Code. I know nothing about designing and implementing programming languages: https://github.com/m-o/MoonShot.
Yes, my point is that it was possible to build before AI, and with much less effort than people imagine. College students build an interpreter in less than a couple of weeks anyway, and that probably has more utility.
Consider two scenarios:
1) I try to build an interpreter. I go and read some books, understand the process, and build it in 2 weeks. Result: I have a toy interpreter. I understand said toy interpreter. I learnt how to do it, learnt ideas in the field, and applied my knowledge practically.
2) I try to build an interpreter. I go and ask Claude to do it. It spits out something which works. Result: I have a black-box interpreter. I don't understand said interpreter. I didn't build any skills in building it. It took me less than an hour.
The toy interpreter is useless in both scenarios, but Scenario 1 pays off the 2-week effort, while Scenario 2 is a vanity project.
Yes, but you can combine the two approaches. I.e., when you already know what you are working on, you can build it much faster. Or you build something and learn from it.
I think there will be a lot of slop and a lot of useful stuff.
But also, what I did was just an experiment to see if it is possible; I don't think it is usable, nor do I have any plans to turn it into a new language. And it was done in less than 3 hours total.
So for example, if you want to try new language features - say, total immutability, or nullability as a type - you can build a small language and try to write code in it. Instead of spending weeks on it, you can do it in hours.
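That kind of throwaway feature experiment really can be tiny. Here's a minimal sketch, in Python, of a tree-walking interpreter for a toy language with total immutability (every binding is final, rebinding is an error). All node names (`Let`, `Var`, `Add`) and the `run` helper are invented for illustration, not from the linked repo:

```python
# Toy AST nodes for a minimal language: integer literals,
# single-assignment `let`, variable references, and addition.
class Let:                     # let <name> = <expr>
    def __init__(self, name, expr):
        self.name, self.expr = name, expr

class Var:                     # reference to a bound name
    def __init__(self, name):
        self.name = name

class Add:                     # <left> + <right>
    def __init__(self, left, right):
        self.left, self.right = left, right

def eval_node(node, env):
    """Evaluate one AST node against an environment dict."""
    if isinstance(node, int):
        return node
    if isinstance(node, Var):
        return env[node.name]
    if isinstance(node, Add):
        return eval_node(node.left, env) + eval_node(node.right, env)
    if isinstance(node, Let):
        # Total immutability: a name may be bound exactly once.
        if node.name in env:
            raise RuntimeError(f"'{node.name}' is already bound")
        env[node.name] = eval_node(node.expr, env)
        return env[node.name]
    raise TypeError(f"unknown node: {node!r}")

def run(program):
    """Run a list of statements; return the last value."""
    env, result = {}, None
    for stmt in program:
        result = eval_node(stmt, env)
    return result
```

Usage: `run([Let("x", 2), Let("y", Add(Var("x"), 3)), Add(Var("x"), Var("y"))])` evaluates to 7, while a second `Let("x", ...)` raises an error. Writing small programs against a rule like this is exactly the kind of experiment that used to take weeks of parser and evaluator plumbing.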
Also, I didn't read that book; if there are similarities in the language, it must be an accident, or Claude steering me toward what it knows. And if it's the interpreter design, then it probably is from that book.
And they told us that they don't memorise the material.
> And the initial gut reaction is to resist by organizing labor.
Yeah, as if tech workers have rights similar to union workers. We literally have zero power compared to any previous group of workers. Labour organizing can't even happen in tech, since tech has a large percentage of immigrant labour, who have even fewer rights than citizens.
Also, there is no shared pain like union workers had; we have all been given different incentives, working under different corporations, and without shared pain it's impossible to organize. AI is the first shared pain we've had, and even this has caused no resistance from tech workers. The resistance has come from the users, which is the first good sign. Consumers have shown more ethics than workers, and we have to applaud that. Any resistance to buying chatbot subscriptions has to be celebrated.
Just a regular senior SDE at one of the Mag7. I can tell you everyone at these companies is replaceable within a day. Even within an hour. Even the heads of departments have no power above them; they can be fired on short notice.
This website is literally a place for capitalists (mostly temporarily embarrassed) to brag about how they're going to cheat and scam their way to the top.
Labor organizing is (obviously) banned on HackerNews.
This isn't the place to kvetch about this; you will literally never see a unionization effort on this website because the accounts of the people posting about it will be [flagged] and shadowbanned.
No, it's not. 121M repos were added on GitHub in 2025, and overall they host 630 million now. There has probably been at best a 2x increase in output (mostly trash output), but nowhere near 100x.
> It may not be 100x as was told to me but it's definitely putting the strain on the entire org.
But that's not even in the top 5 strains on GitHub; their main issue is the forced adoption of Azure. I can guarantee you that about 99% of repos are still cold, as in very few pulls and no pushes, and that hasn't changed in 3 months. Storage itself doesn't add much strain to the system if the data is rarely accessed.
I put the blame squarely on GitHub and refuse to believe it's a vendor's fault. It's their fault. They may be forced to use Azure, but that doesn't stop one from being able to deliver a service.
I’ve done platforms on AWS, Azure, and GCP. The blame is not on the cloud provider unless everyone is down.