There has always been a gap between the experience of solo/small-shop developers and developers who work on teams in large corporate environments. But thanks to open source, for the past twenty years at least, we have mostly all been using the same tools.
But right now, the difference in developer experience is vast between a dev on a team at a business with corporate Copilot or Claude licenses and bosses encouraging them to maximize token usage, vs. a solo dev experimenting once every few months with a consumer-grade chat model.
Meta seemingly has a constant stream of product managers. If LLMs really augment the productivity of engineers, why isn't Meta launching lots more stuff? I mean, there's no harm in at least launching one new thing.
What are all those people doing with the so-called productivity enhancements?
What I'm calling into question is how much generating more code matters if the bottleneck is creativity/imagination for projects.
The only thing I've seen is a really crummy Meta AI thing implemented within WhatsApp.
It's allowed a sludge of internal tools to spin up, and more bloat. The ability to sandbag and overbuild these tools has gotten 2-10x worse.
The only solution I can think of is to drastically cut headcount so productivity returns to prior levels and profitability rises. Big Tech is mostly market-constrained, with not much room to grow beyond the market itself growing.
As for startups, it seems like AI tools have drastically reduced their time to market and accelerated their growth curves.
I'm convinced the most scarce skill on the planet is the ability to a) envision something that needs to exist in the world and b) explain how that thing creates value from a financial perspective.
Most people tend to think they know what they're talking about (e.g. a surface-level understanding of how to think economically) and end up making basket-case decisions, only realising it months later. By that point they refuse to admit defeat and keep going.
"As for startups, seems like AI tools have drastically reduced their time to market and accelerated their growth curves."
What I see in my backyard: coding now takes significantly less time, but it's just coding. Before anyone gets to building, there are squabbles between business and product people. Testing takes just as long as it used to. Since nice-to-haves are easy to add, product people begin to take that for granted, so the product cycles don't get shorter.
Give it time. Right now it's just coding, but procedural AI will come after product development, architecture, and then whatever is left of management.
The best people can not only envision products but also possess great judgement without needing data. For AI to even come close, it would need an insane amount of data that is nuanced and subtle; by the time the AI has obtained all the necessary data and made sense of it, the human is long gone, working on something else.
A neutral hobbyist on a $20 budget will build something and immediately bump into quotas. It's not going to be an enjoyable experience.
A negatively predisposed pro who only dabbles in AI gets to the first disappointment, smiles, thinks "yeah, about what I expected," and quits.
To learn these new tools one can't be stingy. Invest as much as needed in tokens and subscriptions, and maybe most importantly, invest the time. Spend time building various things. Try out various models, not just for coding but as components of the apps being built. For bonus points, meaningfully experiment with local models. I try to avoid discussions with sceptics who have not put at least a few months of effort into learning these tools. It's like discussing driving with my mother-in-law, who has spent maybe 20 hours behind the wheel in her whole life (and is very, very opinionated!).
In my opinion it's a complete waste of time and money to learn something that is gated by a company that might disappear tomorrow.
It's akin to company courses to learn something that is specific to that company. Of course you do them on the job, there is no point in doing them if you don't work there.
Similarly, what's the point of trying 300 different models if any job will decide for you which ones they approve for use, and you are liable to get fired and sued for damages if you let anything else access company intellectual property?
The difference is (if you'll forgive me recruiting a couple of straw men for the purpose of illustrating the spectrum we are talking about here):
Hobbyist solo dev: counting tokens, hitting quotas, trying things on little projects, giving up, and not seeing what the fuss is about.
vs
Corporate developer: increasingly held accountable by their boss for hitting token-usage metrics; handed every new model as soon as it comes out; working with the tools every day on code changes that impact other developers on other teams, all of whom have access to those same tools.
Okay, so just to be clear you're not commenting on productivity? Or what does "changes that impact" mean?
I might be missing a lot of self-evident assumptions here, but I feel like I'm still missing so much context and have no idea what this difference is actually describing.
If you have some objective measure of productivity in mind, feel free to share it, but no that's not what I'm commenting on.
I'm talking more about why threads like this seem to be full of people saying "this has completely changed how corporate development works" and other people saying "I tried it a few times and I don't get the hype."
Technically this is stealing from the people who bet against the Maduro raid happening, and it's cheating because we assume the people taking that side of the bet weren't privy to the planning.
He's only stealing from the US military if the DoD is taking the other side of prop bets on US military operations on Polymarket. Which… I mean, maybe it's a reasonable insurance strategy? The US military bets that it's going to screw up a raid on Venezuela; then either everything goes well and it ends up with a successful operation, or it all goes to hell and it winds up winning a consolation cash prize. Hedging operational success by taking the over on casualty estimates… dark.
… As part of an explicit, openly stated mission to reshape the global political order.
Palantir is indeed in many ways just a software vendor, but we shouldn't downplay that it has a much more explicit agenda than most other companies do in seeking government contracts.
Eh. I mean, the government will do what the government will do with the software it buys. We've just seen that with Anthropic. The US government wouldn't give contracts to Palantir if it seemed like its ideology didn't line up with US aims, and they wouldn't give contracts to other vendors if it seemed like their less ideological marketing meant they weren't aligned with US aims.
1) we’re talking about a UK government contract with Palantir
2) Actually, historically and aspirationally, the US government isn't supposed to be focused on the ideological alignment of its vendors. The current government is anomalous, and we shouldn't normalize this.
This all seems like a reasonable critique, but the idea that the reason for not cleaning up data is so the system can run background behavioral analysis on it seems paranoid. Surely the main reason for not running cleanup until storage is needed is just optimizing for in-the-moment performance.
Apple has repeatedly shown, as in this case, that when police find a way to use their subpoena and coercive powers over Apple to subvert a user's privacy expectations and extract data from an iPhone, Apple treats that as a failing of iOS and is willing to fix the bug.
In this case, they are patching out a data-extraction path that was exploited to access data a user thought had been deleted.