Has it been many people's experience that big companies intentionally move experienced engineers off their team to something unrelated, in the name of fungibility? I've certainly seen efforts within a team to make sure there's no single person who's necessary for the team to reach full productivity, and I think most would agree that depending on one person does not make for a resilient team. But many of the best engineers I know have had far more energy invested in getting them to stay than in getting them to leave.
At the end of the day, writing good code is rarely the "end" someone is shooting for; the real goal is more research, more features, more experimentation, etc. Maybe hobby projects and library maintainers are the exceptions.
In my experience, big companies have the biggest incentive to write good code. They have the highest conviction in their bets, and they know with high confidence they will be around in 10 years. One large tech company I worked at had a rule of thumb that all code would need to be maintained for ~7 years, at which point, as the author points out, the entire team may have been replaced. That is precisely the horizon at which the time it takes to write good code is a worthy investment.
I think we take git for granted as software engineers. Software engineering has decades of experience with proposing changes, merging them, staging them, deploying them, rolling them back, and collaborating with other code-writers (engineers and agents).
I'm very interested in what this will look like for outputs from other job functions, and whether we'll end up with a similar framework that makes non-deterministic, often-wrong LLMs easier to work with.
IMO the only concentration OpenAI has is brand. Anthropic & Gemini both have roughly equivalent models. This could change quickly since success compounds, but for now I am actually somewhat surprised at how competitive LLM labs are with each other.
The argument sounds like he believes AI (+ robotics) will take jobs, and that breaking up OpenAI could slow that down.
Historically, the most productive countries are the most prosperous. I think there is a big landscape of local maxima/minima in how healthy and happy a country/economy can be, but shunning new technology has never been the path to quality of life. The only future where the US maintains its relative success involves American leadership in AI and robotics, with humans supporting them.
Almost every advantage we have over every other country, we owe to technology. (Well, that and a couple of oceans, I suppose.) Historically, Americans have been better at taking every possible advantage of automation, computation, and tech in general than anyone else.
> So why don’t we build some damn factories then? Our infrastructure stack is woefully outdated and yet all we seem to be funding is AI and other SaaS.
The short answer: 1) Veto power vested in too many NIMBYs, in various flavors; and 2) Wall Street is rewarded on — and therefore chases — short-term results that goose the stock price.
Ah yes, rust belt towns, midwest towns with huge disgusting meat processing plants and countless shuttered plants they would love re-opened. Notorious NIMBYs.
We don't need new meatpacking plants, we need factories to do high tech manufacturing. Unfortunately small towns in the Midwest aren't nearly as attractive for that.
It's true that US technology is currently at the forefront, but having a single economic pillar is fragile and bound to break; the same thing is happening with the automotive industry in Germany right now.
But a primary problem is financial. There is too much financial wealth desperately looking for lucrative opportunities that do not exist. So everybody follows the hype, creating the largest bubble we have seen in human history. And those who don't invest in AI are largely bound up in private equity or real estate, which extract wealth from everybody without giving anything in return. This makes all other businesses less competitive.
It's a huge bottom-up scheme because the incentives are wrong: there is no transparency, and financial power goes unchecked. This is simply not sustainable, and nothing short of systemic change will fix it.
In the false pursuit of productivity, the West didn't become more efficient. We simply outsourced lower-margin industries to Asia, and we are now faced with lost knowledge, lacking infrastructure, vulnerable supply chains, and people without jobs.
AI cannot and will not change this.
Edit: Not sure why you got downvoted; I think it is a valid question.
A couple of issues. First, "a single economic pillar is fragile": I don't think anyone is suggesting shutting down the rest of the US economy and just doing AI.
"largest bubble we have seen in human history" - the sums involved are not the largest - the railway bubble was larger as percent of GDP. Also it's not even clear it is that much of a bubble - the AI may come through and produce more value than the sums invested. If you can double the workforce by matching humans with AI/robots how much is that worth?
That doesn't mean that technology has to be shoveled out to the public by the truckload. A country can have a productive pharmaceutical industry without being full of drug addicts. Also, there can be technological advances without having them concentrated in a small number of companies.
Not all technology is adopted equally, though. Traditionally we pick and choose which technologies to adopt based on their usefulness and efficiency, rather than adopting absolutely everything because it falls under the banner of "technology".
I am a +1 for productivity. Personal productivity. That the countries with the highest productivity are the most prosperous is fine, but it overlooks accelerating income disparity. I struggle to reconcile that.
The most productive countries are the most prosperous for the owner class. Ask a random citizen of West Virginia how much they've benefited from Nvidia's market share increasing.
Ah yes, the magic job fairy. Because there were jobs in the past, there will be jobs in the future.
There were also skid rows after industrialization in the US. Lots of people didn't make it out of them to the post-WW2 jobs everyone thinks about when they say "industrialization brought good jobs".
There were also flophouses after industrialization, where you could rent your own section of rope to lean on for the night.
But yep, after WW2 there were lots of jobs in the US. When did industrialization happen, though? Why do we ignore all those who didn't make it out of the skid rows and flophouses, and jump straight to an implied 1940s+ job market?
I don't think the person you were replying to was necessarily claiming that many, or perhaps most, Americans won't become permanently unemployed or unemployable.
Hamstringing productivity and technology because of possible job loss, even the loss of nearly all jobs, yes, just isn't a sensible move. Moves certainly need to be made, but the best action is definitely not deindustrialization, degrowth, and Luddism. The dockworkers' unions demanding a ban on port automation are a microcosm of how we will slowly decay as a country.
Even the smart communists understand this. The goal should be wellbeing and prosperity and lack of scarcity for all. The end goal is not "ensure everyone can do these painstaking jobs which non-humans can do exponentially better and faster and cheaper". This is an artificial goal because of vague worries about "purpose". Yes, people need purpose, but placing objects onto locations or pressing buttons on a screen is not the pinnacle of what it means to be a sentient entity.
Interestingly, I've often seen this in Claude outputs, especially on long prompts. I've assumed it's because of Claude's XML-based instruction format, but this does make me wonder how related the two are, and whether Claude may have a harder time using <output> given that it's related to both accessibility and its instructions.
I've never understood the risk trade-off for early-stage employees (employees ~4 through ~10-20).
At this stage, equity packages are often <0.5% vesting over 4 years; founders, on the other hand, may hold more like 30%.
But the odds of success are still quite low - <3% is generous.
In venture-funded companies, I think it's wrong to say that at <10 employees the founders are 60x more responsible for company outcomes (or taking on 60x more risk) than employees, even accounting for what they did to start the company. (A quick expected-value sketch below makes the comparison concrete.)
That being said - I get working hard if you're appropriately rewarded for it. Just less so if it's primarily on behalf of someone else.
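Here is the back-of-envelope expected-value sketch. Every input is a hypothetical assumption layered on the thread's figures, not data: a $300M exit, the "generous" ~3% odds of success, 50% cumulative dilution, and a $60k/year pay cut versus a big-company offer over the 4-year vest:

```python
# Back-of-envelope EV comparison for early-stage equity. All inputs are
# hypothetical assumptions, not data.
EXIT_VALUE = 300e6   # assumed exit valuation on success
P_SUCCESS = 0.03     # "generous" odds of success, per the comment above
DILUTION = 0.5       # assumed fraction of a grant surviving later rounds
YEARS = 4            # vesting period
SALARY_GAP = 60_000  # assumed annual pay cut vs. a big-company job

def expected_payout(equity_fraction: float) -> float:
    """Expected value of an equity stake after dilution and failure risk."""
    return EXIT_VALUE * equity_fraction * DILUTION * P_SUCCESS

employee = expected_payout(0.005)  # ~0.5% grant (early employee)
founder = expected_payout(0.30)    # ~30% stake (founder)
forgone = SALARY_GAP * YEARS       # pay given up over the vesting period

print(f"employee EV: ${employee:,.0f} vs. ${forgone:,.0f} forgone pay")
print(f"founder  EV: ${founder:,.0f} ({founder / employee:.0f}x the employee)")
```

Under these assumptions the employee's expected payout ($22,500) doesn't come close to covering the forgone salary ($240,000), while the founder's ($1.35M) covers it many times over; the 60x ratio in the grants carries straight through to the expected outcomes.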
There is a reason the Bay Area is filled with foreign workers. The investors are well aware they are offering a bad deal. They want (not need) to exploit people with as few options as possible.
I believe religious texts are mostly a coded way of referring to this type of person (a.k.a. demons) and of warning you to stay away from their offers.
This sounds like a specific instance of what I think most people believe about AI: it's good for tedious or well-scoped tasks, but shouldn't handle things that are core to your job.
I think students see this (use AI as a friend/tutor, but don't use it to do your entire homework), and software engineers see it too (use AI to refactor or handle small tasks, but don't use it to design your whole system, or for abstractions that need to be carefully designed).
(Many comments here are about whether performance management is core to a manager's job, which for the record I think it is.)
- AI labs will eat some of the wrappers on top of their APIs, even complex ones like this. There are whole startups trying to build computer use.
- AI is following _some_ scaling law: the best models keep getting better, and previously state-of-the-art models cost a fraction of what they did a couple of years ago. Though it remains to be seen whether it's like Moore's Law or whether incremental improvements get harder and harder to make.
It seems a little silly to pretend there’s a scaling “law” without plotting any points or doing a projection. Without the mathiness, we could instead say that new models keep getting better and we don’t know how long that trend will continue.
"Law" might not be the right word - but there's no denying it's scaling with compute/data/model size. I suppose law happens after continued evidence over years.
Yes, those are scaling laws, but when we see vendors improving their models without increasing model size or training longer, they don't apply. There are apparently other ways to improve performance and we don't know the laws for those.
(Sometimes people track the learning curve for an industry in other ways, though.)
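On "plotting any points or doing a projection" upthread: that exercise is only a few lines. Here is a minimal sketch that fits a Chinchilla-style power law with an irreducible-loss floor; the compute/loss numbers are invented for illustration, not measurements from any real model family:

```python
# Fit a Chinchilla-style scaling law L(C) = a * C^-alpha + floor and
# project it forward. The (compute, loss) points are invented for
# illustration; they are NOT real measurements.
import numpy as np
from scipy.optimize import curve_fit

compute = np.array([1e2, 1e3, 1e4, 1e5, 1e6])    # training compute (arbitrary units)
loss = np.array([3.10, 2.61, 2.24, 1.96, 1.75])  # hypothetical eval loss

def power_law(c, a, alpha, floor):
    """Power-law decay toward an irreducible loss floor."""
    return a * c**-alpha + floor

params, _ = curve_fit(power_law, compute, loss, p0=(5.0, 0.1, 1.0))
a, alpha, floor = params
print(f"fit: L(C) = {a:.2f} * C^-{alpha:.3f} + {floor:.2f}")

# Projection at 10x the largest observed budget; this is exactly the kind
# of extrapolation whose validity the thread is debating.
print(f"projected loss at C=1e7: {power_law(1e7, *params):.2f}")
```

Of course, as noted above, a fit like this only describes one axis (compute); it says nothing about gains from better data, distillation, or post-training, where the "laws" aren't known.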