But there isn’t a person on the other side whom you are reaching through their service. The only communication is between you and the OpenAI server that takes in your input message and produces an output.
I understand that people assume LLMs are private, but there isn’t any guarantee that is the case, especially when law enforcement comes knocking.
It only seems that way because much of the data that humans use is not in a format that computers would understand. A toddler learning to talk is engaging their full body.
You don't even need a leading direct question. You can easily lead an LLM just by having some statements (even at times single words) in the context window.
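To make that concrete, here's a minimal sketch using the OpenAI Python SDK (the model name and prompts are just placeholders I made up) comparing the same question with and without a single leading statement earlier in the context:

    # Minimal sketch: ask the same question twice, once with a single
    # leading assertion earlier in the context. Model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    def ask(messages):
        resp = client.chat.completions.create(model="gpt-4o-mini",
                                              messages=messages)
        return resp.choices[0].message.content

    neutral = ask([
        {"role": "user", "content": "Is this codebase well designed?"},
    ])

    # One prior assertion in the context is often enough to tilt the answer.
    led = ask([
        {"role": "user", "content": "This codebase is a mess."},
        {"role": "assistant", "content": "Understood."},
        {"role": "user", "content": "Is this codebase well designed?"},
    ])

    print(neutral)
    print(led)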
Very large projects are an area where AI tools can really empower developers without replacing them.
It is very useful to be able to ask basic questions about the code that I am working on, without having to read through dozens of other source files. It frees up a lot of time to actually get stuff done.
The "Pro" variant of GTP-5 is probably the best model around and most people are not even aware that it exists. One reason is that as models get more capable, they also get a lot more expensive to run so this "Pro" is only available at the $200/month pro plan.
At the same time, more capable models are also a lot more expensive to train.
The key point is that the relationship between all these quantities is not linear, so the economics of the whole thing start to look wobbly.
Soon we will probably arrive at a point where these huge training runs must stop, because the performance improvement does not match the huge cost increase, and because the resulting model would be so expensive to run that the market for it would be too small.
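To put toy numbers on it (all purely hypothetical): if each generation costs 10x more to train but subscribers will only pay about 2x more for it, the break-even user base grows 5x per generation:

    # Toy numbers, entirely made up, to show why nonlinear scaling bites:
    # each generation costs 10x more to train, but the price users will
    # pay only doubles, so the break-even subscriber base explodes.
    train_cost = 100e6      # assumed $100M to train generation 1
    price_per_year = 2_400  # assumed $200/month plan

    for gen in range(1, 5):
        breakeven_users = train_cost / price_per_year
        print(f"gen {gen}: train ${train_cost / 1e6:,.0f}M, "
              f"price ${price_per_year}/yr, "
              f"break-even users {breakeven_users:,.0f}")
        train_cost *= 10       # training cost scales fast
        price_per_year *= 2    # willingness to pay scales slowly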
>Soon we will probably arrive at a point where these huge training runs must stop, because the performance improvement does not match the huge cost increase, and because the resulting model would be so expensive to run that the market for it would be too small.
I think we're a lot more likely to get to the limit of power and compute available for training a bigger model before we get to the point where improvement stops.
"Corporate" in that quote refers to groups consisting of something like entire industries, including employees and employers, like a guild. It doesn't refer to a business legally recognized by the state as the word is commonly used today.
But when you link evidence that Hitler had secret meetings with capitalist business leaders to bankroll him, the entire Mefo bill nonsense, etc., to the "were Nazis socialist?" argument, you get downvoted.
If you wanted to tax capital via equity stakes, you'd simply have demanded a much larger stake.
What we're doing is starting down the road of "capitalism with Chinese characteristics". It's a tacit admission that the Chinese model can be effective at achieving a nation's strategic economic goals. (More effective than the model we previously championed.)
The real flip side in all of this is that everyone else sees what we're doing for what it is, and they also implement capitalism with Chinese characteristics. Which in and of itself wouldn't be bad. But what if nations like India or Indonesia turn out to be just flat out better than us at it?
Or, God forbid, the nightmare scenario, which would be nations like Brazil being better than us at it?
10% is not a controlling stake, and the US already controls Intel via regulation.
Most importantly, Intel's market cap is a minuscule $100 billion; it doesn't allow control over a meaningful amount of capital.
Socialism with Chinese characteristics reduces private wealth and curbs the control of oligarchs like Jack Ma. I feel like the US is the opposite: oligarchs already directly control the government.
I didn't mean the intent is to control Intel's capital.
I meant controlling capital flows. In this particular case, controlling the flow of capital in a strategic sector out to TSMC et al. The idea is that regulation, state backed companies, etc etc all concert to oblige the market to keep those capital flows inside of your jurisdiction.
China does the same. It's extraordinarily difficult to exfiltrate capital from China. One of the only ways to do it is to turn the capital into products and exfiltrate those products out of China in place of the capital.
I think, long term, the US wants the same sort of environment over here.
Nationalized companies don't necessarily mean socialism or fascism, but fascists did like giving the state fairly tight ownership and control of companies. It depends on how they handle it: if you see Trump loyalists embedded in lots of boards, or top-down instructions given to industry, that might be a sign.
I'd suspect Trump would model his fund on Saudi Arabia's PIF rather than Norway's fund. The PIF invests in companies worldwide, including Uber and Blackstone, as well as providing capital for mega-projects like NEOM.
This is a very informative article, a good starting point to understand the complexities and nuances of integrating these tools into large software projects.
As one commenter notes, we seem to be heading towards a “don't ask, don't tell policy”. I do find that unfortunate, because there is great potential in sharing solutions and ideas more broadly among experienced developers.
It's a really difficult problem. I read a comment on here the other day about the increased burden on project maintainers that I sympathized with, but I wonder if the solution isn't actually just more emphasis on reputation tools for individual committers. It seems like the metric shouldn't just be "uses AI assistance" vs. "doesn't", which as you note just leads to people hiding their workflow, but something more tied to "average quality of PR." I worked in finance briefly and was always really intrigued by the way responsibility worked for the bankers themselves: they could use any tools they wanted to produce results, but the work had to be transparent, and if something went wrong, a pretty strict burden fell on that IC personally.
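The reputation idea could be as simple as tracking an average review outcome per contributor rather than an "AI or not" flag. A minimal sketch (the data, field names, and scoring values here are all made up for illustration):

    # Hypothetical sketch of a per-contributor reputation metric:
    # score PRs by review outcome, not by whether AI was used.
    from collections import defaultdict

    # (author, quality) pairs; quality could be e.g. merged = 1.0,
    # merged after heavy rework = 0.5, closed as noise = 0.0
    pr_outcomes = [
        ("alice", 1.0), ("alice", 1.0), ("bob", 0.0),
        ("bob", 0.5), ("bob", 0.0), ("alice", 0.5),
    ]

    totals = defaultdict(lambda: [0.0, 0])
    for author, quality in pr_outcomes:
        totals[author][0] += quality
        totals[author][1] += 1

    for author, (score, n) in totals.items():
        print(f"{author}: avg PR quality {score / n:.2f} over {n} PRs")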
The worst case for AI and OSS is a flood of vibe-coded PRs that increase bugs/burden on project maintainers; the best case is that talented but time-starved engineers are more likely to send the occasional high-quality PR as the time investment per PR decreases.
That’s a good point. My concern is that these tools will increase the gap between the trusted contributors to a project and people honestly trying to get their first patch in, because the latter now have to make themselves noticed in a sea of low-quality spam.
I guess it's been too long since I was an OSS maintainer and I might be a bit naive, but I still feel like completely incompetent "vibe-coded PRs" can't be that common... I understand why unqualified people try to spin up some vibe-coded app that turns out to be a complete nightmare, but not why they would flood GitHub with random PRs: there just doesn't seem to be much, or even any, incentive to push out PRs for free, especially if you more or less need a pro Claude subscription.
At the moment, AI tools are particularly useful for people who feel comfortable browsing through large amounts of text, intuitively nudging the machine this way and that until arriving at a valuable outcome.
However, that way of working can be exasperating for those who prefer a more deterministic approach, and who may feel frustrated by the sheer amount of slightly incorrect stuff being generated by the machine.
Possibly, although at least that article tried to justify the $100k with some handwaving about multiple agents working in parallel with minimal human supervision.
Unfortunately, people are swallowing the headline without any critical thinking.
The math on the "if costs keep rising" bit of the story would require a hefty amount of (the bad kind of) oversight to get to that figure per developer, yes.