Hacker News | felipeerias's comments

But there isn’t a person on the other side whom you are reaching through their service. The only communication is between you and the OpenAI server that takes in your input message and produces an output.

I understand that people assume LLMs are private, but there isn't any guarantee that is the case, especially when law enforcement comes knocking.


It only seems that way because much of the data that humans use is not in a format that computers would understand. A toddler learning to talk is engaging their full body.

LLMs are very sensitive to leading questions. A small hint of what the expected answer looks like will tend to produce exactly that answer.

You don't even need a leading direct question. You can easily lead an LLM just by having some statements (even at times single words) in the context window.

As a consequence, LLMs are extremely unlikely to recognize an X-Y problem.

Very large projects are an area where AI tools can really empower developers without replacing them.

It is very useful to be able to ask basic questions about the code that I am working on, without having to read through dozens of other source files. It frees up a lot of time to actually get stuff done.


AI makes so many mistakes, I cannot trust it with telling me the truth about how a large codebase works.

The "Pro" variant of GPT-5 is probably the best model around, and most people are not even aware that it exists. One reason is that as models get more capable, they also get a lot more expensive to run, so this "Pro" is only available on the $200/month Pro plan.

At the same time, more capable models are also a lot more expensive to train.

The key point is that the relationship between all these magnitudes is not linear, so the economics of the whole thing start to look wobbly.

Soon we will probably arrive at a point where these huge training runs must stop, because the performance improvement does not match the huge cost increase, and because the resulting model would be so expensive to run that the market for it would be too small.


>Soon we will probably arrive at a point where these huge training runs must stop, because the performance improvement does not match the huge cost increase, and because the resulting model would be so expensive to run that the market for it would be too small.

I think we're a lot more likely to get to the limit of power and compute available for training a bigger model before we get to the point where improvement stops.


Is this the beginning of the US sovereign fund?


The US owns the dollar, so why would it need a sovereign fund? A sovereign fund is profitable when you invest in a foreign currency.


That's what is being said. The Intel stake is a first step to a US sovereign fund that will include ownership stakes in many corporations.


Does merging the interests of the state and corporations mark a return to good ole fascism?

Or is it just an alternative way of taxing capital? Instead of taxing wealth and capital, just take an equity stake in it?


"Fascism should more properly be called Corporatism because it is the merger of state and corporate power." — Benito Mussolini


"Corporate" in that quote refers to groups consisting of something like entire industries, including employees and employers, like a guild. It doesn't refer to a business legally recognized by the state as the word is commonly used today.


But when you link evidence that Hitler held secret meetings with capitalist business leaders to bankroll him, the whole Mefo bill affair, etc., to the "were Nazis socialist?" argument, you get downvoted.


> "were nazis socialist?" argument you get downvoted

Well, it's complicated... Is China socialist? What about state capitalism in general?


More an alternative way of controlling capital.

If you wanted to tax capital via equity stakes, you'd simply have demanded a much larger stake.

What we're doing is starting down the road of "capitalism with Chinese characteristics". It's a tacit admission that the Chinese model can be effective at achieving a nation's strategic economic goals. (More effective than the model we previously championed.)

The real flip side in all of this is that everyone else sees what we're doing for what it is, and they also implement capitalism with Chinese characteristics. Which in and of itself wouldn't be bad. But what if nations like India or Indonesia turn out to be just flat out better than us at it?

Or, God forbid, the nightmare scenario, which would be nations like Brazil being better than us at it?


10% is not a controlling stake, and the US already controls Intel via regulation.

Most importantly, Intel's market cap is a minuscule $100 billion; it doesn't allow control over a meaningful amount of capital.

Socialism with Chinese characteristics reduces private wealth and curbs the control of oligarchs like Jack Ma. I feel like the US is the opposite, where oligarchs directly control the government already.


Sorry, I believe I've been misunderstood.

I didn't mean the intent is to control Intel's capital.

I meant controlling capital flows. In this particular case, controlling the flow of capital in a strategic sector out to TSMC et al. The idea is that regulation, state backed companies, etc etc all concert to oblige the market to keep those capital flows inside of your jurisdiction.

China does the same. It's extraordinarily difficult to exfiltrate capital from China. One of the only ways to do it is to turn the capital into products and move those products out of China in place of the capital.

I think, long term, the US wants the same sort of environment over here.


Nationalized companies don't necessarily mean socialism or fascism, but fascists did like giving the state fairly tight ownership and control of companies. It depends on how they handle it: if you see Trump loyalists embedded in lots of boards, or top-down instructions given to industry, that might be a sign.


Or is it more socialism, the public taking ownership in the fruits of labor?

Honestly this is pure horseshoe theory where Bernie Sanders and Trump hold the same views.


Don't most sovereign wealth funds invest mainly (or, in Norway's case, entirely) in foreign assets?

Holding significant stakes in domestic companies just seems like light state capitalism.


I'd suspect Trump would model his on Saudi Arabia's PIF rather than Norway's fund. The PIF invests in companies worldwide, including Uber and Blackstone, as well as providing capital for mega-projects like NEOM.

https://en.wikipedia.org/wiki/Public_Investment_Fund#List_of...


This is a very informative article, a good starting point to understand the complexities and nuances of integrating these tools into large software projects.

As one commenter notes, we seem to be heading towards a “don't ask, don't tell policy”. I do find that unfortunate, because there is great potential in sharing solutions and ideas more broadly among experienced developers.


It's a really difficult problem. I read a comment on here the other day about the increased burden on project maintainers that I sympathized with, but I wonder if the solution isn't actually just more emphasis on reputation tools for individual committers. It seems like the metric shouldn't just be "uses AI assistance" vs "doesn't", which as you note just leads to people hiding their workflow, but something more tied to "average quality of PR." I worked in finance briefly and was always really intrigued by the way responsibility worked for the bankers themselves: they could use any tools they wanted to produce results, but it had to be transparent and if someone was wrong the pretty strict burden fell on that IC personally.

The worst case for AI and OSS is a flood of vibe-coded PRs that increase bugs/burden on project maintainers; the best case is that talented but time-starved engineers are more likely to send the occasional high-quality PR as the time investment per PR decreases.


That’s a good point. My concern is that these tools will increase the gap between the trusted contributors to a project and people honestly trying to get their first patch in, because the latter now have to make themselves noticed in a sea of low-quality spam.


I guess I haven't been an OSS maintainer in too long and might be a bit naive, but I still feel like completely incompetent "vibe-coded PRs" can't be that common... I understand why unqualified people try to spin up X vibe-coded app that turns out to be a complete nightmare, but not why they would flood GitHub with random PRs: there just doesn't seem to be much or even any incentive to push out PRs for free, especially if you more or less need a pro Claude subscription.


It might boil down to individual thinking styles, which would explain why people tend to talk past each other in these discussions.


At the moment, AI tools are particularly useful for people who feel comfortable browsing through large amounts of text, intuitively nudging the machine this way and that until arriving at a valuable outcome.

However, that way of working can be exasperating for those who prefer a more deterministic approach, and who may feel frustrated by the sheer amount of slightly incorrect stuff being generated by the machine.


To save you a click: no, the author does not provide any source for the headline.


I think they read this nonsense: https://news.ycombinator.com/item?id=44867312


Possibly, although at least that article tried to justify the $100k with some handwaving about multiple agents working in parallel with minimal human supervision.

Unfortunately, people are swallowing the headline without any critical thinking.


> To save you a click: no, the author does not provide any source for the headline.

TFA kind of did, with the "20-30 dollars per person, per month, across an organisation" quote, though they didn't do the math for you.

But that range of monthly spend only needs roughly 280-420 people to reach the headline figure of $100k per year.

Whether that's good value, who can say.
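For what it's worth, the headcount estimate above is simple arithmetic (assuming, as the thread does, that the $100k headline figure is annual spend):

```python
# Headcount needed for $20-30 per person per month to total $100k/year.
# Both figures come from the thread; nothing else is assumed.
annual_spend = 100_000

def people_needed(monthly_cost_per_person):
    """People whose monthly spend sums to the annual figure."""
    return annual_spend / (monthly_cost_per_person * 12)

print(round(people_needed(30)))  # low end: ~278 people at $30/month
print(round(people_needed(20)))  # high end: ~417 people at $20/month
```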


With a large code base and big context windows, it's easy to blow past the $20-30 allocation in an hour.
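A back-of-the-envelope sketch of how quickly that adds up. The per-token price and context size here are illustrative assumptions, not any vendor's actual rates:

```python
# Illustrative only: assume $3 per million input tokens and a 150k-token
# context resent on every request. Real prices, caching, and context
# sizes vary by vendor and model.
price_per_token = 3 / 1_000_000   # assumed input price, USD per token
context_tokens = 150_000          # large-codebase context per request

cost_per_request = context_tokens * price_per_token  # $0.45 per call
requests_per_hour = 60            # one agentic call per minute

hourly_cost = cost_per_request * requests_per_hour
print(f"${hourly_cost:.2f}/hour")  # $27.00, already inside the $20-30 band
```

Under these assumptions, a single hour of agentic back-and-forth lands squarely in the quoted monthly budget.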


100K, per developer?


Oh, I overlooked that bit.

The math on the "if costs keep rising" bit of the story would take a hefty amount of (the bad type of) oversight to reach that figure per developer, yes.

