Hacker News | whywhywhywhy's comments

> There will then be many ‘underground’ internets

Only with very old technology. It's possible to force ID validation from silicon to server, or even to require it to unlock the CPU cores, so if it ever comes to what you suggest, that will also happen.


France has Mistral and the energy infrastructure to compete; the rest of the EU has nothing.

If it were about this, why do OpenAI and Anthropic lose their minds when people train off their output or try to scrape their systems?

I actually don't have an issue with training off the mass of everyone's work if the models are open and free to build upon; it's locking them away and then throwing your toys out of the pram when people try to do the same thing that bothers me.


Good question. I actually have a technical answer, believe it or not.

Pre-training is training a model from scratch on cheap bulk data that sets the foundation of the model's capabilities. It produces a base model.

Post-training is training a base model further, using expensive specialized data, direct human input, and elaborate high-compute methods to refine the model's behavior and imbue it with the capabilities that pre-training alone has failed to teach it. It produces the model that's actually deployed.

When people perform distillation attacks, they take an existing base model and try to post-train it using the outputs of another proprietary model.

They're not aiming to imitate the cheap bulk pre-training data - they're aiming to imitate the expensive in-house post-training steps. Ones that the frontier labs have spent a lot of AI-specialized data, compute, labor and hours of R&D work on.

This is probably not "fair use", because it directly tries to take and replicate a frontier lab's competitive edge, but that hasn't been tested in court. And a lot of the companies caught doing this for their own commercial models are in China, so the path to legal recourse is shaky at best. What is on the table is restricting access to the full chain of thought and banning suspected distillation attackers from the inference API. That's a bit like trying to stop a sieve from leaking, but it may at least slow the competitors down.


>Ones that the frontier labs have spent a lot of AI-specialized data, compute, labor and hours of R&D work on.

Granted, that's time and money, but it's an absolutely minuscule amount of human hours compared to the scraped data.

We know this for a fact because of parallelization: the work of hundreds of millions of people vs. the work of 20-100. Even if OpenAI's team worked for the entire lifetimes of the current team, the lifetimes of that team's offspring, and the lifetimes of their offspring, even with several lifetimes they still wouldn't have made a dent in recreating that initial scraped training data.


If it were about this, why do OpenAI and Anthropic lose their minds when people train off their output?

Never thought it tasted fishy, really; anyway, it's more of an acrid umami.

I’ll never understand modding in this day and age. I got it back in the Quake and Half-Life 1 days, when teenagers didn’t have access to commercial game engines, but modding today seems crazy: investing time into building on infrastructure you don’t own, can’t successfully monetize, and that will likely be taken from you if you do.

Instead of just building something you own.


So much has to go right for a new game to see even moderate success. In addition to the programming, you need an art director to give your game a coherent style, 2D texture artists, 3D model and terrain artists, UI designers, music composers, narrative writers, etc, and on top of that you need a compelling universe and concepts for all of these people to work from. And then once that's all done, you need competent marketing so people actually know about this game so they can want to play it.

By comparison, with a pre-existing game much of this is already out of the way and amateurs can get pretty far by just kitbashing existing assets and occasionally mimicking them when creating new assets. Marketing can be as simple as, "this thing you liked, but more, in the way you want it". It's a much smaller lift.


> you can’t successfully monetize

this wasn't the goal of modding.


but there will be a point you'll wish you did.

The flickering of the Photon Paint eye image in CRT mode is so accurate to how it felt at the time: https://amiga.lychesis.net/applications/PhotonPaint.crt.html

>I strongly suspect a lot of their (A/M)RR was coming from extra seats for PMs, developers, etc

Their seat system has always been brutal. It’s extremely easy to have the seats balloon if you’re not careful, and if they’re yearly there’s only a 30-day window each year where you can cancel them, when the banner to do so appears.


All Figma has spent the last 2 years doing is trying to get designers to use their Cursor/Claude Code-style text-to-code app.

Not convinced Figma cares about traditional design craft anymore.


I'm not sure they don't care anymore, so much as they experienced the same pressure every company faced when AI went mainstream.

Had they not included support for it, where would they be now? I'd wager a critical mass would be screaming to high heaven for integrations, seeing as a Figma document is effectively a config file that can be translated to real code.


They never integrated it properly like that, though. They just made a text-to-app thing called Figma Make.

Figma was never needed. It was useful when enterprises allowed people with no coding experience to mandate how the UI should look. It's the PowerPoint of dumb people who wanted a career in tech. Happy to see it dying.

Hard disagree. There's more to UX than pushing pixels around. Usability, accessibility, and capturing the broader customer experience at 40,000 ft isn't a trivial process when you're designing a large product (or suite of products) especially.

These areas obviously tie into engineering very closely, but the thinking that goes into them happens at the design stage, at a lower cost than starting with engineering. AI models suck at getting every facet of this process right, because designers are achieving a balance between branding, usability, standards, taste, and differentiation -- the exact opposite of a model trained to reach for the most average outputs.


My SO is a UX designer and uses Figma. She wanted to try out Claude integration there, but was frustrated by limitations - like why she can't export interactive elements to Figma file format so that they can be edited further.

So I helped her look into it, and I was shocked to find out that it's just a React slop generator, not a Figma file generator. And it's extremely limited at that, too.

Who is Figma targeting with this, exactly? Developers who are interested in React apps will simply use Claude Code, and UX designers don't really care for React apps.


Ultimately, if it’s so close to the finished product, you may as well just do it in Cursor rather than have an extra step.

The design problem to solve post-AI isn’t this; it’s how the space for thinking fits into all of it: getting to the end result more slowly so human ideation can play out. This is just optimized for the first generic output plus tweaks.

