
This is not about memory or training. The LLM training process is not being run on books streamed directly off the internet or from real-time footage of a book.

What these companies are doing is:

1. Obtain a free copy of a work in some way.

2. Store this copy in a format that's amenable to training.

3. Train their models on the stored copy, months or years after step 1 happened.

The illegal part happens in steps 1 and/or 2. Step 3 is perhaps debatable: maybe it's fair to argue that the model is learning in the same sense as a human reading a book, in which case the model itself is not illegally created.

But the training set that the company is storing is full of illegally obtained or at least illegally copied works.

What they're doing before the training step is exactly like building a library by walking into bookshops with a portable copier and copying every book in the shop.



But making copies for yourself, without distributing them, is different from making copies for others. Google downloads copyrighted content from everywhere online, but it doesn't redistribute its scraped content.

Even web browsing implies making copies of copyrighted pages: we can't tell the copyright status of a page without loading it, at which point a copy has already been made in memory.


Making copies of an original you don't own or didn't obtain legally is not fair use. And this kind of personal copying doesn't extend to a corporation making copies to distribute among its employees (it might cover a company making a single copy for archival purposes, though).


> But making copies for yourself, without distributing them,

If this were legal, nobody would be paying for software.



