> 30 years ago, it was safe to go to vendor XY's page and download their latest version, and it was more or less waterproof.
You _are_ joking, right? I distinctly remember all sorts of dubious freewarez sites with slightly modified installers, 1997-2000 era. And anti-virus was a thing even in MS-DOS.
I am working on a little project in my off hours, and asked a non-hacker (but competent programmer) friend to take a run at exploiting it. Great success: my project was successfully exploited.
The industrialization of exploit generation is here IMO.
I am _much_ more interested in i. building cool software for other things and ii. understanding the underlying models than in building a "better claude code".
Archiving it and publishing it are different things.
More importantly, they may sabotage their mission: if Spotify shuts them down, their existing archives, and especially future archives, may be effectively lost.
I guess I should say more accurately: Their mission is to both archive it and publish it. They seem to be explicitly against copyright, on principle. Which I greatly respect.
Yeah, it seems to only be a problem when you're a human being remixing the culture you grew up with.
Meta can admit to soullessly scraping books they don't own for their for-profit AI datasets [1], and it's not a problem because they're Meta. But if you're an artist? Nope. Sampling in hip hop songs, for example, is in a "complex legal gray area" (translation: "it's illegal but we don't want to admit that out loud") [2].
Fortunately, Spotify does not have that power. Anna's Archive is not based in US or EU jurisdictions. They can make access a bit harder for normal people, but they cannot shut it down.
Anna's Archive is not based in the EU (sorry for not being clear), so EU law has limited power to enforce a ban. In Germany it is already "banned" via ISPs, but only at the DNS level.
But the real servers are hosted in Kazakhstan or Russia, I think, and those countries do not cooperate much with EU courts.
So unless the EU installs a great firewall like China's, they cannot really shut it down.
Presumably the opposing party resides in non-US-or(and? depends on the order of evaluation)-EU territory, but I might be mistaken. "They" refers to both sides in the parent comment.
They are, but archiving without publishing is pointless.
I occasionally wonder how many enormous collections of culture like that of Marion Stokes[1] have been lost because their curators made no effort to realize the value of their collection.
Most archives - the ones in libraries, etc. - are not published, except they are available to qualified people who physically travel there. Most are not even fully indexed - nobody knows all of what's there.
My perspective is compatible with this fact. An archive that approximately nobody can access and/or nobody knows what it contains has no value to society at large, except the potential that it may some day be published.
The good news is I'd guess the number of (nonreligious/nonproprietary) institutionally managed pointless archives is dwindling.
> They are, but archiving without publishing is pointless.
One may collect/archive now (when the data is, well, "available"), and publish later, when copyright expires and the material will likely be harder to obtain.
They stated that they would pass the information on to other archivists and public/private trackers, no? They obviously have backups, since there are multiple users seeding GBs and even TBs of data. Mirrors can be created as well, like TPB.
I think the temptation to use AI is so strong that it will be those who keep learning who will be valuable in the future. Maybe by asking AI to explain/teach instead of asking for the solution directly. Or by not using AI at all.
I haven't seen things work like this in practice, where heavy AI users end up being able to generate a solution, then later grasp it and learn from it with any kind of effectiveness or deep understanding.
It's like reading the solution to a math proof instead of proving it yourself. Or writing a summary of a book compared to reading one. The effort towards seeing the design space and choosing a particular solution doesn't exist; you only see the result, not the other ways it could've been. You don't get a feedback loop to learn from either, since that'll be AI generated too.
It's true there's nothing stopping someone from going back and trying to solve it themselves to get the same kind of learning, but learning the bugfix (or whatever change) by studying it once in place just isn't the same.
And things don't work like that in practice any more than things like "we'll add tests later" end up being followed through with any regularity. If you fix a bug, the next thing for you to do is fix another bug, build another feature, write another doc, etc., not dwell on work that was already 'done'.
Ironically, AI is really good at the adding tests later thing. It can really help round out test coverage for a piece of code and create some reusable stuff that can inspire you to test even more.
I’m not a super heavy AI user, but I’ve vibe coded a few things for the frontend with it. It has helped me understand a little better how you lay out React apps and how the legos that React gives you work. Probably far less than if I had done it from scratch and read a book. But sometimes a working prototype is so much more valuable to a product initiative than learning a programming language that you would be absolutely burning time and value not to vibe code the prototype.
Often it's less about learning from the bugfix itself but the journey. Learning how various pieces of software operate and fit together, learning the tools you tried for investigating and debugging the problem.
> I'm pretty sure AI is going to lead us to a deskilling crash.
That's my thought too. It's going to be a triple whammy:
1. Most developers (Junior and Senior) will be drawn in by the temptation of "let the AI do the work", leading to less experience in the workforce in the long term.
2. Students will be tempted to use AI to do their homework, resulting in new grads who don't know anything. I have observed this happen first hand.
3. AI-generated (slop) code will start to pollute Github and other sources used for future LLM training, resulting in a quality collapse.
I'm hoping that we can avoid the collapse somehow, but I don't see a way to stop it.
On the contrary, being able to access (largely/verifiably) correct solutions to tangible & relevant problems is an extremely great way to learn by example.
It should probably be supplemented with some good old RTFM, but it does get us somewhat beyond the "blind leading the blind" StackOverflow paradigm of most software engineering.
I think seniors know enough to tell whether they need to learn or not. At least that's what I tell myself!
The thing with juniors is: those who are interested in how stuff works now have tools to help them learn in ways we never did.
And then it's the same as before: some hires will care and improve, others won't. I'm sure that many juniors will be happy to just churn out slop, but the stars will be motivated on their own to build deeper understanding.
More than this man. AI is making me re-appreciate part of the Marxist criticism of capitalism. The concept of worker alienation could be easily extended in new forms to the labor situation in an AI-based economy.
FWIW, humans derive a lot of their self-evaluation as people from labor.
Getting everyone to even agree that this is a problem is impossible. I'm open to the universe of solutions, as long as it isn't "Anthropic and OpenAI get another $100 billion dollars while we starve". We can probably start there.
Whether it's capitalism or communism or whatever China has currently - it's all people doing everything to give their own children every unfair advantage and lie about it.
Why did people flee to America from Europe? Because Europe was nepo baby land.
Now America is nepo baby land and very soon China will be nepo baby land.
It's all rather simple. Western 'culture' is convincing everyone the nepo babies running things are actually uber experts because they attended university. Lol.
Yeah, unfortunately Marx was right about people not realizing the problem, too. The proletariat drowns in false consciousness :(
In reality, the US is finally waking up to the fact that the "golden age" of capitalism in the US was built upon the lite socialism of the New Deal, and that the bs economic opinions the average American has subscribed to over the past few decades were pure propaganda. Anyone with half a brain cell could see from miles away that since Reaganomics we've had nothing but a system that leads to gross accumulation at the top, and the top alone. That kind of variable maximization is, in any complex system, a surefire way to produce instability and eventual collapse.
> humans derive a lot of their self-evaluation as people from labor.
We're conditioned to do so, in large part because this kind of work ethic makes exploitation easier. Doesn't mean that's our natural state, or a desirable one for that matter.
"AI-based economy" is too broad a brush to be painting with. From the Marxist perspective, the question you should be asking is: who owns the robots? and who owns the wealth that they generate?
The question of leadership is much larger, more general, and more timeless than the last 15 years. I invite those curious about it to look into the American Army.
> Leadership is the process of influencing people by providing purpose,
> direction, and motivation to accomplish the mission and improve the
> organization.
The American army, the origin of the term "fragging" (to wit, making sure your commanding officer has a close, and final, encounter with a piece of ordnance, such as a frag grenade).
If we are to learn anything from the US military, it is twofold:
1. You can absolutely create a self-reproducing tradition of absolute conformity while retaining ample capacity for local decision-making, if you have enough money and time. (In the case of the US Army, approximately 150 years, and more money than any other organization in the history of man.)
2. Segregating the staff into "officers" and "enlisted" is still gonna get a lot of "officers" killed dead, and even more "objectives" un-taken, because it spreads their incentives too far apart.