Hacker News | nutanc's comments

Let's break this down. There is very little that's new in what Anthropic announced. Claude has had skills for a long time. They have added one more layer of abstraction and called it plugins. This mainly comes with a set of integrations.

That's the pitch.

But, what are Claude plugins?

Plugins=Commands+Skills+Integrations.

Commands are specific to Claude Code. But commands and skills are, at their base, nothing but prompts.
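The decomposition above can be sketched in a few lines of Python. This is purely illustrative, my own toy model of the claim, not Anthropic's actual plugin format; all names here are made up:

```python
from dataclasses import dataclass, field

@dataclass
class Plugin:
    """Toy model: a plugin is commands + skills + integrations."""
    commands: dict = field(default_factory=dict)       # name -> prompt text
    skills: dict = field(default_factory=dict)         # name -> prompt text
    integrations: list = field(default_factory=list)   # external SaaS systems

p = Plugin(
    commands={"/review": "Review this diff for bugs and style issues."},
    skills={"pdf-extraction": "When handed a PDF, extract the tables as CSV."},
    integrations=["Slack", "Jira", "Salesforce"],
)

# Note: commands and skills are plain strings (prompts).
# The only piece that isn't a prompt is the integrations list.
```

Which is the whole point: strip away the prompts and what's left is the list of SaaS systems you plug into.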

So what is the main differentiator?

Integrations.

But what are you integrating with?

SaaS companies.

And what is the stock market doing?

Dumping SaaS stocks.

How do they think Claude cowork will work without the integrations? Without the systems of record?

If anything, these SaaS products have become more important. If I were a trader, I would go to the GitHub repo for Claude plugins, look at the default integrations, and buy the stock of those companies.


The Claude cowork and SaaSpocalypse thesis makes no sense. What are Claude plugins? Plugins = Commands + Skills + Integrations. Commands and skills = prompts. So the differentiator? Integrations. Integrating with? SaaS companies. And what is the stock market doing? Dumping SaaS stocks.

It's been 2 years since this started. The effort is to protect creators from the AI machine.

The AI machine is different from the printing press. We think we need more protections than just copyright. Can we have a copyleft for data?

In this age of AI, how do we protect the creators?


ChatGPT has ads now. This is a dangerous domain we are entering, and society is not ready.


Organizations adopting AI is the biggest problem businesses face right now. Even at Ozonetel I face this problem day in and day out. The employees who really use AI to its full potential are a minuscule group; I can count them on my fingertips. We need to overcome this in the right way, or we will face the same problems we faced during the Industrial Revolution.


All they (management) had to do was _not_ shove it down everyone's throats. Now everyone has a gag reflex to it.


Got fed up with too many ARR manipulations and AI startups announcing manipulated revenue numbers (best day × 365 as ARR, etc.). These startups are not only messing up the AI ecosystem with non-standard numbers; they are also messing up the SaaS ecosystem by co-opting SaaS metrics. Now SaaS startups are expected to show the same scale, even though the AI startups have not actually achieved that scale either. So here is what I propose: AI startups should use their own new vocabulary. Since everyone is vibe coding, I suggest VRR, Vibe Revenue Run-rate :)

I have even provided a formula (all scientific and all) and a checklist for the VCs.
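The "best day × 365" trick mentioned above is easy to parody in code. A tongue-in-cheek sketch of VRR (my own illustration, not the formula from the linked post):

```python
def vibe_revenue_run_rate(daily_revenue):
    """Tongue-in-cheek VRR: annualize the single best day of revenue.

    Mirrors the "best day * 365 as ARR" manipulation: instead of a
    sustained run-rate, cherry-pick the peak day and extrapolate.
    """
    return max(daily_revenue) * 365

# A startup with one viral $50k day and otherwise ~$1k/day:
month = [1_000] * 29 + [50_000]
print(vibe_revenue_run_rate(month))  # 50_000 * 365 = 18,250,000 of "VRR"
```

Actual revenue for that month is $79k, but the VRR headline reads $18.25M, which is exactly the kind of number these announcements lean on.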


You can disagree with the commons definition, and that's fine. But the point I wanted to make was about exploitation. The open Internet was built on a code of sharing. Now they are trying to put walled gardens around all that knowledge. Let's remove Marx from the equation if that becomes a bone of contention. But we as a society need to come up with better dialogues to decide how we will treat our creators and how we will deal with the AI copy machine. We cannot expect profit mongers to do the right thing.


I understand your point, and despite what my (hastily typed) critique shows, I find there are valuable kernels of truth in all types of ideas. The walled gardens for AI have more to do with recouping the cost of model training, and there are currently no incentives for open sourcing some types of models. The new tool has knowledge of more than one domain, so its output is different from its source material, more or less. So while creators have a point, they lose it when it turns out the tool is capable of multi-faceted information synthesis. But what's interesting to me is that creators are not precluded from using AI tools to develop more content, and that makes all the difference.

That said, I think it would be better if more models were open sourced, or if FOSS non-profits bought GPUs and started their own model training programs based on the currently released open source models. The commons argument doesn't apply here if there are multiple open source models containing information from hundreds of hours of GPU training that someone else has already done, and which can thus be picked up by any open source organization and trained on additional content of interest. Some orgs have tried that already but didn't gain traction, due to poor marketing and lack of funding (e.g., https://en.wikipedia.org/wiki/EleutherAI). Maybe if there were government subsidies to encourage open source model releases, or non-profit funding for setting up and paying for GPU farms to train models usable by everyone, this type of organizational behavior would become more productive.


The question here is: is social media addictive, and is it harmful? If we have enough evidence, then yes, it should be banned, just as we ban alcohol or cigarettes. We also ban porn for kids, and we don't need any ID proofs to implement that ban. So we have a precedent. It's not perfect, but society knows it's bad; government, family, and schools come together and implement the ban. No need for IDs or handing more control to the government.


Well, not exactly earnings calls in the classical sense, but haven't you heard about these startups announcing how they have scaled to $100 million in 3 months, etc.? Maybe revenue calls every quarter.


I would say that in the AI age almost every business is a startup as per PG's definition [https://paulgraham.com/growth.html].


