Two of my Claude accounts got blocked without explanation, despite very normal use (one of them was a paid account, for which I got a refund after the block).
I loved using Claude; I think it did a better job than other LLMs. My attempts to appeal got no response.
I've seen this kind of comment a few times. I was considering building some tools on top of Claude, but this strongly puts me off. I don't want to invest the engineering time only to have the account randomly blocked with no warning and no explanation.
Just as you shouldn't lock yourself into a specific cloud provider, you shouldn't lock yourself into an LLM-as-a-service provider.
You could build your tools with generic access to the best LLMs on the market. Today Claude is great, but tomorrow Gemini, Mistral, ChatGPT, Command-R, or some wizard-dolphin-mixtral-carrots-merge-v3 could be a decent replacement.
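To make that concrete, here's a minimal sketch of what such a generic layer could look like, assuming the official anthropic and openai Python SDKs; the complete() interface, the model names, and the summarize() helper are just illustrative, not any particular library's API:

    # Minimal provider-agnostic LLM wrapper (sketch).
    # Assumes the official `anthropic` and `openai` Python SDKs are installed
    # and API keys are set via the usual environment variables.
    from typing import Protocol

    import anthropic
    import openai


    class LLMClient(Protocol):
        def complete(self, prompt: str) -> str: ...


    class ClaudeClient:
        def __init__(self, model: str = "claude-3-opus-20240229"):
            self.client = anthropic.Anthropic()
            self.model = model

        def complete(self, prompt: str) -> str:
            resp = self.client.messages.create(
                model=self.model,
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.content[0].text


    class OpenAIClient:
        def __init__(self, model: str = "gpt-4o"):
            self.client = openai.OpenAI()
            self.model = model

        def complete(self, prompt: str) -> str:
            resp = self.client.chat.completions.create(
                model=self.model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content


    def summarize(llm: LLMClient, text: str) -> str:
        # Application code depends only on the LLMClient protocol,
        # so swapping providers is a one-line change at the call site.
        return llm.complete(f"Summarize in one sentence:\n\n{text}")

The point is just that only the thin adapter classes know which vendor they talk to; everything downstream takes an LLMClient.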
Totally. Spending time developing the muscles to build more advanced RAG workflows is first and foremost a data modeling and data engineering challenge. Developing hierarchical data modeling expertise, taking advantage of the structure and flexibility of the document model, and handling chunking and real-time memory requirements, combined with the power of vector embeddings and advanced query and filter capabilities, is a skill that will last, irrespective of the popular model, service provider, cloud platform, or framework du jour.
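As a toy illustration of that provider-agnostic core, here's a rough chunking-plus-retrieval sketch; the embed() function is a placeholder for whatever embedding model or service you plug in:

    # Rough sketch of provider-agnostic chunking + vector retrieval.
    # `embed` is a stand-in for whatever embedding model/service you use.
    from typing import Callable, List, Tuple

    import numpy as np


    def chunk(text: str, size: int = 500, overlap: int = 50) -> List[str]:
        # Fixed-size character chunks with overlap; real pipelines usually
        # chunk along document structure (sections, paragraphs) instead.
        step = size - overlap
        return [text[i:i + size] for i in range(0, len(text), step)]


    def top_k(query: str,
              chunks: List[str],
              embed: Callable[[str], np.ndarray],
              k: int = 3) -> List[Tuple[float, str]]:
        # Score every chunk by cosine similarity against the query embedding.
        q = embed(query)
        scored = []
        for c in chunks:
            v = embed(c)
            score = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            scored.append((score, c))
        return sorted(scored, reverse=True)[:k]

Swap the embed() callable and the vector store, and the rest of the workflow carries over between providers.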
I’ve been using Claude opus pretty heavily since it came out. In the last few days I’ve built a Mac app that can remove/replace image backgrounds in bulk with CoreML, despite not knowing much about Swift development. It works startlingly well - better than the model shipped with the system.
The notebooks are OK, but I would be much more interested in seeing an actual iteration engine that demonstrates tool use at scale, and how to set up the processes around it.
Right now it's all "here's our array of context data", "here's our tool function", and a linear flow through the process. Very little error handling. No consideration of all the failure modes involved in reaching out to actual data sources: network errors, timeouts, retries (oh yeah, because there will be a lot of retries). Very little to nothing on validating data schemas, except one Pydantic example that doesn't even account for retries very well.
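For reference, even a minimal tool function that takes those failure modes seriously ends up looking something like this sketch; the endpoint, the Order schema, and the backoff policy are made up for illustration:

    # Sketch of a tool function with retries, timeouts, and schema validation.
    # `fetch_orders`, the URL, and the Order schema are hypothetical.
    import time

    import requests
    from pydantic import BaseModel, ValidationError


    class Order(BaseModel):
        id: str
        total_cents: int


    def fetch_orders(customer_id: str, retries: int = 3, timeout: float = 5.0) -> list[Order]:
        last_err: Exception | None = None
        for attempt in range(retries):
            try:
                resp = requests.get(
                    f"https://example.internal/api/orders/{customer_id}",
                    timeout=timeout,
                )
                resp.raise_for_status()
                # Validate the payload before handing it back to the model;
                # a ValidationError here is just another reason to retry or bail.
                return [Order.model_validate(item) for item in resp.json()]
            except (requests.RequestException, ValidationError) as err:
                last_err = err
                time.sleep(2 ** attempt)  # crude exponential backoff
        raise RuntimeError(f"fetch_orders failed after {retries} attempts") from last_err

And that's before you get into partial results, rate limits, or telling the model what went wrong so it can recover.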
I agree OpenAI models are by far the best, regardless of papers claiming otherwise based on some strange tests and metrics.
We are also using SageMaker-hosted open-source models, fine-tuned for specific use cases. But even then, they can't really compete with standard OpenAI models.
It may be a good resource, but as someone living in the EU, I am "protected" from using all the shiny new AI tools to increase my productivity. And then they wonder why the EU is so behind in innovation and startups.
Claude's lawyers should find a way, because they are lagging behind all other providers in this department. It's such a pity to close the model off to a large number of countries. They probably expect to get sued for copyright infringement or something.
They went a bit overboard. I noticed yesterday that they've turned the knob up even further. I gave it some lyrics (written by ChatGPT!) and asked it to rephrase certain things. It refused to comply because of possible copyright infringement, and it didn't budge even with a prompt stating the text isn't copyrighted. They've set themselves on a march to death that way, and I can't blame them.
I got a "Content Filtered" error parsing a receipt that included a product titled "Damen Sportbekleidung" (women's sportswear). It seems ChatGPT is not much smarter.
Sure, agreed - but it is a "march to compliance". To creative and free-thinking people this feels like death, but it isn't the same thing. The leadership at Anthropic have made it very clear that they intend to partner with Amazon and other ultra-legal corporate aggressors, to take and run the corporate AI services space, and to comply with every legal ruling (no public mention of defense work? Mr. Schmidt, with Schmidt Futures, has shown the way there).
Anthropic has the playing field open to build an empire in the classical sense.
Yes, you can. I had no problem adding balance with a credit card issued in Finland - no need for a VPN or anything. I've been using the API for three weeks now.
> I am "protected" from using all the shiny new AI tools to increase my productivity.
Can you elaborate? What’s happening in Europe that is “protecting” you from AI? America is clamoring for protections; maybe they should avoid whatever mistakes have been made in Europe.
What makes you think so?
Isn't it more "AI" providers being afraid of legal consequences (e.g. due to copyright or false output), so they protect themselves by not offering certain things in the EU?
Are you able to access Claude 3 via AWS Bedrock or GCP Vertex AI? I haven't used Vertex AI, but I know that several US regions have Claude 3 access through Bedrock.
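If you have an AWS account with Bedrock model access enabled, a quick way to check is something like this; the region and model ID are assumptions and depend on what's available to your account:

    # Rough check of Claude 3 access via AWS Bedrock (boto3).
    # Region and model ID depend on what Bedrock has enabled for your account.
    import json

    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": "Say hello."}],
        }),
    )

    result = json.loads(response["body"].read())
    print(result["content"][0]["text"])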
They don't increase your productivity any more than non-AI tools do; at most they make you feel as if they do. But you are probably just looking for a scapegoat anyway.
> I loved using Claude; I think it did a better job than other LLMs. My attempts to appeal got no response.
You think you can help me figure it out?