Hello HN! I want to share something a few friends and I have been working on for a while now: Zeroshot, a web tool that builds image classifiers using text-image models and autolabeling. What does this mean in practice? You can put together an image classifier in about 30 seconds that’s faster and more accurate than CLIP, and that you can deploy yourself however you’d like. It’s open source, commercially licensed, and doesn’t require you to pay anyone per API call.
Here's a 2-minute video that shows it off: https://www.youtube.com/watch?v=S4R1gtmM-Lo
How/why does it work?
We believe that with the rise of foundation vision models, computer vision will fundamentally change. These powerful models will let any dev “compile” a model ahead of time with a subset of the foundation model’s characteristics, using only text and a web tool. The days of teams of MLEs building complex models and pipelines are ending.
Zeroshot works by combining two powerful pre-trained models, CLIP and DINOv2. The web app lets users quickly create training sets via text search. Using pre-cached DINOv2 features, we generate a simple linear model that can be trained and deployed without any fine-tuning. Since you can see what’s going into your training set, you can tune your prompts to get the type of performance or detail you want.
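If you want the gist in code, here's a minimal sketch of the idea. To be clear, this is a simplification rather than our exact pipeline: the DINOv2 variant, the file names, and the choice of linear classifier below are all illustrative.

    # Simplified sketch: embed images with a frozen DINOv2 backbone,
    # then fit a plain linear classifier on the features.
    # No fine-tuning happens anywhere.
    import torch
    from PIL import Image
    from torchvision import transforms
    from sklearn.linear_model import LogisticRegression

    backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
    backbone.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def embed(path):
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return backbone(x).squeeze(0)  # 384-dim feature for ViT-S/14

    # In Zeroshot, (path, label) pairs come from the text-search UI;
    # the file names here are placeholders.
    dataset = [("hotdog_001.jpg", 0), ("hotdog_002.jpg", 0),
               ("pizza_001.jpg", 1), ("pizza_002.jpg", 1)]
    paths, labels = zip(*dataset)

    features = torch.stack([embed(p) for p in paths]).numpy()
    head = LogisticRegression(max_iter=1000).fit(features, labels)

    # Classify a new image (placeholder file name).
    print(head.predict(embed("new_image.jpg").numpy().reshape(1, -1)))

Because the backbone is frozen and its features are pre-cached on our side, "training" is just fitting that linear head, which is why building a classifier takes seconds rather than hours.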
A quick size and latency comparison:
CLIP Small -- Size: 335 MB, Latency: 35 ms
CLIP Large -- Size: 891 MB, Latency: 276 ms
Zeroshot -- Size: 85 MB, Latency: 20 ms
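These numbers will vary with your hardware and runtime. If you want to measure latency yourself, a harness along these lines works; it assumes, purely for illustration, that the classifier has been exported to ONNX (the file name is a placeholder):

    # Rough latency-measurement harness. Assumes an ONNX export of the
    # classifier; your numbers will depend on hardware, batch size,
    # input resolution, and runtime.
    import time
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("zeroshot_classifier.onnx")
    name = session.get_inputs()[0].name
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)

    for _ in range(10):  # warm-up runs, excluded from timing
        session.run(None, {name: x})

    n = 100
    start = time.perf_counter()
    for _ in range(n):
        session.run(None, {name: x})
    print(f"mean latency: {(time.perf_counter() - start) / n * 1e3:.1f} ms")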
What’s next?
We want to see how people use (or would use) the tool before deciding what to do next. On the list: clients for iOS and NodeJS, faster GPU inference via TensorRT, larger Zeroshot models for better accuracy, easier refinement of results, support for bringing your own data lake, and model refinement using GPT-V. We’ve got plenty of ideas.
When it produces a set of images for a given prompt, wouldn't it be better if we could remove some of the images from the selection? Or does it already work this way? Another idea would be to provide a few different kinds of prompts and, based on those, select all the images that matter for a given "class".
Some other things that would be good to know:
1. Can we keep adding items to the classifier, and get newer versions of the classifier with the newly added items?
2. How do we deploy and host these models? Are there any guidelines for deploying this on AWS or GCS for production use cases?