
I would call them less hobbyist products and more compute for IoT/edge devices. They aren't made for a datacenter and aren't trying to compete with an H100.

Yes, it has one sixth the performance of an RTX 2060, but it has one five-hundredth the volume. For a specific siloed application, 8 TOPS is plenty. Think image processing, etc.

There are plenty of production use cases where that makes sense and an H100 does not.
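
For what it's worth, driving one of these from Python is only a few lines. Below is a minimal image-classification sketch using tflite_runtime with the Edge TPU delegate; the model and image filenames are placeholders, and any Edge-TPU-compiled quantized classifier would work the same way.

    import numpy as np
    from PIL import Image
    import tflite_runtime.interpreter as tflite

    # Load a model compiled for the Edge TPU and attach the Edge TPU delegate.
    interpreter = tflite.Interpreter(
        model_path="mobilenet_v2_1.0_224_quant_edgetpu.tflite",  # placeholder model
        experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
    interpreter.allocate_tensors()

    # Resize the input image to the model's expected shape.
    input_details = interpreter.get_input_details()[0]
    _, height, width, _ = input_details["shape"]
    image = Image.open("frame.jpg").convert("RGB").resize((width, height))

    # Quantized models take uint8 input; add the batch dimension.
    interpreter.set_tensor(input_details["index"],
                           np.expand_dims(np.asarray(image, dtype=np.uint8), 0))
    interpreter.invoke()

    # Top-1 class index from the output tensor.
    output = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])[0]
    print("top class:", int(np.argmax(output)))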




At this point, these chips aren't anywhere close to the frontier of capability for embedded accelerators, and certainly don't merit an entire M.2 slot in a serious design.

These are a 5-year-old design and it shows.


Fair, the M.2 version linked would not be what you'd use beyond convenient development.

The TPU chip from the M.2 card is also sold individually as a surface-mount module, and that is what would be used in a "serious" design.

https://coral.ai/products/accelerator-module

I did a cursory search and could find zero other products that compete with that module within an order of magnitude on power consumption, price, etc.


At this point, nobody sells modules for this, and I doubt many Coral chips still sell. The current home for an ML accelerator at about 10 TOPS is as a peripheral block on an SoC. Most serious SoCs have one.

In other words, the reason you didn't find a commercial competitor to these things is that the competitor is (nearly) free.

Here's one vendor: https://www.ti.com/technologies/edge-ai.html
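
Practically, the on-SoC NPU is usually reached through a vendor-supplied TensorFlow Lite delegate or runtime rather than a separate driver stack. A minimal sketch, assuming the vendor ships an external TFLite delegate (the library name below is an assumption and varies by vendor; check the SoC's SDK):

    import numpy as np
    import tflite_runtime.interpreter as tflite

    # The NPU is exposed via a vendor-supplied TFLite delegate.
    # "libvendor_npu_delegate.so" is a placeholder name; vendors differ.
    npu_delegate = tflite.load_delegate("libvendor_npu_delegate.so")

    interpreter = tflite.Interpreter(
        model_path="model_int8.tflite",          # assumed quantized model file
        experimental_delegates=[npu_delegate])
    interpreter.allocate_tensors()

    # Ops the delegate supports run on the NPU; the rest fall back to the CPU.
    input_details = interpreter.get_input_details()[0]
    dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
    interpreter.set_tensor(input_details["index"], dummy)
    interpreter.invoke()
    print(interpreter.get_tensor(interpreter.get_output_details()[0]["index"]).shape)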



