
Eh, call me a naysayer, but nobody cared about AI agents because they largely weren't useful until ~GPT-3.5, and Alpaca LoRA 7B is about as useful as a vestigial nipple while requiring state-of-the-art hardware to run locally.

I'm also going to take the opportunity to pooh-pooh this pet peeve:

>More powerful CPUs, such as the Intel Core i5, can generate 17 seconds of audio in the same amount of time.

Oh, really?

...Intel Core i5 brand microprocessors... Introduced in 2009...

About as accurate as saying "users of four-wheeled vehicles" when carriages were still around.

At the very least, you need to provide the node.



The reason I, and I assume many others, really dislike these smart assistants is that you then have creepy companies listening to everything you do.

The concept itself is great, and if I can finally have a good local only assistant, then that's fantastic.

And yes, one of the reasons people are more excited than ever is that the latest versions of ChatGPT are actually really good.


Coauthor of the blog post here. You're right, I said it on the live stream we had today but forgot to mention it in the blog post: the i5 is from a Lenovo ThinkCentre M72e. They're available refurbished for less than the cost of a Pi 4 these days, so it seemed to be a good comparison!


Man, I switched my Home Assistant box from a Raspberry Pi to one of these machines because I wanted the Raspberry Pi to run one of my 3D printers, and it has made such a beautiful difference in the snappiness of everything Home Assistant does.

I super recommend it, if you can afford the extra 10 to 15 watts of power.


The ThinkCentre runs at 65W; a Pi 4 runs at ~8W. There's a bit of an energy crisis (in Europe), so you want to optimise for workloads that can run at lower power levels. Voice alone, in my opinion, does not justify a 65W constant draw.
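To put the 65W vs ~8W gap in money terms, here is a rough annual cost comparison. The 24/7 constant draw and the 0.30 EUR/kWh electricity price are my own assumptions for illustration, not figures from the thread:

```python
# Rough annual energy cost of a constant power draw.
# Assumptions (mine): device runs 24/7 at the quoted wattage,
# European electricity price of 0.30 EUR/kWh.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost_eur(watts, price_per_kwh=0.30):
    """Cost in EUR of a constant draw of `watts` over one year."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * price_per_kwh

print(f"ThinkCentre (65W): {annual_cost_eur(65):.2f} EUR/yr")  # 170.82
print(f"Pi 4 (~8W):        {annual_cost_eur(8):.2f} EUR/yr")   # 21.02
```

At those assumed rates the difference is roughly 150 EUR a year, which is the trade-off being weighed here.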


No personal offense; this is a very common affectation that has just been a personal bugaboo for me lately, because technical documentation keeps containing these non-technical statements.


The article was specifically comparing those CPUs to the Raspberry Pi 4. The point is that they're targeting local-only TTS on low-power hardware.


I read it.

A Pi 4 has a very specific performance-to-power profile. An "i5" has no specificity.

An underclocked i5 could either be a whisper or an inferno compared to this ARM chip.

It also mentioned useful agents like GPT-3.5 as a sort of distant afterthought, which is both paid and highly non-local.


Oh, my mistake. I totally missed your point. Yeah, an i5 covers a huge range. In my mind I was just assuming they meant the slowest i5, but yeah, that doesn't make sense.


I interpreted the whole comparison as expectation management for Pi 4 users: a 2:1 length-to-processing-time ratio is well above what I would have guessed TTS would take, so highlighting this limitation is important.
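That 2:1 figure is easier to reason about as a real-time factor (RTF), the standard TTS metric of processing time over audio duration. A quick sketch, with the example durations chosen by me purely for illustration:

```python
def real_time_factor(processing_seconds, audio_seconds):
    """RTF = processing time / audio duration.

    RTF < 1.0 means synthesis outpaces playback, which is what
    a responsive voice assistant needs.
    """
    return processing_seconds / audio_seconds

# The thread's rough 2:1 length:processing figure for a Pi 4:
# 2 s of audio per 1 s of compute.
print(real_time_factor(1.0, 2.0))  # 0.5 -> twice as fast as real time
```

So a 2:1 ratio is still faster than real time, just with far less headroom than you might expect from desktop-class hardware.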



