
At this point I wish it were against the rules to accuse commenters of being LLMs or to complain that articles were written by them. It's creating so much noise that useful commentary is hard to find.

I don't see any signs of the parent comment being written by an LLM, other than that it's detailed and well-written.


If they can't distinguish LLM text, then why should they care?

Anti-AI people like to bring up hallucination as if everything AI generates is false.

I can write pages of text, all my own content, and then use AI to improve my writing and clarity. Then I review and edit. It might have some LLM markers in there, which I sometimes remove because they're distracting. The final, AI-assisted writing is easier to read and better organized, but all the ideas are mine. Hallucinations are not remotely a problem in this case.


If you can’t distinguish between fake images and real ones why should you care?

That depends on the purpose of the image.

If it's used to create a false narrative (like a deep fake), sure, you should care. But if it's used as an alternative to a stock photo, or as an easy way to make an infographic then no, I don't think you should care.


> you should care

Why should I care? The world is full of false narratives.

How can I have the bandwidth to care about everything all of the time?

I swear that more than half of the complaining I find here comes from privileged people bikeshedding over inane topics, people who have never had to really worry about serious survival-level issues (how am I going to eat today?) in their lives.


> Why should I care? The world is full of false narratives.

Why submit to the proliferation of false narratives? Its goal is to make everyone's lives worse, including yours.

The mere fact that the USA has a staggering number of school massacres doesn't mean we should all stop caring.


And when an LLM starts hallucinating, and I emphasize “when,” is that not the same issue as creating a false narrative?

"But here's the thing that gets missed in the narrative:"

That's a pretty big clue that it is at least LLM-assisted. That said, I don't mind. The article has substance, and other than a few LLM markers like that, I think it's well-written.


Most things I use it for could be done without it; it's just more convenient and entertaining.

I had it make a daily aviation weather brief for a private airpark. It uses METAR, outdoor IP cameras I have (including one that looks at a windsock and another that looks at the runway surface), and a local weather station. It sends me a text message with all of that information aggregated into something like "It's going to be really windy this afternoon, visibility is high, but there is ice on the runway surface."

The thing is, all I had to do was point it at a few endpoints, and it wrote the entire script and set up a cron job for me. I just gave it a few paragraphs of instructions and it wrote, then deployed, the rest.
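The aggregation step of a brief like that can be sketched in a few lines. This is illustrative only, not the script the bot wrote: the endpoint shape follows aviationweather.gov's data API, but the station id and the JSON field names (`wspd`, `visib`) are assumptions, and the camera/weather-station feeds are left out.

```python
import json
import urllib.request

# Assumed endpoint shape for the aviationweather.gov data API.
METAR_URL = "https://aviationweather.gov/api/data/metar?ids={station}&format=json"

def fetch_metar(station: str) -> dict:
    """Fetch the latest METAR record for a station (hypothetical station id)."""
    with urllib.request.urlopen(METAR_URL.format(station=station)) as resp:
        return json.load(resp)[0]

def summarize(metar: dict) -> str:
    """Boil one METAR record down to the fields a daily brief cares about."""
    wind = metar.get("wspd") or 0
    visib = metar.get("visib", "unknown")
    parts = [f"wind {wind} kt", f"visibility {visib}"]
    if wind >= 20:
        parts.append("expect gusty conditions")
    return ", ".join(parts)
```

A cron job would then call `summarize(fetch_metar(...))`, fold in the camera snapshots, and hand the result to whatever sends the text message.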

The other day, there was a post here about a new TTS model. I wanted to try it out, so I gave my claw the GitHub URL, and it pulled everything down and had it running without any effort on my part. Then it sent me a few audio messages on Discord to try.

When I'm away from home, I can text it "what's going on at home" and it will turn on the lights around the house, grab a frame from each camera, turn the lights back off, and give me a quick report. I didn't have to do any work other than tell it I wanted that skill.

I also have a group chat with some friends on Signal that's hilarious. It roasts us, gives us reminders, lets us know about books we might be interested in, that sort of thing. It's really fun.


That's an interesting take, but I'm not sure 'easy to write' is the only advantage.

There is also a really good ecosystem of libraries, especially for scientific computing. My experience has been that Claude can write good C++ code, but it's not great at optimization. So curated Python code can often be faster than an AI's reimplementation of an algorithm in C++.


Yeah, ML is one of the only spaces I could see it living on, but even then, doing it in C++ isn't that much harder for an LLM.


What I love about OpenClaw is that I was able to send it a message on Discord with just this GitHub URL, and within a few minutes it started sending me voice messages using it. It also gave me a bunch of different benchmarks and sample audio.

I'm impressed with the quality given the size. I don't love the voices, but they're not bad. Running on an Intel 9700 CPU, it's about 1.5x realtime using the 80M model. It wasn't any faster running on a 3080 GPU, though.


Yeah, we'll add some more professional-sounding voices and also support for DIY custom voices. We tried to add more anime/cartoonish voices to showcase the expressivity.

Regarding running on the 3080 GPU, can you share more details on GitHub issues, Discord, or email? It should be blazing fast on that. I'll add an example to run the model on GPU too.


I wonder if it's possible to guide the intonation in any way.


Oh that is a good use case. Don't connect to email and all that insecure stuff. But as a sandbox for "try this out and deploy a demo". Got me thinking!


I'm jealous. It took me far longer and much more frustration to get it to run.

Had to get the right Python version and make sure it didn't break anything with the previous Python version. A friend suggested using Docker, so I started down that path until I realized I'd probably have to set the whole thing up there myself anyway. Eventually I got it to run, and I think I didn't break anything else.

I hate Python so much.


Nowadays these frustrations shouldn't be a thing anymore. If the author had used uv, the script would be able to install its own dependencies and just work.
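For anyone who hasn't seen it: uv honors PEP 723 inline script metadata, so a script can declare its own Python version and dependencies in a comment header. A minimal sketch (the `requests` dependency and the URL are just stand-ins):

```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["requests"]
# ///
# When launched with `uv run brief.py`, uv reads the header above, resolves
# the pinned Python and dependencies into a cached environment, then runs
# the script inside it. No manual venv or pip step is needed.
import requests

resp = requests.get("https://example.com")
print(resp.status_code)
```

Running it with plain `python` ignores the header (it's just comments), which is why the author shipping a uv-aware script costs nothing for everyone else.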


Yeah, let me add uv and conda support to make it easier.


Thanks! I asked my bot to make me a plugin for it and it one-shotted it; the resulting script was ~20 lines. Very nice!


One of the most responsive developers I’ve ever seen, kudos


Why don't you use some kind of environment, Conda or something like that?


I used uv, which should have generated a stable environment. No dice: there's a bug in spaCy.

I suspect success is highly variable on macOS vs. Linux; the spaCy bug only shows up in newer Pythons (3.14 or later), which Linux will have.


Thanks for pointing these errors out. We're looking into this and will help fix it.


Even the built-in venv would've solved most of his issues. But I agree with him that Python documentation could be better, or have a more unified system in place. I feel like every other how-to doc I read on setting up something in Python uses a different environment-containment product.
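For reference, the stdlib-only version of that workflow needs no extra tooling at all (POSIX shell assumed):

```shell
# Stdlib-only isolation: venv creates a self-contained interpreter directory.
python3 -m venv .venv
. .venv/bin/activate                        # on Windows: .venv\Scripts\activate
python -c 'import sys; print(sys.prefix)'   # now resolves inside .venv
deactivate
```

Anything installed with pip while the environment is active lands in `.venv`, not in the system Python, and deleting the directory removes everything.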


Conda was fantastic up to some point last year, and since then I've had quite a few unresolvable version issues with it. It is really annoying, especially when you're tying multiple things together and each requires its own set of mutually exclusive specific library versions. The latest like that was GNU Radio and some out-of-tree stuff at the same time as a Bluetooth library. High drama. I eventually gave up and rewrote the whole thing in a different language, and it took less time than I had spent trying to get the Python solution duct-taped together.

I should learn to give up quicker.


Because I need a new version of Python very rarely (years go by), I don't remember all the arcane incantations to set everything up.

I did eventually do that, though, and I'm pretty sure I had to mess about with installing and uninstalling torch.

I dread using anything made in Python because of this: it's always annoying and never just works when the Python version is incompatible (otherwise it's fine).


I don't know, I'm pretty happy with Conda. I just create a new environment and install into it. It normally works.

Even if you have to install using pip, it only affects the active environment.

Maybe I'm only trying simple things.
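The workflow described above, for completeness (env and package names here are hypothetical):

```shell
# One isolated environment per project; nothing touches the base install.
conda create -n tts-demo python=3.12 -y
conda activate tts-demo
pip install soundfile            # goes into tts-demo only, not the base env
conda deactivate
conda env remove -n tts-demo -y  # throw the whole thing away when done
```

For simple cases this really is all there is to it; the trouble the grandparent hit starts when several packages pin mutually exclusive versions inside one env.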


Two words: Nix Flakes.


Damn, really sorry for the inconvenience. It looks like some folks are having bad env issues; we're working on fixing this.


It's absolutely not your fault; it's a skill issue and a compatibility issue on my end and/or Python's. You guys are doing amazing work.


I'd love to use something other than ROS2, if for no other reason than to get rid of the dependency hell and the convoluted build system.

But there are a lot of nodes and drivers out there for ROS already. It's a chicken-and-egg thing: people aren't going to write drivers unless there are enough users, and it's hard to get users without drivers.

It looks like their business model is to give away the OS and make money with Foxglove-like tools. It's not a bad idea, but adoption will be an uphill battle. And since they aren't open source yet, I certainly wouldn't start using it on a project until it is.


ROS is, in my opinion, dying on the industry front.

* It is a dependency hell

* It is resource-heavy on embedded systems

* It is too slow for real-time, high speed control loops

* Huge chunks of it are maintained by hobbyists and far behind the state of the art (e.g. the entire navigation stack)

* As robotics moves toward end-to-end AI systems, stuff needs to stay on GPU memory, not shuttled back and forth across processes through a networking stack.

* Decentralized messaging was the wrong call. A bunch of nodes running on a robot doesn't need a decentralized infrastructure. This isn't Bitcoin. Robots talking to each other, maybe, but not pieces of code on the same robot.


Can you say more about the nav stack? I thought Nav2 was considered one of the better, more mature packages in ROS2, but it's not my area of expertise.

> As robotics moves toward end-to-end AI systems, stuff needs to stay on GPU memory, not shuttled back and forth across processes through a networking stack.

NVIDIA is actually addressing this with NITROS: https://nvidia-isaac-ros.github.io/concepts/nitros/index.htm...

And ROS native buffers: https://discourse.openrobotics.org/t/update-on-ros-native-bu...



Very interesting. There is nothing that would prevent PeppyOS nodes from running on the GPU. The messaging tech behind PeppyOS is Zenoh (it's swappable), and it can run on embedded systems (PeppyOS nodes will also be compatible with embedded targets in the future). That being said, at the moment the messaging system runs exclusively on the CPU.


What alternatives exist that can replace ROS? I imagine not all companies are using ROS; however, I'm not exactly in that field, so I don't know. I always thought the quality of that code was mediocre at best.


Most companies in production are inventing their own purpose-built systems and not open-sourcing them. High-speed control loops usually use some form of real-time OS, and AI-forward robots are starting to use fused CUDA kernels.


We're working hard to get ROS out of dependency hell - https://prefix.dev/blog/reproducible-package-management-for-...

Would love to hear your thoughts.


Fun fact: we've been using pixi to compile everything Python-related internally. In fact, PeppyOS was even started with pixi as a base layer (but we pivoted away from it since the project is in Rust and Cargo is the de facto toolchain). We support uv by default for Python (since it's the most used these days), but pixi is already supported; see the note on this page: https://docs.peppy.bot/guides/first_node/


Hey, good points. We have plans to create a ROS2 bridge in the near future. We definitely won't be able to catch up with the huge ecosystem that ROS2 has created over the years, but we will rewrite the annoying parts, that's for sure.


I recently filed a lawsuit in federal court, but because of the nature of the suit (an adversary proceeding in a bankruptcy case, wanting to cut my losses knowing collection is going to be the problem), I decided to do it pro se.

I've used a lot of AI to do this, along with a lot of my own research: reading documents from similar cases, verifying citations, etc. So far things are going well; I've won all the motions so far. But I'm using critical thinking and carefully reviewing everything.

The real failure with slop filings is procedural, not technological. A competent attorney should never submit a brief built on case law they haven't verified. Legal practice has always relied on reading the sources, confirming relevance, and taking responsibility for interpretation.


There is a way to trigger a script when a budget is hit, but they don't make it easy: you set up a billing notification that triggers a script, which can disable resources (like APIs) automatically.

https://docs.cloud.google.com/billing/docs/how-to/control-us...

Google Cloud does make it easy to set up soft budget alerts via email, though, something I had to use a third-party service for with AWS.
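The linked pattern boils down to a budget notification published to Pub/Sub and a small function reacting to it. A sketch of the notification-handling half, assuming the documented message fields (`costAmount`, `budgetAmount`); the actual billing-detach call is left as a comment since it needs the Cloud Billing API client and project-specific names:

```python
import base64
import json

def parse_budget_alert(event: dict) -> tuple[float, float]:
    """Decode the base64 Pub/Sub payload a GCP budget notification delivers."""
    payload = json.loads(base64.b64decode(event["data"]).decode())
    return payload["costAmount"], payload["budgetAmount"]

def handle_budget_alert(event: dict, context=None) -> bool:
    """Return True when spend has exceeded the budget. A real deployment
    would then call the Cloud Billing API to detach billing from the
    project (Google's documented kill-switch pattern), which disables
    billable services."""
    cost, budget = parse_budget_alert(event)
    if cost <= budget:
        return False
    # e.g. cloudbilling projects.updateBillingInfo with billingAccountName=""
    # (omitted here; requires google-api-python-client and credentials).
    return True
```

Note the caveat in the sibling comment still applies: the notification itself only fires when billing reconciles usage, so this limits rather than eliminates overruns.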


Those budget alerts usually aren't instant, though; they only fire when the cloud gets around to reconciling your usage, some number of hours or even days after the damage is done. It's better than nothing, but with runaway spending you can still blow way past your limit.


I've been working on an open source, fully self-hosted network video recorder for about two months now. https://github.com/kevinbentley/ronin-nvr/

It works with cheap, generic IP cameras over RTSP. It's pretty easy to get it working with a Raspberry Pi too.

I was using the Synology surveillance app, but after their recent shenanigans I wanted something I could self-host and modify on my own.

I'm using it at my property with 14 cameras right now and I'm really happy with it. There's still some work to do, but it's integrated with ML object detection, and it can even call a vision-language model to describe a scene when certain things are detected.

This was my first attempt at a large-scale application that is heavily AI-assisted. I need to update the screenshots and feature list in the readme, but if you have any questions or want to get involved, let me know.
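For the curious, one common way to do the RTSP-capture half of an NVR is to let ffmpeg segment the stream without re-encoding; this is an illustrative sketch of that pattern, not necessarily how ronin-nvr implements it:

```python
import subprocess

def build_record_cmd(rtsp_url: str, out_pattern: str,
                     segment_seconds: int = 60) -> list[str]:
    """ffmpeg command that copies an RTSP stream into fixed-length MP4
    segments. Stream copy (no transcode) keeps CPU use low enough for a
    Raspberry Pi."""
    return [
        "ffmpeg",
        "-rtsp_transport", "tcp",   # TCP avoids UDP packet-loss artifacts
        "-i", rtsp_url,
        "-c", "copy",               # remux the camera's stream as-is
        "-f", "segment",
        "-segment_time", str(segment_seconds),
        "-reset_timestamps", "1",   # each segment starts at t=0
        out_pattern,                # e.g. "cam1_%03d.mp4"
    ]

def start_recording(rtsp_url: str, out_pattern: str) -> subprocess.Popen:
    """Launch one recorder process; an NVR runs one of these per camera."""
    return subprocess.Popen(build_record_cmd(rtsp_url, out_pattern))
```

Detection pipelines then watch the finished segments (or a parallel low-res substream) rather than the recording path itself.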

