Hacker News | coreyh14444's comments

The debris is pretty close to the color of the road. Seems like a good case for radar/lidar ¯\_(ツ)_/¯

Who knew spending more on extra sensors would help avoid issues like this? So weird.

"Sensor fusion is too hard" is the new cope.

To paraphrase sarcastically: "But what happens when the sensors disagree???"

[Insert red herring about MCAS and cop out about how redundancy is "hard" and "bad" "complexity".]

Have a minimum quorum of sensors, disable any sensor that generates impossible values (while deciding carefully what is and isn't possible), use sensors that are much more durable, reliable, and self-testable, and test, test, test integration and subsystems thoroughly, then some more.
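The quorum idea above can be sketched roughly. This is an illustrative toy, not anything from a real vehicle: the valid range, quorum size, and median vote are all made-up parameters.

```python
def fuse(readings, valid_range=(0.0, 100.0), quorum=2):
    """Discard impossible readings, require a minimum quorum of
    survivors, then vote by median. Parameters are illustrative."""
    lo, hi = valid_range
    ok = sorted(r for r in readings if lo <= r <= hi)
    if len(ok) < quorum:
        raise RuntimeError("not enough trustworthy sensors")
    mid = len(ok) // 2
    return ok[mid] if len(ok) % 2 else (ok[mid - 1] + ok[mid]) / 2

# 999.0 is outside the plausible range, so it is voted out and the
# result is the median of the two remaining readings.
print(fuse([12.1, 11.9, 999.0]))
```

The hard part, as the comment says, is deciding what "impossible" means per sensor; the voting itself is trivial.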


Maybe it decided it was a harmless plastic bag or something.

I've made that mistake before and the "plastic bag" cut my brake lines. Now I try to avoid anything in the road that shouldn't be there.

I've heard stories of plastic bags on the highway making their way into the path of the front-facing cameras of vehicles, resulting in automatic emergency braking at highway speeds.

Do we have an example of a lidar-equipped car that can avoid that?

Of course. There are plenty of LiDAR demos out there. For a start: https://www.youtube.com/watch?v=gylfQ4yb5sI BTW: https://www.reddit.com/r/TeslaFSD/comments/1kwrc7p/23_my_wit...

Tesla is essentially the only one that doesn't use lidar. I'd be very surprised if a Waymo had a problem with this debris.

And yet the humans detected it without lidar?

Human eyes are better by most metrics than any camera, and certainly any camera that costs less than a car. Also, obviously, our visual processing is, by most metrics, so much better than the best CV (never mind the sort of CV that can run in real time in a car) that it's not even funny.

They're making fun of Tesla, which stopped putting radar (ed: I misremembered; thanks to the commenter below) in their cars during the pandemic when it got expensive, and, instead of saying "we can't afford it", claimed it's actually better to not have lidar and just rely on cameras.

Tesla has never had LIDAR on production cars, only mapping/ground truth and test vehicles. It was radar that disappeared during the pandemic.

Yeah! Just add more sensors! We're only 992 more sensors away from full self-driving! It totally works that way!

The debris? The very visible piece of debris? The piece of debris that a third party camera inside the car did in fact see? Adding 2 radars and 5 LIDARs would totally solve that!

For fuck's sake, I am tired of this worn-out argument. The bottleneck of self-driving isn't sensors. It was never sensors. The bottleneck of self-driving always was, and still is, AI.

Every time a self-driving car crashes due to a self-driving fault, you pull the blackbox, and what do you see? The sensors received all the data they needed to make the right call. The system had enough time to make the right call. The system did not make the right call. The issue is always AI.


You want the AI to take the camera's uncertainty about a road-colored object and do an emergency maneuver? You don't want to instead add a camera that sees metal and concrete like night and day?

It’s a lot easier to make an AI that highly reliably identifies dangerous road debris if it can see the appearance and the 3D shape of it. There’s a fair bit of debris out there that just looks really weird because it’s the mangled and broken version of something else. There are a lot of ways to mangle and break things, so the training data is sparser than you’d ideally like.

The problem can be sensors, even for humans. When a human's vision gets bad enough, they lose their license.

We had sensors that can beat "a human whose vision got bad enough to get the license revoked" used as far back as in the 2004 DARPA competition.

That got us some of the way towards self-driving, but not all the way. AI was the main bottleneck back then. 20 years later, it still is.


We don't have a bottleneck anymore. We have Waymo. They seem to have solved whatever the issue was... I wonder what the main difference between the Waymo system and the Tesla system is?

The "main difference" is that Waymo wouldn't even try to drive coast to coast.

Because it's geofenced to shit. Restricted entirely to a few select, fully pre-mapped areas. They only recently started trying to add more freeways to the fence.


You're right, they wouldn't try, but I don't think there's any evidence for the idea that Waymo couldn't pull this trip off now from a technical POV. Even if they're pre-mapping, the vehicles still have to react to what's actually around them.

Adding more sensors (such as LIDAR) can certainly make it easier for the "AI" (Computer Vision) to detect & identify the object.

This just reflects the type of coding that George is doing. But the VAST majority of code written in the world is CRUD, Forms, Scripts, etc that AI aka "English" is a perfectly reasonable fit for. I mean, I use AI to write code hours a day and I don't think I'd let it drive my car for me.


Hyprland was a bit too far from what I'm used to, and required too many changes in my workflow, but all of this DHH pushing convinced me to try out Linux on a day to day basis and I'm switching to Fedora/Gnome so that's still a win for the cause I'd say.


I've been using Linux on and off since 2005. I've mostly stuck with Ubuntu after briefly using Slackware, but found myself using Windows like 95% of the time in the last few years.

Then DHH launched Omakub and, for me, it's been a game changer. It's not like he invented anything revolutionary, but he did something I hadn't had the time to do: customize it in a sensible way that, for me, is now superior to using Windows 10/11. Also caught the Lazyvim/neovim bug thanks to him, which led to other improvements such as Vimium in browsers.

I haven't tried Omarchy yet, but I will as soon as I find the time to tinker with it.


As someone in the midst of transitioning to Linux for the first time ever, the thing is: I still kinda hate Unix, but my AI friends (Claude Code / Codex) are very good at Unix/Linux and the everything is a file nature of it is amenable to AI helping me make my OS do what I want in a way that Windows definitely isn't.


On UNIX, "everything is a file" quickly breaks down once networking, or features added after UNIX System V, get used, but the meme still holds, apparently.

If you want really everything is a file, that was fixed by UNIX authors in Plan 9 and Inferno.


Yeah, I was really confused when I learned that every device was simply a file in /dev, except the network interfaces. I never understood why there is no /dev/eth0 ...

That was back in the mid-90s but even today I still don't understand why network interfaces are treated differently than other devices


It's probably because Ethernet and early versions of what became TCP/IP were not originally developed on Unix and weren't tied to its paradigms; they were ported to it.


Plan 9 does exactly this: all networking protocols live in /net - ethernet, tcp, udp, tls, icmp, etc. The dial string, in the form "net!address!service", abstracts the protocol from the application. A program can dial tcp!1.2.3.4!7788 or maybe udp!1.2.3.4!7788. How about raw Ethernet? /net/ether1!aabbccddeeff!12345. The dial(2) routine takes a dial string and returns an fd you read() and write(). Very simple networking API.
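The dial-string convention is simple enough to sketch. Here's a hypothetical parser in Python (Plan 9's actual dial(2) is a C routine; this only illustrates the "net!address!service" string format described above):

```python
def parse_dial(dial: str):
    """Split a Plan 9 style dial string, "net!address!service",
    into its three components. Illustrative only."""
    parts = dial.split("!", 2)
    if len(parts) != 3:
        raise ValueError("expected net!address!service")
    net, address, service = parts
    return net, address, service

print(parse_dial("tcp!1.2.3.4!7788"))
print(parse_dial("udp!1.2.3.4!7788"))
```

The application never cares which protocol it got back; that indirection is what makes the API so small.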


What would it mean to write to a network interface? Blast everyone as multicast? Not that useful. But Plan9 had connections as files, though I’ve never tried.


That's a bad argument. What does it mean to write to a mouse device? To the audio mixer? To the i2c bus device? To a raw SCSI device (scanner or whatever)? Those are all not very useful either.

Especially since there actually is a very useful thing that writing to /dev/eth0 would do: Put a raw frame on the wire, and reading from it would read raw frames.


You haven't thought through what you're asking. That's the bad argument. Network packets are not viable without a destination address. Nor does anyone want unaddressed (garbage) packets on their network.


I certainly have thought this through.

Network packets don't need a destination address. Broadcast addresses exist. Also, packets to invalid/unknown destinations exist. You can send network packets with invalid source or destination addresses already anyway.

Taking a raw chunk of data and putting it on the wire as-is is the most logical interpretation of "writing to the ethernet device". Does it make sense to allow everyone to do that? Certainly not, that's why you restrict access to devices anyway.

The fact that not every chunk of data "makes sense" for every device in /dev is certainly nothing new, since that is the case for all other devices already (I mentioned a few in my post above).


My first comment mentions multicast. Invalid packets won’t get routed, unless lucky enough to be accidentally valid.


Packets don't need to be routed. Sometimes you just want to communicate with a host on the same Layer-2 network. I said "Broadcast" (not Multicast) on purpose.

Sometimes you don't even want TCP/IP on the wire. Heck, sometimes you maybe don't even want DIX Ethernet on the wire.

Anyway, this discussion is going nowhere. Handcrafting packets is possible (it's basically what the kernel does anyway), sometimes it's useful, and being able to write a user-space program that just opens /dev/eth0 and writes its own handcrafted packets to that stream would be helpful.
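For what it's worth, the "raw chunk of data on the wire" is not mysterious. A minimal sketch of building (not sending) an Ethernet II frame in Python, with made-up MAC addresses and EtherType 0x88B5 (one of the values reserved for local experiments):

```python
import struct

def build_ethernet_frame(dst_mac: bytes, src_mac: bytes,
                         ethertype: int, payload: bytes) -> bytes:
    """Build a raw Ethernet II frame: 6-byte destination MAC,
    6-byte source MAC, 2-byte EtherType, then the payload.
    The trailing FCS is normally appended by the hardware."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    return header + payload

# Broadcast destination, made-up source address, experimental EtherType.
frame = build_ethernet_frame(b"\xff" * 6,
                             bytes.fromhex("aabbccddeeff"),
                             0x88B5,
                             b"hello")
print(len(frame))  # 14-byte header + 5-byte payload = 19
```

Writing those bytes to a hypothetical /dev/eth0 is exactly the operation being argued about; today you'd need a raw socket and elevated privileges instead.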


Well, it depends on what "file" means. The Linux interpretation would be that a file is something you can get a file descriptor for. And then the "everything is a file" mantra holds better again.


Windows is actually much closer to this limited, meaningless, form of the "everything is a file" meme. In Windows literally every kernel object is a Handle. A file, a thread, a mutex, a socket - all Handles. In Linux, some of these are file descriptors, some are completely different things.

Of course, this is meaningless, as you can't actually do any common operation, except maybe Close*, on all of them. So them being the same type is actually a hindrance, not a help - it makes it easier to accidentally pass a socket to a function that expects a file, and will fail badly when trying to, for example, seek() in it.

* to be fair, Windows actually has WaitForSingleObject / WaitForMultipleObjects as well, which I think does do something meaningful for any Handle. I don't think Linux has anything similar.


> Of course, this is meaningless, as you can't actually do any common operation, except maybe Close*, on all of them.

You can write and read on anything on Unix that "is a file". You can't open or close all of them.

Annoyingly, files come in 2 flavors, and you are supposed to optimize your reads and writes differently.


You can call write() and read() on any file descriptor, but it won't necessarily do something meaningful. For example, calling them on a socket in listen mode won't do anything meaningful. And many special files don't implement at least one of read or write - for example, reading or writing to many of the special files in /proc/fs doesn't do anything.
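The listening-socket case is easy to demonstrate. A small Python sketch (the specific errno shown is the Linux behavior):

```python
import errno, os, socket

# A TCP socket in listen mode has a perfectly valid file descriptor,
# but read() on it fails: there is no byte stream until accept().
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # bind to any free port
srv.listen()

err = None
try:
    os.read(srv.fileno(), 16)
except OSError as e:
    err = e
srv.close()

print(err is not None)              # the read failed
print(err.errno == errno.ENOTCONN)  # typically True on Linux
```

So read() is universally callable, but not universally meaningful, which is the point being made above.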


You can try to read/write the same on Windows: ReadFile (and friends) take a HANDLE.

It won't make sense to try to read from all things you can get a HANDLE to on Windows either, but it's up to what created the HANDLE/object as to what operations are valid.

https://learn.microsoft.com/en-us/windows/win32/sysinfo/kern...


Although many people nowadays mistake Linux for UNIX, it still isn't the same.


I was recently thinking that object orientation is kind of "everything is a file" 2.0, in the form of "everything is an object". I mean, of course, it didn't pan out that well. I haven't googled yet what people have already said about that. P.S. Big fan of your comments.


> object orientation is kind of everything is a file 2.0 in the form everything is an object

That is why I love Plan 9. 9P serves you a tree of named objects that can be byte addressed. Those objects are on the other end of an RPC server that can run anywhere, on any machine, thanks to 9p being architecture agnostic. Those named objects could be memory, hardware devices, actual on-disk files, etc. Very flexible and simple architecture.


I'd rather pick Inferno, as it improved on Plan 9's learnings, like a safe userspace in the form of Limbo, after the conclusion that throwing away Alef wasn't that great in the end.


Inferno was a commercial attempt at competing with Sun's Java. The Plan 9 folks had to shift gears, so they took Plan 9 and built a smaller, portable version of it in about a year. Both the Plan 9 kernel and the Inferno kernel share a lot of code and build system, so moving code between them is pretty simple.

The real interesting magic behind Plan 9 is 9P and its VFS design, so that leaves Inferno with one thing going for it: Dis, its user-space VM. However, Dis does not protect memory, as it was developed for MMU-less embedded use. It implicitly trusts the programmer not to clobber other programs' memory. It is also hopelessly stuck in 32-bit land.

These days Inferno is not actively maintained by anyone. There are a few forks in various states and a few attempts to make inferno 64 bit but so far no one has succeeded. You can check: https://github.com/henesy/awesome-inferno

Alef was abandoned because they needed to build a compiler for each arch and they already had a full C compiler suite. So they took the ideas from Alef and made the thread(2) C library. If you're curious about the history of Alef and how it influenced thread(2), Limbo and Go: https://seh.dev/go-legacy/

These days Plan 9 is still alive and well in the form of 9front, an actively developed fork. I know a lot of the devs, and some of them daily-drive their work via 9front running on actual hardware. I also daily-drive 9front via drawterm to a physical CPU server that also serves DNS and DHCP, so my network is managed via ndb. Super simple to set up vs other clunky operating systems.

And lastly, I would like to see a better Inferno but it would be a lot of work. 64 bit support and memory protection would be key along with other languages. It would make a better drawterm and a good platform for web applications.


> I would like to see a better Inferno but it would be a lot of work. 64 bit support and memory protection would be key along with other languages. It would make a better drawterm and a good platform for web applications.

Doesn't Wasm/WASI provide these same features already? That doesn't seem like "a lot of work", it's basically there already. Does dis add anything compelling when compared to that existing technology stack?


Inferno was initially released in 1996, 21 years before WASM existed.

An Inferno built using WASM would be interesting. Though WASI would likely be supplanted by a Plan 9/Inferno interface, possibly with WASI compatibility. Instead of a hacked-up hypertext viewer, you start with a real portable virtual OS that can run hosted or native. Then you build whatever you'd like on top, like HTML renderers, JS interpreters, media players/codecs, etc. Your profile is a user account, so you get security for free using the OS mechanisms. Would make a very interesting platform.


WASI has nothing to do with hypertext though. Even WASM itself is not especially web centric, despite the name.


I am well aware of that. My point is that a web browser, originally a hypertext viewer, is now a clunky runtime for all sorts of ad-hoc standards, including a WASM VM. So instead, start with a portable WASM VM that is a lightweight OS that you build a browser inside of, composed of individual components like Lego. You get all the benefits of having a real OS, including process isolation, memory management, a file system, security, and tooling. WASI is a POSIX-like ABI/API that does not fit the Plan 9/Inferno design, as they thankfully aren't Unix.


The WASI folks are accepting new API proposals. If the existing API does not fit an Inferno-like design, you can propose tweaked APIs in order to improve that fit.


All of that prose doesn't change the fact that at the time Inferno was built, it was an improvement over Plan 9, taking its experience into consideration for improvements.

I know pretty well the history, I was around at the time after all, and Plan 9 gets more attention these days, exactly because most UNIX heads usually ignore Inferno.


I actually read a decent paper on that a while back

Unix, Plan 9 and the Lurking Smalltalk

https://www.humprog.org/~stephen/research/papers/kell19unix-...

Late binding is a bit out of fashion these days but it really brings a lot of cool benefits for composition.


There is also an interesting report from Xerox PARC,

"UNIX Needs A True Integrated Environment: CASE Closed"

http://www.bitsavers.org/pdf/xerox/parc/techReports/CSL-89-4...

For the TL;DR crowd:

"We've painted a dim picture of what it takes to bring IPEs to UNIX. The problems of location, user interfaces, system seamlessness, and incrementality are hard to solve for current UNIXes--but not impossible. One of the reasons so little attention has been paid to the needs of IPEs in UNIX is that UNIX had not had good examples of IPEs for inspiration. This is changing: for instance, one of this article's authors has helped to develop the Smalltalk IPE for UNIX (see the adjacent story), and two others of us are working to make the Cedar IPE available on UNIX.

What's more, new UNIX facilities, such as shared memory and lightweight processes (threads), go a long way toward enabling seamless integration. Of course, these features don't themselves deliver integration: that takes UNIX programmers shaping UNIX as they always have--in the context of a friendly and cooperative community. As more UNIX programmers come to know IPEs and their power, UNIX itself will inevitably evolve toward being a full IPE. And then UNIX programmers can have what Lisp and Smalltalk and Cedar programmers have had for many years: a truly comfortable place to program."


Some GOSIP (remember that?) implementations on some Unices did have files for network connections, but that was very much in the minority. Since BSD was the home of the first widely usable socket() implementation for TCP/IP, it became the norm: sockets are files, just not linked to any filesystem, and control goes through connect()/accept(), setsockopt(), and the networking equivalent of the Unix system-call dumping ground, ioctl().


I don't remember any of the ones I used having it, or then I missed it.

Kind of, sockets don't do seek().


Not all devices support seek either, they're still files; ENOSYS


Psst - don't tell().


Linus finally relented and changed it to "everything is a stream of bits." Still, it's a useful metaphor and way to think about interacting with bits of the OS.


The problems of commercial Unix in 1993 are totally different from Linux in 2025.


Having observed my fair share of beginners transition from win to linux, the most common source of pain I've seen is getting used to the file permissions, and playing fast and loose with sudo.
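The permissions confusion usually clears up once someone connects the octal mode to the symbolic one. A minimal walk-through, assuming GNU coreutils (the `stat -c` format flags are GNU-specific):

```shell
# Create a file and set its mode explicitly.
touch demo.txt
chmod 640 demo.txt           # rw-r-----: owner read/write, group read, others nothing
stat -c '%a %A' demo.txt     # octal and symbolic forms of the same bits
ls -l demo.txt               # the familiar -rw-r----- listing, plus owner and group
rm demo.txt
```

Each octal digit is just read=4, write=2, execute=1 summed per owner/group/other, which is why 640 decodes to rw-/r--/---.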


<sarcasm> At least the Trump administration is opening up opportunities picking strawberries in triple digit heat </sarcasm>


While we are complaining about Microsoft and emoji: they need to grow a spine and bring back emoji flags. If you weren't aware, Microsoft removed all flag emojis to avoid geopolitical backlash over Taiwan, etc.


So what happens when they need to display a flag?


Country flag emojis are actually just specialised glyphs for pairs of characters that form a country code. If the system doesn't know how to render a given country code, it just falls back to displaying the two letters (often stylised as white-on-blue tiles) instead.

More info here: https://en.wikipedia.org/wiki/Regional_indicator_symbol
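The pairing scheme is easy to show in code. Each ASCII letter maps to a "regional indicator symbol" in U+1F1E6..U+1F1FF, and a flag is just two of those in a row:

```python
def flag(country_code: str) -> str:
    """Map a two-letter country code (e.g. "DK") to its pair of
    Unicode regional indicator symbols."""
    base = 0x1F1E6  # REGIONAL INDICATOR SYMBOL LETTER A
    return "".join(chr(base + ord(c) - ord("A"))
                   for c in country_code.upper())

print(flag("DK"))       # renders as the Danish flag, or "DK" tiles
print(len(flag("DK")))  # still just 2 code points
```

This is why a renderer that drops flag support degrades gracefully: the underlying characters are the letters, and the flag glyph is purely a font-level ligature.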


Maybe just me, but I have given up on flow state and my solution is to run 2 or 3 projects concurrently and switch between them. I use virtual desktops. work on one, give an assignment to my AI, switch to the next and cycle through them all day long.


This belongs as a scene in the movie Brazil or something. (I do mostly the same thing)


I'm literally doing the same.

Feels wrong but it's the closest I got.


American in Denmark here. There are dozens of stories like this where the Danes have a very matter-of-fact attitude to all sorts of topics: life and death, sex, etc. One story relayed to me yesterday was that the zoo in Copenhagen got in some trouble because they had to put down a baby giraffe and decided to do a dissection for some school kids. Nobody in Denmark cared, but it made the international news. My daughter's public school had "stone age week" in 4th grade, and they hunted ducks with spears, then prepped and cooked them over an open fire.


Who is downvoting these en masse?


Because GPT-5 comes out later this week?


It could be, but there’s so much hype surrounding the GPT-5 release that I’m not sure whether their internal models will live up to it.

For GPT-5 to dwarf these just-released models in importance, it would have to be a huge step forward, and I still have doubts about OpenAI's capabilities and infrastructure to handle demand at the moment.


As a sidebar, I’m still not sure if GPT-5 will be transformative due to its capabilities as much as its accessibility. All it really needs to do to be highly impactful is lower the barrier of entry for the more powerful models. I could see that contributing to it being worth the hype. Surely it will be better, but if more people are capable of leveraging it, that’s just as revolutionary, if not more.


It seems like a big part of GPT-5 will be that it will be able to intelligently route your request to the appropriate model variant.


That doesn’t sound good. It sounds like OpenAI will route my request to the cheapest model to them and the most expensive for me, with the minimum viable results.


Sounds just like what a human would do. Or any business for that matter.


That may be true but I thought the promise was moving in the direction of AGI/ASI/whatever and that models would become more capable over time.


Surely OpenAI would not be releasing this now unless GPT-5 was much better than it.

