
Look up the RPLidar family of devices. Cheap, 1D, easy to work with. By 1D I mean that it measures ranges in 360 degrees around, in the plane it is spinning in.


I think that your description is almost excellent, but that it's fundamentally misleading to describe what you are doing as a "30-bit" digit.

It's a base-10^9 digit mathematically, occupying 30 bits of storage. You do briefly mention that it's base 10^9, but repeatedly say "30 bits".


It isn't misleading at all.

A hexadecimal digit has 4 bits of entropy. You can guess it correctly by chance one in sixteen times. Calling that a four-bit digit is correct. Same with a 30-bit digit; all that changes is the magnitude.

The "storage bits" aren't a detail, they're an essential property. Stored in balanced ternary it would still have 30 bits of entropy.


I think you misunderstood the parent. If you speak of a number in the range 0-15 stored in 4 bits, we can all agree that "4 bit digit" is the appropriate term for it. But what about a number also stored in 4 bits but restricted to the range 0-13? It's a digit that fits in 4 bits, but calling it a "4 bit digit" without further qualification would omit relevant information.
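The distinction can be made concrete with a quick calculation: a digit's entropy is log2 of its radix, regardless of how many storage bits it occupies (a minimal sketch):

```python
import math

def digit_entropy_bits(radix):
    """Bits of entropy in one uniformly random digit of the given radix."""
    return math.log2(radix)

print(digit_entropy_bits(16))     # hex digit: exactly 4.0 bits
print(digit_entropy_bits(14))     # range 0-13: ~3.81 bits, though stored in 4
print(digit_entropy_bits(10**9))  # ~29.90 bits, though stored in 30 (or 32)
```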


That hadn't occurred to me, I had thought (having read the Fine Article, but not closely, nor really more than glanced at the code) that it was using exactly 30 bits and reserving two for, IDK, carry detection, and that 10^9 was just approximating 1073741824. We do that every time we call 1024 a KB and not a KiB, so there's precedent.

Ok. Fair. It's a ~29.9-bit digit taking up 32 bits of storage. It turns out I was misled!


My mother was in the 2nd (? early...) class of AT&T programmers hired. Nobody knew any programming, so the identified pre-requisites to getting hired were:

1) College degree

2) Typing 60 WPM

She and basically everyone else had been pushed to learn to type in high school, so they'd have something better than waitressing to fall back on.


Not widely publicized, but the benchmarking code is in the source. At one point I was running it on my specific target machines to get performance estimates in support of porting some large-ish CPU stuff from Matlab into C++.

The max performance came from Eigen calling into Intel MKL, but it was a big plus not to need MKL licenses on every development machine.


Always specify in order "sudo rm -rf opt /" so that opt is fully deleted before the / deletion causes too many failures?


The GPS can easily wobble by 50ns back and forth as the constellation changes. That's a lot! And, it is not random on a short time scale.

Folks often think "Oh, +/- 50ns, 20ns RMS, easy to filter...", but that's totally wrong.

The GPS will report -30ns from stable for minutes on end, then slew to +10ns, then -5ns, etc. Any high-precision oscillator (such as for radar) that's being jerked around like that isn't going to be as stable as high performance needs.
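To make the "easy to filter" fallacy concrete, here's a toy sketch (all numbers invented) of why averaging doesn't help when the error is correlated rather than random:

```python
def gps_offset(t):
    """Toy GPS timing error in ns: step-wise correlated wander, not white noise."""
    if t < 300:            # sits at -30 ns for five minutes
        return -30.0
    elif t < 360:          # then slews to +10 ns over the next minute
        return -30.0 + (t - 300) * (40.0 / 60.0)
    else:
        return 10.0

# A naive 60-sample moving average of 1 Hz readings during the flat stretch
window = [gps_offset(t) for t in range(240, 300)]
print(sum(window) / len(window))  # -30.0: the average just tracks the bias
```

Because the error sits at one value for minutes at a time, any averaging window shorter than the wander period simply reproduces the current offset instead of removing it.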

Even for just handoff of handsets at 2.2-2.3 GHz, having the radio network (aka cell towers) all locked to an oven-controlled oscillator that was aligned to, but far smoother than, GPS made a huge difference.

Now, improvements to GPS/GNSS that track 12 satellites instead of 6, and across multiple constellations, can result in more stable radio-based time. But then you get into urban canyons, and can only see 5 instead of 12, and you're right back into the jumpy situation.


I want this tied to a crude speed estimation algorithm, from a stationary camera.

Even with ALPR retention restrictions, I could trigger a video save and send the police a video of the idiots doing 50 mph through the residential neighborhood.


For speed estimation, I wonder if the simplest thing would be two plate readers a fixed distance apart. Say they're attached to light poles N meters apart.

When the reader triggers and takes a picture, it pings your server for a timestamp. It does the processing and records car X at time T provided by the timestamp. You then calculate the speed as the distance between the two readers divided by the time between the two timestamps for X.
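The arithmetic for the two-reader scheme is simple; a minimal sketch (distance and timestamps assumed to be in meters and seconds):

```python
def speed_mph(distance_m, t1_s, t2_s):
    """Average speed between two plate reads a known distance apart."""
    mps = distance_m / (t2_s - t1_s)  # meters per second
    return mps * 2.23694              # convert m/s to mph

# e.g. 50 m between poles, reads 2.0 s apart: 25 m/s, about 55.9 mph
print(round(speed_mph(50.0, 0.0, 2.0), 1))  # 55.9
```

Note this is only the average speed over the interval, which is usually what a speeding complaint needs anyway.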

Provided that speeding cars are a problem in your neighborhood, I bet you could find the people responsible for the light poles (which likely have power associated with them) to let you install the devices, especially if you're providing the devices and servers.

I don't think you'd necessarily be able to ticket the drivers though, as you're not law enforcement. Maybe you'd be able to work with local law enforcement though.


I was thinking of using the bounding boxes across frames.
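A rough single-camera version of that idea (calibration numbers are hypothetical; meters-per-pixel would have to be measured for the actual scene geometry):

```python
def speed_from_boxes(cx1_px, cx2_px, fps, meters_per_pixel):
    """Rough speed (m/s) from a bounding-box center moving between consecutive frames."""
    return abs(cx2_px - cx1_px) * meters_per_pixel * fps

# e.g. a box center moving 30 px/frame at 30 fps, with 0.025 m/px calibration:
print(speed_from_boxes(100, 130, 30, 0.025))  # 22.5 m/s, about 50 mph
```

Averaging over several frame pairs would smooth out detection jitter in the box positions.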

Maybe I could work with a neighbor, but then things need to be tightly time-synced.

Zero chance I could get light pole access. A city endorsed surveillance system? No way.


Network transparency meant that when I was doing Finite Element Analysis on the beefiest Sun workstation they sold, running a proprietary application, I could log in from home, run the application on the Sun, and see the results of the prior run.

I could edit the model, regenerate the mesh if needed, and kick off another big batch run, and then drop the X11 frontend GUI.

Over dial-up, from a Windows box to a Sun workstation, in the mid '90s. And it could be secure, tunneled through SSH.


I'm assuming that was sending OpenGL 1.0 commands over the network, or doing some kind of equivalent. Using X11 drawing commands for that would have resulted in miserable frame rates over dial-up if you ever tried to rotate or zoom the mesh. In any case that's not really network transparency nor is it really X11, the program likely had to be built in a very specific way using display lists.


CAD and Simulation Apps were written in those specific ways.

You are right.

Where that was done, performance was excellent.


It just seems wrong in that case to say that X is network transparent. The real point is that OpenGL 1.0 was capable of running over the network, and in order to use it effectively application developers had to take network operation into consideration, and the server had to support the non-standard extension required to use it correctly. In some circumstances using display lists locally could actually reduce performance, so the application may not have wanted to take that path in all cases: https://www.opengl.org/archives/resources/faq/technical/disp...

Generally if your application has any code that does this:

    if (client is remote)
        ...
    else if (client is local)
        ...
Then I wouldn't say the protocol you're using is network transparent.


OpenGL could run over the network because X could.

SGI used X fully. They wrote the book on all that, including GLX, and it was flat-out awesome.

The apps I used all ran display and pick lists. They worked very well over the wire, and that use case got used a lot.

The quibbles are really losing sight of the basic idea of running an app remotely on one's local display.

That happened and worked well and had some advantages. I personally built some pretty big systems for modeling and simulation that were cake to admin and very effective on many other fronts.

Notice I did not say network transparent in my comments above.

Multi-user graphical computing is much more accurate when it comes to how X worked and what it delivered to people.


BTW, display performance on those was great locally. A small hit locally is not that big of a deal. Never has been.

Users will employ detail limits, model scope limits, whatever to get the UX they need.

Developers can dual path it, or provide options. And they will provide options because not all users have latest and greatest. They never do.

In the end, it is mostly a wash for most things.

The big gains were had in other areas.

In the CAD space, sequential CPU is far more of a bottleneck. Mid to lower grade GFX subsystems perform more than good enough for a ton of cases today. Can't get a fast enough sequential compute CPU. And while there is serious work to improve multi threaded geometry, fact is most important data is running on crazy complex software that needs the highest sequential compute it can get.

Big data actually sees a gain with the X way of doing things.

Huge model data running over shared resources is a hard problem. And it continues to be one. Mix in multi user and it takes serious tools to manage it all and perform.

In the 90's, many of us were doing those things, multi user, revision control, concurrent access, you name it on big models and fast, local file systems. There was software with all that well integrated. We did not have cloud yet. Not really.

The app server model rocked!

X made all that pretty easy. One setup, and just connect users running whatever they want to, so long as a respectable X server is available, they are good to go.

One OS, the whole box dedicated to one app, fast storage, big caches, multiple CPUs, all tuned to get that job done well and perform.

Once that work is done, doing things the X way means it stays done, and users just run the app on the server. Bonus is they can't get at that data directly. Lots of silly problems just go away.

And, should that system need to be preserved over a long period of time? Just do that, or package it all up and emulate it.

In all cases, a user just connects to run on whatever they feel like running on.

Those of us still talking about how X does things see many advantages. Like anything, it is not the be-all end-all. It is a very nice capability, and a "unixey" way of doing things that is being lost.


It is quite possible it just sent draw-line requests as a list of vertices after doing all the math on the client, which core X supports just fine.


That takes huge amounts of bandwidth. There's a reason display lists were necessary to get it to work over a dial-up connection.
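A back-of-the-envelope calculation (vertex count and encoding are assumed for illustration) shows why streaming raw geometry every frame couldn't work over a modem:

```python
# Streaming raw line vertices for interactive mesh rotation, per second:
vertices = 50_000          # a modest FEA wireframe (assumed)
bytes_per_vertex = 4       # 2 x 16-bit screen coords in core X line requests
fps = 10                   # minimally interactive rotation
bps = vertices * bytes_per_vertex * fps * 8
print(bps)                 # 16,000,000 bit/s needed vs ~33.6 kbit/s dial-up
```

With display lists, the geometry crosses the wire once; each subsequent frame only needs a new transform and an "execute list" request.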


It means that _today_, I can run an IDE on my 32-core home desktop and interact with it seamlessly on my 13-inch laptop at the office. I'm not sure whether this is possible with Wayland.


https://wayland.freedesktop.org/faq.html#heading_toc_j_8:

Is Wayland network transparent / does it support remote rendering?

No, that is outside the scope of Wayland. To support remote rendering you need to define a rendering API, which is something I've been very careful to avoid doing. The reason Wayland is so simple and feasible at all is that I'm sidestepping this big task and pushing it to the clients. It's an interesting challenge, a very big task and it's hard to get right, but essentially orthogonal to what Wayland tries to achieve.

This doesn't mean that remote rendering won't be possible with Wayland, it just means that you will have to put a remote rendering server on top of Wayland. One such server could be the X.org server, but other options include an RDP server, a VNC server or somebody could even invent their own new remote rendering model. Which is a feature when you think about it; layering X.org on top of Wayland has very little overhead, but the other types of remote rendering servers no longer requires X.org, and experimenting with new protocols is easier.

It is also possible to put a remoting protocol into a wayland compositor, either a standalone remoting compositor or as a part of a full desktop compositor. This will let us forward native Wayland applications. The standalone compositor could let you log into a server and run an application back on your desktop. Building the forwarding into the desktop compositor could let you export or share a window on the fly with a remote wayland compositor, for example, a friend's desktop.


See waypipe for an example of how this can be done "natively": https://gitlab.freedesktop.org/mstoeckl/waypipe/

