Hacker News | rasguanabana's comments

There is a live view, but it looks like the machine has broken down


I had some down time when the Y axis stepper motor managed to unplug itself overnight!


Oh yes, I see it's a YouTube embed. I had those blocked, sorry.


The only thing that comes to mind for me is the simpler header, but I'm not sure if it makes much of a difference anyway.


Yes, it makes a difference: about 8 milliseconds. Properly implemented IPv6 has lower latency (and is more efficient, though I believe the energy savings are negligible). See this map: https://stats.labs.apnic.net/v6perf
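One concrete reason the IPv6 header is cheaper to process is that it is a fixed 40 bytes (RFC 8200), with no options field and no header checksum, so a router can slice every field at a fixed offset. A minimal parser sketch in Python (the function name and returned dict are mine, for illustration only):

```python
import struct

def parse_ipv6_header(packet: bytes) -> dict:
    """Parse the fixed 40-byte IPv6 header (RFC 8200 layout)."""
    # First 8 bytes: version/traffic class/flow label (32 bits),
    # payload length (16), next header (8), hop limit (8).
    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", packet[:8])
    return {
        "version": vtf >> 28,
        "traffic_class": (vtf >> 20) & 0xFF,
        "flow_label": vtf & 0xFFFFF,
        "payload_length": payload_len,
        "next_header": next_header,
        "hop_limit": hop_limit,
        "src": packet[8:24],    # fixed offsets: no option parsing needed
        "dst": packet[24:40],
    }
```

With IPv4, by contrast, the header length itself is variable (IHL field, options), so a parser must branch before it even knows where the payload starts.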


Wouldn’t a VLM be susceptible to prompt injection?


For sure, a string like "zqb" would give me pause with this letterform, because it looks a lot like "ząb". Maybe it would be clearer in surrounding text, though.


My first instinct was that JPEG XL would produce bigger images, probably for better quality.


I doubt it. There’s a reason you can’t make calls from an iPad (despite a SIM card variant existing). There’s a reason you cannot use the Pencil on a Mac trackpad. There’s a reason you have very limited multitasking support on iPad and none on iPhone.

Apple wants you to buy more devices to fill gaps that another one doesn’t support.


I’ve had to make “calls” to iPhone users plenty of times using FaceTime, and to non-iPhone users using whatever messaging app they were using.

I can also make regular calls from my Apple Watch.


Why go that way? I’m no digital signal processing expert, but images (and series thereof, i.e. videos) are 2D signals. What we see is the spatial domain, and analyzing pixel by pixel is naive and won’t get you very far.

What you need is to go to the frequency domain. From my own experiments in university days, the most significant image information lies in the lowest frequencies. Cutting off everything above the lowest 10% of frequencies leaves a very comprehensible image with only wavy artifacts around objects. You have plenty of bandwidth to use even if you want to embed info in existing media.

Now here you have the full bandwidth to use. Start in the frequency domain, set expectations for the lowest bandwidth you’ll allow, and set the coefficients of the harmonic components. Convert to the spatial domain, upscale, and you’ve got your video to upload. This should leave you with data encoded in a way that should survive compression and resizing. You’ll just need to allow some room for that.
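A rough sketch of that idea in NumPy. The coefficient layout and the strength value are made up for illustration; bits are added to low-frequency FFT coefficients (mirrored so the image stays real-valued), then recovered by comparing spectra:

```python
import numpy as np

def embed_bits(image, bits, strength=40.0):
    """Toy embedding: nudge one low-frequency FFT coefficient per bit."""
    spectrum = np.fft.fft2(image.astype(float))
    for i, bit in enumerate(bits):
        u, v = 1 + i // 8, 1 + i % 8          # low-frequency slots, skipping DC
        delta = strength if bit else -strength
        spectrum[u, v] += delta
        spectrum[-u, -v] += delta              # mirror keeps the inverse real
    return np.real(np.fft.ifft2(spectrum))

def extract_bits(original, watermarked, n_bits):
    """Recover bits by differencing the two spectra."""
    diff = np.fft.fft2(watermarked.astype(float)) - np.fft.fft2(original.astype(float))
    return [int(np.real(diff[1 + i // 8, 1 + i % 8]) > 0) for i in range(n_bits)]

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64))
payload = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_bits(img, payload)
```

A real scheme would embed blindly (no original at the decoder) and spread each bit over many coefficients to survive quantization, which is where the error-correction codes mentioned below come in.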

You could slap error correction codes on top.

If you think about it, you should consider video as, say, a copper wire or radio. We’ve come quite far transmitting over these media without ML.


We started with that approach: assuming the compression is wavelet based, and then purposefully generating wavelets that we know survive the compression process.

For the sake of this discussion, wavelets are pretty much exactly that: A bunch of frequencies where the "least important" (according to the algorithm) are cut out.

But that's pretty cool, seems like you've re-invented JPEG without knowing it, so your understanding is solid!
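For illustration, a single-level 2D Haar transform in NumPy (a toy sketch, not any particular codec's actual wavelet) makes the "cut out the least important details" step concrete:

```python
import numpy as np

def haar_2d(block):
    """One level of the 2D Haar transform: approximation + h/v/d details."""
    p00, p01 = block[0::2, 0::2], block[0::2, 1::2]
    p10, p11 = block[1::2, 0::2], block[1::2, 1::2]
    a = (p00 + p01 + p10 + p11) / 4   # local averages (the "low frequencies")
    h = (p00 - p01 + p10 - p11) / 4   # horizontal detail
    v = (p00 + p01 - p10 - p11) / 4   # vertical detail
    d = (p00 - p01 - p10 + p11) / 4   # diagonal detail
    return a, h, v, d

def inverse_haar_2d(a, h, v, d):
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
a, h, v, d = haar_2d(img)
# Lossy step: zero out small detail coefficients, as codecs effectively do.
h, v, d = [np.where(np.abs(c) < 1.0, 0.0, c) for c in (h, v, d)]
approx = inverse_haar_2d(a, h, v, d)
```

Without the thresholding step the transform is perfectly invertible; the loss comes entirely from discarding the small detail coefficients.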


How about a Fourier transform (or cosine, whichever works best), keeping the data as frequency-component coefficients? That’s the rough idea behind digital watermarking. It survives image transforms quite well.


I don’t quite see how it’s ground-breaking. Non-repudiation could be achieved long before blockchain, because we have cryptography and can sign things.


Yes, you can get pretty far just signing stuff, but without general availability of those attestations you are quite limited.

That's a reasonable way to describe a programmable blockchain: general availability of these attestations.


It’s not all about computing. It’s about avoiding the conversion from an electrical to an optical signal (and back) at every network node, which is costly.


Don't you need a certain amount of computing at each network node anyway, to see what to do and where to send the optical signal next? In addition to error correction and amplifying the signal?


Generally you only need to read the 'header'. If that is little enough computation, maybe it can be done optically, gaining the advantage of not needing to convert twice.


Often it might be as simple as routing the right wavelength through the right path, as in WDM systems. Optical amplifiers such as the EDFA [0] are an interesting thing, too.

[0]: http://www.fiber-optical-networking.com/the-application-of-e...

