Well, the problem with hardware decoding is that it cannot handle all the variations of data corruption, which can crash the hardware, sometimes in a way that a soft reset of the hardware block cannot recover.
It is usually more reasonable to use software decoders for really complex formats, or to accelerate only the heavy parts of decoding where data corruption is easy to deal with or benign, or to aim for the middle ground: _SIMPLE_ and _VERY CONSERVATIVE_ compute shaders.
Sometimes the software cannot even tell that the hardware has 'crashed' and is spitting out nonsense data. It gets even worse: on some hardware blocks a hot reset does not actually work and a power cycle is required... So any 'media player' able to use hardware decoding must always provide a clear and visible 'user button' that lets the user switch to full software decoding.
Then there is the next step of "corruption": some streams out there are "wrong", but this "wrong" decodes fine on some specific decoders and not on others, even though they all follow the same spec.
What a mess.
I hope those compute shaders are not using that abomination of GLSL (or the DX equivalent), i.e. that they are SPIR-V shaders generated from plain and simple C code.
These are all gripes you might have with Vulkan Video.
Unlike with Vulkan Video, in Compute, bounds checking is the norm. Overreading a regular buffer will not result in a GPU hang or crash. If you use pointers, it will, but if you use pointers, it's up to you to check whether overreads can happen.
The bitstream reader in FFmpeg for the Vulkan Compute codecs is copied from the C code, along with its bounds checking. The code that validates whether a block is corrupt or decodable is also taken from the C version. To date, I've never had a GPU hang while using the Compute codecs.
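To illustrate the pattern being described — a reader that latches an overread flag instead of ever touching memory past the buffer — here is a minimal bounds-checked bit reader in C. This is a hypothetical sketch, not FFmpeg's actual implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal bounds-checked bit reader (MSB-first). On any attempt to read
 * past the end, it returns 0 and latches an `overread` flag rather than
 * reading out of bounds -- the caller checks the flag once at the end. */
typedef struct {
    const uint8_t *buf;
    size_t size_bits;   /* total bits available */
    size_t pos_bits;    /* current read position, in bits */
    int overread;       /* set once an out-of-bounds read was attempted */
} BitReader;

static void br_init(BitReader *br, const uint8_t *buf, size_t size_bytes)
{
    br->buf = buf;
    br->size_bits = size_bytes * 8;
    br->pos_bits = 0;
    br->overread = 0;
}

/* Read n bits (n <= 32). Never touches memory past the buffer. */
static uint32_t br_read(BitReader *br, unsigned n)
{
    if (br->overread || n > 32 || br->size_bits - br->pos_bits < n) {
        br->overread = 1;
        return 0;
    }
    uint32_t v = 0;
    for (unsigned i = 0; i < n; i++) {
        size_t p = br->pos_bits + i;
        v = (v << 1) | ((br->buf[p >> 3] >> (7 - (p & 7))) & 1);
    }
    br->pos_bits += n;
    return v;
}
```

The same shape translates directly to a compute shader: bounds are checked against the buffer size before every access, so a corrupt stream produces garbage pixels at worst, never a hang.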
I wrote the Vulkan ProRes backend. The bitstream decoder was implemented from scratch, for a number of reasons.
First, the original code was reverse-engineered, before Apple published an SMPTE document describing the bitstream syntax. Second, I tried my best at optimizing the code for GPU hardware. And finally, I wanted to take the learning opportunity :)
Is Amazon working again (as it was a few years back) with the classic web, namely noscript/basic (X)HTML browsers? (Even text browsers like lynx or links2, etc.)
If that's an issue, and if you don't mind building something yourself, Marginalia has an excellent API that you can connect to from your own personal non-JavaScript meta-search engine. I did that, and I find Marginalia awesome to deal with. They're one of my favorite internet projects.
There is! The API key is literally "public". But apparently it often gets rate limited, because seemingly every metasearch engine uses that one. I think there might also be a slightly less rate-limited one for Hacker News users if you search around (I no longer remember what it is, since I got my own key in the end.)
You can get your own API key for free by emailing, but that would not be anonymous, I guess.
I don't have curl syntax to hand, but hopefully it's easy to figure out from these documents. I may come back and edit later with curl syntax if I get time:
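Since the comment above invites a curl example: here is a guess at the invocation. The endpoint shape (`{key}/search/{query}` on `api.marginalia.nu`) is my recollection of the docs linked and is not verified — treat it as a hypothetical sketch, with "public" being the shared demo key mentioned above:

```shell
# Hypothetical -- endpoint path is an assumption, check the docs above.
KEY="public"
QUERY="vulkan%20video"   # URL-encoded search terms
curl -s "https://api.marginalia.nu/${KEY}/search/${QUERY}?count=5"
```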
If their email server handles self-hosted SMTP servers with IP-literal email addresses (validated against the IP of the connecting SMTP server, which is stronger than SPF), then indeed, I will probably ask for mine.
I wish major AI services would do the same or something close.
2. I don't think using JavaScript by itself makes a site bad. It's a very nice site now; I prefer it to the old version. My website doesn't use JS for any functionality yet, but I've never said never either. The need for JS simply hasn't arisen. The day it does, I will use it.
But I understand the sentiment. I used to be a no-JS guy too, but I've been softened by having to use it professionally, only to think --- hmmm, not bad.
Web apps are gated by the abominations that are the WHATWG-cartel web engines, with even worse SDKs; mechanically they are certainly not 'small' and are assuredly a definitive no-no.
And the 'old' interface, you bet I tried to use it... which is actually gated behind JavaScript... so...
I've tested it in both w3m and dillo, should work fine as long as your browser renders noscript tags. It's very much designed from the ground up to handle browsers like that. Just requires you to manually wait a few seconds and then press the link.
One configuration that might break is running something like Chrome or Firefox rigged to not run JS. But it's really hard to support those types of configurations. If it works in w3m, it's no longer a "site requires JS" issue...
Thanks a lot for considering no-JS browsers like Dillo; in the current web hellscape that is certainly a difficult task. I checked, and it works well in Dillo on my end.
If there is that much performance difference among generic allocators, it means you need semantically optimized allocators (unless performance is actually not that important in the end).
Mostly agreed. Going from the standard library to something like jemalloc or tcmalloc will give you around 5-10% wins, which can be significant, but the differences between those generic allocators seem small. I recently wrote a slab allocator for a custom data type and got a 100% speedup over malloc.
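For readers unfamiliar with the technique: a slab allocator for one fixed-size type does one big allocation up front and then serves alloc/free in a few pointer operations, which is where wins like the one above come from. A minimal sketch in C (not the commenter's actual code; sizes are illustrative):

```c
#include <stddef.h>
#include <stdlib.h>

/* Toy slab allocator for one fixed-size object type. One malloc up
 * front; alloc/free are just pushes and pops on a free list that is
 * threaded through the unused slots themselves. */
#define SLAB_OBJ_SIZE 64    /* must be >= sizeof(void *) */
#define SLAB_NOBJS    1024

typedef struct {
    unsigned char *mem;
    void *free_list;        /* singly linked list through free slots */
} Slab;

static int slab_init(Slab *s)
{
    s->mem = malloc((size_t)SLAB_OBJ_SIZE * SLAB_NOBJS);
    if (!s->mem)
        return -1;
    s->free_list = NULL;
    for (size_t i = 0; i < SLAB_NOBJS; i++) {
        void *slot = s->mem + i * SLAB_OBJ_SIZE;
        *(void **)slot = s->free_list;   /* push slot onto free list */
        s->free_list = slot;
    }
    return 0;
}

static void *slab_alloc(Slab *s)
{
    void *slot = s->free_list;
    if (slot)
        s->free_list = *(void **)slot;   /* pop */
    return slot;                         /* NULL when exhausted */
}

static void slab_free(Slab *s, void *slot)
{
    *(void **)slot = s->free_list;       /* push back */
    s->free_list = slot;
}

static void slab_destroy(Slab *s) { free(s->mem); }
```

Freed slots are handed back LIFO, which also tends to keep hot objects in cache — part of why such allocators beat a general-purpose malloc for a single object type.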
Hello, I cannot tell whether this is true or not, since I have not been able to really test Claude AI's ability to code.
I am looking for a web API I could use with curl, with limited "public/testing" API keys. Anyone?
I am very interested in testing Claude Code's ability to write assembly (x86_64/RISC-V) and to assist with porting C++ code to plain and simple C (I read something on HN about this which seems promising).
Kernel anti-cheats are weaponized by hackers. It is all over HN.
Play games that are beyond that: Dota 2 or CS2, for instance.
On Linux, there is a new syscall which allows a process to mmap into itself the pages of another process (with, I guess, the same effective UID and GID). That is more than enough to give cheats hell...
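The comment doesn't name the syscall, so I won't guess at it; but a long-available primitive in the same family is process_vm_readv(2) (Linux >= 3.2), which lets a same-UID process (subject to Yama ptrace scope) read another process's memory — the kind of building block a user-space cheat scanner relies on. A minimal sketch, not tied to any particular anti-cheat:

```c
#define _GNU_SOURCE
#include <stdint.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Read `len` bytes at address `addr` inside process `pid` into `out`.
 * Returns the number of bytes read, or -1 on error (e.g. permission
 * denied under a restrictive Yama ptrace scope). */
static ssize_t read_remote(pid_t pid, const void *addr, void *out, size_t len)
{
    struct iovec local  = { .iov_base = out,          .iov_len = len };
    struct iovec remote = { .iov_base = (void *)addr, .iov_len = len };
    return process_vm_readv(pid, &local, 1, &remote, 1, 0);
}
```

A scanner built on this would walk the target's mappings (via /proc/pid/maps) and compare code pages against known-good images; the syscall itself just does the raw cross-process read.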
But any of that only works with a permanent, hard-working "security" team. If some game devs do not want to do that, they should keep their game offline.