Hacker News | deeringc's comments

JSON Web Tokens are part of the JSON Object Signing and Encryption (JOSE) family of standards, which are really just containers for cryptographic primitives in a web-friendly representation. Most people are aware of JWS (signed payloads), but there are also JWE (encrypted payloads) and JWK (key payloads). If you're building any sort of cryptographic system that needs to represent encrypted/signed values or keys, you can use JOSE to represent these primitives without having to reinvent the wheel. By far the biggest use of JOSE is in authentication systems, where JWS are used as signed bearer tokens, but that's just one application and there are many others. They aren't perfect, but they filled an important gap when they were created and made it much easier to deal with crypto at an application layer compared with all of the binary formats that are used in things like TLS.
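To make the JWS structure concrete, here's a minimal sketch of HS256 signing and verification using only the Python standard library. This is for illustration of the header.payload.signature layout only; real systems should use a vetted JOSE library, and the key and claims below are made up:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JOSE uses unpadded base64url encoding throughout.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def jws_sign(payload: dict, key: bytes) -> str:
    # A JWS compact serialization is header.payload.signature.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def jws_verify(token: str, key: bytes) -> dict:
    header, body, sig = token.split(".")
    expected = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url_decode(sig), expected):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(body))

token = jws_sign({"sub": "alice"}, b"secret")
print(jws_verify(token, b"secret"))  # {'sub': 'alice'}
```

The same three-part compact serialization is what you see in bearer tokens; JWE uses a five-part variant for encrypted payloads.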


Enterprise customers often have split-tunnel VPNs or proxies (with PAC configs) where part of the traffic may go through a VPN and another part goes directly. So for example a customer admin might configure an app that does email and WebRTC so that the real-time traffic (media and the associated signalling) goes directly, while the email traffic goes via some TLS-intercepting proxy for compliance or DLP reasons. This can result in one application having multiple public IPs for different network requests, even while it is on one internal network (not even jumping between networks like you say). That isn't something that the application author can control; it's the customer admin that decides to do that.
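For illustration, a split-tunnel policy like that might be expressed in a PAC file along these lines. PAC files are JavaScript evaluated by the client's proxy runtime; `FindProxyForURL` and `shExpMatch` are standard PAC functions, while the hostnames and proxy address here are hypothetical:

```javascript
// Hypothetical PAC config: real-time traffic bypasses the proxy,
// everything else goes through the TLS-intercepting proxy.
function FindProxyForURL(url, host) {
  // Media/signalling endpoints go direct (outside the intercepting proxy).
  if (shExpMatch(host, "*.rtc.example.com"))
    return "DIRECT";
  // All other traffic (e.g. email) via the corporate proxy.
  return "PROXY proxy.corp.example.com:8080";
}
```

With a config like this, the proxy sees one public IP for the app's email requests while the media flows egress from a different one, which is exactly the multiple-public-IP situation described above.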


Whenever they have, they have been bought by larger and richer (mostly US) tech companies. Look at Deepmind, Skype, Nokia, Tandberg, etc... Arm is another (although Japanese- rather than US-owned). There are also many cases where European founders base their companies out of the States for access to higher funding (eg Stripe, Spotify). Another factor is that US multinationals have a large presence across Europe in terms of employment - if a large component of the top tech talent of Europe is employed by US companies then they are less likely to build large European companies.


I agree on one level that SQLite is a master class in testing and quality. However, considering how widely used it is (essentially every client application on the planet) and that it does get several memory safety CVEs every year, there is some merit in a rewrite in a memory-safe language.


I don't think dual core was that rare in 2007. Conroe was released the year before. For gamers, dual core was the standard at that point (at least from what I remember of the time).


There's also the new Ryu algorithm that is being used, which is probably the biggest speed up.

https://github.com/ulfjack/ryu


AFAIK the state of the art now is "dragonbox":

https://github.com/jk-jeon/dragonbox
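Both algorithms target the same contract: emit the shortest decimal string that parses back to the exact same binary float. Python's repr() satisfies that contract too (via its own shortest-repr algorithm, not Ryu or Dragonbox), so it can illustrate the property being discussed:

```python
# Shortest round-trip formatting: the contract Ryu and Dragonbox implement
# (illustrated here with Python's repr, which uses a different algorithm).
x = 0.1 + 0.2
s = repr(x)
print(s)              # 0.30000000000000004 - shortest digits that round-trip
assert float(s) == x  # parsing the string recovers the exact same double

# A naive fixed-precision print either loses information or over-prints:
print(f"{x:.17f}")    # 0.30000000000000004 with trailing noise in general
```

The speed race between Ryu and Dragonbox is about computing that shortest digit string quickly, not about what the output is.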


You might want to watch this relevant video from Stephan T. Lavavej (the Microsoft STL maintainer): https://www.youtube.com/watch?v=4P_kbF0EbZM


I don't need to listen to what someone says if I can look at the source myself.


I believe the implementation you link to is not fully standards-compliant and uses an approximate solution.

Microsoft's is fully standards-compliant, and it is a very different beast: https://github.com/microsoft/STL/blob/main/stl/inc/charconv

Apart from various nuts-and-bolts optimizations (eg not using locales, better cache friendliness, etc...) it also uses a novel algorithm which is an order of magnitude quicker for many floating-point tasks (https://github.com/ulfjack/ryu).

If you actually want to learn about this, then watch the video I linked earlier.


You profiled the code in your head?


Any reason it needs to be a chromium fork, and not simply FF?


> Not all pieces of software are created equal. A desktop CAD application that doesn't do any networking and doesn't manipulate sensitive user data isn't worthy of binary exploitation. If there is adequate security at the system OS layer, at worst it will corrupt a user's file.

That software is almost certainly running on a network-connected machine though, and likely has email access etc. A spear-phishing attack with a CAD file that contains an RCE exploit would be an excellent way to compromise that user and machine, leading to attacks like industrial espionage, ransomware, etc...


If you've fallen victim to phishing you're hosed anyway as a malicious process can read and write to the address space of another process, see /proc/$pid/mem, WriteProcessMemory(), etc.


There's a spread of things that can happen in phishing; I would expect that it's a lot harder to get a user to run an actual executable outright than to open a "data" file that makes a trusted application become malicious.


In order to read or write /proc/pid/mem your process needs to be allowed to ptrace() the target process. You can’t do that for arbitrary processes. Similar story for WriteProcessMemory().
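As a Linux-only sketch of what that access looks like when it is permitted: a process may always trace itself, so /proc/self/mem is readable without extra privileges, whereas opening another pid's mem fails unless the kernel (subject to Yama ptrace_scope and credential checks) grants you ptrace permission over it:

```python
import ctypes

# Linux-only: read our own memory through /proc/self/mem.
# Self-tracing is always allowed; substituting another pid here
# requires ptrace permission over that process.
buf = ctypes.create_string_buffer(b"hello")
with open("/proc/self/mem", "rb") as mem:
    mem.seek(ctypes.addressof(buf))  # seek to the buffer's virtual address
    print(mem.read(5))               # b'hello'
```

Trying the same open() on an arbitrary pid you don't control typically fails with EACCES or EPERM, which is the point being made above.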


Above your security context, no, but you can definitely WriteProcessMemory() any other process in the same security context or lower (something similar holds for ptrace(), although remember that SUID/SGID binaries do not run at the same security context).


Well, maybe at an API and prompt level. But if Google pull ahead in this space then you may become dependent on what it alone can do functionally. Even if you can trivially switch LLM and prompt, if the others aren't able to do something equivalent (or at the same level of quality) then you're still locked in. Until now we've basically had this situation with OpenAI.


Why is vendor lock-in a concern if no other vendor offers that functionality?

