
Matrix does have that problem. I’ve lost so much message history to key management bugs.


This was pretty common until (I think) late last year, but GP was writing in present tense. The Matrix team have been fixing the various bugs responsible. I haven't seen an "unable to decrypt" error recently.

https://matrix.org/blog/2024/10/29/matrix-2.0-is-here/#4-inv...


Fwiw I can still read messages more than 5 years old in the new Element X app, which I recently installed.


> And the reality is that even a really precise fingerprint has a half-life of only a few days (especially if it's based on characteristics like window size or software versions).

A fingerprint that changes only by a bumped browser version isn’t dead; it’s stronger.


I'm not sure I understand this. If you show up on a website one day with one fingerprint, but the next day with a different fingerprint, there's no way to tell it's the same device unless the attribute that changed wasn't a core trait of the fingerprint in the first place.


If everything is the same but the browser version a day later, how is that not the same person?


I think you’re assuming the fingerprint is reported as a single hash (e.g. SHA-512) of multiple attributes, which would of course change if a single bit was different. But there’s no reason it would be reported that way. It could be (and more likely is) a big data structure of all the values, in which case it would be easy to see that only a few things changed.
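
To make that concrete, here’s a minimal sketch (Python, with made-up attribute names) of the difference between reporting a single hash and reporting the raw attributes:

  import hashlib
  import json

  # Two visits from the same device; only the browser version changed.
  day1 = {"user_agent": "Firefox/128", "screen": "2560x1440", "timezone": "UTC-5", "fonts": 312}
  day2 = {"user_agent": "Firefox/129", "screen": "2560x1440", "timezone": "UTC-5", "fonts": 312}

  def digest(fp):
      # Collapsing everything into one hash: any change breaks the match.
      return hashlib.sha512(json.dumps(fp, sort_keys=True).encode()).hexdigest()

  print(digest(day1) == digest(day2))  # False

  # Keeping the raw attributes: it's easy to see that only one field moved.
  changed = {k for k in day1 if day1[k] != day2[k]}
  print(changed)  # {'user_agent'}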


>it’s stronger.

Marginally, given that most browsers auto-update.


And what a weird and distracting AI-generated article. The bold-labelled numbered lists (e.g. “the evil plan”) are especially awkward.


Might have been AI-generated, or just poorly written. I found it hard to read either way.


> Old computers, before sandboxing and Windows defender and real-time protection, were more secure, because people were less likely to plug their bank account information, social security number, birth date, and home address into them.

So they weren’t actually more secure – they were less secure and less useful (setting aside the questionable historical accuracy of where popular online banking sits in the timeline relative to OS security measures in that claim). Maybe if we relax the made-up constraint that a change must create 100% foolproof security, we can have a more nuanced discussion about ways to improve security.


We should implement mechanisms that make it hard and obvious to do unsafe things and easy to do safe things, in all kinds of computers; even as an expert user, I don’t want to have to think about my text editor’s color scheme being able to access my bank. Yes, this necessarily involves a barrier to installing apps with certain privileges, and it should be high enough in software targeted at non-expert users to provide them with protection against scams. No, we obviously shouldn’t make it illegal for a user to do what they want, and nobody has even come close to proposing that here. That’s a straw man.


For anyone else who’s curious to see this app in use but isn’t willing or able to install it, here’s a convenient link to a random video: https://www.youtube.com/watch?v=h-CT9McJF4s&t=2m57s


Extensions can already use a better mechanism for this (https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...) than starting a local web server.
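
Assuming the (truncated) link above is about WebExtensions native messaging, the gist is that the browser launches a registered host program and exchanges length-prefixed JSON with it over stdio, so no local port is ever opened. A minimal host sketch in Python (the length-prefixed format is the documented one; the echo reply is just illustrative):

  import json
  import struct
  import sys

  def read_message():
      # Each message is a 4-byte native-endian length followed by UTF-8 JSON.
      raw_length = sys.stdin.buffer.read(4)
      if len(raw_length) < 4:
          return None
      (length,) = struct.unpack("@I", raw_length)
      return json.loads(sys.stdin.buffer.read(length).decode("utf-8"))

  def send_message(message):
      encoded = json.dumps(message).encode("utf-8")
      sys.stdout.buffer.write(struct.pack("@I", len(encoded)))
      sys.stdout.buffer.write(encoded)
      sys.stdout.buffer.flush()

  # Echo loop; the extension side talks to this via runtime.connectNative / runtime.sendNativeMessage.
  while True:
      msg = read_message()
      if msg is None:
          break
      send_message({"echo": msg})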


Backwards compatibility. Older browsers will still be able to render the <select> and submit it as part of a form, just with its options unstyled.


Control+Command+Space or Fn+E or Edit > Emoji & Symbols if you know the character’s name. It’s not very convenient for repeated use, but it gets the job done in a pinch.


Yeah it's not great. Edit isn't always there. Fn+E seems to make the most sense. I've heard about ctrl+cmd+space but commonly forget it. Both of those open the same GUI which combines emojis, stickers, and unicode symbols—preferring the first two categories over the last. To type out a unicode symbol it takes at least three clicks on top of me starting to type in the name of my symbol

sigh

Thanks for the suggestions


> Edit isn't always there. Fn+E seems to make the most sense. I've heard about ctrl+cmd+space but commonly forget it.

You can remap Fn/Globe directly to it if you want. It's also accessible from the Input menu bar item if you show that.

> Both of those open the same GUI which combines emojis, stickers, and unicode symbols—preferring the first two categories over the last. To type out a unicode symbol it takes at least three clicks on top of me starting to type in the name of my symbol

Are you using the expanded Character Viewer window[0], or the default collapsed Emoji & Symbols pane[1]? Because the expanded Character Viewer lets you customise and reorder the categories[2] (though that doesn't affect search), including adding a full Unicode view[3]. And they both default to the search bar when opened (though the Character Viewer opens unfocused for some reason).

[0]: https://imgur.com/hTtrbcA

[1]: https://imgur.com/3L31DQu

[2]: https://imgur.com/Ch1PI5L

[3]: https://imgur.com/epayzwe


I assume people downvoted it because “ASLR obscures the memory layout. That is security by obscurity by definition” is just wrong (correct description here: https://news.ycombinator.com/item?id=43408039). It does say [flagged] too, though, so maybe that’s not the whole story…?


No, that other definition is the incorrect one. Security by obscurity does not require that the attacker is ignorant of the fact you're using it. Say I have an IPv6 network with no firewall, simply relying on the difficulty of scanning the address space. I think that people would agree that I'm using security by obscurity, even if the attacker somehow found out I was doing this. The correct definition is simply "using obscurity as a security defense mechanism", nothing more.
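
(For a rough sense of the arithmetic being relied on there - the probe rate below is an illustrative assumption, not a claim about any real scanner:)

  # A single /64 subnet has 2**64 possible interface identifiers.
  addresses = 2 ** 64
  probes_per_second = 1_000_000          # assumed scanner speed
  seconds_per_year = 60 * 60 * 24 * 365
  print(addresses / (probes_per_second * seconds_per_year))  # ~585,000 years to sweep one /64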


No, I would not agree that you would be using security by obscurity in that example. Not all security that happens to be weak or fragile and involves secret information somewhere is security by obscurity – it’s specifically the security measure that has to be secret. Of course, there’s not a hard line dividing secret information between categories like “key material” and “security measure”, but I would consider ASLR closer to the former side than the latter and it’s certainly not security by obscurity “by definition” (aside: the rampant misuse of that phrase is my pet peeve).

> The correct definition is simply "using obscurity as a security defense mechanism", nothing more.

This is just restating the term in more words without defining the core concept in context (“obscurity”).


I'm inclined to agree, and would like to point out that if you take the hardline stance that any reliance on the attacker not knowing something makes it security by obscurity, then things like keys become security by obscurity. That's obviously not a useful end result, so that can't be the correct definition.

It's useful to ask what the point being conveyed by the phrase is. Typically (at least as I've encountered it) it's that you are relying on secrecy of your internal processes. The implication is usually that your processes are not actually secure - that as soon as an attacker learns how you do things the house of cards will immediately collapse.


What is missing from these two representations is the ability for something to become trivially bypassable once you know the trick to it. AnC is roughly that for ASLR.


I'd argue that AnC is a side channel attack. If I can obtain key material via a side channel, that doesn't (at least in the general case) suddenly change the category of the corresponding algorithm.

Also IIUC to perform AnC you need to already have arbitrary code execution. That's a pretty big caveat for an attacker.


You are not wrong, but how big of a caveat it is varies. On a client system, it is an incredibly low bar given client side scripting in web browsers (and end users’ tendency to execute random binaries they find on the internet). On a server system, it is incredibly unlikely.

I think the middle ground is to call the effectiveness of ASLR questionable. It is no longer the gold standard of mitigations that it was 10 years ago.


ASLR is not purely security through obscurity because it is based on a solid security principle: increasing the difficulty of an attack by introducing randomness. It doesn't solely rely on the secrecy of the implementation but rather on the unpredictability of memory addresses.

Think of it this way - if I guess the ASLR address once, a restart of the process renders that knowledge irrelevant implicitly. If I get your IPv6 address once, you’re going to have to redo your network topology to rotate your secret IP. That’s the distinction from ASLR.
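
A quick way to see the restart point (a Unix-only sketch using CPython's ctypes; any libc symbol would do): the same function maps to a different address each time the process starts, so knowledge gained from one run doesn't carry over.

  import ctypes
  import subprocess
  import sys

  def libc_symbol_address():
      # Address of a libc function as mapped into the current process.
      libc = ctypes.CDLL(None)
      return ctypes.cast(libc.printf, ctypes.c_void_p).value

  if __name__ == "__main__":
      if len(sys.argv) > 1:
          print(hex(libc_symbol_address()))
      else:
          # Two fresh processes: with ASLR enabled, each prints a different address.
          for _ in range(2):
              subprocess.run([sys.executable, __file__, "child"], check=True)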


I don't like that example, because the damage caused by a leaked secret, and the difficulty of recovering from the leak, are not what determine the classification. There exist keys that would be very time-consuming to recover from if leaked. That doesn't make them security by obscurity.

I think the key feature of the IPv6 address example is that you need to expose the address in order to communicate. The entire security model relies on the attacker not having observed legitimate communications. As soon as an attacker witnesses your system operating as intended the entire thing falls apart.

Another way to phrase it is that the security depends on the secrecy of the implementation, as opposed to the secrecy of one or more inputs.


You don’t necessarily need to expose the IPv6 address to untrusted parties, though, in which case it is indeed quite similar to ASLR in that data leakage of some kind is necessary. I think the main distinguishing factor is that ASLR by design treats the base address as a secret and guards it as such, whereas that’s not a mode the IPv6 address can have, because by its nature it’s assumed to be something public.


Huh. The IPv6 example is much more confusing than I initially thought. At this point I am entirely unclear as to whether it is actually an example of security through obscurity, regardless of whatever else it might be (a very bad idea to rely on, for one). Rather ironic, given that the poster whose claims I was disputing offered it as an example of something that would be universally recognized as such.


I think it’s security through obscurity because in ASLR the randomized base address is protected secret key material, whereas in the IPv6 case it’s unprotected key material (e.g. every hop between the two communicating parties sees the secret). It’s close, though, which is why IPv6 mapping efforts are much more heuristics-based than IPv4, which you can just brute force (along with the port number) quickly these days.


I'm finding this semantic rabbit hole surprisingly amusing.

The problem with that line of reasoning is that it implies that data handling practices can determine whether or not a given scheme is security through obscurity. But that doesn't fit the prototypical example where someone uses a super-secret and utterly broken home-rolled "encryption" algorithm. Nor does it fit the example of someone being careless with the key material for a well-established algorithm.

The key defining characteristic of that example is that the security hinges on the secrecy of the blueprints themselves.

I think a case can also be made for a slightly more literal interpretation of the term, where security depends on part of the design being different from the mainstream. For example, running a niche OS makes your systems statistically less likely to be targeted in the first place. In that case the secrecy of the blueprints no longer matters - it's the societal-scale analogue of the former example.

I think the IPv6 example hinges on the semantic question of whether a network address is considered part of the blueprint or part of the input. In the ASLR analogue, the corresponding question is whether a function pointer is part of the blueprint or part of the input.


> The problem with that line of reasoning is that it implies that data handling practices can determine whether or not a given scheme is security through obscurity

Necessary but not sufficient condition. For example, if I’m transmitting secrets across the wire in plain text, that’s clearly security through obscurity even if I’m relying on an otherwise secure algorithm. Security is a holistic practice, and you can’t treat secrets management as separate from the algorithm blueprint (which itself is also a necessary but not sufficient condition).


Consider that in the ASLR analogy dealing in function pointers is dealing in plaintext.

I think the semantics are being confused due to an issue of recursively larger boundaries.

Consider the system as designed versus the full system as used in a particular instance, including all participants. The latter can also be "the system as designed" if you zoom out by a level and examine the usage of the original system somewhere in the wild.

In the latter case, poor secrets management being codified in the design could in some cases be security through obscurity. For example, transmitting in plaintext somewhere the attacker can observe. At that point it's part of the blueprint and the definition I referred to holds. But that blueprint is for the larger system, not the smaller one, and has its own threat model. In the example, it's important that the attacker is expected to be capable of observing the transmission channel.

In the former case, secrets management (ie managing user input) is beyond the scope of the system design.

If you're building the small system and you intend to keep the encryption algorithm secret, we can safely say that in all possible cases you will be engaging in security through obscurity. The threat model is that the attacker has gained access to the ciphertext; obscuring the algorithm only inflicts additional cost on them the first time they attack a message secured by this particular system.

It's not obvious to me that the same can be said of the IPv6 address example. Flippantly, we can say that the physical security of the network is beyond the scope of our address randomization scheme. Less flippantly, we can observe that there are many realistic threat models where the attacker is not expected to be able to snoop any of the network hops. Then as long as addresses aren't permanent it's not a one time up front cost to learn a fixed procedure.


Function pointer addresses are not meant to be shared - they hold 0 semantic meaning or utility outside a process boundary (modulo kernel). IPv6 addresses are meant to be shared and have semantic meaning and utility at a very porous layer. Pretending like there’s no distinction between those two cases is why it seems like ASLR is security through obscurity when in fact it isn’t. Of course, if your program is trivially leaking addresses outside your program boundary, then ASLR degrades to a form of security through obscurity.


I'm not pretending that there's no distinction. I'm explicitly questioning the extent to which it exists as well as the relevance of drawing such a distinction in the stated context.

> Function pointer addresses are not meant to be shared

Actually I'm pretty sure that's their entire purpose.

> they hold 0 semantic meaning or utility outside a process boundary (modulo kernel).

Sure, but ASLR is meant to defend against an attacker acting within the process boundary so I don't see the relevance.

How the system built by the programmer functions in the face of an adversary is what's relevant (at least it seems to me). Why should the intent of the manufacturer necessarily have a bearing on how I use the tool? I cannot accept that as a determining factor of whether something qualifies as security by obscurity.

If the expectation is that an attacker is unable to snoop any of the relevant network hops then why does it matter that the address is embedded in plaintext in the packets? I don't think it's enough to say "it was meant to be public". The traffic on (for example) my wired LAN is certainly not public. If I'm not designing a system to defend against adversaries on my LAN then why should plaintext on my LAN be relevant to the analysis of the thing I produced?

Conversely, if I'm designing a system to defend against an adversary that has physical access to the memory bus on my motherboard then it matters not at all whether the manufacturer of the board intended for someone to attach probes to the traces.


If you can look up the base address via AnC, is considering it protected key material really correct?


I think that's why the threat model matters. I consider my SSH keys secure as long as they don't leave the local machine in plaintext form. However if the scenario changes to become "the adversary has arbitrary read access to your RAM" then that's obviously not going to work anymore.


If someone can recover the randomization within a second using the AnC attack, you can restart as much as you want, but it will not improve security.


  > The correct definition is simply "using obscurity as a security defense mechanism", nothing more.
Also stated as "security happens in layers", and often obscurity is a very good layer for keeping most of the script kiddies away and keeping the logs clean.

My personal favorite example is using a non-default SSH port. Even if you keep it under 1024, so it's still on a root-controlled port, you'll cut down the attacks by an order of magnitude or two. It's not going to keep the NSA or MSS out, but it's still effective at pushing away the common script kiddies. You could even get creative and play with port knocking - that keeps even the under-1024 ports' logs clean.
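
For reference, the non-default port itself is a one-line change (the port number below is just an example):

  # /etc/ssh/sshd_config (example; anything under 1024 stays root-controlled)
  Port 922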


I use non-standard SSH ports too. It does not improve theoretical security, but it does improve quality of life by generating smaller logs.


In the limit, an encryption key falls to the same logic: you're simply relying on the difficulty of scanning all possible keys.

