> On 18 March 2024, the Secretary of State was provided with a Submission which made it clear that Category 1 duties were not primarily aimed at pornographic content or the protection of children (which were dealt with by other parts of the Act).
Notice this is under Sunak, not Starmer. The Times chooses when to support and oppose the Online Safety Act based on which party is in government, and provides evidence for its view by lying through omission.
The Online Safety Act is undeniably terrible legislation, but you won't find good-faith criticism of it from the Times.
This is true, but there is a subtle point: the key K1 used for the classical algorithm must be statistically independent of the key K2.
If they're not, you could end up in a situation where the second algorithm's output is correlated with the first's in some way and they cancel each other out. (Toy example: suppose K1 == K2 and the algorithms are OneTimePad and InvOneTimePad; they'd just cancel out to give the null encryption algorithm. More realistically, if I cryptographically recover K2 from the outer encryption and K1 came from the same seed, K1 might be easier to find.)
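A minimal sketch of that toy example, with a single XOR pad standing in for both OneTimePad and InvOneTimePad (XOR is its own inverse), so reusing K1 as K2 makes the two layers cancel:

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    # XOR pad: encryption and decryption are the same operation
    return bytes(a ^ b for a, b in zip(data, key))

msg = b"attack at dawn"
k1 = secrets.token_bytes(len(msg))
k2 = secrets.token_bytes(len(msg))

layered = otp(otp(msg, k1), k2)  # independent keys: still ciphertext
broken = otp(otp(msg, k1), k1)   # K1 == K2: the pads cancel out

assert broken == msg             # the "double encryption" is the identity
```

With independent keys the composition is at least as strong as the outer layer; with K1 == K2 it is the null cipher.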
I might be wrong, but can't you checkpoint the post-system prompt model and restore from there, trading memory for compute? Or is that too much extra state?
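For illustration, the memory-for-compute trade can be sketched with an ordinary memoised function standing in for "run the model over the system prompt once and keep the resulting (KV-cache) state"; all names here are made up:

```python
import functools

@functools.lru_cache(maxsize=8)
def prefix_state(system_prompt: str) -> int:
    # Stand-in for the expensive step: processing the system prompt
    # and checkpointing the model state that results from it.
    return sum(map(ord, system_prompt))

def respond(system_prompt: str, user_msg: str) -> str:
    # After the first call, the prompt's state is restored from cache
    # instead of being recomputed.
    state = prefix_state(system_prompt)
    return f"state={state} msg={user_msg}"
```

The extra state is exactly what's cached per distinct prompt, which is why the question of "how much extra state" matters.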
My mental model is that the system prompt isn't one thing, and that seems even more apparent with line 6 telling the model what today's date is. I have no insider information, but system prompts could undergo A/B testing just like any other change, to find the optimal one for some population of users.
Which is to say you wouldn't want to bake such a thing too deeply into a multi-terabyte bundle of floating-point numbers, because it makes operating things harder.
This doesn't work for instruction-tuned models, but it's an interesting alternative approach that doesn't need a complicated (and thus gameable) evaluation function or human interaction. Instead, the model is evaluated on predicting the next word in data newer than its training set.
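A toy sketch of that evaluation idea (all names here are illustrative): score a next-word predictor by its average log-loss on text published after its training cutoff, so the benchmark cannot be memorised and there is no hand-tuned reward to game.

```python
import math

def log_loss(predict, text: str) -> float:
    # Average negative log-probability the predictor assigns to each
    # actual next word, given the words before it.
    words = text.split()
    total = 0.0
    for i in range(1, len(words)):
        p = predict(tuple(words[:i]), words[i])  # P(next word | context)
        total += -math.log(max(p, 1e-12))
    return total / (len(words) - 1)

# Dummy baseline: a uniform distribution over a 1000-word vocabulary.
uniform = lambda ctx, w: 1 / 1000
held_out = "text published after the training cutoff"
score = log_loss(uniform, held_out)  # lower is better
```

A real run would swap in an actual language model for `uniform` and a large post-cutoff corpus for `held_out`.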
Room membership is still determined by the server rather than the client - but we now warn the user and freeze the room if devices which are not signed by their owner are present in the room.
Constraining the user membership to be controlled by the client is Hard in a fully decentralised world, but we're working on it: one option is MSC4256 (which pushes the whole problem to MLS); another option is to run Matrix's state resolution algorithm on the client (making the client implementation even more complex) to ensure that the client agrees with the server on the correct user membership.
Thanks a lot for chiming in! It's nice to hear it's better and improving.
View from 1000 feet: maybe a way to lock a room's users would be interesting? So that, when new users join, say, a DM room, the client does not give them decryption keys for messages. Something like a weaker form of "only send messages to verified users", where you could have a DM room with (at most) 2 people.
Or, instead, maybe an option to disable forwarding session keys older than the user's room join event, to keep forward secrecy so that a new user does not get to read old messages (or does this already happen every 100 messages?).
> View from 1000 feet: maybe a way to lock a room's users would be interesting?
That's a really interesting idea - having immutable memberships could be a good band-aid. The problem is that right now the fact that room membership is typically mutable can be valuable: you add assistants into DMs (human or virtual); you can bridge the DM to other platforms; you can add (benign) audit bots for compliance purposes; you can migrate between Matrix IDs by inviting in your new ID and kicking out the old one; etc.
Of course, this same flexibility comes with a risk, and I see the point that it might be better to 'seal' membership if you know it's flexibility you don't want. We'll have a think.
> Or, instead, maybe an option to disable forwarding session keys older than the user's room join event, to keep forward secrecy so that a new user does not get to read old messages (or does this already happen every 100 messages?).
Currently we never forward session keys, so new users don't get to read old messages regardless. This obviously causes its own problems, especially for Slack/Teams style use cases where new joiners expect to be able to read conversation history. Work is ongoing right now to finally fix this (https://github.com/element-hq/element-meta/issues/39), but we are very mindful of the risk of sharing existing history with the wrong users (or devices), which is one of the reasons it's taken so long to land.
The 100-message thing is separate: it's the maximum number of times a session-key ratchet can be advanced before it gets replaced. In other words, if you steal a session key, you can only use it to decrypt at most the next 100 messages sent by that device.
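As a rough sketch of that rotation idea (Megolm's real ratchet is a more elaborate multi-part hash construction; this toy version just hashes forward, and the limit of 100 and the names are assumptions for illustration):

```python
import hashlib

ROTATION_LIMIT = 100  # after this many messages the session key is replaced

def advance(key: bytes) -> bytes:
    # One ratchet step: a one-way hash, so a stolen key can be wound
    # forward to decrypt later messages but never backward to earlier ones.
    return hashlib.sha256(key).digest()

key = hashlib.sha256(b"session seed").digest()
for _ in range(ROTATION_LIMIT):
    key = advance(key)  # one step per message sent by this device
# At this point the device negotiates a fresh session key from scratch,
# bounding what any single stolen key can ever decrypt.
```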
Thanks again for taking the time to run me through these things.
> we are very mindful of the risk of not sharing existing history to the wrong users (or devices), which is one of the reasons it's taken so long to land.
It's great to hear these things are being kept in mind going forward; it should hopefully make protocol changes easier when they're needed.
Yup, but when I've tried gorm, it isn't generic enough to let me do this for nested updates (with relations).
In gorm you can't do Car.update({'soundsystem': [{id: 'bluetoothProtocolId'}]}); you can only do something like Car.Association('soundsystem').update([{id: 'bluetoothProtocolId'}]). The fundamental difference is that the latter deals with Car's internals and is not generic.
Demo: https://roychao19477.github.io/speech-enhancement-demo-2024/
Try it: https://huggingface.co/spaces/rc19477/Speech_Enhancement_Mam...