The people of the West Bank have engaged in terrorism against Israeli civilians up to 2005 - https://en.wikipedia.org/wiki/Second_Intifada - and that's not particularly ancient history.
If the Ukraine war goes on for 50 more years you might start to see Ukrainians attacking Russian civilians too, once it’s clear they have no other option.
I don't have a lot of background in this, so tell me if I'm off-base. Israel is occupying land that it took from Palestine, is it not? I don't feel that terrorism is the correct term for attacks on an occupying force.
Yeah, typically if you agree with them they're "freedom fighters"; if you don't, they're "terrorists".
In my opinion, most long-running conflicts tend to have both sides doing terrible things over the course of the conflict, so it's rare that there is one side that is "good" and one that is "evil".
People in the UK have engaged in terrorism against UK citizens up until the present; that doesn't justify the UK occupying Ireland, and installing groups of settlers.
I know, that's exactly what England did under Cromwell; one of my ancestors was such a settler.
Maybe that's the subject for the next blog post, but I think the main reason cancellation causes more trouble than ordinary IO problems is this: with ordinary errors you assume that the resource that suffered the error is down and don't care about its precise state, while with cancellation the resource is perfectly OK and you want to continue using it.
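A hedged sketch of that asymmetry (all names here are hypothetical, not from any real library): on an I/O error the connection is assumed dead and simply dropped, while after a cancellation it is healthy but in an unknown protocol state, so that state has to be tracked before anyone reuses it.

```rust
struct Connection {
    // Set when a request was cancelled mid-flight: the peer may still
    // send a reply that has to be drained before the socket is reused.
    pending_reply: bool,
}

enum RequestOutcome {
    Ok(String),
    IoError,   // resource is assumed down: just drop it, state doesn't matter
    Cancelled, // resource is fine, but we still care about its exact state
}

impl Connection {
    fn handle(&mut self, outcome: RequestOutcome) -> Option<String> {
        match outcome {
            RequestOutcome::Ok(reply) => Some(reply),
            // Error path: the caller discards the connection and reconnects,
            // so no bookkeeping is needed here.
            RequestOutcome::IoError => None,
            // Cancellation path: the connection will be reused, so we must
            // record that it is mid-protocol.
            RequestOutcome::Cancelled => {
                self.pending_reply = true;
                None
            }
        }
    }
}

fn main() {
    let mut conn = Connection { pending_reply: false };
    assert!(conn.handle(RequestOutcome::Cancelled).is_none());
    // The resource survived the cancellation, but it is not yet safe to
    // reuse until the pending reply is drained.
    println!("needs cleanup before reuse: {}", conn.pending_reply);
}
```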
The one system I've seen work is when each medium team (of say, 20-36 people) has a "devops team" of say 4-6 people that is responsible for all the infrastructure stuff, while the rest of the people can focus on product work. It's just that keeping track of infrastructure is much more efficient as a full-time job rather than a 10%-time job.
It's important to do the org engineering right so that the infrastructure people and developers feel responsible for each other and work well together - ideally both teams should be able to open PRs against each other's code if needed and get help any time.
If the org engineering isn't done right, devs write code that pages the ops and the ops don't respond to devs quickly enough for things to work well ("traditional ops").
Not that surprising: CPUs can execute scalar ops in parallel with vector computation, so I would expect a few scalar ops to be free in any code that is not scalar-bound.
Why is Linux accepting packets that arrive on one interface but are addressed to an IP address belonging to a different interface? It feels like it is "forwarding" the packets internally, but `ip_forward` is turned off.
Is there any case where this behavior is legitimately useful?
IP addresses don't "belong" to interfaces in the general case. It's just a hard problem. In fact there are lots of multi-homed use cases where you want to internally route packets across interfaces without an affirmative mapping of what address is supposed to be used where.
For the specific case of point to point VPNs, there's a rule that makes sense. But that's not part of the network stack per se and there's no way to enforce it generically.
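For context, this is the "weak host model": Linux treats local addresses as belonging to the host, not to any particular interface. As far as I know there is no single sysctl that makes unicast delivery strictly per-interface, but two related knobs tighten things up (shown here as a sketch, not a complete mitigation):

```shell
# Strict reverse-path filtering (RFC 3704): drop packets whose *source*
# address would not be routed back out the interface they arrived on.
sysctl -w net.ipv4.conf.all.rp_filter=1

# Only answer ARP requests for addresses configured on the incoming
# interface (by default Linux will answer for any local address):
sysctl -w net.ipv4.conf.all.arp_ignore=1
```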
Do network stacks drop 127.0/8 packets from external interfaces today? Superficially (I'm not an experienced TCP/IP or routing stack developer, although I do work in the kernel) it seems like the same treatment could be used for VPN-registered interface addresses. You just need an API to specify "I'm a VPN interface" when the device is created or the IP assigned, no?
What's the configuration you're talking about? In the Wifi+Ethernet case, how do the routers know to send the packets towards the "right" interface, without the computer having the "right" IP address?
I mean, suppose the computer has WiFi IP address 10.0.0.3 & Ethernet IP address 10.0.0.5, then after NAT the return packets will go to 10.0.0.3, and therefore should go to the WiFi interface, not to the Ethernet interface (or, if they don't, how do they know which interface they should go to?).
Suppose you have a VPN server that routes traffic between several offices. It has tun0 with 192.168.0.1/24 linked to the New York office and tun1 with 192.168.1.1/24 linked to the London office.
The server also runs some service, say ssh, and you have a name for it in the DNS that resolves to one of its IP addresses. When you type "ssh vpn-server.example.com" it should work regardless of whether you're in New York or London, right?
If 192.168.0.42 can reach 192.168.1.42 by routing through the VPN server then it should generally also be able to reach 192.168.1.1 on the VPN server itself.
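A sketch of the routing setup on that hypothetical VPN server (addresses taken from the example above):

```shell
ip route add 192.168.0.0/24 dev tun0   # New York office
ip route add 192.168.1.0/24 dev tun1   # London office

# 192.168.0.1 and 192.168.1.1 are both *local* addresses on this host,
# so a packet from 192.168.0.42 arriving on tun0 but addressed to
# 192.168.1.1 is delivered locally (e.g. to sshd) rather than being
# forwarded out tun1 - which is exactly the weak-host behavior that
# makes "ssh vpn-server.example.com" work from either office.
```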
> how do the *routers* know to send the packets towards the *"right"* interface
The described attack utilized a malicious router.
I imagine, in theory, that any middle router (such as your ISP) could then be used for such an attack. Imagine Comcast being able to inject their garbage [0] into even VPN sessions. Or a government actor that Comcast is known to route for.
I use this behavior in production systems where I have 'well-known' RFC1918 addresses I use for service bootstrapping/configuration. In the network engineering world, extra loopback interfaces are also used for similar reasons.
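A sketch of that pattern (the address here is made up): a "well-known" RFC1918 service address is assigned to the loopback interface, so it stays reachable and stable no matter which physical interface traffic comes in on.

```shell
# Assign a well-known bootstrap/config address to loopback:
ip addr add 10.255.255.1/32 dev lo

# Traffic arriving on eth0 for 10.255.255.1 is still delivered locally,
# which is the same weak-host behavior discussed in this thread - here
# it is a feature rather than a bug.
```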
From the outside, it feels that most exploit chains on modern systems rely on 4 mostly-independent steps:
1. code execution in a worker process - typically a memory corruption
2. ACE (arbitrary code execution) in the worker process to ACE in an unsandboxed process
3. code execution in unsandboxed process to local root
4. local root to persistence
Finding (1), (3), and (4) is old-school exploit development - a combination of looking at fuzzers, looking at code, looking at bug reports, and memory exploit development (which is a black art I'm not familiar with). So persistence and luck. Be lucky 3 times and you have 3 steps. If you were an organization I suppose you could have 3 separate groups, or buy from 3 separate blackhats.
I'm less familiar with the "worker process to user process" part, which tends to rely on combining a few vulnerabilities (in this exploit, two, plus one broken hardening measure), but it's probably similar.
For example, accidentally sharing a lock-less cache or a non-atomic reference-counted pointer between threads.
For example, this code tries to send a reference-counted pointer (Rc) to another thread, which could leave the reference count unsynchronized and cause random use-after-free bugs:
```rust
use std::thread;
use std::rc::Rc;

fn main() {
    let rcs = Rc::new("Hello, World!".to_string());
    let thread_rcs = rcs.clone();
    thread::spawn(move || {
        println!("{}", thread_rcs);
    });
}
```
This is detected by the compiler, which produces this error:
```
error[E0277]: the trait bound `std::rc::Rc<std::string::String>: std::marker::Send` is not satisfied in `[closure@src/main.rs:8:19: 10:6 thread_rcs:std::rc::Rc<std::string::String>]`
 --> src/main.rs:8:5
  |
8 |     thread::spawn(move || {
  |     ^^^^^^^^^^^^^ `std::rc::Rc<std::string::String>` cannot be sent between threads safely
  |
  = help: within `[closure@src/main.rs:8:19: 10:6 thread_rcs:std::rc::Rc<std::string::String>]`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::string::String>`
  = note: required because it appears within the type `[closure@src/main.rs:8:19: 10:6 thread_rcs:std::rc::Rc<std::string::String>]`
  = note: required by `std::thread::spawn`
```
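For completeness (this fix isn't shown above, but it is the standard one): using the atomically reference-counted Arc instead of Rc makes the same program compile, because Arc updates its count with atomic operations and therefore implements Send.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc's reference count is updated atomically, so Arc<String> is
    // Send and the compiler lets it cross the thread boundary.
    let arcs = Arc::new("Hello, World!".to_string());
    let thread_arcs = Arc::clone(&arcs);
    let handle = thread::spawn(move || {
        println!("{}", thread_arcs);
    });
    handle.join().unwrap();
}
```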
What's interesting is that updated info indicates it happened before the engine (which was a Block 5 engine, as you say) was ignited, during a "LOx drop" test: basically running liquid oxygen through it to test for any leaks:
https://arstechnica.com/science/2017/11/an-experimental-spac...
It may mean they had a simple FOD (foreign object debris) or contamination issue, not a design problem. Powerful oxidizers like liquid oxygen can make contact explosives out of all kinds of organic materials.
Stuff that you normally don't think of as flammable, like hunks of structural metal, burns quite nicely in the presence of liquid oxygen. And stuff that already burns well burns REALLY WELL in the presence of LOX.