The answer is simple: I have something to hide. I have many things to hide, actually. None of these things is illegal at the moment, but I still have many things to hide. And if I have something to hide, I have many things to worry about.
I support jail time for cases like this, but with mandatory physical consequences (like a daily beating with a stick).
Since that is kind of unrealistic (sadly), here's a more realistic idea:
In case a company goes bankrupt and bricks its cloud-dependent products, I, as the owner of the (now defunct) product, must become a co-owner of the company. By extension, when the company holds licenses for software (e.g., Qt), those licenses would transfer to me as well. This would grant me the right to receive a copy of the source code and build it myself. With access to the source code, I (and everyone else) could easily change any hardcoded DNS name. And even without changing it, everyone can run a Pi-hole, so why not add a special case for their domain to point to my server? (I don't use Pi-hole, but I guess it has that option.)
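For illustration, that "special case" is just a local DNS override: answer queries for the vendor's hardcoded domain with the address of a server you control (Pi-hole exposes this kind of override in its admin UI). A minimal sketch using the dnslib Python package, with a made-up domain, port, and replacement IP:

```python
# Minimal DNS override: answer A queries for the dead vendor's hardcoded domain
# with the IP of my own replacement server, and leave everything else empty.
# Hypothetical names/addresses; a real setup would forward other queries upstream.
from dnslib import RR, QTYPE, A
from dnslib.server import DNSServer, BaseResolver

VENDOR_DOMAIN = "api.defunct-vendor.example"
MY_SERVER_IP = "192.168.1.50"

class OverrideResolver(BaseResolver):
    def resolve(self, request, handler):
        reply = request.reply()
        qname = str(request.q.qname).rstrip(".")
        if qname == VENDOR_DOMAIN and QTYPE[request.q.qtype] == "A":
            reply.add_answer(RR(request.q.qname, QTYPE.A, rdata=A(MY_SERVER_IP), ttl=300))
        return reply

if __name__ == "__main__":
    # Port 5353 to avoid needing root; point the device's resolver at this host.
    DNSServer(OverrideResolver(), address="0.0.0.0", port=5353).start()
```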
> I support jail time for cases like this, but with mandatory physical consequences (like a daily beating with a stick).
I hear you, but I think corporate dissolution is the one case where no one can really expect a device to remain supported, so punishing it will do no good. Punishment only makes sense when you have an ongoing, viable business that drops support for no good reason (like the Spotify Car Thing).
IMHO, for dissolution, it makes more sense to require open-sourcing all device and server code if there's no entity willing to take on support.
For an ongoing business, there should be some onerous punishment if they decide to brick a connected device too early (e.g., before 5 or 7 years after the last one was sold), and if they decide to brick it later, they need to open-source the related code.
I have deployed a single SPA in my life, about 5-6 years ago. I did nothing special to preserve the browser history; it just worked back then. Imagine how much more polished things are now. My impression is that people who break history handling have to take special steps to do so; otherwise it just works out of the box with most frameworks/libs.
Sometimes the requirements are shit, and so are the results. Like asking for an instant redirect on an invalid page, which completely breaks the back button after submitting any form. And yet you still get requirements like this.
A similar thing happened to me last month. My Ubuntu server suddenly started showing a countdown, warning that my firewall was blocking Canonical's ‘essential data’ pings and threatened to revoke my apt privileges. I gave it a pass-through, but the next day, it upgraded itself to Ubuntu Galactic Cosmic and replaced all my bash scripts with PowerShell. Still getting used to the mandatory Snap packages for 'ls' and 'cat' now…
I'll try not to spark any debate and keep it brief—the article is aimed at people who consider themselves "power users." It might also be informative for those who don’t view themselves that way. If you learned what traceroute is from this article, sticking with this info for now is probably fine. But if you already know how it works, you can skip reading. The whole article "isn't real".
This is nothing new. A few years back, I implemented a very basic firewall rule: if I received a TCP packet with SYN=1 and ACK=0 to destination port 22, the source IP would get blacklisted for a day. But then I started getting complaints about certain sites and services not working. It turned out that every few days, I'd receive such packets from IPs like 8.8.8.8 or 1.1.1.1, as well as from Steam, Roblox, Microsoft, and all kinds of popular servers—Facebook, Instagram, and various chat services. Of course, these were all spoofed packets, which eventually led me to adjust my firewall rules to require a bit more validation.
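Roughly, the naive version of that rule looks like this (a sketch in Python/scapy with an in-memory blacklist, not my actual firewall config; the flaw it illustrates is that the source address is taken at face value):

```python
# Blacklist any source that sends a bare SYN to port 22, for one day.
import time
from scapy.all import sniff, IP, TCP

BLACKLIST: dict[str, float] = {}        # src IP -> expiry timestamp
BAN_SECONDS = 24 * 3600

def handle(pkt):
    if IP in pkt and TCP in pkt and pkt[TCP].dport == 22:
        flags = int(pkt[TCP].flags)
        if flags & 0x02 and not flags & 0x10:       # SYN set, ACK clear
            BLACKLIST[pkt[IP].src] = time.time() + BAN_SECONDS
            # The flaw: pkt[IP].src is whatever the sender put there,
            # so a spoofed SYN can get 8.8.8.8 or 1.1.1.1 banned.

sniff(filter="tcp dst port 22", prn=handle, store=False)
```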
So, I can assure you this is quite common. As a personal note, I know I’m a bit of an exception for operating multiple IP addresses, but I need the flexibility to send packets with any of my source addresses through any of my ISPs. That’s critical for me, and if an ISP filters based on source, it’s a deal-breaker—I’ll switch to a different ISP.
> As a personal note, I know I’m a bit of an exception for operating multiple IP addresses, but I need the flexibility to send packets with any of my source addresses through any of my ISPs. That’s critical for me, and if an ISP filters based on source, it’s a deal-breaker—I’ll switch to a different ISP.
If you actually have your own IP addresses, this is normal and expected. But if you're able to use ISP A's IP addresses through ISP B, or vice versa, that has always been a bug, and you are wrong to rely on it.
If you are doing the latter, this is firmly in the "re-enable spacebar heating" category, and I hope your ISPs fix their broken networks.
Okay, looks like I will reply to a few of the comments to clarify things.
I’ll give a concrete, real example.
I worked at a company that hosted some web assets on-prem in one of their branches. They had a 1 Gbps connection there. However, at HQ we had multiple 10G connections and a pretty good data center. So we moved the web VM to HQ but kept the assigned IP address (a public static IP from ISP-A) and routed it through a VPN to HQ. The server used our default gateway, so it sent responses carrying the ISP-A source IP out via ISP-B (10G).
That way, we utilized 10G outbound even though the inbound was limited to 1G. It was only for GET requests anyway. I know this wasn't the optimal setup, and we eventually changed the IP, but it seems like a valid use case.
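To make the mechanics concrete: which address a packet carries as its source and which ISP it leaves through are decided independently. Replies to inbound connections automatically use the address the request arrived on, while the egress path comes from the routing table; for outbound connections you can pin the source explicitly. A tiny sketch (all addresses are made-up documentation ranges, and the ISP-A address is assumed to still be configured on the host):

```python
import socket

# 198.51.100.10: the public IP from ISP-A, still configured on the VM after the
# move. The default route now points out via ISP-B, so that's the egress path.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("198.51.100.10", 0))       # pin the source address to the ISP-A IP
s.connect(("203.0.113.80", 80))    # the routing table still sends this out via ISP-B,
                                   # which only works if ISP-B doesn't filter "foreign"
                                   # source addresses (the whole point of this thread)
```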
Scenario 2: We had two connections from two different ISPs (our own ASN, our own /23 addresses). We wanted to load balance some traffic and sent half of our IPs through ISP-A and the other half through ISP-B. It worked fine, but when we tried to mix the balance a bit, we found an interesting glitch. We announced the first /24 to ISP-A and the second /24 to ISP-B, but ISP-A had RP filtering. So, we had to announce all the IPs to them.
The way the RP filter works, as you may guess, means we cannot prepend or anything. All traffic must come through them. If they see a better route for that prefix, they will filter it. For a few months, they refused to fix this, citing security. There’s no shame in security best practices, so I might as well name the ISP—Virgin Media.
Note that the connection with rp_filter is not a $20/month line. It was more like $5K+/month!! And we did not switch, only because of the lack of alternatives there. But otherwise, guess who loses the contract :)
For your second scenario, you should announce the /23 to both and each /24 to one of them. Usually you can also prepend your own AS; ISPs I've worked with could also prepend for you with select communities.
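The reason that layout works is longest-prefix matching: the more-specific /24 announcement steers inbound traffic toward the chosen ISP, while the covering /23 announced to both acts as a fallback path if one link goes down. Just to show the split (hypothetical prefix standing in for the PI /23):

```python
import ipaddress

block = ipaddress.ip_network("198.51.100.0/23")     # stand-in for the own /23
first, second = block.subnets(prefixlen_diff=1)     # the two /24s used for steering
print(block, "->", first, "and", second)            # 198.51.100.0/24 and 198.51.101.0/24
```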
I don't think your cases are good enough to allow anyone to spoof by default.
I said that we tried this.
They do not care about the announcement; they care about what ends up in their routing table. We are announcing it, but they see a better path and drop our traffic.
And they also said it should work that way: just announce it somehow and it will work. Well, yes but no. It does not work.
In your first scenario, any connections established through ISP-A's IP address would be routed back through the VPN connection they came in on. If that server were to establish its own connections to external resources, it would feasibly be able to use the 10G connection from ISP-B. It would not be able to dictate what source address was used for connections going out via ISP-B.
It could work the way OP described if they routed all outbound traffic via ISP-A regardless of source address, and ISP-A allowed spoofing. I think that's what they meant.
It is common practice for business subscribers (around the UK) to get a /29.
On the router we add a single /32 via the tunnel.
I think even the cheapest ~100-buck business plans from many ISPs come with a /28 or /29.
It is a complete waste, because we had like 10 offices with 3-5 people on laptops and NO servers. The common question from the ISPs is: do you need some IPs? When we answer no, they give us a /29 anyway.
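For scale, a /29 is a tiny allocation: 8 addresses total, of which 6 are usable hosts once the network and broadcast addresses are excluded. The standard-library arithmetic (documentation prefix, any /29 behaves the same):

```python
import ipaddress

net = ipaddress.ip_network("203.0.113.8/29")   # any /29; this one is a documentation range
print(net.num_addresses)                       # 8 addresses in total
print(len(list(net.hosts())))                  # 6 usable (network + broadcast excluded)
```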
> As a personal note, I know I’m a bit of an exception ... That’s critical for me, and if an ISP filters based on source, it’s a deal-breaker—I’ll switch to a different ISP.
"...and obviously, Pennywise, I must spoof ingress and egress..."
If it's your real IP, it's not spoofing, even if you send the packet through a different ISP than the one which gave you the IP. If you think about it: if you got an IP directly from ARIN you wouldn't have to send your packets through ARIN to make them legitimate.
Not really. Early IPv6 documentation kind of assumed that the vast address space would lead towards hierarchical addressing, and that a multi-homed user would use addresses assigned by all of their ISPs. But at least in my experience, that doesn't really pan out: if you have router advertisements for two different ISP prefixes, automatic configuration on common OSes (Windows, Linux, FreeBSD) will often end up sending traffic with ISP A's source address through the router from ISP B, which doesn't work well, especially if either or both ISPs run prefix filters. There are probably ways to make that style of multihoming work, but it's not fun.
Turns out, most multihomed IPv6 users need provider-independent addresses, just like with IPv4. And then you need to make sure all your ISPs allow you to use all your prefixes. On the plus side, it's much more likely you'll get an IPv6 allocation that's contiguous and that you won't outgrow, so you probably only need one v6 prefix, and you may not need to change it as often as with v4.
The advantage of IPv6 is that hosts can have multiple addresses. This means a good way to organize a network is to have machines use the local provider's addresses to access the Internet.
Then use ULA addresses for the internal network; those get routed over tunnels and VPNs. That separates Internet access from the internal network, and means you don't need your own routable address space.
The only people who would need their own address space are those running data centers and routers.
Yeah, there are ways to make it work, for example by specifying source addresses or nets on the routes. In OpenWrt it's a checkbox to tick on the upstream interfaces.
I was a paying customer for a long time, and their spam campaigns were the reason for my cancellation last year. Now I am a happy Microsoft Visual Studio Pro subscriber again. (happy in quotes btw)
It's a shame. I like the tools generally, but so much of it is just about bombarding you with ads now. Paying for a good tool is, to me, meant to be the way to avoid that; this really soured me on it all.
Just in case someone from American Lease is reading this, I’d be willing to migrate their servers for less than a million.
Jokes aside, after reading the comments here, I doubt anyone with technical knowledge would believe this. Even with certificate pinning, you can simply dump the firmware as a raw binary, replace the certificate with your own, and upload it back to the car.
And even if the source code is lost, you can still sniff the traffic and implement an API. I did this for my previous employer, who had a collection of expensive, locked devices. It took me about a week, without any prior knowledge or experience. Imagine what someone with more experience could do...
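For a sense of what "sniff the traffic and implement an API" means in practice: once you've captured a few request/response pairs, reimplementing the client side is mostly replaying them from your own code. A hedged sketch (the host, paths, headers, and fields are hypothetical stand-ins for whatever the capture shows, not the real service):

```python
import requests

# Hypothetical values reconstructed from a packet capture of the device talking
# to its backend.
BASE_URL = "https://api.example-fleet.invalid"
HEADERS = {
    "Authorization": "Bearer <token observed in the capture>",
    "User-Agent": "DeviceClient/1.4",      # some backends check this too
}

def get_vehicle_status(vehicle_id: str) -> dict:
    """Replays the status request exactly as the device sends it."""
    resp = requests.get(f"{BASE_URL}/v1/vehicles/{vehicle_id}/status",
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

def unlock(vehicle_id: str) -> None:
    """Replays the observed unlock command (hypothetical endpoint and body)."""
    resp = requests.post(f"{BASE_URL}/v1/vehicles/{vehicle_id}/unlock",
                         headers=HEADERS, json={"source": "mobile"}, timeout=10)
    resp.raise_for_status()
```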
> Even with certificate pinning, you can simply dump the firmware as a raw binary, replace the certificate with your own, and upload it back to the car.
That's assuming they have access to the private key used to sign the firmware though...
Most implementations of this sort of thing don't, in practice, verify as hard as you might think.
A lot of it seems to have to do with wanting to be able to replace certs and have reasonable expiration times, but not really understanding how to do that (I don't mean it's not possible; I mean the manufacturers seem to not really understand how to do it effectively).
As an example, the Siemens CNC controller on my metal mill is totally signed. It has an FPGA with a secure element producing verification signatures to double-check that cert sigs haven't been modified. Every single filesystem with binaries is a read-only cramfs image signed with a secp521 ECC key. All read-write filesystems are mounted noexec, nosuid, etc. etc. etc.
The initial CA key is baked into secure hardware.
However, in the end, they only verify that the CA and signing certs have the right names and properties (various OEM-specific fields, etc.), because the certs have 3-5 year expiration dates and these things are not connected to the internet or even updated often. So they accept expired certs for the signatures, and they also accept any root cert + signing cert that looks the same as the current ones.
So you can replace the CA key and signing keys with something that looks exactly the same as their current ones, re-sign everything, and it works fine.
A whole lot of effort that can be defeated pretty quickly.
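In rough terms, the check amounts to something like the sketch below (Python with the cryptography package; the names, fields, and curve are placeholders for whatever a given vendor uses, not the actual controller code). The point is what the function never looks at: no pinned CA key or fingerprint, and no validity-date check.

```python
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def naive_verify(ca_pem: bytes, signer_pem: bytes, payload: bytes, signature: bytes) -> bool:
    ca = x509.load_pem_x509_certificate(ca_pem)
    signer = x509.load_pem_x509_certificate(signer_pem)

    # "Looks right": only names/fields are compared, never a pinned public key.
    if ca.subject.rfc4514_string() != "CN=Vendor Root CA":      # hypothetical name
        return False
    if signer.issuer != ca.subject:
        return False

    try:
        # The signer cert must be signed by *some* cert carrying that CA name...
        ca.public_key().verify(signer.signature, signer.tbs_certificate_bytes,
                               ec.ECDSA(signer.signature_hash_algorithm))
        # ...and the payload must be signed by the signer's key.
        signer.public_key().verify(signature, payload, ec.ECDSA(hashes.SHA512()))
    except InvalidSignature:
        return False

    # Missing on purpose (mirroring the description above): no check of
    # not_valid_before / not_valid_after, so expired certs pass, and an
    # attacker-generated CA with the same subject passes too.
    return True
```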
I would be surprised if the cars were not similar - they look really secure, but in the end they made tradeoffs that defeat the system.