
How do these tens of thousands of Starlink terminals get smuggled into Iran?

Iran has a huge black market. Everything from refrigerators to small goods gets smuggled into Iran, mostly over the mountains on the Iraq border (also across the Turkish border and from the Gulf countries). The government is not picky about smuggling, since Iran is under heavy sanctions and it's hard for the government to provide USD to legitimate traders through official channels.

Iran has a thriving black market through its borders

Trucks I assume

This is a 21st-century equivalent of leaving short words ("of", "the", "in") out of telegrams because telegraph operators charged by the word. That caused plenty of problems in comprehension… this is probably much worse because it's being applied to extremely complex and highly structured messages.

It seems like a short-sighted solution to a problem that is either transient or negligible in the long run. "Make code nearly unreadable to deal with inefficient tokenization and/or a weird cost model for LLMs."

I strongly question the idea that code can be effectively audited by humans if it can't be read by humans.


> I expect it won't be long until someone deploys the first proxy service that handles the initial CONNECT payload in the kernel before offloading packet forwarding to an eBPF script that will proxy packets between hosts at layer 3, making this fingerprinting technique obsolete.

https://github.com/sshuttle/sshuttle basically works like this. I've used it for many years. I don't think it'll be possible to detect using this technique.
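Basic usage is something like this (forwarding all IPv4 traffic through an SSH host you control; the remote end just needs SSH access and Python, IIRC):

    sshuttle -r user@your-ssh-host 0.0.0.0/0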


sshuttle as described sounds like a normal CONNECT proxy, which this is able to detect: https://sshuttle.readthedocs.io/en/stable/how-it-works.html

It's similar to a CONNECT or SOCKS proxy, except that it uses SSH as the transport layer instead of TCP, and it does so transparently, without applications having to be written to use the proxy. But if you're just converting TCP packets into a data stream, sending them somewhere else, and converting them back into TCP packets there, that's exactly what this TCP RTT strategy is fundamentally meant to detect. I suspect the TCP-only RTT check works because of the delayed-ACK behaviour of most operating systems, and that will still happen with sshuttle unless you explicitly use quick-ack. Also, quick-ack only works around the TCP-RTT issue, not the differences in timing between TCP and TLS or other higher-level protocols; I think if you test for those other RTT differences, quick-ack would actually make them more obvious.
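For reference, on Linux "quick-ack" means setting the TCP_QUICKACK socket option, which isn't sticky and has to be re-armed around reads. A minimal sketch of that (a plain-HTTP fetch, not anything sshuttle itself does):

    import socket

    # Linux-only: disable delayed ACKs so the kernel ACKs segments immediately.
    # TCP_QUICKACK is not sticky, so it needs to be re-set around reads.
    sock = socket.create_connection(("example.com", 80))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)  # re-arm
    sock.close()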

On the server side, sshuttle just uses normal TCP sockets and nothing magic (https://github.com/sshuttle/sshuttle/blob/master/sshuttle/ss...)

Also, if you have an sshuttle proxy that this site cannot detect, it may be due to how close the server is to the client. I have a CONNECT-based proxy that it is able to detect only around 5% of the time (maybe only that often due to a bug), but this is because there is probably less than 10ms latency between the proxy and the client and probably around 50ms latency between the proxy and the server, for some reason (?).


Came here to ask the same thing. Why do I _care_ if connections to my server come from a TCP proxy? Particularly when a VPN is _not_ observable in a similar way?

Is there some class of bad actors who extensively use TCP proxies and not only _don't_ use VPNs, but would incur large costs in switching to them?


Web scrapers maybe aren't "bad actors", but many sites don't want them. They'll use tons of TCP proxies which route them through a rotating pool of end-user devices (mobiles, routers, etc...). It's not really possible to block these IPs, as you'd also be blocking legitimate customers, so other ways to detect and block are required.


Can't/won't these scrapers just switch to using VPNs or sshuttle or basically anything else that doesn't leak timing info about termination of TCP vs HTTP?


Not really. You can have 100,000 IPs from proxies or use VPNs and have only 5 egress IPs.

Anybody who wants to stop the scraper could get browser fingerprints, cross-reference similar ones with those IPs, and quite safely ban them, as it's highly likely they're not a legitimate customer.

It's a lot harder to do that for the 100k IPs, because those IPs will also have legitimate customer traffic on them, and it's a lot more likely the browser fingerprint could just be legitimate.
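Roughly, that cross-referencing could look something like this (hypothetical request-log fields and threshold, not any particular vendor's logic):

    from collections import defaultdict

    # Hypothetical request log: an iterable of (browser_fingerprint, client_ip)
    # pairs; known_egress_ips is the small set of VPN egress IPs (as a set).
    def suspicious_fingerprints(requests, known_egress_ips, min_requests=100):
        ips_by_fp = defaultdict(set)
        counts = defaultdict(int)
        for fingerprint, ip in requests:
            ips_by_fp[fingerprint].add(ip)
            counts[fingerprint] += 1
        # Flag fingerprints that are high-volume and only ever seen on the known
        # egress IPs; fingerprints mixed in with real residential traffic won't match.
        return {
            fp for fp, ips in ips_by_fp.items()
            if counts[fp] >= min_requests and ips <= known_egress_ips
        }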

The cost of false positives (blocking real people) is usually higher than the cost of just allowing the scrapers, and the incentives of a lot of sites aren't aligned with stopping scrapers anyway. Think e-commerce: do they _really_ care if the product is being sold to scalpers or real customers? If anything, that behaviour can raise perception of their brand, increase demand, increase prices.

This tool should have fewer false positives than most, so maybe it will see more adoption than others (TCP fingerprinting, for example), but I don't think this is going to affect anyone doing scraping seriously/at scale.


> Not really. You can have 100,000 IPs from proxies or use VPNs and have only 5 egress IPs.

Why…?

If I can run a proxy exit node on 100k residential IPs, why can't I run a VPN server on 100k residential IPs?

There is no additional technical complexity or resource consumption from the VPN server compared to the proxy server.


I don't mean that you can't do it, just that there is no company offering it, so right now those are the only two options.

It's something we're experimenting with currently. The other commenter is right about Apple products, but on Android, desktop, etc... it's pretty easy.


For phones it's a bit difficult, because I don't think you can egress IP traffic without root or jailbreak on iPhone/iOS. But I guess on desktop this should be possible.

Dare I ask how much bandwidth it is consuming?


It's around 700 MB today so far.


Today I learned that Matt Godbolt is British!


“AI agents: They're just like us”


I surely don’t have $500B lying around


Neither does the LLM agent. It's the humans in charge that control the money.


> Are these major issues with cloud/SaaS tools becoming more common, or is it just that they get a lot more coverage now?

I think that "more coverage" is part of it, but also "more centralization." More and more of the web is centralized around a tiny number of cloud providers, because it's just extremely time-intensive and cost-prohibitive for all but the largest and most specialized companies to run their own datacenters and servers.

Three specific examples: Netflix and Dropbox do run their own datacenters and servers; Strava runs on AWS.

> If it's becoming more common, what are the reasons? I can think of a few, but I don't know the answer, so if anyone in-the-know has insight I'd appreciate it.

I worked at AWS from 2020-2024, and saw several of these outages so I guess I'm "in the know."

My somewhat-cynical take is that a lot of these services have grown enormously in complexity, far outstripping the ability of their staff to understand them or maintain them:

- The OG developers of most of these cloud services have moved on. Knowledge transfer within AWS is generally very poor, because it's not incentivized, and has gotten worse due to remote work and geographic dispersion of service teams.

- Managers at AWS are heavily incentivized to develop "new features" and not to improve the reliability, or even security, of their existing offerings. (I discovered numerous security vulnerabilities in the very-well-known service that I worked for, and was regularly punished-rather-than-rewarded for trying to get attention and resources on this. It was a big part of what drove me to leave Amazon. I'm still sitting on a big pile of zero-day vulnerabilities in ______ and ______.)

- Cloud services in most of the world are basically a 3-way oligopoly between AWS, Microsoft/Azure, and Google. The costs of switching from one provider to another are often ENORMOUS due to a zillion fiddly little differences and behavior quirks ("bugs"). It's not apparent to laypeople — or even to me — that any of these providers are much more or less reliable than the others.


Got one for my wife here in Canada recently, where it's on a similarly good sale.

It's a nicely put together piece of _hardware_ and firmware, way way better than the garbage Dell laptops I have to use for work, which are heavy and hot and regularly fail to manage basic things like customizing sleep/wake behavior…

… but I personally am completely unwilling to use a Mac unless I'm getting paid and forced to.

I hate MacOS. I hate the UI, I hate the fiddly little ways that it hides information about real file paths and makes it unnecessarily difficult to uncover the real ones. I hate hate hate all the broken stuck-in-the-80s non-GNU CLI tools, and the kludged-together stupidness of the networking stack compared to Linux.

Windows 11 is arguably worse than MacOS in many of these ways, but Linux with a Gnome or Cinnamon or XFCE desktop is far far better.

I hate the lack of full-size USB ports and HDMI. I don't care if it makes the laptop 3 mm thicker. I want them, in particular to be able to plug in my Logitech wireless mouse adapter and all my 10-15-year-old USB devices which still work fine.

I hate the keyboard and trackpad. I want a pointing stick and a trackpad with physical buttons. I want page up/down buttons and separate delete/backspace.


It's fun indeed!

The theme seems to be something like "college students at house parties"… reminds me very much of my friends and the photos we took at around this age.

