They already know people who are trying to access Signal without a proxy, so I don't think this would make a significant difference. Also note, from the Signal blog post above:
----
The Signal client establishes a normal TLS connection with the proxy, and the proxy simply forwards any bytes it receives to the actual Signal service. Any non-Signal traffic is blocked. Additionally, the Signal client still negotiates its standard TLS connection with the Signal endpoints through the tunnel.
This means that in addition to the end-to-end encryption that protects everything in Signal, all traffic remains opaque to the proxy operator.
----
It doesn't seem to be the same situation as with Tor exit nodes, where your node is automatically part of the network. Here, it looks like people have to actively use your proxy; anyone who runs one is told to share its URL with their friends.
And that raises the difficulty of making your proxy known to legitimately interested people if your Iranian social presence is non-existent. I ran a Tor node (not an exit node) in Germany back in the day, precisely to help Iranian people.
A regime that has survived 40 years in the face of constant adversaries, most of that time under sanctions, should be competent enough at internal security.
And the people who are protesting and hurting right now are not the most tech-savvy, so expect a lot of naivete about opsec. I doubt the majority of them even know Signal exists.
Looking at whois history sites, it looks like the domain was owned by Tom Christiansen, aka tchrist, who co-wrote Programming Perl, Learning Perl, and the Perl Cookbook.
The record wasn't supposed to expire until 2029, so I'm not sure how the squatters got hold of the domain.
There's always the chance someone social-engineered their way past the registrar's access control, or that they got some kind of access to the registrar's systems. Or the domain owner simply didn't read an email properly and clicked the wrong link.
There's too little information to draw conclusions at this point.
Same thing. Your site has only one worker script, and each request is therefore processed by at most one Worker. You can of course merge many independent pieces of logic into one script -- it's code, after all.
Enterprise customers are allowed to have multiple scripts mapped to different URL routes. This is mainly so that different teams owning different parts of the site don't step on each other. However, generally if you stuff your logic into one script rather than multiple, it will perform better, since that one script is more likely to be "hot" when needed.
Note that Enterprise customers can have multiple scripts per domain, allowing you to specify a particular script per route (or set of routes). Additional matching logic (e.g. headers, cookies, response codes) can then be done within the Worker itself.
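As a rough illustration of that in-Worker matching, here's a sketch with made-up routes, a hypothetical `beta=1` cookie, and string-returning handler stubs (a real Worker would build `Response` objects):

```javascript
// Hypothetical handlers -- stand-ins for logic that might otherwise live in
// separate per-route scripts. Plain strings keep the sketch short; a real
// Worker would return Response objects.
const handleApi = () => "api";
const handleBeta = () => "beta";
const handleDefault = () => "default";

// Dispatch inside a single Worker: path matching (what per-route scripts
// give you) plus header/cookie matching (what routes alone cannot do).
function route(request) {
  const { pathname } = new URL(request.url);
  if (pathname.startsWith("/api/")) return handleApi(request);
  if ((request.headers.get("Cookie") || "").includes("beta=1")) {
    return handleBeta(request);
  }
  return handleDefault(request);
}

// In an actual Worker you'd wire this up to the fetch event:
// addEventListener("fetch", (e) => e.respondWith(route(e.request)));
```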
Hopefully that limitation can be relaxed over time. Having to stuff all my logic in one big script sounds a bit annoying. At minimum having access to a second worker script route would be welcome for testing/development purposes, so one doesn't muck up a working production script.
> Having to stuff all my logic in one big script sounds a bit annoying.
Keep in mind that you can write your code as a bunch of modules and then use a tool like webpack to bundle it into one script, before you upload it to Cloudflare.
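For example, a minimal webpack setup along those lines might look like this; the entry path and output filename are assumptions for illustration:

```javascript
// webpack.config.js -- bundle many modules into the single script you upload.
module.exports = {
  entry: "./src/worker.js",       // your entry module, importing the rest
  target: "webworker",            // Workers run in a Service Worker-like env
  mode: "production",
  output: {
    filename: "worker.bundle.js", // the one script Cloudflare sees
    path: __dirname + "/dist",
  },
};
```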
> for testing/development purposes
I agree with this; we definitely plan to add better ways to test script changes. Currently there's the online preview shown next to the code editor, but it's true that there are limits to what you can test with it.