I don't think this is worth it unless you are setting up your own CDN or similar. In the article, they exchange 1 to 4 stat calls for:
- A more complicated nginx configuration. This is no light matter. You can see in the comments that even the author got bugs in their first try. For instance, introducing an HSTS header now means you have to remember to do it in all those locations.
- Running a few regexes per request. This is probably still significantly cheaper than the stat calls, but I can't tell by how much (and the author hasn't checked either).
- Returning the default 404 page instead of the CMS's for any URL in the defined "static prefixes". This is actually the biggest change, both in user-visible behavior and in performance (particularly if a crazy crawler starts checking non-existent URLs in bulk or similar). The article doesn't even mention this.
The performance gains for regular accesses are purely speculative because the author didn't make any effort to try and quantify them. If somebody has quantified the gains I'd love to hear about it though.
I agree. But on that final point, I have to say I hate setups where bots hitting thousands of non-existent addresses have every one of them going to a dynamic backend to produce a 404. A while back I made a Rails setup that dumped routes to an nginx map of valid first-level paths, but I haven't seen anyone else do that sort of thing.
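For the curious, a minimal sketch of what such a route dump could look like (hypothetical names and routes, not the commenter's actual setup): collect the distinct first-level path segments from the app's route table and emit an nginx `map` that flags known prefixes, so a `location` block can 404 everything else at the edge.

```python
# Hypothetical sketch: generate an nginx "map" of valid first-level
# path segments from an app's route list, so unknown paths can be
# rejected at the edge instead of hitting the dynamic backend.

ROUTES = ["/users/:id", "/posts/:id/comments", "/about", "/api/v1/things"]

def first_level_prefixes(routes):
    # Distinct first path segments: "/users", "/posts", "/about", "/api".
    return sorted({"/" + r.strip("/").split("/")[0] for r in routes})

def nginx_map(prefixes):
    # Emit: map $uri $known_route { default 0; "~^/users(/|$)" 1; ... }
    lines = ["map $uri $known_route {", "    default 0;"]
    for p in prefixes:
        # "~^/users(/|$)" matches /users itself and anything under it.
        lines.append(f'    "~^{p}(/|$)" 1;')
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(nginx_map(first_level_prefixes(ROUTES)))
```

nginx would then consult `$known_route` in a `location` block and return 404 directly when it is 0, never touching the backend.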
I've been thinking about that exact problem and solution with the map module. On the off chance you see this, do you happen to have your solution published somewhere?
Apache's .htaccess is much worse performance-wise because it checked (and processed, if it existed) every .htaccess file in every folder along the path. That is, you opened example.com/some/thing/interesting and Apache would check (and possibly process) /docroot/.htaccess, /docroot/some/.htaccess, /docroot/some/thing/.htaccess and /docroot/some/thing/interesting/.htaccess.
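The lookup cascade described above can be sketched in a few lines (illustrative only; paths are the ones from the example):

```python
# Enumerate every .htaccess file Apache (with AllowOverride enabled)
# would have to stat for a given request path: one per directory,
# from the docroot down to the deepest path segment.

from pathlib import PurePosixPath

def htaccess_candidates(docroot, request_path):
    """List the .htaccess paths checked for this request, in order."""
    parts = PurePosixPath(request_path.strip("/")).parts
    dirs = [PurePosixPath(docroot)]
    for part in parts:
        dirs.append(dirs[-1] / part)
    return [str(d / ".htaccess") for d in dirs]

# Four stat calls for a three-segment URL.
print(htaccess_candidates("/docroot", "/some/thing/interesting"))
```

Note the count grows linearly with URL depth, which is why deep trees made this hurt.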
Separating the API and the "front" into different domains does run into CORS issues though. I find it much nicer to reserve myapp.com/api for the API and route that accordingly. Also, you avoid having to juggle an "API_URL" env definition across your different envs (you can just call /api/whatever, no matter which env you are in).
Was that really so bad in terms of performance? Surely the .htaccess didn't exist there most of the time, and even if it did, it would have been cached by the kernel, so each lookup by an Apache process wouldn't be hitting the disk directly to check for file existence on every HTTP request it processes. Or maybe I am mistaken about that.
a) If you didn't use it (the less bad case you are considering) then why pay for the stat syscalls at every request?
b) If you did use it, apache was reparsing/reprocessing the (at least one) .htaccess file on every request. You can see how the real impact here was significantly worse than a cached stat syscall.
Most people were using it, hence the bad rep. Also, this was at a time when it was more common to have webservers reading from NFS or other networked filesystems. Stat calls then involve the network, and you can see how even the "mild" case could wreak havoc in some setups.
Both react-query (that is tanstack query now) [2] and rtk-query [3] include extensive configurability regarding their caching behaviors. This includes the ability to turn off caching entirely. [4,5]
Your story sounds like a usage error, not a library issue.
redux-query seems to be a popular library for dealing with API calls in React.
I’m having a real hard time being polite right now. Do we have an education problem? Where is this person getting the information that makes them think redux-query is popular?
Yeah, I don't know where the parent comment got this from. Every few weeks I seem to see these low-effort posts that basically boil down to "javascript bad", but they get a lot of upvotes. And when you read into it, you see the author often has a poor grasp of JS, or its ecosystem, and has set up some unholy abstraction, assuming that's how everyone does it.
Didn't work for me just now. There have been ways to do this in the past, but the non-nightly version wipes the changes on each restart unless they are already exposed in the settings, I think.
Also, they would school them on real-world problems in the process:
- You can't wait until you receive the entire body to compute a signature and only then validate the sender as your first line of defense. It is just too expensive and opens you up to DDoS attacks. People use IP reputation as a first line of defense because it is cheap, not because it is good.
- You cannot enforce people's behavior through RFCs. I can assure you that the random person at the next desk will not care about your "this is a top-posting thread" header and will bottom-post there, even if they have to manually copy/paste things around.
- Likewise, auto-generated plain-text versions of HTML (or other rich-text formats) are no better than what screen readers can achieve. Most people won't bother writing that alternate version, meaning the obligatory alt text is now less useful than when it was optional and only people who cared included it.
- Your largest client may not update their e-mail infrastructure to comply with the latest standards. If that happens, you don't tell them to update or otherwise you won't be answering them because their e-mails go to spam. You do whatever is necessary to ensure that their e-mails don't go to spam. Business always comes first.
1. Could a future protocol require an immediate initial message (a “hello”) stating exactly how much content will be sent, and until the “hello” is sent, it’s limited to, say, 128KB before the connection is immediately terminated? (And of course, if the content exceeds the declaration, termination and immediate IP temporary ban, safe to do as this is an obvious violation of a new spec?)
2. The goal is to make it easier for the email client, which by itself will encourage good behavior. There’s also no requirement for the messages to all be in one massive blob.
3. The goal is that it would be automatically created by the client. For personal emails, this is easy. For enhanced HTML emails, that is where the requirement comes in. Email providers can come up with their own ways of enforcement from there (e.g. “if it’s only one sentence, you obviously didn’t do it”), though I get your point and that would become a messy unofficial spec again.
4. Could a future email system have versioning, allowing the server to clearly communicate (“Hello, I implement MX2 v3.1.”)? In addition, a business can obviously configure its own mailboxes so that original-format email alerts do not go to Junk - but it knows it had better get on it, or its messages to clients might go to Junk.
SMTP already has the BDAT command where the size is sent first, and arbitrary bytes can be sent (unlike DATA).
SMTP already has versioning through extensions.
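As a rough illustration of the BDAT point (condensed from the CHUNKING extension, RFC 3030; hostnames are placeholders), the client declares each chunk's size before sending it:

```
C: EHLO client.example.org
S: 250-server.example.com
S: 250 CHUNKING
C: MAIL FROM:<sender@example.org>
S: 250 OK
C: RCPT TO:<rcpt@example.com>
S: 250 OK
C: BDAT 1000
C: (first 1000 octets of the message)
S: 250 1000 octets received
C: BDAT 86 LAST
C: (final 86 octets)
S: 250 Message OK
```

So the "declare how much you will send" handshake already exists; the open question is enforcement, not protocol support.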
If you're banning an IP for exceeding a processing resource limit please keep the ban short. Presumably you can afford to process the first 128KB of one bad message per six hours, for instance. There should be no need to make a month-long or permanent ban, and these just hurt interoperability if the sender realizes their problem and fixes it, or if the address is reallocated.
Trying to limit data between the hello and the email data is futile, since the attacker can just flood you with random packets no matter whether you told them to stop (closed the connection) or not. You can only limit things you have control over, mostly your own memory usage, and how much data is accepted into more expensive processing stages.
> 128KB of one bad message per six hours, for instance. There should be no need to make a month-long or permanent ban
As someone who has seen actual brute-force attempts: most bots abandon their attempts after an hour or two. Resources are cheap, but even for spammers (who have almost unlimited resources), futile attempts are costly.
> I can assure you that random guy next desk will not care about your "this is a top-posting-thread" header and bottom post there.
We should move away from having a single mutable body for email. It should be a series of immutable messages, each referencing the message it is replying to. Each message can contain a hash signed by the private key of the domain that wrote it. Then when you write your message, it just gets appended to this chain.
How it is shown is up to the email client so that it can be done in the best way for the user.
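A toy sketch of that chain idea (all names hypothetical; an HMAC with a per-domain secret stands in for the DKIM-style public-key signature a real mail domain would apply):

```python
# Append-only reply chain: each message embeds the hash of its parent,
# and the writing domain "signs" its own message hash. The HMAC secret
# here is a stand-in for a real per-domain private key.

import hashlib
import hmac
import json

DOMAIN_KEYS = {"example.com": b"example.com-secret"}  # hypothetical key store

def make_message(domain, body, parent_hash=None):
    payload = json.dumps(
        {"domain": domain, "body": body, "parent": parent_hash},
        sort_keys=True,
    ).encode()
    digest = hashlib.sha256(payload).hexdigest()
    sig = hmac.new(DOMAIN_KEYS[domain], digest.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "hash": digest, "sig": sig}

def verify(msg, domain):
    # Recompute the hash and check the domain's signature over it.
    ok_hash = hashlib.sha256(msg["payload"]).hexdigest() == msg["hash"]
    expected = hmac.new(DOMAIN_KEYS[domain], msg["hash"].encode(), hashlib.sha256).hexdigest()
    return ok_hash and hmac.compare_digest(expected, msg["sig"])

root = make_message("example.com", "original message")
reply = make_message("example.com", "a reply", parent_hash=root["hash"])
assert verify(reply, "example.com")
```

Since each message is immutable and carries its parent's hash, a client can render the chain however it likes without anyone being able to silently rewrite earlier messages.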
What you’re describing is already possible with email as it is, using the In-Reply-To header or whatever its name was. No need for cryptographic signatures. The only issue is that common mail clients still automatically quote the whole message being replied to for no good reason. It should work like it used to on phpBB forums: no quote by default, quote the selected part if text is selected.
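The existing mechanism is easy to demonstrate with the stdlib (addresses and IDs are made up): a reply carries In-Reply-To and References headers pointing at the parent's Message-ID, with no quoted body needed at all.

```python
# Threading via headers, not quoted text: the reply references its
# parent's Message-ID so clients can reconstruct the conversation.

from email.message import EmailMessage

original = EmailMessage()
original["Message-ID"] = "<abc123@example.com>"  # hypothetical ID
original["Subject"] = "Lunch?"
original.set_content("Anyone up for lunch?")

reply = EmailMessage()
reply["Subject"] = "Re: " + original["Subject"]
reply["In-Reply-To"] = original["Message-ID"]
reply["References"] = original["Message-ID"]
reply.set_content("Sure, noon works.")  # no quote of the parent body

print(reply["In-Reply-To"])
```

Clients that thread properly (mutt, Gmail's conversation view, etc.) already build the tree from these headers.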
> The only issue is that common mail clients still automatically quote the whole message being replied to for no good reason.
Here is a good reason: In-Reply-To is a reference, not content. The recipient(s) of your message might not have that email.
Also, including the quote is just a default. The sender can edit it, splice responses into it, and remove irrelevant parts of it. Admittedly, quoting norms are in shambles for various reasons though.
> Each message can contain a hash signed by the private key for the domain that wrote it.
Me being able to prove that I wrote something is good. Other people being able to prove that I wrote something… it's good under many circumstances, but not in general.
We do it like Hacker News: it's just another message, with > indicators. Globally, inline replies are (a) rare and (b) often used with prank intent (i.e. you can make it look like you're replying to something they didn't say).
Your post advocates a (x) technical ( ) legislative ( ) market-based ( ) vigilante approach to fighting spam. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)
( ) Spammers can easily use it to harvest email addresses
(x) Mailing lists and other legitimate email uses would be affected
( ) No one will be able to find the guy or collect the money
(x) It is defenseless against brute force attacks
( ) It will stop spam for two weeks and then we'll be stuck with it
( ) Users of email will not put up with it
( ) Microsoft will not put up with it
( ) The police will not put up with it
( ) Requires too much cooperation from spammers
( ) Requires immediate total cooperation from everybody at once
(x) Many email users cannot afford to lose business or alienate potential employers
( ) Spammers don't care about invalid addresses in their lists
( ) Anyone could anonymously destroy anyone else's career or business
Specifically, your plan fails to account for
( ) Laws expressly prohibiting it
( ) Lack of centrally controlling authority for email
( ) Open relays in foreign countries
( ) Ease of searching tiny alphanumeric address space of all email addresses
( ) Asshats
( ) Jurisdictional problems
( ) Unpopularity of weird new taxes
( ) Public reluctance to accept weird new forms of money
(x) Huge existing software investment in SMTP
(x) Susceptibility of protocols other than SMTP to attack
( ) Willingness of users to install OS patches received by email
(x) Armies of worm riddled broadband-connected Windows boxes
( ) Eternal arms race involved in all filtering approaches
( ) Extreme profitability of spam
( ) Joe jobs and/or identity theft
( ) Technically illiterate politicians
( ) Extreme stupidity on the part of people who do business with spammers
( ) Dishonesty on the part of spammers themselves
(x) Bandwidth costs that are unaffected by client filtering
(x) Outlook
and the following philosophical objections may also apply:
( ) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
( ) Any scheme based on opt-out is unacceptable
( ) SMTP headers should not be the subject of legislation
( ) Blacklists suck
( ) Whitelists suck
( ) We should be able to talk about Viagra without being censored
( ) Countermeasures should not involve wire fraud or credit card fraud
( ) Countermeasures should not involve sabotage of public networks
(x) Countermeasures must work if phased in gradually
( ) Sending email should be free
( ) Why should we have to trust you and your servers?
( ) Incompatibility with open source or open source licenses
( ) Feel-good measures do nothing to solve the problem
( ) Temporary/one-time email addresses are cumbersome
( ) I don't want the government reading my email
( ) Killing them that way is not slow and painful enough
Furthermore, this is what I think about you:
(x) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, assh0le! I'm going to find out where you live and burn your house down!