fatchan's comments | Hacker News

I always use a custom server (even Express will work) with Next.js, because I found the middleware and edge stuff a load of overcomplicated BS. The client side works like a regular React app, SSR is easy to control for any pages whose initial props are just populated from the server side, and the whole system is simple to reason about. There are other frameworks out there that do this now, but I'm comfortable with this setup and it just works, so there's no reason to change.
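For reference, the basic pattern looks something like this (a minimal sketch of the standard Next.js custom-server setup; the port and routes are placeholders, not anything specific to my projects):

  // server.js - a plain Express server that hands requests to Next.js
  const express = require('express');
  const next = require('next');

  const dev = process.env.NODE_ENV !== 'production';
  const app = next({ dev });
  const handle = app.getRequestHandler();

  app.prepare().then(() => {
    const server = express();

    // Custom routes, middleware, API endpoints, etc. go here,
    // exactly like a normal Express app.

    // Everything else falls through to Next.js (pages, SSR, static assets).
    server.all('*', (req, res) => handle(req, res));

    server.listen(3000, () => console.log('Listening on port 3000'));
  });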


No. First of all, just check for `navigator.brave`. If it exists, it's Brave. When I ran a .onion site, I added a JavaScript check: if navigator.brave was present, it redirected users to a specific page saying:

> Hey, there's something funny about your Tor Browser. When browsing Tor hidden services (.onion), you should be using Tor Browser. Are you using an outdated version, or perhaps something else entirely?

Brave is Chromium-based. Tor Browser is Firefox-based, with a bunch of tweaks, different default settings, and a different fingerprint. Also, when browsing Tor, you should disable JavaScript, since it's a source of many vulnerabilities.
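The check itself was roughly this (a minimal sketch; the redirect path is just an example, the only real detail is that navigator.brave exists only in Brave):

  // Runs on page load. Brave exposes a navigator.brave object,
  // which Tor Browser (being Firefox-based) never does.
  if (typeof navigator !== 'undefined' && navigator.brave) {
    // Send the visitor to the warning page explaining they should
    // be using Tor Browser for .onion sites.
    window.location.replace('/wrong-browser'); // example path
  }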


Hey, funny to see my project mentioned here also. Yes, similar in concept.

Some differences:

- Uses HAProxy (duh)

- Proof of work can be either sha256 or argon2

- Optional recaptcha/hcaptcha in addition to the proof of work

- Includes a script for your page that will re-solve the challenge in the background before the cookie expires

There's also a control panel, dns server, etc. I kinda built my own everything because I refused to use bunny/cloudflare/whatever.

One thing I will say though, is that proof-of-work alone isn't a solution for DDoS mitigation and bot protection! I've seen attackers use a mass of proxies and headless browsers to solve the challenge, or even write code to extract and solve the challenge directly (https://github.com/lizthegrey/tor-fetcher). To adequately protect against more targeted attacks, you need additional ACLs and heuristics, browser fingerprinting, TLS fingerprinting, IP reputation, etc. I do offer the whole thing set up as a commercial service, but will refrain from too much shilling.

It's fun, and I love seeing similar software help fight the horde of AI scrapers :^)


>One thing I will say though, is that proof-of-work alone isn't a solution for ddos mitigation and bot protection! I've seen attackers using a mass of proxies and headless browsers to solve the challenge

If you make the challenge sufficiently difficult, it should mitigate this, no?

>or even writing code to extract and solve the challenge directly (https://github.com/lizthegrey/tor-fetcher).

Similarly, if the challenge is difficult, it wouldn't matter where it's solved.

I'm not sure why one would use Anubis over haproxy-protection.


Offering more granular timeouts for other stages of the request would be great, too.

For example, with HAProxy you can configure separate timeouts for just about everything: the time a request is queued (if you exceed the max connections), the time for the connection to establish, the time for the request to be received, inactivity timeouts for the client or server, inactivity timeouts for websocket connections... The list goes on: https://docs.haproxy.org/3.1/configuration.html#4-timeout%20...
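A rough sketch of the relevant directives (the values here are illustrative, not recommendations):

  defaults
    timeout queue        30s  # time a request may wait in the queue when maxconn is hit
    timeout connect      5s   # time to establish the TCP connection to a backend
    timeout http-request 10s  # time allowed for a complete HTTP request to arrive
    timeout client       30s  # inactivity timeout on the client side
    timeout server       30s  # inactivity timeout on the server side
    timeout tunnel       1h   # inactivity timeout for upgraded/websocket connections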

Slowloris is more than just the header timeout. What if the headers are received but the request body is sent, or the response consumed, very slowly? And even if this is handled with a "safe" default, it must be configurable to cater to a wide range of applications.


I also implemented timeouts for response processing (including reading the request body from the client), to protect against Slow HTTP POST attacks.


Is it configurable?

