js4ever's comments | Hacker News

I don't think so, but my guess is raw performance rarely matters in the real world.

I once explored this, hitting around 125K RPS per core on Node.js. Then I realized it was pointless: the moment you add any real work (database calls, file I/O, etc.), throughput drops below 10K RPS.


It's always a matter of chasing the bottleneck. It's fair to say that the network isn't the bottleneck for most applications. Heuristically, if you're willing to take on the performance impacts of a GC'd language, you're probably already not the target audience.

Zero copy is the important part for applications that need to saturate the NIC. For example, Netflix integrated encryption into the FreeBSD kernel so they could use sendfile for zero-copy transfers from SSD (in the case of very popular titles) to a TLS stream. Otherwise they would have had two extra copies of every block of video just to encrypt it.
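For a sense of what that looks like from the application side, here's a minimal sketch of the idea (Linux flavor shown; Netflix's version pairs the FreeBSD sendfile with in-kernel TLS so blocks never round-trip through user space). The helper name is made up for illustration:

    #include <fcntl.h>
    #include <sys/sendfile.h>
    #include <unistd.h>

    /* Stream a whole file to a connected socket without copying it
     * through user space. With kernel TLS configured on the socket,
     * the kernel can also encrypt in place, which is what the
     * Netflix work enables. */
    static int send_file_zero_copy(int sock_fd, const char *path)
    {
        int file_fd = open(path, O_RDONLY);
        if (file_fd < 0)
            return -1;

        off_t offset = 0;
        off_t remaining = lseek(file_fd, 0, SEEK_END);

        while (remaining > 0) {
            ssize_t sent = sendfile(sock_fd, file_fd, &offset, remaining);
            if (sent <= 0)
                break;          /* error or peer went away */
            remaining -= sent;
        }

        close(file_fd);
        return remaining == 0 ? 0 : -1;
    }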

Note however that their actual streaming stack is very different from the application stack. The constraint isn't strictly technical: ISP colocation space is expensive, so they need to have the most juiced machines they can possibly fit in the rack to control costs.

There's an obvious appeal to accomplishing zero-copy by pushing network functionality into user space instead of application functionality into kernel space, so the DPDK evolution is natural.


TCP sends can generally be zero-copy now, and zero-copy with io_uring is also possible.
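On Linux the TCP side of that is the MSG_ZEROCOPY path. A rough sketch, with completion handling trimmed down and the helper name made up:

    #include <sys/socket.h>

    /* Opt the socket in, then flag individual sends. The kernel pins
     * the pages instead of copying them and posts a completion on the
     * socket's error queue once the data has actually left, so the
     * buffer must stay untouched until that notification arrives. */
    static ssize_t send_zero_copy(int fd, const void *buf, size_t len)
    {
        int one = 1;
        if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0)
            return -1;

        ssize_t n = send(fd, buf, len, MSG_ZEROCOPY);

        /* A real sender reads SO_EE_ORIGIN_ZEROCOPY records off
         * MSG_ERRQUEUE (via recvmsg) before reusing buf. */
        return n;
    }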

AF_XDP is another way to do high-performance networking in the kernel, and it's not bad.
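The AF_XDP setup is mostly ring plumbing around a socket. A trimmed sketch of the first step (UMEM registration only; fill/RX/TX ring setup and the bind to a NIC queue are elided, and the names are placeholders):

    #include <linux/if_xdp.h>
    #include <stdint.h>
    #include <sys/socket.h>

    /* Create an AF_XDP socket and register a block of user memory
     * (the UMEM) that packet buffers will live in; the rest of the
     * setup is more setsockopt()/mmap() calls for the rings. */
    static int make_xsk(void *umem_area, uint64_t umem_size)
    {
        int xsk = socket(AF_XDP, SOCK_RAW, 0);
        if (xsk < 0)
            return -1;

        struct xdp_umem_reg reg = {
            .addr       = (uint64_t)(uintptr_t)umem_area,
            .len        = umem_size,
            .chunk_size = 2048,   /* one packet buffer per chunk */
            .headroom   = 0,
        };
        if (setsockopt(xsk, SOL_XDP, XDP_UMEM_REG, &reg, sizeof(reg)) < 0)
            return -1;

        return xsk;  /* caller sets up rings and bind()s to a queue */
    }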

DPDK still has a ~30% advantage over an optimized kernel-space application, but it comes with a huge maintenance burden. A lot of people reach for it, though, without optimizing the kernel interfaces first.


The goal of this kind of system is not to replace the application server. This is intended to work on the data plane, where you do simple operations but do them many times per second. Think things like load balancers, cache servers, routers, security appliances, etc. In this space Kernel Bypass is still very much the norm if you want to get an efficient system.
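The shape of such a data-plane program is a tight poll loop over NIC queues with no syscalls on the hot path. A stripped-down DPDK sketch (port and queue configuration via rte_eth_dev_configure and friends is elided, and the forwarding logic is a placeholder):

    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST 32

    int main(int argc, char **argv)
    {
        /* Hugepages, PMD probing, taking the NIC away from the kernel. */
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        uint16_t port = 0;
        struct rte_mbuf *pkts[BURST];

        for (;;) {
            /* Poll the RX queue: no interrupts, no context switches. */
            uint16_t n = rte_eth_rx_burst(port, 0, pkts, BURST);

            for (uint16_t i = 0; i < n; i++) {
                /* ...rewrite / filter / load-balance the packet... */
            }

            /* Send the burst back out; drop whatever didn't fit. */
            uint16_t sent = rte_eth_tx_burst(port, 0, pkts, n);
            for (uint16_t i = sent; i < n; i++)
                rte_pktmbuf_free(pkts[i]);
        }
    }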

> In this space Kernel Bypass is still very much the norm if you want to get an efficient system.

Unless you can get an ASIC to do it, in which case the ASIC is massively preferable; just the power savings generally¹ end the discussion. (= remove most routers from the list; also some security appliances and load balancers.)

¹ exceptions confirm the rule, i.e. small/boutique setups


ASICs require years to develop and aren’t flexible once deployed

You don't develop an ASIC to run a router with; you buy one off the shelf. And the function of a router doesn't exactly change day by day (or even year by year).

Change keeps coming, even when the wire format of a protocol has ossified. I've spent years in security and router performance at Cisco and wrote a respectable fraction of the flagship's L3 and L2-L3 (tun) firewall. I merged a patch on this tried-and-true firewall just this year; it's now deployed.

As vendors are eager to remind us, custom silicon to accelerate everything from L1 to L7 exists. That said, it is still the case in 2025 that the "fast path" data plane will end up passing either nothing or everything in a flow to the "slow path" control plane, where the most significant silicon is less 'ASIC' and more 'aarch64'.

This is all to say that the GP's comments are broadly correct.


My colleagues are always writing new features for our edge and core router ASICs released more than 10 years ago. They ship new software versions multiple times a year. It is highly specialised work and the customer requesting the feature has to be big enough to make it worthwhile, but our silicon is flexible enough to avoid off-loading to slow CPUs in many cases. You get what you pay for.

Even the ones supporting things like P4?

We do storage systems and use DPDK in the application; when the network IS the bottleneck, it is worth it. Saturating two or three 400Gbps NICs is possible with DPDK and the right architecture, one that makes the network the bottleneck.

Storage and databases don't have to be that slow; that's just architecture. I have database servers doing 10M RPS each, which absolutely will stress the network.

We just do the networking bits a bit differently now. DPDK was a product of its time.


What DB engine is it? What hardware?

It is highly irresponsible to disclose security vulnerabilities publicly, and in some jurisdictions it may even be illegal.

While I understand that the author attempted to contact Monster without receiving a response, publishing details of the vulnerabilities and how to exploit them only puts users at greater risk. This approach is reckless and harmful.


It is common practice to give the company sufficient time and communicate, and then release the details once the vulnerability is patched. But it’s also common in practice to disclose the vulnerability after a set period of time if the company does not engage in any form of communication and refuses to patch the vulnerability. In this case they didn’t engage in any form of communication and then partially patched the problems. Nothing out of the ordinary here.

What _isn't_ common practice is actually copying and posting company material on your blog. Just because a door is unlocked does not give you the right to take materials & post them.

This requires you to have any amount of respect for intellectual property, which many find to be immoral

I have seen this in practice for vulnerabilities that affect many users of some software. If some Hackermann finds that Microsoft Windows version X or Oracle Database server version Y has a security flaw then disclosure is virtuous so that people using those can take measures. That reasoning doesn't seem to apply here.

My understanding is this is the standard SOP for security vulnerabilities:

1. Report the security vulnerabilities to the “victim”

2. Work with the “victim” on the schedule for mitigation and publication

3. Publicize the vulnerabilities (the security researcher wants his findings to be publicly recognized)

If the victim does not acknowledge this issue it is impossible to execute step 2. So then the security researcher goes to step 3.

If the hacker has the emails he sent at step 1, he will be fine.


OP leaked internal business documents as part of their disclosure that had no business being in a disclosure. It looks like minor employee details have been leaked as well, which is very bad.

These companies treat fines as the cost of doing business and every time they lose people's personal information, they get slapped on the wrist and laugh it off while the execs get bonuses for having someone write a tearful apology to appear like victims.

I am happy every time somebody makes enough noise to make them notice and fix it because being polite and legal clearly is not working.


Nah, fuck that noise. If the company reacts to a responsible disclosure notice that's nice but no one is under any obligation to help out mega corps to secure their shit. And the users aren't put at risk by the people finding the vulnerability but by the company not fixing it.

Fuck responsible disclosure; companies should have to bid on 0-days like everyone else.


One probably should not release information from a company they hacked.

On the other side, if it is some piece of software, immediate public disclosure is the only reasonable and prudent action. It allows every user to take necessary mitigation actions, like taking their services and servers offline.


There is a market for capabilities, i.e. zero-days in widely used software. It has value, sometimes in the millions.

No one will buy some shitty XSS on a public website.


That argument misses the point. Yes, the company has the primary responsibility to fix their vulnerabilities, but that doesn’t justify recklessly publishing exploits. Once an exploit is public, it’s not just 'the company' that suffers; it’s every customer, employee, and partner who relies on that system.

Saying 'fuck responsible disclosure' is basically saying 'let’s hurt innocent users until the company caves.' That’s not activism, that's collateral damage.

If someone genuinely cares about accountability, there are legal and ethical ways to pressure companies. Dumping 0-days into the wild only helps criminals, not users.


> Saying 'fuck responsible disclosure' is basically saying 'let’s hurt innocent users until the company caves.' That’s not activism, that's collateral damage.

Correct. And I have good reasons for that. Activism has failed, consequences are required. The inevitable march towards the end of privacy due to the apathy of the unthinking majority of careless idiots will only be stopped when everyone feels deeply troubled by entering even the slightest bit of personal information anywhere because they've felt the consequences themselves.

> If someone genuinely cares about accountability, there are legal and ethical ways to pressure companies. Dumping 0-days into the wild only helps criminals, not users.

I could point to probably thousands of cases where there wasn't any accountability, or it was trivial to the company compared to the damage to customers. There's no accountability for large corporations; the only solution is making people care.


let's be clear here, though: the root problem isn't someone finding some sensitive papers left on a printer accidentally, it's the person who left them on the printer to begin with. that's the root failure, and damage that results from that root failure is the fault of the person who left them there.

the american system clearly agrees with this, too. you see it in insider trading laws. you're allowed to trade on insider information as long as it was, for example, overheard at a cafe when some careless blabbermouth was talking about the wrong things in public.


With Hetzner it's nothing new: since July, 1-2 out of 10 deployments stay stuck in the creating state for hours or forever. It's really annoying, especially when you deploy a multi-node cluster.


Probably I don't deploy enough, or the EU regions are more reliable, but I haven't experienced issues during cluster provisioning. It is true that since July there have been various small issues, for me mostly around removing provisioned resources, especially firewall configurations. I also noticed the web sockets fail most of the time now, and I have to hard-refresh the page to see updates.


I did exactly that all this summer at the beach with Claude Code. The future is already here!


That's the issue with people from a certain side of politics: they don't vote for something, they always side/vote against something or someone ... blindly. It's like pure hate winning over reason. But it's OK, they are the 'good' ones, so they are always right and don't really need to think.


Sometimes people are just too lazy to read an article. If you just gave one argument in favor of Meta, then perhaps that could have started a useful conversation.


Perhaps… if a sane person could find anything in favor of one of the most Evil corporations in the history of mankind…


>if a sane person could find anything in favor of one of the most Evil corporations in the history of mankind.

You need some perspective - Meta wouldn't even crack the top 100 in terms of evil:

https://en.m.wikipedia.org/wiki/East_India_Company

https://en.wikipedia.org/wiki/Abir_Congo_Company

https://en.wikipedia.org/wiki/List_of_companies_involved_in_...

https://en.wikipedia.org/wiki/DuPont#Controversies_and_crime...

https://en.m.wikipedia.org/wiki/Chiquita


this alone is worse than all of what you listed combined

https://www.business-humanrights.org/en/latest-news/meta-all...


No... making teenagers feel depressed sometimes is not in fact worse than facilitating the Holocaust, using human limbs as currency, enslaving half the world and dousing the earth with poisons combined.


it is when you consider the number of people affected


No, it isn't.

I'm not saying Meta isn't evil - they're a corporation, and all corporations are evil - but you must live in an incredibly narrow-minded and privileged bubble to believe that Meta is categorically more evil than all other evils in the span of human history combined.

Go take a tour of Dachau and look at the ovens and realize what you're claiming. That that pales in comparison to targeted ads.

Just... no.


Dachau was enabled by the Metas of that time. It needed advertising, a.k.a. propaganda, to get to that political regime, and it needed surveillance to keep people in check and to target the people who got a sponsorship for that lifelong vacation.


all of that combined pales in comparison to what Meta did and is doing to society, at the scale at which they are doing it


Depends on the visibility of the weapon used and the time scale on which it starts to show its debilitating effects.


Great, more enshittification! Broadcom is destroying everything they touch.


That's nonsense. RPi would not exist if not for Broadcom.


The current Broadcom (Avago) is literally not the Broadcom that RPi came from. Not that old-Broadcom was great, but...


RPi doesn't exist due to Broadcom. It exists despite Broadcom.

Using RPis can be a huge PITA if you'd like to do something a bit more complex with the hardware. HDMI and the video decoders are all behind closed doors, with blobs on top of blobs and NDAs.

RPi SoCs are some of the weirdest out there. They boot from the GPU, ffs.


Yeah, Broadcom had a load of unsellable SoCs they needed to offload.


But who will think of the shareholders?! \s

I’m surprised anybody works at bcom these days.


No, Safari is the new IE: nothing works on it, it's full of bugs, and Apple is actively preventing web standards from moving on. Do you remember how much Apple prevented web apps from being a thing by blocking web push and breaking most things when run in PWA mode?

Apple are by far the worst offender and I can't wait for Safari to die


It’s death by a million papercuts with Safari.

I made a reader app for learning languages. Wiktionary has audio for words. Playing the file over a web URL works fine, but when I add caching to play from a cached audio blob, Safari sometimes delays the audio by 0.5-15 seconds. It works fine on every other browser.

It’s infuriating and it can’t be unintentional.


"This deployment is temporarily paused" it seems you spent all your vercel quota.

You would "scale" better with a $5 VPS.


Definitely. But OP is probably on the free tier; that's why this happens.

Vercel is fine for staging deployments, but for production even a solar-powered Raspberry Pi is better if Vercel will pause the instance when there is too much traffic.


I was thinking the same about AI in 2022 ... And I was so wrong!

https://news.ycombinator.com/item?id=33750867


Hopefully no hobos were injured in the process


here it is:

Title: Serverless is eating Kubernetes—and that's a good thing

After 6 years of watching K8s engineers over-engineer themselves into a corner with YAML spaghetti, I finally moved a production workload to AWS Lambda + EventBridge + DynamoDB. You know what happened?

Nothing broke. It just worked.

No Helm charts. No Ingress hell. No cluster upgrades at 3am because of CVEs in some sidecar no one uses anymore. The whole app is now ~300 lines of infra code and it scales from 0 to 10k RPS without me touching anything.

Meanwhile, the K8s crowd is still debating which operator to use to restart a pod that keeps crashing because someone misconfigured a liveness probe. Or building internal platforms to abstract away the abstraction they just built last quarter.

Remind me again what the value-add of Kubernetes is in 2025? Other than keeping a whole cottage industry of "DevOps" engineers employed?

Serverless isn’t the future. It’s the present. And K8s? It’s the next OpenStack—just slower to die because it has better branding.

