Hacker News | past | comments | ask | show | jobs | submit | raydenvm's comments

Yeah, it makes you wonder how much computing power the industry has wasted over the years on tools that nobody questioned because "that's just how long builds take." We planned our work around it, joked about the breaks it created, and built entire caching layers to work around it.

Kudos to the Vite maintainers!


The waste from slow JS bundlers is nothing compared to the cost of bloated interpreted runtimes and inefficient abstractions. Most production software is multiple orders of magnitude slower than it needs to be. Just look at all the Electron apps that use multiple GB of RAM while doing nothing and are laggier than similar software written 40 years ago, despite having access to an incredibly luxurious amount of resources by any sane historical standard.


Something I realized while doing more political campaign work is how inefficient most self-hosted solutions are. Tools like Plausible or Umami (analytics) require at least 2 GB of RAM, Postiz (a social media scheduler) requires 2 GB of RAM, and so on.

It all slowly adds up: you'd think a simple $10 VPS with 2 GB of RAM would be enough, but it's not, especially if you want a team of 10-30 people working sporadically on the same box.

There are major wins to be had by rewriting these programs in more efficient languages like Go or Rust. It would make self-hosting more maintainable and break away from the consulting class, which often sells worse solutions at far higher prices (for example, one consulting group sells software similar to Postiz for $2k/month).


So you have free software that requires 2 GB of RAM and the alternative is $2k per month and you're complaining that the free solution is inefficient? Really?

Why do you expect to be able to replace a 2k/month solution with a $10/month VPS?


Because the fundamental task many of these programs are doing is neither complicated nor resource intensive.

In the age of cheap custom software, everyone should at least try to build something fit for purpose themselves. It doesn't take much to become a dangerously capable professional these days, and a dangerously capable professional can accomplish more than ever before.


Thank you. I get so confused when people think a $5 VPS shouldn't be able to do much. We're talking about the 99% of small businesses that might have 5 concurrent users max.

2 GB of RAM should be considered overkill for every single small-business use case across a variety of tools (analytics, mailer/newsletter, CRM, social media, e-commerce).


Your criticism contradicts itself.

He's saying that the software seems free, but is so inefficient that it bloats other costs to run it. And he never said he wanted to replace $2K/month with $10/month.


I'm not saying it's so bad that I don't recommend it; quite the opposite. But these things could be written in more performant languages. There's no reason a cron job scheduler should require 500 MB of RAM at idle. Same for the analytics. That is just a waste of resources.

Software could be drastically less resource-intensive; there is no excuse outside of wanting to exacerbate the climate crisis.

This period of our history in the profession will be seen as a tremendous waste of resources and effort.


Dude you're complaining about the efficiency of free software.

Go write the software yourself, no one owes you anything.

Maybe if you had to actually write it yourself, you'd quickly figure out why people prefer "inefficient" languages for these things.

A cron job scheduler does not in fact require 500 MB of memory. You're just being disingenuous; that software is doing a lot more than just that.


I am writing software myself, and your attitude is just weird. We should always strive for better, more efficient software; the climate crisis is real, and our industry has done an excellent job of exacerbating it with ever more inefficient tools, libraries, and languages.

People prefer JS because all they know is JS; it's that simple. Please tell me why you think devs choose JS. I'm legitimately curious, but your attitude of constant dismissal and disparagement makes it seem like you just want to beat people down rather than engage.


People choose JS because it's the only first-class browser language. Why they choose it on the backend, I honestly couldn't tell you.


Dude, the $2k solution is not only worse than Postiz, they also charge an additional thousand for each channel.

It's just garbage software; I don't know why I brought it up, except that commenters here like snippets about other industries in the profession. I know I do, at least.

But to answer your question: yes, I do expect a cron job scheduler, analytics, and a CRM not to require 8 GB of RAM in order to not barf on themselves too hard.

These things are incredibly resource intensive for their actual jobs. The software is incredibly wasteful.

A $5 VPS should be enough to host every suite of software a small business needs. To think otherwise is extremely out of touch. We're talking about 3 concurrent users max here; software should not be buckling under such a light load.


> A $5 VPS should be enough to host every suite of software a small business needs

Where is this weird expectation coming from?

Why should that be the case?


The expectation comes from the fact that these aren't complicated tools; they shouldn't command that many resources. Why do you think a $5 VPS with half a gig of RAM can't handle basic cron/background jobs or management software? 512 MB of RAM can do a lot if you choose the appropriate tools, but if you start with a weak foundation that needs 512 MB just to sit idle, you hurt a class of users who could benefit from this software.

These things aren't complicated, but when you choose Node.js/JavaScript they become far more complicated than expected. I say this as someone who has only ever worked professionally with JS, and nothing else, over a 15-year career.

Writing software that can only be used by the affluent is not the direction I want our industry to go in.


I guess there's a distinction between capacity that could be put to other uses, and free capacity that doesn't necessarily cost anything.

For a server in the cloud, those cycles could actually be put to other uses, freeing the system and bringing costs down.

For a client computer running Electron, as long as the user doesn't have so many Electron apps open that the machine slows down noticeably, that inefficiency might not matter much.

Another aspect is that devices keep getting cheaper and faster, so today's slow Electron app might run fine on a system that is a few years out, and that capacity was never going to be used by anything else on the end user's device.


It's more likely that an Electron app ships poor code and has supply-chain issues (npm, …). Also, loading a whole web engine into memory is not cheap. That space could have been used to cache files, but it isn't, which is inefficient, especially since laptops' uptimes are generally long.


Don't forget the human time wasted by an app being slow and laggy.


> Most production software is multiple orders of magnitude slower than it needs to be.

at least 100x slower than it needs to be?


Easily. Lots of things can take 3ms that actually take 300ms. Happens all the time.


what's an example?


A poorly written SQL query, an algorithm over large data sets with suboptimal big-O, the Home Depot or Lowe's website search bars, etc.
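To make the big-O example concrete, here is a minimal, self-contained Python sketch (sizes are illustrative and absolute timings vary by machine): repeated membership tests against a list are O(n) per lookup, while a hash set is O(1) on average.

```python
import time

n = 5_000
items = list(range(n))
lookups = list(range(0, n, 2))

# O(n) scan per lookup -> O(n * m) overall
start = time.perf_counter()
slow_hits = sum(1 for x in lookups if x in items)
slow = time.perf_counter() - start

# Same data in a hash set: O(1) average per lookup
item_set = set(items)
start = time.perf_counter()
fast_hits = sum(1 for x in lookups if x in item_set)
fast = time.perf_counter() - start

assert slow_hits == fast_hits  # identical results, wildly different cost
print(f"list: {slow * 1000:.1f} ms, set: {fast * 1000:.1f} ms")
```

The same shape of fix applies to the SQL case: an unindexed WHERE clause is a linear scan, and adding the right index is the moral equivalent of the set above.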


Good examples; I agree.


Why are Electron apps memory-intensive compared to other cross-platform frameworks? Is it the language, the UI system, or legacy?


Electron apps tend to use a lot of memory because the framework favors developer productivity and portability over runtime efficiency.

- Every Electron app ships with its own copy of Chromium (for rendering the UI) and Node.js (for system APIs), so even simple apps start with a fairly large memory footprint. It also means Electron essentially ships two instances of the V8 engine (the JIT-compiled JavaScript runtime used by both Chromium and Node.js), which just goes to show how bloated it is.

- Electron renders the UI using HTML, CSS, and JavaScript. That means the app needs a DOM tree, CSS layout engine, and the browser rendering pipeline. Native frameworks use OS widgets, which are usually lighter and use less memory.

- Lastly, the problem is the modern web-dev ecosystem itself; it's not just Electron that prioritises developer experience over everything else. UI frameworks like React or Vue use things like a virtual DOM to track UI changes. This helps developers build complex UIs faster, but it adds extra memory and runtime overhead compared to simpler approaches. And don't get me started on npm and node_modules.


Loading a browser context isn't helping.


Imagine the amount of useful apps that would not have been made without Electron.


> Yeah, it makes you wonder how much computing power the industry has wasted over the years on tools that nobody questioned because "that's just how long builds take."

I feel the same way about tools like ESLint and Prettier as well after discovering Oxc https://oxc.rs/


I wonder what the parallel hindsight about waste will be in a few years, but for matrix multiplications.


By then I understand that matrix multiplication will have cured cancer and invented unlimited free energy, so no hindsight of waste needed.


Cure cancer? It doesn't have to cure cancer for it to make billions.

All it has to do is put price pressure on your salary. (And it is already doing that.)


The economic incentives line up much better there. You charge for tokens, your cost is GPUs, so you work very hard to keep the GPUs 100% utilized and squeeze maximum tokens out of those cycles.

Compare this to essentially any modern business app: the product being sold has very little relationship to CPU cycles, or the CPU cycles are so cheap relative to what you're getting paid that no one cares to optimize.


Build performance has been a pet topic of mine ever since I realized, 14 years ago, that I was wasting so much time waiting for stuff to build. The problem is especially endemic in the Java world, but also in the backend world in general. I've seen people run integration tests where 99% of the time was spent creating and recreating the same database over and over again (some shitty Ruby project more than a decade ago). That took something like 10 minutes.

With Kotlin/Spring Boot, compilation is annoyingly slow; that's what you get with modern languages and rich syntax. Apparently the Rust compiler isn't a speed demon either. But tests are something that's under your control: unit tests should run in seconds or milliseconds, and integration tests are where you can make huge gains if you are a bit smart.

Most integration tests are not thread-safe and assume they are running against an empty database. Which, if you think about it, is exactly how no user except your very first will ever use your system.

The fix for this is to 1) do no cleanup between tests, 2) randomize data so there are no collisions between tests, and 3) run your tests from multiple threads/processes against one database that is provisioned before the tests and deleted after all of them.
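Step 2 above can be as simple as suffixing every identifier a test creates with randomness. A hypothetical Python sketch (`create_user`/`find_user` are stand-ins for your app's API, not real functions):

```python
import uuid

def unique(name: str) -> str:
    """Suffix a fixture name with randomness so parallel tests can share
    one long-lived database with no cleanup between runs."""
    return f"{name}-{uuid.uuid4().hex[:12]}"

def test_user_signup():
    # Each test creates its own rows and asserts only on data it created,
    # so leftover data from other tests (or earlier runs) never matters.
    email = f"{unique('alice')}@example.test"
    # create_user(email)                       # hypothetical app call
    # assert find_user(email).email == email   # hypothetical app call
    assert unique("alice") != unique("alice")  # identifiers never collide

test_user_signup()
```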

I have a fast MacBook Pro that runs our hundreds of Spring integration tests (proper end-to-end API tests with Redis, a DB, Elasticsearch, and no fakes/stubs) in under 40 seconds. It kind of doubles as a robustness and performance test. It's fast enough that I have Codex trigger it on principle after every change it makes.

There's a bit more to it, of course (e.g. polling rather than sleeping for assertions, using timeouts on things that happen eventually, etc.). But once you have set this up, you'll never want to deal with sequentially running integration tests again. Having to run those over and over just sucks the joy out of life.
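The "polling rather than sleeping" point is where a lot of the remaining wall-clock time hides, so here is a minimal sketch of a poll-until helper (names are illustrative; any test framework can host this):

```python
import time

def await_condition(check, timeout=5.0, interval=0.05):
    """Poll `check` until it returns a truthy value or `timeout` elapses.
    Unlike a fixed sleep, this returns the moment the condition holds,
    so the happy path costs milliseconds instead of the worst case."""
    deadline = time.monotonic() + timeout
    while True:
        result = check()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(interval)

# Usage against an eventually consistent system, e.g. a search index:
# doc = await_condition(lambda: index.search("order-123"), timeout=10)
```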

And with agentic coding tools having fast feedback loops is more critical than ever.


> I've seen people run integration tests where 99% of the time was spent creating and recreating the same database over and over again (some shitty Ruby project more than a decade ago). That took something like 10 minutes.

For anyone who doesn't know: with SQLite you can serialize the db to a buffer and create a "new" db from that buffer with just `new Database()`. Just run the migrations once at test initialization, serialize the migrated db, and reuse it instantly for each test for amazing test isolation.


Assuming you use SQLite in prod, or are willing to take the L if some minor db difference breaks prod...

This method is actually super popular in the PHP world, but people get themselves into trouble unless they tidy up all the footguns that stock SQLite leaves behind for you (its loose typing being a big one).

Also, once you reach a certain database size, certain operations can become hideously slow (and that can change depending on the engine, too), so if you're running a totally different database for your test suite, it's one more thing that differs from prod.

I do recognize that these are niche problems for healthy companies that can afford to solve them, so ymmv.


We've had this exact issue (a clean db for every test); the way we solved it was with ZFS snapshots. Just snapshot a directory of our data (databases, static assets, etc.), and the OS will automatically create a copy-on-write replica that can be written to, with the modifications simply thrown away (or preserved) afterwards.

Once you've created a zfs snapshot, everything else is basically instant and costs very little perf.


> Most integration tests are not thread safe and make assumptions about running against an empty database. Which if you think about it, is exactly how no user except your first user will ever use your system.

Yeah, Cypress lists this in their anti-patterns:

https://docs.cypress.io/app/core-concepts/best-practices#Usi...

Dangling state is useful for debugging when a test fails; you don't want to clean that up.

This has been a super useful practice in my experience. I really like being able to run tests regardless of my application's state. It's faster, and over time it helps you hit and fix various issues that you'd otherwise only encounter after the database fills with enough data.


Kotlin compiles fast; I don't have any problems with Ktor. Spring Boot and Rust do not.


>With Kotlin/Spring Boot, compilation is annoyingly slow. That's what you get with modern languages and rich syntax.

This is because the Kotlin compiler is not written the way people write fast compilers. It has almost no backend to speak of (if you are targeting the JVM), and yet it can be slower at compilation than gcc and clang with optimizations enabled.

Modern fast compilers follow an emerging pattern in which AST nodes are identified by integers and stored in a standard traversal order in a flat array. This makes extremely efficient use of cache when performing repeated operations on the AST. The Carbon, Zig, and Jai compiler frontends are all written this way. The Kotlin compiler is written in a more object-oriented and functional style that involves a lot more pointer chasing and far less dense data structures.
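A toy Python sketch of the data-oriented layout being described (real compilers like Zig's use preorder and much richer node encodings; this only shows the flat-parallel-arrays intuition):

```python
# Nodes are plain indices into flat parallel arrays instead of heap
# objects linked by pointers. Stored here in postorder, so evaluation
# is a single cache-friendly linear scan with a small value stack.
KIND_NUM, KIND_ADD, KIND_MUL = 0, 1, 2

# The expression (2 + 3) * 4 flattened into two parallel arrays:
kinds = [KIND_NUM, KIND_NUM, KIND_ADD, KIND_NUM, KIND_MUL]
values = [2, 3, 0, 4, 0]  # literal payload for NUM nodes, 0 otherwise

def evaluate(kinds, values):
    stack = []
    for kind, value in zip(kinds, values):
        if kind == KIND_NUM:
            stack.append(value)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if kind == KIND_ADD else a * b)
    return stack.pop()

print(evaluate(kinds, values))
```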

Then, if run on a non-Graal environment, you also pay for interpreter and JIT warmup, which for short-lived tasks is nontrivial overhead.

But unlike header inclusion or templates, which are language-level features with major effects on compilation time, I don't think Kotlin the language is inherently slow to compile.


No worries, projects will soon catch up by throwing more code at the build system.


I was also deciding between these two. I still see a difference: I guess the "critical path" is singular, while bottlenecks can be multiple.


Awesome stuff! It would also be nice to add the opus spicatum method, in which tiles are set at angles to form a "herringbone" zigzag pattern. It should be quite easy to implement: https://en.wikipedia.org/wiki/Opus_spicatum


Excuse me, I can't help but cite this one wonderful song:

"When everything's made to be broken, I just want you to know who I am" (c) Iris by Goo Goo Dolls

I feel it touches on something deep that has to do with the current state of the tech world.


Are there any known commercial use cases?


I suppose that switching to Brave will be one of the best solutions after all. They already commented on this in June: https://brave.com/blog/brave-shields-manifest-v3


Or Firefox, which isn't just a reskinned Chrome...


If you think Brave is just "reskinned Chrome", you've clearly not used it.


I've tried Brave a few times. It doesn't seem significantly different from Chrome. Chromium will likely still dominate future web-standards decisions, and Google will still control which implementations work on the biggest properties.


Edge maintains more non-Chromium code on top of its Chromium base than Brave does on its, and both further encourage websites and users to strengthen Google's web monopoly.


What makes Brave trustworthy enough for us to run our entire life through it? For me it's irreparably forever tainted by crypto grifting.


The "crypto grifting" is something you can turn off completely; it's there as a way to make the browser sustainable without accepting payments from Google to make it the default search engine.

I'd argue it's far more trustworthy than modern-day Firefox/Mozilla; they're not exactly the second coming these days.

What makes Firefox more trustworthy?


That's kind of like saying "yeah, this is a mafia pizzeria, but you can come eat at hours when the goons aren't there." Besides, why does Brave need that much funding? All they make is a Chromium wrapper; Google does all the work for them. They're not really an actual alternative in that sense; they just stuff it full of adblock, crypto, and god knows what. There was even an incident recently where it auto-installed a VPN.

Yeah, it's true that Mozilla is mostly financed by Google's anti-antitrust payments, but at least they actually made something of their own and have a trustworthy track record three decades long as a non-profit, and as Netscape before that.


> and god knows what

That right there sours your whole argument. Your entire reasoning here is based on "they're probably doing something dodgy," ignoring the fact that it's open source, and that Firefox and Chrome are at the very minimum on equal terms of "dodginess," as you no doubt already know.


"You can turn off the evil feature that evil people added" isn't really an argument that's gonna convince me that evil people are trustworthy.

Tell me I can turn off the evil intent, and not just one of its manifestations, and we're in business. But you can't tell me that.


By that logic you'd have to extend the same argument to Firefox, Chrome, and Edge. All have a bunch of "evil" things (where, by your own definition, evil = something that makes a business money) that can be disabled.

Once you've done that, you're back to the same old question: why is &lt;other browser&gt; any better, safer, or more trustworthy than Brave, which is arguably the only one that's gone out of its way to make sure it's sustainable without farming user data out to the highest-bidding broker?


I'm gonna follow your lead on goalpost-moving for my response.

I'm sure no user data is shared with Brave's search partners (and don't pretend they don't get paid by Google and others for all the users who abandon the not-great Brave Search for a more capable service). Google just pays them whatever they pinky-swear their traffic was, with no reporting at all, no search telemetry, none of that. Right?

And I'm sure zero user data makes it to the big advertisers who pay for full new-tab takeovers. I mean, why wouldn't big advertisers throw tens of thousands of dollars a day at ads with no proof of reach or return?

Oh, it's anonymized, you say? So, just like all the other browsers?

Also, a quarter of a billion VC dollars has to be paid back at some point. You can't claim anything is truly sustainable when VCs still own a quarter of its value and it has taken VC money for 7 of the last 10 years.


The lack of crypto grifting.


Your favourite corporations commit all sorts of crimes (ethical and actual), but let's remember that one questionable thing Brave did for all eternity.


Non-profits get a tiny bit more leeway in my book. Brave is not one of them.


For just another Chromium skin, I prefer Vivaldi, as it has more traditional offerings than Brave while having a more customizable UI.


I'd like to know how you intend to handle the infrastructure costs of providing this for free.

Also, how do you see the future of disposable email services as companies get better at detecting them?


He may easily cancel it in two weeks. With Trump, it's almost unpredictable how things will go.


Thanks a lot for sharing! Great food for thought. And yeah, I definitely don't want to build a custom solution; it should be maximally reliable and fast enough.



Same story with me: it says for both Workspace and personal accounts that they are Workspace.

Have you managed to overcome that?

