For some reason

using a bash script to munge a few files together, then a cron job that rsyncs them to an Apache server running on a VPS

is considered morally and technically superior to

using a JavaScript tool to build a site, then having a GitHub Action push the files up to an S3 bucket with a CloudFront distribution in front of it.

There’s no obvious reason for this, other than a belief that if something can be done with 50-year-old technology, it should be.
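
For concreteness, the two pipelines boil down to roughly the following; the hostnames, bucket names, paths, and distribution ID below are made-up placeholders:

    # "Old" way: a script run from cron on your machine or on the VPS
    ./build.sh                      # whatever munges the files together
    rsync -avz --delete public/ user@myvps.example.com:/var/www/html/

    # "New" way: roughly what the GitHub Action ends up running
    npm run build                   # whatever JS tool the project uses
    aws s3 sync ./dist s3://my-site-bucket --delete
    aws cloudfront create-invalidation --distribution-id E123EXAMPLE --paths "/*"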




I don't think one is morally superior to the other, but there are trade-offs.

With bash, cron, rsync, and a VPS, one is using stable, long-lived tools that usually come with Linux, are vetted and patched automatically, and are familiar in the sense that those same tools can be used for all sorts of tasks, not just frontend development. There is also the argument that the only remote piece controlled by the service provider is the VPS, which still gives you lots of control and flexibility with minimal lock-in.

I prefer somewhere in the middle of the two extremes, but I see more appeal to the "old" approach than you allude to.


Funny that tools distributed via a Linux package manager are considered 'vetted and patched automatically', but a broadly used JS build tool distributed via npm is likely seen as an unstable dependency, or worse, a supply-chain attack vector.

Sure, there are maturity differences, but it's nowhere near as vast a difference as some claim.

These are just preferences. It's not a culture war issue.


If you don’t understand the difference between software shipped by a distro and software shipped by the developer (npmjs.com), then all that is left is preferences or culture war issues.

However, there’s a fundamental difference: in the distro model of distribution, the developer only produces the software; they don’t release it themselves. They provide source code and that’s it. Maintainers of packages for each distro vet new releases, test them, and sometimes patch them so they work best within the context of a given distro. So there’s extra effort spent on making sure the software is stable. In a model where a developer publishes their new release to some package repository directly (npmjs.com is just one of them; the same applies to pypi.org, rubygems.org, etc.), that extra “QA” by someone other than the developer doesn’t happen.
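
You can see the difference from the command line too. A rough illustration (the npm package name is a hypothetical example):

    # Distro channel: the binary comes from the distro's maintainers,
    # who pin a version in their archive and backport fixes to it.
    apt-get install rsync
    apt-cache policy rsync                    # shows which archive it came from

    # Direct channel: the tarball comes straight from whatever the
    # developer last published to the registry, with nobody in between.
    npm install --save-dev some-build-tool    # hypothetical package name
    npm view some-build-tool time             # raw publish timestamps from npmjs.com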


> There’s no obvious reason for this, other than a belief that if something can be done with 50-year-old technology, it should be.

While past performance is no predictor of future performance, on the face of it I'm likely to bet on the stability and longevity of the classic, time-tested technology outlasting that of the newer technology, especially if the newer one relies on specific third-party services.


To clarify, how you build it and how you host it are two different questions with different constraints.

E.g., the trade-offs of the new might be weighted differently than those of the old as they apply to building vs hosting. Your build breaking due to stack changes is a different PITA than a security vuln in the hosting. Although the build might break more often due to churn in the newfangled stuff, you might trade that off for the less hands-on hosting maintenance.

But for another trade-off, the old stuff is far less service-dependent, so it's less work to migrate if you need to change service providers.

Engineering is trade-offs, and decisions are made based on assigned weightings of various factors. The weightings are subject to bias though and thus always personal.


>"using a JavaScript tool to build a site then having a GitHub Action push the files up to an S3 bucket with a cloudfront distribution in front of it."

Why would I want to introduce so many dependencies when I can have zero? Yes, in this case the old tech is superior. Sure, if (and that is a big if) your blog is super popular you might want to "cloudflare" it, but that's about it.


As Douglas Adams once said:

“I've come up with a set of rules that describe our reactions to technologies:

1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

3. Anything invented after you're thirty-five is against the natural order of things.”

Only when applied to Linux, this amounts to:

Anything that was copied from UNIX is normal and ordinary and just part of the way the world works.

Anything that is included in a standard Ubuntu distribution is exciting and revolutionary.

Anything else is a ‘dependency’.


> Anything that was copied from UNIX is normal and ordinary and just part of the way the world works.

> Anything that is included in a standard Ubuntu distribution is exciting and revolutionary.

> Anything else is a ‘dependency’.

The picture you're painting of pure back-in-my-day-sonny! bias, while not totally wrong, is also not quite accurate. These different technologies have different installed bases, different probabilities of security vulns, different probabilities of still being maintained in 10 years, etc.


Yes! The probability of a security vulnerability in the Apache install whose httpd.conf you had to write, running on a VPS you have to patch yourself, and the probability of a security vulnerability in S3/CloudFront are indeed vastly different.

These are the kinds of trade-offs we make.


1) I do not use Apache.

2) As for probability: I have had zero security incidents on my personal website/blog in 15 years. And I did not spend my life patching things, just a few minutes every once in a while. Keep this FUD for the uninformed.


Even if you did use Apache, it's FUD. It's a very robust, well-understood, and well-documented technology.


If there is a security vulnerability in S3 or CloudFront, it will be patched pretty damn quickly.

If there is a security vulnerability in a static site generator, that is unlikely to be exposed to the internet, as the generator is not running on a public-facing server.

Hugo is now 8 years old and Jekyll is now 13. Both are actively maintained, and given the inability of most people to string together a shell script that is both portable and correct, they are likely better solutions than said shell script.
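
And the Hugo version of the "old" deploy is still only a couple of commands; a rough sketch, with the host and paths as placeholders:

    #!/bin/sh
    # Build the site into ./public, then push it to the VPS over rsync.
    hugo --minify
    rsync -avz --delete public/ deploy@myvps.example.com:/var/www/html/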


>"As Douglas Adams once said:"

Sure, doc. You are free to follow along and waste your time and money. I do not get a kick out of using half of the world's resources to write a glorified "hello world", nor out of letting all those intermediaries control what you do in the process.

Douglas Adams was magnificent. Your attempt to use his words in this particular context, however, is anything but...


To clarify, my original post was more a satire of the idea of using 4-5 different SaaS/PaaS services in order to put a simple HTML site onto the net.

I don't have any moral opinion about it - in fact, I run a site with the setup described, which is half why I am satirizing it (it's cheap to run, but it's also overly complex, has a load of dependencies that ultimately a simple HTML file shouldn't have, and the final setup is quite limiting).

I do think we lose something intangible by building websites with mousetrap-esque technologies. The idea is usually that when you abstract something away, it becomes simpler to understand and quicker to develop in - but IMO it seems like we have abstracted things away and it has become harder to understand and slower to develop in.


Moral superiority? Perhaps in some purist circles.

No, the biggest problem is that, as far as so-called 'engineers' go, software engineers who do web development are by far among the worst at actually evaluating solutions based on their engineering merit. Thus, we get overengineered solutions in companies and areas that simply do not need that level of complexity.


Now we’re doing engineering? I thought we were just putting some HTML files on the internet.

The two approaches I described have exactly the same number of moving parts; they require you to understand exactly the same number of technologies. There is no difference in ‘complexity’.


The number of moving parts and the number of technologies are clearly only one aspect of complexity. The actual complexity and dependencies of each particular technology also differ. My point still stands; I've seen far too much resume-building to have any other perspective. Obviously, sometimes a company does require complex tools to deliver a product. That's fine. Often they don't, but they still end up with complex tools. That's not so fine.


I just don't see GitHub Actions, say, as an inherently more complex tool than cron, or vice versa. Both have a certain essential complexity to getting a scheduled task to run, once you've overcome the initial hurdle of getting to a point where you can use them. Depending on where you're starting from, what tools you're already using, and what ecosystem you're already part of, that initial hurdle is going to be of a different size, but once you're over it, in either case you're just configuring a job with a schedule, and then hopefully getting to forget about it.
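
As a rough sketch (the script path is a made-up placeholder), the cron side is one line, and the GitHub Actions side is the same five-field expression living in a workflow file:

    # crontab entry: run the deploy script every night at 03:00
    0 3 * * * /home/me/bin/deploy-site.sh

    # The GitHub Actions equivalent puts the same expression under
    # "on: schedule: cron:" in a workflow file and runs a job step
    # instead of a local script.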


> using a JavaScript tool to build a site

There is so much that can go wrong that this is unlikely to be as simple as running a bash script.

I usually use Node to write terminal programs, but I recently tried bash and found it much better for simple stuff.



