Hacker News: mkhalil's comments

I'm sure it's happened before, but this is the first time I've gotten to see some sort of modern hardware in KiCad.

Pretty cool to see all 6 layers, paste layers, and adhesive layers as well. I've always wondered how the cake was made and whether big projects do/could use KiCad. Seems like a lot more work relative to those single-layer PCBs on YouTube for things like emulators and custom boards. Glad I now know for sure that I can't do this.


Paste and adhesive are spat out by KiCad as part of the manufacturing outputs. It works pretty much the same way as other EDA packages like Altium: the extra layers are part of the part footprint. If you don't design your own footprints, it's basically no extra work to generate those.

You almost certainly could do it - obviously with some time investment. Getting multi-layer PCBs made is surprisingly affordable now.


The Reform laptop project is open hardware: https://source.mnt.re/reform/reform

I encourage you to browse it. I found that, while challenging, that level of proficiency in KiCad does not seem unreachable.


It depends on your project ideas, but as a newbie to hardware dev with my own small-scope eurorack module idea, I am having a lot of luck with flux.ai. I even got a small order of 5 PCBs printed for under $200.


I use pnpm, but even so: thankfully naming things is hard, and all my env variable names are very_convuluted_non_standard_names for things lol.


That's why it should be rounded for everything. "No pennies" should probably mean that any final transaction total is rounded to the nearest nickel, whether they pay with cash, credit, debit, SNAP, gift card, etc.

IMO, rounding for cash purchases only sounds worse than keeping the pennies.


1.5 billion used to be an absolutely ridiculous amount to pay for a company not long ago. AOL? 1990s AOL?

But with 5 trillion dollar companies these days that are "worth" more than the entire GDP of Germany, why not. It's not real. It's just a number on a computer at this point.


Huh? Did you read the link? Did you notice that the ONE screenshot clearly shows the app has a Material UI look?

I'm going to say this because I think you might not know it, and because many others might not have thought about it:

Almost always, a programming language is UI-agnostic. A Swift SDK for Android means: you can now write Android apps in Swift. This doesn't magically include Apple's components / SwiftUI. When you write code for a platform, specifically an SDK for an OS, all you do is expose that platform to that language.

So, as long as the SDK/bindings are there, a new "Window" means whatever the OS thinks is a Window. A Button is whatever is defined (or exposed/bound) as a Button in Android.

Swift was sorta released for Windows: a new Window looks like a generic Win32 Window, the same one you would get if you used C, C++, Rust, etc.

All your examples are GREAT examples to explain how this works:

- Flutter has "Cupertino" to allow people to use Flutter to make Apple apps and not have to learn the names/methods/interface of the native Apple UI.

- React Native: a LOT of work was put in to make/bind Apple native objects to React components. And the same for Android.

So again:

The Swift SDK for Android means you can write your Android apps in Swift. The same apps you might have written in Java or Kotlin, you can now write in Swift. Meaning: whatever it looked like in Java/Kotlin (using native APIs), it will look like in Swift.

SwiftUI, Apple's component library written for and exposed to Swift, is something completely different.


> Almost always … a new "Window" means whatever the OS thinks is a Window. A Button is whatever is defined (or exposed/bound) as a Button in Android.

But not always, and when it's not true, it sticks out like a sore thumb. It's convenient for developers, but produces apps that have this uncanny valley "this isn't quite right" quality to them. Adobe used to do this when I worked there 20 years ago (honestly, I don't look at modern Adobe software if I can help it, so I don't know if they still do) with an internal component library called Adobe Dialog Manager that was built expressly so developers didn't need to worry about native widgets. The result was this "adobeness" to all the UI elements on both platforms (at that point we are talking Windows vs. macOS). There was no OS-level button. There was an ADMButton on both platforms, with hand-rolled behavior for rollovers, drawing styles, and general interaction rules, and it was this pleasantly uniform experience that sucked equally everywhere.


It *is* confusing, but in actuality it kind of works: it's a Windows Subsystem, as in the Hypervisor/VM platform, used to boot a Linux VM. And, more importantly IMO, the Linux distro is using this Windows Subsystem for booting, drivers, and networking (e.g. the "/sys/wsl" folder, and whether the Windows Subsystem will generate fstab, etc.)
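
For context, the per-distro knobs for what the Windows Subsystem generates live in `/etc/wsl.conf`; a minimal sketch (values shown are just illustrative):

```ini
# /etc/wsl.conf: controls what the Windows Subsystem generates for this distro
[automount]
enabled = true        # mount Windows drives under /mnt
mountFsTab = true     # process entries in /etc/fstab at launch

[network]
generateHosts = true        # regenerate /etc/hosts on boot
generateResolvConf = true   # regenerate /etc/resolv.conf on boot
```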


The "demo" is kind of bologna.

1) The code that is running is not what's presented; it executes (non-transpiled) vanilla JS.* Why not just show that?

2) Removing the box shadow makes the two much closer in performance.

3) The page could just be one sentence: "Reflowing the layout of a page is slower than moving a single item." GPU un-related.
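
To make point 3 concrete, here's a minimal sketch (my own, not the site's code) of the two approaches: changing `top`/`left` invalidates layout and forces a reflow, while `transform` is handled at composite time.

```javascript
// Two ways to move a box. Changing top/left invalidates layout
// (the browser must reflow); changing transform is compositor-only.
function moveWithLayout(el, x, y) {
  el.style.position = "absolute";
  el.style.left = `${x}px`; // triggers reflow
  el.style.top = `${y}px`;
}

function moveWithTransform(el, x, y) {
  el.style.transform = `translate(${x}px, ${y}px)`; // no reflow
}
```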

---

*Code that actually is running:

```js
, u = t => {
    h && clearTimeout(h);
    l.forEach((e, s) => {
        const { top: o, left: n } = m[r[s]];
        if (t) {
            e.style.transform = "translate(0px, 0px)";
            e.style.opacity = "0.7";
            e.offsetHeight; // forces a reflow so the next transform animates
            e.style.transform = `translate(${n}px, ${o}px)`;
        } else {
            e.style.transform = `translate(${n}px, ${o}px)`;
        }
        e.style.top = "";
        e.style.left = "";
    });
    t && (h = window.setTimeout(() => {
        l.forEach(e => e.style.opacity = "1");
    }, 500));
}
, d = t => {
    y && clearTimeout(y);
    l.forEach((e, s) => {
        const { top: o, left: n } = m[r[s]];
        e.style.top = `${o}px`;
        e.style.left = `${n}px`;
        e.style.transform = "";
        t && (e.style.boxShadow = "0 14px 28px rgba(239,68,68,0.45)"); // REMOVING THIS LINE = BIG DIFFERENCE
    });
    t && (y = window.setTimeout(() => {
        l.forEach(e => {
            e.style.boxShadow = "none";
        });
    }, 500));
}
```


Good points.

1) I thought of giving an easier-to-read example. I just moved the example to React, so the snippets now match exactly what's going on in the background.

2) It is true! Though using shadows on the optimized code doesn't slow it down. I added more toggles to test the same effects on the transform and top/left implementations.

3) I think it's still interesting to start with some reasoning and then observe that in practice things are really different. In fact, thanks for all the feedback, as it made me go back and do more investigation.

If you don't mind you can give the article a second look now :)


> The "demo" is kind of bologna.

The word you are looking for is "baloney". They are pronounced differently.


I always felt TiVo really did a great job of identifying how important good UX and UI are for consumer products. Partially, the monopolies/cable companies knew/know they were able to get away with poor UI since consumers didn't really have a choice when it came to cable providers/cable boxes, so it wasn't hard to beat them, but TiVo did actually do a good job.

I felt like they had consumer awareness at one point. Maybe if they went with there own premium streaming service, as opposed to only trying ad-based streaming services (like Pluto) OR continuing to try to make money charging people monthly for a subscription to use a device they first have to purchase.*

Instead they kept the old business model and moved to more business-to-business service offerings: selling metadata, APIs, TV guides, car infotainment; all oddities IMO, as most IPTV providers like to use turnkey solutions.

I actually use the TiVo Stream 4K as my smart device. Works great, gives me 4K, can download Android TV apps, and is cheap at $35.

Not a fan of ad-based TV (which is the TiVo+ thing, like Pluto, etc.), but I use it mostly for YouTube, Plex, etc.

--

*: My Plex server uses my HDHomerun for live tv; TiVo could have been both if it was more open. A TiVo competitor to Plex's Pass + Live TV service could of been there subscription revenue, and a TiVo competitor to HDHomeRun's devices could of replaced their DVR revenue. They could take the Tivo Edge, open it way up (as the HDHomeRun takes cable and give you actual m3u8's; this lets you decide where you view or record TV, and makes the device actually useful for commercial deployments as well (offices, restaurants, dorms, hotels, etc...). Pretty much: add features similar to Plex (i.e. combining my OTA/Cable recordings with my local media) + Plex's Live TV (Tivo already has the richest data and a sleeker guide) and combine the Tivo Edge CableCard and OTA in one device. This would appeal to many users, bring the hardware price down as it's one model, and provide them with both revenue streams like they are used to.


This is a total aside, but you've reminded me: a couple of years ago I switched from cable TV to just using broadcast and bought a new HDHomeRun tuner capable of receiving 4K OTA broadcasts.

Imagine my shock that these new broadcasts are DRMed. HDHomeRun can't decode them. I don't blame them; I'm absolutely incensed that OTA broadcasts have been allowed to start using DRM.


> I'm absolutely incensed that OTA broadcasts have been allowed to start using DRM.

We're back to the 1970s and early '80s, when some OTA channels were allowed to charge subscriptions.

The difference this time is that the new decoders send information /back/ to the TV station, so you can be further tallied, tabulated, profiled, and collated.

Which is the precise reason I switched to OTA. What I watch is my business, not theirs.


Lon Seidman (lon.tv on YouTube) has been covering this closely, and the FCC is aware: https://youtu.be/0YmeEp_N6pY


> Maybe if they went with there own premium streaming service

If you don't own the content, you get squeezed. Hulu, Spotify, all of these guys get nickel-and-dimed into oblivion.

Netflix understood this deeply, creating one of the biggest, most successful pivots in startup-dom.


That doesn’t apply to Hulu though. It’s been co-owned by various media giants since inception (currently Disney, previously Fox and NBC.)

It made sense back when it was launched but is basically redundant with Disney+ at this point. Still profitable though


Normally, I would just ignore small typos, but since you made the exact same mistakes consistently, there is a chance you are actually unaware, and will benefit from pointing this out:

> if they went with there [should be “their”] own premium streaming service

> Plex's Pass + Live TV service could of [should be “could have”] been there [should be “their”] subscription revenue

> a TiVo competitor to HDHomeRun's devices could of [should be “could have”] replaced their DVR revenue


Thank you.

Honestly, the "could of" is more of a "sometimes I write how I sound" thing, but anything else is more middle-of-the-night brain mush.

I actually re-wrote that bottom part at least twice because I had a lot to say but didn't know how to say it concisely. As I was writing it, I kept having less and less confidence that readers would have prior knowledge of what I was writing about (ex: Plex, HDHomeRun, TiVo Edge), so I kept defining or explaining things in parentheses and re-ordering the sentences; at one point I just had to say "good enough" and click reply.

I hate lengthy/wordy comments that coulda' () been just a couple of sentences, but I also love to explain things in a way that a wide range of people can comprehend, so it's a battle at times. (this reply is a good example...reply)


You may like this app: it can combine live TV with local files and DVR functionality with HDHR.

https://getchannels.com/


Holy moly, this thing has grown. I saw the Apple TV/Android app many, many years ago and figured it was just another basic/forked IPTV/M3U viewer, but looking at the website and "https://getchannels.com/releases/" -- what an app / what a feature set; can't imagine the codebase lol. Definitely going to check it out, thank you!


I held onto my TiVo for as long as I could until Spectrum forced me off.

I switched to AT&T Fiber but was keeping my Spectrum TV service just for my TiVo. When I called to turn off just Internet with Spectrum they terminated both. When I called to get them to turn my cable service back on they refused to reactivate my Cable Card that the TiVo uses. Since they were no longer required to support them, no new activations were allowed. I’m sure that played a huge part in this. Other providers like satellite or fiber TV had no obligation either.

Being able to pay $2.50 / month for a cable card and then use my TiVo with multiple minis around my house rather than paying per room to the cable company was great for years.

But YouTubeTV is excellent too. The only thing I miss is the ability to save recordings for as long as I want or record anything I want. There were some I kept for years and YT only lets you keep them for a few months.


>But YouTubeTV is excellent too. The only thing I miss is the ability to save recordings for as long as I want or record anything I want. There were some I kept for years and YT only lets you keep them for a few months.

cf. yt-dlp [0]

[0] https://github.com/yt-dlp/yt-dlp


Going to hijack a thread... is there any chance one could point a Plex server at a different backend (in the hosts file) and then emulate Plex's own functionality? So tired of the internet going down and not being able to log into my own shows.


Use Emby or Jellyfin instead. Neither require internet access and both do the part of Plex you actually want without the garbage.

Jellyfin is free, but I prefer Emby and bought the lifetime license on sale.


I've used a local server and just DLNA for over a decade and that's been fine on almost every device I've used it on.

There's also Jellyfin if you're really into the whole Plex thing.


There's also Kodi if you use Android-based devices and only need local playback. Infuse works similarly for Apple.

They just stream straight from the file share. No transcoding nonsense or server necessary.


DLNA doesn't do me much good... I need the parental restrictions and other such features.


You can just go to http://local_plex_server_ip:32400/


Jellyfin + Wireguard/Tailscale


Jellyfin?


Uhhh... Plex works fine without Internet?


There’s maybe some way to make it work, but by default it’s not so easy. I once had an internet outage and had to play the files with VLC as I wasn’t able to log into Plex on my NAS.


You don't know much about it. When you connect to a Plex server, you have to log into their backend, or it doesn't even know how to connect. It also does all the accounts/permissions stuff. An internet outage is, unfortunately, a Plex outage.


https://support.plex.tv/articles/200890058-authentication-fo...

In rare cases, you may decide that you wish to allow very specific access from the local network without authentication. You might do this if using a third-party Plex app, which doesn’t support authentication, for instance (though all modern official and third-party apps should already support authentication).

To make an exception, look in your Plex Media Server’s advanced network settings, under Settings > Server > Network > List of IP addresses and networks that are allowed without auth.

Here, you can specify LAN addresses as either a specific IP or a IP/netmask (to specify a range). Separate multiple values by a comma and be sure not to include any whitespace (e.g. spaces).
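
So, for example, a value like the following (hypothetical LAN addresses) would let a whole subnet plus one extra device in without auth:

```
192.168.1.0/255.255.255.0,192.168.1.50
```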


That isn't true. There's a setting in the server for networks which are allowed to access without authentication.


I have never set up a remote connection. My phone and laptop have a VPN connection back to my home network, and Plex just works. My TV also uses a local connection.

I believe that the regular "external" connection requires a STUN server, so it will fail without Internet.


This article has been re-written for over a decade. The so-called "complexity" is just a list of tools that each solve a specific problem.

Tooling isn't the problem: the complexity is inherent to modern web development. You see similar "hidden" complexity in other frameworks like ASP.NET, and in desktop GUI frameworks as well.

If you're using Rails as an API backend with React handling the frontend, it's almost a completely different application architecture than a traditional Rails monolith. So the list of tools (Vite, React, Prettier, etc.) is almost for a completely different problem (again, unless you use Rails for the FE; if you want to use Rails for the frontend, use Rails for the frontend; I'm not a fan of the mash-up at all).

The real issue is learning methodology: A lot of developers today start their careers with frameworks (point 4) before learning the fundamentals of the web (points 1-3).

1. HTML for markup.

2. CSS for styling.

3. Server-side logic (e.g. a <form> can POST and return a completely different page at the same URL) and databases for dynamic content.

4. Then, JavaScript for interactivity.
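
Point 3 in particular is something framework-first learners often miss; a tiny hand-rolled sketch (hypothetical handler, escaping omitted) of one URL serving a form on GET and a different page on POST:

```javascript
// Same URL, two behaviors: GET returns the form, POST returns a result page.
// (In a real app you'd HTML-escape user input before interpolating it.)
function handle(method, body) {
  if (method === "GET") {
    return '<form method="POST"><input name="q"><button>Search</button></form>';
  }
  // POST: parse the urlencoded form body and render a completely different page
  const q = new URLSearchParams(body).get("q");
  return `<h1>You searched for: ${q}</h1>`;
}
```

Wire `handle` into any HTTP server and both verbs hit the same route, with no client-side JavaScript involved yet.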

Embrace the tools: Each tool on the list (Vite, Tailwind, etc.) exists for a reason, and they're all necessary for a modern web application. Saying there are "too many" is an amateur take on the reality of the ecosystem.


Complexity is not inherent to web development. If anything it is now possible to get more done with less.

Hotwire is sort of vanilla Rails, and it enables you to create very modern experiences, with content live-updating through web sockets, and it is basically a one-liner to set up.

The de facto way to deliver JS in rails has also become far simpler through import maps. There is no build step for that. Tailwind support is a flag away when generating a new rails app and is super simple.

Deploying has even become simpler through Kamal.

So no, complexity is not inherent to web development and the article is wrong in marking Hotwire as “complexity”. If anything it makes it simpler.

I agree with your point about learning, but learning shouldn’t be about learning more tech. The learning should be how to get more done with less. Anyone can use 20 different programming languages and servers, the skill lies in using 4 of them to do the same and outperform a thousand person team with just 3 devs.


>> "Complexity is not inherent to web development"

>> "Hotwire is sort of vanilla rails and it enables you to create very modern experiences with content live updating through web sockets and it is basically a one liner to setup."

My point was that web development at its core isn't complex; the core is simple. But modern web development is.

Your "Hotwire is sort of vanilla rails" statement is a perfect example.

What you claim to be simple is a big list of tooling, web sockets included, integrated together. The end result is that using it might be a "one-liner", but that doesn't mean it's simple. And that's OKAY. Because simplicity should be the standard, and adding things, like sockets for live updates, should be something you explicitly enable (with modern web APIs it's definitely simpler than it used to be, but that doesn't mean it's simple).


This really is different. Hotwire is simple. You can read through the library's codebase and understand what it's doing fairly easily, and then when working with it the flow is straightforward. Good luck doing that with React


> Good luck doing that with React

Data is sent to React by inertia/graphql/whatever and React renders it. It’s pretty straightforward.

Edit: I do love LiveView/HotWire/HTMX etc but honestly everything is a trade off and there are times just rendering a react component is less complex.


“Just rendering a component” takes thousands of nested function calls, covering a million lines of code; it’s not possible for a person to read or understand the whole process unless they dedicate months to it.


Sure it adds complexity, but isn't that what abstractions are for? We are talking about grokking how data flows in _a web app in Rails_. I wouldn't think usual workflow requires going into actual inner workings of React :p


Well React doesn't come by itself. You need a router, probably some way of managing shared state, bundling, compiling your TypeScript, and 7 other libraries

The more stuff you add on the harder everything is to understand, and the less stable your app becomes until suddenly you need specialists for every piece just to keep things chugging forward. Everything needs greasing and maintenance over time..

..and then in 4 years the React team decides "oh you know what the way Svelte is doing things is actually way better.. we'll need a re-write to integrate their ideas". Now what?

"that wouldn't happen! so many businesses depend on React!".. uh they have no obligation to make things compatible with whatever you've built. They're not working for you. What happened with AngularJS? Vue 2?

Hotwire is easy to understand (React "just renders it" is a massive oversimplification)

If Hotwire rewrites? I create a private fork and continue on. Who cares

If I want to tweak how Hotwire works cause it'll benefit my app specifically? I do it myself

I'm not against adding complexity.. but if you care at all about longevity and long-term productivity then adding React really needs a tonne more consideration than it gets


I think we fundamentally agree that we want to be careful about adding complexity to a project. Funnily enough there have been many times where I really thought Hotwire equivalent would have cut down a lot of complexity. I've also actively looked at web components at work and for hobby projects to see if we could make/keep things simpler.

But maybe I'm biased because I've been working with React for a long time; I don't find it too daunting to manage dev tools around React. When React was young, I remember there was _a lot_ of ecosystem churn, but now it's more or less settled and I don't think it's too bad.

I don't know how Hotwire works that well, as most of my experience is with Elixir's LiveView, but at least for LiveView there is quite a bit going on under the hood to make it performant for large lists and to handle error states gracefully. And I (maybe incorrectly) assume Hotwire is similar, so I feel like it may not be as simple as you say. (Edit: it is simpler than React though!)


It also doesn't need to be all or nothing. I've become a big fan of progressive enhancement or an islands approach. Default to SSR and scale it up as needed


Once you give up any hope of understanding the inner workings of the frameworks you are using, you're no longer a programmer, you're a cargo cultist. Now compound this a dozen levels deep, with systems piled on systems built by people who don't understand the other systems they are building on top of, and you have the current mess.


Every engineer in our industry is a cargo cultist by that definition. Including experts. Where do you draw the line? I'm sure you have one, but your line is no less arbitrary than mine or someone else's.


A competent programmer should be able to at least conceptualise every level from the transistor up - not necessarily completely understand, but at least know roughly what it does, and what it rests on, and what rests on it. Transistors, gates, logic, state machines, instruction sets, assemblers, parsers, compilers, interpreters, operating systems, memory allocation, graphics, browsers, that sort of thing. Not at all unreasonable for someone with a decent computer science degree, surely?

Of course, you don't have to know any of that to grasp how to bang a page together with today's web frameworks; but you end up with the resource-hogging unmaintainable security disaster that is the modern web in the process.


Ok, but I've worked with people who are pretty good with web dev, but I guarantee you they don't know how memory gets requested from the operating system.

Like, sure it helps in some contexts, but in their context it would largely be irrelevant.

For the vast majority of people, it's fine to know the basic tradeoffs between stdlib container types. Most web performance problems today come from misusing tools, whether container types, bad algorithms, memory leaks (and I don't think knowing how an OS manages memory would help them in JS, for example), DOM pollution, or oversized assets. And my take is that people are often too overworked to care about it, rather than lacking awareness of these things lol

On the other hand, if you're a systems engineer, then you absolutely do need to know all of this stuff.

And I bet you they'd navigate stuff like that better than a systems engineer, because that's more useful to their day-to-day!


Exactly. You don't need to really know how to program to use web frameworks, and web programmers are much cheaper than systems engineers. It's a no-brainer in short-term business terms. But there's always a price to be paid for decisions like this, and this overhead is it.

Experience of similar tradeoffs tells us it will only get worse over time, and LLM-generated web programming will make the whole process get even worse even faster.


And you think slapping ActiveRecord to a Ruby class doesn't hide thousands of lines of code?

Do you need a truly holistic and in-depth understanding of every piece you use? How in-depth does your understanding need to be to use ActiveRecord/ActionCable/etc.? What about underlying libraries? Protocols? Engine internals?

Do you need an in depth and holistic understanding of React and all its dependencies to write () => <div>Razzle Dazzle</div>? Nah, surely not


I don't think a typical React rendering call is even 100 calls deep. React itself adds maybe a dozen frames. Your components could be complicated, but likely they don't add more than another dozen or two. React is pretty efficient if you hold it right, and use for its intended purpose, that is, large and complex interactive UIs.


The event handling alone is almost a hundred calls deep. Because a lot of the work is happening asynchronously, you won't see most of it when stepping through the debugger starting from a click handler for example, but try adding a breakpoint to the compiled JSX.

With fibers (React >16) and a couple commonly used hooks you'll easily hit a thousand high call stack.


Do you mean that the async chains of something().then(somethingElse()).then(...), into which async/await code desugars, grow 1000 levels deep? I never encountered it, but, OTOH, I did not research this in particular. V8 very definitely does not produce a call stack out of it, but schedules every new Promise as a separate task. (A bit like a Lisp threading macro.)

So, what forms 1000 levels of nested calls? Is that anything specific to React? I'm very curious now!


I meant the actual React code: handling the click event, running the component code, resolving dependencies and running hooks, building the virtual dom then handing off to react-dom for reconciliation, scheduling updates, and finally modifying the DOM. Not your application code.

The async comment was to point out that if you attach a breakpoint to your `onclick` handler, you will reach 'the end' of execution after less than a hundred function calls. But the actual work (see above) inside react and react-dom hasn't even started, as it's pushed to a queue. This may give the impression that far less code is running than actually is.
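
That deferral can be illustrated with a toy sketch (my own, not React's actual scheduler): the click handler returns before the queued work runs, so a breakpoint in the handler never sees it.

```javascript
// Toy model of React-style scheduling: the handler finishes first,
// then the queued "render work" runs from the microtask queue.
const order = [];
function onClick() {
  order.push("handler start");
  queueMicrotask(() => order.push("queued render work"));
  order.push("handler end");
}
onClick();
// Right after onClick() returns, `order` holds only the two handler entries;
// the queued work appears once the current task yields.
```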

This is still in context of "you can read through the library's codebase and understand what it's doing fairly easily"; so yes, it's specific to React being very complex vs something like htmx, which most devs could understand in its entirety in one afternoon.


Most JSX expands to a single expression, but I guess you mean a single component? I'm not sure what's controversial here. I've attached debuggers to React components many times.


You also have to take into account the browser and OS call stack.


This does not change if you write pure Javascript that directly mutates DOM without calling any intermediate functions.

Given the speed of rendering that browsers achieve, I would say that their call stack during this is highly optimized. I don't see OS doing much at all besides sending the drawing buffers to the GPU.


And also, with React you are not only buying into React but also into a JavaScript dependency/package manager, be it NPM or any other. Installing JS packages itself already comes with its own problems. And then people probably buy into more stuff installed through that package manager: a component library and a "router" here, some material style and a little library to wrap the root node with some styling provider or whatever it is called there, ... before you know it, a typical FE dev will have turned your stack into 80% React and related dependencies, and the maintenance on that will continue to grow, as new features "can be solved sooo easily, by just adding another dependency" from the NPM/React world.


As someone who’s kind of a newbie in rails, but with 10 years of experience in other languages…

It sounds ok to adapt tools if needed (won’t get into whether tools are actually needed, let’s assume they are).

But Rails is supposed to be a giant, everything and the kitchen sink framework bringing everything from an ORM through its own console to scaffolding code generation.

If adding tools to the setup is needed, isn’t then rails the thing to reconsider? Something more modular could probably work better.

Just reading “vanilla Rails” sounds like a red flag. How can that behemoth be considered vanilla?


> But Rails is supposed to be a giant

All the tools on the article are about client-side rendering and operations.

It's ok if Rails decided to have opinions on client-side rendering and operations now, but it's far from expected. And it would alienate some users.

Instead, the article's conclusion is the correct one. You don't need to mess with complex client-side and ops tools if you don't want to. You can build many things perfectly well without them.


On one hand, I hear you, because Rails has focused on the server side.

On the other hand, Rails has had client-side opinions basically forever; when I started using Rails during the 2.x days in like, 2009, there were helpers that injected JavaScript to make forms better, Rails 3.1 included the asset pipeline, an attempt to compete with webpack, in 2011. Even the current generation of these things has been around for a long time, Hotwire is four years old at this point.


I mean, I think the problem with SPA-like client-side approaches is that there aren't any that have felt _good_ with Rails, let alone great.

Absolutely true that not every app needs to be a SPA from day one, but I do wish there were a few more common solutions for "hybrid" apps, which use some pages as a SPA. That said, it's not that bad once you've got it set up. I like that Rails offers a solution like import maps, but I do also wish there were better core functionality for using some kind of package manager.

Like the Redis analogy: whether or not you need Redis, there are good defaults and very good 'third party' solutions for background jobs (or caching). You don't even need Redis in many cases, but it's easy to grow into.


'vanilla rails' is really a bunch of other tech bundled together, including much of it rendered using other technologies. Hotwire? JavaScript and websockets. The thing that always gets me about Rails (and I am a decades-long fan of it) is that when they upgrade major versions, all of the tooling that comes bundled in the box changes. Sure, Rails 1.0 didn't have websockets bundled in, but it did come built on Prototype.js, which if you still have in a Rails 8 project probably gets a lot of laughs. There's a lot more to 'staying current' in a long-lived Rails app than just upgrading the gems.


>'vanilla rails' is really a bunch of other tech bundled together.

That was roughly my point - unless there's something wrong in my mental model, a Rails user is someone who trusts Rails to get them a sane and consistent bundled pack of tools so they can skip the choice and get to work. If one is going to choose a different set of tooling later on, that seems to defeat the point of using Rails in the first place.

I was not judging the framework itself, for the record. Just saying that if you go for it, going "vanilla" seems like the only sensible choice.

>when they upgrade major versions, all of the tooling that comes bundled in the box changes.

this does sound like a major con indeed.


Some of these extra tools are added because someone wants to display "a blog post" using React, and so they think they need all those extra FE tools.

If there's no need for Facebook-level reactivity, then everything provided by a simple `rails new` is enough.

But I would say you kind of need to think about the UX with simplicity in mind, or else everything becomes a blog built with React, because anything can be forced to look like it needs to be reactive and do a lot of unnecessary stuff.


> If adding tools to the setup is needed

It is not needed.


I think the pushback isn't so much against the existence of the tools per se, more against the pervasive idea that everyone needs them.

When every other learning resource is titled something like "Ten reasons you need to be using the MONGOOSE stack right NOW!", it's no wonder we've got people trying to shove Redis into their baking blogs.

The fact of the matter is, the average website would be fine without a "stack" of any kind, but no YouTuber sells sponsorships telling their viewers that. Ergo, many junior devs genuinely don't know it.

While I agree that people should be primarily learning the core tech, it's a difficult message to deliver amongst the cacophony of corporations trying to promote their services.


And in jobs, too: many if not most places you go will already have made decisions about how they do web stuff, and once more juniors are given the impression that this is how things are done "professionally". In reality, it is no more professional than any experienced hobbyist making their website, and often worse in aspects such as accessibility (requiring JS and often breaking browser functionality like the back button), complexity (maintaining the interplay of all those tools and libraries), maintainability (updating your dependencies frequently), and feedback cycle (a complex build pipeline, instead of just delivering HTML, CSS, and perhaps a sprinkle of JS).

This is why I don't want to do much frontend at businesses where there is a separate dedicated FE team. It seems to me that traditional fullstack devs (not FE devs who want to do backend stuff in Node.js, but devs who happened to learn web standards like HTML, CSS, and JS along the way, not as a "one ring to rule them all") make better websites. Maybe not as fancy optically, but often more responsive, and better in the aspects listed above. But this may be bias, because such websites are few and far between these days.


As an example of this, I had to build a management interface for the backend of the project I was alluding to above; itself a web app in its own right. Written entirely in Python, with HTML templates, CSS and JS and a bit of SQL with no "web frameworks", no other dependencies except nginx to proxy requests to it. Easy and quick to develop (a couple of days), and very unlikely to suffer from software rot, unlike a web-framework based system - Python (at least since the Python 3 debacle) has excellent backward compatibility, and basic HTML, CSS and JS likewise.

What it did lack, though, were fancy widgets and other decorative bells and whistles. But is it worth the cost of pulling in the vast overhead of "modern" frameworks, and their resulting complexity and maintenance problems, just to have those?


I think the point of the article is that it's likely you didn't need a "modern web application" in the first place, because vanilla Rails would work fine. But you won't know that if you don't bother understanding the choices made in vanilla Rails.


> they're all necessary for a modern web application

Everything wrong with modern web applications.


> Each tool on the list (Vite, Tailwind, etc.) exists for a reason

Yes!

> and they're all necessary for a modern web application

No! Just as important as understanding the purpose of a tool, is also understanding when a certain tool is a bad fit for a certain project. There are no silver bullets.


The core rails philosophy has been, and continues to be, "reasonable defaults out of the box". In other words, if you are running `rails new` today, you should just start day 1 with the things that are preconfigured. One day you may need React/tailwind/etc, but vanilla rails will ship you to prod just fine on day 1 without configuring anything.

That doesn't mean you should rewrite an existing app to a more 'vanilla rails' config. You've already eaten the migration cost.


When you write "The complexity is inherent to modern web development" you are describing the problem, not a requirement. When you are pulling in a thousand npm packages just to make a simple transactional website that is a wrapper over a database and a few SQL queries, you're doing something wrong.


> Tooling isn't the problem: The complexity is inherent to modern web development

> Embrace the tools: Each tool on the list (Vite, Tailwind, etc.) exists for a reason, and they're all necessary for a modern web application. Saying there are "too many" is an amateur take on the reality of the ecosystem.

Depends. One can still write production-grade web applications with far fewer dependencies. You can write a Golang web server with minimal dependencies, keep writing CSS "like a peasant", and perhaps use jQuery on the client side for some interaction. What's wrong with that? If you hire a strong team of engineers, they will be pleased with such a setup. Perhaps add Makefiles to glue some commands together, and you have a robust setup for years to come.

But some engineers find that counterproductive. They don't want to learn new things and stick to what they know (usually JS/TS); they think a technology like CSS is "too old", so they need things like Tailwind. Makefiles are not sexy enough, so they add third-party alternatives.


Production-grade web app without advanced build tools? Depends.

CSS classes not scoped and starting to leak? You hire more frontend developers, and because there is no type system you get critical exceptions, with no automated testing to discover them?

Correctly handling hyphenation of user-generated content? Safari decided to handle audio differently in the latest version and you have no polyfills? The iPhone decided to kill the tab because of memory pressure, because someone uploaded an image with exotic properties, and you have no CDN service like Fastly's image optimizer to handle that? Support for right-to-left languages such as Arabic? The backend returned a super cryptic response that actually originates from the user's private firewall?

a11y requires you to support resizable browser text, someone is using the Google Translate Chrome extension at the same time, and you can't possibly know what the layout of the page will look like?

Some Samsung devices bypass browser detection completely, so you don't know if the user is on mobile or not? localStorage.setItem throws an error when the device is low on storage, etc., etc.

Once you get to a certain scale of users, even the simplest of tasks become littered with corner cases and odd situations. For smaller-scale applications, it is not necessary to have a very wide tool arsenal. But if you are facing a large user base, you need some heavy-caliber tools to keep things in check.


You're not considering how well your simplified solution scales to a team of 100+ people developing the same codebase.

Most of the problems of software engineering are not technical, they are social. Web development is simple for a team of 1-10. I love the idea of hand-writing CSS and relying on simple scripts for myself and a few teammates. Unfortunately it doesn't scale to large orgs.

It's not that people don't want to learn.


Counterpoint: These tools add complexity, and you don't need them. If you step out of the system and look in, you see madness. The problems they solve are created by other tools; they are problem-generating systems.


That's like saying the Unix philosophy adds complexity because it dictates a tool should do one thing well. Composition of tooling (consisting of many individual tools) is the basis for lots of rock-solid stacks.

I don't think the Unix philosophy is universally correct either, but "too many tools" is a complaint without much consequential basis. It's an aesthetic problem not a functional one.


No, it's a functional one.

The unix approach works because the tools themselves almost never change and the platform doesn't change either.

Web tooling changes, and the web platform changes. The more tools you use, the greater your risk. If something changes and you've coupled 100 things together, you have to make a lot of changes. If you stay in control and delegate to standard browser functionality, you have to make fewer.

There's also the issue of tools going out of date and being deprecated. Again, that's risk.


I am speaking specifically to examples used in the article, and related web dev paradigms that are popular. In the general case, there are tradeoffs to be made when adding tools, libraries, additional code of any sort to a work flow. In the case of web dev, adding these are bad tradeoffs.


You don't have any concrete complaint beyond the number of tools, and the nebulous idea that solving the problem each addresses within a smaller ecosystem would be somehow better.

Removing ESLint means... you don't have a linter. It doesn't have an upside. Removing Tailwind means you need to write more verbose CSS. Removing Babel means you have to use older JS idioms for browser compatibility. Etc.


I don't think complexity comes just from the number of tools, but the disparate things you "need" them to do. Redis, Vite, large CSS toolkits -- you're learning a bunch of large components.

I mean, complexity comes from many places, but compared to the 'Unix philosophy' most of these tools are quite large. Obviously, there's quite a bit to learn about how a *nix OS works, but treating tools as small and composable, with simple interfaces, helps a lot.

The web dev example of pub/sub is funny, because chances are, if you're using Rails, your primary DB (probably Postgres) already has a pub/sub system built in. Or you can just use any RDBMS for your job management system.


Not really I guess. Of course some tools are not mature, but many are mature and solid enough to solve real tech tasks.

Real problems are not caused by tools / problem generating systems but by silly people who fetishize complex tooling for simple jobs. Tooling is chosen not by merit but by hype.


> and they're all necessary for a modern web application.

Modern doesn't mean much.


> The complexity is inherent to modern web development.

Underemphasized

Responsive networked GUIs are complex.


You're saying each tool solves a problem. That's true. But is it a problem that you have? And is the pain of that problem larger than the pain of introducing another tool that you need to debug when things go wrong?

Also, browsers have really stepped up their game. There is now native CSS nesting[0], cross-document view transitions[1], and much more. So the above calculus is continuously shifting, so much so that I think it's best to start with a really simple stack nowadays.[2]

[0]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_nesting...

[1]: https://webkit.org/blog/16967/two-lines-of-cross-document-vi...

[2]: https://mastrojs.github.io/


> The complexity is inherent to modern web development

No it isn't.

> they're all necessary for a modern web application

No they aren't.

> Saying there are "too many" is an amateur take

Yikes.


Unpopular opinion: I think it's wild that ANY ORG would pay $200k for a chat app. If I ever ran an org that needed a chat app and the costs came even close to $200k a year, I would rather hire an engineer, contract a designer, and create our own, or more likely, contribute to or fork an open source project like Matrix, giving us the ability to *really* integrate it into our company/tools, as opposed to spending it on IRC+ for "good enough" integration. PLUS ... our data stays under our control.


Not unpopular at all. That’s the way


I would say: for a non-profit, sure. For a larger company like, idk, AT&T or IBM or Goldman Sachs, $200k/yr would be cheap to handle all of the internal chat.

