I personally am a user of JavaScript and don’t care what it is called. Call it FuckScript for all I care. How does this benefit anything other than Deno marketing?
The malware could have been JS code injected into the module entry point itself. As soon as you execute something that imports the package (which you did install for a reason), the code can run.
I don't think that many people sandbox their development environments.
It absolutely matters. Many people install packages for front-end usage which would only be imported in the browser sandbox. Additionally, a package may be installed in a dev environment for inspection/testing before deciding whether to use it in production.
To me it's quite unexpected/scary that installing a package on my dev machine can execute arbitrary code before I ever have a chance to inspect the package to see whether I want to use it.
I've been using pnpm and it does not run lifecycle scripts by default. It asks for confirmation and builds a whitelist if you allow things. That might be the better default.
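For anyone curious, this is roughly what that looks like in practice. A sketch assuming pnpm 10's allowlist mechanism (the `onlyBuiltDependencies` list in pnpm-workspace.yaml, also settable under the `pnpm` field in package.json):

    # pnpm-workspace.yaml
    # Only the packages listed here are allowed to run their
    # install/postinstall scripts; everything else has lifecycle
    # scripts skipped by default.
    onlyBuiltDependencies:
      - esbuild
      - sharp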
Go already has a JSON parser and serializer. It kind of resembles the JS API, where you push some objects into JSON.stringify and it serializes them, or you push in a string and get an object (or string, etc.) back from JSON.parse.
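A quick illustration of that parallel with the plain stdlib encoding/json (the values here are just made up for the example):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Roughly JSON.stringify: marshal a Go value into JSON bytes.
        b, _ := json.Marshal(map[string]int{"answer": 42})
        fmt.Println(string(b)) // {"answer":42}

        // Roughly JSON.parse: unmarshal JSON bytes back into a Go value.
        var m map[string]int
        _ = json.Unmarshal([]byte(`{"answer":42}`), &m)
        fmt.Println(m["answer"]) // 42
    }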
The types themselves have a way to customize their own JSON conversion code. You could have a struct serialize itself to a string, an array, do weird gymnastics, whatever. The JSON package calls these custom implementations when available.
The current way of doing it is shit though. If you want to customize serialization, you basically have to return a JSON string. Then the serializer has to check whether you actually managed to return something sane. You also have no idea what JSON options are in effect. Maybe there is an indentation setting or whatever. No, you just return a byte array.
Deserialization is also shit because a) again, no options, and b) the parser has to hand you a byte array to parse. Hey, I have this JSON string, parse it. If that JSON string is 100MB long, too bad, it has to be read completely and allocated again for you to work on because you can only accept a byte array to parse.
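Concretely, this is the shape of the v1 hooks, with a made-up Temperature type as the example: you get bytes in, you give bytes out, and the encoder's settings never reach you.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Temperature serializes itself as a string like "21.5C".
    type Temperature float64

    // v1 hook: you must return a complete, valid JSON value as raw bytes.
    // No access to indentation or any other encoder settings here.
    func (t Temperature) MarshalJSON() ([]byte, error) {
        return json.Marshal(fmt.Sprintf("%.1fC", float64(t)))
    }

    // v1 hook: the decoder hands you the whole value as one byte slice,
    // already read into memory, however large it is.
    func (t *Temperature) UnmarshalJSON(data []byte) error {
        var s string
        if err := json.Unmarshal(data, &s); err != nil {
            return err
        }
        _, err := fmt.Sscanf(s, "%fC", (*float64)(t))
        return err
    }

    func main() {
        b, _ := json.Marshal(Temperature(21.5))
        fmt.Println(string(b)) // "21.5C"

        var t Temperature
        _ = json.Unmarshal([]byte(`"18.0C"`), &t)
        fmt.Println(float64(t)) // 18
    }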
The new API fixes these. They hand you a Decoder or an Encoder. These carry any options down from the top, and they can stream data. So you can serialize your 10GB array value by value while the underlying writer flushes it to disk, for example, instead of allocating everything in memory first, as the older API forces you to.
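Very roughly, that looks like the sketch below. Names are taken from the encoding/json/v2 proposal (the jsontext package and the MarshalJSONTo hook); it needs GOEXPERIMENT=jsonv2 on Go 1.25 or the github.com/go-json-experiment/json module, and the exact API may still shift, so treat this as an assumption rather than gospel.

    package main

    import (
        "encoding/json/jsontext" // GOEXPERIMENT=jsonv2; API may still change
        jsonv2 "encoding/json/v2"
        "os"
    )

    // Readings stands in for a huge dataset we don't want to buffer whole.
    type Readings struct{ n int }

    // v2-style hook: we get the live Encoder, so the array can be emitted
    // token by token while the underlying io.Writer drains to disk/stdout.
    // Any top-level options travel inside the Encoder instead of being
    // invisible to the custom marshaler, as in v1.
    func (r Readings) MarshalJSONTo(enc *jsontext.Encoder) error {
        if err := enc.WriteToken(jsontext.BeginArray); err != nil {
            return err
        }
        for i := 0; i < r.n; i++ {
            if err := enc.WriteToken(jsontext.Int(int64(i))); err != nil {
                return err
            }
        }
        return enc.WriteToken(jsontext.EndArray)
    }

    func main() {
        // Streams straight to the writer: [0,1,2,3,4]
        _ = jsonv2.MarshalWrite(os.Stdout, Readings{n: 5})
    }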
There are other improvements too, but the post mainly focuses on these, so that's what I got from it. (I haven't tried the new API btw, this is all from the post, so maybe I'm wrong on some points.)
> If that JSON string is 100MB long, too bad, it has to be read completely and allocated again for you to work on because you can only accept a byte array to parse.
I was not sure whether this was the case, since `json.NewEncoder(io.Writer)` and `json.NewDecoder(io.Reader)` exist in v1, so I checked, and guess what, you're right! Decode() actually reads the whole value into an internal buffer before doing any unmarshalling at all. I had always assumed it kept some kind of internal stack for bracket-matching and type-safety in a streaming context, but no, it doesn't do any of that! Come to think of it, it does make sense: a partial unmarshal would be potentially devastating for incrementally updated data structures, since it would leave them in an inconsistent state.
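For context, the one streaming thing v1's Decoder does give you is a sequence of separate top-level values; each individual value is still buffered whole before it's decoded. A quick sketch:

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "strings"
    )

    func main() {
        // v1 can walk a stream of distinct top-level JSON values...
        dec := json.NewDecoder(strings.NewReader(`{"n":1} {"n":2} {"n":3}`))
        for {
            var v struct{ N int }
            if err := dec.Decode(&v); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Println(v.N)
        }
        // ...but each Decode call reads that entire value into its internal
        // buffer first, so one giant array/object is slurped whole before
        // any unmarshalling happens.
    }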
If you don't like changing APIs, I'd stay away from the Remix guys. I know it's not like Next, but I've used React Router, which had some API churn, later evolved into Remix, and then back to React Router... Backward-incompatible changes are its signature. The documentation story is a problem too because of that: completely different things are named the same, and they're now building a new Remix, not even on React as far as I can tell.
Stick with a single version and you'd probably be happy though.
While the "Remix" renaming / branding is a little confusing, the React Router team has always done a fantastic job delivering a robust solution that properly leverages the web as the platform. Its framework mode (fka "Remix") is simpler and better than Next.js, and more featureful than vite-ssr. Want to mutate data? Use a form. Fetch data? Uses browser-native fetch under the hood. It's all about the fundamentals: HTML and HTTP. You can decide how much clientside JS to ship, and mostly eliminate it. OR, if you want a traditional SPA, go for it. A quick HN comment thumbed on my phone can't do it justice -- but it's very, very good. And its maintainers have a stellar track record. (No vendor bias like w/ Next.js / Vercel.)
FWIW, I've been doing webdev-related work for a living since 1998, and React since 2016.
Their issue with breaking changes goes back way before the Remix days: React Router introducing massive breaking changes at every major version, requiring significant rewrites, was already a running joke in the community.
YMMV, but the current docs seem fine to me, though it was pretty bad during the Remix -> RRv7 transition. You can also learn a lot from their GitHub activity (proposals/RFCs/issues). The API reference has some additional docs too.
New devs coming in and expecting the framework to be "batteries included", which it absolutely is not, will also have a bad time. Node APIs, ALS/context, handling app version changes on deploys, running the server app itself (if in cluster mode, e.g. with pm2, what that means in terms of a "stateless" app, wiring up all the connections and pools and events and graceful reloads and whatnot...), hell, even basic logging (instead of console.xxx)... all of that is up to you to handle. But the framework gives you space.
People new to React and/or Node will be confused as hell for quite a bit... in such cases I would add something like 3 months of personal dev time and learning just to wrap your head around everything. The React docs themselves say you should use a framework if you're using React in 2025, but it's not that easy. There is a cost, especially if you go the full-stack route (client + server + hydration) of having everything under one "TypeScript roof". The payoff is big, but definitely not immediate.
I have a tendency to use different stuff on new projects for the sake of it. I've built apps with Express + React on the client, Angular, Vue, Next, Nuxt... and used Go, .NET, Node, PHP, etc. on the server. I always find good and bad parts and appreciate different aspects of different solutions.
Not Next though. We built a pretty large app on Next and it was painful from start to finish. Every part of it was either weird, slow, cumbersome or completely insane.
We still maintain the app and it is the only "thing" I hate with a passion at this point. I understand that the ecosystem is pretty good and people seem to be happy with the results given that it is extremely popular. But my own experience has been negative beyond redemption. It's weird.
I thought Wayland had some restrictions on global clipboard access, and the last time I tried, none of the well-known clipboard managers worked as expected. (Also, they all looked like shit.)
This has been one of my pain points switching from macOS to Linux or Windows. Great job.
I actually went looking at the source code to see if this would work on Wayland and it doesn't. The clipboard snooping is implemented by listening for events using gdk.Clipboard, which is not an ext_data_control_v1 implementation. So on Wayland it'll only notice clipboard events if it's in focus (or if the compositor sends clipboard events to unfocused windows, which I'm not sure any do).
Edit: Yes, tested it now and it doesn't detect clipboard events from Wayland windows when it doesn't have focus. It only detects events from Xwayland windows when unfocused, or if I copy something from a Wayland window and then focus the clyp window then it detects the thing I copied.
It's almost as if a Wayland compositor should keep a list of trusted apps to broadcast clipboard events to, somewhat similar to how screenshots are handled. (Not that Wayland is well-rounded in this regard.)
The ext_data_control_v1 protocol I mentioned is a protocol specifically for clipboard managers. So a client that wants to be a clipboard manager would implement that protocol. There are already implementations of it like wl-clipboard. There is no need for the compositor to broadcast regular clipboard events (wl_data_offer).
Now the compositor could certainly keep an additional list of trusted applications that are allowed to be clients of the ext_data_control_v1 protocol. Though identifying the client to enforce such a thing is a bigger problem than just maintaining a list of applications, because the protocol has no client identification. AFAIK every compositor that supports that protocol has no restrictions on clients requesting it, though something involving the security-context protocol might change this in the future.
I see the prompt-less permission config is based on the executable path. How does it get the executable path for the client? And is it robust against me spinning up a mount namespace with an arbitrary /usr/bin/grim that I control?
That's interesting... I never ran into this; I've been using various clipboard managers on Wayland (sway at first, now niri) for years without issue. CopyQ is what I use these days and, while not quite as pretty as this one, it's great!
I have done a lot of data migrations between SQLite -> Postgres and such using DuckDB. It works great, but it does not seem to perform well. I'd simply leave an instance churning through the data, but a specialized small CLI tool would probably be a lot faster.
I don't remember the last time I used Safari on iOS, but once I started using Kagi, I was naturally drawn to Orion, and it's been the best browser experience I've ever had on mobile.
The included ad-blocker is a big factor in the great UX.
That seems wild to me, but admittedly I don't search from the address bar at all. Is setting your preferred search engine as your homepage and opening a new tab to search really such a huge burden?
Yes. I do not tend to open new tabs. I almost always use a single tab for browsing; only when I need to keep something around do I open a new tab to preserve the old one.
That means the address bar is my main interaction with the browser.
I remember downloading stuff from Usenet 20 years ago and the PAR files that went with it. Basically, files were split into chunks and you’d merge the chunks. If a chunk was missing or corrupt, the PAR files would let you recover the missing part. You did not need to download entire PAR chunks either. You lost 1MB? Download 1MB of PAR data and recover it (or something like that, can't remember now).
That blew my mind and I went down a rabbit hole of error correction. I did not know about Reed-Solomon or any other methods. It took days just to understand it, and I implemented my own shitty PAR-like thing in the process.
This brings that back, as I'm more curious about the error correction than the actual bit encoding.
I remember doing this for years as well; it seemed like magic, and it blew my mind that any piece of a movie could be rebuilt from a small amount of PAR data. ChatGPT is quite good at explaining it.
The concept of being able to (for a physical example) rip ANY page out of a book and replace it using only the information on your single "magic page" is incredible. With two magic pages you can replace any two torn-out pages, and the magic pages are interchangeable.
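The simplest instance of that trick is plain XOR parity: one parity "page" can rebuild any one lost page. PAR2 uses Reed-Solomon codes so that k recovery blocks can repair any k losses, but the single-block case already shows the idea; a minimal sketch:

    package main

    import "fmt"

    func main() {
        // Three "pages" of a book, padded to equal length.
        pages := [][]byte{
            []byte("page one"),
            []byte("page two"),
            []byte("page 3!!"),
        }

        // Build the "magic page": XOR of every data page, byte by byte.
        parity := make([]byte, len(pages[0]))
        for _, p := range pages {
            for i := range parity {
                parity[i] ^= p[i]
            }
        }

        // Tear out page two, then rebuild it: XOR the parity with every
        // surviving page and the missing bytes fall out.
        lost := 1
        rebuilt := append([]byte(nil), parity...)
        for idx, p := range pages {
            if idx == lost {
                continue
            }
            for i := range rebuilt {
                rebuilt[i] ^= p[i]
            }
        }
        fmt.Printf("recovered: %q\n", rebuilt) // "page two"
    }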
The math we have mastered is incredible. If only they could impart this wonder to children instead of rote worksheets enforced by drill sergeant math teachers.
While I'm ranting: I checked out a book from the library yesterday called "Math with Bad Drawings". It's very fun and approachable for anyone with no math background; kids and adults enjoy it.
We need more STEM for fun, and not just STEM for money. That's how we get good at STEM.