If you're visually impaired, you can hit it even with just a few icons on a 14" laptop. Fonts set to anything other than tiny + overloaded menus + even a handful of app icons means I always hit this unless I'm docked.
Hacky menu bar modification tools are basically an accessibility requirement for me, and my vision isn't even that bad. (Best corrected is 20/30 or 20/40 or so.) People with serious impairments are totally screwed by this on macOS, sometimes even with large external monitors.
It's striking to me that while the article argues that LLMs and agentic development tools will increasingly trend towards higher and higher quality, the "pro-AI" comments in this discussion and others tend to announce a "quality doesn't matter" camp.
You can disclose that you used an LLM in the process of writing code in other ways, though. You can just tell people, you can mention it in the PR, you can mention it in a ticket, etc.
+1. If we’re at an early stage in the agentic curve where we think reading commit messages is going to matter, I don’t want those cluttered with meaningless boilerplate (“co-authored by my tools!”).
But at this point I am more curious whether git will continue to be the best tool.
I'm only beginning to use "agentic" LLM tools atm because we finally gained access to them at work, and the rest of my team seems really excited about using them.
But for me at least, a tool like Git seems pretty essential for inspecting changes and deciding which to keep, which to reroll, and which to rewrite. (I'm not particularly attached to Git but an interface like Magit and a nice CLI for inspecting and manipulating history seem important to me.)
What are you imagining VCS software doing differently that might play nicer with LLM agents?
Check out Mitchell Hashimoto’s podcast episode on The Pragmatic Engineer. He starts talking about AI at 1:16:41. At some point after that he discusses git specifically, and how in some cases it becomes impossible to push because the local branch is always out of date.
F-Droid is in fact what an app store concerned about user safety looks like. Nobody gets hoodwinked into installing apps that track them or sell their data or otherwise abuse them on F-Droid.
F-Droid is so irrelevant that it doesn't even begin being targeted by supply chain and scam attacks. Being obscure always helps with this, but pretending that it's the same threat model is absolutely false.
The XZ utils backdoor made it into Debian repositories undetected, although it was caught before it was in a stable version.
Debian repositories are quite secure, but also pretty limited in scope and extremely slow to update. In practice, basically everyone (I'm sure there are a few counterexamples) using a Linux distro uses it as a base and runs extra software from less tightly controlled sources: Docker hub, PyPI, npm, crates, Flathub etc. It's far easier for attackers to target those, but their openness also means there's a lot of useful stuff there that's not in Debian.
Holding up Debian as a model for security is one step up from the old joke about securing your computer by turning it off and unplugging it. It's true, but it's not really interesting.
The XZ attack was an extremely rare event, likely coming from a state actor, which actually proves that GNU/Linux is a very important target. It was also caught not least thanks to the open nature of the repository. Also, AFAIK it wasn't even a change in the repo itself.
In short, using FLOSS is the way to ensure security. Whenever you touch proprietary stuff, be careful and use compartmentalization.
That article's premise is that the Android security model is something that I want. It really isn't.
The F-Droid model of having multiple repositories in one app is absolutely perfect because it gives me control (rather than the operating system) over what repositories I decide to add. There is no scenario in which I wish Android to question me on whether I want to install an app from a particular F-Droid repository.
Can you describe the threat model / specific attack under which... any of the supposed flaws on that page matter? (Most of the particular section you've linked appears to be about extra defenses that could be added, but which are unlikely to make a difference in the face of Android's TOFU signature verification on installed APKs.)
The section you linked in particular is a load of editorialized bullshit IMO. As far as I can tell the only legitimate complaint is that there is (or was?) some sort of issue with the signing methodology for both APKs and repository metadata. Specifically they were apparently very slow to replace deprecated methods that had known issues. However it's worth noting that they appear to have been following what were at one point standard practices.
The certificate pinning nonsense is particularly egregious. APT famously doesn't need TLS unless you're concerned about confidentiality. It's the same for any package manager that securely signs everything, and if there's ever a signing vulnerability then relying on TLS certainly might save you but seems extremely risky. On top of that the Android TOFU model means none of this matters in the slightest for already installed apps which is expected to be the case the vast majority of the time.
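The TOFU model referred to here can be illustrated with a toy sketch (this is loosely modeled on how Android pins an app's signing certificate at first install; real APK verification is far more involved, and all names below are made up for illustration):

```python
import hashlib

# Trust-on-first-use: the first signing key seen for an app is recorded,
# and every later update must be signed by the same key.
pinned = {}  # app id -> fingerprint of the key seen at first install

def fingerprint(signing_key: bytes) -> str:
    return hashlib.sha256(signing_key).hexdigest()

def install(app: str, signing_key: bytes) -> bool:
    fp = fingerprint(signing_key)
    if app not in pinned:
        pinned[app] = fp      # first use: trust and pin
        return True
    return pinned[app] == fp  # update: must match the pinned key

print(install("org.fdroid.fdroid", b"developer-key"))  # first install
print(install("org.fdroid.fdroid", b"developer-key"))  # legit update
print(install("org.fdroid.fdroid", b"attacker-key"))   # key mismatch
```

The point being: once an app is installed, a compromised repository or TLS connection can't swap in an update signed by a different key.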
As far as I'm concerned F-Droid is the best currently available option. That said of course there are places it could improve.
I used WINE a lot in the 2000s, mostly for gaming. It was often pretty usable, but you frequently needed hacky patches not suitable for inclusion in mainline. Back then I played with Cedega and later CrossOver Games, but the games I played the most also had Mac ports, so they had working OpenGL renderers.
My first memorable foray into Linux packaging was creating proper Ubuntu packages for builds of WINE that carried compatibility and performance patches for running Warcraft III and World of Warcraft.
Nowadays Proton is the distribution that includes such hacks where necessary, and there are lots of good options for managing per-game WINEPREFIXes including Wine itself. A lot of the UX around it has improved, and DirectX support has gotten really, really good.
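For anyone unfamiliar with per-game prefixes: WINEPREFIX is the real Wine environment variable pointing at a prefix (a self-contained fake C: drive plus registry), and keeping one per game isolates each game's hacks and DLL overrides. A minimal sketch (the paths and game names here are just examples):

```python
import os

# Each game gets its own isolated prefix.
prefix = os.path.expanduser("~/.local/share/wineprefixes/warcraft3")
env = {**os.environ, "WINEPREFIX": prefix}

# With that environment, Wine creates and uses the per-game prefix, e.g.:
#   subprocess.run(["wine", "war3.exe"], env=env)   # (needs wine installed)
print(env["WINEPREFIX"])
```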
But for me at least, WINE was genuinely useful as well as technically impressive even back then.
It is really cool that Homebrew provides a comprehensive enough JSON API to let people build on Homebrew in useful ways without directly running Ruby, despite everything being built in a Ruby DSL. That really does seem like a "best of both worlds" deal, and it's cool that alternative clients can take advantage of that.
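For a sense of what that looks like: formulae.brew.sh serves per-formula JSON that any client can consume. Here's a sketch against a simplified inline sample (the field names match the published API as I understand it, but treat the exact shape as an assumption; the real payload has many more fields):

```python
import json

# Simplified sample of what https://formulae.brew.sh/api/formula/<name>.json
# returns, inlined here to avoid a network call.
sample = json.loads("""
{
  "name": "wget",
  "versions": {"stable": "1.24.5"},
  "dependencies": ["libidn2", "openssl@3"]
}
""")

# An alternative client can answer questions without running any Ruby:
print(f"{sample['name']} {sample['versions']['stable']}")
for dep in sample["dependencies"]:
    print(f"  depends on {dep}")
```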
I didn't know about the pending, official Rust frontend! That's very interesting.
Yeah I don't know why people are saying that speed doesn't matter. I use Homebrew and it is slow.
It's like yum vs apt in the Linux world. APT (C++) is fast and yum (Python) was slow. Both work fine, but yum would just add a few seconds, or a minute, of little frustrations multiple times a day. It adds up. They finally fixed it with dnf (C++) and now yum is deprecated.
Glad to hear a Rust rewrite is coming to Homebrew soon.
One of the reasons I switched to Arch from Debian-based distros was precisely how much faster pacman was compared to APT -- system updates shouldn't take over half an hour when I have a (multi)gigabit connection and an SSD.
It was mostly precipitated by containers coming in: I was honestly shocked at how fast apk installs packages on Alpine compared to my Ubuntu boxes (using apt).
pacman is faster simply because it does fewer things and supports fewer use cases.
For example pacman does not need to validate the system for partial upgrades because those are unsupported on Arch and if the system is borked then it’s yours to fix.
Less charitably, pacman is fast because it's wrong. The dependency resolver is wrong; it fails to find correct answers to dependency resolution problems even when correct answers are available.
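To make that failure mode concrete, here's a toy sketch (emphatically not pacman's or any real solver's actual code; the packages and versions are invented): a greedy "always take the newest version, never revisit" resolver fails on an input that a backtracking resolver solves.

```python
# Versions available for each package, newest first, mapped to their own
# requirements: {dep_name: allowed_versions}.
REPO = {
    "lib":    {2: {}, 1: {}},
    "plugin": {1: {"lib": {1}}},  # plugin 1 only works with lib 1
}

def resolve(goals, chosen, backtrack=True):
    """goals: list of (package, allowed_versions or None)."""
    if not goals:
        return chosen
    (name, allowed), rest = goals[0], goals[1:]
    if name in chosen:  # already decided; must be compatible
        ok = allowed is None or chosen[name] in allowed
        return resolve(rest, chosen, backtrack) if ok else None
    candidates = [v for v in REPO[name] if allowed is None or v in allowed]
    if not backtrack:
        candidates = candidates[:1]  # greedy: commit to the newest forever
    for version in candidates:
        deps = list(REPO[name][version].items())
        result = resolve(rest + deps, {**chosen, name: version}, backtrack)
        if result is not None:
            return result
    return None

# Greedy picks lib 2 first, then can't satisfy plugin; backtracking
# revisits that choice and finds the valid answer.
print("greedy newest-only:", resolve([("lib", None), ("plugin", None)], {}, backtrack=False))
print("with backtracking: ", resolve([("lib", None), ("plugin", None)], {}))
```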
* it’s purpose built for mega-sized monorepo models like Google (the same company that created it)
* it’s not at all beginner friendly; it’s a complex mishmash of three separate constructs in their own right (build files, workspace setup, Starlark), which makes it slow to ramp new engineers on.
* even simple projects require a ton of setup
* requires dedicated remote cache to be performant, which is also not trivial to configure
* requires deep bazel knowledge to troubleshoot through its verbose unclear error logs.
Because of all that, it’s extremely painful to use for anything small/medium in scale.
> dnf < 5 was still performing similarly to yum (and it was also implemented in python)
I'm perhaps not properly understanding your comment. If the algorithmic changes were responsible for the improved speed, why did the Python version of dnf perform similarly to yum?
Because dnf4 used the same dependency resolution as yum, but they revamped it in dnf5 (which was initially supposed to be a whole new package manager with a different name).
> Yeah I don't know why people are saying that speed doesn't matter. I use Homebrew and it is slow
Because how often are you running it where it's anything but an opportunity to take a little breather in a day? And I do mean little; the speedups being touted here are seconds.
I have the same response to the obsession with boot times, how often are you booting your machine where it is actually impacting anything? How often are you installing packages?
Do you have the same revulsion for going to the bathroom? Or getting a glass of water? Or basically everything in life that isn't instantaneous?
I would guess this change builds on the existing json endpoints for package metadata but that the Ruby DSL is remaining intact.
I think how to marry the Ruby formulas and a Rust frontend is something the Homebrew devs can figure out and I'm interested to see where it goes, but I don't really care whether Ruby "goes away" from Homebrew in the end or not. It's a lovely language, so if they can keep it for their DSL but improve client performance I think that's great.
Is Ruby really the speed bottleneck in Homebrew? I would assume it would be due to file operations (and download operations), not choice of programming language.
Largely agree, though some things are notably difficult in some languages. True concurrency, for example, didn’t come as naturally in Ruby because of the global interpreter lock. Of course there are third-party libs and workarounds. Newer versions of Ruby support it more natively, and as we’ve seen, Homebrew adopted it experimentally for a while and made it the default relatively recently.
I can’t say that’s the only reason it’s slow of course. I’m on the “I don’t use it often enough for it to be a problem at all” side of the fence.
If you use the Homebrew module for Nix-Darwin, running `brew` against the generated brewfile becomes the slowest part of a `darwin-rebuild switch` by far. In the fast cases, it turns something that could take 1 second into something that takes 10, which is definitely annoying when running that command is part of your process for configuration changes even when you don't update anything. Homebrew no-ops against an unchanging Brewfile are really slow.
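For reference, the Brewfile that `brew bundle` (and the Nix-Darwin module's generated file) consumes is itself a tiny Ruby DSL; a minimal sketch, with package names as examples only:

```ruby
# Formulae (CLI tools)
brew "git"
brew "ripgrep"
# Casks (GUI apps)
cask "firefox"
```

Even when nothing in a file like this has changed, `brew bundle` still has to ask Homebrew about the state of every entry, which is where the no-op slowness comes from.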
I've been looking for something like this, especially to use only with casks now that Homebrew has removed support for not adding the quarantine bit. Looking forward to giving it a try!
My hypothesis is that generally, there's no quality floor at which security departments are "allowed" to say "actually, none of the options on the market in this category are good enough; we're not going to use any of them". The norm is to reflexively accept extreme invasiveness and always say yes to adding more software to the pile. When these norms run deeply enough in a department, it's effectively institutionally incapable of avoiding shitty security software.
Fwiw, w/r/t Trivy in particular, I don't think Trivy is bad software, and I use it at work. We're unaffected by this breach because we use Nix to provide our code scanning tools and we write our own Actions workflows. Our Trivy version is pinned by Nix and periodically updated manually, so we've skipped these bad releases.