I'm somewhat surprised by GitHub's strategy in the AI era.
I understand how appealing it is to build an AI coding agent and all that, but shouldn't they - above everything else - make sure they remain THE platform for code distribution, collaboration, and the like? And it doesn't need to be humans; that can be agents as well.
They should serve the AI agent world first and foremost. Because if they don't pull that off, and don't pull off building one of the best coding agents - which so far they haven't - there isn't much left.
There are so many new features needed in this new world. It's really unclear why we hear so little about it, while maintainers sound the alarm that they're drowning in slop.
Microsoft’s real goal is selling Copilot seats and pushing Azure, not building a neutral playground for third-party agents. There is just no money for them in being the backend for someone else's AI.
As for the AI spam, GitHub's internal metrics have always been tied to engagement and PR volume. Blocking all that AI slop would instantly drop their growth numbers, so it is easier for them to just pass the cleanup cost onto open-source maintainers.
This describes quite well the huge advantage small companies have vs big companies.
(Motivated) people at small companies "care", and what I mean by that is they are responsible for, and can see, a large enough portion of the customer experience that - if something is broken - they'll see the pain and try to address it.
At a big company, no one cares. They of course care about their job, but their job is such a small fraction of the overall customer experience that seeing their work have an impact on customers is exceptionally difficult.
That's why large companies need to encode customer feedback into a system to imitate feedback cycles. Mostly in metrics.
That's a very lossy way to capture signal, and it leaves a lot to be desired, but so far it doesn't seem like anyone has come up with a better system.
> That's why large companies need to encode customer feedback into a system to imitate feedback cycles. Mostly in metrics. That's a very lossy way to capture signal, and leaves a lot to be desired, but so far it doesn't seem like anyone has come up with a better system.
The other thing you can do is have senior leadership occasionally try the product themselves and talk directly to customers (especially ones having problems).
Often, problems remain because of bureaucratic hurdles, or disputes between different fiefdoms: there's a feature that needs teams X and Y to improve, but it would only help the internal metrics for team X, so team Y doesn't give a shit and drags their feet. Leaders who are sufficiently high in the hierarchy can cut through these sorts of problems if they know and care.
I manage post sales support at a small company where this theme is present, however, I'm concerned that things may be slipping in the wrong direction. I need help dealing with this situation.
Our software engineering team is burdened with tech debt and an aggressive product roadmap. Understandably, they often feel the need to push back and keep things under control. Our customers, and the employees who are accountable to our customers, literally lose sleep when things aren't going well. Meanwhile, our engineering team is content to work a 9-to-5 schedule and sit on critical issues for weeks.
The only thing the engineering team appears motivated to actually own is the next iteration of the escalation process that somehow makes them even less accountable. Invariably the next iteration includes adding more details to Jira tickets that nobody will read.
Basically no one on that team is dogfooding the product. Broken features are being shipped and they are expecting us to actually sell and support this.
1. There needs to be reframing of the engineer role in the team, by the leadership. It's not just to build things, but to help the customer. The job of the engineer is to build a good product for the customer, not just to "build", fix Jiras, ...
2. At the same time, get some engineers on customer calls. It doesn't need to be everyone, but you need to shorten the feedback cycle so that engineers get signal directly from their users. A lot of information gets lost through Jira tickets, intermediaries, ...
They don't need to talk, just listen. Will they pay attention? That comes back to the reframing of their job as creating a good product for users. If everyone understands that's the goal, they will.
If you have engineers building custom extensions for your database because your CRM product needs some low-level performance optimizations, it'll still be good for them, just less directly interesting.
The goal is to get some engineers talking to customers, because they'll talk to other engineers. And engineers know how to talk to engineers. It'll shorten the feedback cycle, which is good for both speed and signal.
Now, I don't know how much of this you already do. They might even do all these things already, and what you see is what remains after the worst issues have been taken care of. If that's the case, some more cross-team communication is needed.
Half-baked? I disagree. Doing the EU is hard - it's a bunch of fully sovereign countries trying to (and having to) agree, and we're still figuring out how that could work.
We'll need a bunch of steps like that, to get closer to the efficiencies we're hoping for.
I do think it's more painful to distribute files when you ship a single binary vs. scripts, since the latter has to figure out bundling of files anyway.
It's cool that it fits into Go's read-only file system interface (fs.FS), so it can be used polymorphically. I don't know if Go has a complete enough interface for a read-and-write file system that could back a full VFS. If it does, that's nice, and a starting point for a similar VFS! I'm also not sure whether it should go into the standard library or not.
Zip files keep their central directory at the end of the file, so readers scan from the end and ignore whatever comes before it - which is why an archive can be part of an executable file. (This is how self-extracting archives used to work.) Support for reading zip files is lightweight, and is present almost everywhere.
A zip archive embedded into the executable should be an obvious read-only VFS implementation. Bring your assets with you; maybe even build them with the standard zip utility.
It should take relatively few lines of code, provided that libzip is already linked into the executable anyway.
Sure they're different. They all download blobs and can then execute them. Exactly how, when and why is completely different but they still get you blobs.
In the same way S3 is different from Dropbox, and a car is different from a bike.
Can't tell if you're rage-baiting here, but I'm very confused by this question, because they support entirely different sets of features, and if you use both it's painfully obvious how they differ.
Docker is built for running services. Distribution is part of that, but its core is that you can pull an image and run an arbitrary service on your machine, packaged with all the right libraries; network it into your machine the way you like so it can access the right things; constrain its resources; and create your own image based on it.
Snap/Flatpak is built to distribute applications, with sandboxing as a core part, and with applications wanting to integrate into desktop mechanisms such as audio, URL handling, taking screenshots, ...
For me, the difference is: do I get to compile this from source myself, or do I get someone else's compiler output?
IOW it's not comparing vastly different vehicles, but rather a vehicle with its blueprint.
Also, a long time ago it was commonly accepted that spaghetti code is awful. Docker replaces that with spaghetti services, and debugging gets much worse.
This thing about the OpenAI brand is changing fast. In the dev circles I'm part of, everybody dislikes OpenAI and prefers Claude. How long will it take for the same to happen with the normies?
I use Claude for work and Codex for private use due to already having a Plus subscription.
I can't say that I have noticed that 5.3-Codex is much better, but it's definitely on par with Opus 4.6, and its limits at $25/month are comparable to Max x5 at 1/4th of the cost (not to mention pay-per-token, which we use at work). Claude Code is generally a much better experience, though.
The thing is, though, Google Gemini is pretty good, it's not super hard to switch to, and - the real moat - Google can just keep improving and integrating Gemini and gathering customers while waiting for OpenAI to go bankrupt. Basically, everyone on the planet has to pay OpenAI to keep it in business. If they don't capture the vast majority of the market, OpenAI can't pay its bills. Google is just going to starve OpenAI out.
I wish it were, but it's not. Gemini feels more sluggish and is relatively overloaded with animations compared to ChatGPT. Like most Google products.
I've been testing Gemini as I code with Claude 4.6, and its answers aren't great for coding. ChatGPT has been better. But it did do a good job with some personal IRA/401k planning.
It feels like it's only a few months behind though.
And yet Google has a search monopoly, is part of the mobile duopoly, has a near-monopoly on e-mail and data storage, is a strong player in office solutions, and owns the biggest entertainment platform in the form of YouTube.
Seems like sluggishness and animations don't mean as much to normal people.
Really surprised by all the comments here. They didn't hire him because of the amazing security openclawd had, but because he's one of the first who made a truly personal assistant that's actually valuable to people.
It's about what he created, not what he didnt create.
They're not acquiring the product he built, they're acquiring the product vision.