Yeah, I’m aware of that. But this windows-sudo is sudo in name only anyway, so it seemed funny they’d copy a term that’s just about to go out of fashion.
Am I reading this[1] correctly that they basically had that "compromised OAuth token" out there for a month, and it was only detected when the attackers posted about it in a forum?
> Vercel’s internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel’s enterprise Google Workspace.
This was an interesting tidbit too. If true, this means that Vercel’s IT/Infosec maybe didn’t bother enabling the allowlist and request/review features for OAuth apps in their Google Workspace.
On top of that, they almost certainly didn’t enable the scope limits for unreviewed OAuth apps (e.g. limiting them to sign-on/basic profile scopes).
I’m not joking, but weirdly enough, that’s what most AI arguments boil down to. Show me what the difference is while I pull up the endless CVE list of whichever coreutils package you had in mind. It’s a frustrating argument because you know that the authors of coreutils-like packages had intentionality in their work, while an LLM has no such thing. Yet in the end, security vulnerabilities are abundant in both.
The AI maximalists would argue that the only way is through more AI. Vibe code the app, then ask an LLM to security review it, then vibe code the security fixes, then ask the LLM to review the fixes and app again, rinse and repeat in an endless loop. Same with regressions, performance, features, etc. Stick the LLM in endless loops for every vertical you care about.
Pointing to failed experiments like the browser or compiler ones somehow doesn’t seem to deter AI maximalists. They would simply claim they needed better models/skills/harnesses/tools/etc. The goalpost is always one foot away.
I wouldn't describe myself as an AI maximalist at all. I just don't believe the false dichotomy that you either produce "vulnerable vibe coded AI slop running on a managed service" or "pure handcrafted code running on a self hosted service."
You can write good and bad code with and without AI, on a managed service, self-hosted, or something in between.
And the comment I was replying to said something about not trusting code written in Akron, OH two years ago, which makes no sense and is barely an argument; I was mostly pointing out how silly that comment sounds.
I used to believe that too, yet the dichotomy is what’s being pushed by what I called an “AI maximalist” and it’s what I was pushing against.
There is no “I wrote this code with some AI assistance” when you’re sending a 2k-line PR eight minutes after I gave you permission on the repo. That’s the type of shit I’m dealing with, and management is ecstatic at the pace and progress, and the person just looks at you and says “anything in particular that’s wrong or needs changing? I’m just asking for a review and feedback.”
It's such a bad faith argument; they basically draw false equivalencies between LLMs and other software. Same with the "AI is just a higher level compiler" argument. The "just" is doing a ton of heavy lifting in those arguments.
Regarding the Unix philosophy argument, comparing it to AI tools just doesn't make any sense. If you look at what the philosophy is, it's obvious that it doesn't just boil down to "use many small tools" or "use many dependencies"; it's so different that it's not even wrong [0].
In their 1974 Unix paper, Ritchie and Thompson list the following design considerations:
- Make it easy to write, test, and run programs.
- Interactive use instead of batch processing.
- Economy and elegance of design due to size constraints ("salvation through suffering").
- Self-supporting system: all Unix software is maintained under Unix.
In what way does that correspond to "use dependencies" or "use AI tools"? This was later formalised as:
- Write programs that do one thing and do it well.
- Write programs to work together.
- Write programs to handle text streams, because that is a universal interface.
This has absolutely nothing in common with pulling in thousands of dependences or using hundreds of third party services.
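To make the contrast concrete, here is the kind of composition the philosophy actually describes, as a minimal sketch with standard utilities:

```shell
# Three small programs, each doing one thing, cooperating over
# plain text streams: count word frequencies in a stream.
printf 'foo\nbar\nfoo\n' | sort | uniq -c | sort -rn
```

Each stage is independently useful and independently replaceable, and none of them knows anything about the others — which is nothing like wiring your app into a thousand-package dependency tree.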
Then there is the argument that "AI is just a higher level compiler". That is akin to me saying "AI is just a higher level musical instrument", except it's not, because it functions completely differently from a musical instrument and people operate it in a completely different way. The argument seems to be that since both of those produce music, in the same way both a compiler and an LLM generate "code", they are equivalent.

The overarching claim is that only outputs matter, except when they don't, because the LLM produces flawed outputs; so really it's just that the outputs are equivalent in the abstract, if you ignore concrete real-world reality. Using that same argument, Spotify is a musical instrument because it outputs music — and hey, look, my guitar also outputs music!
Is the AWS security boundary the AWS account? Are you expecting Vercel to provision and manage an AWS account per user? That doesn’t make any sense, man — though it makes sense if you’re a former AWS employee.
It doesn’t make sense that a random employee mistakenly using a third-party app can compromise all of its users; that’s poor security architecture.
It’s about as insecure as having one Apache server serving multiple customers’ accounts. No one who is concerned about security should ever use Vercel.
He works for an AWS consulting company that promotes cloud-native solutions, driving cloud spend towards AWS. In many cases, managed cloud services actually are the way to go.
However, saying that serving multiple customers with Apache is "insecure" is inaccurate. There are ways to run virtual hosts under different user IDs, providing isolation with more traditional Unix techniques.
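For the record, one traditional way to do that is per-vhost user IDs via the third-party mpm-itk module — a sketch with hypothetical hostnames and users, assuming the module is installed:

```apache
# Each vhost runs under its own Unix user/group via AssignUserID,
# so a compromised site can't read another tenant's files.
<VirtualHost *:80>
    ServerName tenant-a.example.com
    DocumentRoot /srv/tenant-a/htdocs
    AssignUserID tenant-a tenant-a
</VirtualHost>

<VirtualHost *:80>
    ServerName tenant-b.example.com
    DocumentRoot /srv/tenant-b/htdocs
    AssignUserID tenant-b tenant-b
</VirtualHost>
```

suexec achieves something similar for CGI. The point is that these isolation primitives have existed for decades.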
As always, "it depends" on the application. I've worked for several B2B SaaS companies, and none of them used a VM per tenant. In some cases we used a database (or schema...) or DB cluster per tenant.
I've read about the Vercel incident. Given the timeline (22 months?!), it sounds like they had other issues well beyond shared hosting.
There is a difference between a SaaS offering where you are running your own code serving multiple customers on one server (or set of servers) and running arbitrary customer code like Vercel does.
Well, I know that you have never heard of someone using a third-party SaaS product at any major cloud provider compromising all of their customers’ accounts.
Are you really defending Vercel as a hosting platform that anyone should take seriously?
How is any of that a defense of Vercel? If you understood how any of this works you’d know that it isn’t. Vercel is a manifestation of what’s wrong with web development, yet it has nothing to do with “creating an AWS account per user account” nor “running a reverse proxy process per user account”.
Because the same “web development” done with v0, downloaded, put in a Docker container, deployed to Lambda, with fine-grained access control on the attached IAM role (all of which I’ve done), wouldn’t have that problem.
Oh, and I never download random npm packages to my computer; I build and run everything locally within Docker containers.
It has absolutely nothing to do with “the modern state of web development”, it’s a piss poor security posture.
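The fine-grained IAM access control mentioned above would look something like this least-privilege policy — the account ID, table name, and allowed actions are all made up for illustration:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OnlyWhatTheFunctionNeeds",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/app-table"
    }
  ]
}
```

If that Lambda is compromised, the blast radius is one table, not every customer’s deployment.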
Note: what follows is absolute 100% speculation based on nothing but gut feelings.
Theo has long been a Vercel supporter and has been sponsored by them several times. In this case it could be a combination of him being genuinely interested in Vercel (a rare thing) and hoping for future sponsorships.
Yes, in general, government-censored speech is inherently not important, by the very fact that the government censored it. Like, if it were important, it wouldn’t have been censored. Obviously.