Hacker News | eddythompson80's comments

Garry Tan is an imbecile though.

Are you referring to sudo-rs or something different? Because sudo-rs is just a reimplementation of sudo.


Oh okay. That’s very specific to using setuid vs an alternative. Neither has a real equivalent on Windows to begin with.

Yeah, I’m aware of that. But this windows-sudo is sudo in name only anyway, so it seemed funny they’d copy a term that’s just about to go out of fashion.

Uh clearly you don't PowerShell enough. It should be `Invoke-CopilotSudo`

Am I reading this[1] correctly that they basically had that "compromised OAuth token" for a month and it was only detected when the attackers posted about it in a forum?

[1] https://context.ai/security-update


> Vercel’s internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel’s enterprise Google Workspace.

This was an interesting tidbit too. If true, this means that Vercel’s IT/Infosec maybe didn’t bother enabling the allowlist and request/review features for OAuth apps in their Google Workspace.

On top of that, they almost certainly didn’t enable the scope limits for unchecked OAuth apps (e.g., limiting them to sign-in/basic profile scopes).


And that they engaged Crowdstrike for incident response... who missed OAuth tokens in the clear?

lol, yeah that Crowdstrike part was a funny CYA name drop

A month? If true this is insane.

I’m not joking, but weirdly enough, that’s what most AI arguments boil down to. Show me what the difference is while I pull up the endless CVE list of whichever coreutils package you had in mind. It’s a frustrating argument because you know that the authors of coreutils-like packages had intentionality in their work, while an LLM has no such thing. Yet in the end, security vulnerabilities are abundant in both.

The AI maximalists would argue that the only way is through more AI. Vibe code the app, then ask an LLM to security review it, then vibe code the security fixes, then ask the LLM to review the fixes and app again, rinse and repeat in an endless loop. Same with regressions, performance, features, etc. Stick the LLM in endless loops for every vertical you care about.

Pointing to failed experiments like the browser or compiler ones somehow doesn’t seem to deter AI maximalists. They would simply claim they needed better models/skills/harnesses/tools/etc. The goalposts are always one foot away.


"endless list of CVE" seems rather exaggerated for coreutils. There are only very few CVEs in the last decade and most seem rather harmless.

Now I'd genuinely like to know whether "yes" had a CVE assigned, not sure how to search for it though...

I wouldn't describe myself as an AI maximalist at all. I just don't buy the false dichotomy that you either produce "vulnerable vibe coded AI slop running on a managed service" or "pure handcrafted code running on a self-hosted service."

You can write good and bad code with and without AI, on a managed service, self-hosted, or something in between.

And the comment I was replying to said something about not trusting something written in Akron, OH 2 years ago, which makes no sense and is barely an argument, and I was mostly pointing out how silly that comment sounds.


I used to believe that too, yet the dichotomy is what’s being pushed by what I called an “AI maximalist” and it’s what I was pushing against.

There is no “I wrote this code with some AI assistance” when you’re sending a 2k-line PR eight minutes after I gave you permission on the repo. That’s the type of shit I’m dealing with, and management is ecstatic at the pace and progress, and the person just looks at you and says “anything in particular that’s wrong or needs changing? I’m just asking for a review and feedback.”


It's such a bad faith argument; they basically draw false equivalences between LLMs and other software. Same with the "AI is just a higher level compiler" argument. The "just" is doing a ton of heavy lifting in those arguments.

Regarding the Unix philosophy argument, comparing it to AI tools just doesn't make any sense. If you look at what the philosophy actually is, it's obvious that it doesn't boil down to "use many small tools" or "use many dependencies"; it's so different that it's not even wrong [0].

In their 1974 Unix paper, Ritchie and Thompson list the following design considerations:

- Make it easy to write, test, and run programs.

- Interactive use instead of batch processing.

- Economy and elegance of design due to size constraints ("salvation through suffering").

- Self-supporting system: all Unix software is maintained under Unix.

In what way does that correspond to "use dependencies" or "use AI tools"? This was later formalised as:

- Write programs that do one thing and do it well.

- Write programs to work together.

- Write programs to handle text streams, because that is a universal interface.

This has absolutely nothing in common with pulling in thousands of dependencies or using hundreds of third party services.
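The composability those tenets describe is concrete. A word-frequency one-liner, for instance, chains four single-purpose tools over a plain text stream (a sketch in POSIX shell; the input is made up):

```shell
# Classic Unix composition: four small programs, one text stream.
printf 'cat dog cat\ndog cat\n' |
  tr ' ' '\n' |   # split input into one word per line
  sort |          # group identical words together
  uniq -c |       # count each group
  sort -rn        # most frequent first ("cat" 3 times, then "dog" 2 times)
```

None of these tools knows about the others; the pipe and the text format are the whole contract.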

Then there is the argument that "AI is just a higher level compiler". That is akin to saying "AI is just a higher level musical instrument", except it's not, because it functions completely differently from a musical instrument and people operate the two in completely different ways. The argument seems to be that since both of them produce music, in the same way both a compiler and an LLM generate "code", they are equivalent.

The overarching argument is that only outputs matter, except when they don't, because the LLM produces flawed outputs; so really it's just that the outputs are equivalent in the abstract, if you ignore concrete real-world reality. Using that same argument, Spotify is a musical instrument because it outputs music, and hey look, my guitar also outputs music!

0: https://en.wikipedia.org/wiki/Not_even_wrong


Is AWS's security boundary the AWS account? Are you expecting Vercel to provision and manage an AWS account per user? That doesn’t make any sense, man, though it makes sense if you’re a former AWS employee.

Yes, the security boundary is the AWS account.

When a random employee mistakenly authorizing a third-party app can compromise all of your users, that’s poor security architecture.

It’s about as insecure as having one Apache server serving multiple customers’ accounts. No one who is concerned about security should ever use Vercel.


> It’s about as insecure as having one Apache server serving multiple customers’ accounts.

You really have no clue what you’re talking about, do you? Were you a sales guy at AWS or something?


He works for an AWS consulting company, where they promote cloud-native solutions, driving cloud spend towards AWS. In many cases, managed cloud services are actually the way to go.

However, to say that serving multiple customers with Apache is "insecure" is inaccurate. There are ways to run virtual hosts under different user IDs, providing isolation using more traditional Unix techniques.
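For example, the third-party mpm-itk Apache module lets each virtual host run as its own Unix user, so one tenant's code can't read another's files under ordinary Unix permissions. A minimal sketch (the host names and user accounts are hypothetical):

```apache
# Hypothetical tenants: each vhost runs as its own Unix user via mpm-itk,
# so tenant-a's code cannot read tenant-b's files.
<VirtualHost *:80>
    ServerName tenant-a.example.com
    DocumentRoot /srv/www/tenant-a
    AssignUserID tenant-a tenant-a
</VirtualHost>

<VirtualHost *:80>
    ServerName tenant-b.example.com
    DocumentRoot /srv/www/tenant-b
    AssignUserID tenant-b tenant-b
</VirtualHost>
```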


No, if they said they were running on separate VMs I wouldn’t have any issues.

Absolutely no serious company would run their web software on a shared Apache server with other tenants.

How did that shared hosting work out for Vercel?


As always, "it depends" on the application. I've worked for several B2B SaaS companies; none of them used a VM per tenant. In some cases, we used a database (schema...) or DB cluster per tenant.

I've read about the Vercel incident. Given the timeline (22 months?!), it sounds like they had other issues well beyond shared hosting.


There is a difference between a SaaS offering where you run your own code on one server (or set of servers) serving multiple customers, and running random customer code the way Vercel does.

I know. I just don't think code isolation was their only issue. I've read about the incident.

Hey, knock it off. If you disagree with someone, present a substantive counterargument.

Already did. There is no fixing a pretender who argues something akin to “the security boundary of a Linux system is the power strip.”

Well, I know you’ve never heard of a third-party SaaS product at any major cloud provider compromising all of its customers’ accounts.

Are you really defending Vercel as a hosting platform that anyone should take seriously?


How is any of that a defense of Vercel? If you understood how any of this works you’d know that it isn’t. Vercel is a manifestation of what’s wrong with web development, yet it has nothing to do with “creating an AWS account per user account” nor “running a reverse proxy process per user account”.

Because the same “web development” done with v0, downloaded, put in a Docker container, deployed to Lambda, with fine-grained access control on the attached IAM role (all of which I’ve done) wouldn’t have that problem.
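The "fine-grained IAM role" part might look something like this: a policy allowing access to a single bucket prefix and nothing else (an illustrative sketch; the bucket and prefix names are hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AppDataOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-app-data/uploads/*"
    }
  ]
}
```

A compromised function with that role can touch one prefix of one bucket, not every customer's data.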

Oh, and I never download random npm packages to my computer. I build and run everything locally within Docker containers.

It has absolutely nothing to do with “the modern state of web development”; it’s a piss-poor security posture.

Again, I know how the big boys do this…


Isn’t he a Vercel evangelist though?

He is "whatever gives me short-term boost in popularity". Including doing 180 turns on whatever he's evangelizing or bashing.

Fair enough. That’s probably a better description based on what I’ve seen of him. I remember that Arc browser shilling.

Let's see. Roasting Vercel is more popular than defending it, but so far his posts seem to be defending it, and he's arguing in the replies.

Note: what follows is absolute 100% speculation based on nothing but gut feelings.

Theo has long been a Vercel supporter and has been sponsored by them several times. In this case it could be a combination of him being genuinely interested in Vercel (a rare thing) and hoping for future sponsorships.


Yes, this is exactly how I see it too minus the "genuine" part. It is because of money, and for that, he doesn't care about lying.

Good for the content but would sponsors be on board long term?

He quite publicly is not anymore.

Yeah, I’ll sign a contract so you can “support” a configurable bind address. That’s post-doc level of comp-sci stuff right there.

I’ll also sign the “numbers bigger than 2^32” contract and a “weird looking characters in text” contract.


Yes, in general, government-censored speech is inherently unimportant by the very fact that the government censored it. If it were important, it wouldn’t have been censored. Obviously.

Except this is self-censorship. The author chose to make it unavailable.

Exactly. Who even hacks stuff? Most people would rather report the issue to earn XP and level up than actually exploit it.
