When we released our open-source project[1], this hacker (Eva) pentested our project pretty extensively and was very professional in their disclosures. They didn't even ask for a bounty since we didn't have a program back then!
Eva is an incredibly gifted hacker and a responsible one, a16z should treat them better.
We used a Node.js CMS called ApostropheCMS that had an admin panel section called global settings.
We used that for managing API keys to our auth server.
We only found out a few months in that it was output in the HTML source code. They did this so it was available to JS, and of course it was in their docs, so I'm not blaming them; we just glossed over it.
Annoyingly, we paid a reasonable amount of money for a pen test with one of the big consultancy companies, but they didn't see it either.
I ended up finding it myself, and judging by the logs it wasn't abused, but it was shocking and a big leak.
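To illustrate the general anti-pattern (hypothetical names here, not Apostrophe's actual API): any CMS that serializes its admin-panel settings into the page so front-end JS can read them will happily ship whatever secrets you store there.

    // Hypothetical sketch of the anti-pattern, not ApostropheCMS's real API.
    // The CMS merges admin-panel "global settings" into the template data...
    const globalSettings = {
      siteName: 'Example Site',        // harmless, fine to expose
      authApiKey: 'sk_live_abc123',    // a secret that should never live here
    };

    // ...and the page template serializes ALL of it for front-end JS:
    //   <script>window.settings = /* globalSettings as JSON */</script>
    // Now the key is in the HTML source of every rendered page.

The fix is to split settings into a server-only bucket and an explicitly public one, and only ever serialize the latter.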
I've absolutely been involved in conducting, coordinating, and receiving some high-value pen tests over the years.
One problem is there is no hard definition of what is considered a "pen test".
I've seen very highly reputable vendors pass off essentially out-of-the-box Nessus scans as pen tests, and automated Burp Suite scans as pen tests.
In my own personal definition of a pen test: security practitioners may use those tools amongst others, but they generally leverage them as recon and then try to uncover pathways in from those vulns, in addition to abusing application logic and misconfiguration.
Second problem: paid pen tests have limited scope and time constraints. If the application surface is sufficiently large, that engagement may simply not be big enough to conduct a thorough test.
Contrast this with bug bounty hunters (and attackers): they have unbounded time and resources. They can literally keep testing until they find something... and the best part, there are so many of them!
So these public bug disclosures are hard to compare to a private/paid for test.
You could argue the app owners didn't pay enough for a comprehensive test... but the downside is: just because you paid more doesn't mean the pen tester did a better job :(
While they are high-noise, I tend to think bug bounty programs are the best fit for the problem space. You end up with much deeper coverage and a very positive ROI (even factoring in the engineers who triage the bounty reports).
Security is just box checking. Most IT work is. The deployed stack has limited set of parameters to learn and test for.
Leetcode is a popular hiring criterion for a reason; that kind of code checks the "KISS/don't be clever" box and the DRY box of rediscovering known algorithms.
Except in a few fields, most startups are pretty vanilla config ops and secops tasks.
Recent popularity among the working class has inflated the egos of run-of-the-mill office workers. "Programmers are lazy" has long been waved around like a badge of honor.
Rather than Silicon Valley, I'd like to see a Mad Men take on IT. Start in '06-ish with a bunch of entitled first-world craft-beer drunkards wasting nights on syntax art, framework wars, and the rise of cloud. End with Covid, the launch of LLM AI, and a bunch of code-school burnouts being laid off.
Hello, I'm really sorry you had this unexpected exposure using ApostropheCMS. As you've mentioned, this data sharing was noted in the documentation but can still prove surprising.
A note for future researchers: the currently supported major version of Apostrophe no longer behaves in this way. Any data injection to the logged-out front-end would be a choice made at the developer level, specifically to avoid this sort of surprise.
That said, there are still use cases for including API keys as part of the configuration and 'content' of certain types of widgets.
For context, I am the head of design at Apostrophe and also play an engineering role.
Yeah, I didn't want to dunk on ApostropheCMS, this was our responsibility for not understanding the tech. I made another comment hoping to make that clear.
Overall it's a great product, and in the current headless craze a unique one. V3 looks very good, but we never got that into production.
We always need to do our due diligence when using someone else's project. It's an open source project, available for free.
If they weren't clear in the docs, that would be one thing, but that doesn't appear to be the case. Anyway, we won't combat these types of shenanigans by assuming others did everything up to snuff. We gotta be more careful ourselves.
If the panel setting was specifically for API keys, then yes, that's on apostrophecms.
If it's just some kind of generic settings with name/value pairs, then it might make sense to expose those to the browser, and make that very clear up front.
Yeah, you can define extra global settings extending the existing fields, so we used that for our multi-tenancy solution. And it's available on the Node side of things as well as on the frontend.
When I create a new service and add a Let's Encrypt cert to the server via ACME, I immediately see logs filled with junk: obviously bots searching for shitty defaults that devs might leave open. I have even seen requests for the .env file, lol.
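A quick way to pull these probes out of an access log (a minimal Node sketch; the log path and patterns are just examples):

    // Scan an access log for common bot probe paths.
    import { readFileSync } from 'node:fs';

    const probes = ['/.env', '/wp-login.php', '/.git/config', '/phpmyadmin'];
    const lines = readFileSync('/var/log/nginx/access.log', 'utf8').split('\n');

    for (const line of lines) {
      if (probes.some((p) => line.includes(p))) console.log(line);
    }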
How was such a vuln not found and abused in this case? a16z is very lucky, or maybe it was abused and not disclosed. A researcher or bored person with a kind heart and a white-hat mindset is just the first to reach out.
a16z should be fined heavily; unfortunately there is no legal framework for this type of negligence.
> a16z did not give me any bug bounty on this because of the fact i publicly reached out instead of trying to reach out privately. the only reason i did it this way was because there was no available contact on their main site and the email i could find engineering@a16z.com bounced my emails
That's a clever lifehack to save your company money: by not having any way to privately contact engineering, all bugs have to be reported publicly, which means you never need to pay anything.
All sorts of cleverness going on there. I'll bet they saved a ton of money on development by lowballing people on Fiverr or whatever they did, and indirectly they'll also save a ton on bookkeeping when a Russian ransomware group effortlessly takes them for everything they have.
That doesn't seem irresponsible to me. Sure they could have searched the bottom of a connect page for the office emails to try, but I don't see any significant issue with what they did instead.
Why broadcast the tweet publicly instead of sending it as a DM to A16Z then?
It’s obviously not safe to publicly announce the existence of a security vulnerability, and there was no barrier to alerting them privately via the same platform.
> It’s obviously not safe to publicly announce the existence of a security vulnerability
Publicly showing the vulnerability would have been unsafe, but I don't think there's much harm in asking to get in touch about an unspecified security issue (not even saying that it's a vulnerability in their website). Andreessen Horowitz is a massive firm, not some tiny website flying under the radar.
> and there was no barrier to alerting them privately via the same platform
DM would have to get picked up by their social media person next time they check Twitter, whereas a directed tweet can additionally leverage networks and be escalated by people with contacts - possibly someone could give the up-to-date engineering contact email, for instance.
Either way would have been fine, really. I feel we're going over the actions of an individual researcher with a fine-tooth comb, searching for any hint that there was an arguably better course of action, when there are multiple huge obvious mistakes from a16z.
> I feel we're going over the actions of an individual researcher with a fine-tooth comb, searching for any hint that there was an arguably better course of action, when there are multiple huge obvious mistakes from a16z.
You're going over things "with a fine-tooth comb". I just wrote two sentences that made a single point.
The extent to which attempted fault-finding of someone's behavior is unwarranted is not determined by the number of words. I could complain "Why break my door when the window was open!?" to the firefighter carrying me out of a burning building in nine words.
The email the researcher found (engineering) seems more appropriate than the office info emails (menlopark-info, ...) at the bottom of the Connect page (an actual "contact" page used to exist, but is now 404 with no redirect). I don't see anything irresponsible about trying engineering then reaching out over social media.
So you’d rather researchers reach out to black hats with this information instead? Because that’s what this line of thinking leads to.
It’s in everyone’s, especially the company’s, best interests to have a bug bounty and easily accessible security hotline. Expecting researchers to jump through hoops like contacting their offices’ front desks to get to security is absurd.
> So you’d rather researchers reach out to black hats with this information instead?
That is pretty much what they did. Posting publicly about the vulnerability most certainly meant that every hacker in the world tried (and probably succeeded) at reproducing it, all before the company had enough time to act.
So you’d rather this happen? That is the question I asked.
Because this is explicitly what happens when a company doesn’t have a good process for accepting and responding to exploits.
The onus should entirely be on the company to invite researchers to find and report exploits in a responsible way. They are the ones at risk of losing millions of dollars over an exploit.
They didn't post publicly about the vulnerability; they reached out via twitter to tell them that they had one, without giving any details about it whatsoever.
Telling everyone that there's a vulnerability is usually as bad as providing detailed steps. No one was looking, and now you've pointed them in the right direction.
> They also have contact email addresses listed at the bottom of https://a16z.com/connect, which the researcher conveniently missed.
They have those now. Do we know they did when the researcher tried to reach out?
Edit: I decided to take a look at it myself. It does seem that that was available on June 3rd of this year [0]. (You'll have to look at the source since the archive doesn't do their animations.) It seems to be available on previous snapshots as well [1].
I did the same thing as OP years ago: I tried to contact the dev team of the largest telecom company in my country in every way possible.
All channels were ignored, so I had to resort to contacting our government agencies. Luckily, one agency replied and had one of the devs contact me. For this hassle I was paid only $50.
You have no idea the effort it takes to report these things. So I quit bug hunting after that.
I mean, a16z should be very grateful this got reported by an honest hunter, regardless of how it was reported.
I stumbled upon a big vulnerability in an unnamed Czech ministry's web apps around January. It's now July and after trying the appropriate support email, the official "snail mail but digital", and calling various people's office landlines (thankfully they publish those in the org chart), it might get fixed this month.
If there is a next time, maybe I'll try convincing the cybersecurity bureau to take my vulnerability reports instead.
I'm generally sympathetic to what you're saying, but I also detest a16z and Horowitz personally for being the epitome of "software guy decides he's expert at everything now" and his role in the crypto bubble.
Should the hacker have tried more? Sure, maybe. Do I really care? Definitely not.
It's polite to say thanks if someone informs you that you accidentally left your backpack open.
But in no way are you supposed to give them anything.
Even further, some people take precious things from your backpack (trying to exploit the issue)
and then come back to you asking for money, claiming they are nice people. This is nonsense.
... Did they actually steal anything or take advantage, or just touch the bag to make sure it wasn't fake? Seems more like the latter, and your analogy falls flat when the bag contains other people's PII.
Terrible analogy. This is more like someone returning your wallet full of cash, on live TV. You aren't legally obligated to give them anything, but it sure is a dick move not to and good luck getting your wallet back next time you drop it if you don't.
Because the next person will know there's a good chance you'll give them a cash reward, and that will tip the "immorally take all the cash" vs "return it and hope for a reward" balance more in favour of it being returned.
I would have thought that was completely obvious so maybe that's not what you were asking?
The places you're most likely to get your wallet back in the world are the places you're also less likely to get a reward. The reward for returning a wallet is knowing you're doing your part to make the place you live in a nice place to live.
I think A16Z and the companies they've funded have done a great deal of good for the world. The very web browser you typed your angry comment into is a technology pioneered by one of its two founders.
Being anti-VC is essentially being against technological and economic progress.
It’s just that the analogy breaks down a bit. It’s fair to say a dropped wallet in a city is a one-shot game—it’s reasonable to expect neither the participants nor their acquaintances will ever encounter each other again; whereas a security vulnerability is closer to a repeated one—it’s a fairly small world. (Some kind of neighbourly behaviour would work better here, but then again, it’s more difficult to find a universal experience of that kind.) I didn’t misunderstand this, but perhaps GP did?..
You're using the wrong line of thought on the analogy here.
The value of the wallet is not the cash you'd directly lose inside of it. The value is getting your ID and cards back without them being copied by someone else, along with any other identifying information.
The value of having an up-front and easy-to-use bug bounty system is that it's easier to use than selling the exploit off to some black hats (hopefully). Those black hats may otherwise scrape all your S3 buckets or somehow run up a zillion dollars of charges over a holiday with your keys.
Not when you find it on the first "inspect element". That really is the equivalent of looking through someone's window and seeing their bank information and credit cards just lying in full view of anyone who'd look in.
This is what you expect from VCs. I always prefer to report these incidents to GDPR authorities if user data is leaked. Then they pay the fines and some get a criminal record. Money is something VCs "print" and manipulate.
Counterpoint: OP is a security researcher and couldn’t find a single human email address at one of the most well-known VC firms on the planet? LinkedIn? Twitter? Facebook friends? Come on. They’re not hard to reach if one really wants to.
Trying more than one email is not jumping through hoops when it's one of the worst possible vulnerabilities, hitting all of their databases/platforms. Being a researcher means being an adult and having a basic level of responsibility. Just like being a gun owner: it's a powerful tool that needs to be treated with the utmost respect.
A lot of pentesters are just kids who are angry at the world and the poor state of security, which I get, but it's not a huge barrier to try a bit more. He would have been rewarded if he did.
A researcher should not have to “try different emails”. Period. There should be a clearly disclosed email provided by the company to report such issues. Very obviously plastered. Or just use the standard abuse@, security@, infosec@, etc.
It is by far in the company’s best interests for this to happen because the alternative is public disclosure or disclosure to black hats instead.
Anything more is jumping through hoops. It should not be the researcher’s responsibility or burden to go out of their way to help a company that hasn’t done the bare minimum to welcome white hats helping them secure their own systems.
Yes, of course companies should do that, but in the real world a lot of companies don't think to, especially for a marketing site for a VC firm.
Any dev knows what it's like having a million responsibilities; a lot of things get put on TODO lists that never get completed. Them being owned by a wealthy company doesn't mean they have a huge dev team running 24/7 to handle this stuff. Which is probably why such an obvious failure even happened...
Security researchers get high and mighty extremely quickly, which is immature IMO.
The security researcher in this case worked for free to find a hole in their security, reached out via a provided email address, had that bounce, so then chose to reach out via a different messaging system to let them know that there was an issue. ALL OF THIS WAS UNPAID. They have 0 or less responsibility to this firm. The researcher was doing them a huge favor.
> Security researchers get high and mighty extremely quickly, which is immature IMO.
Immature would have been not trying to responsibly disclose this, or disclosing the hole before it was patched.
>Any dev knows what it's like having a million responsibilities,
Any airplane mechanic has a million responsibilities, and if they are not followed people fucking die. Maybe software devs should step up and take a little responsibility for their lack of action that can have consequences for their users.
Security researchers owe you nothing. If you make the path of least resistance selling sploits to blackhat groups the world will be a worse place.
Alright then: you go to Andreessen Horowitz's website[1] and see if you can find a SINGLE email address in any of the normal places a business would list the (not-social-media) contact information. Because they did their damnedest to make sure you won't find any.
See 4 links to social media pages where every single one has DMs open
Wait at least a couple of business days to see if anyone replies; if no one does, or it's not being taken seriously, then you can announce publicly on social media that you found something but can't reach them.
Okay. There’s 4 front office emails and 4 social media accounts, both presumably manned by non-technical folks.
So now you have to go back and forth just to get routed to the right place. Which may not even happen if this is the first time that employee handled a security incident.
You’re making it sound like sending the email or DM is the end of the work. That is usually far from the case.
Emailing an office manager with a company security issue would be incredibly irresponsible. They're in charge of managing the physical office and are about as "outside" as you can get in a company while still being employed by that company.
I don't think the onus should be on the researcher, and I think A16Z should have paid them. But if they actually wanted to get in touch, I'm just saying they could have.
If they're putting the effort into vuln scanning the site, they can also put in the effort to get in touch like a professional. You could just as easily say "why should the onus be on the researcher to find vulnerabilities when it's A16Z's job to secure their own site". The researcher is in this to find holes and make a few bucks (which is fine!). The job is complete when you get in touch.
> If they're putting the effort into vuln scanning the site, they can also put in the effort to get in touch like a professional.
They did. They emailed, and when that was bounced, they used a different medium to reach out. Twitter is a place that many companies actively engage with the public.
> The job is complete when you get in touch.
They got in touch. If A16Z aren't going to respond to people via email, but they do on twitter, they don't get to decide that twitter isn't a viable communication platform.
> You could just as easily say "why should the onus be on the researcher to find vulnerabilities when it's A16Z's job to secure their own site". The researcher is in this to find holes and make a few bucks (which is fine!). The job is complete when you get in touch.
Presumably, the company wants to be as secure as possible. It’s in their best interest to make this process as painless as possible. A security researcher has many options for what to do with a found exploit, some far less moral than others. The company has very few, relatively. They are the ones that are limited and therefore should be doing everything in their power to ensure the best outcome, a responsible disclosure that is fixed as quickly as possible.
The best way to ensure they do this is to provide an obvious, easy to find avenue for these things. This includes reasonable, well-displayed emails (or using something like a standard abuse@, etc) and a bug bounty.
Simply put, the company is the one that should be going out of their way or else they will just have researchers either disclosing it publicly or selling the exploit for likely far more money than a bug bounty.
I understand where you're coming from, but you're using "should" a lot. Companies should do a lot of things! They should make their sites secure. They should have a formal bug bounty program. They should have security@ and engineering@ and lots of other emails easily visible. We agree.
But many don't. And a lot of things in the business world are not as they should be. And in this real world of imperfection, others sometimes need to put in effort (and be paid for that effort) to make up for the failings of companies. This is one of those cases of imperfection.
Of course I’m using “should” a lot. Because “should” clearly didn’t happen.
That doesn’t change anything. Just because a company has shitty security reporting practices doesn’t suddenly mean the onus is on the researcher to do the company’s job.
Exactly, if he even just browsed their website a bit he'd have stumbled across loads of email addresses that could have been a useful point of contact.
It’s more fun getting attention by doing it publicly and being the victim (security researchers love hitting the 'nobody respects us' button) than putting basic effort in.
A single email bouncing is frustrating, of course, but he then posted on Twitter that an easily found vulnerability existed, while a16z:
- has a contact page, https://a16z.com/connect/, with 4x emails to their offices at the bottom (despite claims the main site had no other emails)
- links to their Twitter where DMs are open https://x.com/a16z same with instagram, FB, and linkedin, all open
It would be easy to just email all of them at once and wait a couple of days to see if it gets escalated.
when companies say they are “hacked”, it’s now a corporate term for “we were negligent in securing important credentials, but please shift blame to this no-name entity we called a ‘hacker’”
If you accidentally leave your front door wide open and somebody steals all your stuff, you'll also say that you were robbed.
There might be a legal distinction between "breaking and entering", "burglary", "trespassing" etc, and in a legal sense, whether the front door was open might have some impact on whether the act was illegal or not and what the consequences are, but in colloquial usage, you've still been robbed.
A website is not a house. It is nothing like a house. There is no front door. There is no lock. There is no expectation of privacy. There are only things you can access and things you cannot. There is nothing inappropriate about trying to open the bathroom window from the outside.
If I wanted to use such a weak analogy, the analog of "hacked" is not "robbed". You were only robbed if content was removed and exclusively held by someone else, which in the security world we call a ransom.
In this case, a person was yelling through the front door "Your door is wide open!" and no-one was listening.
For a 42B AUM company, at a time where running an IT operation means "use CrowdStrike so that you pass audits", leaving the front door open all night should get you fired, regardless of whether you blame hackers or not.
If you put all your stuff on your front porch with a sign “please take what you want” and it’s all gone the next day - then you can’t say you were robbed.
I think this is a more apt analogy for what a16z did here.
IMO these sorts of analogies to houses and porches don’t really work because there are just different cultural norms between websites and porches.
If there were a convention of leaving stuff on your porch to donate it, and a general assumption that when people left stuff on their porch it was up for grabs, somebody started storing their groceries there, and they were taken… they would just be stupid and not sympathetic.
If somebody just moved to a neighborhood where this was tradition and didn’t know about it, they would rightly be a little bit annoyed when the groceries they stored on their porch were taken, but really they only have themselves to blame for not understanding the local conventions.
If somebody opens up a storage company and then just put all the customers’ stuff on one of these porches, they are just dangerously, unethically incompetent. Even if there isn’t a convention of taking stuff from porches, actually. Because there are also armed gangs (nation-states) that go check out people’s porches for secrets.
There's no analog for the sign. You just put it in because without it your scenario still feels like theft (because it is) and you end up arguing against your own point.
Using those credentials is still a violation of the CFAA; no reasonable person would think they were invited to access the systems protected by those credentials.
Yea, I'm sure the Russian/China/NK/Iran hackers are deeply afraid of the CFAA, you got them shaking dude (and vice versa when someone in the US hacks one of their sites).
The particular problem here is we think of the crime on the web in a civil/criminal manner... "People should just follow the law or be punished for a crime". This is not the internet. Regardless of what you think about the internet, it is an international war zone. If you leave the hatch of a tank open and a drone blows it up, that was you being stupid. If you leave an ammunition truck unguarded and the enemy takes it, again, that is you being stupid.
History will look back and say WWIII started on the web, but as of now it seems a huge number of people are in denial about it.
Do you cultivate vines with fruit, or do you cultivate brambles and eat thorns?
Remember, white hats don't need to exist. Black hats will exist by their very nature: they are parasitic and thrive where exploits exist. We can either have a community that warns you, "Hey, the stuff on your porch is going to get stolen," or a community that calls their buddies when they see some stuff fresh for the taking.
A huge portion of the discussions under this article are people arguing the minutiae of a puddle in the lawn while a 10-meter-high tsunami is rushing their way.
They are busy writing a giant "architecture of generative AI" whitepaper.
Give them a pause; they are dreaming of a future agentic world of half-assed chatbots,
while the world burns with botched software updates.
If you could actually access their Salesforce instance, that would be very nerve-wracking for founders, since Salesforce etc. usually logs emails, which may contain unannounced fundraising or M&A plans that haven't been shared externally by portfolio company founders.
Oh no CRIME! Thank goodness that something being a crime stops people from committing them.
Thank goodness the internet isn't an international operation filled with nation state level actors and questionable companies running data gathering operations from places they cannot be touched.
Always assume your data has been stolen by an assailant in a place that's only reachable by launching nukes at them. Also assume there is some competitor on the other side of the world now using your data against you.
Please stop treating data theft like Barney Fife level candy store theft. A huge portion of the time even if you know the name of the exact person who did it, there isn't going to be shit you can do about it.
You (unintentionally) drop your house key in front of your door. Now we can all freely enter your house! It can't be trespassing with the key sitting right there, can it?
According to the article, the decision to back him was due to the 2025 tax plan to tax unrealized gains, which I hadn't heard of, but I'm not surprised that he wouldn't be a fan of that, given that his entire business is built on investing in companies, and that these investments on the part of founders and investors are unrealized. It does seem like it would disincentivize much of the startup and venture capital economy.
I'm not smart enough to understand finance and so forth. So can't comment on that 2025 tax plan.
I do know that "Bidenomics", aka the torrent of federal money (CHIPS Act, Inflation Reduction Act, the EPA's new "Green Bank", the Dept. of Defense's retooling, etc.), has been a huge boon for startups.
I would have thought a group of savvy entrepreneurs like a16z would join the renewable energy and domestic manufacturing bonanza.
But like I said, I don't understand finance. So I'm sure a16z have their reasons to sit this one out.
I wouldn't be surprised if they would have been on board for most of the Biden era economic policies. I think it may have just been the possible industry repercussions from the coming 2025 tax plan that made A&H anxious, given that it could disincentivize the venture capital growth market.
Not a great analogy. It's more like: if your endodontist hired a secretary who leaves the medical records unlocked, do you really trust them to be up to date with modern dental sensibilities when the rest of their office is run so carelessly?
Sincere question: how do you actually make this mistake while having the skills to build a web app of this complexity level? All the frontend and full stack frameworks that I’m familiar with try pretty hard to stop you.
I've seen people make exactly this mistake with Next.js. IMO React Server Components are a fantastic tool for losing track of what's exposed client-side and what isn't.
Next.js makes you prefix env vars with NEXT_PUBLIC_ if you want them to be available client side, and Vercel has warning flags around it when you paste in those keys.
It's obviously not foolproof, but it's a good effort.
That's env vars, but not actual variables - it's really easy (if you are not actively context-aware) to e.g. pass a "user" object from a server context into a client component and expose passwords etc. to the client side.
That's a fair point! It definitely feels easier to make that mistake, and anything where context and discipline is required is a good candidate for making some horrifying blunders :)
If you add `import "server-only"` to the file, it will fail to compile if you try to use it on the client. React also has more fine-grained options where you can "taint" objects (yes, that's the real name).
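A minimal sketch of both mitigations together in a Next.js App Router project (the env var name is made up; `server-only` is a real package, and the taint API is experimental, so its name may change):

    // lib/secrets.ts
    // The build fails if a client component ever imports this module.
    import 'server-only';
    import { experimental_taintUniqueValue } from 'react';

    export function getAuthApiKey(): string {
      // No NEXT_PUBLIC_ prefix, so Next.js never inlines this value
      // into the client bundle.
      const key = process.env.AUTH_API_KEY!;

      // Belt and braces: React throws if this exact string is later
      // passed from a Server Component into a Client Component.
      experimental_taintUniqueValue(
        'Do not send the auth API key to the client',
        process, // the taint lives as long as the process
        key,
      );
      return key;
    }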
Yeah, the problem is that these mitigations require the developer to be context-aware; "server-only" only saves you in the positive case where you correctly tagged your sensitive code as such. The default case is to expose anything without asking. I have also seen developers simply mark everything as "use client" because then things "just work" and the compiler stops complaining about useState in a server context, etc.
A little tired because you didn't sleep well, or worried about a relative in the hospital, or you stubbed your toe that morning and it's distracting... and whoops.
Yes, the answer must be additional processes and procedures. That way, you’ll never make a mistake! /s
Also bizarre to frame this as “unacceptable behavior”, as if whoever is involved was in some way aware of their mistake and/or would say “this is acceptable behavior!” when confronted with it or something.
Humans are gonna human; if you have an environment where you fail to account for this, this will happen. Reminds me of a dev dropping a production database, or the AWS engineer who mistyped a command and brought down S3: many things have gone wrong to even get to this point, and blaming a human for behaving like a human in an inhospitable environment is silly. Effort is almost always better spent building a system that is safer for the people involved to operate.
I've considered tracing outgoing responses from nginx/traefik/whatever to watch for known API keys. The difficulty would be identifying the keys amongst the noise.
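As a sketch of the idea at the app layer instead of the proxy (hypothetical Express-style middleware; the two patterns are real key shapes for Stripe and AWS, but in practice you'd maintain your own list):

    // Scan outgoing response bodies for known API-key shapes.
    const SECRET_PATTERNS = [
      /sk_live_[A-Za-z0-9]+/, // Stripe live secret key prefix
      /AKIA[0-9A-Z]{16}/,     // AWS access key ID format
    ];

    function scanForSecrets(req, res, next) {
      const send = res.send.bind(res);
      res.send = (body) => {
        if (typeof body === 'string' && SECRET_PATTERNS.some((p) => p.test(body))) {
          console.error(`possible secret in response to ${req.originalUrl}`);
          return res.status(500).send('internal error'); // fail closed, don't leak
        }
        return send(body);
      };
      next();
    }

Anchoring on known vendor prefixes rather than generic entropy is what keeps the noise down.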
But even if they have security processes that each catch 99% of bugs, stacking two of them still leaves a 1-in-10,000 chance that a given bug slips through both. And I'd wager that a16z has more than 10,000 "components" going through those processes.
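Spelled out as a back-of-the-envelope, assuming the checks fail independently:

    // p = per-check catch rate, n = layered checks, N = components in scope
    const p = 0.99, n = 2, N = 10_000;
    const slipProb = (1 - p) ** n;      // ~1e-4
    const expectedLeaks = N * slipProb; // ~1 bug makes it through

Adding a third independent 99% check drops the odds to roughly 1-in-a-million, but "independent" is doing a lot of work in that assumption.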
My guess is an internal tool that wasn't expected to be exposed publicly.
Additionally, I didn't realize there are tools to automatically discover unreferenced subdomains like this. I would have just assumed security by obscurity.
If one person learns this lesson it's good. If it's on the public Internet, best to expect it will be found. Stick it behind an auth wall of some sort.
I've put internal sites behind AWS ALBs plugged into an OIDC provider[1] (Google), which works well.
> a16z did not give me any bug bounty on this because of the fact i publicly reached out instead of trying to reach out privately.
I just don't understand this petty attitude. This almost guarantees that the next person who finds a vulnerability in a16z or any of its companies will seek black-market rewards, which will do far more damage.
This is just like when KakaoTalk refused to pay out a bug bounty because you had to be a Korean citizen, which ended up causing more vulnerabilities to be discovered in the wild.
Companies and billionaires reading this: please don't be petty like Andreessen. The guy went from a leader to a borderline security fraud artist. You don't want to be earning more ire from the public in the current political climate. It's dangerous.
Why does this read like a 9 year old TikToker wrote it? This reads like some little script kiddie who runs fuzzing tools (and can't make any of their own) ranting online unprofessionally.
> “On June 30th, a16z addressed a misconfiguration in a web app that is used for the specific use case of updating publicly available information on our website such as company logos and social media profiles. The issue was resolved quickly and no sensitive data was compromised,”
What the fuck is this? They are blatantly lying here. There was a lot of sensitive data compromised. Anyone who inspected the site could have had access to everyone's emails.
If anyone could view any of those secrets and access emails, then sensitive data was exposed. They can't just decide it wasn't exposed because no one else told them about it.
Couldn't it be the case that the secrets were not useful for accessing sensitive emails? Their response made it sound like the secrets were limited to a specific, limited-use app.
Question for the community: I managed to expose all customer data of a well-funded D2C brand, and when I reached out to them I did not ask for a bounty before I shared the fix/the security hole. I only got a 200 USD gift card for their shop :D
What is best practice here? Do you first tell the company that they have a security issue, ask for a bounty, and then help? Is that unethical? Blackmail?
Stuff like this is what gives the entire security and white hat community a bad name.
1. "Surprise pentests" are illegal in the US and pretty much every jurisdiction in the world. If you are actively breaking into websites without a prior agreement, you are not doing anyone a favor. Save your efforts for companies that actually want you.
2. If the company doesn't have a published bug bounty program, they don't owe you anything. Yes they can still be nice and pay you, but they definitely won't if you disclose the vulnerability to the rest of the world without giving them a heads up and enough time to fix it.
3. "Oh I couldn't find an email address" is the worst excuse in the world. I found one after exactly 5 seconds of Googling (at the bottom of https://a16z.com/connect). And even otherwise there's Twitter, Instagram, LinkedIn and a hundred other ways to reach someone at the company if you really want to.
This is a classic case of clout chasing over responsible disclosure.
"i like to do this thing where i search twitter, looking for companies, and then try giving them a quick pentest"
"the compromised list of services: their database (containing PII), their AWS, their salesforce (never checked, account may be limited), mailgun (arbitrary emails from a16z domains, and also could read older emails)
... and probably more"
By their own admission, this is a "pentest", and they were able to access a16z's "database" and ascertain that it contains PII. Amongst other services used by a16z.
I'm not the one to judge whether they crossed any legal (or moral) lines though.
Too much JavaScript for everything (front and back) seems easy, but for new developers it kind of blurs the lines between what should be on the server vs. the client.
>a16z did not give me any bug bounty on this because of the fact i publicly reached out instead of trying to reach out privately. the only reason i did it this way was because:
> there was no available contact on their main site
> the email i could find engineering@a16z.com bounced my emails
The age-old practice of screwing over security researchers over any possible technicality is still alive and well. Brings tears to my eyes.
Is there any legal basis to challenge this practice? Say, if a company claims that they pay bug bounties but uses flimsy reasons like this to chicken out of seemingly genuine cases like these?
I'm guessing no, and even if there were, they could make the litigation costs very high.
The sad thing here is that what has to happen is for the data to be sold off to black hats to the point that entire countries get pissed and start putting near-draconian regulations and fines against companies like this to get them to stop this insecure bullshit.
I don't remember what your post originally said, but posting about a vulnerability is not the same as disclosing the vulnerability. Especially when you're asking for a contact.
The difference, in case you really want to know, is that one tells everyone what the issue is, while the other tells everyone that there is an issue.
It's pretty shocking how many commenters are blaming the individual for not "trying harder" to find contact information. It's pretty clear a16z didn't want to pay anything or appreciate the disclosure at all.
Finding random email addresses and sending them a notice would have gone nowhere other than spam folders. I get dozens of "disclosures" every week from mostly script kiddies who think my DKIM setting is somehow going to be the end of my business. My brain automatically ignores emails like that.
I'm surprised there is almost no discussion about the severity of the reputational damage caused by an extremely amateur bug not expected of a prominent VC firm.
Yes... In my mind, there are three kinds of security bugs.
1. Caused by pure ignorance and completely avoidable (this bug).
2. Caused by subtle configurations, workflows, programming (mostly avoidable, secret scanning, security linters, code reviews, general intelligence, etc). This is where 99% of security bugs are.
3. Caused by a malicious actor aligning planets with a single intent to maximize their cause. You'll never stop these people (three letter agencies, state actors).
Probably because a16z's reputation has already been quite tarnished in recent years. This is par for the course. People will still take their massive bags of money and name-brand boost, but the idea that these are smart, technical, "making the world a better place" visionaries, as opposed to wealth-chasing bankers, has already run its course.
See crypto, Clubhouse, "it's time to build [not in my Atherton neighborhood]", e/acc Nick Land manifesto, Trump '24 support, etc.
I (we) would obviously prefer the professional person who is doing good for society. The problem is, this behaviour isn't good for them. I'm not an expert or anything, but from what I know, pentesting without explicit prior permission can easily lead to huge lawsuits. I would rather the careless people get their cars stolen than have the good people all lose heart completely.
Sure there is no perfect solution here.
I guess it's a good idea to only pentest companies that have a bug bounty program and an expressed interest in being pentested.
While I enjoyed the article that GP referenced and agreed with most of it, I thought the "hacking bad" take was a bit off.
Having a curious look is alright but it's the "beg bounty" attitude that these researchers need to rein in. It's like the sponge-and-bucket guy washing your grimy windscreen without you asking while you wait at the lights, then demanding cash for it. Thanks but no thanks.
Agreed, and all the "shame if next time someone would sell it on the black market" comments don't exactly make those "researchers" look like the good guys.
> I too, as the good samaritan that I am, like to stroll through my neighborhood and give all the cars and bikes I encounter a quick pentest, purely for the benefits of the owners of course.
In my neighborhood, "security researchers" can often be seen checking houses for vulnerabilities. During the day, it's usually a woman or a kid with a clipboard who knocks on front doors, checks for cameras, tests if the front door is locked, etc. I'm told they work with crews of men who will come back later to do a more thorough investigation when everyone is gone so as not to bother the homeowner.
Every night, there are other "security researchers" who test all the doors of all the cars parked on the street and in driveways. If you leave your car door unlocked just once, you'll be informed about it the next morning!
>I remember there was an article "the six dumbest ideas in computer security" on HN a while ago, one of those was the mindset that "hacking is cool". I'm reminded a bit of this here.
Half of that post is unhinged nonsense. "Hacking is Cool" is listed right after a rant about pentesting being dumb because your software should just be designed to be secure.
Actually, I think entitlement is the wrong word. Maybe more like "window washing panhandler who's upset because you don't give them money for their service"
[1]: https://github.com/heyPuter/puter/