Back when I used to do AppSec, these types of issues were extremely common. Software developers and their managers would argue endlessly that they weren't real vulnerabilities, which meant I had to put together a proof of exploitability. And since these were interdepartmental fights, office politics got involved. Just one of the dozen or so reasons why I stopped doing AppSec and went back to development.
I left security work for a similar reason. In most companies, Security isn't there to collaboratively build more reliable and dependable products that protect customer privacy, bringing in a useful perspective of how things can go wrong, similar to QA's role. Instead, Security is there to be the internal police, who treat engineers (and other employees) like criminals, and get recognition and rewards for stopping the company from shipping. The way the vast majority of companies treat Security is deeply dysfunctional and soul-killing to anyone who wants to bring a glass-half-full mentality to work. And in an industry where it has become practically an expectation for people to jump ship after ~4 years, that's too much of a career risk to take. (side note: QA has exactly the same problem.)
While I'm sure that's also common, my general impression was that security was there to provide legal protection against being found to have been guilty of willful negligence if there were a breach. There wasn't a top down push for actual security but there was one for getting all the proper boxes ticked so the company could get the compliance certifications required for insurance and to make the legal department happy. Essentially it was financial risk management rather than data risk management.
That's the same pessimistic perspective. SOC2 requires that commits be tied to reviews, to show that work was visible and approved by management rather than some cowboy engineer putting who-knows-what into production. But when the control is implemented as a simple Jira-issue regex, every developer just sticks ABC-123 or ABC-999 on every commit, and developers are free to open and close Jira issues without management noticing or approving anyway. At that point, the only people guilty of willful negligence are the so-called security engineers for putting in such weak controls and the auditors for approving them regardless, not to mention security engineering leadership's outright fraud in claiming effective controls are in place when everybody internally considers them a massive joke.
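To make the weakness concrete, here's roughly what such a "control" amounts to (the regex is my guess at a typical setup, not any particular company's):

    // Any Jira-key-shaped string satisfies the check; it proves nothing
    // about whether the issue exists, was reviewed, or was approved.
    const JIRA_KEY = /\b[A-Z][A-Z0-9]+-\d+\b/;

    function commitMessagePasses(msg) {
      return JIRA_KEY.test(msg);
    }

    commitMessagePasses('ABC-123 fix login');        // true
    commitMessagePasses('ABC-999 who knows what');   // true, issue may not even exist
    commitMessagePasses('rewrote auth, no ticket');  // false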
The flip side of the joke, of course, is that everyone internally naturally prefers weaker controls (which let them ship faster) to stronger ones. So there's a wink and a nod and a smile and everyone moves on, while institutionalized corruption is accepted. Never mind that strong controls over commit messages can also feed automated documentation, notifications, and useful integrations, like linking a production outage to the Git commit that triggered it, complete with full business context and knowledge of who to contact.
Note that there are two perspectives from which to build this kind of control. The glass-half-full perspective builds Git -> Slack integrations so people get notified quickly that a review was requested, including signals that a change is a hotfix / simple / uncontroversial / rubber-stampable so the simple stuff gets approved and deployed fast, and it collaborates with the auditors to get them the reports and commit samples they need.

The glass-half-empty perspective says: well, the auditors already have a built-in integration with Jira, so let's throw it all into Jira, along with a complicated and rigid workflow that forces everything through sprint planning and approvals by managers three levels up. And if that causes a production outage because something can't be fixed quickly, well, that's not really Security's or Compliance's fault; the regulations are the regulations and the auditors are the auditors, and why are you trying to work around The Perfect Process That We Worked So Hard To Build, maybe you have malicious reasons, hmmm? And maybe it's time we hired separate operators to run everything in production, like A Real Enterprise Company would, like some banks you've heard of?
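For what it's worth, the glass-half-full version is maybe thirty lines of code. A sketch, assuming a GitHub-style pull_request webhook and a Slack incoming-webhook URL (both of those are my assumptions, not anything from the thread):

    // Turns review-request webhooks into Slack pings, surfacing a
    // "rubber-stamp" signal so trivial changes get approved quickly.
    const express = require('express');

    const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL; // hypothetical config

    const app = express();
    app.post('/git-events', express.json(), async (req, res) => {
      const { action, pull_request: pr } = req.body;
      if (action === 'review_requested') {
        const trivial = (pr.labels || []).some((l) => l.name === 'hotfix');
        await fetch(SLACK_WEBHOOK_URL, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            text: `${trivial ? 'hotfix, rubber-stampable: ' : ''}review requested: ${pr.title} ${pr.html_url}`,
          }),
        });
      }
      res.sendStatus(204);
    });

    app.listen(3000);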
I was about to say exactly this. This is like REALLY BASIC stuff in designing web services. The fact that you can reset the password with a single HTTP POST is mind-boggling; bypassing the 2FA by hiding a <div> is mind-boggling. Like, completely negligent. (btw they took over a Subaru employee account, not Starlink)
Or not requiring ANYTHING to authenticate in your forgetPassword endpoint: you could set a new password directly, instead of the endpoint sending a randomly generated password via email, or a one-time token that lets you reset the password yourself.
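The gap between the anti-pattern and the boring correct flow is small. A sketch with made-up endpoint and helper names (not the actual Subaru/Starlink API):

    const express = require('express');
    const crypto = require('node:crypto');

    const app = express();
    app.use(express.json());

    // In-memory stand-ins for a real user store and mailer.
    const passwords = new Map();
    const resetTokens = new Map(); // token -> email

    // BROKEN (what's described above): no proof you control the account.
    app.post('/forgetPassword', (req, res) => {
      passwords.set(req.body.email, req.body.newPassword); // anyone who knows the email wins
      res.json({ ok: true });
    });

    // SAFER, step 1: only ever email a random single-use token.
    app.post('/requestReset', (req, res) => {
      const token = crypto.randomBytes(32).toString('hex');
      resetTokens.set(token, req.body.email);
      console.log(`would email a reset link carrying ${token}`); // stand-in mailer
      res.json({ ok: true }); // same reply whether or not the account exists
    });

    // SAFER, step 2: accept a new password only alongside that token.
    app.post('/resetPassword', (req, res) => {
      const email = resetTokens.get(req.body.token);
      if (!email) return res.sendStatus(403);
      resetTokens.delete(req.body.token); // single use
      passwords.set(email, req.body.newPassword);
      res.json({ ok: true });
    });

    app.listen(3000);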
To me that sounds exactly like what I'd expect from some of the junior developers I've met in recent years. Most of the business logic lives in client-side JavaScript, with poor modeling of the client-server relationship and no consideration of which parts of the system can be trusted. The design was based on the non-technical requirements doc or the mockups; an inexperienced front-end developer asked the inexperienced backend guy (or maybe they're the same person) for an endpoint, and the inputs were mapped directly from the fields in the form.
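The classic symptom of that workflow is mass assignment: the endpoint persists whatever field names the form happens to contain. A hypothetical sketch:

    const express = require('express');
    const app = express();
    app.use(express.json());

    // BROKEN: every field in req.body is trusted, including ones the UI
    // merely hides. A client can always add { isAdmin: true } themselves.
    app.post('/updateProfile', (req, res) => {
      saveUser(req.body);
      res.json({ ok: true });
    });

    // BETTER: the server decides which fields a client may set.
    app.post('/updateProfileV2', (req, res) => {
      const { name, email } = req.body; // explicit allow-list
      saveUser({ name, email });
      res.json({ ok: true });
    });

    function saveUser(fields) { /* stand-in for real persistence */ }

    app.listen(3000);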
Thankfully, even AI writes better code than this, so as this type of developer quickly becomes unemployable over the next few years, I think we’ll see a temporary increase in code quality.
This is exactly what I came here to say as well. Whoever wrote this fundamentally just doesn't get it.
This whole thing is honestly what I've suspected/expected owning this car, but it's somehow still surprising to see. My guess is that no car company does this really well right now, which makes me want to drive a 1998 Acura Integra instead.
I used my Chrome inspector to edit a read-only field in Jira. Surprisingly, I was able to edit it and submit the change. It completely fucked up whatever project we were about to use and we had to start over. The Jira admins were scratching their heads.
I think you misunderstand what's being described. The server didn't check it; it accepted the modified hidden field. The server should have rejected the request.
    $('#securityQuestionModal').modal('show');
is... mind-bogglingly stupid of whoever got the job to write that Starlink web-app.
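To spell out why that one line is damning (the surrounding code is my reconstruction, not the actual Starlink source): the "security question" gate lives entirely in the browser, so the browser's owner can simply skip it.

    // The server has already authorized the page; the modal is pure theatre.
    $('#resetButton').on('click', function () {
      $('#securityQuestionModal').modal('show');
    });

    $('#securityQuestionForm').on('submit', function () {
      submitPasswordReset(); // the actual POST, reachable without the modal
    });

    // From the dev-tools console an attacker just skips the quiz:
    //   $('#securityQuestionModal').modal('hide'); submitPasswordReset();
    // The real fix is for the server to demand the answer before honoring
    // the reset.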
OTOH, the hacker hijacked a Starlink employee's account to get in; isn't that over the line from an "ethical hacking"/legality standpoint?