Part of it probably stems from the fact that it's always been done this way, and the processes haven't evolved over time. Another part is protecting the bank against rogue employees; it wouldn't be the first time a developer made changes directly against a production database.
You're right that production support has access to those systems and could potentially make changes and install different binaries, but the number of people who can do that is extremely limited. Every change also requires a change request that needs several approvals, and to request data you need a separate data request.
Adding to that (because people only understand this clearly when I use this example):
CompanyA is using ITS OWN assets, funds, IP, etc. You own it; you can burn it to the ground.
BankB is holding other people's money. You can't just make a mistake; a bank can't lose 100m of OUR money and say "oops, my dev made a mistake".
Edit: similar expectations exist in publicly traded companies (i.e. companies that use OUR money: we give them our cash and they give us stock). This is why external auditors (e.g. the Big4) do not like to see "poor change management processes", such as inconsistent SoD (segregation of duties).
> BankB is holding other people's money. You can't just make a mistake; a bank can't lose 100m of OUR money and say "oops, my dev made a mistake".
Not only that, but once that happens, regulators will come in, and everybody involved can be held liable. The bank will be fined, and depending on how bad your fuck-up was, you'll probably end up losing your job and might face further penalties.
So in the interest of everyone, it's best to just avoid it altogether.
Thanks for the answers in this thread. I didn't really mean YOLO-style random access to production. I was hoping there were ways in between to bridge the gap between dev and ops in those systems, similar to how it has been done e.g. with SRE in applications with more relaxed security requirements.
I was hoping some solutions along that spectrum were adopted more widely, like the centralized logs stripped of private data mentioned above, or granting temporary, audited access. But it seems that with legacy systems this is much harder to implement.
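To make the logging idea concrete, here's a minimal sketch of the kind of scrubbing filter I mean, assuming a Python service whose logs get shipped to a central store that devs can read. The redaction patterns and the "payments" logger name are purely illustrative assumptions, not anything a specific bank actually uses:

```python
import logging
import re

# Hypothetical patterns for the kinds of private data we'd want stripped
# before logs leave the production zone for a centralized store.
REDACTIONS = [
    (re.compile(r"\b\d{16}\b"), "[REDACTED-PAN]"),                     # 16-digit card numbers
    (re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"), "[REDACTED-IBAN]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),
]

class RedactingFilter(logging.Filter):
    """Strip private data from log records before handlers ship them."""
    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern, replacement in REDACTIONS:
            message = pattern.sub(replacement, message)
        record.msg, record.args = message, None
        return True  # keep the (now scrubbed) record

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("payments")   # hypothetical service logger
logger.addFilter(RedactingFilter())

logger.info("Transfer failed for 4111111111111111 / jane.doe@example.com")
# -> Transfer failed for [REDACTED-PAN] / [REDACTED-EMAIL]
```

The point isn't this particular filter, just that scrubbing at the source lets devs see operational detail without ever touching the underlying customer data.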
I believe there is an optimal balance where fewer mistakes would actually be made if the people developing the system and the people operating it had more visibility into each other's field.
As for wilful fraud attempts, well, you can't rule out that devs would do it, so of course there should be various barriers preventing that, plus proper change management. But, my sampling bias aside, when I look at recent scandals in finance, take e.g. Wirecard as the latest one, it's more often higher management that's involved than devs.
At least where I work, every team can essentially decide what they do, as long as they follow a few basic guidelines. So newer projects usually have centralised logging and automated deployments. But sadly there’s still a wall between development environments and production, for good reason I think. Not everyone should have access to production data, so only a limited number of people have access. Data is of course anonymised when sent over.
But yeah, some legacy systems could be 5 years old, and that’s a long time in tech.
You’re right on the visibility part, but sadly that’s an organisational issue; you need higher-ups to change this.