Today, most banks run on old systems, some dating back to the 1970s, strapped together with metaphorical gaffer tape. Banks are too afraid to update them for fear of something going wrong.
I wouldn't say banks are too afraid, I'd say they are being prudently cautious. Banks have very different standards when it comes to reliability. This isn't about some system losing clicks, or likes, or whatever. Here, every single mistake can cause significant financial loss.
Just think what would happen when client trades get stuck at some point during execution. If market prices develop unfavorably in the minutes or hours until the error is fixed, the bank has to take the loss, on every single trade.
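To make that concrete with made-up numbers (a rough sketch, not any bank's actual P&L logic):

```python
# Illustrative arithmetic only -- prices and sizes are invented. If a client
# sell was confirmed at one price but execution got stuck, the bank eats
# the difference once the error is fixed. Working in integer cents.

def stuck_trade_loss(qty, agreed_cents, market_cents_at_fix):
    """Bank's loss on a stuck client sell: it owes the client the agreed
    price but can only lay off the position at the current market price."""
    return max(0, (agreed_cents - market_cents_at_fix) * qty)

# 10,000 shares confirmed at $50.00; market is at $49.40 by the time
# the error is fixed, so the bank is down $0.60 on every share.
loss = stuck_trade_loss(10_000, 5_000, 4_940)
print(f"loss: ${loss / 100:,.2f}")  # loss: $6,000.00
```

And that's one trade; multiply by every trade stuck during the outage.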
Almost all the people I know who work in banking (including myself) would love to get rid of our legacy systems, but that's much easier said than done, especially since many of those legacy systems are interconnected.
VaultOS is based on "smart contract" technology that allows people to tailor their own products such as mortgages, loans, and overdrafts.
Those are nice new features and all, but none of that means anything if you can't replace the existing core features first.
Yup, exactly this. And cautious is putting it nicely, I think. Tack on regulatory risk and you have an extremely strong headwind against any sort of major change.
> VaultOS is a bit like Amazon Web Services or Google Drive, but for what's called "core banking" — the technology that allows banks to hold deposits and accounts. Essentially, it's the heart of banking, around which everything else is built.
I hypothesize they are considering what I've been working on for about 4 years now, which is basically a blockchain + container/VM/repo deployment mechanism for infrastructure services. The people who were most interested in what I was presenting were financial institutions. The primary reason was that software which handles money shouldn't change a lot. Changes to software that handles money typically == loss of funds. If you build a piece of software that gets deployed and versioned using smart contracts/DAOs/whatever blockchain thing, you end up with a fairly robust way of managing financial systems and the various contracts those systems manage, including cryptocontracts running in the containers launched by other contracts.
In other words, the immutability of the blockchain should be applicable to infrastructure provisioning. My sense is that's what is happening here with VaultOS.
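For what it's worth, here's a toy sketch of what "immutable infrastructure provisioning" could look like: an append-only, hash-chained ledger of deployments, so any tampering with deployment history is detectable. All names here are hypothetical, and I'm not claiming this is VaultOS's actual design:

```python
import hashlib
import json

class DeploymentLedger:
    """Append-only ledger: each deployment record commits to the previous
    record's hash, so rewriting history breaks the chain."""

    def __init__(self):
        self.entries = []

    def deploy(self, service, image_digest):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"service": service, "image": image_digest, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**record, "hash": digest})
        return digest

    def verify(self):
        # Recompute every hash and check the chain links.
        prev = "0" * 64
        for e in self.entries:
            record = {"service": e["service"], "image": e["image"], "prev": e["prev"]}
            digest = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

ledger = DeploymentLedger()
ledger.deploy("core-banking", "sha256:abc123")
ledger.deploy("core-banking", "sha256:def456")
print(ledger.verify())  # True

ledger.entries[0]["image"] = "sha256:evil"  # tamper with history
print(ledger.verify())  # False
```

A real system would put the chain on shared infrastructure so no single operator can rewrite it, but the detection property is the same.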
Do customers today look at an ATM and say "Oh it's running the 2.7 release! OK, I'm good then!"? It'll be interesting to see what perceived value there may be.
Do you ever look at reality and wonder what version it's running? That's where all this is going...and the second it does, those Pokemon we're all chasing become de facto reality.
Alright, my turn. The bankers want back-end software that's near-100% uptime, cost-effective, future-proof, flexible for integrations, and secure. They have all of those but cost-effective and flexible. The next architecture will have to do better on those. We'll start with a sort-of three-tier architecture & client-server model, since those have been analyzed to death with tons of tool support for getting them right.
First, the datastore that simply stores the raw data everything else depends on. The datastore will be bootstrapped on HP NonStop or OpenVMS clusters to inherit their high availability. These systems already run banking backends in multiple datacenters with automatic failover and, in specific case studies, no lost transactions. Decade-plus uptime is not uncommon. They're also way cheaper than mainframes, with more support for modern SW & easier access for ISVs. The software itself will be built to have minimal dependence on the underlying platform, with tools to rapidly export data, sync with, or switch to a replacement. The replacement will be a licensed copy of Google's F1 RDBMS on OpenBSD & reliable servers, rewritten in the manner about to be described. If not, then something similar. :)
Next, the core banking stack. This is the banking software for withdrawals, deposits, basic security checks, audit events, and so on: anything that's happening constantly in real-time with high criticality. This will be contracted to Altran/Praxis, who will apply the Correct-by-Construction method to produce it in C, SPARK, and Rust simultaneously. The best available tools for static analysis and testing will be applied to each to catch whatever the others miss. Prior work in just SPARK has almost no defects. A combination of simplified components with extra checkers should further reduce that. The protocols will be contracted to Galois Inc to do in TLA+ and Haskell, especially generic, secure messaging protocols to replace SWIFT. Altran will implement anything Galois finalizes to integrate with the rest of the system. Paid peer review by people with a track record of finding esoteric flaws will occur for each of the deliverables.
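One cheap way to picture why producing the same component in C, SPARK, and Rust simultaneously helps: run the versions against each other and halt on disagreement. A toy sketch with a hypothetical interest calculation, not anything Altran/Praxis would actually ship:

```python
# N-version cross-checking sketch: two independently written versions of
# the same spec, run side by side; a defect in one shows up as a mismatch.
# All amounts are integer cents, rates in basis points.

def interest_v1(principal_cents, rate_bps, days):
    # Version 1: simple-interest spec, one operation order.
    return principal_cents * rate_bps * days // (10_000 * 365)

def interest_v2(principal_cents, rate_bps, days):
    # Version 2: same spec, written independently with another order.
    return (principal_cents * days * rate_bps) // (365 * 10_000)

def checked_interest(principal_cents, rate_bps, days):
    a = interest_v1(principal_cents, rate_bps, days)
    b = interest_v2(principal_cents, rate_bps, days)
    if a != b:
        # With money, disagreement means halt and page a human,
        # never pick one answer and hope.
        raise RuntimeError("implementations disagree")
    return a

# 5% (500 bps) on $10,000.00 for 30 days:
print(checked_interest(1_000_000, 500, 30))
```

In the real proposal the versions are in different languages with different toolchains, so shared-mode failures (same compiler bug, same library bug) are less likely than in this single-language toy.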
The client, presentation, and application layers' hardware will be the SAFE architecture (crash-safe.org) or CHERI CPUs (CheriBSD). These will be implemented with Leon3-FT processors on a Silicon-on-Insulator node with ChipKill and ECC RAM. They will run minimal OSs created for embedded systems with proven reliability & performance. Those will be modified to support the security features of the processors plus security labels for users & apps. Each machine, a la DiamondTEK LAN, will have PCI cards (or on-SoC HW) that authenticate users on a trusted path, check system integrity, end-to-end encrypt all data, and especially tag/check packets with the security labels of users & apps. Specific hardware modules will exist with data diodes to constantly sniff network, transaction, and audit trails to check them against a security policy. A similar one will exist for reliability and performance of the network.
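A toy model of the labeled-packet idea, in the spirit of DiamondTEK-style labeled networking: every packet carries a sensitivity label, and a node only receives data at or below its clearance. Levels and names here are made up:

```python
# Hypothetical linear label lattice; real systems use richer labels
# (compartments, categories), but the check is the same shape.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def can_receive(packet_label, node_clearance):
    # "No read up": a node may only receive data at or below its clearance.
    return LEVELS[packet_label] <= LEVELS[node_clearance]

def filter_packets(packets, node_clearance):
    delivered, dropped = [], []
    for p in packets:
        (delivered if can_receive(p["label"], node_clearance) else dropped).append(p)
    return delivered, dropped

packets = [
    {"payload": "rates feed", "label": "public"},
    {"payload": "client positions", "label": "secret"},
]
ok, blocked = filter_packets(packets, "internal")
print([p["payload"] for p in ok])       # ['rates feed']
print([p["payload"] for p in blocked])  # ['client positions']
```

The point of doing this in a PCI card or on-SoC HW rather than the OS is that a compromised kernel still can't strip or forge the labels.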
The software stack will be akin to REBOL's reblets and container apps. The apps will be isolated on microkernels with basic GUI forms. These apps will be developed in both a safe systems language plus an information-flow language like SIF. The systems will be shown with analysis & testing to be free of common errors. Information-flow analysis will prevent common forms of information leak and security breach. The apps will integrate with the trusted hardware to pass labels along. The server apps will pick up those labels & continue to factor them into their operations. Over time, the tooling will mature to automate these operations with only basic annotations by programmers plus a formal security policy by administrators.
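Here's a heavily simplified sketch of the information-flow idea (SIF does far more, and statically): values carry label sets, operations propagate the union of their inputs' labels, and sinks enforce a policy. All labels and names are invented for illustration:

```python
class Labeled:
    """A value tagged with the set of principals whose data influenced it."""

    def __init__(self, value, labels):
        self.value = value
        self.labels = frozenset(labels)

    def __add__(self, other):
        # The result is at least as restricted as either input.
        return Labeled(self.value + other.value, self.labels | other.labels)

def send_to(sink_allowed_labels, item):
    # A sink may only receive data whose labels it is cleared for.
    if not item.labels <= sink_allowed_labels:
        raise PermissionError(f"flow {set(item.labels)} -> sink not allowed")
    return f"sent {item.value}"

balance = Labeled(1200, {"customer:42"})
fee = Labeled(-15, {"bank:fees"})
total = balance + fee  # labels: {"customer:42", "bank:fees"}

print(send_to({"customer:42", "bank:fees", "audit"}, total))  # allowed
try:
    send_to({"marketing"}, total)  # leak attempt: rejected
except PermissionError as e:
    print("blocked:", e)
```

A language like SIF pushes these checks to compile time, so the leak above would never build in the first place; the runtime version here is just easier to show in a comment box.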
People still need to get work done in terms of Internet research, report writing, and so on. OSS apps for these will be ported to the platform over time. Meanwhile, the client nodes will support physical virtualization whereby those nodes can run on a PC with mediated information sharing and a built-in KVM. Users simply press a button to be in a regular desktop. Documents and such will be done in easy-to-analyze formats that are checked by a guard upon transfer. Those files are also labeled. Anything that goes into the trusted machine will be shown in text form for visual confirmation by the operator, plus automatically sanity-checked & logged for any other auditing. The overall process will be like switching tabs + drag n drop, to encourage users to work with the security features instead of against them.
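A hypothetical version of that transfer guard: incoming files must be a simple, easy-to-analyze format (plain printable text in this sketch), get labeled, and every decision is logged for auditing:

```python
audit_log = []  # every guard decision is recorded, allow or deny

def guard_transfer(filename, data, label, max_bytes=64_000):
    """Admit a file into the trusted side only if it passes sanity checks."""
    ok = (
        len(data) <= max_bytes
        and all(ch.isprintable() or ch in "\n\t" for ch in data)
    )
    audit_log.append({"file": filename, "label": label, "allowed": ok})
    if not ok:
        raise ValueError(f"{filename}: rejected by guard")
    # Show the content in text form for visual confirmation by the operator.
    print(f"[{label}] {filename}:\n{data}")
    return {"file": filename, "label": label, "data": data}

guard_transfer("report.txt", "Q3 summary: all ledgers balanced.", "internal")
try:
    guard_transfer("blob.bin", "header\x00\x90\x90", "internal")  # binary junk
except ValueError as e:
    print("guard:", e)
```

Restricting the format to something this dumb is the point: there's no parser complexity for an attacker to hide in, which is what makes the guard checkable.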
The corporation itself will be a non-profit. The charter will put a cap on how much profit it can take, with a lean approach to administrative expenses, limits on executive compensation, and limits on the management-to-staff ratio. Incoming revenue is to be put into further QA/pentesting of the platform, development of it, support to customers, consulting for integration/extensions, datacenters for availability, and so on. The nonprofit will be established in a jurisdiction with strong laws favorable to honest banking plus minimal corruption. Its operations will be audited by third parties who also have their own dedicated hardware & cages. These factors will collectively eliminate or reduce the risks of VC-backed sellouts, management cooking the books, top-heavy organizations, and stagnation from lock-in.
OK. So, let's summarize. The hardware itself will be simple but highly reliable. The software is done in languages immune to most coding errors, with high-level properties precisely specified, checked, and pentested. The two integrate well to eliminate abstraction-gap attacks. The initial backend is software that has over a decade of uptime, with modern stuff coming online if possible. The non-core apps on client and server encode sensible use into information-flow policies that are checked in several places & efficiently. All apps and the network are black-box to attackers, with tons of defense in depth. Insider risk is reduced as insiders put their individual names & reputations on each action, with mutually-suspicious auditing they can't remotely sabotage due to data diodes. All of this tech already exists in either prototype or production form, with suitable substitutes for any prototypes that turn out infeasible to use. It's also legally set up to be more trustworthy in terms of what people will do & long-term benefit. Initial development costs would be huge, but the first year without mainframes and SWIFT will probably pay it off, especially spread out among the numerous banks investing.