That's my question as well - and an important one. Using Galera cluster with MySQL imposes several performance and usage constraints an end user needs to be aware of.
And if it's not Galera, how did Amazon work around the constraints of multiple writers in an ACID database, and what constraints does their approach impose?
It definitely looks like Galera (which is what powers both the MySQL and MariaDB cluster implementations; it's a synchronous replication layer built around InnoDB), but it's hard to say without more information. They mention a quorum write with automatic recovery across three nodes, but don't mention the method used - two-phase commit, checking commits against pending transactions, etc.
It's a very complex thing to implement, and unless they have made leaps beyond what Galera has done, it will be fast for some workloads but perform far worse than a standard MySQL instance for others.
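For anyone who hasn't run into it, here's a bare-bones sketch of the two-phase commit approach mentioned above (Python, invented names, no persistence, timeouts or coordinator-failure handling) - purely to illustrate one of the possible methods, not what Aurora or Galera actually does:

    # Toy two-phase commit: illustrative only, not Aurora's or Galera's method.
    class Participant:
        def __init__(self, name):
            self.name = name
            self.staged = None      # transaction staged during the prepare phase

        def prepare(self, txn):
            # Phase 1: durably stage the transaction and vote yes/no.
            self.staged = txn
            return True             # always votes yes in this toy version

        def commit(self):
            # Phase 2: make the staged transaction visible.
            self.staged = None

        def abort(self):
            self.staged = None

    def two_phase_commit(txn, participants):
        # Phase 1: every participant must vote yes, or the whole thing aborts.
        if not all(p.prepare(txn) for p in participants):
            for p in participants:
                p.abort()
            return False
        # Phase 2: everyone voted yes, so tell them all to commit.
        for p in participants:
            p.commit()
        return True

    nodes = [Participant(f"node-{i}") for i in range(3)]
    print(two_phase_commit({"id": 42, "op": "UPDATE ..."}, nodes))  # True

The painful parts (what happens when the coordinator dies between the phases, blocking, and so on) are exactly what's left out here, which is why the details of the method matter so much.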
Of course, I guess it could also be a cluster built upon NDB, but NDB is primarily an in-memory engine, and the lack of memory constraints on the size of the data makes that less likely.
Since it sounds like you have the information on what it is based upon (if only the principles that were used to address distributed ACID consistency), it would be good to get this information disseminated - it's hard to trust that it will "just work" when we have so many examples of distributed ACID not working well.
You can think of Aurora as a single-instance database where the lower quarter is pushed down into a multi-tenant scale-out storage system. Transactions, locking, LSN generation, etc. all happen at the database node. We push log records down to the storage tier, and Aurora storage takes responsibility for generating data blocks from logs.
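As a purely illustrative sketch of that split (Python, with invented names like RedoRecord and StorageNode - none of this is Aurora's actual interface), the database node assigns LSNs and ships only redo records, and the storage node builds a data block on demand by replaying them:

    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class RedoRecord:
        lsn: int        # log sequence number, assigned by the single database node
        page_id: int    # data block the change applies to
        change: dict    # the logical change, e.g. {"balance": 100}

    class DatabaseNode:
        """Handles SQL, transactions, locking and LSN generation."""
        def __init__(self, storage):
            self.storage = storage
            self.next_lsn = 1

        def write(self, page_id, change):
            rec = RedoRecord(lsn=self.next_lsn, page_id=page_id, change=change)
            self.next_lsn += 1
            self.storage.append(rec)    # only the log record goes down the wire

    class StorageNode:
        """Keeps redo logs and materializes data blocks from them on demand."""
        def __init__(self):
            self.redo = defaultdict(list)    # page_id -> ordered redo records

        def append(self, rec):
            self.redo[rec.page_id].append(rec)

        def read_page(self, page_id):
            page = {}
            for rec in self.redo[page_id]:   # replay the chain to build the block
                page.update(rec.change)
            return page

    storage = StorageNode()
    db = DatabaseNode(storage)
    db.write(7, {"name": "alice"})
    db.write(7, {"balance": 100})
    print(storage.read_page(7))    # {'name': 'alice', 'balance': 100}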
So, the ACI components of ACID are all done at the database tier using (largely) traditional techniques. Durability is where we're using distributed systems techniques around quorums, membership management, leases, etc., with the important caveat that we have a head node generating LSNs, providing a monotonic logical clock, and avoiding those headaches.
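A hand-wavy sketch of the durability side (the copy count and quorum size below are made up for illustration, and real membership management, leases and recovery are omitted): because the single head node hands out LSNs, ordering never needs a distributed clock, and a write is acknowledged once enough storage copies have persisted the record:

    import random

    class StorageCopy:
        """One copy of the data; may be slow or unreachable at any moment."""
        def __init__(self, name):
            self.name = name
            self.records = []

        def persist(self, rec):
            if random.random() < 0.2:     # simulate a failed or slow copy
                return False
            self.records.append(rec)
            return True

    def quorum_write(rec, copies, write_quorum):
        """Acknowledge the write once a quorum of copies has persisted it."""
        acks = sum(1 for c in copies if c.persist(rec))
        return acks >= write_quorum

    copies = [StorageCopy(f"copy-{i}") for i in range(6)]
    lsn = 0
    for change in ["a", "b", "c"]:
        lsn += 1                               # LSNs come from the head node alone
        rec = {"lsn": lsn, "change": change}
        print(f"LSN {lsn} durable:", quorum_write(rec, copies, write_quorum=4))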
Our physical read replicas receive redo log records, update cached entries, and have read-only access to the underlying storage tier. The underlying storage is log-structured with nondestructive writes, so we can access data blocks as they existed before the current state at the write master node - that's required if the replica needs to read a slightly older version of a data block for consistency reasons.
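A rough sketch of why nondestructive, log-structured storage makes that "read in the past" possible (again invented names, not the real storage format): every version of a block is retained, keyed by LSN, so a replica that has only applied redo up to some LSN asks for the block as of that point:

    from collections import defaultdict

    class LogStructuredStore:
        """Nondestructive writes: every version of a block is retained."""
        def __init__(self):
            self.versions = defaultdict(list)   # page_id -> [(lsn, page_image), ...]

        def put(self, page_id, lsn, page_image):
            self.versions[page_id].append((lsn, page_image))

        def get(self, page_id, as_of_lsn):
            # Newest version at or below the requested LSN.
            older = [(lsn, img) for lsn, img in self.versions[page_id] if lsn <= as_of_lsn]
            return max(older, key=lambda v: v[0])[1] if older else None

    store = LogStructuredStore()
    store.put(page_id=7, lsn=100, page_image={"balance": 50})
    store.put(page_id=7, lsn=120, page_image={"balance": 75})

    # The write master is at LSN 120; a replica that has only applied redo up
    # to LSN 110 reads the older, still-consistent version of the block.
    print(store.get(7, as_of_lsn=120))   # {'balance': 75}
    print(store.get(7, as_of_lsn=110))   # {'balance': 50}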