> Uh, OK. So, you're happy with single-core boxes then, I take it?
Not at all. Only one core executes write transactions, but that's a small part of any real system. All cores can read simultaneously. All cores can also do all sorts of other work, including preparing transactions to execute, deserializing requests, rendering responses, logging, and anything else your app needs to get up to.
The limit is also one core per transactional domain. If you can split your data up into lumps between which you never need transactions, you can happily run one core on each.
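To make the one-writer-per-domain idea concrete, here's a rough sketch (mine, not Prevayler's API; `Domain` and everything in it are made-up names). Each domain owns its shard of the data and a single writer thread, so two domains that never share a transaction can each keep a core busy:

```python
import queue
import threading

class Domain:
    """Owns one shard of the data model and one single-threaded writer."""
    def __init__(self):
        self.state = {}                 # in-RAM data model for this shard
        self.commands = queue.Queue()   # write commands, executed in order
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # The single writer: commands execute one at a time, in order,
        # so no command ever sees a half-applied state.
        while True:
            command, done = self.commands.get()
            command(self.state)
            done.set()

    def execute(self, command):
        done = threading.Event()
        self.commands.put((command, done))
        done.wait()                     # block the caller until applied

# Two transactional domains, one writer (and potentially one core) each.
users, orders = Domain(), Domain()
users.execute(lambda s: s.update(alice=1))
orders.execute(lambda s: s.update(order42="alice"))
```

Since the two queues never interact, the two writers never contend; the price is that you can't run a transaction spanning both domains.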
> Also, I'd like to point out that just because you aren't explicitly doing I/O doesn't mean that you aren't doing I/O.
Actually, it does explicitly do I/O. You do it just before every command executes.
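For readers unfamiliar with the pattern: the I/O is journaling the command itself. Here's a minimal sketch of that write path (my own illustration, not Prevayler's API) — the serialized command is flushed to an append-only log before it touches the in-RAM state, so replaying the log after a crash rebuilds the same state:

```python
import json
import os
import tempfile

class Journal:
    """Append-only command log: the one piece of I/O on the write path."""
    def __init__(self, path):
        self.file = open(path, "a")

    def execute(self, state, command, args):
        # The I/O happens here, just before the command runs: the command
        # is durable on disk before it mutates the in-RAM data model.
        self.file.write(json.dumps([command.__name__, args]) + "\n")
        self.file.flush()
        os.fsync(self.file.fileno())
        return command(state, args)

def deposit(state, args):
    state[args["account"]] = state.get(args["account"], 0) + args["amount"]
    return state[args["account"]]

state = {}
journal = Journal(os.path.join(tempfile.mkdtemp(), "commands.log"))
journal.execute(state, deposit, {"account": "alice", "amount": 100})
```

Note that the log records the command and its arguments, not the resulting state — which is why commands must be deterministic.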
> The OS might have paged out some stale data.
I guess that's possible, which would indeed cause a momentary pause, but this approach is typically used with dedicated servers and plenty of RAM, so it's never been a problem in practice for me.
> I just want to clarify: so when you encounter a problem, you do some rollback, which automatically moves the state to the last snapshot and rolls forward to the previous transaction, right?
You mean a bug in our code that causes a problem? Depends on the system, I suppose. Prevayler had an automatic rollback: it kept two copies of the data model in RAM, and if a transaction blew up it would throw out the possibly tainted one. But there are a number of ways to solve this, so I don't advocate anything in particular. Other than heavy unit testing, so that things don't blow up much.
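One way to realize the two-copies idea — my sketch of the mechanism, not necessarily Prevayler's exact mechanics — is to try each deterministic command on the spare copy first, and only apply it to the real model once it has succeeded:

```python
import copy

class PrevalentSystem:
    """Keeps a spare copy of the data model; a command that throws
    only taints the spare, which is discarded and rebuilt."""
    def __init__(self, state):
        self.state = state
        self.spare = copy.deepcopy(state)

    def execute(self, command):
        try:
            command(self.spare)                     # try it on the spare
        except Exception:
            self.spare = copy.deepcopy(self.state)  # throw out the tainted copy
            raise
        command(self.state)                         # known good: apply for real
        return self.state

system = PrevalentSystem({"balance": 100})

def withdraw(state):
    if state["balance"] < 150:
        raise ValueError("insufficient funds")
    state["balance"] -= 150

try:
    system.execute(withdraw)   # blows up on the spare copy only
except ValueError:
    pass
```

The real model never saw the failed command; the cost is double the RAM and double the execution of every successful command.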
> So a single writer would block all readers, right?
Correct. For the fraction of a millisecond the transaction is executing, anyhow. Since transactions only deal with data hot in RAM, transactions are very fast.
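The locking discipline here is an ordinary readers-writer lock: any number of concurrent readers, one writer that briefly excludes everyone. A minimal sketch (Python's stdlib has no RW lock, so this builds a readers-preference one from two plain locks):

```python
import threading

class ReadWriteLock:
    """Many concurrent readers; a writer excludes readers and writers."""
    def __init__(self):
        self._readers = 0
        self._lock = threading.Lock()    # guards the reader count
        self._writer = threading.Lock()  # held by the writer, or by readers as a group

    def acquire_read(self):
        with self._lock:
            self._readers += 1
            if self._readers == 1:       # first reader locks writers out
                self._writer.acquire()

    def release_read(self):
        with self._lock:
            self._readers -= 1
            if self._readers == 0:       # last reader lets writers in
                self._writer.release()

    def acquire_write(self):
        self._writer.acquire()

    def release_write(self):
        self._writer.release()

lock = ReadWriteLock()
state = {"count": 0}

def write_transaction():
    lock.acquire_write()
    try:
        state["count"] += 1   # sub-millisecond: the data is hot in RAM
    finally:
        lock.release_write()

def read():
    lock.acquire_read()
    try:
        return state["count"]
    finally:
        lock.release_read()

write_transaction()
```

Because write transactions hold the lock only for the in-RAM mutation itself (the journaling I/O can happen before acquiring it), readers are blocked only for that fraction of a millisecond.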
> No, I mean like "I already wrote some data, but now a constraint has been violated so I need to undo it".
That shouldn't happen, and I've used two approaches to make sure it doesn't. One is to do all your checking before you change anything. The other is to make in-command reversion easy, which is basically the same way you'd make commands undoable.
Basically, instead of solving the problem with very complicated technology (arbitrary rollback), you solve it with some modest changes in coding style. Since you never have to worry about threading issues, I've found it pretty easy.
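The check-everything-first style looks like this in practice (an illustrative sketch, not anyone's production code): the command is split into a validation phase that does no writes and a mutation phase that cannot fail, so a rejected command leaves the state exactly as it found it.

```python
def transfer(accounts, src, dst, amount):
    """Check everything first; mutate only after all checks pass."""
    # --- validation phase: no writes yet, safe to raise ---
    if src not in accounts or dst not in accounts:
        raise KeyError("unknown account")
    if accounts[src] < amount:
        raise ValueError("insufficient funds")
    # --- mutation phase: nothing here can fail ---
    accounts[src] -= amount
    accounts[dst] += amount

accounts = {"alice": 100, "bob": 0}
transfer(accounts, "alice", "bob", 30)

try:
    transfer(accounts, "alice", "bob", 1000)  # rejected in the check phase
except ValueError:
    pass  # state is exactly as it was before the failed command
```

No rollback machinery needed: a constraint violation simply never gets the chance to write.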
> Correct. For the fraction of a millisecond the transaction is executing, anyhow. Since transactions only deal with data hot in RAM, transactions are very fast.
> Transactions don't just read and write. They sometimes compute things, like joins, which can take several milliseconds. These computations often must run within the transaction and would thus need to acquire the lock for several milliseconds.
Joins haven't been a problem for me, mainly because this approach doesn't constrain you to a tables-and-joins model of the world. With Prevayler, for example, you treat things as a big object graph, so there are no splits to join back up.
Of course, it could be that some problem is just computationally intense, but I can think of a number of approaches to lessen the impact of that in a NoDB system.
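To illustrate the object-graph point above (a sketch of the style, not Prevayler's API): when related objects hold direct references to each other, the "join" is just pointer traversal, with no key matching to compute inside the lock.

```python
from dataclasses import dataclass, field

@dataclass
class Customer:
    name: str
    orders: list = field(default_factory=list)

@dataclass
class Order:
    item: str
    customer: "Customer"   # direct reference: nothing to join back up

alice = Customer("Alice")
order = Order("book", alice)
alice.orders.append(order)

# Each "join" is a constant-time hop along a reference:
who = order.customer.name
items = [o.item for o in alice.orders]
```

The relational split-and-rejoin work simply never happens, because the data was never split apart in the first place.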