Right. But as soon as you have this ability, don't you quickly turn off insert/update? Then you throttle/contain reads... So in the end, do we benefit from having a client-side database connection? I'm struggling to put together a real-world use case for one that isn't irresponsible.
On the "read" side of things, you control what data is exposed to each subscription, and you'll be able to base access on authentication. The client doesn't get a direct database connection, but rather a live-updating subset (or arbitrary function, in the general case) of the database.
The rationale is that clients typically end up doing sorting and filtering on subsets of the database anyway, as they get more sophisticated and start caching data. For example, Gmail starts to need a notion of an email message on the client, to avoid going back to the server for the same message. When I worked on Google Wave I saw firsthand the complicated plumbing you need in order to do this in an ad hoc way. (They used GWT to share the model objects, but synchronized the data manually.)
You can also separate this facility from your database completely, and use it as a way of sending "data feeds" to clients; then use methods as RPCs.
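Roughly along these lines; the feed and method names are made up, but the shape is hand-maintained published records on one side and validated RPC-style methods for writes on the other:

```typescript
import { Meteor } from "meteor/meteor";
import { check } from "meteor/check";

// A "data feed" that isn't backed by the database at all: publish a
// record by hand and keep it updated from the server.
Meteor.publish("serverTime", function () {
  this.added("feeds", "serverTime", { now: new Date() });
  const handle = Meteor.setInterval(() => {
    this.changed("feeds", "serverTime", { now: new Date() });
  }, 1000);
  this.ready();
  this.onStop(() => Meteor.clearInterval(handle));
});

// Writes go through a method (an RPC) that validates its arguments and
// enforces whatever policy you like, entirely on the server.
Meteor.methods({
  postMessage(body: string) {
    check(body, String);
    if (!this.userId) {
      throw new Meteor.Error("not-authorized");
    }
    // ...insert into the database, call another service, etc.
  },
});
```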
This still doesn't sound like a straightforward answer. I think there's a justifiable case for trying to minimize the role of the server, but controlling the operating environment of the data itself is the essence of modern web security.
There would need to be some sort of public-key system for authentication, but in the end your data is still compromised if the client gets hacked. There would have to be a database control layer with the final say, and that's called a server.
Not sure if Mongo has this feature, but in many classic SQL databases you can create per-user views that act like tables but are actually "the user's view into that table".
This would mean creating database user accounts on the fly for people, but it would resolve this problem (as long as the views are secure).
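A rough sketch of how that might look with Postgres driven from Node (names invented; real code would have to escape or whitelist the identifiers rather than interpolate them):

```typescript
import { Client } from "pg";

// Run with a privileged connection; each application signup gets a real
// database role, and a shared view filtered by current_user acts as
// that user's private window into the table.
const admin = new Client({ connectionString: process.env.ADMIN_DATABASE_URL });

async function provisionUser(username: string, password: string) {
  await admin.connect(); // one-shot sketch; a real app would pool connections
  // Illustration only: never interpolate untrusted input into SQL.
  await admin.query(`CREATE ROLE ${username} LOGIN PASSWORD '${password}'`);
  // Defined once; current_user resolves per connection, so the same
  // view shows each account only its own rows.
  await admin.query(`
    CREATE OR REPLACE VIEW my_messages AS
      SELECT id, body, sent_at FROM messages WHERE owner = current_user
  `);
  await admin.query(`GRANT SELECT ON my_messages TO ${username}`);
  await admin.end();
}
```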
It seems like standard operating procedure for web apps has been to immediately throw out the user account system of whatever data store is being used and use one account with full CRUD access (or worse), with a (mediocre to disastrous) home-grown permissions system shoehorned into the controller layer.
That may be because of programmer laziness or because of some sort of inherent impedance mismatch between web-scale apps and the user account system of most DBMSes, but it seems like a bad way of doing things.
I think it's high time we had a web-scale data store that actually had decent per-user access control baked into it right at the model level, to the point where a sane person could trust it to live on the open internet. It seems both possible and desirable, but maybe I'm missing something.
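For a concrete picture of "access control at the model level", here's a sketch in the style of Meteor's collection allow rules; the ownership policy itself is invented for illustration:

```typescript
import { Mongo } from "meteor/mongo";

interface Post {
  _id?: string;
  owner: string;
  body: string;
}

const Posts = new Mongo.Collection<Post>("posts");

// Rules declared right next to the data and enforced by the server,
// no matter which client issues the write.
Posts.allow({
  insert(userId, doc) {
    // Only logged-in users may insert, and only documents they own.
    return !!userId && doc.owner === userId;
  },
  update(userId, doc, fieldNames) {
    // Owners may edit their own posts, but not reassign ownership.
    return doc.owner === userId && !fieldNames.includes("owner");
  },
  remove(userId, doc) {
    return doc.owner === userId;
  },
});
```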
Show me a 'web-scale' system that has a single data store to secure in the first place. Something like much-better-Oracle-row-based-auth or whatever isn't gonna cut it... because who watches over Redis, or memcache, or the filesystem?
The filesystem is in pretty much the same situation as databases. Computers have supported multiple user accounts for decades, but every user of a web service typically runs as the same user(s) on the server.
I realize that there are huge scaling/throttling/DoS issues with, say, creating a new UNIX user every time someone signs up for your online meme generator, but that's mostly because UNIX wasn't really designed for millions of users on one box.
On the other hand, as an unprivileged user on a Linux box, you can't really do much damage beyond hogging resources and possibly spying on other people's poorly-secured files. If there's a bug and you do find a way to trash the system or escalate privilege, it's front-page news.
The problem right now is that every two-bit web app implements its own ad hoc permissions system, often at the wrong layer of its stack. If it could be commoditized into a widely used and widely audited system, I think it would do a lot to improve security on the Internet.
(To open up a whole new unsupported argument, on some level the fact that one needs a key-value store, a filesystem, and a hand-optimized in-memory cache to build a reasonably fast web service smells like we're still making humans do a lot of things that a machine could do a much better job of.)