I don't use it, but have been keeping an eye on it.
At launch, they limited the number of affected tuples to 10,000, including tuples in secondary indexes. They recently changed this limit to:
> A transaction cannot modify more than 3,000 rows. The number of secondary indexes does not influence this number. This limit applies to all DML statements (INSERT, UPDATE, DELETE).
There are a lot of other (IMO prohibitive) restrictions listed in their docs.
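To make the 3,000-row cap concrete: since the limit is per transaction, any large UPDATE/DELETE has to be chunked client-side. Here's a rough sketch of what that looks like, not an official pattern. The table, columns, connection string, and the assumption that the `IN (SELECT ... LIMIT ...)` form works on DSQL are all mine; DSQL's IAM-based auth is also glossed over.

```python
# Hypothetical sketch: chunking a large DELETE so each transaction
# modifies at most 3,000 rows (the documented per-transaction limit).
# Table/column names and the connection string are made up.
import psycopg2

BATCH = 3000  # stay at/under the per-transaction row limit

conn = psycopg2.connect("postgresql://user:pass@my-dsql-endpoint/postgres")
conn.autocommit = False

while True:
    with conn.cursor() as cur:
        # Delete one bounded batch per transaction.
        cur.execute(
            """
            DELETE FROM events
            WHERE id IN (
                SELECT id FROM events
                WHERE created_at < %s
                LIMIT %s
            )
            """,
            ("2024-01-01", BATCH),
        )
        deleted = cur.rowcount
    conn.commit()          # each batch commits as its own transaction
    if deleted < BATCH:    # last (possibly partial) batch done
        break

conn.close()
```

It works, but you're paying a round trip and a commit per 3,000 rows, which is exactly the kind of friction the docs' restrictions add up to.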
Which features would you like to see the team build first? Which limits would you like to see lifted first?
Most of the limitations you can see in the documentation are things we haven't gotten to building yet, and it's super helpful to know what folks need so we can prioritize the backlog.
Indexes! Vector, trigram, and maybe geospatial. (Some may be in by now; I haven't followed the service as closely as others.)
Note: it doesn't have to be pg_vector, pg_trgm, or PostGIS. Just the index component, even as a clean-room implementation, would make this way more useful.
My understanding is that the way Aurora DSQL distributes data widely makes bulk writes extremely slow/expensive. So no COPY, no INSERT of more than ~3k rows, no TRUNCATE, etc.
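Which means bulk loads have to be emulated client-side. A minimal sketch of the workaround as I understand it (my assumption, not an official recipe): batch rows into inserts of at most ~3,000 rows and commit each batch as its own transaction. The table, schema, data source, and connection details are hypothetical.

```python
# Hypothetical workaround for the missing COPY / large-INSERT path:
# load rows in batches small enough to respect the per-transaction
# row limit, committing each batch separately.
import psycopg2
from psycopg2.extras import execute_values

BATCH = 3000  # keep each transaction at/under the documented limit

def bulk_load(conn, rows):
    """rows: iterable of (id, payload) tuples -- schema is made up."""
    batch = []
    with conn.cursor() as cur:
        for row in rows:
            batch.append(row)
            if len(batch) == BATCH:
                execute_values(
                    cur,
                    "INSERT INTO staging_events (id, payload) VALUES %s",
                    batch,
                )
                conn.commit()  # one transaction per batch
                batch = []
        if batch:  # trailing partial batch
            execute_values(
                cur,
                "INSERT INTO staging_events (id, payload) VALUES %s",
                batch,
            )
            conn.commit()

conn = psycopg2.connect("postgresql://user:pass@my-dsql-endpoint/postgres")
bulk_load(conn, [(i, f"row-{i}") for i in range(10_000)])
conn.close()
```

Fine for modest volumes, but nowhere near COPY throughput, which is presumably why people call the restrictions prohibitive for data-heavy workloads.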
Who would use Preview products in production? I'm building out some software that would fit perfectly into the constraints set for DSQL, but I realistically can't commit to something with no pricing / guarantees.
Which ones? It seems eminently usable from the outside now, at least for greenfield work. The subset of Postgres it supports is most of good/core/essential Postgres. (But I haven't tried it.)