Your arguments make sense for concurrent queries (though high-latency storage like S3 is becoming increasingly popular, especially for analytic loads).
But transactions aren't processing queries all the time. Often the application will do processing between sending queries to the database. During that time a transaction is open, but doesn't do any work on the database server.
That is bad application architecture. Database work should be concentrated in minimal transactional units, and the connection should be released between them. All data should be prepared before a unit starts, and any additional processing should happen after the transaction has ended. Long transactions cause locks, even deadlocks, and should generally be avoided. That's my experience, at least. Sometimes a business transaction has to be split into several database transactions.
I'm talking about relatively short-running transactions (one call to the HTTP API) which load data, process it in the application, and commit the result to the database. Even for those, a significant portion of the time can be spent in the application or on communication latency between the db and the application. Splitting those into two shorter transactions might sometimes improve performance a bit, but usually isn't worth the complexity (especially since it means the database no longer ensures that the business transaction is serializable as a whole).
For long-running operations, I usually create a long-running read-only transaction/query with snapshot consistency (so the db doesn't need to track what it reads), combined with one or more short writing transactions.
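Roughly this shape, if it helps (Python/psycopg2 sketch assuming PostgreSQL; the connection strings, table and column names are all made up):

    import psycopg2

    def expensive_processing(payload):
        ...  # application-side work, placeholder

    # Long-running read-only transaction with snapshot isolation: the db
    # hands out a consistent snapshot and doesn't have to track reads.
    read_conn = psycopg2.connect("dbname=app")
    read_conn.set_session(isolation_level="REPEATABLE READ", readonly=True)

    # Writes go through short transactions on a separate connection.
    write_conn = psycopg2.connect("dbname=app")

    with read_conn.cursor() as cur:
        cur.execute("SELECT id, payload FROM big_table")
        for row_id, payload in cur:
            result = expensive_processing(payload)
            # "with write_conn" commits this short transaction on exit.
            with write_conn, write_conn.cursor() as wcur:
                wcur.execute("UPDATE big_table SET result = %s WHERE id = %s",
                             (result, row_id))

    read_conn.rollback()   # end the read-only snapshot
    read_conn.close()
    write_conn.close()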
Well, to be fair, Hacker News posts can get flagged by the community itself too, and people then later discuss how or why a particular post got flagged, with the conversation drifting toward HN's moderation/flagging issues.
(That's not to say the fault lies with HN's moderation community, which is great. The issue, imo, is that if enough users flag a post it gets flagged, the friction of getting it restored is high, and a flagged post typically just ends up dying.)
It doesn't even link to an ad; it links to a weird parody attempt of the ad on the same site as the article, which makes little sense to people unfamiliar with the original ad it parodies.
> If the summary has the info, why risk going to a possibly ad-filled site?
I can usually tell if the information on a website was written by somebody who knows what they're talking about. (And ads are blocked)
The AI summary, on the other hand, looks exactly the same to me regardless of whether it's correct. So it's only useful if I can verify its correctness with minimal effort.
What's reasonable is: "Set reserved fields to 0 when writing and ignore them when reading." (I heard that was the original example). Or "Ignore unknown JSON keys" as a modern equivalent.
What's harmful is: accept an ill-defined superset of the valid syntax and interpret it in undocumented ways.
Good modern protocols explicitly define extension points, so "ignoring unknown JSON keys" is in-spec rather than something implementers are merely assumed to do.
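For example, something like this (Python sketch; the message format and key names are made up):

    import json

    # Hypothetical format: the spec defines "id" and "name", says readers
    # MUST ignore unknown top-level keys, and reserves "ext" as the
    # explicit extension point for future additions.
    KNOWN_KEYS = {"id", "name", "ext"}

    def parse_message(raw: str) -> dict:
        data = json.loads(raw)
        # Unknown keys are dropped by design (in-spec), not by accident.
        return {k: v for k, v in data.items() if k in KNOWN_KEYS}

    print(parse_message('{"id": 1, "name": "a", "future_field": true}'))
    # {'id': 1, 'name': 'a'}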
Funny, I never read the original example. And in my book it is harmful, and even worse in JSON, since it's the best way to have a typo somewhere go unnoticed for a long time.
The original example is very common in ISAs, at least. Both ARMv8 and RISC-V (likely others too, but I don't have as much experience with them) have the idea of requiring software to treat reserved bits as if they were zero for both reading and writing. ARMv8 calls this RES0, and a hardware implementation is constrained to either ignoring writes to the field (e.g. reads are hardwired to zero) or returning the last successful write.
This is useful because it allows the ISA to remain compatible with code that is unaware of future extensions defining new functionality for these bits, so long as the zero value means "keep the old behavior". For example, a system register may gain an EnableNewFeature bit, and older software will just end up writing zero to that field (which preserves the old functionality). This avoids needing to define a new system register for every new feature.
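A toy way to picture it (Python sketch; the register layout and bit names are made up):

    # Hypothetical control register: v1 of the ISA defines bits 0-1,
    # everything else is RES0 (software writes zero, treats reads as zero).
    ENABLE_CACHE = 1 << 0   # defined in v1
    ENABLE_MMU   = 1 << 1   # defined in v1
    # bit 2 is RES0 in v1; a later revision defines it as EnableNewFeature

    def v1_software_register_value(enable_cache: bool, enable_mmu: bool) -> int:
        value = 0                    # all RES0 bits stay zero
        if enable_cache:
            value |= ENABLE_CACHE
        if enable_mmu:
            value |= ENABLE_MMU
        return value                 # bit 2 is 0 -> EnableNewFeature stays off

    # On newer hardware, the value produced by v1 software still has the new
    # bit cleared, so the old behavior is preserved without any code change.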
I disagree. I find accepting extra random bytes in places to be just as harmful. I prefer APIs that push back and tell me what I did wrong when I mess up.
* Many of them are part of families of crates maintained by the same people (e.g. rust-crypto, windows, rand or regex).
* Most of them are popular crates I'm familiar with.
* Several are only needed to support old compiler versions and can be removed once the MSRV is raised.
So it's not as bad as it looks at first glance.