CodesInChaos's comments | Hacker News

That's 238 dependencies (counting multiple versions of the same crate).

* Many of them are part of families of crates maintained by the same people (e.g. rust-crypto, windows, rand or regex).

* Most of them are popular crates I'm familiar with.

* Several are only needed to support old compiler versions and can be removed once the MSRV is raised.

So it's not as bad as it looks at first glance.


Your arguments make sense for concurrent queries (though high-latency storage like S3 is becoming increasingly popular, especially for analytic loads).

But transactions aren't processing queries all the time. Often the application will do processing between sending queries to the database. During that time a transaction is open, but doesn't do any work on the database server.


It is bad application architecture. Database work should be concentrated in minimal transactional units, and the connection should be released between these units. All data should be prepared before a unit starts, and additional processing should take place after the transaction has ended. Using long transactions will cause locks, even deadlocks, and should generally be avoided. That's my experience, at least. Sometimes a business transaction should be split into several database transactions.

I'm talking about relatively short running transactions (one call to the HTTP API) which load data, process it in the application, and commit the result to the database. Even for those a significant portion of the time can be spent in the application, or communication latency between db and application. Splitting those into two shorter transactions might sometimes improve performance a bit, but usually isn't worth the complexity (especially since that means the database doesn't ensure that the business transaction is serializable as a whole).

For long running operations, I usually create a long running read-only transaction/query with snapshot consistency (so the db doesn't need to track what it reads), combined with one or more short writing transactions.


Your database usage should not involve application-focused locks; MVCC will restart your transaction if needed to resolve concurrency conflicts.


> Thank you for submitting to /r/memes. Unfortunately, your submission has been removed for the following reason(s):

> Rule 1 - ALL POSTS MUST BE MEMES AND FOLLOW A GENERAL MEME FORMAT

> All posts must be memes following typical setup/design: an image/gif/video with some sort of caption; mods have final say on what is (not) a meme

Reddit mods, man.


> 4,613 points

> 96% upvoted

> Removed by a single moderator for subjective reasons while the sub's front page is full of crap

Ah, the quintessential Reddit experience.


Well, to be fair, Hacker News posts can get flagged too, by the community itself; people then later discuss how or why a particular post got flagged, and the discussion moves on to moderation/flagging issues on HN.

(This isn't to say the fault lies with HN's moderators, who are great. The issue, in my opinion, is that if many users flag a post it gets flagged, the friction of getting it restored is high, and a flagged post typically ends up dying.)


There’s nothing subjective in the removal reasons.

Born too late to be a Stasi bureaucrat, born right on time to be a Reddit mod.

I was also wondering how the image got 16k+ views as of now (the stat was on Imgur).

I was wondering how many HN users clicked on the image (not knowing it was uploaded to Reddit too).

But now I seriously wonder how many of those 16k views are from the Hacker News community and how many are from Reddit.


For however brief a moment. It's gone now.

Reddit shows cached versions of posts on the front page, so it might actually remain there for a couple of hours after the subreddit mods deleted it.

Could also be a thermal throttling problem caused by dust or a stuck fan. My old work laptop suffered from that, and recovered after I cleaned it.

It doesn't even link to an ad, it links to a weird parody attempt of the ad on the same site as the article. Which makes little sense for people unfamiliar with the original ad it parodies.


> If the summary has the info, why risk going to a possibly ad-filled site?

I can usually tell if the information on a website was written by somebody who knows what they're talking about. (And ads are blocked)

The AI summary, on the other hand, looks exactly the same to me regardless of whether it's correct. So it's only useful if I can verify its correctness with minimal effort.


That depends on how Postel's law is interpreted.

What's reasonable is: "Set reserved fields to 0 when writing and ignore them when reading." (I heard that was the original example). Or "Ignore unknown JSON keys" as a modern equivalent.

What's harmful is: accept an ill-defined superset of the valid syntax and interpret it in undocumented ways.


Good modern protocols will explicitly define extension points, so 'ignoring unknown JSON keys' is in-spec rather than something an implementer is assumed to do.


Funny, I never read the original example. And in my book it is harmful, and even worse in JSON, since it's the best way to have a typo somewhere go unnoticed for a long time.


The original example is very common in ISAs at least. Both ARMv8 and RISC-V (likely others too, but I don't have as much experience with them) have the idea of requiring software to treat reserved bits as if they were zero for both reading and writing. ARMv8 calls this RES0, and a hardware implementation is constrained to either being write-ignore for the field (e.g. reads are hardwired to zero) or returning the last successful write.

This is useful as it allows the ISA to remain compatible with code which is unaware of future extensions which define new functionality for these bits so long as the zero value means "keep the old behavior". For example, a system register may have an EnableNewFeature bit, and older software will end up just writing zero to that field (which preserves the old functionality). This avoids needing to define a new system register for every new feature.
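The convention described above can be sketched as a read-modify-write where software touches only the bits it knows about and leaves reserved bits at zero, so a future extension assigning meaning to those bits gets its default (old) behavior from unaware software. The bit positions and names here are invented for illustration:

```python
# Hypothetical control register layout: only bit 0 is defined today.
ENABLE_CACHE = 1 << 0
KNOWN_MASK = 0b0000_0111  # bits this software understands; the rest are RES0

def update_register(current: int, enable_cache: bool) -> int:
    # Treat reserved bits as zero: mask them out before writing back.
    value = current & KNOWN_MASK
    if enable_cache:
        value |= ENABLE_CACHE
    else:
        value &= ~ENABLE_CACHE
    return value

# Even if hardware later defines bit 5 as EnableNewFeature, this old
# software writes zero there, which keeps the pre-extension behavior.
print(bin(update_register(0b0010_0001, enable_cache=True)))  # → 0b1
```

(ARMv8 also allows preserving a read-back reserved value instead of forcing it to zero; the zero-writing variant shown here is the simplest safe choice for software.)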


I disagree. I find accepting extra random bytes in places to be just as harmful. I prefer APIs that push back and tell me what I did wrong when I mess up.


In my experience most games that don't use an anti-hack just work (probably around 95%). Occasionally a bit of tweaking is needed.

* Probably the biggest pain point for me is game launchers that use Edge WebView2. But many games allow you to bypass the launcher.

* For DOS games, run native DOSBox rather than the Windows version inside Wine/Proton.

* Install the games on a native Linux partition. Having the Wine prefix on NTFS will cause weird issues.

* Use a tool like Heroic Launcher or Lutris for non-Steam games, especially for GOG games.

Applications on the other hand have problems far more often. Some don't work at all, others have bugs and limitations.


I assume Esync and Fsync will not live much longer, now that NTSync is supported by both Wine 11 and kernel 6.14.

