There's already a popular OS that disables overcommit by default: Windows. The problem is that disallowing overcommit (especially with software that doesn't expect it) can mean you never get anywhere close to using all the RAM that's installed on your system.
Windows also splits memory allocations into reserving virtual address space and committing real memory. So you can reserve a large virtual region when you need one and commit real memory piecemeal within that space.
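For contrast, here's a minimal sketch of what lazy commit looks like on a Linux box with overcommit enabled (the default): a large anonymous mapping succeeds immediately, and physical pages are only committed as they're touched. The 1 GiB size is arbitrary.

```python
import mmap

# Reserve a 1 GiB anonymous mapping. With overcommit on (the Linux
# default), this succeeds immediately: the kernel hands out address
# space, not physical RAM.
size = 1 << 30
buf = mmap.mmap(-1, size)

# Physical pages are committed lazily, on the first write to each page.
buf[0] = 1            # commits just the first page
buf[size - 1] = 1     # ... and the last page
first, last = buf[0], buf[size - 1]
buf.close()
```

With overcommit disabled, an allocation this size would have to be backed by RAM plus swap up front, which is exactly the Windows reserve/commit split done for you implicitly.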
It is surfaced to apps, but the "just detect that connectivity sucks" heuristic turns out not to be all that easy to implement. There doesn't seem to be a better heuristic than "try, and let the app decide whether it has waited too long".
Copying the file likely forces the creation of a new one with no or lower filesystem fragmentation (e.g. a 1MB file probably gets assigned to 1MB of consecutive FS blocks). Then those FS blocks likely get assigned to flash dies in a way that makes sense (i.e. the FS blocks are evenly distributed across flash dies). This can improve I/O perf by some constant factor. See https://www.usenix.org/system/files/fast24-jun.pdf for instance for more explanation.
I would say that the much more common degradation is caused by write amplification due to a nearly full flash drive (or a flash drive that appears nearly full to the FTL because the system doesn't implement some TRIM-like mechanism to tell the FTL about free blocks). This generally leads to systemwide slowdown though rather than slowdown accessing just one particular file.
This was especially prevalent on some older Android devices that didn't bother to implement TRIM or an equivalent feature (which affected even Google's own devices, like the Nexus 7).
Not really something anyone can change at this point, given that the entire web API presumes an execution model where everything logically happens on the main thread (and code can and does expect to observe those state changes synchronously).
I agree: rules like these aren't the right way to teach people to reason about this. All of the perf properties described should fall out of understanding that both tables and indices in SQLite are B-trees. B-trees have the following properties:
- can look up a key or key prefix in O(log N) time ("seek a cursor" in DB parlance, or maybe "find/find prefix and return an iterator" for regular programmers)
- can iterate to next row in amortized O(1) time ("advance a cursor" in DB parlance, or maybe "advance an iterator" for regular programmers). Note that unordered data structures like hash maps don't have this property. So the mental model has to start with thinking that tables/indices are ordered data structures or you're already lost.
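A rough sketch of those two operations using Python's bisect over a sorted list (standing in for the B-tree; the data is made up):

```python
import bisect

# An "index": (key, rowid) pairs kept in sorted order, like a B-tree.
index = sorted([("carol", 7), ("alice", 3), ("bob", 5), ("bob", 9)])

# Seek: O(log N) binary search to the first entry >= the key prefix.
pos = bisect.bisect_left(index, ("bob",))

# Advance: O(1) per step to walk all entries matching the prefix, in order.
matches = []
while pos < len(index) and index[pos][0] == "bob":
    matches.append(index[pos][1])
    pos += 1

print(matches)   # → [5, 9], the rowids for "bob"
```

A hash map could do the seek in O(1) but couldn't do the ordered advance, which is why the mental model has to start from ordered structures.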
A table is a B+tree where the key is the rowid and the value is the row (well, except for WITHOUT ROWID tables).
An index is a B-tree where the key is the indexed column(s) and the value is a rowid.
And SQLite generally only does simple nested loop joins. No hash joins, etc. Just the most obvious joining you could do if you wrote database-like logic yourself using ordered data structures with the same perf properties, e.g. std::map.
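A sketch of what that nested loop join looks like written by hand over such structures (table and index contents here are invented; plain dicts are enough to show the shape of the algorithm):

```python
# Two "tables": users keyed by rowid, and orders keyed by rowid, plus
# an index on orders.user_id mapping user_id -> list of order rowids.
users = {1: "alice", 2: "bob"}
orders = {10: (1, "book"), 11: (2, "pen"), 12: (1, "mug")}
orders_by_user = {1: [10, 12], 2: [11]}

# SELECT u.name, o.item FROM users u JOIN orders o ON o.user_id = u.rowid
# as a nested loop: scan the outer table, seek into the inner index.
result = []
for rowid, name in users.items():                    # outer: scan users
    for order_id in orders_by_user.get(rowid, []):   # inner: index seek
        result.append((name, orders[order_id][1]))

print(result)   # → [('alice', 'book'), ('alice', 'mug'), ('bob', 'pen')]
```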
From this it ought to be pretty obvious why column order in an index matters, etc.
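You can watch SQLite confirm this with EXPLAIN QUERY PLAN (the schema here is invented): an index on (a, b) serves a lookup on a, but a lookup on b alone can't use it and falls back to a full scan.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (a INTEGER, b INTEGER, c TEXT)")
db.execute("CREATE INDEX t_ab ON t (a, b)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the plan text.
    return [row[-1] for row in db.execute("EXPLAIN QUERY PLAN " + sql)]

# Leading index column in the WHERE clause: SQLite can seek the index.
print(plan("SELECT * FROM t WHERE a = 1"))   # SEARCH ... USING INDEX t_ab
# Only the second index column: the index is useless, full table scan.
print(plan("SELECT * FROM t WHERE b = 1"))   # SCAN of the table
```

The index B-tree is sorted by (a, b), so all the b values for a given a are contiguous, but the b values overall are scattered — exactly the sorted-structure intuition above.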
If you type the name of the person, it should offer to create a "Messages with: Person" filter. It should also pop up a filter bubble for photos. From there I think you can type a query and it will search the photos by text. I don't think you can add your date filter, though.
The second way is to open that conversation view and click the contact icon at the top, which should bring you to a details page listing a bunch of metadata and settings for the conversation (e.g. participants, hide alerts, ...). One of the sections shows all photos from that conversation. Browse that until you find the one you care about.
I admit I was wrong in my understanding of iMessage's capabilities.
I remembered its search sucking, and also it not working on all my devices, so I quit using it and regurgitated a stale criticism.
Still, the search is useless to me if I can't do it on my Linux desktop (like I can with email, Discord, and every other chat service I use), so I'd still say iMessage has a laughably lacking search by nature of it only working on iOS/macOS, when all the other chat apps I use offer at least some search on iOS/Android/Linux.
Oh I've debugged this before. Native memory allocator had a scavenge function which suspended all other threads. Managed language runtime had a stop the world phase which suspended all mutator threads. They ran at about the same time and ended up suspending each other. To fix this you need to enforce some sort of hierarchy or mutual exclusion for suspension requests.
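A sketch of the mutual exclusion fix (all names here are invented): both runtimes agree on a single process-wide lock that must be held for the duration of any stop-the-world, so the two suspension mechanisms serialize instead of suspending each other mid-flight.

```python
import threading

# One process-wide lock shared by every runtime that wants to suspend
# other threads. In a real system this would live in a common native
# library so both the GC and the allocator's scavenger see it.
suspension_lock = threading.Lock()

log = []

def stop_the_world(runtime_name, work):
    # Serialize suspension requests: only one runtime may have other
    # threads suspended at a time, so neither can catch the other
    # halfway through its own stop-the-world.
    with suspension_lock:
        log.append(f"{runtime_name}: world stopped")
        work()
        log.append(f"{runtime_name}: world resumed")

t1 = threading.Thread(target=stop_the_world, args=("gc", lambda: None))
t2 = threading.Thread(target=stop_the_world, args=("scavenger", lambda: None))
t1.start(); t2.start(); t1.join(); t2.join()

# Each runtime's stop/resume pair is contiguous: the pauses never interleave.
print(log)
```

A fixed hierarchy (e.g. the GC always outranks the scavenger) works too, but a single lock is the simplest way to make the pauses mutually exclusive.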
> Why you should never suspend a thread in your own process.
This sounds like a good general principle, but suspending threads in your own process is kind of necessary for e.g. many GC algorithms. Now imagine multiple such runtimes running in the same process.
> suspending threads in your own process is kind of necessary for e.g. many GC algorithms
I think this is typically done by having the compiler/runtime insert safepoints, which cooperatively yield at specified points to allow the GC to run without mutator threads being active. Done correctly, this shouldn't be subject to the problem the original post highlighted, because it doesn't rely on the OS's ability to suspend threads when they aren't expecting it.
This is a good approach but can be tricky.
For example: what if your thread spends a lot of time in a tight loop, e.g. a big inlined matmul kernel? Since you never hit a function call, you don't get safepoints that way. You can add them to the back-edge of every loop, but that can be unappetizing from a performance perspective.
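A toy sketch of that back-edge polling (everything here is invented): the mutator checks a flag on every loop iteration and parks itself cooperatively, which is exactly the per-iteration cost being traded against.

```python
import threading

safepoint = threading.Event()   # GC requests a pause
parked = threading.Event()      # mutator acknowledges it is parked
resume = threading.Event()      # GC releases the mutator
done = threading.Event()

iterations = 0

def mutator():
    global iterations
    while not done.is_set():
        iterations += 1          # the "real work" of the loop body
        # Safepoint poll on the loop back-edge: a cheap flag check,
        # but it runs once per iteration of the hot loop.
        if safepoint.is_set():
            parked.set()
            resume.wait()

t = threading.Thread(target=mutator)
t.start()

safepoint.set()          # request a stop-the-world pause
parked.wait()            # mutator is now at a known-safe point
snapshot = iterations    # "GC work": safe, the mutator is parked
safepoint.clear()
resume.set()             # release the mutator
done.set()
t.join()
```

Real runtimes compile the poll down to a single load (or a guard-page trick) precisely because even one extra branch per back-edge is measurable in kernels like that matmul.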
> suspending threads in your own process is kind of necessary for e.g. many GC algorithms
True. Maybe the more precise rule is “only suspend threads for a short amount of time and don’t acquire any locks while doing it”?
The way the .NET runtime follows this rule is that it only suspends threads for a very short time. After suspending, the thread is immediately resumed if it is not running managed code (i.e. it's in some random native library or a syscall). If the thread is running managed code, it is hijacked by replacing either the instruction pointer or the return address with the address of a function that will wait for the GC to finish. The thread is then immediately resumed. See the details here:
That's not really correct. Compare a basic interactive page's resource usage to a similar interface made with Qt or even .NET: it's the runtime that makes the massive difference.