When do the objects get destroyed or garbage collected?
This part was always magical. It seems to free up the local memory usage of the program, but it doesn’t seem to release the memory back to the operating system, until some type of triggered event.
> This part was always magical. It seems to free up the local memory usage of the program, but it doesn’t seem to release the memory back to the operating system, until some type of triggered event.
That's up to the C runtime's memory allocator. Modern memory allocators don't typically request a new chunk of memory for every malloc() call -- instead, they allocate a single large region of memory at a time and carve that up as needed. This is massively more efficient (system calls are expensive), but also means that those regions can't be released to the OS until all allocations in them are gone.
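To make the idea concrete, here's a toy sketch (nothing like a real allocator's implementation, just the shape of the trade-off): the arena hands out slices of one big region, so freeing a slice only marks it reusable, and the region itself can't go back to the OS until every allocation in it is gone.

```python
class Arena:
    """Toy bump-pointer arena: one big region carved into slices."""

    def __init__(self, size=4096):
        self.size = size
        self.offset = 0   # bump pointer into the region
        self.live = 0     # outstanding allocations in this region

    def alloc(self, n):
        if self.offset + n > self.size:
            raise MemoryError("arena exhausted")
        addr = self.offset
        self.offset += n
        self.live += 1
        return addr

    def free(self):
        # Freeing one slice doesn't shrink the region; only a fully
        # empty arena could be handed back to the operating system.
        self.live -= 1
        return self.live == 0  # True once the whole region is releasable
```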
There's an interesting talk [0] from Bobby Powers on how to make a malloc() and free() that perform compaction, to be able to release memory back to the operating system more frequently (and improve cache hit rates, etc.). But this isn't standard at all yet.
I plan to cover CPython's memory management in future posts. In a nutshell, an object gets destroyed when its reference count hits 0. In this case, CPython calls `tp_dealloc` [1] of the object's type. The `tp_dealloc` slot releases all the resources the object owns and frees the memory. The implementation of `tp_dealloc` differs for different types. Eventually, the `free` function of the memory allocator is called to free the memory. A memory allocator is a set of functions to manage memory. The default memory allocator for objects is pymalloc. It allocates small objects (<= 512 bytes) using the arena allocator [2] and falls back to the raw memory allocator otherwise. The latter calls the `free()` library function to free the memory.
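You can watch the refcount-hits-0 behavior from Python itself (CPython-specific; note that `sys.getrefcount` reports one extra reference for its own argument):

```python
import sys

destroyed = []

class Tracked:
    def __del__(self):
        # In CPython this runs as part of deallocation, right when
        # the reference count drops to zero.
        destroyed.append("freed")

obj = Tracked()
alias = obj                  # refcount is now 2
refs = sys.getrefcount(obj)  # 3: obj, alias, plus the call's own argument
del alias                    # back to 1; the object survives
del obj                      # refcount hits 0 -> __del__ runs immediately
```

This immediacy is CPython-specific; other implementations (PyPy, etc.) may delay destruction until a GC pass.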
The Python/C API Reference Manual has a great section on memory management [3].
I quickly skimmed over OP's previous posts and I don't think they mention it. According to [0], CPython's GC is run every X instructions (not sure how up to date the source is).
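For what it's worth, in current CPython the cyclic collector is driven by allocation counts rather than an instruction counter: a generation-0 collection is triggered when container allocations minus deallocations cross a threshold, which the `gc` module lets you inspect and tune:

```python
import gc

# Per-generation collection thresholds (a 3-tuple; the first entry is
# the allocation-count trigger for the youngest generation).
thresholds = gc.get_threshold()

# Collections can also be disabled or forced explicitly:
gc.disable()
unreachable = gc.collect()  # run a full collection now; returns a count
gc.enable()
```

Note this collector only handles reference cycles; plain refcounting frees acyclic garbage immediately regardless of these settings.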
From this, I guess CPython opted for simplicity instead of implementing something like a memory-usage monitor.
I haven't read the article, but as I understand it, Python holds on to acquired memory and pre-allocates 'blocks' of memory ready for items of various sizes. When you use more memory and it's later freed, IIRC it holds on to most of these blocks.
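If you're curious what it's holding on to, CPython can dump pymalloc's internal arena/pool/block statistics (CPython-only; the report goes to stderr):

```python
import sys

# _debugmallocstats is a CPython-specific introspection hook that
# prints a breakdown of pymalloc's arenas, pools, and block size
# classes -- useful for seeing how much freed memory is being retained.
if hasattr(sys, "_debugmallocstats"):
    sys._debugmallocstats()
```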