This is probably because my explanation is very brief. I don't see how a shader (a program running on the GPU) can detect that the OS has killed a process and initiate a clear.
a shader is a program executed by the GPU, and it can manipulate GPU memory. The driver can create a fake surface out of the freed memory and run a clear shader on it (which would avoid the need to zero the memory from the CPU through the PCIe bus)
when you do a release on a texture object, when the context is destroyed, when glDeleteTextures is called... you just have to enumerate them all, but eventually all those calls are passed to the graphics driver to be translated into GPU operations.
That's the same as saying an HDD driver can zero deleted files and delete temporary files when a process is killed, just because it translates API calls into HDD controller commands.
Yes, the idea is that the manufacturer is in the position of knowing the most efficient way to talk to its GPU, and the driver knows everything that's happening memory-wise. It'd be interesting to see a prototype done in some open source Linux driver. Tbh, I'm probably not good enough for that.
sorry, I missed 'So, in your opinion the driver issues TRIM, not the OS' from the previous reply. I never said that, and that was not my point
my point was that an optional post-delete cleanup feature was added to the protocol, ready to be used, which is a perfect example of how to evolve long-term features. Then I said the equivalent post-delete cleanup feature for the GPU should sit in the driver, since the GPU driver is the one that knows how to talk to the hardware, as there is no shared protocol between boards (except VGA modes etc., but those contexts are memory mapped and OS managed), and it knows when a clear should be performed, since all operations go through it.