Is that the lesson to learn here, really? An employee does a stupid thing and the leadership didn't look into its security practices to avoid this type of attack?
Potentially, yes. Someone could go nutty before being dismissed, and it's just another Tuesday. That's why there are things we often refer to in this industry as backups. If it is just a bit of software mischief, you should be able to recover from it without too much downtime. s/disgruntled employee/ransomware/ and it's really no different.
Not abruptly "terminating" your employees, and instead having meaningful conversations about how things will go once they know they're being let go, helps a lot.
That forces the employer to come up with actual explanations someone can digest, and gives the employee time to be rational and plan their pivot.
There will still be ugly stories, and preemptively removing access will be needed in some cases, but that's by far the exception rather than the norm, and we usually know ahead of time when that will be the case.
> and the leadership didn't look into its security practices to avoid this type of attack?
It sounds like they very much did, starting with physical security. Especially in that era, physical access was probably even the biggest component.
The lesson is that employees should only ever have access to the resources they need to do their job, and that there should be a fine-grained permission system to check whether someone can read, or read and write, each of those resources.
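The idea can be sketched in a few lines: a deny-by-default access-control list that grants each principal exactly the permissions it needs on each resource. The principal and resource names here are hypothetical, just to illustrate the read vs. read-write distinction:

```python
from enum import Flag, auto

class Perm(Flag):
    NONE = 0
    READ = auto()
    WRITE = auto()

# Hypothetical ACL: each (principal, resource) pair gets exactly the
# permissions that principal needs for its job -- nothing more.
ACL = {
    ("payroll-batch", "salaries-db"): Perm.READ,
    ("hr-admin", "salaries-db"): Perm.READ | Perm.WRITE,
}

def is_allowed(principal: str, resource: str, wanted: Perm) -> bool:
    """Deny by default; grant only what the ACL explicitly lists."""
    granted = ACL.get((principal, resource), Perm.NONE)
    return (granted & wanted) == wanted
```

An unlisted pair falls through to `Perm.NONE`, so anyone you forget to grant to simply gets nothing, which is the safe failure mode.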
Even when I am working on my own projects, by myself, I use different accounts to access my services depending on the role. At first it might seem crazy, but once you learn how to do this and automate the process, it is a life-saver if you suddenly find yourself needing quick help from a contractor, or if you want to give a backup key to a trusted friend as a way to say "here is what you need to do in case something happens to me".
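One way to automate the per-role setup is to keep one narrowly scoped credential per role in its own environment variable, and have a tiny helper refuse anything it doesn't recognize. The role names and variable names below are made up for illustration:

```python
import os

# Hypothetical layout: one scoped token per role, in separate env vars,
# instead of a single all-powerful key shared across every task.
ROLE_TOKEN_VARS = {
    "deploy": "MYAPP_DEPLOY_TOKEN",    # can push releases, nothing else
    "monitor": "MYAPP_MONITOR_TOKEN",  # read-only metrics access
    "backup": "MYAPP_BACKUP_TOKEN",    # can read data dumps only
}

def token_for(role: str) -> str:
    """Return the credential for one role; reject unknown roles."""
    var = ROLE_TOKEN_VARS.get(role)
    if var is None:
        raise KeyError(f"no such role: {role}")
    token = os.environ.get(var)
    if token is None:
        raise RuntimeError(f"{var} is not set")
    return token
```

Handing a contractor the `monitor` token then gives them exactly that role's access, and revoking it later doesn't touch anything else.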