Because we rarely put it in a position of power. When we do, stuff like the 737 Max (where badly-written software tried to compensate for badly-designed hardware, and might even have got away with it had the software not depended on a single angle-of-attack sensor), or the 1994 RAF Chinook crash on the Mull of Kintyre (where the engine-control software was riddled with known defects, though it was never established that they actually caused the crash), or the Therac-25 (where race conditions in the software controlling a radiation therapy machine delivered massive overdoses) happens.
If you're counting military systems, the list is much longer. I don't.
All of these failures came from ordinary software development practices being applied in a context where the software actually mattered. Real-world software development practices suck, and have to be thrown out of the window entirely if everybody's going to get out alive.
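To make one of those failures concrete: the published Therac-25 analyses trace the overdoses to race conditions on shared state between the operator-interface task and the machine-setup task, with no synchronization between them. Below is a minimal C sketch of that bug class; the names, timings, and thread structure are my own illustration of the pattern, not the actual Therac code (which ran custom multitasking on a PDP-11).

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative only -- not Therac-25 code. The reported failure mode:
     * the operator could edit treatment settings faster than the setup
     * task noticed, so the beam fired against stale parameters. */

    static int beam_mode = 0;          /* shared, unlocked: the bug */

    static void *setup_task(void *arg) {
        (void)arg;
        int snapshot = beam_mode;      /* read settings once... */
        usleep(100000);                /* ...then slow hardware setup */
        printf("firing: snapshot=%d, current=%d\n", snapshot, beam_mode);
        return NULL;                   /* fires on the stale snapshot */
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, setup_task, NULL);
        usleep(10000);                 /* operator edits mid-setup */
        beam_mode = 1;                 /* too late: snapshot already taken */
        pthread_join(t, NULL);
        return 0;
    }

The boring fix (a mutex around the shared settings, or re-validating them immediately before firing) is exactly the kind of discipline that ordinary development practice skips.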
The reason more people haven't died is that software isn't often given positions of responsibility, and when it is, it's usually held to actual standards.
And that's what good engineering is. When you know your product can't do the job, you don't put it in a position of responsibility. If you can't do it safely, you don't do it at all.
Physical-world engineers could stand to learn a lot from software engineers about one of the hard parts of engineering: sometimes the job is saying "No, we shouldn't."