It is a very active area of research, AI alignment. The research so far [1] suggests inherent hard limits to what can be achieved. TeMPOraL's comment [2] above points out the reason this is so: the generalizable nature of LLMs is in direct tension with certain security requirements.
[1] check out Robert Miles' excellent AI safety channel on youtube: https://www.youtube.com/@RobertMilesAI
[2] https://news.ycombinator.com/item?id=44504527