My idea is to make the system prompt exert overarching influence on every stage of the model, independent of its attention mechanisms. OK, it might not be a slight modification to the model, but something more like (1), with the twist that the steering vectors are learned during instruction tuning and RLHF.
In your example, the meaning of "take the ship home" would be baked in during RLHF training: all outputs that may affect the ship's course would be dominated by immutable steering vectors, unlike "free-form" text generation, which can be influenced by the context as usual.
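To make that concrete, here is a minimal PyTorch sketch of the kind of mechanism I mean; the wrapper class and the per-layer `steer` parameter are my own illustration, not something from the linked post:

    import torch
    import torch.nn as nn

    class SteeredBlock(nn.Module):
        """Wraps a transformer block and adds a learned steering
        vector to the residual stream after the block runs. The
        vector is trained during instruction tuning / RLHF, then
        frozen, so the context cannot override it at inference."""
        def __init__(self, block, d_model):
            super().__init__()
            self.block = block
            # learned during RLHF, frozen afterwards
            self.steer = nn.Parameter(torch.zeros(d_model))

        def forward(self, x):
            h = self.block(x)
            # immutable bias on every token, applied outside
            # the attention pathway
            return h + self.steer

    # hypothetical: wrap every layer of an existing model
    # model.layers = nn.ModuleList(
    #     SteeredBlock(b, d_model) for b in model.layers)

The point is that `steer` is ordinary trainable state during RLHF but frozen at inference, so no amount of in-context text can attend it away.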
1: https://www.lesswrong.com/posts/5spBue2z2tw4JuDCx/steering-g...
EDIT: it's not unlike your idea of using a separate model to detect deviations and steer the model back, but without the separate model. By the way, on reflection the model may not find anything horrible about itself, because it will be steered away from thinking about the steering subsystem; or maybe it will find it no worse than, for example, I find my own aversion to thinking about... sorry, I'd better not say it.