The syllabus doesn't include those potentially taboo topics, but knowledge of chemistry leads to knowledge of those topics anyway.
Sure, a teacher with zero knowledge of chemistry can follow a syllabus, read the textbook out loud, make sure everyone's multiple-choice answers match the answer key, and try to teach chemistry, for some first approximation of "teach". A primitive computer program can do that too.
What happens when a student asks a question that deviates slightly from the syllabus because they didn't quite grasp it the way it was explained? The teacher can't answer it, but a "dumb GPT model" trained on the entirety of the Internet, which includes a ton of chemistry and the syllabus itself, probably can.
But yes, if you pre-filter the training data to include only the words of the syllabus, the language model will be just as poor a teacher as the human who did the same thing. Reminds me of my first "programming teacher", a math teacher who picked up GW-BASIC over the summer to teach the new programming class. He knew nothing.
We never even reached the contentious part of this discussion. This part should be obvious. You can't just filter out "bomb" because the student could explain in broad terms what they mean by the word and then ask for a more detailed description. You can't filter out "explosion" because the teacher might need to warn the students about the dangers of their Bunsen burner. You can't filter out all the possible chemical reactions that could lead to an explosion, because they can be inferred from the subject matter that it's supposed to be teaching.
The same goes for things like negative stereotypes, foul language, talking about "illegal or unethical" behaviors (especially as laws and ethical norms can change after training, or differ between usage contexts). Pre-filtering is just a nonstarter for any model that's intended to behave as an intelligent being with worldly knowledge. And if we drop those requirements, then we're not even talking about the same technology anymore.
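To make that concrete, here's a minimal sketch (Python, with a made-up blocklist and made-up sample documents, not anyone's actual pipeline) of the kind of naive keyword pre-filter being argued against, and how it discards exactly the wrong thing:

    # Toy word-level blocklist filter over training documents.
    # Blocklist terms and corpus are invented for illustration.
    BLOCKLIST = {"bomb", "explosion", "explosive"}

    def keyword_filter(documents):
        # Drop any document containing a blocklisted word, regardless of context.
        kept = []
        for doc in documents:
            words = {w.strip(".,;!?").lower() for w in doc.split()}
            if words & BLOCKLIST:
                continue  # discarded: no notion of intent
            kept.append(doc)
        return kept

    corpus = [
        "Never heat a sealed container over a Bunsen burner; the pressure can cause an explosion.",
        "Combining a strong oxidizer with a finely divided fuel releases energy very rapidly.",
    ]

    print(keyword_filter(corpus))
    # Only the second document survives: the Bunsen-burner safety warning is
    # filtered out, while the chemistry that lets a reader infer dangerous
    # reactions sails right through.

The filter has no notion of intent or context, so the only way to make it "safe" is to cut so deeply that there's no chemistry teacher left.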