> It doesn’t get the kind of things taught in most classrooms wrong in the way it gets business applications wrong, because there’s a (mostly) correct response that isn’t going to vary a ton from source to source.
It has fabricated legal cases and invented citations to back up its statements.
The issue is that it can be difficult to know when it's wrong without putting in a lot of effort. Students won't put in that effort, and that's assuming they're even capable of understanding when/where it's wrong in the first place.
Just like self-driving cars: we can say "pay attention and keep your hands on the wheel at all times"... but that's not what everyone does, and we've already seen the consequences.
We need to be careful here. This tech is new. ChatGPT hasn't even existed (publicly) for a year. Getting it wrong and going too fast has consequences. In the education space in particular, those consequences can be profound.
This is nothing at all like self-driving cars. Firstly, the risks are not even in the same ballpark, and secondly, every piece of advice given includes “check the response independently.” The fact that people choose to misuse a tool like this says nothing about the tool itself.
At some point, using LLMs like ChatGPT recklessly is on the user, not the tool.