The point is that it can be trained to be convincing in the first place.
The current batch of AI can be trained by giving it a handful of "description of a task -> result of the task" mappings - and then it will not just learn how to perform those tasks, it will also generalize across tasks, so you can give it a description of a completely novel task and it will know how to do that as well.
Something like this is completely new. For previous ML algorithms, you needed vast amounts of training data specifically annotated for a single task just to get decent generalisation performance within that task. There was no way to learn new tasks from thin air.
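To make the "description of a task -> result of the task" idea concrete, here is a minimal sketch of how such a few-shot prompt is typically assembled. The example tasks and the novel task are just illustrative placeholders, and no real model API is called - the resulting string is what would be sent to the model for completion.

```python
# Hypothetical few-shot prompt: a handful of solved "task -> result" pairs,
# followed by a novel task the model was never explicitly trained on.
few_shot_examples = [
    ("Translate to French: 'good morning'", "bonjour"),
    ("Give the antonym of 'cold'", "hot"),
    ("Sum the numbers 2, 3 and 5", "10"),
]

novel_task = "Give the chemical symbol for gold"

prompt_lines = []
for description, result in few_shot_examples:
    prompt_lines.append(f"Task: {description}\nResult: {result}")
# Leave the final result blank so the model fills it in.
prompt_lines.append(f"Task: {novel_task}\nResult:")

prompt = "\n\n".join(prompt_lines)
print(prompt)  # this string would be sent to the model, which would complete it
```

The point of the pattern is that none of these tasks share annotation schemes or training sets; the examples only establish the format, and the model generalizes to the unseen task from that.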
Is it really that different from socialization (particularly primary socialization [0]), whereby we teach kids our social norms so that they don't turn out to be sociopaths?