For Watson, probably not "trivial", no, but also probably not a hugely expensive undertaking. It really depends on how the system works, though. Something like an artificial neural network would be impossible to manually prune like that; they'd have to retrain it and "teach" it that those things are bad. Without knowing much about Watson, though, my guess would be that its knowledge is largely stored in a structured database, which is more directly accessible.
Watson's 'memory' is just a big database of facts, rules, and statistical models. To 'forget' a source, they'd just have to purge any facts extracted from it and rebuild any models derived from it.
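Not that IBM has published how any of this works, but if the facts are tagged with where they came from, 'forgetting' a source is basically a delete plus a retrain flag. A toy sketch, every name and the schema here invented:

```python
import sqlite3

# Toy knowledge store: every extracted fact carries the source it came from.
# (Hypothetical schema -- just to illustrate provenance-tagged facts.)
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE facts (
    subject TEXT, relation TEXT, object TEXT, source TEXT
)""")
conn.execute("INSERT INTO facts VALUES ('bullshit', 'is_a', 'noun', 'urban_dictionary')")

def forget_source(source_name):
    """Purge every fact extracted from a source and report which
    derived models need retraining from the remaining corpus."""
    conn.execute("DELETE FROM facts WHERE source = ?", (source_name,))
    conn.commit()
    # Statistical models can't be surgically edited, so anything trained
    # on that source just gets rebuilt without it.
    return ["answer_ranker", "slang_language_model"]  # made-up model names

print(forget_source("urban_dictionary"))
```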
I didn't mean to imply it was simple, just that there's nothing magic about how Watson's knowledge is stored. Obviously at this scale any change is unlikely to be trivial.
Given the wide range of unstructured sources Watson uses, and given that the linguistic rules used to extract facts probably change frequently, I don't think it's unreasonable to assume they have a process that makes rebuilding its knowledge base and models from those sources fairly straightforward.
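Something like this, presumably: an automated build that re-runs extraction over every approved source with the current rules, so dropping a source is just taking it off the input list. Pure guesswork, all names made up:

```python
# Hypothetical rebuild pipeline: re-run fact extraction over every approved
# source with the current rule set, skipping anything on the blocklist.
SOURCES = ["wikipedia", "wordnet", "news_archive", "urban_dictionary"]
BLOCKLIST = {"urban_dictionary"}

def extract_facts(source, rules_version):
    # Stand-in for the real NLP extraction step, which would change
    # whenever the linguistic rules do.
    return [f"fact from {source} (rules {rules_version})"]

def rebuild_knowledgebase(rules_version="2013-01"):
    kb = []
    for source in SOURCES:
        if source in BLOCKLIST:
            continue  # 'forgetting' a source = leaving it out of the rebuild
        kb.extend(extract_facts(source, rules_version))
    return kb

print(rebuild_knowledgebase())
```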
They just overwrote every Urban Dictionary word instance with the string "rainbow".
So Watson still wants to call the query bullshit, but says rainbow instead.
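If that's genuinely the fix, it's about a one-line output filter. A guess at the mechanism, with a stand-in wordlist:

```python
import re

# Hypothetical output filter: any term on the banned wordlist gets swapped
# for "rainbow" before the answer is shown.
BANNED = {"bullshit", "crap"}  # stand-in wordlist

def sanitize(answer):
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, BANNED)) + r")\b",
                         re.IGNORECASE)
    return pattern.sub("rainbow", answer)

print(sanitize("This clue is bullshit."))  # -> "This clue is rainbow."
```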