> train an LLM to differentiate appropriate vs inappropriate comments
> popup asking ... which would be triggered by client-side javascript based on a hashed word
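For context, the quoted suggestion amounts to something like the sketch below. This is only a rough illustration of the parent's idea, not their implementation: the flagged-hash list, the word splitting, and the `confirm()` dialog are all my assumptions.

```ts
// Precomputed SHA-256 hashes of words the site operator wants to nudge users about
// (hypothetical list; computed offline, e.g. via sha256Hex("someword")).
const FLAGGED_HASHES = new Set<string>([]);

async function sha256Hex(word: string): Promise<string> {
  const bytes = new TextEncoder().encode(word.toLowerCase());
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Before submitting a comment, hash each word client-side and show a
// confirmation popup if any hash matches the flagged set.
async function confirmBeforePost(commentText: string): Promise<boolean> {
  const words = commentText.split(/\W+/).filter(Boolean);
  for (const word of words) {
    if (FLAGGED_HASHES.has(await sha256Hex(word))) {
      return window.confirm(
        "Your comment may come across as inappropriate. Post anyway?"
      );
    }
  }
  return true;
}
```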
Yeah, this is the feature I'd build my own social media platform for: owning "the algorithm" and using it to indoctrinate users en masse with my own views. Are you sure you want to insist that this is in any way, shape, or form even a remotely ethical thing to do?