Eh, there is far more at stake than you're giving it credit for. Yeah, you can go to India and get someone to write some bullshit, but that is far different than having an OAI/Microsoft service telling some kid to neck themselves.
So yeah, you're focusing on the low-hanging fruit of the 'wokeness bot' when the issue of trustworthy and safe model output is an absolutely huge problem that could affect everyone, and we have no really good solutions for it at this point.
> that is far different than having an OAI/Microsoft service telling some kid to neck themselves
OK, I see what you are saying. I feel like with AI/GPT, we would almost have to change the concept of a "company being responsible for its software". Up until now, most software has had X number of inputs and Y number of outputs (click "get new mail" in Gmail and new messages arrive, for example).
But what happens when the software you design can accept billions of possible inputs and return billions of possible outputs? There is simply no way of determining what the output is or could be, and that's fundamental to the software.
The example I think of in my head is a user opening MS Word, typing out "F* YOU" and then sending a screenshot to Microsoft telling them "How could your software offend me like this?". Now obviously this is different from GPT, but it follows the same rough rule of "billions of possible inputs, billions of possible outputs".
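Just to put very rough numbers on that (the vocab size and prompt length below are made-up ballpark figures, not any particular model's specs):

```python
import math

VOCAB_SIZE = 50_000    # distinct tokens the model knows (illustrative ballpark)
PROMPT_TOKENS = 100    # a fairly short prompt (illustrative)

# Traditional software: "click 'get new mail'" is one input with one expected output.
# A language model: every distinct token sequence is a different input.
log10_prompts = PROMPT_TOKENS * math.log10(VOCAB_SIZE)
print(f"Distinct {PROMPT_TOKENS}-token prompts: roughly 10^{log10_prompts:.0f}")
# -> roughly 10^470. You can't enumerate those, let alone check every possible
#    response to each one, so "test all the inputs and outputs" stops being a
#    meaningful QA strategy.
```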
> There is simply no way of determining what the output is or could be, and that's fundamental to the software.
I would argue that releasing a product that has the potential to do harm and whose behavior you can't predict is radically irresponsible and should not be done.
> The example I think of in my head is a user opening MS Word, typing out "F* YOU" and then sending a screenshot to Microsoft telling them "How could your software offend me like this?".
That's not even remotely comparable, because MS Word didn't create the output. The user did.
> But what happens when the software you design can accept billions of possible inputs and return billions of possible outputs? There is simply no way of determining what the output is
Welcome to the trillion-dollar AI safety question, and the reason some experts are deeply concerned that we'll solve general intelligence long before we're ready to deal with the consequences of what general intelligence can do.
This is why we talk about the AI alignment issue. GPT-3/GPT-4 without RLHF in front of it is mostly a weird alien that doesn't behave in a generally useful manner. This is why ChatGPT/Bing Chat took off recently: we put a pretend human mask in front of the monster. But behind that mask is the internet, jumbled up and thrown in a neural network blender. Unless they did a lot of filtering, all the things we'd consider generally bad are in there too, like your kids' teacher teaching them the proper method of smoking crack.
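If you're wondering what "RLHF in front of it" roughly looks like, here's a toy sketch of the pairwise preference loss typically used to train the reward model. The tiny made-up embeddings and linear scorer stand in for a real model's hidden states; this is an illustration, not anyone's actual pipeline:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a real model's hidden states: 8-dim "embeddings" of two
# candidate responses, where a human labeler preferred the first one.
chosen_emb = torch.randn(16, 8)     # embeddings of the preferred responses
rejected_emb = torch.randn(16, 8)   # embeddings of the rejected responses

# Reward model: maps a response embedding to a scalar "how good is this" score.
reward_head = nn.Linear(8, 1)
optimizer = torch.optim.Adam(reward_head.parameters(), lr=1e-2)

for step in range(200):
    r_chosen = reward_head(chosen_emb)
    r_rejected = reward_head(rejected_emb)

    # Pairwise (Bradley-Terry style) loss: make the preferred response
    # score higher than the rejected one.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The trained reward model then gets used to nudge the base model toward answers humans prefer, but the raw next-token predictor over internet text is still sitting underneath the mask.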
> the issue of trustworthy and safe model output is an absolutely huge problem that could affect everyone
Please explain what the huge problem is. You haven't listed one. An AI saying something offensive to you is not a "huge problem". Just close your eyes. Walk away from the screen. "Problem" solved.