> So I very clearly described a multitude of things that fit this description
No, we aren't seeing this damage though.
That's what would convince me.
Existing harm, like the amount of money that people are losing to scams doubling.
That's a measurable metric. I am not talking about vague descriptions of what you think AI does.
Instead, I am referencing actual evidence of real-world harm that current authorities say is happening.
> said that they need to double or increase in frequency
By increase in frequency, I mean that it has to be measurable that AI is causing an increase in existing harm.
I.e., if scams have happened for a decade and 10 billion dollars is lost every year (a made-up number), and in 2023 the money lost only barely increased, then that is not proof that AI is causing harm.
I am asking for measurable evidence that AI is causing significant damage, over and above a problem that already existed. If the amount of money lost stays the same, then AI isn't causing measurable damage.
> I pinned you down to a standard
No, you misinterpreted the standard such that you are now claiming the harm caused by AI can't even be measured.
Yes, I demand actual measurable harm.
As determined by, say, government statistics.
Yes, the government measures how much money is generally lost to scams.
> you just don't want to worry about it
A much more likely situation is that you have zero measurable examples of harm, so you look for excuses for why you can't show it.
Problems that exist can be measured.
This isn't some new thing here.
We don't have to invent excuses to flee from gathering evidence.
If the government does a report and shows how AI is causing all this harm, then I'll listen to them.
But it hasn't happened yet. There is no government report saying that, I don't know, 50 billion dollars in harm is being caused by AI and therefore we should do something about it.
Yes, people can measure harm.