> ChatGPT is more of an existential threat because it will propagate to infect other knowledge bases. Wikipedia, for instance, relies on "published" facts as an authority, but ChatGPT output is going to wind up as a source one way or another. And worse, ChatGPT will then digest its own excrement, worsening its own results further.
This is what people do collectively, and did long before any GPTs were in sight. Lots of the strong convictions people hold today, and publish all over the place, are the re-processed excrement of long-gone mental viruses of past civilizations.
Security cameras have existed for a long time, but storage cheap enough to keep years of footage and algorithms capable of processing thousands of streams in real time create massive privacy problems that didn't exist even when the richest companies paid humans to watch.
I don't know why such a simple fact needs to be repeated over and over again. It's either naivete or malice that makes people ignore that fact.
A change in scale can easily lead to a change in kind. A party popper and a flashbang are functionally the same thing, but their scale makes them have wildly different implications.
Another example is the police. Most people agree that the existence of a police force to enforce laws is a good thing (society would function very differently otherwise). But if there were a policeman assigned to every single person on the planet, following them 24x7 and enforcing every possible law on them, not so much anymore.
On the other hand, why have a law if it’s not meant to be enforced universally and consistently?
When laws are applied selectively, it creates unequal experiences across the population.
No one wants the tyranny of oppressive applications of overbearing laws. So, in those instances, change the law to be fair enough and compassionate enough that it can be applied in all instances where the letter of the law is broken.
And obviously privacy is important and ubiquitous surveillance would undermine our ability to enjoy life. But in public spaces, consistently applying fairly written compassionate laws wouldn’t necessarily be a bad thing.
Because the real world has nuance and is not black and white. Humanity relies on people using their judgment; trying to make absolute laws with zero tolerance has been a failure everywhere it's been tried. It is impossible to enumerate all reasonable exceptions, and impossible to specify exceptions precisely enough that bad actors can't exploit them.
If you make the rules overly strict and enforce them universally, you end up with people in jail for offenses no one cares about.
If you make the rules at all loose, bad actors instantly seize on any loopholes and ruin the commons for everyone.
Yeah, but that’s why you have a “human in the loop”, to handle the infinite number of edge cases. You’d never want end-to-end AI for anything mission critical like justice.
You have a human in the loop explicitly to, in your words, not "enforce universally and consistently".
* Most people agree that stealing from a store is wrong.
* Most people agree that opening food/medicine and consuming it in the store before paying is stealing.
* Most people believe that helping those in a medical emergency is important.
If I were in a store and saw someone going into hypoglycemia and grabbed a candy bar and handed it to them, or if they were having a heart attack and I grabbed a bottle of aspirin and opened it to give them one, I would be committing a crime. Most reasonable people would say that even if a police officer were standing in front of me watching me do it, I should not be charged.
> Most people agree that opening food/medicine and consuming it in the store before paying is stealing
In my jurisdiction, that is only stealing if you do it with the intention of not paying for it.
Sometimes I go to the supermarket, pick a drink off the shelf, start drinking it, and take the partially drunk (or sometimes completely empty) bottle to the checkout to pay. I've never got in trouble and staff have never complained; I know the law is on my side, and I'm pretty confident their training tells them the same thing.
If you're depending on sussing out people's intent, then you're accepting that we can't be clear/zero-tolerance about it. If you catch me stealing and I just go "oh no dude, I was totally going to pay" but you don't believe me, what then? You can't possibly know what my actual intention was.
The physical design of the store makes it clear in most cases. The checkouts form a physical barrier between the "haven't paid yet" area and the "have paid" area. It is difficult to assume an attempt to steal in the former, much easier once one passes to the latter with unpaid goods.
The legal definition of theft - at least where I live - is all about intention. It involves an intention to deprive another of their property. No intention, no theft. If you absent-mindedly walk out of a store without paying for something, no theft has occurred. When our kids were babies, we used to put the shopping in the pram. One day I left the supermarket and, down the street, discovered in a different section of the pram a loaf of bread I'd forgotten to pay for. I went back and explained myself to the security guard. Did he call the police? No, he commended me for my honesty and let me pay for it at the self-serve checkout.
For a supermarket, their biggest concern with theft is the repeat offenders. If it is an unclear situation, it is in their best interest to give the customer the benefit of the doubt. But, if the same unclear situation happens again and again, that’s when the intent (which is legally required to constitute stealing) becomes obvious. Ultimately though, it is up to the store staff, police, prosecutors and magistrates to apply a bit of common sense in deciding what is likely to be intentional and what likely isn’t. But yes, given theft is defined in terms of inferring people’s intentions, “zero tolerance” is a concept of questionable meaningfulness in that context.
And yes, I do realize that intention is part of the law. That wasn't really what I was saying. I am saying that because we have that, we are implicitly accepting that a lot of this stuff cannot be ironclad. There has to be room for interpretation in enforcement.
This is where the law ends up discriminating in practice. The law professor who claims “I forgot it was in my pocket” is far more likely to be believed than the homeless person who makes the same claim. If it makes it as far as the prosecutors - and it probably won’t - they’ll see the homeless person as an easy win (gotta make that quota, keep up those KPIs), the law professor’s case will be put in the “too hard” basket.
Unless they have the law professor on video "forgetting it was in their pocket" again and again and again. With enough repetition, claims that it was an accident cease to be believable. Although then the law professor will probably have three esteemed psychiatrists willing to testify to kleptomania, and the case will go back into the too-hard basket again.
> if they were having a heart attack and I grabbed a bottle of aspirin and opened it to give them one, I am committing a crime
Only if the store insists you pay for it and you refuse. And maybe the law needs to be rewritten to include some type of “good Samaritan eminent domain” clause.
But let’s say you misdiagnose the incident and the stranger refuses the medicine and you refuse to pay. Even then, the punishment for tampering with a product should be a small fine.
Laws could have linear or compounding penalties to account for folks who tamper with greater numbers of products, or who do it repeatedly within a given time period.
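To illustrate with invented numbers: a linear schedule might fine $10 per tampered product, so five products cost $50, while a compounding schedule might double the fine for each repeat offense within a year ($10, then $20, then $40), making habitual tampering expensive fast while keeping a one-off good-faith mistake trivial.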
But if there's an automated system that catches people opening products and alerts the property owner or police, then they could decide whether it's a high enough concern to investigate further.
But the alert would be the end of the AI involvement.
I think the main problem is not universal law enforcement but constant surveillance, which is a bit orthogonal.
Why should people be under constant surveillance even at times when they are not breaking any laws? Why should someone else have access to every moment of your life?
Good point, but I think my main point still stands: being occasionally surveilled by the police is OK (I don't mind them looking at me in public places if I'm near them), but if you scale this up to constant surveillance it's a very different story.
>A change in scale can easily lead to a change in kind. A party popper and a flashbang are functionally the same thing, but their scale makes them have wildly different implications.
What a fantastic example. Borrowing this for sure.
Sure, but humans can't do it at nearly the rate GPT can, and GPT will never apply critical thought to the memes it digests and forwards on, while humans sometimes do.
We are talking about a model that, at its core, computes statistics of what the next word in a sentence will be, based on an existing corpus. That gives the model the ability to find and summarize all of the existing content related to a prompt, beyond what humans could do, but I still see no critical thinking there.
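To make "statistics of the next word" concrete, here is a toy sketch; a crude bigram table stands in for the model (real systems use learned transformer weights over subword tokens, so this only shows the shape of the loop, not the real thing):

```python
# Toy next-word model: count, for a tiny corpus, which word follows which,
# then generate by repeatedly sampling a likely continuation.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# A bigram table stands in for the trained model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, max_words=8):
    out = [start]
    for _ in range(max_words):
        counts = follows.get(out[-1])
        if not counts:  # no observed continuation: stop
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"
```

Scale the table up to billions of parameters and subword tokens and you get something that can summarize, but the loop itself never checks whether what it emits is true.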
This isn't exactly accurate. It's not creating one word at a time; that's the illusion given by the way it streams the text onto the screen. If it worked that way, it would be impossible for it to produce code that compiles, for example.
It's not the same. This is something I've observed many times but have never quite been able to put a name to.
When you lower the friction of an action sufficiently, it causes a qualitative change in the emergent behavior of the whole system. It's like how a little damping means the difference between a bridge you can safely drive over and a Galloping Gertie that resonates until it collapses.
When a human has to choose and put some effort into regurgitating a piece of information, there is a natural decay factor in the system: people will sometimes not bother to repeat something if it doesn't seem valuable enough to them. Sure, things like urban legends and old wives' tales exploit bugs in our information prioritization. But, overall, it has the effect of slowly winnowing out nonsense, misinformation, and other low-value stuff. Meanwhile, information that continues to be useful continues to be worth the effort of repeating.
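To put toy numbers on that decay factor (everything here is invented), a simple branching process shows how a small change in per-hop friction separates information that winnows out from information that resonates:

```python
# Toy branching process: each person who sees an item reshares it to
# `audience` others with probability p. Expected growth per hop is
# R = p * audience: R < 1 and the item fades; R > 1 and it takes off.
import random

def survives(p_reshare, audience=10, generations=12):
    carriers = 1
    for _ in range(generations):
        carriers = sum(1 for _ in range(carriers * audience)
                       if random.random() < p_reshare)
        if carriers == 0:
            return False  # the item died out
    return True  # still spreading after all generations

def survival_rate(p_reshare, trials=500):
    return sum(survives(p_reshare) for _ in range(trials)) / trials

random.seed(0)
print(survival_rate(0.08))  # R = 0.8: almost always dies out
print(survival_rate(0.12))  # R = 1.2: survives a meaningful fraction
                            # of the time, and grows when it does
```

The point of the analogy: the qualitative outcome flips on a small quantitative change in how much each hop costs.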
Compared to the print and in-person worlds before it, things got much worse just with social media, where a human was still in the loop but the effort to rebroadcast was nil. This is exactly why we saw a massive rise in misinformation over the past couple of decades.
With ChatGPT taking humans completely out of the loop, we will turn our information systems into Galloping Gertie, and they will resonate with nonsense and lies until the whole system falls apart.
We are witnessing the first cracks now. Look at George Santos, a candidate who absolutely should never have won a single election but managed to because the information pipelines about candidates are so polluted with junk and nonsense that voters didn't even realize he was a con man. Not even a sophisticated one, just a huckster able to hide within the sea of information noise.
The question is, then: is the human-borne friction enough to slow the diffusion of GPT-derived "knowledge" back onto Wikipedia through human inputs? It is very easy to imagine that GPT-likes could feed misinformation to a population and change social/cultural/economic understandings of how reality works. That would then slowly seep back into "knowledge bases" as the new modes of reasoning become "common sense".
I think the worst-case scenario is that some citable sources get fooled by ChatGPT and Wikipedians will have to update their priors on what a "reliable source" looks like.
Sure, we need damping in our information systems and our social trust systems. It's clearly not there now. If the problem gets out of hand to the point we're forced to address it, I think that's a good thing overall.
> But, overall, it has the effect of slowly winnowing out nonsense, misinformation, and other low-value stuff. Meanwhile, information that continues to be useful continues to be worth the effort of repeating
Unfortunately, in some (many?) cases the very fact that some "information" exists is the "usefulness", independent of the usefulness/accuracy of the information itself. The unsubstantiated "claim" that crime is up can result in more funding for police, even if the claim is false. There are people profiting from the increase in police spending; they don't care whether the means used to obtain it are true or not.
Over the long term, the least-expended-energy state, accepting the truth, will win out, but people have some incentive/motivation to avoid that in the shorter term.
But also, this is an "AI", not human thought. Why conflate the two as if they are equivalent? We are not at the point where machine learning is smarter or produces better quality content than humans.
This is so on point. While everyone was arguing that LaMDA couldn't possibly be conscious a few months back, I was asking: what if we're not conscious?
Yep, not sure what the panic here is, ChatGPT is probably churning out better quality stuff than the average SEO spammer. The internet has been mostly garbage for a very long time at this point.
I think this is an interesting take actually. Content on the internet is in a steep downward spiral.
Masses of spammers and SEO hackers are filling the tubes with garbage. There are still some safe-ish havens, but those bastions can only survive the onslaught for so long.
We need a new internet at some point relatively soon. Maybe ChatGPT will accelerate the demise of this one to force the creation of some new paradigm of communication and dissemination of knowledge.