The GPL is not an ouroboros. Restricting the freedom to restrict is not at all comparable to the PATRIOT Act. It is more akin to the Bill of Rights, or other such laws preventing certain types of laws from being made.
I'm not sure what you intend with this. I am using that as a counterexample to "laws against slavery restrict freedom", because the person I'm responding to was implying that restriction is inherently good: restricting slavery is generally a positive, but restricting gay marriage is generally a negative. The fact that the same idea in two different scenarios can be good or bad means you can't use it as a maxim.
No, the implication of the post to which you were responding is not that restriction is inherently good, but rather that restriction is not inherently bad.
I think it would be helpful to be clearer as to why you disagree with the analogy rather than simply calling it stupid.
If I had to guess, I would suspect that your real issue is that you do not find the things being restricted to be comparable (i.e., human life/will vs. the use of software), rather than believing that the comparison is somehow invalid for other reasons.
First of all, it's not a good analogy. Software licenses are a contract that you can choose to agree to, whereas laws give you no choice.
Second of all, it is highly debated whether the restrictions imposed by the GPL are a worthwhile tradeoff, while no one in their right mind claims anybody has a right to own slaves...
And lastly, by comparing the GPL to laws against slavery you are trying to evoke certain associations, much like Stallman uses the word "Swindle" instead of "Kindle" -- those are just cheap tricks that distract and make fruitful discussion hard.
Really, the only thing the GPL restricts is your ability to restrict.
(Incidentally, this is comparable to restricting the freedom to enslave; it is a restriction on restriction.)
If you think the GPL somehow "imposes an ideology on others" but literally any other license does not, I am curious as to how. (Hint: a license cannot "impose an ideology"; if you do not agree, you are free to avoid it and stick with BSD code, or whatever else might suit your fancy.)
I see a lot of very opinionated dissent to even discussing the ideas presented in this article. Let me try to paraphrase in a way that hopefully levels the playing field and maybe removes some biases or irons out some personal wrinkles we all may have for one reason or another:
It is conceivable that we (humanity) may one day obviate ourselves. Arguably, most of us would prefer that does not happen.
That's it. That's really what I see the discussion being about. I think it's a worthwhile discussion to have.
The main problem is that most of the time arguments about humanity obviating itself are couched in a framework where the only thing that has advanced, in this instance artificial intelligence, is the science needed to make it a reality. This has never been the case, which is why you see so much eye rolling when arguments like this (or some of the others here about robots replacing human workforces) are made.
How can we say that by the time we have such wondrous machines that we as a species will not have found ways to move ourselves forward to a place on equal footing with whatever we create? Why do we assume that humanity won't move past our current societal constructs when we introduce new actors into the mix? These are the questions we should be asking when someone writes or speaks about the perceived dangers of some future event.
In light of this, while some of the dissent may seem opinionated, I would argue that the original premise of the article is somewhat opinionated itself. I think it goes without saying that most of us would prefer that humanity not obviate itself - but when we think about it, do we really believe that the technology to create hyper-intelligent machines will come before our society adapts to handle them? The answer may be yes, but let's not pretend such technology will be born into a world that looks like today.
> How can we say that by the time we have such wondrous machines that we as a species will not have found ways to move ourselves forward to a place on equal footing with whatever we create?
How can we find ways to move ourselves forward if we don't talk about and actively explore how to do so?
We are, just not so much in this thread specifically. Think about all the progress we are making in the bio-tech field - although this is clearly not the only answer to the problem. Don't get me wrong, conversations about moving ourselves forward are important, but I'm not sure starting such a conversation with what amounts to high-brow fear mongering is the correct way to do things.
1. Machine intelligence, traditionally called artificial intelligence, which surpasses human intelligence.
2. Your category (b) is generally the primary concern in these types of discussions.
3. The anecdote of the progress of humanity. Compare the impact of human life/intelligence vs. evolutionary relatives like chimpanzees. I do not know that chimps have hunted species out of existence, for instance, but people have. We have also incidentally wiped out populations in efforts to make our lives better (via things like leveling forests, etc.).
To be fair, I don't think that the reason chimps haven't hunted something to extinction stems from a built-in morality or sense of balance with nature.
I'm not trying to put words into your mouth. I was just thinking of some of the new research showing that primates of all kinds actually commit organized violence that mirrors human violence in many, many ways, including war and capital punishment. (It's not a one-for-one thing, but similar.)
Yeah, I'm not talking about morality at all here. Our technological prowess, resulting from the application of our intelligence, has enabled us to wipe out entire species.
Thanks. Seems to me these anecdotes have to do with humans.
So is the implicit assumption that machines will do what humans are doing ('bad' things), but several orders of magnitude faster and without the ability to comprehend the longer-term consequences of their actions any more than humans do at the present time?
Sort of. 'Bad' here is of course an extremely subjective term. And it may not be the case that the machines do not understand the longer-term consequences of their actions; they could understand full well, but they could know that the preservation of humanity is not important (for whatever reason). So, we might not matter to them. We matter to us though, so that would be a problem for us as things stand now.
It has been my experience that the more a particular person attempts to understand and control machine intelligence, the more she grows to fear it and its potential.
The only people who claim that machine intelligence is dangerous are the ones on the outside looking in. Everyone who actually works in AI and understands it (hint: it's just search and mathematical optimization) thinks the fear surrounding it is absurd.
> Everyone who actually works in AI and understands it thinks the fear surrounding it is absurd.
This isn't true. Please don't state falsehoods. Stuart Russell, Michael Jordan, Shane Legg. Those are just the ones mentioned elsewhere in this thread.
How many of those AI researchers are actually working on AGI, though? As you mentioned, most of them are in fact just developing search and optimisation algorithms. Personally, I believe the fields of neuroscience/biology are more likely to produce the first AGI. People who claim machine intelligence is dangerous are not scared of k-means clustering or neural networks; they are scared of a hypothetical general intelligence algorithm which hasn't been discovered yet. One could argue that the fear is absurd because AGI is not likely to happen within our lifetime, but it's hard to argue that it will not happen eventually and be a potential threat.
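To make the "just search and optimisation" point concrete, here is a minimal sketch of k-means (one of the algorithms mentioned above) in Python. The data and parameters are made up purely for illustration; the point is that the whole algorithm is nothing more than alternately minimizing a squared-distance objective.

    import numpy as np

    def kmeans(points, k, iters=100, seed=0):
        """Plain k-means: alternately assign points to the nearest centroid
        and move each centroid to the mean of its assigned points -- i.e.,
        coordinate descent on the within-cluster squared distance."""
        rng = np.random.default_rng(seed)
        centroids = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(iters):
            # Assignment step: nearest centroid for each point.
            dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Update step: each centroid moves to the mean of its cluster.
            new_centroids = np.array([
                points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(k)
            ])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return centroids, labels

    # Toy data: two obvious blobs.
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    centers, assignments = kmeans(data, k=2)
    print(centers)

Nothing in there is what anyone is worried about; the worry is about whatever a future general algorithm might look like, not about this.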
How do we ensure a proper association between the samples taken and the person in question, particularly in a case like the one described in the article, without some sort of formal framework and authorization beforehand?
tl;dr - It would be a huge coincidence if you found DNA that matched the actual perpetrator despite the fact that you collected it in the course of following the wrong guy, or canvassing for DNA at random.
Usually this happens by way of other non-DNA evidence. Imagine: you pick up a hair (or swab a coffee cup, whatever) in public that you believe (but, to your point, do not know) to be from your suspect. You take it back to the lab and, sure enough, it matches your sample from the crime scene.
Now, if you know nothing about the people in the vicinity of where you picked up your test hair, and were just randomly canvassing for DNA, this might not prove much. But presumably you were following a particular person, whose hair you tried to collect because you also have other evidence against him (though probably none so strong as DNA identification). This puts you in a very different epistemic situation with respect to that hair. Now you know that you picked up a hair in an environment full of people, only one of whom was a suspect in your investigation. It so happens that this hair matches a hair from the crime scene. It is, of course, possible that you picked up an unknown person's hair, and that person just happened to be your culprit -- but it is far, far more likely that the hair you got is from your suspect, since he's the only one in the area believed to have any relationship at all to the crime.
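To put rough numbers on that intuition, here is a toy Bayes calculation in Python. Every probability in it is a made-up assumption (the strength of the other evidence, the chance a random bystander is the perpetrator, the coincidental-match rate); the point is only that following a specific suspect puts you in a very different epistemic position than random canvassing would.

    # Question: given that the collected hair matches the crime-scene sample,
    # how likely is it that the hair actually came from the suspect you were
    # following, rather than from some stranger in the same public place?

    p_hair_is_suspects = 0.5    # assumed: you tried to grab his hair, but it was a crowded place
    p_suspect_is_perp  = 0.2    # assumed: strength of the other, non-DNA evidence
    p_stranger_is_perp = 1e-4   # assumed: chance a random bystander is the perpetrator
    p_random_match     = 1e-6   # assumed: chance of a coincidental DNA match

    # Likelihood of seeing a match under each hypothesis about whose hair it is.
    match_if_suspect  = p_suspect_is_perp  + (1 - p_suspect_is_perp)  * p_random_match
    match_if_stranger = p_stranger_is_perp + (1 - p_stranger_is_perp) * p_random_match

    posterior = (p_hair_is_suspects * match_if_suspect) / (
        p_hair_is_suspects * match_if_suspect
        + (1 - p_hair_is_suspects) * match_if_stranger
    )
    print(f"P(hair is the suspect's | DNA match) = {posterior:.4f}")  # roughly 0.999

With numbers anywhere in this neighbourhood, the "random stranger who happens to be the culprit" branch contributes almost nothing, which is the sense in which the other evidence does the associating.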
(Of course, there could be situations that confound this analysis, for example if you collect hair at the end of the day from an interview room that's been used to question a bunch of suspects in the same crime. But law enforcement will not usually be this sloppy. They are well aware of the need to positively associate a hair with a particular person.)
One possible route: conduct the interview in a room that's been thoroughly cleaned beforehand (cleaners would need to be gowned). Take a sample from the chair pre-interview, then post-interview. Keep the room tightly controlled between cleaning and the interview.
I am not entirely sure what you are saying here, but the way I have always understood it is: data is knowledge; the ability to apply knowledge is intelligence.
Netflix has a considerable amount of data (knowledge) and its algorithms exemplify some efforts to apply that knowledge (intelligence). As it stands presently, though, humans still tend to be more intelligent than any algorithms we have created. (Generally speaking, of course.)
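As a toy illustration of that split (this is not Netflix's actual algorithm, just a sketch with a made-up ratings matrix): the matrix below is the "data/knowledge", and turning it into a recommendation is the "applying" step.

    import numpy as np

    # Rows are users, columns are (hypothetical) titles; 0 means unrated.
    ratings = np.array([
        [5, 4, 0, 0],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    def recommend(user_idx):
        """Score each unrated title by its similarity to titles the user liked."""
        user = ratings[user_idx]
        scores = {}
        for j in range(ratings.shape[1]):
            if user[j] == 0:  # only consider titles the user hasn't rated
                sims = [cosine(ratings[:, j], ratings[:, k]) * user[k]
                        for k in range(ratings.shape[1]) if user[k] > 0]
                scores[j] = sum(sims) / len(sims)
        return max(scores, key=scores.get)

    print("Recommend title", recommend(0), "to user 0")

The matrix is the knowledge; the (very crude) scoring rule is the application of it.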
I think we are trying to say the same things here, right?
I'm trying to reframe what Netflix does as something that is already done - we just tend to use different names for it in those other domains. What's new is that we're doing this old thing - processing massive amounts of data, extracting the relevant facts, synthesizing those facts into a coherent story, and presenting that story to decision makers - in new contexts.
In the Netflix context, what is mostly called "data" is called "intelligence" in, say, government decision making.