This whole news story has taught me a lesson about the downside of tracking users. I now understand why some people opt out of, say, behaviorally targeted ads: it has nothing to do with annoying ads, and everything to do with data eventually getting into the wrong hands. The only way to prevent the misuse of data is to not collect it in the first place.
Apparently up until this point people figured that everyone but the government would have access to their data, but they were OK with that... it's free to sign up, after all!
Yeah, that's all kinds of weird. For starters, what rational person would expect that in a world where everyone but the government has access to their data, the government would not have access to their data?
The statement is of course a strange construct when examined literally, but have you ever tried keeping a secret from just one person?
So this is a whole piece built out of guilt-by-association. Decru, for instance, the In-Q-Tel data storage investment, was a box that transparently encrypted iSCSI storage networks; in other words, it was the kind of technology that made surveillance harder, not easier.
Why would anyone believe Pando Daily's take on such a complex topic?
The original source of this thing seems to be NSFWCorp, which if anything is nuttier than Pando (although never as lame). Indeed, the explicit preoccupation with "libertarians" (as if they could be anything but a tiny slice of SV) reads more like something Mark Ames wrote.
I agree, Silicon Valley does not support the development of behavior-tracking technologies.
Strong encryption, user privacy, "really delete my data on your server", and robots.txt have all been implemented using the latest standards, across all websites.
Terms and conditions clearly state that user data and backup data will not be mishandled by admins, although users might be purged from the database from time to time.
Nerds and geeks are very hippie-liberal, so you won't find them supporting tyrannical governments to sell ads.
I can't even imagine a nerd not caring, being unemotional or careless. They wouldn't hurt a puppy! Let alone kill 200,000 sand people!
Privacy is about the choice of who is privy to what information. Facebook can be used for some harmless feature xyz, or it can be used for spying. If I build a chainsaw and it's used in Texas for a massacre, it is acceptable for me to be outraged without having to defend my position on the usefulness of chainsaws and the benefits they provide to society.
I suppose by the article's line of thinking, we shouldn't have any big data analysis and should just stay in the dumb world of anecdote. After all, a tool can't be abused if it doesn't exist, right?
EDIT: I've been drinking, and the original wasn't as coherent as I thought upon a third read-through.
> If we’ve learned anything in the past few days it’s that the NSA does precious little of its own spying, relying instead on companies like Palantir and Booz Allen Hamilton.
Do we actually know, or is the writer just assuming that because they use some outside contractors, outside contractors do most of the work?
It is common for government agencies to farm out much of the work. At NASA, for example, the government employees do almost no engineering work. It's all private contractors. I never worked at NSA, but I would imagine it is similar.
The ideal is for things that are clearly a "common business function" to be outsourced to contractors where possible, e.g. janitorial services: there's nothing inherently governmental about wiping down the commodes.
Then you have "inherently governmental functions" such as setting and determining policy, obligating expenditure of funds from the Treasury, supervising other civil servants, etc. That has to be done by a Federal employee.
Then you have stuff in-between, which typically was also done by a government employee, except perhaps for highly specialized skills where it made sense simply to pay someone smart to do it for you and then make decisions based on that. E.g. how the Pentagon paid think tanks to do geopolitical analyses, and then evaluated and used those inputs in the decision-making process.
Personally I was surprised to hear that contractors were as embedded as they were in the NSA. Sysadmin is a common enough function, but it has to be considered inherently governmental in the context of the NSA, for crying out loud. But then we've been on a long slide toward more and more contractor entrenchment within the halls of government for a while now, unfortunately.
I'm hoping that if nothing else happens, at least these leaks will arrest or even reverse that trend.
Things are different for agencies tied to defense or intelligence gathering. I would imagine the NSA tries to do as much in-house as they can. NASA design secrets leaking aren't nearly as potentially damaging as internal NSA plans or techniques leaking.
Of course, Snowden was a contractor so that shows they do allow contractors with TS clearances to work on their projects, but 1) it's usually only specific firms that they trust and 2) the contractors may help build the systems but probably will be told little or nothing about the exact end usage and results.
Do the Debian legal people annoy you? Then why not release your software unlicensed; they won't be 'able' to use it, and other users who care less will. Ah, but that isn't much fun... why not rile them up as well? Enter the "do no evil" clause! By being 'open source' in a way that means they cannot use your software, you can apparently cause a decent amount of grief.
They also seem to be good for getting amusing quips like "IBM requested an exception so that they can do evil with my software".
"Do no evil" clauses are great for trolling: an anti-corporate and anti-'takes-themselves-seriously' license.
It's hardly even a use of the legal system; the clause would not stand up for one second. What is really being used is the victims' attitudes toward the legal system.
It sort of seems like this license may bother you. It really is good at this.
So did we, because it's not really open source, and most distributions won't accept stuff that uses it (Fedora, Debian, etc.).
Thus, if you want to use it in anything open source that you actually want packaged, you need an exception.
Sure, but you could come up with a license that's more specific, like "The Software shall not be used by any government agency or contractors for surveillance purposes" or whatever the equivalent legalese is.
It's hardly free/open source software if you need to get a license from the author for each specific usage. Even MS treats their customers better than that.
You don't need a license for each specific usage. Most software doesn't involve surveillance, and most government surveillance software would probably not be an "exception" in the author's eyes. Rhino surveillance is an obscure edge case.
It's not just Silicon Valley. Basically all work on big data, NLP, computer vision, etc. is making spy tools.
If you think that's not true, consider that the major funders of academic research in these areas are orgs like DARPA, IARPA, and the NSA. Consider, too, that even if this research isn't funded or done by groups like these, the fruits of that labor will be exploited by said organizations.
> Basically all work on big data, NLP, computer vision, etc. is making spy tools.
I think it would be more accurate to say that such work can be used for spy tools. It can be used for plenty of other purposes besides, just as satellite imaging can be used for spying but also for weather prediction, environmental assessment, and so on.
> I think it would be more accurate to say that such work can be used for spy tools.
I think that's a distinction which lacks a practical difference. Of course they have applications that don't involve enabling surveillance by three-letter agencies. But you're kidding yourself if you think that every major advance in any remotely useful discipline isn't reviewed and employed by these agencies.
These groups don't just hire sys admins and suits and engineers to monitor signals, etc. They hire PhDs and researchers, both to help assess technologies and to develop them internally.
I guess the point is, the advance of technology must inevitably advance the capabilities of the surveillance state.
Oh, but I do. Further, I think it's natural and inevitable. It's ridiculous to expect governments to artificially hobble themselves in terms of potential just because they're governments. It's what they do with that potential that concerns me. As an analogy, the fact that the government controls vast arsenals of weaponry does not necessarily mean it's planning to kill everyone else.
There's a popular meme on HN that government inevitably tends toward tyranny. I think this is total bullshit; tyrannical government is far from unusual, but it isn't the norm either, and the assumption that it is, is a false premise. Put another way, there's no use priding yourself on your skills at Bayesian analysis if you have a habit of selecting inaccurate priors.
Edit: while I was making dinner it popped into my head that while nowadays there's a high probability that any given terrorist is Islamic, the probability that any given Muslim is a terrorist is, of course, quite low. Likewise, while tyrannies are usually perpetrated by governments (that is, bureaucratic organizations as opposed to warlords or roving tribes), the probability of a given government turning tyrannical is far lower.
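To put rough numbers on that base-rate point, here's a throwaway Python sketch; every probability in it is made up purely for illustration:

```python
# Bayes' rule with made-up numbers, just to show how P(A|B) and P(B|A)
# can come apart when the prior P(A) is small.
p_tyranny = 0.02            # hypothetical prior: a given polity turns tyrannical
p_gov_given_tyranny = 0.90  # hypothetical: most tyrannies are run by governments
p_gov = 0.95                # hypothetical: most polities are governments

# P(tyranny | government) = P(government | tyranny) * P(tyranny) / P(government)
p_tyranny_given_gov = p_gov_given_tyranny * p_tyranny / p_gov
print(f"P(tyranny | government) ~= {p_tyranny_given_gov:.3f}")  # ~0.019, still small
```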
In fairness, only if we, the people, allow our government to conduct surveillance on us, the citizens. We need to roll back the provisions of the Patriot Act and FISA that are currently being interpreted to allow the collection of data on citizens who are not suspected of any crime, without any warrants. Then advances in surveillance technology will not necessarily lead to more invasive surveillance of citizens.
But the advance of technology also inevitably advances the capabilities of murderers, rapists, and parking meter scofflaws.
The question is, does it advance their capabilities more than it advances the capabilities of those who oppose them? Is there any way to influence that balance?
Or should we just say "any technology can be used for good or evil, so don't even bother considering the question, let someone else worry about it"?
My advice is to wait at least two weeks, then start reading the news on this saga. That is what I am doing. All I hear is snippets on TV; now I hear that the Guardian is walking back many of their statements, and Marc Andreessen just said that he was withholding judgment as well until everything becomes clearer. Everybody else on HN is having emotional, knee-jerk reactions. When we get a solid longform piece of journalism in a few weeks, we will get a clearer picture. Until then I just ignore all the links here on the topic and focus on tech.
The Guardian is not "walking back their claims": the NSA slide claimed "direct" access. Now, does it really matter whether the NSA has root on the server or it's just some sort of mirroring/splitting scheme?
You seem all too happy to adopt the loaded language of the naysayers for someone claiming rational forbearance.
That obviously matters a tremendous amount, both in terms of how much we should continue to trust the companies involved and also what countermeasures we can take.
I've been trying to lately. I'm not as outraged as many so my opinion tends to get smacked around a bit. So now I try to ignore a lot of it... or at least read things with an open mind, knowing that perhaps some people are wrong at times.
Ya... a lot of other people have too. Unfortunately, I'm looking for confirmation rather than guessing. Palantir has said they're not involved... but I'm not sure that counts as confirmation either. lol
This is a general trend you can see at work in lots of venture-connected industries, and it's not just about surveillance.
Lots of people take gigs working on 'cool', 'interesting', terrifyingly profitable projects at well-funded, fast-growing startups that the press and investors both love. It looks great on their resume, they get to tell their friends about the cool work they're doing, and they learn lots of new stuff. Great, right?
Of course, on the other hand, lots of these startups are directly preying on the uninformed, the mentally ill, and groups that simply can't make good decisions, like the young. Over time, controls and safeguards eventually get added to reduce the damage done, stop those 'top customers' from paying TOO MUCH, staunch the blood loss from horrible churn, and counteract the consequences of short-term-focused designs. Ideally, by the time these fixes start happening and the consequences start destroying revenue, these startups have IPO'd or exited, and the founders and investors all walk away happy and wealthy to move on to the next gig.
Here's the thing: that whole last paragraph was describing games startups, not the obvious evil of surveillance companies. It's almost like we gleefully allow ourselves to be blinded to the consequences, both short-term and long-term, of our actions, and pretend that whatever we're working on is innocent and harmless, just because it isn't directly selling rifles to terrorists. It's easier not to think through the consequences: not to think about where our money is coming from, who's buying our products, or what using our products does to people's lives and relationships; not to think about what those viral acquisition pathways do in the long term, what those retention and churn metrics really mean, or WHY those metrics are going up or down. Most people don't think about the unanticipated consequences of storing users' address books, or their call histories, or leaving a camera/microphone on all the time, or automatically logging conversations. It's easy not to do those things.
Because at the end of the day, the person with the clean conscience doesn't have 50 million dollars, and if you somehow come face-to-face with the consequences of your actions, well... lawyers don't cost that much, do they?
It's easy to look at the whole venture-funded startup industry and find the big picture kind of exhausting when viewed with this perspective. Most of the people starting these companies, and working at these companies - they're not objectively bad people! Their actions, in a short-term context, are wholly defensible - there's no reason to question them. But somehow we keep ending up in this situation, and we're looking back, and saying, jeez, that sure hurt a lot of people, didn't it? Or wow, how did they do something that stupid? Why did anyone invest in them? Why was anyone stupid enough to go work at that company that did those awful things?
Aaron Swartz looked back at reddit. He realized the company that made him rich was now nothing but a mechanism to create addictions to meme-bytes, waste time, destroy relationships, normalize sexism and racism, and literally rekindle the Neo-Nazi movement (reddit now has the biggest concentration of Neo-Nazis on the internet--the fanatical kind that even Stormfront won't tolerate on their website).
If this is even the fate for something as benign as reddit, how much worse will it turn out for people working in Big Data or directly on technologies like facial recognition?
How many Silicon Valley technologists will look back at their work in 50 years and have the same kind of feeling that Nazi scientists had, that Manhattan Project physicists had, that the inventors of mustard gas had?
Whether technology will be used as a weapon in the hands of a selfish elite, or as a tool for liberation for the impoverished and underprivileged, is being decided now by how these technologists use their time.
Yes, when you create a venue for free speech, it will -- gasp -- be used to express various opinions and ideas you will not agree with. So?
The Nazis had no trouble doing their dirty work the last time without Facebook, Reddit, or Twitter. These days, they have more to lose from exposure than they would ever have to gain. The antidote to bad speech is more speech, and that's precisely what Reddit is good for.
History does not support your naive view that allowing criminals to freely congregate, organize, and recruit stops their ideology.
The Neo-Nazis on reddit have publicized the tactics they're using to normalize racism across the internet. According to watchdog groups the Neo-Nazi movement has grown tremendously since it was able to start recruiting and spreading that hate ideology online.
That is utterly ridiculous. For one, there are boards out there that are about as popular as reddit that are also far less restrictive in what they allow (such as any of the many popular American imageboards out there).
Reddit does not "normalize sexism and racism" or "LITERALLY rekindle the Neo-Nazi movement" any more than Youtube does, if you go by the utterly inane, highly rated comments on thousands of Youtube videos. Does that mean Youtube, and Google, are trying to usher in a new era of racism and intolerance?
> literally rekindle the Neo-Nazi movement (reddit now has the biggest concentration of Neo-Nazis on the internet--the fanatical kind that even Stormfront won't tolerate on their website)
I guess it's a legal problem, not a technological one. Neo-Nazis from European countries flee with their online activity to the 'land of the free', because around here their websites are banned, many of them would be hunted, and some possibly locked up (depending on the case and the specific country's law). It's one of those cases where free speech backfires in your face. Not that we don't have those problems around here, but at least spreading extremist views is treated as a criminalized pathology, not a protected right.
> Whether technology will be used as a weapon in the hands of a selfish elite, or as a tool for liberation for the impoverished and underprivileged, is being decided now by how these technologists use their time.
Most Americans would tell you it's not a legal "problem" at all, but a clear homage to time-honored First Amendment rights.
On the other hand, most of my friends in the E.U. are quite willing to give up freedom of speech for neo-Nazis and extremists. They find it a worthy trade-off to avoid the possibility of widely spreading that form of hate and filth again.
I find it all kind of amusing, how a lot of the rights we find inviolable, and those we find we can bend a bit, depend as much on our national origins as on anything else.
On a funny side note, a few years ago Hitler memes were very popular here in Poland, and a few clubs even advertised Saturday parties with them. It was hilarious, but the shit hit the fan when offended people reported it to the mainstream media and the topic made the evening news, with lawyers arguing over whether it was promoting the ideology, etc. Borderline-humor problems, lol. Recently a newly opened restaurant owner had his business shut down and was evicted, all because of the name: Fritzl's Basement.
The article rubs me the wrong way. I agree on principle: I wouldn't want to willingly build technology that I know will be used solely or principally to circumvent the constitution and reduce individual liberty in a non-transparent manner. Problem is, I don't see any evidence that this is what's happening en masse in Silicon Valley.
There is a big difference between unconstitutional spying and surveillance per recent news stories (which is abhorrent) and the principle of intelligence work (which is a legitimate role of even the most minimal libertarian government).
Accepting In-Q-Tel/CIA funding does not imply knowledge and approval of everything that agency does. In-Q-Tel publicly and openly funding a mobile cryptography startup has three potential implications:
1) They want to make money: they have people on staff who are capable of doing the technical due diligence and they think they have a fairly good chance of a return on their investment.
2) They want good cryptography software to be available to their own spies, foreign dissidents (including foreign whistleblowers), and US citizens/corporations that are being targeted by criminals or foreign governments.
3) They hold a master key to this software and want to be able to backdoor all of its users.
Problem is that if claim #3 is true and is uncovered, they risk a great deal: loss of their investment, a huge news story, and mistrust of any other software project the NSA has invested in (including SELinux, which ships with many popular Linux distributions)[1]. Note that this is security software: extensive third-party testing is required by law for certain applications (and is implicitly invited in almost any case); even if the source code is closed, the binary is readily available and can be disassembled, revealing which open source libraries and algorithms are used for the actual encryption. "Security by obscurity" is not a claim anyone is willing to trust, so with pretty much any kind of security software, intense scrutiny is expected.
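For a rough sense of what that third-party scrutiny can look like, here's a toy Python sketch of a string-level check on a binary; the fingerprint patterns are invented for illustration and this is nothing like a real audit tool:

```python
# Pull recognizable version strings out of a closed-source binary to guess
# which crypto libraries it links. Equivalent in spirit to running the Unix
# `strings` tool and grepping the output. Patterns below are illustrative
# assumptions, not definitive signatures.
import re
import sys

FINGERPRINTS = {
    "OpenSSL": rb"OpenSSL \d+\.\d+\.\d+",
    "libsodium": rb"libsodium \d+\.\d+",
    "Crypto++": rb"Crypto\+\+",
}

def scan(path: str) -> None:
    data = open(path, "rb").read()
    for name, pattern in FINGERPRINTS.items():
        hit = re.search(pattern, data)
        if hit:
            print(f"{name}: {hit.group().decode(errors='replace')!r}")

if __name__ == "__main__":
    scan(sys.argv[1])  # e.g. python scan.py /path/to/binary
```

A real audit would of course go much further (disassembly, dynamic analysis), but even this level of inspection makes a hidden backdoor a risky bet.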
So of these three potential reasons, #1 and #2 seem a lot more plausible than #3.
To sum up, this "guilt by association" article reads much like the hyper-partisan hit pieces I've read recently in defense of the NSA surveillance programs ("Greenwald once wrote something for the Cato Institute, Snowden donated money to Ron Paul, both of which imply that this is clearly part of a Koch Brothers/Tea Party/Michele Bachmann/whatever conspiracy.").
[1] For a demonstration, see the scandal around Check Point firewall software, which didn't even involve three-letter agencies, merely corporate espionage...
Palantir's PR firm must be working overtime figuring out how to deal with this. I don't think their prism is the same as NSA's PRISM, that's too much of a clever coinkidink, but they do make a big deal about selling their software to, working with, and being sponsored by the Intelligence Community and all the good and bad that comes with it.
No amount of cool-looking track suit jackets or young-upstart moxie can take away from the fundamental nature of who their customers are. This is the moment when they're going to have to grow up as a company and come to terms with what exactly it is that they're doing, and it's not just selling nifty software to good-guy spies on our side.
If they got their panties in a bunch over a little dirty work for the US Chamber of Commerce, then to be consistent, they'd pretty much have to give up this work as well.
"We" is highly presumptive, but I'm sure a lot of people would. I think legal, constitutional surveillance does have its uses and is a vital component of law enforcement and security in many occasions. Improving that technology means that when it is used, it'll be more effective at what it's actually trying to do: find violent criminals and prevent violent crimes.
Of course, when you look at how China uses surveillance or how the NSA purportedly uses surveillance, things get much more murky.
My point, and I don't think I'm alone in this, is that powerful tools/weapons will always be used for "evil" as well as good.
It is naive to say "well, let's build the perfect surveillance tool and then trust that some government will never abuse it (against its own citizens, political opponents, other countries, or whoever)." Such a pure government has never existed, and will never exist, not while humans are running the show.
More succinctly: "absolute power corrupts absolutely."
There is more to this than the headline. The way the internet unfortunately evolved is a devil's bargain: gathering intel on people in exchange for targeted corporate ads. It's only natural that giant spy tools like Facebook or Google get used beyond pinpoint product placement.
The data that Facebook and Google have... it has nothing to do with them trying to get me to install gay dating apps in their sidebars. It's my pictures, my texts, my messages, my friendships.
The important part of "the way the internet unfortunately evolved" has more to do with the fact that we didn't have the forethought to use encryption by default, or to design the network in a truly "Free" way, as other networks have been designed since then (though of course none of them at any fraction of the Internet's size).
Nobody understands the potential of this technology except the community that builds/markets/etc. it, plus some highly secretive government types. I.e., the tech community needs to take a leadership role in reining in the risks.