Out of the frying pan and into the fire (ar.al)
59 points by dredmorbius on July 31, 2018 | 8 comments



Quoted in the article:

> Google’s algorithm was developed with funding from the National Science Foundation, and the internet came from DARPA funding.

While Page was indeed a PhD student when he designed the algorithm, Google licensed the patent from Stanford, and the university got paid about $500 million for it in Google's IPO. And he mostly built it alone.

You might as well say people owe their algorithms to whoever was paying them at the time. Maybe; work for hire counts. But otherwise, and without personal knowledge of how PageRank was developed, designing an algorithm is (IMO) a very personal, introverted endeavor, especially one with as many moving parts as PageRank.
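
For what it's worth, the core of PageRank itself fits in a few lines; the moving parts multiply in everything built around it (crawling, indexing, spam resistance). A minimal power-iteration sketch, for illustration only and not Google's actual implementation:

    import numpy as np

    def pagerank(adjacency, damping=0.85, tol=1e-9, max_iter=100):
        # adjacency[i][j] = 1 if page i links to page j, else 0.
        n = adjacency.shape[0]
        out_degree = adjacency.sum(axis=1)
        # Row-normalise link counts; dangling pages spread rank uniformly.
        transition = np.where(
            out_degree[:, None] > 0,
            adjacency / np.maximum(out_degree[:, None], 1),
            1.0 / n,
        ).T  # transpose so columns sum to 1 (column-stochastic)
        rank = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            new_rank = (1 - damping) / n + damping * transition @ rank
            if np.abs(new_rank - rank).sum() < tol:
                return new_rank
            rank = new_rank
        return rank

    # Three pages: A -> B, B -> C, C -> A and C -> B.
    links = np.array([[0, 1, 0],
                      [0, 0, 1],
                      [1, 1, 0]], dtype=float)
    print(pagerank(links))  # the most-linked-to pages get the highest rank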


> When she says “let’s not forget that a large part of the technology and necessary data was created by all of us” it sounds like we voluntarily got together to create a dataset for the common good by revealing the most intimate details of our lives through having our behaviour tracked and aggregated. In truth, we did no such thing. We were farmed.

No, we willingly and voluntarily gave our personal information away to companies in exchange for their services. Sucks to be us, but there do exist people who refuse to give away their information online, and it's not exactly impossible to be like them. To turn around after having received the benefits from Google, Facebook, etc. and cry foul about them receiving their part of the deal is childish.

> Specifically, surveillance capitalists like Google and Facebook design proprietary and centralised technologies to addict people and lock them in. In such systems, your data originates in a place you do not own. On “other people’s computers,” as the Free Software Foundation calls it. Or on “the cloud” as we colloquially reference it.

These systems, even if you think they are addictive (which is debatable), are still voluntarily chosen by users. Users choose them because they are BETTER than managing free software on their own machines (if they weren't better, people wouldn't use them, after all). People just don't want to manage their own data, as shown by the mass transition of data into "the cloud" and the abandonment of "ownership" of data for a streaming rental model (e.g. in music, TV/movies, productivity software, etc.).


>No, we willingly and voluntarily gave our personal information away to companies in exchange for their services.

There is something to be said about informed consent, deceptive behaviour and the like here. But I don't have time for that. I wanted to touch on

>People just don't want to manage their own data, as shown by the mass transition of data into "the cloud" and the abandonment of "ownership" of data for a streaming rental model

I suspect people actually do want to retain control and ownership of their data. But it is getting increasingly complicated in ways it wasn't when everything was physical. Perhaps the choice to transition into the cloud and streaming services is less of a choice by consumers and more of a choice by the companies about what products they want to offer.


> Perhaps the choice to transition into the cloud and streaming services is less of a choice by consumers and more of a choice by the companies about what products they want to offer.

Definitely. I find that a lot of things in computing can be better understood if you realize that consumers have little to no meaningful choice. They don't know the possibilities, they don't know their needs, so they choose from what's available on the market. If companies decide that phones need to keep getting thinner, or that software needs to move to the cloud, and then back that up with serious marketing spend, then people will follow, because your typical user understands so little about technology that their thinking works in categories of "what looks most hi-tech" and "what everyone else uses".

The trend of moving everything into SaaS is insidious; the negative consequences aren't visible immediately. On day one you're happy you no longer have to install anything to watch movies, or worry about running out of hard drive space. But then, maybe a month or a year later, you find yourself on a trip, unable to watch a movie because of a spotty Internet connection. Or you lose the Internet halfway through your movie night. Or the movie that was there yesterday is gone today, because $reasons. Or you want to show it to your SO, who doesn't speak English, and you can't, because there's no way to add subtitles and the company doesn't offer the language you need. SaaS means losing the ability to use the capabilities of your own computer, but you only find that out when you actually need them.


> No, we willingly and voluntarily gave our personal information away to companies in exchange for their services. Sucks to be us, but there do exist people who refuse to give away their information online, and it's not exactly impossible to be like them. To turn around after having received the benefits from Google, Facebook, etc. and cry foul about them receiving their part of the deal is childish.

It's become quite clear that most users did not make a reasoned choice about what data they were giving away or to whom. It is the latter that is most concerning, as scandals like Cambridge Analytica or the Equifax data breach show. It is hard to see how you can argue that this kind of misuse is not a serious problem.

A better criticism is that the cited article does not go far enough. We've effectively ceded the public square to Facebook, WhatsApp, and other large social media apps. You can reasonably argue this is undermining democratic values (see US 2016 election) and even inciting extreme violence, as happened to the Rohingya in Myanmar. [0] It's likewise becoming clear that these are side-effects of the business model followed by many social media companies.

[0] https://qz.com/1170111/is-facebook-playing-a-part-in-myanmar...


These systems are designed to be addictive, based on our current best understanding of what drives addictive behavior in humans. Is it debatable that they are actually addictive? Sure, lots of things are debatable, even if debating this one would be sort of silly.

I don't think Spotify tries to addict people in the same way Facebook and its subsidiary companies do.


Every time I play around with the idea of decentralized/p2p networks in my head or discuss it with friends, one of the questions that always comes up is:

* If a company or product can't lock the user, their data, and their friends into its platform, how is it supposed to make money, learn, train its models, etc.?

Could we somehow create a common framework responsible for normalising and anonymising users' data, and for making it available either for public consumption or in exchange for a company hosting their full (encrypted) data or providing add-on services that require CPU cycles?

I understand that it would be hard to agree upon the normalisation and anonymisation rules, and maybe limiting in some cases, but I don't think it would be out of the realm of possibility.
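
To make the idea concrete, here is a minimal sketch of what the client-side interface of such a framework might look like. All names are invented for illustration, and real anonymisation would need far stronger guarantees (k-anonymity, differential privacy, etc.) than the salted hashing shown here:

    import hashlib
    import json
    from dataclasses import dataclass

    @dataclass
    class NormalisedRecord:
        schema: str   # agreed-upon shared schema name, e.g. "listening-event/v1"
        fields: dict  # only the whitelisted, normalised fields

    def normalise(raw_event, schema, allowed_fields):
        # Map a provider-specific event onto the shared schema, dropping the rest.
        kept = {k: raw_event[k] for k in allowed_fields if k in raw_event}
        return NormalisedRecord(schema=schema, fields=kept)

    def anonymise(record, user_salt):
        # Replace the direct identifier with a salted hash before anything
        # leaves the user's device. (A real system would need much more.)
        out = dict(record.fields)
        if "user_id" in out:
            digest = hashlib.sha256(user_salt + str(out["user_id"]).encode())
            out["user_id"] = digest.hexdigest()
        return {"schema": record.schema, "fields": out}

    def publish(anonymised):
        # Stand-in for handing the record to a host in exchange for storage,
        # CPU cycles, or add-on services.
        return json.dumps(anonymised)

    # Example: a music-listening event shared in exchange for hosting.
    event = {"user_id": 42, "track": "Spiegel im Spiegel",
             "played_at": "2018-07-31T12:00:00Z",
             "device_fingerprint": "abc123"}  # dropped by the whitelist
    record = normalise(event, "listening-event/v1",
                       {"user_id", "track", "played_at"})
    print(publish(anonymise(record, user_salt=b"per-user-secret")))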


How were touch screen displays developed using public funding? What's the story behind them?




