Open Source Is Winning, and Now It's Time for People to Win Too (linuxjournal.com)
223 points by rbanffy on April 8, 2019 | 143 comments



I don't know if this article is the right entrypoint for this discussion, but I think a conversation we need to have is how to teach people that they deserve to be treated well by their software.

If you'll permit me a (perhaps inappropriately severe) analogy to relationships, victims in abusive relationships often stay in the relationship because they think their partner's behavior is excusable. Sometimes, this is because they haven't seen good, healthy relationships modeled for them before. "This is just how relationships are," they might think.

While I think the term "abuse" is likely too strong for corporate software's relationship with their users, I do think we can leverage this same approach. By developing and maintaining "shining stars" of the open-source world, we can provide concrete examples to people that it's possible for their software to have a purposefully healthy relationship with them.

The article calls out examples like Emacs, Perl, and Python, but the proportion of people who use software and have used (and liked) these tools is vanishingly small. What's a more appropriate shining star, our new beacon to show everyone how Software Ought To Be?

The first example that comes to my mind is Firefox - but that was perhaps more applicable in a pre-Chrome world.


Ted Nelson was banging this drum as far back as 1974 with the concept of Cybercrud as discussed in Computer Lib/Dream Machines: New Freedoms Through Computer Screens. If you go back and actually read that book, you'll realize that basically everything he says about computers then is applicable to computers now when it comes to lies told to users. As an example, he points out that 'computer people' will routinely tell users that something can't be done when the reality is it's entirely possible but the software won't do it.

That we don't seem to have made much progress in 50 years is pretty disappointing.


> tell users that something can't be done when the reality is it's entirely possible but the software won't do it

As we digitize increasingly "serious" systems, this is becoming a logistical crisis in addition to a social issue. Variations that would have been trivially handled by even the least helpful of bureaucrats become outright impossible to submit to computer systems, often with no channel for appeal or remedy.

The story of Ellis Island screeners changing the spelling of names is infamous, but in that story at least the immigrants ended up with coherent paperwork and a usable name. Today, people with PII that doesn't match software formats are increasingly out of luck. If your name includes 'é', ASCII systems will flatly refuse to accept that. You can enter 'e', but when down the line the ASCII field gets compared to some UTF-8 record of your name, they might well be ruled different people. If you live in Apartment C, you'll suddenly and unhappily learn how many computer systems treat apartment number as a strictly numeric field. And god forbid your apartment has a name, or your street number has a decimal - there isn't even a graceful conversion for those values.
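
And even when both systems accept Unicode, they can disagree, because 'é' alone has two valid UTF-8 encodings (precomposed vs. a combining accent). A quick shell illustration (xxd just dumps the bytes):

  # Precomposed U+00E9 vs. 'e' + combining acute U+0301; both render "René":
  printf 'Ren\xc3\xa9'  | xxd -p   # 52656ec3a9
  printf 'Rene\xcc\x81' | xxd -p   # 52656e65cc81
  # A byte-for-byte comparison treats these as two different names unless
  # both sides normalize (e.g. to NFC) first.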

Robust design is the first step here: many systems could easily be adjusted to fail in 1% of the cases where they currently do. But human-focused design on a much larger scale can't be avoided either; when these standardizations _do_ inevitably fail, there needs to be some way of conceding that the system, and not the data, is wrong. As @patio11 put it years ago, "anything someone tells you is their name is — by definition — an appropriate identifier for them".


> tell users that something can't be done when the reality is it's entirely possible but the software won't do it

This bit really stands out to me, and it bothers me to no end when I encounter it in the wild.

It even afflicts FLOSS software! Without meaning to single out any one project in particular, a concrete example that's fresh on my mind is KDE's Dolphin. Lately "sudo dolphin" results in the flat-out lie that "Executing Dolphin as root is not possible.", rather than a much more informative statement such as "Doing this would make an entire class of otherwise impossible exploits possible, so the maintainer has made a personal judgment call to disallow this behavior."

I think all software everywhere ought to conform to KDE's Human Interface Guidelines of "simple by default, powerful when needed" but would also like to see something added along the lines of:

* Don't lie to users - be as transparent as reasonably possible.

* Don't make value judgments or be (needlessly) authoritarian - software should restrict itself to doing its job and doing it well.

Another (historical) example that comes to mind: https://github.com/tpope/heroku-fucking-console


> he points out that 'computer people' will routinely tell users that something can't be done when the reality is it's entirely possible but the software won't do it

This is not the full equation, though. Usually it's: "it can't be done for the price users want to pay; it's entirely possible, but I will not do it for you for free".


> While I think the term "abuse" is likely too strong for corporate software's relationship with their users.

For much of it, I don't think it is too strong a term. I think most Dark Patterns can be accurately described as abusive. There are games that are specifically designed to irritate people into spending more money than they intended.

There are layers: some of it is ignorance, some of it is willful ignorance, and the worst of it is deliberate and focused.


That's fair. I originally worded it as "is perhaps sometimes too strong," but I've been trying to cut back on my weasel words lately.


I think it is a very interesting choice of words given the moral imperative that launched the FSF. I always found it frustrating that while the FSF had a word for software they wanted -- free software -- they never had a word for software they didn't want. It tends to go by a variety of different names: closed, proprietary, non-free, etc. However, it's become increasingly clear to me that free (as in freedom) or not, people often leverage software to achieve pretty despicable goals. It's even interesting to me that there are occasional projects that aren't free (as in freedom) that I hold in really high esteem. For example the game Dwarf Fortress does not enable you to fix bugs, learn about the code, help your friends by making derived works, etc. However, I can understand the authors' desire to keep the project as close to their own vision as possible. It's more of an artistic venture than a utilitarian one. I'd be hard pressed to find a project that is as welcoming and nice as Dwarf Fortress.

Over time, software will have a bigger and bigger impact on our lives. Already there are organisations which make assumptions about me, based on data their software gathers, that are completely untrue. While it is currently laughable that Google is convinced that I want to go to Hawaii, the potential for this to go beyond even what the most famous dystopian novels warned us about is truly frightening.


> What's a more appropriate shining star, our new beacon to show everyone how Software Ought To Be?

Well, there's Wikipedia. Everyone uses it and it models great respect for its users and itself.


Other than obnoxiously begging for unnecessary donations each year...


Myself and many others might argue the "unnecessary" bit...

But I don't think there are a lot of better options for Wikipedia to ask for money.


The Wikimedia Foundation already has enough money to run Wikipedia for decades if they stopped blowing money on other things: https://www.legalmorning.com/wikimedia-foundation-rich-one-c...


Let's be fair, the Wikimedia Foundation does many other worthwhile things besides keeping the English Wikipedia running. By way of example, their newest full-scale project, Wikidata, was only started in 2012 and is already getting more edits than Wikipedia.


Thanks, good to know! This year's will be my last donation to Wikipedia.

Realized the same thing about a year ago about the SPLC (splc.org). Once I realized how much cash they have, I saw that my contributions would have more effect elsewhere. I like what the SPLC does: helping identify which groups are hate groups and which aren't, among other things. But they simply don't need any more money IMO.


It's my understanding that by donating a small, but recurring amount such as $1/month, you are still helping organizations like SPLC or PP by adding to the numbers of their contributing supporters. That seems like a good alternative to cutting off support, which would free up the money for a more cash-strapped cause.


How does The Internet Archive get funding? From an outsider’s perspective, it should cost much more money to run (web crawler, storage space, responding to legal requests).


Seemingly via donations only: https://archive.org/donate

"The Internet Archive is a US 501c3 non-profit organization."

So, similar to Wikipedia.


Pray tell how you would argue it, keeping in mind the (supposedly) pretty beefy salaries being paid out and the enormous nest egg the foundation is currently sitting on.


Would you prefer the government to take your money by force and give it to Wikimedia instead of them 'begging' for money?


The next one is an Open Alexa - voice UIs will treat people very differently

(I do think there is value in something like a MOOP - massive open online psychology - an agent that monitors us as deeply as is currently done, listening to our conversations, but with the intention of learning and guiding; not CBT, but reviewing the past day, almost automated therapy.)

This is most likely to start in corporate environments, but it has value (if done at medical levels of do-no-harm and the patient's best interest; of course, a Black Mirror version for the Chinese government would explain why your conversation with a neighbour was not in line with party policy).


Many, perhaps most people don't want to think about software. It should just work and you shouldn't have to mess with it. If something is broken, they will ask for help. Somewhat cumbersome procedures are okay as long as they always work and don't change.

I don't see how that fits all that well with a do-it-yourself, volunteer ethic. It fits better with a service-based approach where people just hire others to help if they need help. But then you're competing with Apple stores.

Or, alternatively, you can buy a Chromebook for a less tech-savvy relative. It's a rather practical solution.

The place where open source really works is with young people who want to get involved. There is a big enough community to sustain it, but not enough to really take over.


> Many, perhaps most people don't want to think about software. It should just work and you shouldn't have to mess with it. If something is broken, they will ask for help. Somewhat cumbersome procedures are okay as long as they always work and don't change.

There are free software distributions that stick quite close to that model, such as Debian and CentOS. The problem with the Chromebook is that it is only "on support" for 5 years after release, past that it's just a brick. Plus it expects you to depend on outside "cloud" services when there's no real reason to do so - a mainstream Linux distribution won't do that.


> a mainstream Linux distribution won't do that

With apt, there's no straightforward way to download packages on one machine and take them to an offline machine. Apt assumes that the machine doing the downloading is also the machine doing the installing, so it only downloads the packages the downloading machine needs, which may differ from what the offline machine needs; last I checked, you had to do quite a lot of piping and text parsing to work around it.
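
For reference, the piping in question looks roughly like this (a sketch assembled from documented apt-cache/apt-get options; it still trips over virtual packages and multi-arch):

  # Fetch PKG and its full recursive dependency closure into the current
  # directory, ignoring what happens to be installed locally:
  apt-get download $(apt-cache depends --recurse --no-recommends \
      --no-suggests --no-conflicts --no-breaks --no-replaces \
      --no-enhances PKG | grep '^\w' | sort -u)
  # Then on the offline machine: dpkg -i ./*.deb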


You can add a local file-based repository to your sources.list with "deb file:/mnt/debian-dvd/ wheezy main contrib"; redownloading the whole ISO is probably the easiest way to handle upgrades like this. There's also "dpkg -i <some package file>", which will install package files from anywhere.

If those options don't cover your needs can you tell me what they are?


The need is: for any given package, download that package and all dependencies, regardless of their installed status on the current system, so that they can be put on some external storage and taken to the other system.


It's not exactly the same as what you described, but apt-offline[1][2] comes close. You run it on the offline machine to obtain a "signature" which is then used on the online machine to download the correct set of packages.

[1] http://rickysarraf.github.io/apt-offline/ [2] https://www.ostechnix.com/fully-update-upgrade-offline-debia...


So now I have to visit the target, then schlep back to the source, then back to the target again. What if it is several hours away? You know, the kind of place likely to not have an internet connection to do it itself?


To solve the original problem of downloading all potential dependencies without any knowledge about the state of the target machine, I suppose you could just prepare a minimal Debian installation in a container with only essential packages and run "apt-get clean && apt-get -d install $package". In principle you could then copy the downloaded package files to the target and install them offline with dpkg. You'll probably end up downloading much more than you need, though. That's inevitable without two-way communication about the state of the target.
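
Something like this, say, with Docker (image tag and paths are illustrative; an untested sketch):

  # Use a pristine minimal Debian as a stand-in for the unknown target;
  # everything apt fetches for it lands in ./debs on the host.
  mkdir -p debs
  docker run --rm -v "$PWD/debs:/var/cache/apt/archives" debian:stable \
      sh -c 'apt-get update && apt-get -y -d install PKG'
  # Copy ./debs/*.deb to the target and install with: dpkg -i ./*.deb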


That isn't actually a better solution than the bash and awk hacking that I'm trying to avoid though.


I have no experience with this but there's apt-mirror, basically mirroring the entire repo: https://www.howtoforge.com/local_debian_ubuntu_mirror

We're pretty much at the point where you could mirror the entire archive including every arch onto a portable hard drive, the total size is 2453GB: https://www.debian.org/mirror/size


Great, so best case it takes me 1.13 days to download. Why is this so hard? The whole purpose of a package manager is to know the dependency chain of any given package, yet that information is completely useless because nobody ever considered this simple use case.


This simple use case was actually the norm back when apt was created and most people installed from CD. In fact, if downloading the whole repo is too much you can still order a DVD set: https://www.osdisc.com/products/linux/debian?affiliate=about... .


It is completely ridiculous that I need the entire repository to install a single package and its dependencies. End of story.


You can connect to the public repository or copy your own version of the repository. How else could it possibly work?


Issue a command to apt that says "download this package and all its dependencies regardless of installed state on this machine".


Five years isn't great, but it's not so bad for something inexpensive, and you need to be able to buy a replacement in case of hardware failure. It's more important that getting a new one is easy, it works the same, and no data is lost.

In my case, it's a nice improvement over maintaining a relative's increasingly unreliable Windows XP computer running obsolete software for long past its expiration date.


> The article calls out examples like Emacs, Perl, and Python, but the proportion of people who use software and have used (and liked) these tools is vanishingly small.

All tools have benefitted from standing on the shoulders of these and other giants. Software doesn't exist in a vacuum and it doesn't appear fully formed from nothing. Ideas and inspiration are drawn from a multitude of projects. Take a look at the third party software required to build Firefox sometime.


Audacity, GIMP, Firefox, Android, Chromium (the open source version of Chrome), every ad-blocker worth installing on either browser?

We have the examples. Then again, we have a lot of corporate bs in between the user and the opensource projects that actually drive our world.


> I think a conversation we need to have is how to teach people that they deserve to be treated well by their software.

I don't think we need to teach people so much as we need to teach programmers. Emacs, Perl and Python were made to do something in a better way for the user; any esoteric technology was a result of that. Not, as you often see today, the other way around, where the use case is shoehorned in at the end. If you want to do something nice, you don't start with your super-advanced back-end; you start with how someone is going to use it.

> What's a more appropriate shining star, our new beacon to show everyone how Software Ought To Be?

VS Code, but that is probably a bit obvious. Maybe something like Jupyter Notebook? WordPress used to be a big one. The thing is that so many things these days are proprietary web applications, mobile applications and/or networks.


With VS Code, invasive telemetry is on unless you go turn it off. If you don't want it at all, you must compile it yourself (there are chunks of the telemetry you can't turn off without a compile flag). And when you compile it yourself, it's unclear whether or not you can actually use the plugins that exist.
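
For reference, the configurable part lived in the user settings.json at the time (these were the documented switches; they don't touch the compiled-in portion):

  {
    "telemetry.enableTelemetry": false,
    "telemetry.enableCrashReporter": false
  }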

Without plugins, you are left with a text editor of only nominal use and mediocre novelty. That, or enabling telemetry. That's a pretty coercive pattern.


Perhaps I've misunderstood what you've said, but VS Code enables telemetry by default. https://github.com/Microsoft/vscode/search?q=telemetry&type=...

It's extremely abusive. The parent company, also.


Nobody likes someone else expressing opinions on their relationships though. And nothing gets someone to defend their abuser more strongly than telling them they're being abused.


Exactly, that's why you gotta show, not tell. We've been telling them stuff like "walled gardens are bad" instead of showing them a better way, and allowing them to realize that they can have better on their own.


Nowhere is this more evident than in the editor wars. I have been trying to convert vi users to emacs for years. They just go running right back into the arms of their abuser.


I think this article is tacitly admitting something I've come to believe: that open source came to concentrate on the wrong end of the problem.

What I care about is being able to get to my stuff - forever. Open source might help with that if it runs on my computer, and someone out there with the skill to maintain it decides to keep doing so (even most programmers have little hope of actually maintaining most of the software they use). The focus, I believe, should always have been on data formats, interchange, and local storage, not on the source code of programs.

I'd rather live in a world with no open source software and open data formats than the world we seem to be heading into: one running on open technology stacks with all the data sitting on servers somewhere in formats we can only guess at.


What free software are you using that is creating a ton of data in a locked format without allowing a local copy? Solving this end of the problem largely solves the data format issue, and that is not the case in reverse.


Well I, like everybody, use the internet which mostly runs on Linux. But that wasn’t my point. Open Source probably hasn’t made things any worse, but the ideological battle was waged at the wrong end, resulting in a hollow victory.

Tell some open source advocate from 2001 or so that most people do most of their computing on Linux in 2019, and I think they’ll picture something quite different to what we have in Android and the cloud.


Tell that to a Free Software advocate and you might get interesting results :)

To be less obtuse: this is exactly what people who say GNU/Linux and Free Software rather than Open Source have been concerned about since the beginning.


Yes, indeed. I think that Richard Stallman and similar have a more balanced message in this respect. At some point the popular focus seemed to shift to the notion that open source software needed to "win" in terms of market share, regardless of whether that was helping the cause of computing freedom.


That argument was not dominant (though it existed) within the world of people who cared. It was primarily pushed by people who felt that ethics were in the way of business goals.

For point of reference, Linus Torvalds only half cares about ethics. He likes all the value of collaboration from a developer perspective and doesn't care so much about whether software treats people well and gives end users any freedom.


Perhaps not 2001. But there were certainly people starting to ask by the mid to late-2000s whether source code freedom was what we really needed to be worrying about the most, see e.g. Tim O'Reilly and Tim Bray here: https://www.tbray.org/ongoing/When/200x/2006/07/28/Open-Data


I'm very excited about tools like Dat and Scuttlebutt, because they do promise this exact transformative rethink of the control and ownership of data.


I really don't understand the sentiment of this comment. Can you go into more detail on what that world would look like?

fb and the like have options to pull down your data, generally. It's not thaaaaaaaaaaaaaat much work to parse it. Certainly much less work than, say, writing a RDBMS....


Parse data? You've already lost. What percentage of the population can do that? What I'm saying is that I believe the main effort should have been in data portability, not in the side-show of source code copyright.

But I can give you a concrete example: let's say you're a blogger. It's a large part of your professional output, and a great resource for the people in your field. Now, you're not really techie, but you do keep some original writing locally because you write in Word or whatever. But all the images have to be put in the CMS directly - you can't paste those in - and they are very important. Then there's a bunch of comments, which might greatly add to the post.

Then the place that was hosting your blog goes broke/changes business model. Okay maybe you have a techie friend who thinks about this stuff, but they maybe forgot to wget your blog for a few months/year [ahem!].

Those posts are gone. The entire stack of the blogging platform could be, and probably is, open source. It doesn't help one bit.

Those open source CMSs could have agreed and maintained a file format for local editing and tools around that. But they didn't because that wasn't seen as the main problem for some reason.


Thankfully it looks like we are heading into a world with open technology stacks (Ethereum, IPFS, WASM) and open data formats where all dapps integrate and share data with each other permissionlessly.


I see zero evidence that IPFS or Ethereum will have substantial market adoption in the near future...


How is WASM going to help? It looks like it's further down the road of losing the control of your files by doing stuff in ephemeral environment (the browser), without even the readability of Javascript.

Perhaps I'm missing a use case here.


Unpopular opinion... but IMO open source has, completely by accident, caused the centralization of the internet. Because software licensing is less profitable, we've since gotten all the big software makers into the server game, and their infrastructure becomes what you are licensing instead. Their infrastructure could be based on something open source, but you might still be vendor-locked into AWS, Azure, GCS, etc.


It turned out to be a very popular opinion [1]: OSS has commoditized the cost of developing software to near zero, and the primary beneficiaries are the major cloud hosting providers, who are able to extract rent from hosting OSS software. They are using their consolidation of wealth to build out their billion-dollar infrastructure moats, ensuring barriers to entry that no one else will be able to overcome.

The richer and more powerful the cloud hosting oligopolies get, the worse off it will be for indie OSS developers, who will be unable to compete with their network lock-in, paid resources, marketers and advocates, advertising, etc.

What's worse: if you release OSS software, the cloud vendors' network monopolies naturally stand to make more revenue from it than any attempt you make at hosting it yourself, which makes them impossible to compete against - the more effort you put into development, the more revenue you create for them, which they in turn can use to fund their competing efforts.

Personally, I'm all for companies like Redis Labs, Elastic, Confluent and MongoDB, who realize the threat and have started distributing their future investments under licenses that are free for everyone except the major cloud vendors, to force the cloud oligopolies to share back some of the money they've made from hosting their software. Hopefully more of OSS will follow, which will force them to implement a system for sharing revenue back to the OSS software they're getting paid to host.

Even when they're unable to license it, they still have the luxury of sitting back and waiting to see which software becomes popular and proves itself in the market before instructing their army of paid devs to clone it, as they've done with their MongoDB clones that implement the MongoDB protocol (it also takes a lot more effort to create something through constant customer feedback and iteration than it does to clone it). The integration with their ecosystem ensures their investments will have better ROI and popularity than the original authors could ever have.

[1] https://news.ycombinator.com/item?id=19431444


Imagine a universe in which open source does not exist. Would the internet be less centralized? What's the mechanism?


More likely the internet would just not exist or would be entirely centralized in services like AOL.


I agree; I even wrote a quaint blog post on that long ago: https://gondwanaland.com/mlog/2006/08/07/aolternative-histor... ... but partiallypro seemed to be claiming close to the opposite, that open source caused centralization. I'd still love to read a justification for that claim!


there is no internet there is only the bitcoinstream.

The early internet was "opensource like" as in not monetized. It was a very different time in the 80s and early 90s. Grassroots BBSes and FTP sites were the fodder; websites were made to convey information, many of them made for the love of it.

I think a world with no FOSS would have arrived at the current state of the internet much more rapidly: many more people would have experienced the net/web as a service, extended to them on the terms of the service provider.


All of the internet was built on FOSS. If you want to understand what the world would have been like without FOSS, you need to look not at the internet, but at the competing technologies that were duking it out at the time: Novell Netware for the bare backbone and CompuServe/AOL for services. Keep in mind that these services charged per hour for usage. Even when we talk about grass-roots systems like BBSes (the vast majority of which were FOSS, BTW), the task of connecting them together was ridiculously expensive. I remember paying $4.50 per hour plus 1 cent per bit transferred to interconnect my BBS. It's one of the reasons I switched to Usenet and UUCP email as soon as I knew what it was.

Back then there were gatekeepers who made sure you paid to play. The internet broke that because it allowed anybody to connect for free as long as you found a way to do it. The gatekeepers used proprietary protocols and software to lock you into their paywall. The internet used FOSS to make it easy for you to connect. That was literally the difference.


And for whatever reason, everyone bought the rhetoric that you could save so much money by moving your stuff to these services in the "cloud", though there is no clear reason to think that this is true. You still need employees to manage those services; you still need to back up stuff and restore it if that stuff fails, etc.


It really depends.

If you're doing a "lift and shift" then running stuff in the cloud is almost certainly going to be more expensive.

However if you migrate to using managed services then there can be a lot of cost savings.

To be honest, it's a lot like the old days of PHP hosts - comparing VPS to shared hosting - except now the shared hosting is scaled up and "enterprisey".

What I do find most people overlook when saying "the cloud is cheap" is just how much time is required to deploy to the cloud. I don't mean POCing a few things in the AWS console but rather deploying stuff properly using Terraform, Puppet, etc and architecting it correctly so you have resilience without paying through the nose for it.


Because you buy these services, you can blame someone else when they fail. That's why cloud services are favored among IT staff. You can no longer get fired for the backup failing if it's Google Compute that failed to back up your instances.


Ha. Yeah right, that assumes the boss is stupid enough not to know who recommended using Google Compute and what justifications they used. Don't worry, they'll be asking all sorts of questions while you're twiddling your thumbs, unable to do anything at all about the infrastructure being unavailable.


That's certainly an attractive aspect of the cloud but in my experience what tends to be the bigger driving factor is because you're effectively paying for your hardware on a subscription basis rather than paying tens or hundreds of thousands of pounds / dollars up front.


I don't really get either of these.

If your backups fail then you still get fired, but now it's for choosing an unreliable cloud vendor.

If you don't want to pay for your hardware up front, most hardware vendors are happy to sell it on an installment plan.

I think that "cloud" started taking off right after x64 servers got hardware virtualization support, and then you had cloud vendors showing IT departments how much more cost effective their virtual machines were than the single-application physical machines that were the status quo ante. Not bothering to mention that they're frequently not more cost effective than hosting the virtual machines yourself.


> If your backups fail then you still get fired, but now it's for choosing an unreliable cloud vendor

It wasn't me that came up with the backup example - which I do think is the best example of the point they were making, because you wouldn't expect that particular service to fail, nor someone to get fired over it. And if it does fail, it's more likely you are to blame for setting it up wrong than the cloud vendor.

Their general point was more that if you self-host and the infra goes down due to hardware failure (eg switch goes pop) then it's up to you to fix. With cloud services that responsibility now becomes someone else's problem.

Personally I find this the least convincing reason to move to the cloud because I'd rather be proactive fixing something than waiting for someone else. However it's undeniable that it is an incentive to some - I suspect more for manager types than engineers though.

> If you don't want to pay for your hardware up front, most hardware vendors are happy to sell it on an installment plan.

You pay a premium for that (interest and insurance) and are still stuck with the repayments even if your project crashes and burns before the instalment plan finishes. If you're a business looking to buy hardware, you are much better off buying it outright: you have a stronger position to negotiate a discount (so you're not paying the advertised price) and you have capital you can resell if the worst happens. However, if you're a small or medium-sized company and you don't have any self-hosted equipment currently, or are looking to re-outfit several racks of gear, then the cloud is a very attractive offering.

> I think that "cloud" started taking off right after x64 servers got hardware virtualization support, and then you had cloud vendors showing IT departments how much more cost effective their virtual machines were than the single-application physical machines that were the status quo ante. Not bothering to mention that they're frequently not more cost effective than hosting the virtual machines yourself.

It sounds like your impression of the cloud is massively outdated. If you're just looking for a VMWare equivalent then AWS et al isn't going to be that impressive. Where public clouds have overtaken self-hosted solutions is with their SaaS solutions. This is why in my other post I make the distinction between "lift and shift" type deployments on the cloud, and using their SaaS offerings.

I do totally get the appeal of self hosting though. I've spent the majority of my career in server rooms and love the feeling of owning my own equipment. However pragmatically I can also see why the cloud is such an attractive solution to businesses these days.


> It wasn't me that came up with the backup example - which I do think is the best example of the point they were making because you wouldn't expect that particular service to fail nor someone to get fired. And if it does, it's more likely yourselves to blame for setting it up wrong than it is the cloud vendor at fault.

Backups are notorious for going untested because too often nobody cares if they work until you discover the hard way that they don't. That's the "setting it up wrong" problem and cloud doesn't really change that.

Cloud vendors then add a different problem, which is that they operate at a scale that brings multiple redundancy failures into the realm of plausibility. Then, because the same infrastructure is shared by many customers, you end up with greater exposure to those kinds of previously negligible probability multiple systems failures when something happens that affects thousands of customers at once.

Which is still not very common, but it can be very bad when it does.

> Their general point was more that if you self-host and the infra goes down due to hardware failure (eg switch goes pop) then it's up to you to fix. With cloud services that responsibility now becomes someone else's problem.

There are two components to this. One is whether it's "your problem" in a CYA sense, which I understood to be the original argument, i.e. if the system fails and your company loses ten million dollars then you lose your job. But that hasn't really changed. If the backups don't work, pointing at the vendor is not very effective cover when you're the one who chose them.

The point you're making is that when something fails you don't have to spend time fixing it because that's somebody else's job now. Which is fine, but then you're paying a premium for that and the question becomes whether such hardware failures are common enough that you come out ahead that way or not.

> You pay a premium for that (interest and insurance) and are still stuck with the repayments even if your project crash and burns before the instalment plan finishes.

You pay even more of a premium for third party hosting, and server hardware is a general purpose commodity. If your project ends then it can be used for the next one. And even something bought on credit can be resold and the money used to pay off most/all of the debt, especially now that Moore's Law is in decline and hardware doesn't depreciate as fast as it used to.

Moreover, if you know ahead of time that your project has a high short-term failure probability then it can make sense to not want a longer-term commitment, but not everything is so uncertain. Many projects are known to have a >95% probability of still existing in five years.

> If you're a business and looking to buy hardware you are much better off buying it outright - you have a stronger position to negotiate a discount (so you're not paying advertised price) and you have capital you can resell if the worst happens.

That's true. But what if for some reason you prefer "effectively paying for your hardware on a subscription basis rather than paying tens or hundreds of thousands of pounds / dollars up front"? :)

> If you're just looking for a VMWare equivalent then AWS et al isn't going to be that impressive. Where public clouds have overtaken self-hosted solutions is with their SaaS solutions. This is why in my other post I make the distinction between "lift and shift" type deployments on the cloud, and using their SaaS offerings.

I don't think we're really disagreeing all that much. There are circumstances where cloud hosting makes sense, particularly when your needs vary unpredictably over time.

My point is that people overuse it. You see companies with entirely static and predictable future needs moving their entire operations into the cloud anyway.

And I suspect people are going to regret the long-term experience of SaaS. Once you have a decade of your business data on someone else's proprietary service with no export function, what does your negotiating position look like?


> Backups are notorious for going untested because too often nobody cares if they work until you discover the hard way that they don't. That's the "setting it up wrong" problem and cloud doesn't really change that.

At risk of repeating myself:

1/ I didn't make the backup comparison

2/ I've already said I didn't agree with GP using that specific example

3/ I didn't make the backup comparison

So can we drop the discussion about backups please.

> The point you're making is that when something fails you don't have to spend time fixing it because that's somebody else's job now. Which is fine, but then you're paying a premium for that and the question becomes whether such hardware failures are common enough that you come out ahead that way or not.

You can argue this all you like however that doesn't change the fact that I'm only reporting what I've observed some decision makers base their decisions on. You may not agree with their opinion but arguing with me isn't going to change that.

> You pay even more of a premium for third party hosting, and server hardware is a general purpose commodity. If your project ends then it can be used for the next one. And even something bought on credit can be resold and the money used to pay off most/all of the debt, especially now that Moore's Law is in decline and hardware doesn't depreciate as fast as it used to.

As I've already said several times, it depends on how you utilise the cloud. Lift-and-shift projects, where you replicate an on-prem solution in the cloud, often work out more expensive. However, using SaaS solutions can work out cheaper - if you know what you're doing. It's really not as clear cut as you think.

> Moreover, if you know ahead of time that your project has a high short-term failure probability then it can make sense to not want a longer-term commitment, but not everything is so uncertain. Many projects are known to have a >95% probability of still existing in five years.

If it were as easy as that then significantly fewer startups would fail. I get you don't like cloud services but your exaggerations are now bordering on the ridiculous.

> That's true. But what if for some reason you prefer "effectively paying for your hardware on a subscription basis rather than paying tens or hundreds of thousands of pounds / dollars up front"? :)

If that's what you want to do then do it. If you want to store all of your hardware inside a rocket and shoot it into space then you're free to do that too. It doesn't mean any of them are sensible suggestions though. But if that's what you prefer to do then go for it ;)

> My point is that people overuse it. You see companies with entirely static and predictable future needs moving their entire operations into the cloud anyway.

> And I suspect people are going to regret the long-term experience of SaaS. Once you have a decade of your business data on someone else's proprietary service with no export function, what does your negotiating position look like?

Of course people overuse it. That's how the IT industry works. Someone comes up with a good idea and then everyone rushes to clone it, regardless of whether it makes sense or not. As much as we'd like to consider ourselves intelligent engineers, our industry is still full of hypes and fads. Often it's cyclic too, with new generations of engineers re-discovering old technology and rebranding it. I've seen it happen time and time again in my many years in IT.


I'd like to add... no one (read: users / consumers) cares. It could be Martian source, and as long as it worked, 99.9% of people would be fine with it.

The problem is, OSS has changed the biz model for software development, but we're still stuck with bugs, poor design, shite UX, frustrations, etc.

Yes, OSS solved a problem. But it wasn't The Problem. So now what?

And while I do embrace the OSS ideal, I wish I could total up the hours lost to vague documentation, delays in support (if support even exists), etc. For example, you can probably build a damn good modern CMS with all the time lost to the WordPress codex.


Cloud SaaS is basically a business model. The user can't just download the software and host it themselves, and when the user doesn't pay, you can just cut off access. So from the business angle it's brilliant. From a technical standpoint it's stupid, as personal computers today are very capable of running most cloud software themselves, and we have reliable networks for when networking is required. Most PCs are online all the time, just like servers, and have dual-core processors, just like the servers. I wish there were better business models for free open source software.


You're free to host stuff on a desktop PC if you want. Some people do. However, the devil is in the details.

For example, if you want a redundant power supply, you'll need a server case with dual PSUs and two power rails backed by two UPSs. You'd probably then want some kind of out-of-band management console, which means you then need a VPN server too - and of course that would need the same power redundancy. All that is fine until you realise your router is a central point of failure, so you'd then want two switches and two hardware firewalls. But what if the hardware fails on your PC? You'd then want a battery-backed RAID controller and a device for making off-site backups (since you're not using the cloud, that probably means a tape drive or DVD writer, depending on the volume of data you're storing). However, you're still reliant on one PC, so you'd need to double that up too. And since you now have 4 network devices, 4 servers and 2 UPSs, you'd probably need to store that investment somewhere secure - like a dedicated room with a camera and a keypad lock on the door. Best install some air conditioning too while you're at it. And you've still yet to address the looming problem of the builders next door accidentally digging through your leased line (this has literally happened to me before).

Costs can quickly spiral when you’re self-hosting internet services for paying customers who expect their internet services available 24/7. Which is why the cloud is so attractive - it allows you to build your product without paying the upfront cost of self hosting.

That all said, I do honestly get the appeal of self hosting. It’s fun owning your own hardware. That is until it fails....


You got resilience pretty wrong here. If you start with a single PC at home and a single ISP, you actually get something like three nines of availability - not everywhere, of course, but in urban areas you mostly do. Which is almost exactly what a datacenter can get you. But interestingly enough, you can get even better resilience at home than in a datacenter by having just another PC connected to another, independent ISP. That's because an ISP is a pretty significant point of failure, and you can't get a second independent ISP in a datacenter (not that you need it, as using a different independent datacenter instead is much better for resilience).
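
(The arithmetic, assuming the failures really are independent: two setups that are each down 0.1% of the time are down simultaneously only 0.1% × 0.1% = 0.0001% of the time - roughly six nines.)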


OK, let's play out your scenario: now you need to buy two houses in two different towns so you get two different ISPs, with your servers on two different power substations (since you're no longer building that resilience in at the server-room level). That is still going to cost you way, way more than deploying something on AWS in a multi-AZ or multi-region setup. And you still haven't addressed the issue of secure out-of-band management.

The only benefit of your solution is that the second home is at least a capital asset; but even still, it's more than any small or medium-sized company would be willing to stump up (I say this from experience, since I've worked on disaster recovery solutions for SMBs in the pre-cloud-computing era).


I guess you didn't get my point. You can subscribe to multiple independent ISPs in one house. You don't get that while renting a server in a datacenter.


I did get your point, however you’re still dependent on one pipe into the house which then doesn’t solve the issue I highlighted. You could buy additional leased lines with a different physical route out but those are not cheap either and you still haven’t solved the issue of redundancy in any other system aside your gateway.


The complexity grows with the number of users you need to serve, and with the availability percentage you need. For example, maintaining your own e-mail client on your own computer is not that much work compared to what Google needs to host Gmail.com.


Absolutely agree. I do actually run a server in my house for personal services however the difference between running personal infrastructure and running a service for paying customers is night and day. You may not need your own data centres ala Google but that doesn’t mean the only other option is running everything on a PC stashed away in your closet.

In fact the very advantage of cloud computing is you can scale your infrastructure to meet your demand (and I don’t just mean auto-scaling; but also the instance sizes you run and infrastructure you deploy). Which is the point I was making in my previous post: you can build the infrastructure which suits your problem and let the cloud provider deal with many of the “enterprisey” problems for you. Then when you reach the point where you need to start worrying about the big problems, well you can decide how you want to design your solutions there after.


Does cloud infrastructure have to be centralized for the foreseeable future? I'd imagine there are a bunch of people, even on HN, working on ways to decentralize compute - something like: you run a shell command and part of your computer's resources are now allocated for anyone in that ecosystem to use.


I'm keeping a close eye on the Golem network (https://golem.network/). However, they mostly concentrate on distributing heavy-weight computing like rendering or ML, not website hosting.


> It's sometimes okay—and even preferable—for a company to make less money deliberately, when the alternative would be to do things that are inappropriate or illegal.

Or, to make it more personal: it's sometimes okay for you to make less money. Sure, Facebook might pay more than anyone else, but you need to balance that with the fact that you might be paying with your ethics…


People know about this, they just don't care. As long as people can't see the immediate consequences of their actions, like 9/10 people will take the money and run.


Depends on education and social circles imho. It's not about caring, it's about confronting with the question, which may involve communication, research, experience.


I once had a conversation with a manager at a fairly famous software company that I was working for. I asked, "Is it an imperative for us to make as much money as possible, no matter the ethics". He answered in the affirmative. I followed up with: "Then why aren't we selling drugs?" Interestingly, he understood what I meant.

It's easy to say that you won't sell heroin to addicts, you won't murder people, you won't poison entire cities, because we sell software. But we don't have to sell software. There is no law saying that, although we are a software organisation, we can't do something else if it made us more money. So why are we selling software? We've already arbitrarily built this wall of ethics around ourselves. You're lying to yourself if you say "we need to do whatever is necessary to make more money". The reality is that we need to do whatever is necessary to make money within the boundaries that we have set for our company - and those boundaries are whatever we want them to be. We choose.

It was interesting, because after I explained this to the manager, he was a lot happier. I honestly think he originally felt that he had no choice but to do horrible things. It was his job to optimise for profit at the expense of everything else. I don't think he wanted that job.


> "Then why aren't we selling drugs?"

Because Google makes much more money than the cartels and employs far fewer people, so it's more efficient and more profitable.


Because selling drugs is hard and risky? You get caught and poof, all your money is gone, and you go to jail too. It really sucks to be in jail. That's why.


Open source is winning. But it's armed by money from the big software giants that want to commoditize software (see "commoditize your complements" [1]). This largely comes at the expense of medium-sized companies and furthers tech consolidation into the Big 5 tech cos. Is this a good thing?

[1] https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/


To be frank, most companies trying to make money from open source software have (IMHO) very poor business plans. You need to understand where your value is. Non-foss organisations tend to think that their value is in the accumulated bits that is their software. The artificial scarcity that is their "intellectual property", their monopoly on copying the software, is what creates the value. If you relax your exclusive rights (as you do with FOSS), then this no longer holds value.

Some companies, having relaxed their exclusive rights decide that they will provide autonomous services on top of their software. This is basically SaaS. But again, if you relax your exclusive rights to the software, then anyone can offer the same services. If, because they are able to scale better than you can, they can offer it cheaper (and/or better) then you will not be able to compete.

These things are so obvious to me that I find it absolutely incredible that it is a surprise to anyone. And yet, I see companies building business plans on top of nothing.

If you want to make a business doing FOSS, then you need to understand where your value lies. It is not in the exclusivity of the software. It almost certainly is not in the SaaS because unless you have a really compelling story for executing your service, more established companies will eat your lunch. So what is your value?

Your value is that you wrote the software. You are the expert. You drive the direction of the project. You own a trademark that you don't share with your competitors (unless you are insane).

But what that means is that you can't just write some software and sit on your laurels as the money rolls in. You can't write some software and stick it up on some server somewhere and watch the money roll in. You actually have to do something to make more money. Everything in your organisation has to be oriented towards, "What's next?" What are you going to do that someone will pay for? Training? Services? Custom development? A promise for the next version? A tie in with hardware? It doesn't matter, but the key is that it has to be of the category: you do something to get paid. It can't be: I already did the thing and now I'm going to sell it.

And yes. I think it's a good thing.


For me it's public infrastructure, like roads. It's better if basic software is in the public domain. Then I have nothing against specialized domain software being commercial and closed.


Open source is not volunteers. I would say the most successful open source projects are driven by people who get their paycheck for developing them. Look at Linux, Kubernetes, MariaDB... They represent multi-million-dollar investments by small, big and huge companies. People doing mainstream open source as a side task do not exist.


The obvious counter-example is OpenSSL: used everywhere, yet survived based on volunteer work for years. I would be surprised if there weren't plenty more in the myriad of dependencies used by those well-known projects.


Yes, my feeling always has been open source works best when the developers are the users since they feel the pain and see what needs to be changed.


barely survived, and then Heartbleed


Open Source is Open Infrastructure; and as intellectual infrastructure, we should be doing a lot more to fund it with 'tax dollars', as something enshrined in higher-learning institutions - all of them. That would be the best way to support those interested in a career that produces open source contributions to our intellectual commons.

Unfortunately, the plague that is rent-seeking and patents has muddied the waters for that type of work for a long time.


Many of the largest, best-known projects are this way, but like most arts there is a really long tail of smaller projects that are built and maintained by people without pay.


> Open source is not volunteers.

> People doing mainstream open source as a side task do not exist.

The vast majority of github projects prove that statement false.


This is why we need the (A)GPL: to ensure that products built on open-source tech can benefit the community at large.


It's not a panacea. The companies that use the (A)GPL often wield it more as a cudgel to get their customers to pay up for an enterprise version under commercial licensing.


I actually think this is a positive, as it provides one way of funding open source projects, while keeping them available to the community at large. But I'm willing to be persuaded otherwise.


Open source both won and has died. Despite my ability to dig into a big C/C++ codebase and fix a bug, I don't do the Linux kernel, KDE, LibreOffice or a browser anymore. My eventual fix (regardless of whether it's accepted upstream) is doomed to become obsolete quickly because of all the complexity and the rush for new shiny things.


This paragraph spoiled the whole article for me. I fail to understand how switching from Scheme to Python relates to ethics:

"A few years ago, MIT changed its intro computer science course away from the traditional (and brilliant) class that used Scheme to one that used Python..(snip).. the professors who wrote the course indicated that for today's software engineers, learning to code isn't enough. You also need to learn topics such as ethics."


Maybe Python as a "real world" language enables programmers to participate in open source projects more easily compared to Scheme, which is primarily used in academia?


Great article by Reuven; he always writes interesting stuff. Open source has won in the sense that it plays a central role in the modern cloud-to-edge-device world we live in.

However, I think we can only declare victory if we agree on a shared victory: proprietary software like iOS, macOS, Windows, Word, etc., etc. is also central to most people’s lives.

I am happy with a shared victory, but my respect goes to people who are free of non-libre software.


Thanks so much; I just found the article on HN a few moments ago!


I don't know that open source software as a whole is winning. I think that license-free open source libraries published through source management on the internet are winning. But I tend to think of "software" as end-to-end applications, and I don't know that there are really all that many OSS or FOSS applications that I think are truly "winning".


"Teaching kids about open source? Don't forget to teach them ethics as well."

Just teach them about Free Software. That way they won't think that ethics is just an additional cost that you can discard while still being """open""".


Ethics is about humans in general, while Free Software primarily cares about the freedoms of software tinkerers. Most people won't magically feel more free or more ethical when their Wifi card stops working because something or someone wanted to exclude "non-free" drivers.

Free Software ideology (yes, it is one) also won't tell you how to make the lives of those people better who - for whatever good or bad reason - can't tinker themselves out of their problems. Which is mostly everyone. In that sense, teaching them about Free Software instead of about ethics is a similarly bad idea as wanting to teach people Christianity instead of ethics. Christian and Free Software culture give you some start when it comes to ethics, but can't replace ethical considerations themselves.


Here's a clear benefit to non-tinkerers, from Stallman himself:

"You deserve to be able to hire your favorite programmer to fix it when it breaks."

from https://www.gnu.org/philosophy/why-free.en.html

I don't need to know any plumbing myself to appreciate the fact that I can hire professionals to fix or enhance my home system other than the ones who originally built it.


Free Software is a specific application of general ethics. This is an instantiation, not a replacement.

Free Software cares about the freedom of all software users - humans who just so happen to be using software. The belief that it benefits only software developers is false, although that is how Open Source was promoted over Free Software.

And that is part of the reason we are having this conversation.


As much of a fan of Open Source as I am, I've come to realise that OSS has killed indie development. Many OSS projects are built for free as proof of skills when applying for jobs, with the outliers becoming cash cows for major cloud service providers.


If I could go back in time and give myself 2 words of advice for the future, they might well be 'Open Source'.

Or 'Alabama football'.

Either one could easily be monetized to great effect.


'open source' is a brand name that has been co-opted and subverted.


[flagged]


> In the real UNIX(R) circles

Real UNIX(R) sucked. Those who had to use it were glad if they could install the GNU utilities. Compared to the vendor-supplied tools, they were high quality and free of the arbitrary limitations the supplied tools had. Some of the rules in the GNU Coding Standards are lessons earned from that real UNIX(R) era.


"Compared to the vendor supplied tools, they were high quality, and without the arbitrary limitations that the supplied tools had."

I've been using both GNU and SVR4 tools for almost 30 years and GNU tools are horrible, even worse than horrible when compared to SVR4. Which limitations?


The vendor utils assumed limits on the data they processed (max file size, max line length, etc.). If you ran over them, you either didn't get the correct results, or the tool died, or something else happened. That's the reason behind section 4.2, "Writing Robust Programs", in the GNU Coding Standards.
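
To make the "arbitrary limits" point concrete, here is a minimal C sketch of my own - not code from any actual vendor tool - contrasting the two styles. The first counter bakes in a fixed line buffer the way the old utilities did, so it miscounts long lines; the second grows its buffer to fit the input, which is what the Coding Standards ask for:

    #define _POSIX_C_SOURCE 200809L  /* for getline() */
    #include <stdio.h>
    #include <stdlib.h>

    /* Old vendor style: hard-coded buffer. fgets() returns a full
       buffer for an over-long line, so one real line is counted as
       several - exactly the kind of wrong result described above. */
    static long count_limited(FILE *fp) {
        char buf[512];
        long n = 0;
        while (fgets(buf, sizeof buf, fp) != NULL)
            n++;   /* counts buffer fills, not lines */
        return n;
    }

    /* GNU style: let getline() grow the buffer to fit, so every
       physical line counts exactly once, however long it is. */
    static long count_robust(FILE *fp) {
        char *line = NULL;
        size_t cap = 0;
        long n = 0;
        while (getline(&line, &cap, fp) != -1)
            n++;
        free(line);
        return n;
    }

    int main(int argc, char **argv) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        FILE *fp = fopen(argv[1], "r");
        if (fp == NULL) {
            perror(argv[1]);
            return 1;
        }
        printf("limited: %ld lines\n", count_limited(fp));
        rewind(fp);
        printf("robust:  %ld lines\n", count_robust(fp));
        fclose(fp);
        return 0;
    }

Feed it a file containing a single 10,000-character line: the robust counter reports 1 line, the limited one about 20.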

The other problem was rapid slowdown and excessive memory allocation. For example, when Sun's grep hit a binary file, you could wait hours for the result. It didn't help that some Sun applications put binary files into /etc, so in the end you couldn't grep /etc. Well, not until you installed the GNU version, which didn't have that problem.


What you write is utter nonsense: those were honest bugs, which got fixed 20 years ago! You obviously haven't used a modern SVR4, illumos-based UNIX.


Nothing beats personal experience. The last Sun I used was a Blade 100, which was indeed almost 20 years ago. At that time, GNU tools were vastly superior.

That also happens to be about the time Real UNIX(R) died. Yes, there is an occasional zombie, but that doesn't change the point. So they caught up with GNU? Who cares today?


Anybody who wants simple, robust containers with bare-metal performance: provisioning takes a few seconds and is dead simple. Unlike the cgroups mess, one gets a full-blown UNIX server running at the speed of bare metal, and it can run GNU/Linux distributions at the speed of bare metal as well. It's superior cloud technology, ahead of any GNU/Linux solution by several light years.


Can confirm. My first job out of college was building out a compatibility layer across some 14 variants of UNIX. Hopping between boxes involved an uncomfortable amount of time waiting for nroff to render your man page, because common options to common commands were decidedly uncommon.


SVR4 UNIXes like HP-UX, Solaris and IRIX were very consistent (I used all three of them simultaneously): if one understood SVR4, one could move between those three seamlessly.

DEC UNIX, OSF/1 and Ultrix were variations of BSD. If one knew that, there was no problem either.

No comment on AIX, since my brushes with it were far too short to claim anything.

What else was there? 14 is a large number when it comes to operating systems.


DEC UNIX, OSF/1 and Tru64 are the same OS, just different releases.


I know that, since I used all three of them.


I mean, I get that there is an argument to be had about GNU tools breaking some pure UNIX code (which is mostly vaporware)... what's beyond deranged is to hold System V up as some kind of über alles... like, where to begin? IPC? Mandatory locking? UNIX, true or not, has been a bundle of hacks for years.


When was the last time you worked on a true UNIX subsystem source? Have you looked at illumos code?


The fact that you conflate 'UNIX' with 'computer science' as if those things are at all equivalent is enough to dismiss this comment outright. Let me guess: Windows is just a hack job too? If only Microsoft hired people with real computer science degrees... /s

GNU's success speaks for itself.


How did you draw the conclusion that I conflate UNIX and computer science?

UNIX was developed by computer scientists with formal education in computing theory and algorithms, on top of those people inventing some of the algorithms and data structures which are still taught in formal computer science today, and for a good reason. Then the work was taken up by engineers with formal education in the same. But how is that conflating an operating environment (not even an operating system, but an operating environment) with computer science? What the hell...

The success of GNU is built on ignorance -- then-teenagers fresh out of high school downloading ISOs onto their PC buckets because they didn't even know there was such a thing as a UNIX workstation, or thought they couldn't afford one (which wasn't true, as older systems could be had for a bargain). They could just as easily have downloaded NetBSD or FreeBSD but didn't, because they didn't know any better. Even today the entire hype is based on not knowing any better (why are we still running Linux, eh?). That's not something to be proud of, but I suppose it does "speak for itself" in a way.


Your complaint about GNU was that it departed from the UNIX philosophy of one tool doing one thing, and you then contrasted that philosophy with what 'real' computer scientists do. Last I checked, there is no 'rule' in computer science that tools must do one thing.

GNU is not Unix, as the name so clearly articulates, so other than your seeming complaint that 'real' computer scientists always write Unix-style utilities, I'm not sure how your criticism of GNU tools doing more than one thing is at all relevant.

FreeBSD was released in 1993, and NetBSD in the same year. BSD as a whole only went from a proprietary distribution to open source that year. GNU was started in 1983, making even that criticism of GNU completely unfounded.


Ignorance or not, the success of GNU (such as supplying the C compiler used in your BSD darlings) was put into place mostly by people in their 20s to 40s, long before any high schooler could afford a CD burner, let alone wait out an ISO download. The most prolific GNU contributors have CS backgrounds from backwater institutions like MIT, CMU and Stanford, and were pushing greybeard status in the '90s. Your narrative is pure make-believe.

Red Hat, SUSE, and IBM... bash them all you want, but they are not high-school side projects.


> In the real UNIX(R) circles

Is this a meme?


More of a historical quirk than anything else. IIRC, the commercial UNIXes descended from the original Bell Labs code. Linux is an independently developed clone of UNIX, and it was so primitive in the era when commercial UNIXes were dominant that more than a few old-timers look down their noses at Linux as "not a real UNIX".

https://unix.stackexchange.com/questions/4091/is-linux-a-uni...


GNU/Linux operating systems are still very primitive (and very broken) by UNIX standards, especially by SVR4 standards.

For example, things like a fault management architecture, correctly implemented NFS v3 and v4, or a correctly working Fibre Channel stack are science fiction on GNU/Linux.


What the hell? Which meme? UNIX(R) is a registered trademark.


... yes, which almost no one cares about anymore. Linux and GNU have won; get over it. Even on BSD systems, installing GNU tools is the only way to stay sane.


That's just nonsense propaganda. As for me, I would rather never touch a computer again than accept that GNU/Linux garbage.

I work on GNU/Linux intensely every single day. The more I use it, the more I hate it.

It's amateurishly hacked-together vomit. I will never accept it.


Reading things like this, I wonder if there's actually a shortage of ethical understanding - that is, whether people who do unethical things simply don't know that what they're doing isn't okay, and perhaps never even had a thought occur in that direction. That could be solved by teaching everybody about ethics.

My guess is that this isn't the case. People know that what they're doing isn't ethical, but they are doing it nevertheless because it gives them money, power, status, their desired partner etc. You can't prevent that by teaching them about ethics.


Teaching ethics doesn't really make people more ethical. Rather, ethics flows from self-actualization as a responsible human being (what used to be called dignity) and from self-training (whether intentional or not) in making ethically meaningful choices. So, if you want to "teach ethics", including as it relates to open source, you could encourage folks to read the CatB series http://www.catb.org/esr/writings/cathedral-bazaar/ and point out what it might imply about the ethics of FLOSS. But going further than that would probably be useless.


One can also understand that you think certain practices are unethical, and disagree.


Sure, I'm not claiming to be an authority on ethics here. It's just my experience that people aren't unsure of what's ethical and what's not; they just ignore their knowledge about ethics because it gets them what they want. That cannot be solved by making them more aware of ethics - they already are aware. Telling a pyromaniac about the dangers of accidentally setting a building on fire won't stop him from starting fires.


It is not hard to find proprietary software developers who routinely query user data for debugging and product analytics. I’ll posit that almost none of them find this inherently unethical, though they probably understand that Stallman does.

There are of course some bright lines, like looking up people you personally know, which norms and security auditing heavily deter. But the general concept? Not at all. The HN consensus is not the industry consensus.

I mean look at the Hadoop community. What do you think is in those warehouses? The weather? No. Records of user interactions.



