Canonical, Ubuntu, and Why I Seem so Upset About Them All the Time (mjg59.dreamwidth.org)
174 points by ihuman on Feb 19, 2016 | 129 comments



I am only a casual user of Linux and have no axe to grind or sacred cows to protect. From my point of view, one of the biggest issues with Linux is fragmentation: which of the 2000 flavours of Linux do I choose for my home use, and why? My perception of Canonical/Ubuntu, as an outsider, is that they have come along and tried to make a one-stop shop for Linux for the masses. This is a good thing. If you are an expert you can still use one of the other flavours or even branch Ubuntu. They are not making it impossible to create "yet another Linux flavour" but they are not making it easy. From my point of view this seems to be a plus and not a minus.


I don't think you properly understood the article or FOSS in general. It is not about protecting a "sacred cow"; it is about protecting the principles without which there would be no FOSS in the first place.

We FOSS devs rightly care less about market share because market share is not, and never has been, what sustains the FOSS ecosystem. What sustains it is the code content that attracts developers from all over the globe to look at it, learn from it, and be interested and inspired by it, and the principles of freedom that enable them to contribute to it and sustain this cycle.

(edit: downvotes are not for comments you disagree with. That's what responses are for. Stop abusing the system selfishly.)


> We FOSS devs rightly care less about market share because market share [does not] sustain the FOSS ecosystem

Here's a direct response.

Telling someone they don't 'understand' something is abrasive and mean-spirited. Your paragraph on FOSS is your opinion; you don't represent FOSS or "the community". Not everyone thinks the same as you, and people are entitled to have different opinions. Personally, I care a lot about FOSS being widely used, I think market share does sustain the FOSS ecosystem (who pays developers?), and I'm damn sure I understand FOSS just fine, thank you. Again, that's just my personal opinion, and I don't go around giving it some higher moral standing than other people's.

As a FOSS developer, what's not opinion is the license you use on your code. The license is real; the rest is heat and noise. By definition, people will take your code and use it (that was the point, right?), and sometimes that's going to mean they'll use it in ways you disagree with. You're entitled to have an opinion on how someone is using the license (as is anyone, all equally), but you either have to convince them to agree with you or take them to court.


> Telling someone they don't 'understand' something is abrasive and mean-spirited.

It is no more abrasive than dismissing someone's concerns as "protecting a sacred cow". I don't think either was abrasive, or at any rate worthy of a downvote. I did not downvote the OP, I only wrote to disagree.

> I'm damn sure I understand FOSS just fine, thank you

Great, then write some more specific things about why I'm wrong instead of complaining that I'm not polite enough, without actually applying the same standards to the comments coming from the same viewpoint that you happen to agree with.

> market share does sustain the FOSS ecosystem

Sometimes market share does sustain the FOSS ecosystem indirectly. Many other ways of increasing market share do not sustain it at all. So growing a large user base for its own sake should not be one's primary goal or strategy. I'm quite happy to elaborate more but it's probably more suitable for an essay.


> "It is no more abrasive, than to dismiss someone's concerns as "protecting a sacred cow". "

Since that comment is aimed at me I should probably chime in now and say that my comment about "sacred cows" has been misinterpreted. I was stating that I don't have any hidden agendas, I was not trying to imply that the author of the original article had any "sacred cows". In other words, I was emphasizing the fact that I am a reasonably independent outsider.


You've made two claims:

a. "[I] care less about market share because market share is not, and never has been, what sustains the FOSS ecosystem"

b. "Sometimes market share does sustain the FOSS ecosystem indirectly"

I disagreed with what you said the first time. The second response is different because you're accepting that sometimes market share does sustain FOSS. You're also saying that sometimes building market share does not improve the sustainability of FOSS: I can accept that as true easily.

If you accept that sometimes it does sustain FOSS, then the questions are what level of "sustaining" you want and what precisely you mean by the FOSS ecosystem.

My definition of sustain is quite simple: I want to see FOSS developing quickly and end users receiving the benefit of using free software. Notice that my definition of "FOSS ecosystem" includes users, and the variety of contributions, from supporting FOSS to working on documentation through to development.

Consequently, market share is important to me because more users means more contributors, both paid and unpaid. Without paid developers there would be no Intel graphics/networking for Linux, no Python, no Firefox, nor any of Red Hat or Ubuntu. The larger your market share, the more paid developers you can attract, which speeds innovation overall. That is not to negate the work of unpaid developers, who have done and do fantastic work in FOSS: if there were no commercial developers there would still be FOSS, but it wouldn't be as developed or as mature. The capabilities and speed of innovation within FOSS without full-time paid developers would be fundamentally different.

Your last point is that since "sometimes" market share does not sustain the FOSS ecosystem, it should not be the "primary" goal "for its own sake". You'd have to write a longer essay, as I don't see how that point follows. I'm sure some market share increases for FOSS don't increase the number of contributors. But overall I perceive a virtuous circle where more market share (i.e. more users) means more contributors to FOSS, both paid and unpaid. In fact, I think we already have fantastic FOSS; if I had 'units of effort' to spend on improving the FOSS ecosystem, I would be all-in spending them on increasing the number of users.


> If you accept that sometimes it does sustain FOSS [..] I don't see how that point [sometimes it does not] follows.

There are many different ways to achieve market share, and these different strategies have (a) many consequences beyond merely (b) getting more paid developers. Even though (b) can help the FOSS ecosystem, in many cases (a) includes effects that harm the FOSS ecosystem and more than offset (b). For example, this mjg59 article talks about Canonical's trademarks. This can be argued to "increase market share", but it undermines the principles of FOSS and prevents other developers from building on top of much of Ubuntu et al. (edit: another example is not bothering to spend money on core infrastructure products because they're less attractive to users; OpenSSL is one famous case, but there are much better pieces of software out there that deserve similar if not greater levels of funding.)

Another thing to consider is long-term versus short-term effects. Even if one big company really does FOSS genuinely well for a short time, if it captures 99% market share that is still an uncomfortable situation, because eventually there is a massive risk it will ditch the FOSS part and make parts of its software ecosystem less free (now that everyone is locked in). The risk increases with churn as new executives are brought in who don't care about the values of the previous executives. (edit: the risk is lower with a product that is easier to fork, e.g. software that does not use centralised services, and FOSS already carries this quality somewhat.)


I'd say that I have to agree with your core point. The article was about FOSS. You could replace Ubuntu with any other company that holds the same draconian ideals and the argument would still stand.


> (edit: downvotes are not for comments you disagree with. That's what responses are for. Stop abusing the system selfishly.)

I didn't downvote your post, but your statement about downvoting isn't true. It's certainly not in the guidelines[0]. And if one needs an appeal to the HN operators, as others have pointed out on separate occasions where your claim has been made, pg has previously said downvoting for disagreement is fine[1]:

I think it's ok to use the up and down arrows to express agreement. Obviously the uparrows aren't only for applauding politeness, so it seems reasonable that the downarrows aren't only for booing rudeness.

[0] https://news.ycombinator.com/newsguidelines.html

[1] https://news.ycombinator.com/item?id=117171


I don't know what the effect was historically, but right now if there are too many downvotes a comment gets greyed out. Furthermore, you need several hundred karma before you can downvote in the first place. In other words, the behaviour isn't symmetrical with upvoting, so I find it very uncomfortable, even for myself, to downvote merely for disagreement (i.e. as the opposite of an upvote).


Same here. I reserve downvoting for unhelpful comments.

One litmus test for an unhelpful comment is that the word 'you' is used. I try to keep it on topic, and the topic is never the other commenters.


There are going to be tiny exceptions, but this is an excellent guideline for comments that I will try to follow in the future.


Well, I disagree with the guidelines. I believe hiding text from others, just because you don't agree with it, is censorship.

It discourages open discussion.


I'm with you. History is filled with ideas and words that were originally laughed at or downvoted.

I have never felt the need to down-vote. I hope I never turn into that guy.

I understand moderation. I understand the upvote. Why make a post unreadable? If needed, put a minus sign on the post, but don't make it difficult for people to read. These posts go onto Google forever. It's just un-American. That's been my only gripe with this site.


People quoting the guidelines won't change a thing. There'll always be threads like this one if downvoting is clearly abused.

And hardly anyone cares about pg's opinion, especially in a thread about FOSS, to which he has contributed fairly little.


Maybe that is what you value.

I use software from the free software ecosystem for the no-bullshit, high-quality software that it delivers. People like me would benefit from free software being more mainstream.


> I use software from the free software ecosystem for the no-bullshit, high-quality software that it delivers. People like me would benefit from free software being more mainstream.

It isn't obvious that you benefit from this; there certainly is a kind of "Eternal September" effect which makes popular software worse.

For example, in the good old days googling for Linux driver-related issues yielded useful posts explaining what was going on and how to make things work, but when Ubuntu gained popularity the results suddenly became flooded with posts by clueless Ubuntu noobs trying package up/downgrades, alternative repositories, etc. without any understanding of what they were doing and why.

I think it's getting better now, mainly thanks to sites like Stack Overflow, but for a few years there was no way to get usable results without adding "-ubuntu".


It's long established on HN that downvotes are for whatever the downvoter wants them to be. If you have a reference that says something different, I'd appreciate a link.


Exactly what are those principles?

Mine are to avoid enacting barriers to having my code used on systems where failures mean people could die. The GPL is not an ideal license for that, but there are worse licenses. One example of why the GPL is not ideal here is that US federal law effectively prohibits the GPLv3 in automobiles, because the law mandates tivoization. If the choice is between good code under the GPLv3 and bad code under a license the law permits, the bad code is used, and then we get things like Toyota vehicles' drive-by-wire being able to live up to the saying, "Always moving forward".

In this example, this also affects people outside the US whenever the engineering is reused, and I doubt any automaker would design one drive-by-wire system for the US and another for everyone else using GPL code that is well known to be superior to their embedded garbage. This is far from the only example, and not all examples involve federal law.

However, if you write something good that could have a mission-critical application, if it is good enough that the alternative code is more likely to kill people, and if your principles are to exclude anyone who does not do exactly what you want them to do with it, then you can have your principles, but people might die as a result. If deaths occur and those dying include the principled person who could have saved them, having those principles would be what killed that person. Having the developer die too in such situations is improbable, but it is something I consider when I ask myself "what do I want a software license to do?"


Have manufacturers previously added dedicated countermeasures to prevent physical car modding, and were those countermeasures legally mandated? What made software the thing that gets singled out for protection?

It seems like your counterexample simply highlights yet another discontinuity in US law.


> They are not making it impossible to create "yet another Linux flavour" but they are not making it easy.

In 2004, the entire Ubuntu distribution team fit into a small conference room and still managed to produce a new distribution that was significantly more compelling than any other available at the time. They were able to do this because Debian provided a convenient base and was released under a license that made this possible. If Canonical had been forced to modify every package in order to remove every reference to Debian, there would be no Ubuntu.

The same might happen again. There's no reason to believe that Ubuntu is the final stage in the evolution of desktop Linux, but right now it's probably the most credible. If someone wants to be able to experiment then they should be able to do so. We don't produce better software by stifling creativity for no reason other than brand protection.


There aren't 2000 flavors of Linux. I would guess there are about 10 GNU/Linux distributions that matter to most people.

Given that there are about 7+ billion people on earth, I have a very difficult time trying to understand what “Linux fragmentation” is. If you think we should all use the same OS, does the same thought process apply to governments, shoes, food, etc.?


Careful about oversimplification there. To put it another way: would you suggest that everyone build their own car? Perhaps people care less about operating system choice than you think. Remember, people within our domain of knowledge represent a very small minority of those seven billion people.


> would you suggest that everyone build their own car?

Not sure where you are going with that. There are several hundred different models of car for sale around the world. And still several hundred that count.

There are some tens of Linux distros that count. That is indeed evidence that people mostly don't care about them, and that's what the GP was arguing.


I don't think you understood what he meant. AFAICT he said that the fragmentation is not that big considering how big the community is, not that everyone should use a different OS. Maybe people do care less about OS choice than about car choice, but that doesn't mean it is less important.

Btw, people from almost any domain represent a small minority compared to seven billion people.


I missed the suggestion that everyone should build their own OS, and note that there are hundreds of largely incompatible models of car successfully sharing the road.


I think we are at the end of the Fragmentation Era. It used to be that different distros were so different that you would get only specific versions of software: you couldn't get the newest version of OpenOffice on X, but you could with Y.

The main desktop distros are all fairly equal now. Ubuntu, Mint, OpenSUSE (my favorite), and Fedora are all fairly straightforward and ready to be used and abused.

Arch Linux goes out of its way to just make things hard, but even that has stopped; heck, their community has changed drastically over the past five years. I still use it on my ancient machines (32-bit is dying, though).

If I want to install just about anything, I can use OpenSUSE's one-click install or the SUSE Build Service, go through the pain of a PPA, or use Fedora's flavor-of-the-year installer for outside packages. In Arch you have the AUR. It is vastly different from the way it was just 8 years ago, or the way Slackware is currently made.

Heck, even the RPM vs. DEB stupidity is mostly over. I think it has been over a year since someone told me they don't do RPM-based systems due to dependency hell, which was an issue from 10 years ago that got fixed, but that stupid line kept getting repeated all the time! I always told them: build a DEB and an RPM and you will see why this is a non-issue.


It's not the end of fragmentation for me, since I'm completely uninterested in all those major distros ever since NixOS came along—I'm happily fragmenting along.


Well, that is 100% my point. Your NixOS (and I really LOVE the idea of its package manager) still isn't that fragmented. You get your Linux packages differently, but the software is still there. Also, it uses systemd by default.


> they are not making it easy. From my point of view this seems to be a plus and not a minus.

The entire point of free software is to allow it to be endlessly copied, modified, and distributed. This is why we have things like Android and WebKit and the firmware on your home router. They're all based on free software. It may not be "good branding" for there to be hundreds of Linux distros, but that's not what any of this is about.


I find criticism of Canonical interesting. They are one of the most freedom-respecting software companies out there. But people don't compare them to just any company; they compare them against Red Hat. It seems that Ubuntu gets criticized not so much because they are really bad but because Red Hat is so awesome.


I think it's because most people do not notice the RH behavior. They used to release patch sets for the kernel along with its source; this made it easy for CentOS to roll a "clone" of RHEL. But that changed when Oracle did the same as CentOS and produced Oracle Linux. And now CentOS is part of the RH "family".

Frankly, I think Canonical is trying to avoid an "Oracle Linux" scenario happening to Ubuntu.


> this made it easy for CentOS to roll a "clone" of RHEL

This makes no sense. Having patch sets or a single huge tarball is exactly the same as far as CentOS is concerned. Also, the patch sets for RHEL6 would be around four times the size of the kernel itself so there are also practical reasons to stop distributing them (of course Oracle _is_ a reason, I'm not denying that).

The delay in CentOS 6 was due to personal reasons, not to the change in the distribution policy for kernel patch sets.

CentOS was acqui-hired because Red Hat needed CentOS as the base for community development of its RHEL-based products (oVirt for RHEV, RDO for RH OpenStack, and so on).


When it comes to freedom and privacy, Linux is a luxury brand. This can be a blessing (if you want that userbase), and a curse (if you don't want to live up to the expectations).


You raise a solid point. The Red Hat/Canonical comparison paints them in a bad light. The Apple/Canonical comparison, on the other hand...


Except Apple isn't trying to subvert the intention of the GPL, it's just using code not covered by the GPL.


Torvalds explicitly said he doesn't like the GPLv3 because the GNU people tried to subvert the intention of the GPL and lied to people about what it meant. The GNU people are the same ones the main article refers to when looking for a definition of 'Free Software'.

So we find ourselves in a situation where the article author is apparently arguing against group X because they're trying to subvert the intention of the GPL, and using as evidence a definition provided by group Y, who tried to subvert the intention of the GPL.


It's kind of hard to make the argument that the FSF tried to subvert the intention of the GPL, since they wrote both the GPL itself and the four freedoms (the philosophical principles on which the GPL is based).

You can make it, yes, but it's not a very convincing one.


Well, have a crack at it, then. Here is Torvalds talking about how he sees the point of the earlier GPL being subverted by v3, and that he thinks that the FSF lied to people about it. In his opinion, it should have been named a different license. He goes into detail about how it's not the license he hates, but how the FSF behaved around it.

https://www.youtube.com/watch?v=PaKIZ7gJlRU

> since they wrote both the GPL itself and the four freedoms

Also, having been part of organisations that have operated against their own philosophical principles and mission statements, I don't accept that the FSF is immune to changing their angle on things. And having hung around plenty of radical progressives growing up, I've seen some handy re-rationalisations of earlier positions.

Overall, I'm on the side of the FSF. But fucked if I'm going to be part of this thing that progressives do, which is piss all over their allies (in this case, ubuntu) just because their opinion is slightly different to one's own. If someone's going to raise the "nuh-uh, subversion of license = evil", then fuck it, here is a video of someone at the heart of the whole licensing debate literally saying that the FSF behaved immorally about licenses too.


But isn't it healthy to re-examine your core beliefs and make tweaks or even whole substitutions? I don't see anything in GPLv3 that directly contradicts the original four freedoms. It was an evolution to close loopholes like the TiVo one.

But you have a second point: how you treat your allies and other sympathetic parties. Sadly, idealism seems to trend towards that behavior and it's pretty counterproductive.


I absolutely agree that you should be allowed to change your opinion over time, as long as you do it 'honestly'. For example, it bugs me when people lambaste politicians for having a different opinion now as opposed to 20 years ago. But at the same time, by 'honest' I mean being open about the changes and not quietly twisting them or hiding them, which is what Torvalds is complaining about in that video. In the organisations I mentioned being part of, the same thing applied: lip service was paid to the (sometimes stale) principles, but the behaviour was rationalised very baroquely to fit those principles.

Re: the idealism stuff, my armchair psychology take on it is this: conservatives are passionate about things not changing, so minor differences don't matter so much because the overall goal is the same; progressives are passionate about things changing, so if the cart is going to move at all, it really needs to go in my direction. Add in the human predilection for not seeing the forest for the trees, and the progressives will squabble over what really are minor differences.


Interesting, I hadn't seen that video before. For those annoyed by video, he gets right to the point.


Well, the whole GNU/Linux naming thing kind of sums up the subversion issue :)


I can't see how that matches up with any definition of "subvert" I'm familiar with.

There are reasonable arguments for and against the whole "GNU/Linux" thing, many of them silly and some cogent. But nobody was subverting (or attempting to subvert) anything that I can see.


> Subvert (def.): undermine the power and authority

It seemed like it was to undermine Linus and place GNU (Hurd) on the same level as the Linux kernel. Don't get me wrong, RMS is someone I am glad is alive and engaged with the community, but I disagree with him most of the time.


  It seemed like it was to undermine Linus and place GNU (Hurd) on the same level as the Linux kernel.
I don't agree with your characterization at all, but let's not go over old ground.


That's not really the GPL though, that's just because very few people run Linux without also running countless GNU utilities. The people that argue for calling it GNU + Linux don't bring the GPL into it at all.


The meaning of a work is determined solely by the interpretation of those who read it, and the author's input is completely irrelevant[0]. In effect, what the FSF thought the spirit of the GPL is and what its actual spirit is are two very different things.

[0] https://en.wikipedia.org/wiki/The_Death_of_the_Author


However, if an author decides that their work is not being interpreted the way that they intended, the author is free to release new work. Hence, when the FSF decided that the interpretation of the GPLv2 was not what they had intended, they were fully within their moral rights to release the GPLv3.


This is absolutely true within postmodern literary criticism.


You can start by defining what the intention and purpose of the GPL is and move on to explaining how Canonical's uncooperative and unclear stance towards the community whose contributions enable it to exist serves to further those purposes.


Which version of the GPL? Because there are several, and in the Torvalds talk linked above, he points out that the different versions have different intentions.

> Canonical's uncooperative and unclear stance towards the community

Uncooperative and unclear? The 'flat refusal' linked in the main article is pretty clear and cooperative. "Do this thing at this link and we'll be cool with it, and we have a track record of being so". Sounds clear and cooperative to me.

> whose contributions enable it to exist

This is the kind of shit I mean above when I say pissing on allies. You're painting ubuntu as a parasite rather than an ally. Ubuntu has done plenty to help the community and is part of that community - in particular, they took making a non-techie friendly desktop and driving that goal forward as their specialty. Do they do everything perfectly? No, but none of the major players do.

I mix Debian and Ubuntu in my work, and the Debian/FSF purists really puzzle me sometimes. One friend of mine read mjg's comments on the same issue on LWN and bought them hook, line and sinker. He started arguing with me that there was zero difference between Apple and Canonical, simply because although Canonical provide you with their source code, you need a little more elbow grease to make a derivative of their whole integrated environment (which is a violation of a very liberally read 'freedom 2', apparently). You're free to make that derivative, but it's just not as simple as removing one package. Apparently this makes them the same as a corporation famous for its industrial secrecy and lockdown of user capabilities, and which has had the police raid someone's home because they thought one of their prototypes was there. It's a ridiculously binary view of the world.


"Foo is less bad because bar and baz are so much worse" is a terrible line of reasoning.


In a world where there are pressures on foo, bar and baz to be bad, and foo is fighting those pressures and bar and baz are not, it seems to me that foo should be praised for being less bad than bar and baz.

(There are probably also situations where that would be a terrible line of reasoning.)


Red Hat is energetically criticized on many fronts. I think early growth in Ubuntu on servers was entirely due to Red Hat hatred.


They are so awesome they basically took over the OS by creating systemd.


My bigger problem with Ubuntu was that they went from creating a really nice and easy-to-use distro that I wanted to throw money at, to breaking everything just to be like Apple (don't get me wrong on this, I'm an Apple fan; I just find their UX hard to use and think people should copy the rest of what they do, not the menus etc.).


I notice that there's an inherent conflict between the democratic approach found at Debian, Mozilla, the W3C, etc. and the dictatorial approach of Apple and so on.

In a way, it mirrors the difference between free markets and command economies. Free markets tend to be resilient, offer individuals the most power, and tend to favor those with skill at the expense of lots of market fragmentation and inefficiency. Command economies, by contrast, can have everything working together towards one common goal, and allow for every individual to have some measure of equality and security. They can operate with either far more efficiency than a free market, or far less... and the fortunes can rapidly change.

Democracies respond by then copying the innovation of the risky dictatorial/command experiments.

Ubuntu is trying to do a little of both. It tries to be like Apple, and thus doesn't try to conform to the general FOSS community initiatives if it thinks they aren't innovating correctly or quickly enough. That is why they created Unity, Mir, and many other in-house projects. Tightly integrating branding into the products is just par for the course.

By not trying to go along with the program, they annoy everyone else. But since the Debian Technical Committee, X Foundation, Linux Foundation, and W3C committees are not like the Comintern or the Apple board of directors, they have no power to enforce that program. Hence the fragmentation.


An aside regarding the details of your analogy. I have to contest the claim that command economies can be more efficient than free markets. Please note that this does not mean that free markets are perfectly efficient: they aren't. But even if we dismiss the outsized impact that leadership personality can have on the functioning of a command economy, they cannot be as relatively efficient as a free market.

We need to define what it means to be efficient: I use efficiency to mean the measure of how well market producers meet the demands of market consumers. In these cases a central planning authority cannot obtain enough actionable data from the market, nor can it act on that data in a timely enough way, to actually meet that definition of efficiency. Individual consumers have differing wants and needs: differing to the point that you would be hard pressed to define the difference between what constitutes a want vs. a need in all cases. Even the best central planners would be hard pressed to deal with such complexity. You may be able to use prior consumption numbers to plan the next cycle, but things change.

Adoption of new technology (not just computer technology) is also difficult to understand. When the car is invented, does the command economy order a bunch to see how it plays out? Does it order none because there were no prior demands expressed? Can it even be invented or manufactured, when the likelihood of it even being considered is so small as to make it not worth it... or when enabling technologies like, say, better lathes (or some such thing) were never developed for a car that may never exist, because there is no purpose for them now, or because there is no reward for the inventor in pursuing such a path? How does one set prices, or does one dictate consumption as well? Fail in pricing and you have shortages of one thing and over-abundances of another. This goes on and on. And again, these are questions outside graft and ill will from the central authorities.

Free markets, on the other hand, distribute the penalties and rewards based purely on how well a producer meets a consumer's requirements (note that this is different from producing a perfect world for the consumer... consumers have a limit of what is acceptable, so you may buy something you wish was much better/cheaper/etc. than it was, but you still concluded that you'd be worse off having the money rather than the product, so you traded... a need was met). An individual producer can focus on an individual area of need for specific consumers much better than a central authority can; and if the existing producers fail some consumers, an opportunity exists for other producers to enter the marketplace. If a producer meets consumer demand in a way that makes good use of the resources that production itself consumes, they have the resources to continue to serve the market; if they fail to meet consumer demand in a resource-efficient manner, they cease to be in the market.

Profit is the measure of how efficiently a consumer demand is met: you have bad profits, you're not efficient in some way; good profits, and you are. Rewards and "punishments" from the market itself coordinate resources, and do so quickly and without needing to wait for the next five-year plan. In command economies, if consumer needs aren't met, rarely are there any bad outcomes for the producers (the state). You can't choose to buy from the other guy because there is no other guy. And to reward good outcomes? Why bother when you control the rewards and the penalties either way?

Anyway, to bring it all home, fragmentation in the Linux world is really about this random walk of producers trying to meet needs: the interesting question is, "whose needs?" Ubuntu is shooting for customers: those not interested in building the operating system/tools and those that just want to use it. Red Hat is similar, except for a narrower set of consumers. But, say, Debian (to some degree), or even better Slackware (a personal old favorite)... who is the consumer? I would argue that oftentimes it is the developer/maintainer who is actually the consumer: they consume their leisure time and make other consumer expenses because of their desire to engage in the activity of developing or maintaining a distribution; OK, maybe less so Debian than Slackware, but I think the point still applies. Many of these distributions don't exist because an entrepreneur was trying to reach an underserved market... they exist because the developer/maintainer set out to achieve personal goals. As such, a distribution cannot fail on adoption rates or on monetary success measures, since the reward/punishment is really a matter of developer satisfaction in the pursuit. That's also not to say that these personal pursuits cannot also be commercial pursuits, or that developers don't want to see their work adopted by others, but the primary motivation will not weed out less successful distributions, because they are actually successful for what the makers tried to accomplish. And in that sense, they are meeting a need, and that is actually efficient... just not in the way you may desire or expect.


Command economies can make risky bets on particular technologies or strategies, and just reorganize everyone's life around this goal. It is very authoritarian, often tyrannical, but the notion that it is necessarily inefficient is just stupid.

Planning occurs in markets too. A Boeing jet takes decades to bring to market, and the organization must make risky long-term plans and bets. They may pan out, or may not. But every entrepreneur and startup at YC is doing the same sort of thing. Taking losses for years, even, because of a belief in their plan.

Sometimes, risky bets pay out. Sometimes spectacularly. And sometimes they crash and burn. The resilience of a market economy means that not everyone has to be on board for such a venture. Indeed, the market is a collection of organizations that are all either slowly or quickly in the process of failing. None of them will last forever, but they generally won't all fail at the same time. A command economy can, and often does.


I suppose I would tend to agree with you that a fully planned economy will be destined to fail, just like a fully unplanned economy would, but no economic system is ever pure. Certain Asian Tiger economies, like South Korea and Japan, had large scale economic plans to begin manufacturing electronics for export. This was an intentional, top-down strategy. A successful one, at that.

Other types of economic planning that we have seen include nuclear energy, dams, highways, railways, satellites, and so on.

The US only began funding space exploration after they realized that the USSR was ahead. The US was able to be more resilient over time, and still incorporate a lot of the top-down innovations.


I miss when Ubuntu was the unqualified choice if you wanted to recommend a Linux distro to a non-techie.


It still pretty much is, along with Linux Mint.

The average Joe does not care much about the open source philosophy. If it's free and works well out of the box without having to touch the command line, they're happy. This is where I think Linux Mint wins, even though it includes many proprietary codecs.


I agree. Will point out though that Linux Mint is also made available without the proprietary codecs included:

http://www.linuxmint.com/download.php


I'm an Ubuntu user and I want to give Mint a spin (because my parents and acquaintances are a bit afraid of the "you have to upgrade your whole OS" prompt). Which desktop should I pick for a casual user?


Cinnamon would be a safe bet.

It looks and works like Windows 7 and most people would feel comfortable using it.

For old computer hardware, choose the lightweight MATE edition of Linux Mint.


Great, thank you!


I have recommended OpenSUSE all these years and still think it is the best distro for new users and people who want to use it in a work environment.

Ubuntu had its breakthrough when it was the first one to get things to be more standard. Now, with systemd and easy build systems (https://build.opensuse.org/), just about anyone can get a basic computer going quickly and easily. I still think Windows is the hardest and longest OS to install.


> I still think Windows is the hardest and longest OS to install.

But they have gotten way better, too. The last Windows that I regularly used was XP, and I remember it being a pain in the ass to set up (on the scale of "I'll dedicate the whole weekend to getting the OS and drivers installed").

Just recently, though, I set up a dual-boot on my desktop PC, with Arch Linux as default, and Windows 10 for the few games in my Steam library that are not Linux-compatible. I explicitly chose Windows 10 to be able to use my DirectX 12-compatible card.

The setup process was surprisingly straightforward and despite the very new hardware, it found all drivers out of the box and everything went smoothly. The only annoyance is the gazillion spying features that you have to disable, and the other gazillion spying features that you cannot disable. But since I'm only playing games on that OS, and the other OS with all personal data is on an encrypted disk, I don't care anyway.


Could you please list (or give a summary of) those gazillion spying features?



Many groups have been trying to reuse Ubuntu recently. I'm thinking of Linux Mint and elementary.io.

One day, one new group will be more focused on graphics and hire UX designers, and publish a polished, paid version of Linux. As in, you can redistribute, but if you want to subscribe to the repository which contains the latest bugfixes and the awesome UI design upgrades, you'll have to pay - and this is perfectly legal with free software, even the GPL. We've only avoided that until now because so many groups have a huge disdain for making money off products (which is an awesome value - but just a value).

So it is understandable that Ubuntu tries to raise barriers of entry.


The purpose of GPL is not to prevent people from earning money. It is so that you can always modify your software yourself, or that you can seek a better alternative because the standards are open.


> you can seek a better alternative because the standards are open.

That's not really true: the GPL says nothing about standards; you can create your own proprietary standard with your code covered under the GPL. Funnily enough, you can have GPL software and still create a minefield of confusion, like Oracle does with OpenJDK, which is GPL.

I fully agree on the first point though. Sadly many people forget that.


Ubuntu is based on Debian. If Debian acted the same way to 'raise barriers of entry', Ubuntu would be much worse off for it.

If you're a Linux company, you don't have to restrict access to your code to make money; you can make money by offering something the community does not: support contracts. That's how Ubuntu makes its money now anyway (as far as I know), and opening up access to the code will not affect that.


> So it is understandable that Ubuntu tries to raise barriers of entry.

It's not understandable if they violate the GPL in the process (the point of the article).


> One day, one new group will be more focused on graphics and hire UX designers, and publish a polished, paid version of Linux. As in, you can redistribute, but if you want to subscribe to the repository which contains the latest bugfixes and the awesome UI design upgrades, you'll have to pay - and this is perfectly legal with free software, even the GPL. We've only avoided that until now because so many groups have a huge disdain for making money off products (which is an awesome value - but just a value).

That reminds me of what elementary did, with their forced payments.


You don't have to pay for elementary - you just have to put in '0' as the amount you're paying.

Recently I tried a bunch of Linux flavours, and it still pissed me off. No other distro forced me to do that.

Elementary claim that making software takes effort, but how much are they contributing upstream - to Debian or to the kernel developers?

In the end, I ended up with Mint, which has a better UX anyway imo.


> Recently I tried a bunch of Linux flavours, and it still pissed me off. No other distro forced me to do that.

We are closing in on the definition of privilege here, aren't we? Being annoyed because you explicitly had to fill in (OK, update) a form to say that you didn't want to pay?


That does make me think of Randy Marsh at Whole Foods in the South Park episode "Safe Space" last season.


Because it deceives users, and makes them think they have to pay?

Worse, users might even think their money goes towards "linux" – not that the money just ends up with devs who barely managed to write a desktop shell?


> not that the money just ends up with devs who barely managed to write a desktop shell?

Aside from the fact that it is a very nice shell, what is this attitude (in a startup forum) that you shouldn't earn money?


The issue isn’t about making money, it’s about making money with something other people made.

Imagine I took Google's Android apps, added a new icon and a new theme to them ("DARK THEME GMAIL", for example), and then told everyone what a great app I made and sold it for $5.

Most of what elementary is is a nice Linux distro – but just a distro, not even special support.

You pay for a product, where most of it was not actually made by the people you pay.

Is that fair? Is that even moral? No.


I don't think that's an entirely fair assessment of what they've done. Most of their core apps are developed in house, or by a different group that I'm reasonably certain they fund (though I could be wrong on that point).

Additionally, Ubuntu does the exact same thing, and has been doing it for longer than Elementary. As soon as you click "download" on their overview page, you're sent to a contribute page. Sure, Ubuntu has a link in plain text that directly says you can download without contributing, but it's not immediately apparent either.

Arguing that it's immoral for Elementary to ask for money even if they don't fund Ubuntu, Debian, and the Linux kernel directly is the same as arguing that it's immoral for Canonical to ask for money, because from what I've seen they either don't fund Debian and the Linux kernel, or fund them only very minimally.


If it were open source, selling the binaries would be perfectly alright. I do not think that app is open source, though.


Whether it is open source or not changes the legality of selling someone else's work, but it does not change the morality.


If you don't want your code compiled and sold by others, don't give it a license that allows that. It's hardly immoral to do something the author explicitly allowed.


Yes, and if I don’t want people to steal stuff I should lock my door.

It’s still not morally okay to claim the work of someone else for yourself.


> Yes, and if I don’t want people to steal stuff I should lock my door.

Wrong example. More like: If you don't want people to take and make a thousand copies of what you placed outside your garage and do whatever they want with it, don't place a sign outside your garage door that people are allowed to help themselves.

Yes, by all means do give back to the original author. Give credit, buy support agreements, recommend, report bugs etc etc. I'm happy to pay a little more than necessary here and there to support good projects (including elementary OS) and I push for a support agreement with the team that provide the wonderful server stack we use.

But don't tell me I have a moral obligation to not do something the license goes out of its way to allow me to do, please.

Oh, btw, people picking on elementary OS might have picked the wrong target: it is actually beautiful, it works surprisingly well for some of us, and it seems the money they take in goes towards bug bounties.


So, you have no problem with the business practice from Sourceforge or Elementary?

Bundling existing software with only minimal involvement of their own, and either scamming users with malware or convincing them to pay?

Both SourceForge and Elementary produce only a tiny amount of the code they sell (be it a simple shell for a whole OS, or an install wizard for some very complex software).


elementary: no

SourceForge 1m ago: yes

Reason: SourceForge tried to deliver something other than what users wanted, either by putting misleading advertising on the download page or even by bundling adware/malware.

elementary provides something some people want and in exchange asks for money. This is something HN actively recommends again and again.

elementary just happens to be nice on a number of levels, from letting you decide the price yourself to feeding the money back into development.


So, good idea.

I'll make a webpage where I'll sell elementary.io then.

Obviously, the money stays completely on my account, but don't worry, the OS comes with a few additional programs I wrote (or, rather, will have written), like a nicer IRC client.


As long as you are honest about it I wouldn't even demand an extra IRC client.

I have my doubts about whether this will make you rich, but as long as you are being honest and honor the GPL etc., I see no legal or moral problems.

BTW: I know that I personally was not scammed and I would guess nobody else who paid for it felt scammed either.

I think many like me are happy to chip in with a little money as I don't have capacity to contribute code or support.


Opt Out never feels good.


But there is no rule that giving people an option to download something for free should also make them feel good, is there?


That is why I never send people to download.com, and avoid SourceForge like the plague.


Sourceforge seems to be slowly getting out of the mess they created.

download.com, yes, it is.

Then again, elementary OS is completely different, as not only do they package a complete distro, but they also create and maintain multiple programs.


I somehow have the feeling that the longer the software licenses are, the more licensing-related drama there is.


There was a lot of drama between BSD and AT&T even though the licenses were short too. There is also drama over MPEG formats due to patents, again, even despite the short licenses, which led to webm.

A short license does not mean that it will have fewer problems. It just means it addresses fewer concerns.

At any rate, mjg59 is not complaining about software licenses, but about trademark policies. Trademark law is a very different body of law from the copyright law that software licenses typically deal with.


So, just use Debian.


If you have the source, it shouldn't be difficult to build binaries, should it?

If you are building a new Linux distribution based on Ubuntu, I don't even understand why you would want to reuse binaries they built.

This post would be more meaningful if it literally quoted from unfavorable license terms used by Ubuntu to actually prevent people from making derivatives. Saying that Canonical "appears to require" something is using weasel words.


> If you have the source, it shouldn't be difficult to build binaries, should it?

You'd think, but no - there's no guarantee that the shipped binary packages can be built with the shipped toolchain or shipped dependencies.

> If you are building a new Linux distribution based on Ubuntu, I don't even understand why you would want to reuse binaries they built.

Because it's entirely unnecessary? Building the entire archive is a huge amount of effort.

> This post would be more meaningful if it literally quoted from unfavorable license terms used by Ubuntu to actually prevent people from making derivatives. Saying that Canonical "appears to require" something is using weasel words.

The terms are at http://www.ubuntu.com/legal/terms-and-policies/intellectual-... - I probably should have linked them, I've just been writing about this enough lately that it's easy to forget that people might read a single post without context.


> there's no guarantee that the shipped binary packages can be built with the shipped toolchain or shipped dependencies.

That problem will be with us as long as we use binaries as the primary means of distribution. If you consider that to be a problem, you could put effort into distributions where the packaging is the source code. I know I do.

> Because it's entirely unnecessary? Building the entire archive is a huge amount of effort.

How do you know that the source code provided actually corresponds to the binaries unless you compile them yourself[ with a toolchain you compiled yourself [...]]?

> The terms are at http://www.ubuntu.com/legal/terms-and-policies/intellectual-.... - I probably should have linked them, I've just been writing about this enough lately that it's easy to forget that people might read a single post without context.

No company's policies are perfect. You might be more effective trying to deal with the overall distribution of how software is done rather than imperfections in community leaders. There is a point of diminishing returns in focusing on a small subset of the companies out there while completely ignoring the rest.


> If you consider that to be a problem

I do consider it to be a problem, but since reality is problematic I think policies that force people to rebuild all binaries before they can redistribute them are harmful.

> How do you know that the source code provided actually corresponds to the binaries unless you compile them yourself[ with a toolchain you compiled yourself [...]]?

I don't, but I also don't have a verifiable path to bootstrapping a toolchain. I trust that Canonical haven't backdoored their packages.

> You might be more effective trying to deal with the overall distribution of how software is done rather than imperfections in community leaders.

I'm interested in it being straightforward for people to produce modified derivatives of the market leader, ie Ubuntu. I can achieve that in two ways:

1) Helping convince Canonical to change their IP policy

2) Helping convince Canonical to rearchitect their entire distribution infrastructure

(1) strikes me as being easier, so that's what I've chosen to work on.


If you are looking at some repository for Ubuntu's source code, and find that their build process isn't perfectly documented as regards the version of gcc they use, that really isn't a GPL violation no matter how hard you try to claim that it is.

You don't have a moral right to base a distribution on binaries produced by some other distribution. Unless you are going to produce actual evidence of a legal violation you really don't have anything here.


Not sure whether this was aimed at me, but I don't believe that having source be difficult to build is a GPL violation.


> I don't, but I also don't have a verifiable path to bootstrapping a toolchain. I trust that Canonical haven't backdoored their packages.

You do not need a verifiable path to bootstrapping a toolchain in order to reduce the attack surface to just the toolchain.

That being said, you could try building Clang with GCC and then GCC with Clang (or vice versa) to break any exploits designed to persist in either one indefinitely, unless you are the victim of a really special hypothetical attack that targets both simultaneously. I imagine you could interleave multiple GCC and Clang versions to make such an attack even harder to pull off, or even add more compilers to the mix, like EKOPath. That should reduce the attack surface to binutils.


And this would still only buy me something if Canonical are backdooring their packages, which I don't think they are. It's a huge amount of extra effort for what is (as far as I'm concerned) zero benefit.


That assumes that anyone interested in putting backdoors into Canonical's packages never gets a job making their packages. If you are really concerned about having trustworthy systems, you need to build from source, and if you are doing that anyway, you might as well just use the binaries built from source.


Reproducibility from source and the build environment is fundamental to science and to making changes to improve something. Without it, a project can appear to signal hoarded proprietariness waving the banner of FLOSS, or just poor maintenance without high standards, even if that is unintentional.


> Reproducibility from source and the build environment is fundamental to science and to making changes to improve something. Without it, a project can appear to signal hoarded proprietariness waving the banner of FLOSS, or just poor maintenance without high standards, even if that is unintentional.

I am typing this on a system running Gentoo that has only 3 proprietary packages installed, with one being Intel's microcode. In time, I hope to lower that number to zero. If you are concerned about this and do not run a source-based distribution, I suggest that you switch.


Source-based distributions are not the only solution. >> https://wiki.debian.org/ReproducibleBuilds
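To make the goal concrete: once builds are reproducible, verifying that a shipped binary really corresponds to the published source is just a matter of rebuilding it yourself and checking that the two artifacts are bit-for-bit identical. A rough sketch of that final comparison step follows (the file names are made up; cmp or sha256sum do the same job):

  /* bitcmp.c: check that two build artifacts are bit-for-bit identical,
   * e.g. a package you rebuilt yourself vs. the one the distribution ships.
   * Build: cc -o bitcmp bitcmp.c
   * Usage: ./bitcmp my-rebuild.deb shipped.deb   (names are hypothetical) */
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      if (argc != 3) {
          fprintf(stderr, "usage: %s file1 file2\n", argv[0]);
          return 2;
      }
      FILE *a = fopen(argv[1], "rb");
      FILE *b = fopen(argv[2], "rb");
      if (!a || !b) {
          perror("fopen");
          return 2;
      }
      long offset = 0;
      int ca, cb;
      do {
          ca = fgetc(a);
          cb = fgetc(b);
          if (ca != cb) {
              printf("artifacts differ at byte %ld: build is not reproducible\n", offset);
              return 1;
          }
          offset++;
      } while (ca != EOF);
      printf("artifacts are bit-for-bit identical\n");
      return 0;
  }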


That requires making the builds easy to do, such that Matthew's claim that people cannot easily build things themselves probably is not valid. At least not if that project succeeds.

Anyway, I am happy to see that building from source is becoming easier for users of Debian and presumably Ubuntu by extension.


> You'd think, but no - there's no guarantee that the shipped binary packages can be built with the shipped toolchain or shipped dependencies.

If the shipped source doesn't contain enough information to actually perform a build surely that's a straight-up GPL violation (and a far more serious one than any possible ZFS issue, and one that you're in a position to do something about as a copyright holder). If the shipped source doesn't include whatever their developers actually use to build (and I refuse to believe an organization like Ubuntu wouldn't have a unified "build it all" script, or at least a README that described the steps you needed to take) then how can it possibly be the preferred form for making modifications?


> If the shipped source doesn't contain enough information to actually perform a build surely that's a straight-up GPL violation

Package A's source code contains some undefined behaviour that gcc 4.4 tolerates. The distribution upgrades to gcc 5.1 and the build now breaks, but nobody notices because no new version of Package A has been uploaded and so no new build has been performed. Is your position that it was in compliance with the GPL before the gcc transition, and in violation afterwards?
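To make that scenario concrete, here is a minimal sketch of the kind of breakage being described. This particular example leans on GCC 5 switching its default dialect from gnu89 to gnu11 (which changed the meaning of a plain "inline" definition) rather than on strict undefined behaviour, but the effect is the same: source that built and linked fine with the older shipped toolchain no longer builds with the newer one, and nobody notices until a rebuild is attempted.

  /* inline-demo.c: a hypothetical Package A source file.
   * With GCC 4.x (default -std=gnu89), "inline" alone also emits an
   * out-of-line copy of the function, so this compiles and links:
   *     gcc -O0 -o demo inline-demo.c
   * With GCC 5+ (default -std=gnu11), an "inline" definition with no
   * extern declaration provides no external symbol, so at -O0 the call
   * is not inlined and linking fails with an undefined reference to
   * "answer". */
  #include <stdio.h>

  inline int answer(void)
  {
      return 42;
  }

  int main(void)
  {
      printf("%d\n", answer());
      return 0;
  }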


No. If the Package A developers, when working on the code, just use whatever version of gcc is installed and don't have any process, then they're compliant (but dumb). But if they have a script, or even just a "building.txt" that says "install gcc 4.4 and set these environment variables, build is broken on 5.x, known issue", and they exclude that from their source distribution then they're in violation. They need to ship the preferred form for making modifications, i.e. what they would use themselves or give to a new developer on their team. It's no different from the example of C source generated by a Perl script (you can't ship the generated source, you have to ship the original script), and it's exactly what's necessary for the four freedoms: end users need the same facilities that the original developers have when working on the code, not a watered-down version.


> But if they have a script, or even just a "building.txt" that says "install gcc 4.4 and set these environment variables, build is broken on 5.x, known issue", and they exclude that from their source distribution then they're in violation.

They don't.


Really? They being Canonical? They just build their binaries with whatever happens to be installed on Bob's machine this week and then that becomes their official release?


They build their packages with whatever is in the distribution at the time, yes. It happens on the build daemons rather than on the developer machine.


Then how can you get into the situation that the shipped binary packages can't be built with the shipped toolchain or shipped dependencies - isn't that what the build daemons do, use the system to build itself? I mean if they use the previous release or something that's fine too. Whichever way they do it, surely it's documented internally, because it's the kind of thing developers would need to know when developing. I can't believe it would be a case of "if the build doesn't build ssh into the build daemons and update them until it does".

I mean a developer working at Canonical, trying to make a change to one of the packages they're preparing for distribution, has to have some way to answer the question "what version of gcc is being used to build this package". Whether that information is kept in the source tree, on their wiki, or on a post-it on the employee fridge - in any case, it's part of the source in the preferred form for making modifications, because it's, well, part of what the employees use to make modifications.


> Then how can you get into the situation that the shipped binary packages can't be built with the shipped toolchain or shipped dependencies

The distribution isn't rebuilt every cycle. If no new source release has been uploaded, the existing binary will be used for the next release.


So if they need to apply a small patch to a given package they'll sometimes discover that it was built 3 versions ago and no longer compiles? Yuck.


> Then how can you get into the situation that the shipped binary packages can't be built with the shipped toolchain or shipped dependencies

I think that Matthew might be suggesting that not all packages are rebuilt with every release? A binary package might be compiled with GCC-4, then the distribution is upgraded to GCC-5, dropping GCC-4. If for some reason the package is not compatible with the new compiler, you would then be in a situation where the binary cannot be recreated from source without outside tools. Do you know if Canonical recompiles everything on each update to the toolchain?


I have no specific knowledge about Canonical's processes. I'm just amazed that a serious software company in 2016 wouldn't have a reproducible build process for their primary product.


There are scripts for building each package individually, and these scripts list the other packages that are required for the build process. There is nothing to enforce that the build requirements are a subset of the packages built, or even correlated with them.


> If the shipped source doesn't contain enough information to actually perform a build surely that's a straight-up GPL violation

Why would it be? Build and testing systems for something like a Linux distribution are far more complex than a simple script. In any case, there's no requirement in the GPL (or any other OSI-approved license AFAIK) to include components outside of the software itself to simplify building and installing. Of course, there are a variety of things like package managers that do simplify the process, but there's no requirement in the GPL that it be straightforward to install from source.


The requirement is to ship the source in the preferred form for making modifications. That doesn't include standardized external tools, but it surely includes the build instructions (with things like which versions of the tools to use) and test suite (and indeed git history - mjg59 has disagreed with me regarding that before, but I don't think he gave a rationale) - those are things you'd definitely want to have for making modifications.


mjg59: when you link to a Shuttleworth comment as a 'flat refusal', and that comment is actually a statement of position and procedure for achieving your general aims (if not your specific ones), and it ends with a paragraph stating that you misrepresent what Shuttleworth says, it's hard not to believe him.


The post needs to be read in the context of the series of which it forms a part, e.g. http://mjg59.dreamwidth.org/38467.html



