
From the article:

> The "Paranoids," the internal name for Yahoo’s security team, often clashed with other parts of the business over security costs. And their requests were often overridden because of concerns that the inconvenience of added protection would make people stop using the company’s products.

That's the best summary of the problem for the industry as a whole, not only tech but any industry where failures are uncommon but carry grave consequences.

A quote from Fight Club that illustrates that problem:

> Narrator: A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall?

> Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X.

> If X is less than the cost of a recall, we don't do one.

That's the current mindset of the technological world, estimating whether the cost of atoning for the problem is lower than the cost of securing the systems.
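
As a back-of-the-envelope sketch of that calculus (illustrative numbers only, nothing from the article):

  // A: vehicles in the field, B: probable rate of failure, C: average out-of-court settlement
  function shouldRecall(a: number, b: number, c: number, recallCost: number): boolean {
    const x = a * b * c;    // expected cost of settling instead of recalling
    return x >= recallCost; // recall only when settling is the more expensive option
  }

  // e.g. 1M cars, 0.01% failure rate, $2M average settlement, $500M recall
  console.log(shouldRecall(1_000_000, 0.0001, 2_000_000, 500_000_000)); // false -> no recall

Swap "settlement" for "breach cleanup and lawsuits" and the same spreadsheet logic is exactly the mindset described above.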


Calling them the 'Paranoids' probably seemed like a fun idea at the time, but I wonder if it set up a subconscious bias against their work. I wonder whether, had they been called 'The Guardians' or 'The Defenders', there would have been a different outcome.

Seems trivial, but words matter.


The Yahoo Paranoids chose their own name. It was designed to be light-hearted in a way that didn't make them seem stuffy so that engineering teams would be more receptive to their work. In my experience, this is incredibly important from the outset.

Anyone who has worked in information security for a month knows that the relationship between product engineering and security engineering defaults to antagonistic. It takes a lot of work to make it friendly and productive, and as a security professional I think "Paranoids" is much better for overall collaboration than something like "Defenders", which in my opinion reeks of self-importance.

The more pertinent issue here is management not fostering the culture enough.


Where I'm working now, we've got security engineers assigned to sit with each development team.

They're not managed by, or working for, our teams. They have their own manager and security work that they're getting on with.

Having them sitting amongst the team, however, is resulting in a much different narrative than any I've been around before. There's a much higher-quality, less antagonistic kind of engagement going on. They've become someone you chat with at the watercooler, or at their desks, instead of having to file tickets or wait for scheduled reviews to raise things.

People can quickly consult with them and deal with a whole heap of small potential risks way early on in the development process, and it's paying serious dividends down the road.


That approach works well with QA too.


You're talking about Squads, basically: bringing different people together in the same group. And yeah, QA is very similar to security in some respects, but if you think about it, QA should really include security. It's weird to say that software has quality without security included, but the truth is that security is a specialty that regular QA usually can't handle.


You've capitalized Squad, but it's hard to Google. Where did you get that term, and where is it defined outside your head?


As xxr said, Squad is how Spotify names their (previously Scrum) teams. Other interesting concepts they use are "Tribes" and "Guilds". Take a look at the Spotify engineering practices, they are really inspiring.


Not the commenter you're replying to, but at least at my organization we borrow the term from Spotify.


Security engineers are seen as experts you consult about something you don't know. QA are not seen this way. Some QA engineers actually are experts that can give good advice on structuring an application in a more testable way, but that's not the norm.


Most QA guys only check that something meets the spec/story requirements, not that the code is sane or testable... many don't even go beyond UI testing. That said, I think GP was referring to having a QA embedded as part of a team.


You know... with enterprise source control systems like Bitbucket and the like, I keep wondering why more mid-to-large-sized orgs aren't requiring a security sign-off for every pull request, with PRs to master/release branches being the trigger point.

I do a lot of PR reviews, and while I may not catch everything, I will catch a few things here and there... someone with that mindset would be in a better position to handle that from the start...

Having a few security guys that do PR reviews for about half their workload would go a long way to improving things.

We're going through an audit for an internal application now... there's one major flaw (SSLv2/3 is enabled), one minor (the session cookie isn't HTTPS-only) and a couple of trivial (really non-issue) findings concerning output caching on API resources and allowing requests with changed referrers (which can be spoofed anyway).

In any case, having auditing earlier on and as a potential blocker would make each minor change easier to deal with than potentially much larger changes... the app in question was developed for the first 8 months without even a Pull Request check in place... by then many issues regarding code quality are already too late to fix completely. :-(
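
For what it's worth, the first two findings are usually cheap to fix if you catch them early. Something roughly like this in Node (a sketch; the paths and the cookie are made up, not our actual config):

  import * as https from "https";
  import * as fs from "fs";
  import { constants } from "crypto";

  const server = https.createServer({
    key: fs.readFileSync("server.key"),   // hypothetical key/cert paths
    cert: fs.readFileSync("server.crt"),
    // refuse the legacy protocol versions flagged as the major finding
    secureOptions: constants.SSL_OP_NO_SSLv2 | constants.SSL_OP_NO_SSLv3,
    minVersion: "TLSv1.2",
  }, (req, res) => {
    // the minor finding: mark the session cookie Secure (HTTPS-only) and HttpOnly
    res.setHeader("Set-Cookie", "session=abc123; Secure; HttpOnly; SameSite=Lax");
    res.end("ok");
  });

  server.listen(8443);

That obviously doesn't touch the output-caching or referrer findings, but those were the non-issues anyway.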


Nobody wants this.

No "security guy" who has a choice wants to spend half their workload waiting for PRs to come in so they can chime in with feedback about default configurations.

No product programmer wants to deal with some "security guy" parroting the results of an automated tool to them over a code review platform.

No product manager wants to see progress stall because the product programmer and "security guy" are arguing over whether or not a call to strncpy should be replaced with a call to strcpy_s.

In the immortal words of my generation, ain't nobody got time for that.


Honestly, someone should have time for that; that's part of the problem... I go out of my way to comment on as many PRs as I can, because I'll catch things that will become problems later far more often than peers who just click approve.

The same can be said for security guys... they have to spend their day working on something anyway, and looking at a bunch of smaller things as they fly by is just as valid as a big periodic audit. It's easier to catch a lot of things before they become big, too...

There are plenty of times I'll comment ("Okay, letting this through, but in the future revise it to do it this way"); sometimes I'll push back, but not always; that's what the review process is for. I'm just suggesting multiple approvers per PR, where one is someone who is security-minded.

It's funny how many issues I'll see in other systems where someone does something per the spec, and it has a flaw precisely because they were completely compliant. Someone crafts an exploit, and I'm interested, because I'd usually have been more pragmatic in the implementation. Last year there was a huff about JWT allowing cert overrides in some frameworks, because they don't ensure the origin cert matches a whitelist... when I'd implemented JWT, I only checked against our whitelist and ignored the property.
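
Roughly, the check looks like this (a sketch assuming the Node jsonwebtoken package and a hypothetical key whitelist, not the exact code I wrote):

  import * as jwt from "jsonwebtoken";

  // only keys we issued ourselves, looked up by key id; nothing from the token is trusted
  const TRUSTED_KEYS: Record<string, string> = {
    "issuer-key-1": "-----BEGIN PUBLIC KEY-----\n<our pinned key>\n-----END PUBLIC KEY-----",
  };

  function verifyToken(token: string) {
    const decoded = jwt.decode(token, { complete: true }); // peek at the header, no trust yet
    if (!decoded) throw new Error("malformed token");

    // look the key up in our own whitelist; ignore any x5u/jku/x5c the token carries
    const key = decoded.header.kid ? TRUSTED_KEYS[decoded.header.kid] : undefined;
    if (!key) throw new Error("unknown signing key");

    return jwt.verify(token, key, { algorithms: ["RS256"] });
  }

The key id, key names, and algorithm here are made up; the point is that the only key material that can ever verify a token is the stuff we pinned ourselves.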

Sometimes security guys will see things, and think about things, in a way others won't... for me, one thing I often catch that others don't is potential DDoS exposure. Some of that comes from using node, where you do NOT want to tie up your main event loop thread. Others don't think about putting limits on JSON size, or on compute-heavy tasks, etc.
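
For example, in Express capping the body size is a one-liner (a minimal sketch; the limit and route are made up):

  import express from "express";

  const app = express();

  // cap JSON body size so one oversized payload can't eat parse time and memory
  app.use(express.json({ limit: "100kb" }));

  app.post("/report", (req, res) => {
    // hand compute-heavy work off to a worker or a queue instead of doing it inline,
    // so a single request can't stall the event loop for everyone else
    res.status(202).json({ queued: true });
  });

  app.listen(3000);

The limit value is arbitrary; the point is that there is one.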

And, frankly, I'm tired of fixing bugs related to patterns that were broken from the start... turtles all the way down, but the turtles are eating all the errors.


In the immortal words of every other generation :), "someone is going to find your issues. It's either you or your customers."

You don't seem to have an appreciation for the difference between a secure and an insecure product. Yahoo didn't either.


Personally, I have an appreciation for it. I'm a working security professional.

However, for a decade and a half I've been part of many different security regimes at many different organizations. None of them had an appreciation for the difference between a secure and insecure product, and additionally, none of them were punished by the market for it. Products have success or failure because of other factors. Security is something that organizations invest in, in the best case, because it's something they believe in, and in the worst case, for compliance reasons.

So now Yahoo has a big problem because they had this breach. First of all, is this actually a big problem? Yahoo has many other big problems. Is this going to make or break the company? No. Has any security issue made or broken a company? Microsoft thought they could be broken by security, so they invested billions into it. They were wrong. They were broken because they had crappy products that people were forced to buy. They figured this out and shut down their security organization. What about Target? What badness has befallen them? Surely not to their earnings or stock prices. What about any company that has suffered a breach? The biggest thing that happens is the CSO gets fired. Maybe some vendors get fired. That's it.

This is where the questions end when you start to push for more security involvement in the product. Ultimately you will (personally!) stand in front of the CEO who will ask you "will I lose my job, or suffer some other negative outcome on that scale, if I don't listen to you?" and you will answer, truthfully, "no." And that is the end of the conversation.


> Target’s chairman and chief executive officer, Gregg Steinhafel, a 35-year company veteran, is stepping down, as the massive pre-Christmas data breach suffered by the Minnesota retailer continues to roil the company. The decision is effective immediately, according to a statement posted today on the company’s website. John Mulligan, Target’s chief financial officer, has been appointed as interim president and CEO.

http://www.bloomberg.com/news/articles/2014-05-05/as-data-br...


Well, I am most certainly a working security professional. It sounds as if you've given up and become a bean counter.

If the answer you give your CEO is "no," then you aren't giving the proper answer. You are just being a "yes man," saying comforting words.

>> So now Yahoo has a big problem because they had this breach. First of all, is this actually a big problem?

I mean this in absolutely the best way possible, you shouldn't ever be allowed near either a business or a security decision that affects people's lives or livelihood. If you think that disclosing hundreds of millions of records (many of which must contain PII) is without repercussion, then I have a pretty good idea of which end of the security stick you are holding. You are describing a business model where you piss on your customers by transferring 100% of the risk to them.


You don't pay me. The C-suite pays me. Thanks for making this personal when it has no need to be, by the way.

Personal attacks aside, let's you and me go out to a bar and sing songs of how things should be. Tomorrow, we have to go back to how things are. In the land of how things are, to the business, the disclosure doesn't matter. Full stop. Does it matter to the customers? Oh yes. Dearly. It's a really big deal to humanity. The business and humanity are discrete.

Is that a tragedy? Yes. I weep. I go home and drink every night for this reason. As long as I want to work for people that pay money, though, you have to think about the business first. Humanity second. Anything else is a fairy tale or communism.


Much more valuable to have the security folks be a critical part of reviewing the _frameworks_, and then pushing adoption of those frameworks. Human reviewers won't catch everything no matter what, but you can make entire classes of problems go away by making them impossible to commit.


Does that mean we can kill angular 1.x because it encourages points of disconnect, undiscoverable code, too much pfm (pure fucking magic) and failure?


I understand what you are saying, but having been around similar dynamics in the past, I think that kind of self-deprecation is a little like starting off the relationship by apologizing for what they're supposed to be doing.


Indeed, regulatory and security are the two parts of the company that are supposed to be antagonistic in order to keep the company out of trouble. How that plays out in practice has a lot to do with the personalities involved.


I used to slip in words like 'awesome', 'clever' and 'amazing' when talking to colleagues from other teams about the work that I was doing in the hope that it would influence their perception of the work. I've no idea if it worked though.


That was my first thought when I read it. A better name would have been "Tron", "Patronus", or "Endor".

Calling it "Paranoids" or "Inquisition" is just giving it another reason for people to loathe it.


Infosec isn't the ones doing the defending or guarding or any of that. They typically work with the teams that do the building and maintaining, to ensure their policies and procedures lead to and maintain a secure posture.


As a developer who is not very much into security, I am guilty of this crime. Infosec teams are very important and deserve respect and attention.


My experience with Yahoo (admittedly ending more than a decade ago, so I'm sure much has changed) was that cost probably was a huge deal. I ran engineering for the European billing platform. We processed many millions of dollars worth of transactions a year.

Yet when I had to ask for a new database server, I had to submit a written request to a committee in Sunnyvale, with graphs and other supporting documentation to demonstrate that the load of the server we already had was high enough to justify it. Then I had to join a hardware review meeting, that included maybe a dozen people. One of them being either Jerry Yang or David Filo (Yahoo founders; I've forgotten which one of them it was that did these).

The people in the meeting, even excluding whichever one of the founders, easily cost Yahoo more in salaries for the time they spent discussing my request for one lonely server than the fully loaded amortised cost of operating it for a couple of years.

It's not that I have an issue with reviews and cost controls - on the contrary - but some degree of delegation and trusting staff with budgets would have been nice. I mean, I could have trivially cost Yahoo millions of dollars with a few keypresses if I wanted to or didn't pay attention - they trusted me with the ability to mess up their entire European payments platform with basically no oversight, yet I couldn't approve a single cent of hardware expenditure for the production platform, and neither could my manager, nor, I believe, could my manager's manager, who was responsible for all of engineering across Europe.

I suspect a structure like that may have created a lot of resistance to recommendations from the Paranoids even when engineering (they seemed generally very well respected; one of my old developers is part of the Paranoids now - he'd wanted to for years) would like to accommodate them for the simple reason that getting approvals would be a massive hassle and slow things down.


Marcus Aurelius specifically talks about how important it was to have governors that he could trust, because the empire was so large that he could not possibly know everything about the empire in its current state. His lesson about task delegation is timeless. Well, his lessons are timeless, full stop.


I've been thinking a lot about how ancient empires operated and functioned, and what institutions they required.

Realise that Egypt, Greece, Macedonia, Rome, Persia, and China each spanned a thousand miles or more, that the most effective transportation was over water, either along rivers or across seas or oceans, that ocean travel was impossible for much of the year (Roman vessels were restricted to port from November through May, and this lasted until the 1300s in Europe), and that the minimum time for a message to traverse a thousand miles was easily ten days, if not months.

You needed autonomous lieutenants in place who could be given general orders (much like goal-seeking AI, now that I think about it), be trusted to be only modestly corrupt, not collude with enemies or others against the centre (a frequent problem), and truthfully report what they'd experienced, in words -- writing existed, but not photography, video, audio, etc. Testimony, that is, someone's testament or attestation of fact, was all you had, though multiple testimonies could be compared against one another.

I find it interesting that every major imperial power had some intrinsic religion, probably serving as a moral check and guidance, a role that's often underappreciated today. Also that, other than a set of strictures, the religions themselves often had little in common with one another: polytheistic vs. monotheistic, theistic vs. meditative, commandments vs. ancestor worship or reverence.

It's a topic on which I'm almost wholly ignorant, but find fascinating.


Communication delays were a big part of this.

I think an unappreciated problem with modern communication tools is that, by default, they enable and encourage micromanagement.


And as a consequence, deprecate trust.


That works... before the IPO. After that, relentless success is the expectation - delegation has built-in risk, so it's very hard to justify.


His column is pretty cool too.


> That's the current mindset of the technological world, estimating whether the cost of atoning for the problem is lower than the cost of securing the systems.

And for the record, this will always be the mindset of corporations whose only concern is the bottom line. Until we as a culture accept that the market does not solve all problems, we're not going to solve these kinds of problems.


  > > That's the current mindset of the technological world, 
  > > estimating whether the cost of atoning for the problem 
  > > is lower than the cost of securing the systems.
  >
  > And for the record, this will always be the mindset of 
  > corporations whose only concern is the bottom line. 
  > Until we as a culture accept that the market does not 
  > solve all problems, we're not going to solve these 
  > kinds of problems.
My immediate reaction is "Of course". A return on investment or risk analysis should drive activities on both the corporate and the government level.

This is particularly true in the security space, because no system is 100% secure. And since resources aren't infinite, where do you stop? 90%? 99%? 99.9%? What if addressing that incremental 0.9% costs as much as the rest of the security apparatus combined? As much as the rest of the product combined? As much as your total revenue?

What's the other option? It can't be "not release anything", so a middle ground is found. We're arguing about shades of grey.

And sure, the government can help. Either by bearing some of the cost (e.g., investment, tax breaks, etc.) or increasing the impact of an incident (e.g., penalties, etc.).

But this isn't a big, bad, greedy corporate problem. This is a broader issue about how much risk we're willing or unwilling to absorb, and how efficiently we can address that risk.


> My immediate reaction is "Of course". A return on investment or risk analysis should drive activities on both the corporate and the government level.

You're looking at this in only monetary terms, or at least Yahoo is. But frankly, I don't give a fuck about whether Yahoo succeeds financially--I want my life and the lives of other people to be better. And I want that to be the goal of my government.

> But this isn't a big, bad, greedy corporate problem.

Of course it's a big, bad, greedy corporate problem. The reason "return on investment" matters in a financial sense is because big, bad, greedy corporations only care about their bottom line. And quite frequently Yahoo's bottom line is in direct opposition to improving my life and the lives of other people.


>... But frankly, I don't give a fuck about whether Yahoo succeeds financially--I want my life and the lives of other people to be better. And I want that to be the goal of my government.

In this situation it doesn't matter that Yahoo is a private corporation - the same cost/benefit analysis essentially needs to be done no matter what the structure of the organization is. Let's pretend that email had been created by a government agency and that agency has to decide how much of its budget to spend on security. If it costs X dollars to make something 90% secure, 10X for 95% secure, and 10,000X for 99.9999% secure, etc., eventually you have to choose how much to spend - resources aren't infinite for that government agency either. (And to make it much more difficult, they only have a guess that X dollars will make their product N% secure.) It isn't as black and white as you are trying to portray it.

I think it is fair to criticize Yahoo for how they prioritized security, but the same kind of issue has happened with non-profit companies and with government organizations, so no, it isn't just a "big, bad, greedy corporate problem."


You're the one trying to make it black and white; he's simply saying that unlike private industry, government can have another motive be primary rather than profit, i.e. helping its citizens as the primary goal. Yeah, budgets aren't unlimited, but not having to be profitable makes a huge difference in which actions can be taken. Profit is not the correct goal for every action that can be taken by an organization; government isn't a business.


If "profit" is defined as: "generating more value than is consumed in the production process"...

Then yes, we damn well better demand that profit be the correct goal for every action regardless of organizational structure.

If our system is distorted to inaccurately measure profit locally, without properly accounting for negative externalities, then that's a legitimate problem, but the way to solve it is by factoring those hidden costs back into the profit calculation, not giving up on "profitability" properly defined.


If profit is defined as $income - $expenses = $profit, then you'd be using the word the way everyone else is using it, and you'd be participating productively in the conversation.


  > ... government can have another motive be primary 
  > rather than profit, i.e. help it citizens as the 
  > primary goal.
But there's still ROI here, and there's still a budget (no matter how big the deficit gets). So the question remains: how do I spend that money? Do I spend all of it on security apparatuses, or do I have to scale back and spend some on other social services? How much? What's the best bang for my buck?


Given the current state of computer security, a government program that fines companies for poor security practices could easily pay for itself.


> budgets aren't unlimited, but not having to be profitable makes a huge difference in which actions can be taken.

Profits are still required for gov't spending, but they are just made by someone else in the country and transferred to the gov't via taxation. Even deficit spending is just the choice to spend money today that will be obtained from taxation at a later date.


I know this is snarky, but: tell it to the OMB.

Corporations do not have any sort of exclusive lock on cost-benefit analysis.

Edit: including bad cost-benefit analysis.


I'm looking at this in quantitative terms. Money is one measure. Effort, time, security, and others may be harder to quantify, but they're still important factors. "Security at any cost" quickly becomes simply impossible.

This is the general sense. Yahoo is probably on the "wrong" side of average.

But in some sense, you can vote with your feet. Companies who don't value security won't get your business. If enough people feel as you do, then the ROI calculation changes. And the same applies to politics as well: if you think more money should be spent on security and there's a societal good here, write to your congressman, or elect one who's receptive. Again, if enough people feel as you do, the political ROI makes this an imperative as well.


The fiction of markets is that costs and value can be reasonably determined. The truth is that in far too many instances, they cannot. Surface appearances or gross misbeliefs drive costing or valuation models and behavior, and as a consequence, goods are tremendously misvalued.

That's on top of the problems of externalities in which the costs or benefits aren't fully contained to the producer or consumer of a particular good or service.

A misprioritisation of values is what the drunk waking up with a hangover, the sweet-tooth spending 40 years dealing with the systemic effects of diabetes, or the smoker suffering 20 years of emphysema and COPD comes to realise. The externalities are the drink-driving victim, the socialised medical costs (and privatised profits of the sugar firms), and the second-hand and tertiary smoke victims.

There are rather larger issues far more fundamental than these in the modern industrial economic system, but I'll spare you that lecture.

The point being that trusting on "the market" to offer corrections simply doesn't work.


>The reason "return on investment" matters in a financial sense is because big, bad, greedy corporations only care about their bottom line.

I would argue that it's ALL corporations that only care about their bottom line. The entire reason a corporation exists is to make money, any other considerations like employee well-being, care for the environment, etc are driven entirely by either legal requirements or a need to retain talent in order to make that money. Any corporation who successfully projects an image of being "different" just has a good marketing team.


Or they’re just a small-to-medium-business with a consistent set of ethics? Ever thought about that?


"Externalities" is a word we use to describe costs we find hard to model, but I find that most externalities do cost corporations real money. They just often aren't aware of it and haven't developed enough sophistication in their business cases to account for it. The best companies who support their security teams understand this. They understand that broken things lose them trust, customers and goodwill, and those things are, even from a purely monetary and numerical perspective, incredibly valuable for a successful business in the long term.

The problem is not merely whether or not a profit motive exists to do right, but whether or not a business is insightful enough to model the full costs and include what we normally let go unexamined as mere "externalities".


Externality != "hard to model". Rather, it means difficult to internalise.

Garrett Hardin's "Tragedy of the Commons" gives a quite simple model of what an externality can be (overgrazing). The problem isn't in the modelling, but rather in the mutual enforcement of a collectively beneficial behavior.

That isn't to say that there aren't costs which are hard to model, but that's an orthogonal issue, and can apply just as well to internalised effects (e.g., the goodwill loss of a massive security breach) as to externalities.

Goodwill loss is not an externality.

I agree, adamantly, with your comment that businesses are frequently not enlightened or intelligent enough to model full costs. I'm seeing the issue of the long-term development of both cost and benefit awareness as a pressing issue, general to economics. It undermines many of the assertions of market efficiency.


I'd argue it >is< a corporate problem, and the article we are looking at shows exactly why. There should be consequences for running a company in this manner, and there are not. The people who made this decision did it because they were protected from the damage they did.


> There should be consequences for running a company in this manner, and there are not

And the consequences should be users choosing another company and they don't. So the core problem are users.


No, that assumes people are rational actors, and they are not; preying on human psychology doesn't absolve you of guilt. The companies are the problem, not their victims for not leaving.


It's similar to a company selling defective products or contaminating a city's water supply. The market response is too late to deal with those types of problems, and undervalues individual lives.


Yup, and it's too reactive to problems that can be easily avoided by regulation - food safety, for example. If it were up to the market, people would be dropping like flies, because safety doesn't tend to increase short-term profits as well as corner-cutting does.


I don't think you need to even concede the idea that users are rational actors--there are plenty of reasons why a rational actor would prioritize another factor over security. For example, many people got Yahoo email addresses a long time ago, and built a personal contact list of people who only know their Yahoo email. A rational actor might value keeping in contact with those people over their privacy. That doesn't mean that it's okay to expose that person's data.


The consequences should be that the company loses its ability to run a business. You've arbitrarily decided that the only acceptable mechanism for this happening is users choosing a different company. There are a whole host of reasons that doesn't work, and simply shifting the blame onto users for not making it work doesn't solve the problem.


> The consequences should be that the company loses its ability to run a business.

Or gains the ability to run it properly.

> the only acceptable mechanism for this happening is users choosing a different company.

I didn't state it should be the only mechanism. There could be others. Those class action lawsuits mentioned in the article prove there are some. But the primary mechanism is users' responsible choice.

> shifting the blame onto users for not making it work

Actually I think the blame is on us, techies. We should create a culture where security matters as much as performance, pleasant design or simple UI. Both among users we live with and companies we work in.

And one fundamental problem of security for the masses is not solved yet: how a user can see if a product they use is secure without being a security expert.


> I didn't state it should be the only mechanism. There could be others. Those class action lawsuits mentioned in the article prove there are some. But the primary mechanism is users' responsible choice.

That's simply not realistic on technical issues. Users can't take responsibility for choices they can't be reasonably expected to understand.

> Actually I think the blame is on us, techies. We should create a culture where security matters as much as performance, pleasant design or simple UI. Both among users we live with and companies we work in

If you believe that, in your own words, user's responsible choice should be the primary mechanism of enforcement of this, you've rejected any effective means of achieving the above trite and obvious truisms.

In fact, security should matter to us a lot more than performance, pleasant design, or simple UI, because unlike those, security can be a matter of life and death. Which is why I don't want to leave it up to users.

> And one fundamental problem of security for the masses is not solved yet: how a user can see if a product they use is secure without being a security expert.

Which raises the question of why you want to leave security regulation up to users moving away from the product.


Security people grade issues from two simultaneous yet different perspectives, security risk and business risk. It sounds like you are describing accountants not security people.


But what's the concrete proposal?

The default "better idea" seems to be "let the government do it", but if you've been keeping up with the news in the past few years, "the government" doesn't exactly have a stellar track record either. Where a corporation may prioritize making money over security, government prioritize politics over security, wanting to spend money on things that visibly win them political points or power, not on preventing things that don't happen, which aren't visible to anyone. It's the same problem in a lot of ways. And both corporations and governments have the problems that specific individuals can be empowered to make very bad security decisions because nobody has the power to tell them that their personal convenience must take a back seat to basic operational security.

Even the intelligence agencies have experienced some fairly major breaches, which count against them even if they are inside jobs.

"The market screws this up!" isn't a particularly relevant criticism if there isn't something out there that doesn't screw this up.


> "The market screws this up!" isn't a particularly relevant criticism if there isn't something out there that doesn't screw this up.

My usual reply to this is that we use government to nudge market incentives, which is also what I think would be reasonable here: simply create a class of records related to PII, and create HIPPA like laws regarding those records that certain kinds of information brokers keep on people.

You then provide a corrective force to the market by providing penalties to violations, which raises the costs of breaches, and shifts the focus of the corporation towards security.

HIPPA or financial systems aren't perfect, it's true, but they're at a standard above what most of our extremely personal data is stored at, so we know we can do better, if we choose to as a society.


These laws would also be a lot more effective if you held the executive staff accountable as opposed to the shareholders. The model that corporations seek profit doesn't work in some cases, it's a group of individuals all seeking personal profit.


s/HIPPA/HIPAA/rg


So adjust the market.

There's a worthwhile conversation to be had about the corporate liability shield, and whether A) major security/privacy breaches should have some sort of ruinously high statutory damage award rather than requiring people to prove how they were harmed, and B) more suits -- not just over breaches -- should be able to pierce the corporation's protective structure and cause personal liability for corporate officers who make careless or overly-short-term decisions.

Adjusting the incentive structure of the market in which companies operate could do a lot.


It was already done by DOD under Walker's Computer Security Initiative. It succeeded with numerous, high-assurance products coming to market with way better security than their competitors. Here's the components it had:

1. A clear set of criteria for information security for the businesses to develop against with various sets of features and assurance activities representing various levels of security.

2. Private and government evaluators to independently review the product with evidence it met the standard.

3. Policy to only buy what was certified to that criteria.

The criteria was called TCSEC, with the Orange Book covering systems plus the "rainbow collection" covering the rest. IBM was the first to be told no, in an embarrassing moment. Many systems at B3 or A1, the most secure levels, were produced, a mix of special-purpose (eg guards) and general-purpose (eg kernels or VMM's). The extra methods consistently caught more problems than traditional systems, with pentesting confirming they were superior. Changes in policy to focus on COTS instead of GOTS... whether for competition or for campaign contributors I'm not sure... combined with NSA's MISSI initiative, killed the market off. It got simultaneously improved and neutered afterward into the Common Criteria.

Summary here:

http://lukemuehlhauser.com/wp-content/uploads/Bell-Looking-B...

Example of security kernel model in VAX hypervisor done by legendary Paul Karger. See Design and Assurance sections especially then compare to what OSS projects you know are doing:

http://lukemuehlhauser.com/wp-content/uploads/Karger-et-al-A...

Best production example of capability-based security was KeyKOS. Esp see "KeyKOS NanoKernel" & "KeySAFE" docs:

https://www.cis.upenn.edu/~KeyKOS/

So, that was what government, corporations, and the so-called IT security industry threw away in exchange for the methods and systems we have now. No surprise the results disappeared with them. Meanwhile, a select few under Common Criteria and numerous projects in CompSci continued to use those methods, with the amazing results predicted by the empirical assessments from the 1970's-1980's that led to them being in the criteria in the first place. Comparing CompCert's testing with Csmith to most C compilers will give you an idea of what A1/EAL7 methods can do. ;)

So, just instituting what worked before minus the military-specific stuff and red tape would probably work again. We have better tools now, too. I wrote up a brief essay on how we might do the criteria that I can show you if you want.


I posted this elsewhere, but I think I intended to post it in response to your post:

Well, there are a few possible solutions, and they don't all involve corporate incentives:

1. Government regulation

2. Technical solutions (alternatives to communication that have end-to-end encryption, for example)

3. Eschew corporations entirely when it comes to our data and communications (i.e. open source and personal hardware solutions)

Personally, I think some combination of 2 and 3 is my ideal endgame, but we aren't there technically yet. 1 isn't really a great option either, because government is so controlled by corporate interests, and corporations will never vote to regulate themselves. But we can at least make some short term partial solutions with option 1 until technology enables 2 and 3.

However, none of these options will happen while people hold onto the naive idealism that the free market will solve all our problems.


> Eschew corporations entirely when it comes to our data and communications (i.e. open source and personal hardware solutions)

Unless you're proposing a large rise in non-profit foundations, these are mostly funded by for-profit corporations operating in a market.

> However, none of these options will happen while people hold onto the naive idealism that the free market will solve all our problems.

I don't think most people beyond libertarians or knee jerk conservatives believe that. Heck most economists don't really believe the market is "self regulating", there's just too much evidence that it's not.

However, most do believe that a regulated market solves the problem of large-scale resource allocation better than planning in most cases. In some cases, no: healthcare is a well-studied case of market failure and of why centralized/planned players fare better. It's not clear to what extent data/communications/security is a case of market failure warranting alternative solutions.


> Unless you're proposing a large rise in non-profit foundations, these are mostly funded by for-profit corporations operating in a market.

You bring up a deep problem that I admit I'm not sure how to solve. I'd love to see a large rise in non-profit foundations, but I'm not actually convinced even that would solve the problem.

I think the solutions proposed by i.e. the FSF where up-front contractual obligation to follow through with their ideals may be a better solution, but we're beginning to see very sophisticated corporate attacks on that model, so it remains to be seen how effective that will be.

> I don't think most people beyond libertarians or knee jerk conservatives believe that. Heck most economists don't really believe the market is "self regulating", there's just too much evidence that it's not.

> However, most do believe that a regulated market solves the problem of large-scale resource allocation better than planning in most cases. In some cases, no: healthcare is a well-studied case of market failure and of why centralized/planned players fare better. It's not clear to what extent data/communications/security is a case of market failure warranting alternative solutions.

This argument is purely sophistry. You take a step back and talk about a more general case to make your position seem more moderate, admitting that the free market isn't self-regulating, but then return to the stance that the free market solves this problem because a regulated market (regulated how? by itself? In the context of free market versus government regulation, "regulated market" is a very opaque phrase) solves most cases, and on that principle, we don't know whether the very general field of data/communications/security warrants alternative solutions (now we can't even say "government regulation" and have to euphemistically call it "alternative solutions" as if government involvement is an act of economic deviance?).

We're speaking about a case where the free market didn't work, de facto: Yahoo exposed user data, hid that fact, and likely will get the economic equivalent of a slap on the wrist because users simply aren't technical enough to know how big of a problem this is.

So let's not speak in generalizations here: the free market has failed already in this case, and you admit that the free market doesn't self-regulate, so you can't argue that the free market will suddenly start self-regulating in this case. Regulation isn't an "alternative solution", it's the only viable solution we have that hasn't been tried.


" (now we can't even say "government regulation" and have to euphemistically call it "alternative solutions" as if government involvement is an act of economic deviance?)."

I presume an unregulated market is preferable to regulation at the outset, yes. Government regulation should be done in the face of systemic failures while retaining Pareto efficiency.

Put another way, I think the market can be very effective with well thought out regs. but I don't believe there are better general/default alternatives than to start with a free market and use the empirical evidence to guide policies...

"likely will get the economic equivalent of a slap on the wrist because users simply aren't technical enough to know how big of a problem this is."

I disagree that this is a case of market failure.

This is a case of "you know better than the market" and you want to force a specific outcome through regulation. But I'm not sure that's what people want.

What if people don't really care about their data being exposed all that much? It's a risk they're willing to take to use social networks. The penalty is that people might move off your service if you leak their information (as is likely to some degree with Yahoo). That, to me, seems to be the evidence here. That's not a market failure; that's a choice.


With legislation, we can change the market. In the Fight Club example, legislation can make C ten times as big and change the equation. For Yahoo, legally mandated fines, or restrictions on what they can do in future[1], could make them wake up.

[1] Maybe if you run email and you get hacked, you're not allowed to run email again for a few years? That'd have woken them up.


And until we figure out how to incentivize this behavior (or discourage malicious behavior), corporations won't be willing to solve these kinds of problems.


If only we had some sort of structure in our society that could solve the problem but wasn't profit-driven. Maybe something that could oversee these corporations. We could call it "government" or something.


Pretty sure the US has one of those, doesn't seem to be working. In fact, it often acts against that (preventing sharing of encryption algorithms, trying to force inclusion of backdoors..).


If only Hobbes, Locke, Rousseau et al. were around today..


Maybe you can try Bernard Stiegler. His English Wikipedia page is a little thin on information, so I've included a link about his latest book (not yet translated).

https://en.wikipedia.org/wiki/Bernard_Stiegler

http://www.samkinsley.com/2016/06/28/how-to-survive-disrupti...


bookmarked this interview link for later. Thanks!


Not sure why you're getting down-voted - if the market is set up to incentivize a certain behavior, then someone will do it eventually. People can get mad all they want, but they should be furious that the government has laws in place to incentivize that type of behavior (or at least allow it to happen.)


Well, there are a few possible solutions, and they don't all involve corporate incentives:

1. Government regulation

2. Technical solutions (alternatives to communication that have end-to-end encryption, for example)

3. Eschew corporations entirely when it comes to our data and communications (i.e. open source and personal hardware solutions)

Personally, I think some combination of 2 and 3 is my ideal endgame, but we aren't there technically yet. 1 isn't really a great option either, because government is so controlled by corporate interests, and corporations will never vote to regulate themselves. But we can at least make some short term partial solutions with option 1 until technology enables 2 and 3.

However, none of these options will happen while people hold onto the naive idealism that the free market will solve all our problems.


The market seems to be solving the problem just fine: nobody uses Yahoo anymore and companies with solid security practices (e.g. Google, Apple, Facebook) are thriving. If Google had a serious security breach, you can bet the market would respond to it and Google knows it.


I mean, I am not agreeing with Yahoo here... but isn't that a reasonable thing to do?

Every act of securing or ensuring quality has a cost, and there is a line somewhere. I think most of us would agree that the line is currently drawn very badly, but it appears you're citing a problem with the line in general, not the location of said line.

Everything has a cost, from a recall to better security to even a human life, the debate should be what we think should be paid, not whether or not we should worry about costs at all.

(If i misunderstood your intent, apologies)


The problem here is that the people who pay the costs of security are different from the people who are hurt when security is breached.


Loss of user trust hurts Yahoo.


Not enough that it isn't in Yahoo's favor to take that risk (you can't argue this--this is what happened).


And who are you to decide that that isn't a legitimate decision made by the users? If people cared more about security, they'd move away from Yahoo after something like this, and Yahoo would be more incentivized to keep this from happening.

Your problem is that you disagree with other users - but that's totally legitimate, not everyone has to care about the same things you care about.


> And who are you to decide that that isn't a legitimate decision made by the users? If people cared more about security, they'd move away from Yahoo after something like this, and Yahoo would be more incentivized to keep this from happening.

Who said this wasn't a legitimate decision by users? Certainly I didn't and wouldn't say that. There are a lot of reasons why a rational actor would choose to stick with Yahoo--that doesn't mean Yahoo exposing their private data is okay.

The other thing to realize here is that users aren't rational actors. My grandma is senile--is it okay for Yahoo to expose her private data because she doesn't know they aren't secure?

You've arbitrarily decided that users have to take all the responsibility here, and that the only way we can judge or punish Yahoo is by users leaving. But a) in many cases Yahoo is the only actor with agency to make a decision, and b) there are other ways Yahoo could be punished for using that agency to make decisions that harm users.

> Your problem is that you disagree with other users - but that's totally legitimate, not everyone has to care about the same things you care about.

No, I don't think that I disagree with other users--I think that many people care about their privacy, they simply a) don't know enough to make pragmatic decisions on how to protect their privacy, or b) have other priorities. And this is beside the point--none of this makes it okay for Yahoo to endanger their users' privacy.


> If people cared more about security, they'd move away from Yahoo after something like this

That's why Yahoo's failure to disclose this immediately bothers me so much.


Maybe the long-run solution is to make the coupling explicit: publicly post the value the company places on an account not being breached. (Ideally, this would work in tandem with some insurance policy that pays out for that amount, to validate that they really do so value it.)

Then, you can choose the provider with a high enough value to make you feel comfortable, in the understanding that higher-valued accounts will cost more.


This would work in many more contexts: The window sticker on my car can include the value they placed on passengers' lives when making cost-benefit trade offs.


It is reasonable, when your estimates are good and you're honest with regulators and customers. Sometimes your estimates are off by a factor of 10.

https://en.wikipedia.org/wiki/General_Motors_ignition_switch...

And you kill over 100 people, lie to regulators, lie to consumers, and end up spending billions trying to rectify the situation (recalls, settling suits, fines).


Yes, using the outcome of a formula to determine your actions generally relies on the formula being accurate.


It also relies on whoever is modeling the reductive, simplistic "cost model" knowing the effect of all the other variables that factor into the company's success. Do these people really think that the legal/compensation costs are the only effect? How many sales did Ford miss out on because they were labeled as the "there is a known issue in this car that might kill you, but until your life is worth more than a replacement part we won't repair it" car company? Did they factor those costs into their revenue model projections? Did they factor in the sag in price-point demand - "Boss, I wouldn't bid the same on that contract; they've shown themselves to sell a known defective product, and we'll open ourselves to legal issues if one of their cars kills one of our customers we're transporting in their vehicles"?

Despite what an MBA will tell you, the world is more complicated than X < Y*Z.


There will always be things you can do that increase safety at a cost, but some of them will necessarily not be worth the effort, or you're forced to spend without bound on ever-more safety to the point that it's not worth using (and which may push people into still-riskier alternatives).

>How many sales did Ford miss out on because they were labeled as the "There is a known issue in this car that might kill you but until your life is worth more than a replacement part we wont repair it"

If you're turning down a company for making such a tradeoff, that's like saying "I'll buy a Ford rather than a GM because people might die in GMs."

You're right that you can legitimately criticize a company for failing to include certain things as costs, but it's not fair to fault them for somehow making this inevitable tradeoff, especially in the belief that you have some alternative provider that isn't.

(An example of such a cost -- one that they can legitimately be expected to include but don't -- would be something like "impact on general perception of risk" or "impact on the reputation of the car industry".)

>Despite what an MBA will tell you, the world is more complicated that X<Y*Z

It sounds more like you're agreeing that it's that simple, but that Z (events worthy of consideration) is not as simple as in typical models.


Which is why actuarial reports have around 2 pages of conclusions and 20 pages explaining the assumptions underlying them.


I also agree with this; security is always in a balancing act with convenience. Yahoo fell too far onto the convenience side on this one, but that debate about security vs convenience is happening everywhere. The issue I've seen is that many companies are bad at doing risk analysis about these choices. That's the bigger issue in my view.


> security is always in a balancing act with convenience

I don't think that's always the case. A whole lot of security can be had with little or no inconvenience, given an appropriate mindset, though one might argue that such a mindset is an inconvenience in itself. :)

> many companies are bad at doing risk analysis about these choices

Amen to that!

I think that having a basic, security aware mindset goes a long way, even if there is very little 'budget' or 'ability' to do inconvenient things.


Philosophically speaking, you cannot improve security without sacrificing usability. What I mean by usability is the capability for someone to do something, not simply convenience for the users themselves. No amount of security can be added without a concurrent decrease in usability, even if that usability is something you didn't expect or want to do.

For example, the user might not see a capability decrease if you use MD5 or bcrypt, but you certainly see a capability decrease because you can no longer see their passwords and you have to do extra work to maintain them securely. Sometimes security decisions are easy, like hashing passwords, because these days no one wants that capability. But sometimes they are not easy decisions.

You can pass a lot of convenience savings on to users by assuming the capability sacrifice yourself (for example, choosing the password hashing algorithm behind the scenes), but you can't do this for everything (for example, mandating two-factor authentication or password resets en masse).

This might come across as pedantic, but it's very important to maintain a mental model this way because it helps you understand risk analysis for more complicated security and usability tradeoffs. Starting from the premise that you can have any security without a decrease in usability is not helpful in that regard.


Your argument is assuming something that I don't believe is true, which is that we're already on the Pareto optimality frontier for security/convenience. It is certainly true that you can not forever increase security without eventually impacting usability, but I don't think many people are actually in that position.

I've improved a lot of real-world security by replacing functions that bash together strings to produce HTML with code that uses functions to correctly generate HTML, and the resulting code is often shorter, easier to understand, easier to maintain, and would actually have been easier to write that way in the first place given how much of the function was busy with tracking whether we've added an attribute to this tag yet and a melange of encoding styles haphazardly applied. What costs you can still come up with ("someone had to create the library, you have to learn to use it") are generally trivial enough to be ignored by comparison, because the costs can be recovered in a single-digit number of uses.
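
A toy version of the kind of helper I mean (just a sketch of the shape, not the actual library we used):

  // escape anything destined for HTML text or attribute values
  function escapeHtml(s: string): string {
    return s.replace(/&/g, "&amp;").replace(/</g, "&lt;")
            .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
  }

  // build a tag from an attribute map and text children; everything gets escaped,
  // so callers never concatenate raw markup themselves
  function tag(name: string, attrs: Record<string, string>, ...children: string[]): string {
    const attrText = Object.entries(attrs)
      .map(([k, v]) => ` ${k}="${escapeHtml(v)}"`)
      .join("");
    return `<${name}${attrText}>${children.map(escapeHtml).join("")}</${name}>`;
  }

  // tag("a", { href: profileUrl }, userSuppliedName) can't smuggle markup in,
  // so the "forgot to encode this one string" class of bug stops being possible

The point isn't this particular helper (the names here are made up); it's that once the safe path is also the short path, people stop hand-rolling the unsafe one.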


"Your argument is assuming something that I don't believe is true, which is that we're already on the Pareto optimality frontier for security/convenience. It is certainly true that you can not forever increase security without eventually impacting usability, but I don't think many people are actually in that position"

It's true that we aren't at the sweet spot yet, but that's what I meant by companies being bad at the risk-analysis judgement of security versus usability.

On your second point, languages have gone through that cycle. Look at Java doing bounds checks. That helps avoid a whole class of security issues, but at the cost of making things that C was able to do easily more difficult. These tradeoffs happen at every layer.


> No amount of security can be added without a concurrent decrease in usability, even if that usability is something you didn't expect or want to do.

It seems strange to describe this this way for something like fixing a memory corruption bug or switching from a vulnerable cryptographic algorithm to a less vulnerable one. The capability that you're giving up is ... potentially breaking your own security model in a way that you weren't even aware was possible?


I think I might not be conveying my point very well. Let me clarify this as succinctly as I can.

Usability doesn't just mean things users want to do. Usability means things anyone (users, developers) can do. By definition, "securing" things means limiting the capability of certain users or developers to do (hopefully) specific things. How efficient you are at this determines whether or not you'll also reduce the capability users or developers want to have when you reduce the capabilities they don't want to have.

To give a concrete example: using a cryptographic algorithm immediately impacts usability along performance and capability axes. Previously, you could arbitrarily read and manipulate that data because it was plaintext. Afterwards, you could not. Now you need to be careful about handling that data and spend developer time and resources implementing and maintaining the overhead that protects that data and reduces its direct usability.
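
A small sketch of that capability change, assuming the third-party Python `cryptography` package:

  # Before: a plaintext field anyone downstream could read or grep.
  # After: opaque bytes, plus a key that now has to be stored, rotated,
  # and guarded - the ongoing overhead mentioned above.
  from cryptography.fernet import Fernet

  key = Fernet.generate_key()
  box = Fernet(key)

  token = box.encrypt(b"555-0123")
  print(token)               # unreadable without the key
  print(box.decrypt(token))  # b'555-0123', only for key holders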

It doesn't matter if you wanted that capability - it's gone either way. That was a trade-off, and it is an easy decision to make, but not all decisions are easy to make. Every security decision can be modeled as a trade-off.


I fondly remember the convenience advantages of plaintext password storage, both as a user and somebody supporting users.

Occasionally I wonder if there are user accounts in my life that are irrelevant enough I'd be happy to buy that convenience advantage with the necessary security risks ... but of course people's tendency towards password re-use makes that trade-off basically unofferable in any sort of ethical way.

At least bcrypt makes it moderately easy to not completely screw up the hashing part.


Although I'm tempted to argue against your view, it ended up reminding me of

http://www.oreilly.com/openbook/freedom/ch07.html

and somewhat relatedly https://web.archive.org/web/20131210155635/http://www.gnu.or...

which tend to support your point.


That's a good example but a bit cherry-picked. I could just as easily point out the opposite with accessing an account. Even if insecure, it still requires a certain amount of information and time up front, plus some login step, just to identify the user. The server will compare that to its local data. Between network latency and server load, this usually takes a few seconds anyway.

Adding a password that gets quickly hashed by the application before being sent costs some extra time - almost nothing, given that libraries are available and CPU cycles are cheap. If the password is remembered, the user has to type it in only once or rarely. The hashing happens so fast that the user can't tell it happened on top of the already slow network. Most of the time the user of this properly designed system will simply type the URL, the credentials will auto-fill, and the exchange will take the same time. No loss in usability except a one-time cost whose overall effect is forgotten after many interactions with identical, high usability.

Likewise, a user coding on a CPU like SAFE or CHERI, tagged for memory safety, in a language that's memory-safe will not be burdened more than someone coding in C on x86. They will be burdened less, because less mental effort is required both to prevent and to debug problems. They could theoretically get performance benefits without the tagging, but only if incorrect software plus much extra work is acceptable. If the premise is that the software must be correct, which requires safety much of the time, then the more secure CPU and language are better and also improve productivity. Easier to read, too.

A final example is in web development. The initial languages are whatever crap survived and got extended to do things they weren't meant to do. So people have to write multiple kinds of code, with associated frameworks, for incompatible browsers, server OS's, and databases. Many efforts to improve this failed to deliver both productivity/usability and security. Opa shows you can get both by designing a full-stack, ML-like language with strong types that makes many problems impossible by default. Easier to write and read, plus more secure. Ur/Web does something similar, but it's a research prototype built on functional programming rather than a production tool.

Conclusion: usability and security aren't always at odds. They are also sometimes at odds in some technical, philosophical way that doesn't apply in real-world implementations. Sometimes getting one requires a small sacrifice of the other. Sometimes it requires a major sacrifice or several.

It's not consistently a trade-off in the real world.

Note: I cheat with one final example. An air-gapped Oberon System on Wirth's RISC CPU uses far fewer transistors and cycles, and far less energy and time, than a full-featured, Internet-enabled desktop for editing documents plus many terminal-style apps. Plus you can't get hacked or distracted by Hacker News! :P


> Likewise, a user coding on a CPU like SAFE or CHERI tagged for memory safety in a language that's memory-safe will not be burdened more than someone coding in C on x86. They will be burdened less by less mental effort required in both prevention and debugging of problems.

In the parent commenter's framework, I suppose the safer language still comes at a cost in terms of the ability to use unsafe programming techniques -- like type punning and self-modifying code.


Hmm. You could use those as examples. There would be cases where type punning might save developer time. There would be cases where self-modifying code might buy you better memory or CPU efficiency. Yet self-modifying code is pretty hard to do, and do right, for most coders that I've seen. Type punning happens automatically in a dynamic, safe language with decent conversion rules. You often only write the conversion rules once, or when you change the class/type, but you do those conversions in your head anyway if you're analyzing for correctness. The difference is you typed them out, with the conversions being mechanically checked.

The examples you bring up seem to be double-edged swords like the others: they can have almost no negative impact or a significant one, depending on context.


Ok, let's not talk philosophy and talk capability-based security with CapDesk instead:

http://www.combex.com/tech/edesk.html

They already demonstrated that integrating POLA at language and security level with simple, user authorizations could knock out most problems automagically. Did a web browser that way, too. KeyKOS previously used that model for whole systems that ran in production on IBM's mainframes with checkpoints of apps and system state every 30 seconds on top of that.

Still think you have to screw usability to improve security? And does it matter that it might be true in an absolute sense of some sort if in practice it might be no different (eg File Dialog on Windows vs on E/CapDesk)?


The point is that not ensuring security also has a cost, one which is harder to see.


> Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X.

> If X is less than the cost of a recall, we don't do one.

This is a reasonable expected value calculation - and not really that controversial. The real issue is that the model for cost isn't quite accurate; an actuary, whose livelihood is based on accurately measuring and accounting for "risk", will tell you that you would need to account for the probable loss in future revenues due to negative customer sentiment. Once you account for that, the cost of a recall is a _much_ better proposition.
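
In code, the difference that extra term makes (illustrative numbers only, not real actuarial figures):

  vehicles = 1_000_000        # A: units in the field
  failure_rate = 1e-4         # B: probable rate of failure
  settlement = 2_000_000      # C: average out-of-court settlement
  recall_cost = 300_000_000

  x_naive = vehicles * failure_rate * settlement  # A*B*C = 200M -> cheaper to skip the recall
  lost_future_revenue = 250_000_000               # churn from negative customer sentiment
  x_full = x_naive + lost_future_revenue          # 450M -> now the recall is the better deal

  print(x_naive < recall_cost, x_full < recall_cost)  # True False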


I know it's not good economics, but if you see a human life as priceless, the numbers don't work out quite the same. I think that's what 'Fight Club' was about. I guess the conversation departs the realm of economics at that point, and becomes one of philosophy and/or religion.


I think I'd challenge the assertion that Fight Club in either medium was about human life being priceless (but I understand you probably meant the quote). Quite the opposite, I'd think.


Seems to me that the character, the author, and the audience find something tragic in a life measured only in calculation, and that they all think there ought to be more to life than what's apparent. 'Priceless' may be a stretch, I agree.


If you've built an organization on numbers based decision making, you can no longer consider anything priceless, because an infinity (especially if there are two competing infinities) will cripple your ability to decide.

Companies run on strictly utilitarian ethics, which is why so many ethical complaints are invisible to them. For example: (Customer) ad tracking is bad for me! (Company) But tally the value of our services, we're clearly in the black!


I would contend that those equations are a bit more nuanced than you give them credit for.

Let's say I hypothetically give you that the Customer sees ad tracking as "bad" (whatever that means ...let's just accept it for argument.)

(1) Then the [Customer] utility function is: (Value from free services) - (Negative experience from ad tracking) + (Possible positive experience from learning about a new product or service from better targeted ads)

(2) The [Company] utility function is: (Value from ad revenue alone) - (Negative feedback on ad targeting) + (Revenue gained from higher ROI on marketing spend resulting in more purchases/subscriptions/whatever.)

In (1), I think people on average don't care about "privacy" related news because users don't see the negative experiences outweighing the other parameters. In (2), the negative feedback on ad targeting isn't really that large at the [Company] level to warrant much change (at least if you leave the echo chamber of HN every now and then.)

In the case of Yahoo, I still hold the hypothesis that they underestimated the (Negative feedback on a breakdown in security) as well as (Positive revenue gained from trust in security.) Then again, I doubt myself because if this were true, Box would be lightyears ahead of Dropbox; sometimes the coefficient on UI _really is_ larger than that of security...?


> In the case of Yahoo, I still hold the hypothesis that they underestimated the (Negative feedback on a breakdown in security)

Yahoo's stock is up (+53% since February, with a small dip in late June). Where is the miscalculation?

Volkswagen is back to positive sales growth, and their stock has recovered 50% since their discovery last September. Their calculation was correct, too.


Ah good point - at least for Volkswagen...Yahoo has other confounding factors (their sale, etc.) but overall the impact is probably a short-term shock with few longer-term lagging effects.


And that's why, if the problem was obvious/known, we need to fine companies enough that X becomes way bigger than the cost of a recall.


If you make cost X so high that X is an existential risk, people/companies will chance it because security isn't binary and "Either way we're fucked if we get a breach".

So then companies just never disclose.


Or it makes the cost so high that the underlying product becomes impractical. I'm pretty happy to live in a world where I can buy a car for less than $100k, even if that car ends up being much less safe than an S-class.


That's true, but a company can only play that game so many times before it catches up to them. "Never disclose" isn't a workable policy because eventually someone will leak the data.

It's also worth noting that you're talking about a hypothetical, but there are real life examples of this sort of security working despite your claim that it won't work. I've worked for HIPAA-regulated companies. It's certainly difficult to meet their requirements, but it's not impossible, and the regulations do have a real impact on the security of the data.

I'm also not convinced that security isn't a binary. You're either secure or you're not, and you're only as secure as the weakest link in your system: that seems pretty binary to me.

A more accurate statement might be that perfect security is prohibitively expensive in many cases. But in many of those cases, data is actually not needed, and is collected because business wants visibility into users, even if that means compromising user security. This divides companies into three camps:

1. Companies where security is cost-effective.

2. Companies where security is cost-prohibitive, but which don't need to collect data.

3. Companies where security is cost-prohibitive, but which need to collect data.

I'd posit that the vast majority of companies are in categories 1 and 2, and that it would be a net benefit to people if all companies in category 3 stopped existing.


> I'm also not convinced that security isn't a binary. You're either secure or you're not, and you're only as secure as the weakest link in your system: that seems pretty binary to me.

You cannot use the phrase "as secure as your weakest link" and then assert that security is binary. You're using terms that indicate varying levels of security.

More to the point, security is clearly not binary. You can support login over HTTP, which is quite insecure. You can support login over TLS which is much more secure. You can support only more recent algorithms over TLS which is more secure still. You can enforce two factor authentication, which adds more security. You can make your clients use certificate pinning which makes you more secure yet. You can allow easy access only from known clients and otherwise make the clients go through some extra authentication steps (secret questions, email verification, etc.). You can do the same for known locations.

Each of these options provides different levels of security. None of them are "secure" in any binary sense.


I think the missing piece in what you are saying is that there's an unspoken question here: "Secure against what?"

Let's use your examples to explain:

> You can support login over HTTP, which is quite insecure. You can support login over TLS which is much more secure. You can support only more recent algorithms over TLS which is more secure still.

Secure against what? If it's password exposure you're worried about, then HTTP is definitely not secure unless some other security is used. But given the attacks I know of against older versions of TLS, I don't think it makes sense to say that older versions of TLS are less secure against password exposure than newer versions of TLS, because the vulnerabilities I know of in old versions don't leak passwords[1]. So HTTP: not secure, TLS: secure, for password exposure. It's a binary whether it's secure for password exposure.

If, however, it's unauthorized access we're worried about, the CRIME and BREACH attacks are usable against all versions of TLS for session hijacking, so we could say that neither HTTP nor TLS is secure against unauthorized access. Again, it's a binary whether you're secure for unauthorized access.

So yes, actually each of these options is secure in a binary sense, when you ask what it's secure against.

Security, as a whole, as I see it, is a big `&&` of all the smaller binary pieces of security that matter for a given product. In reality, for most products, you have to be secure against password exposure and unauthorized access. It doesn't matter if you're secure against one if you're insecure against the other--that's what I mean when I say you're only as secure as your weakest link. So when talking about your security as a whole, it really is a binary: either you're secure or you aren't.

[1] This is for the sake of argument--don't take my word that older versions of TLS are secure against password exposure, as I haven't investigated that claim fully.
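
A toy rendering of that "big `&&`" model (hypothetical predicate names; the threat list is whatever matters for your product):

  def secure_against_password_exposure(cfg):
      return cfg["transport"] == "tls" and cfg["hash"] in ("bcrypt", "argon2")

  def secure_against_unauthorized_access(cfg):
      return cfg["two_factor"] and not cfg["tls_compression"]  # e.g. CRIME/BREACH exposure

  def is_secure(cfg):
      # Secure overall only if secure against every threat that matters.
      return all(check(cfg) for check in (secure_against_password_exposure,
                                          secure_against_unauthorized_access))

  print(is_secure({"transport": "tls", "hash": "bcrypt",
                   "two_factor": True, "tls_compression": False}))  # True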


You're trying really hard to fit this into your binary model. Security is all about managing risk. It's not absolute. TLS didn't change when the CRIME attack was revealed, but it suddenly became less secure because the risk profile changed. But before CRIME, TLS wasn't perfectly secure. There was always the risk that the protocol could have undiscovered flaws, that an attacker could guess the private keys, that a cert authority could issue a valid cert to an attacker, etc.

In a world of imperfect security, talk of binary security is meaningless.


Security is all about managing risk for you because you've already chosen to compromise on security.


Not compromising on security is an unrealistic ideal. A perfectly secure system is a perfectly unusable system.


That isn't something made up for the film; in real life at least one "no recall" decision has been made using exactly that sort of cost/benefit analysis: https://en.wikipedia.org/wiki/Ford_Pinto#Cost-benefit_analys...


Isn't this the case for any business? Strictly speaking, they're profit-generating machines. That's the purpose of regulation: to offset this equation by some amount so that it balances where society collectively deems reasonable. That's the intended purpose, anyway.


  Narrator: A new car built by my company leaves somewhere traveling at 60 mph. The rear differential 
  locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall?

  Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by 
  the average out-of-court settlement, C. A times B times C equals X.

  If X is less than the cost of a recall, we don't do one.

The example reminds me of this discussion[0] between Milton Friedman and a student.

[0]: https://www.youtube.com/watch?v=jltnBOrCB7I


Note that:

1. Friedman positions the student's view as wrong. And changes the question.

2. Friedman argues himself around to the student's argument, without acknowledging this.

3. Friedman never once acknowledges that the problem was that Ford was aware of the risks but chose to conceal them from the public, such that the public was fundamentally unable to make an informed choice.

4. That allowing people to bargain with their own lives leads to numerous other slippery-slope and logically-constrained tragic inevitabilities. Individuals almost always think they can beat the odds. They're almost always wrong.

What cost-benefit analysis almost always fails to consider are the moral and goodwill costs of making a decision which is intrinsically harmful to the customer. Most especially when not informing the customer of the full risks.

That specific clip is among the more prominent reasons I find Friedman an entirely unfaithful and bad-faith debater. He keeps moving the goalposts and using equivocations just enough that unless you're quite attuned to the fact, you'll miss it completely. And that is where he's not lying outright. Curiously enough, his son David does pretty much precisely the same thing.

Neither seem capable of admitting error either, which is the final loss of credibility.


But cost is one - pretty good - way to figure out which branch of that tree to take. You can decide to (arbitrarily?) weight the decision towards the cost of securing the system.

I don't see how it is that security isn't totally analogous to a lighthouse, which is the classic example used to explain public goods. Yet we're expecting Yahoo to underwrite security on its own?

And how is it that we simply let the attackers off the hook? Using naval piracy as a metaphor, the response was rather violent suppression by (primarily the British) Naval forces.

The seas were commons, and pirates were hung from yardarms as a public service.

I could cynically project that "security" is being used somewhat as a make-work program for engineering staff. The concern is that language systems of inappropriate weight may be used simply because they're "secure". Granted, the hardware ecosystem certainly makes this less than problematic.

And I am sure these metaphors break down at some point, but they work for me, for now.


  That's the current mindset of the technological world,
  estimating whether the cost of atoning for the problem is lower
  than the cost of securing the systems.

That mindset can be changed if companies are fined heavily for breaches of customer data.


This was so fking stupid because it's not even how this works. Yes, the manufacturer can initiate a voluntary recall, but there are other paths to recall. Insurance companies are the point of the spear for payouts -- if they think that a particular model or manufacturer has a problem they're going to say something if only to reduce their costs.


That's not just the tech industry, that's more or less the foundation of society: maximizing value. The only aberration here is that Google, not Yahoo, sees that there is a second component of X: [how much the average customer believes they will lose in future security breaches] * [total number of customers].


that fight club example is amusingly cynical, but a true cynic might think it idealistic to believe that high-level decisions are made according to any formula. if they were, what calculations could explain the decision to allocate resources to Katie Couric?


A lot of the good ones left a long time ago.

Y! has been stuck in a rut, coasting, like AOL, for a long time... hence Verizon sees their old white grandparents with email as a stable user base. Most old people won't change email addresses no matter what happens.



To this add what I call "pre-breach failures of imagination". Most often manifesting itself as "Why would anyone hack us?"


It sounds to me like Fight Club just repurposed the Ford Pinto lawsuit.


I'm not a lawyer, but if the "probable rate of failure" passes a certain threshold, criminal negligence should be a consideration. Certainly in the case of vehicles, and also in the case of computer security where lives are at stake - hospital systems, for instance.


XKCD has already updated the live comic about this subject [1].

All panels are available at its sister wiki [2]

In my opinion it is a beautiful work of art, pushing the limits of what the medium allows the artist to do (the medium in this case being comic strips in the webcomic format).

[1] https://xkcd.com/1446/

[2] https://www.explainxkcd.com/wiki/index.php/1446:_Landing/All...


Thank you!

To those who didn't click on the second link, you really should. It is like a journal of the lander, except in webcomic format.


Wow. I had only mild interest in the news and was about to close this tab after I learned what the hell Philae is (without opening TFA), but this comic series made me excited about the project and made me check out the article. Good job, Randall, and thanks!


That comic actually helped me understand much better why the OP article was so important.


Looks like there's an error on that page, and it's only showing a static image.


The first link is "live" in the sense that the panel changes based on the current status of the lander.

The second link, that's the one that has all the panels.


> and that "free speech" is only about forbiding the government from limiting the expression of opinions, not of anyone else.

> This is true, and a big problem.

This is not true, it is a product of people confusing two different things:

> the first amendment to the constitution of the United States

> the unalienable right to freedom of speech

The American tradition is founded on the principle that there are natural rights that can never be given (as people are born with them), only taken away. Among these rights are the rights to freedom of speech and of assembly.

The first amendment (along with the rest) only clarifies the role the founding fathers intended for the government they created: one that shouldn't be allowed to infringe on these rights by legislation or otherwise.

The first amendment doesn't give the right to freedom of speech, it only prevents the government from infringing on it, that's right. But it doesn't follow from that that private companies cannot infringe on anyone's freedom of speech; that is a non sequitur.

Private companies can and do infringe on people's freedom of speech and assembly all the time, and what happened to this reporter seems to be another example of that.

The thing is: it is perfectly in their right to do so, as it is very clear in their terms of service that they reserve that right, the right to control what can and what cannot be published on their properties.

The confusion comes from the fact that most of these companies (social media companies) like to pretend that's not the case, that they are proponents of freedom of speech even when it is inconvenient to the powerful.

For an example of that, see the Arab Spring, which was publicized in those days as "a revolution powered by social media", "a medium where people can freely express their thoughts and exercise their freedom of assembly without government interference".

What we came to realize in the subsequent years is that it was only true because 1) governments underestimated something they still didn't fully understand and 2) there was a temporary alignment between the position of these companies and that of the revolutionaries.

This alignment is over now. Governments worldwide pressure these companies, and succeed in making them remove inconvenient speech from their services (examples abound).

In other cases perfectly valid forms of speech get removed, suppressed, or banned from their services for no other reason than that they don't align with these companies' goals. It's their right to do so and they exercise it.

The important thing is: let's not pretend they are not suppressing speech and curtailing freedom of assembly just because it's their right to do so.

They do curtail freedom of speech (on their premises), they do ban speech that is valid but inconvenient to their goals and objectives. As is their right. It is just that they lose the claim to be (as in the case of a particular service) "bastions of freedom of speech" when they do so.


We seem to be of the same opinion? But you don't seem to think something should be done about it, and I do.


We are mostly of the same opinion including on the need of something being done about it.

My only objections are:

1) freedom of speech has absolutely nothing to do with the government or its ability to curtail it.

It is a natural right, one that everybody is born with and one that cannot be given, only taken away.

2) social media sites are private property of their owners and, as such, they can impose any limit they want within its (virtual) premises.

The solution to the problem is not regulating it (that, in my opinion, would increase the ability to curtail freedom of speech) but to revert to the decentralized web of links that used to define the World Wide Web.

People can self publish and should do it, be it video, audio or text.

People should also, if they want to compete, invest as many resources in trying to be heard as these companies invest in trying to be gatekeepers.

We agree in general, just not on these small points.


A central point of the post:

> The last posting which I made on my profile related to recent events in Europe.

> I wrote that I considered the wave of terror attacks in Germany and France to indicate that a ‘low level Islamist insurgency’ was now taking place in those countries.

> A few hours after placing this posting, my account was ‘disabled.’

It is sad that the promise of a medium that facilitates worldwide free exchange of thought and speech was voluntarily dismantled.

Instead it was transformed into another unidirectional medium tightly controlled by corporations.

It happened because:

- the content creators traded freedom for convenience

- the audience preferred centralization to the web of hyperlinks

There is still time, unlike radio and television there is still no legislation or regulation preventing people from hosting, creating and broadcasting their own content.

It may be hard and it may be almost too late, but it is reversible.

No company will ever live up to the ideals of freedom of speech, assembly or otherwise. Companies have a goal, and it is to advance the objective of their owners.

Anything they may do that gives people the impression that they are fair and good is just an artifact, a momentary alignment between whatever gives that impression and their short-term interests.


According to her own Twitter bio, she "signs" the tweets she sends herself:

Tweets from Hillary signed –H

- https://twitter.com/hillaryclinton


I'm sure there's an autopen analogy here somewhere.


To add to that, the author really seems to have an axe to grind with Linus. The suggested reads in the "More from The Register" links are all accompanied by the same "Linux middle finger" or "Linux Nutella" images:

> Linus Torvalds in sweary rant about punctuation in kernel comments: http://www.theregister.co.uk/2016/07/11/linus_torvalds_in_sw...

> Linus Torvalds releases Linux 4.6: http://www.theregister.co.uk/2016/05/16/linus_torvalds_relea...

An excerpt:

> Torvalds says "I'll start doing merge window pull requests for 4.7 starting tomorrow." Expect that release about two months from now, unless Linus takes a summer break or things go awry in some unpredicable fashion.

> Linus Torvalds wavers, pauses - then gives the world Linux 4.5: http://www.theregister.co.uk/2016/03/15/linux_4_5_released/

An excerpt:

> Linux often caters to esoteric tastes, which is why this time around Torvalds has seen fit to include code that does a better job handling PS/2 mice. For both of you still using those.

> Latin-quoting Linus Torvalds plays God by not abusing mortals: http://www.theregister.co.uk/2016/06/06/latinquoting_linus_t...

> Linus Torvalds warns he's in no mood to be polite as Linux 4.2 drags: http://www.theregister.co.uk/2015/08/03/linus_torvalds_in_no...

> Linux infosec outfit does a Torvalds, rageblocks innocent vuln spotter: http://www.theregister.co.uk/2016/04/27/linux_security_bug_r...

This one is not really about Linus. It sports the same "Linus making the middle finger" image anyway.

> Linus Torvalds fires off angry 'compiler-masturbation' rant: http://www.theregister.co.uk/2015/11/01/linus_torvalds_fires...

All those articles come from the same author: Simon Sharwood: http://www.theregister.co.uk/Author/2488

I'm not commenting on the content or the substance of the linked article, just pointing out an interesting pattern displayed by the author.


From the article:

> In a large room in a nondescript modern office block in Seoul, staff from a recruitment company are staging their own funerals. Dressed in white robes, they sit at desks and write final letters to their loved ones. Tearful sniffling becomes open weeping, barely stifled by the copious use of tissues.

> The macabre ritual is a bonding exercise designed to teach them to value life. Before they get into the casket, they are shown videos of people in adversity - a cancer sufferer making the most of her final days, someone born without all her limbs who learned to swim.

Showing how much worse things could be is a very cheap and torturous way to build appreciation and attempt to increase morale.

> The participants at this session were sent by their employer, human resources firm Staffs. "Our company has always encouraged employees to change their old ways of thinking, but it was hard to bring about any real difference," says its president, Park Chun-woong.

> "I thought going inside a coffin would be such a shocking experience it would completely reset their minds for a completely fresh start in their attitudes."

And also

> He [Park Chun-woong, company president] also insists that his staff engage in another ritual every morning when they get to work - they must do stretching exercises together culminating in loud, joint outbursts of forced laughter. They bray uproariously, like laughing asses together. It is odd to see.

Here is a much better way to improve morale and to prevent the helplessness that leads to suicides:

- pay workers a livable wage

- create an environment where the workday is completely separated from the personal day

- and one where it is possible for the worker to fully live their lives apart from their workplaces

Then none of these charades will ever be necessary again.


I think you misread this from the beginning. As someone from an East Asian background, this feels not like some shady play that the employer forces people into; rather, I think it is close to what catharsis or some sort of confession is in Western countries.

The belief system of the Sinosphere is built on ancestor worship. Every year, families gather together and go to their ancestors' tombs, do some cleaning, and present offerings to the other world, remembering the deceased while appreciating contemporary life. I think this ritual works pretty much the same way; it might feel weird, but the outcome isn't that bad.


I'd almost believe you if it wasn't for the part of the article that read "The participants at this session were sent by their employer, human resources firm Staffs.". I don't see staff members volunteering for this, let alone people signing up for this of their own free will.

I wouldn't necessarily trust the positive feedback from participants either. After all, who would willingly give truthful feedback to an employer that actually believes that this would have the kind of positive effect they seem to believe that it would?


There is a documentary from Vice that talks about this kind of fake funeral in more detail.

http://www.vice.com/video/a-good-day-to-die-fake-funerals-in...

It is used and viewed as therapy by a lot of people. I do agree that people should be given the choice to turn it down.


Might be, but it's a personal experience in that case, not one that should be forced upon employees by their employers. Keep work and spiritual life separated, if at all possible.


Agreed, it should be offered as a program that employees can choose to participate in. But I want to say this is different from the forced laughter or exercises the article lumps it together with; the latter, especially the laughing part, feels pretty creepy to me, much more than this one.


Even 'catharsis' as a Western idea isn't actually that helpful in practice. For example, taking out your anger and feeling 'cathartic' is known to make your emotional problems worse, not better, because you've given it a physical outlet that affects everyone around you.

This is totally a cult-worshipping style setup where the name of the game isn't employee health but ever-increasing amounts of company loyalty. Notice how one of the participants was quoted as wanting to "bring more passion" to their work. People who need this treatment need a bigger life change than some theatrics plus what is most likely an unenthusiastic, if not totally coerced, admission that the bullshit helped.

Finally, don't bring "eastern asian background" into this. You should be able to argue for or against something without using your background to appeal to idiots and racists.


Bringing up his East Asian background isn't appealing to racists. There are differences between cultures and educations, and they are not due to differences between races (i.e. an adopted kid from Korea raised in Europe will have the same culture as his family and not the culture of his origins).

Taking differences of culture into account when talking about practices in different countries makes perfect sense. If there were no differences, then the "strange behaviors" we see reported would not baffle us and there wouldn't be any need for such an article. So I think the fact that eva1984 mentioned his background lets us know that, by having a shared culture, he can give us insights into what's happening that we don't necessarily have. Even if we don't agree with the practice, it's interesting to see how it's perceived there.

-

That said, I do agree with you that overly paternalistic employers and a definition of life centered around the company (leading to overtime, obligatory parties with the company, etc.), as seen in Japan and Korea, are often not very good for employees' mental health and are probably one of the reasons for the increased rate of suicide.

Some people are realizing this (in Japan at least). A bit more than 10 years ago I taught English part time on weekends to retired people in Japan, and one of the things that stuck with me is how many of the men regretted that they hadn't taken more time to enjoy life and do things they wanted before retiring. I had that discussion with them as I was starting a seishain position in a Japanese company and was surprised by the amount of Sabisu Zangyo (unpaid overtime) and the time spent on company dinners etc... They were warning me to be careful and not let myself be eaten by company life.


> Finally, don't bring "eastern asian background" into this. You should be able to argue for or against something without using your background to appeal to idiots and racists.

This x 10. There is being open-minded and there is letting your brains fall out. You can be culturally sensitive, but that should stop when you see victims being stamped upon by a large firm boot, slowly and quietly into the dirt.


Who gets to say if not the people themselves?

Assuming you know better than the people might be the boot stamping you don't see.

(I'm not part of the culture and have no opinion of it. I just disagree that being east asian is irrelevant to something happening involving east asian cultures and religious practices.)


It's South Korea. I don't know why you are assuming they don't get a good wage or have interesting personal lives. Some may work longer hours, but it's really not unheard of in our culture.

It's just a way to get something out of your head and accept the inevitable. The culture is different, so I doubt it could ever work here, but given how many upvotes any story related to depression or suicide gets on HN, SV could probably use something too.


Korean and Japanese group activities and social organization always weird me out. I just can't relate, and it feels totally alien. I understand that most of it comes from a totally different perspective.

Somebody tried to explain to me that East Asians "roleplay" their parts in social hierarchies. It's the job of the boss to patronize and it's the job of the workers to play their role. Officially everything is strict and proper, but in reality there are many accepted ways to break the rules without causing undue fuss. There are lazy workers in Japan and Korea like everywhere else. It's the job of the boss to yell at them like he is really angry, but the workers know that their job security and place in the organization are safe if they do the minimum (compared to the same situation in the US).


That actually sounds a lot like many US offices. Many bosses don't care that their employee browses FaceBook all day at work, they just pretend to care, as long as the work gets done.


I am ending a 2-week work week/vacation week in Seoul. My first time here. The culture is definitely different. The glimpse I have gotten into work life is that it is industrious and punctual (for a white collar job), and long hours for a more manual job. But at night people seem to get along pretty well with coworkers or friends and go out. Even on a Monday night I stumbled into a Korean BBQ joint with mostly-drunken coworkers jovially making fun of their boss (who was present) and having a good time otherwise. To me, Seoul seems to have a ton of character. I've liked it so far, would like to visit again!


Going out for food and drinks with co-workers is indeed a way to relieve stress and socialize; and it is indeed a place where the hierarchy is (seemingly and within certain bounds) relaxed a little. The shadow side in South Korea (and Japan) is that these outings are to some extent implicitly mandatory. It is a part of their traditional business culture, and during the work week you hardly spend any hours at leisure at home. This pattern is often criticized for its detrimental effect on family life and raising children; or even starting a relationship (in Japan in particular).

Might not all be so bad if your wife is a bad cook though. Korean BBQ is delicious.


Whenever I read an article like this, I am so glad for the German labor laws. They might make it harder for employers, but they make it possible to work AND have a decent life.


In Portugal, we have some "theoretically" restrictive labor laws, supposedly very protective towards the employee. Still, every day you'll hear about someone being told that he/she should feel lucky to even have a job, let alone a salary (because, yes, unpaid stuff is also common).

My point being: labor laws are fundamental, but they don't trump a) on one hand, a bad economic situation; and b) on the other hand, a die-hard culture of constantly finding ways around the law.


This, especially the second point. It does not really matter how protective labour laws are when the specialist market is small enough that refusing to sign a mutual contract-termination agreement and waiting to be laid off by the employer would be too risky and detrimental to one's further career path. Word of mouth is a very powerful ally for higher-pay-grade specialists.


You can substitute "Portugal" with "Italy" without affecting the validity of the sentence.


Labor scarcity is usually more important than labor laws for quality of employment.


"I am so glad for the German labor laws."

Labour laws are not so much the issue.

This is a cultural phenom.


I think one influences the other.

And you need a certain age to appreciate them (for example the child holiday thing).


For someone not from Europe; what sort of worker protections would you like to highlight (also what is the healthcare like there)?


Regarding healthcare, this Wikipedia entry sums it up pretty well: https://en.wikipedia.org/wiki/Healthcare_in_Germany

Regarding worker rights: not only do you get a decent amount of holidays (around 30 in most companies, with additional free days for parents), but you can also take up to 14 months of leave after childbirth (both father and mother, but 14 months combined, not each, and one person has to take at least 2 months to get the full amount). Your company has to offer you your job back when you return.

You can't get fired on a whim if you didn't do anything wrong. Most jobs will have to give you a 3 to 6 months notice (depending on how long you were working there).

Companies have to pay at least 8,50 Euro per hour.

The law says that you are not allowed to work more than eight hours a day; however, exceptions are possible to some extent.


The getting-fired part is a double-edged sword though. I have to give notice up to 6 months in advance (I don't know the exact number, it's a bit complicated). In theory I can dissolve my contract ("Auflösungsvertrag") on the spot, but my employer has to agree to that.

It's generally a good idea to have longer periods of job security but for someone who can (in theory) get rehired quickly the laws can get in the way, too.

There's also inflexible stuff that gets in my way quite often. For example I have to rest for 11 hours before I can work again and can only work 11h/day without bureaucratic hassle. I get that it's meant to fight abuse but I'd prefer to have an easy way to waive these rights.


I don't think this would be good idea. Then there would be the danger of employers pressuring their employees to waive their rights: "If you insist on working only 11h/day, we can hire someone else."

I would totally expect this to happen for basically every minimum wage job if you could waive your rights like this.


"I get that it's meant to fight abuse but I'd prefer to have an easy way to waive these rights."

An easy way to waive those rights would mean that effectively no one would have those rights. Quite frankly, it is better to err more on the side of caution in these things. Work will still be there tomorrow.


This is basically the obvious flipside of having these sort of rules. See also, "I wish my business didn't have to follow as many worker regulations so I could make more money faster".


In my case it's more of a philosophical issue. I feel fairly uncomfortable working under those regulations (to the point of contemplating quitting an otherwise very interesting job every now and then) because I have a problem with unconditionally delegating my personal decision-making to some elites that "know best what's good for me" without a chance to conditionally opt out. I do understand the general concerns and that some middle ground is needed to fight abuse, but personally I favor hefty fines for employers that abuse the system and good whistleblower laws/infrastructure.

I think some of this could be solved with different regulations for different kinds of jobs. More "thinky" jobs probably need a lot less regulation than say manual labor.


The 8h/day rule is an average per week if you are working 6 days a week; this is because the maximum work per week is 48h. Usually you work 5 days, so you fall under the rule that you must not be above 48h/week and a maximum of 10h/day.

If you work in shifts, some other rules apply but at the end, you cannot go over the 48h/week on average and have special "recovery" days.

But in Germany you have a lot of agreements for, let's say, metal workers or people in the chemical industry. In some cases, the official week is only 37h or even 35h long.


It's actually even more involved than that. For example there needs to be a 24 hour down period per week and there are mandatory breaks an employee is required to take by law (refusing to take a break, even voluntarily, puts the employer in violation with the law).

I'd say that other than civil servants, people working in the "Handwerk" (a German concept that includes various industrial jobs and skilled crafts, literally "manual labour" but often implying specialised expertise) have it best as they typically work under union contracts and have fairly tight regulations.

OTOH loan workers tend to have it worst: loan workers bypass a lot of the labour laws and often get all of the drawbacks of being self-employed without the benefits of actual free agency. They're often hired to replace permanent employees, so they're sometimes met with hostility from their colleagues and they have no job security and often no way to transfer to a permanent position (both because that's often exactly what they're there to replace and because they're often under non-compete contracts that forbid them from doing that).

Agency work also tends to be pretty bad, especially for designers: you're often expected to work unpaid overtime and there is a lot of pressure keeping people from exercising their rights or asking for a raise. I've routinely seen designers work weekends and massive overtime (think 12 hours, not 10) for months on end. The worst story I heard was of a 32 hour day (working on a regular day until the next morning and then leaving "on time" the next day).

Note that not every agency is like this, but they exist and it's taken for granted that you have to "eat dirt" if you really want to work in the industry. This especially used to be a problem with internships but thanks to some changes to a few laws interns have it pretty good these days (e.g. "voluntary" interns have to be paid minimum wage, making them a lot less cost efficient than they used to be).

Also, of course, almost none of the labour protection laws apply to leadership roles (typically C-level jobs). The reasoning behind this probably being that if you run a company having to take time off can get in the way. Sadly this also means there are some strange corner cases for founders (e.g. you get almost none of the maternity protection).


I've heard in practice it's near impossible to fire badly performing employees in Germany as a result, even if you're a small company. Or firing a bad employee becomes very expensive which is even worse for small companies.

It's probably hindered a bunch of people from starting companies in the region and makes getting a job much harder because employers have to be extra careful.


> I've heard in practice it's near impossible to fire badly performing employees in Germany as a result, even if you're a small company. Or firing a bad employee becomes very expensive which is even worse for small companies.

By tradition, first 6 months of employment are on probation ("Probezeit"), with 2 weeks' notice possible on both sides. Companies can opt out of probation time, though, if they want.


"It's probably hindered a bunch of people from starting companies in the region"

I very much doubt it. If you were really wanting to start a company, you're not thinking, "I'd do this totally awesome thing! If only it wasn't so hard to fire people!"


You start a company with a few people in Germany somewhere.

You quickly realize that, because of investment money and the realities of running a company, running it in Germany isn't that great, and you move the company to the USA.

It happens more than you think.


I'm from the Netherlands. If an employer wants to fire you, he has to follow certain rules. If you have a temporary contract, he can just let you go after that period, so that's a popular method - just hire people on temporary contracts. However, you're only allowed to do this three times, for a maximum of three years. After that, the contract becomes fixed, without an end date. An employer could fire someone for one month and then take him back, but that will not be allowed: if this happens to you as an employee and you get fired the fourth time, you can contest it in court and the judge will award you a fixed contract. If an employer wants to do this, he has to let someone go for at least half a year, and that makes it less attractive to do.

Another option is to hire someone who is self-employed. It has been a popular method since the crisis. Fire an employee, then hire him back. However, if the employee only has one client, you can go to court and prove that in fact nothing has changed, and the judge can reinstate your old contract. I don't know if this happens a lot, but it has happened.

When you have a fixed contract (no end date) and have worked somewhere for five years, the law says the employer can fire you but has to pay you five months' salary. Because of common law practice, this will normally be doubled. He has to prove that he has good reason to fire you. This is in general too little work, or malpractice, or distrust, or something else that disturbed the work relationship. If this can't be proved, the only option is negotiation about extra money to leave. If they can't agree on this, they can ask a judge. That will cost money as well, so most of the time that path is avoided.

Only in the case of theft, or when you're drunk at work or say bad things about your boss or company, can they fire you right there and then without pay. Even then, they have to prove it.

When unemployed, you can get unemployment benefits, normally 70% of your last salary (average over last 6 months), but you have to search for new jobs and prove that. This will last maximum 24 months, after which you go into welfare, which is super, but little money and you have to sell your house etc.


Same in Germany, the temporary contract is something to be used to hire people when there is too much work. It makes sense to close that "Just renew it" loophole.


They are really strict on after hours working. It's now illegal to require you to check email or call in. Some companies have even gone as far as locking inboxes after hours and during holidays.


In France. There is nothing like that in UK.


"- pay workers a livable wage - create an environment where the workday is completely and separated from the personal day - and one where it is possible for the worker to fully live their lives apart from their workplaces Then none of these charades will ever be necessary again."

No, I don't think that's really it.

The suicide issues in East Asia are highly contextualized to those cultures.

They don't live the same way we do.

They live in highly structured and organized societies with very strong social norms and obligations.

Add in some hyper materialism in S. Korea since the war.

The benefit of this social order is very low crime and a fairly efficient society. But the drawback is that a small percentage can't take it.

You know how many gun deaths there were in Japan last year? Like one! Of course it has something to do with extremely strict gun control, but it has mostly to do with their culture.

It's not so much a 'workplace' issue so much as it is a cultural issue.


Imagining how much worse things could be is an effective stoic technique often called negative visualization. Many HN readers probably practice it. Whether or not it's a good idea to use it to extrinsically motivate employees is another question..


It's a standard technique of sociopaths in management roles. See "Implement an 'It could be worse' Program" in the relevant classic (and hilarious) textbook: http://www.demotivation.com/excerpts/7-2.html


> - pay workers a livable wage

A common misconception is that companies choose what wage to pay. Unless you're a massively growing startup infused with millions or billions in VC cash, you're pretty much locked into market forces, just as the consumer is locked into market prices.

> - create an environment where the workday is completely and separated from the personal day

> - and one where it is possible for the worker to fully live their lives apart from their workplaces

They already do this.


In what way are your suggestions better? Do you mean that they are actually more effective, or are you just talking about your personal preference?

Personally, I would have thought of better mental health care and limiting access to firearms and dangerous medications, so your proposal seemed very out of left field.


I agree with most of your comment except with this comparison:

> Try to imagine a USA where someone from Texas would not be able to move to LA or NYC or the other way around. Border controls against movement on every state-border.

> If you find that hard to imagine try to come up with a really good argument why it should be any different within the EU.

Before the recent migratory crisis, free movement of people was not really a pressing issue, even after the inclusion of the former Eastern Bloc countries.

It only took on the proportions currently seen when, in response to the migratory pressure and the reticence of neighboring countries, Germany (unilaterally) signaled that it would accept and welcome the migrants without a clear vetting or logistics process in place.

A more apt analogy would be if NAFTA, originally an economic treaty, evolved into a free movement treaty between the participant countries and then, as the result of some kind of adverse conditions in South America, Canada declared that it was willing to accept the South American migrants who managed to reach its territory.

There would be big migratory pressure on the southern Mexican border (the actual border of that union) and, unable to control or unwilling to deal with the problem on its own, Mexico would start to organize the flow of migrants from its southern border to its northern one, passing the problem along to the next country, the United States.

In that hypothetical scenario the parties opposed to the original conversion of NAFTA from an economic treaty to a freedom of movement treaty would be bolstered by this unforeseen and unrelated event.

Two notes:

1) nowhere in this analogy am I making any statement of opinion or passing any judgement about the situation depicted

2) comparing NAFTA to the EU is more apt than comparing the US to the EU, because the states that make up the latter have a much stronger and older sense of their own national identity than those of the former.


I'm not well-versed in EU politics, but I find some parts of what you say surprising. After Brexit, there were reports of racist signs some idiots had put up. These weren't against Syrian refugees, but rather against a particular former Eastern Bloc country.


There is undoubtedly an uptick of xenophobic reactions by some groups in England, even against Portuguese studying and working there (friends of mine even). There is and there will always be a segment of the population that will behave like that, blaming their problems on the "others".

But I don't believe it would ever reach the current proportions in all of the EU without the current migrant crisis and the lack of proper coordinated response.


There never was and never will be coherence in regard to this. Think of copycat crimes as a similar phenomenon. It's basically a venting of poorly defined anger and frustration against anything "different".


From [1]:

> What is the Google Feed API?

> With the Feed API, you can download any public Atom, RSS, or Media RSS feed using only JavaScript, so you can mash up feeds with your content and other APIs with just a few lines of JavaScript. This makes it easy to quickly integrate feeds on your website.

"To quickly integrate feeds on your website".

The latest API deprecations and product removals (everywhere, not just at google) are part of what I believe to be a growing trend to take away the ability for the end users to be publishers and to bring them back to be passive consumers (like in the old radio / TV days).

[1] https://developers.google.com/feed/


Reminds me of a rant I read recently, on why Apple killed HyperCard.

> The reason for this is that HyperCard is an echo of a different world. One where the distinction between the “use” and “programming” of a computer has been weakened and awaits near-total erasure. A world where the personal computer is a mind-amplifier, and not merely an expensive video telephone. A world in which Apple’s walled garden aesthetic has no place.

> (...)

> The Apple of Steve Jobs needed HyperCard-like products like the Monsanto Company needs a $100 home genetic-engineering set.

http://www.loper-os.org/?p=568

Personally, I lost my faith in Google, Apple and other "hip" IT companies a long time ago. They feed us toys instead of tools. Not sure if they do that on purpose or because they think Moloch/Market demands it, but they do so nonetheless.


Serious question: Why would Apple then create Swift and Swift Playgrounds for iPad in order to get more people to program? Seems like they are giving us some tools.


My current theory: they need more programmers making stuff they can sell for iOS. Popularizing Swift is beneficial to them because it still technically keeps people locked in the Apple ecosystem (a kind of soft lock-in - they've opened the language, but it's still primarily supported by Apple).

Increasing the pool of hirable developers does not otherwise exclude selling computing toys to the general population.

I haven't heard of Swift Playgrounds for iPad before. I've just finished watching its demo - I must say, it looks nice and has some cute solutions for entering code via touch interfaces (like that for-loop dragging thing) that I hope will spread around to other applications. That said, it's an educational app - I can learn some Swift with it. But what can I actually program with it? Can I use it to make my calendar talk with my SMS app? Or with my Bluetooth headset?

That's the thing I complain about when I say that mobile devices are developed as toys, not tools. You're always limited to the functionality provided by the vendor and third-party apps. Apps that don't talk to each other, that restrict your choices to the few operations their authors thought about. Not to mention apps that increasingly want to suck out and monetize all your data, but that's beside the point. On Android we have Tasker, which I believe should be a default component of Android (although it could use some ideas from Swift Playgrounds to improve the UX). But it isn't, and the current trends in mobile, web and desktop suggest it never will be - the end user is forbidden from using the computer; they may only ask their apps for services.


Yes you can.

Swift Playground has full access to the iOS APIs.

https://developer.apple.com/videos/play/wwdc2016/408/

I applaud Apple for this, as it makes the developer experience closer to the Xerox PARC ideas.

EDIT: typo have => has.


I think rather than assuming malice, assume either incompetence or different goals.

The saying I've heard is that most people want elevators, but we've been selling them helicopters and blaming them when they crash.

Someone who is largely computer illiterate can download and run apps without any fear of messing things up. There is essentially no malware for iOS. They can always exit the app. They can always delete the app. If the app wants to get the user's location, or email, or documents, or pictures, it has to ask for permission from the user. The app will not mess with things that the user can't figure out - there are easy ways to check (and in some cases limit) space usage, bandwidth usage, and battery usage. Apps are now safe.

This is an amazing achievement!

However, Apple has not yet figured out how to safely enable development at the same time. They're getting closer, but they're not there yet. It is obvious that that was not their top goal. There are already lots of general-purpose computing devices, and if you want development access to your device, you can get it as a developer.


Think of tools as APIs to use and integrate existing services in their ecosystem into your own. A programming language is not that.


I'm not an iPad user, but hasn't Apple had a long-running policy of forbidding any app which includes an interpreter?

I've seen this mentioned for years by various programming communities (e.g. Squeak Smalltalk), where the only workaround is to buy a Mac, buy OSX, buy the Apple developer tools ("Xcode"?) and pay a $99 subscription fee in order to transfer your own app to your own iPad.


> I'm not an iPad user, but hasn't Apple had a long-running policy of forbidding any app which includes an interpreter?

That policy was rescinded almost 6 years ago: http://daringfireball.net/2010/09/app_store_guidelines and had been instituted only a few months earlier: http://daringfireball.net/2010/04/why_apple_changed_section_...

IIRC there's a policy against running downloaded code (at least automatically downloaded code) in anything but the bundled JS runtime, but bundling an interpreter in an application and running user-provided code is not an issue. You can run a Python app on iOS (e.g. via Kivy) and there are Python interpreters in the appstore.

> buy the Apple developer tools ("Xcode"?)

That's free (with a mac and OSX obviously): https://itunes.apple.com/app/xcode/id497799835

> and pay a $99 subscription fee in order to transfer your own app to your own iPad.

The developer account is only necessary to publish in the store, since Xcode7 and iOS9 you can sideload applications with a regular appleid: http://www.howtogeek.com/230225/how-to-sideload-apps-onto-an...


To be clear, you can sideload to a device without paying, but you still can't use all the features. If you want to make an app that uses App Groups, for example, you have to have a paid account. There's no way to even test it without paying your annual Software Development Tax.


I don't understand why this has been downvoted so aggressively. I didn't say anything inaccurate. You can't develop an app that uses App Groups without a paid developer program account. You can't even test such an app locally on a simulator without a paid account.


I think your info is a bit outdated (and some of it incorrect). Buy a Mac (or ignore the EULA and run OS X in a VM), install the dev tools (free), install the app on your iPad (free). If you want to publish, there's a fee (per year).


They don't forbid interpreters. But they forbid downloading code into these interpreters.

I.e. running what the user types is OK.

(At least as far as I remember.)


You're free to download code now; many of the interpreters come with built-in support for version control.


This is the downside of free services. They can be easy to get started with, but they bite back when they go away. It has been hard to trust Google since they took Reader down.


Sounds kind of like Yahoo Pipes. Is that still around?


Pipes was shut down in fall of 2015.



Is that all it does? I'm sure there are other services out there that do exactly this. Does anyone know of any?


Check out riko [1]. It was developed to programmatically replicate Yahoo Pipes (without the UI). But as it stands, I'm pretty sure it can do most of what the Google Feed API does.

[1] https://github.com/nerevu/riko

(full disclosure: I'm the lead dev.)


Writing a system that (correctly) interprets the zillion flavors of "RSS" out there and presents them with a common API is non-trivial. An 80% solution is pretty easy, and a 90% solution isn't that hard, but a near-100% solution will take you into some very dark and ugly places.
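For a sense of what the easy 80% looks like, here's a minimal sketch in Python using the feedparser library, which papers over a lot of the malformed-feed weirdness for you (the feed URLs below are just placeholders):

    import feedparser  # pip install feedparser

    # Any mix of RSS 0.9x/1.0/2.0 and Atom: feedparser normalizes them
    # all into the same entry structure.
    urls = [
        "https://example.com/blog/rss.xml",
        "https://example.org/news/atom.xml",
    ]

    for url in urls:
        parsed = feedparser.parse(url)
        if parsed.bozo:
            # The feed was malformed; feedparser still parsed what it could.
            print("warning:", url, parsed.bozo_exception)
        for entry in parsed.entries[:5]:
            # Fields are frequently missing in the wild, so never assume them.
            print(entry.get("title", "(no title)"), "-", entry.get("link", ""))

That covers most well-behaved feeds; the remaining 10-20% (broken encodings, invalid XML, nonstandard dates, feeds that lie about their content type) is where the dark and ugly places are, and it's exactly the part a hosted service like the Feed API absorbed for you.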

I don't know of any service that does this as well as the Google Feed API did. If someone else does, please share.


Well, start the service, state that you use RSS library X and that you will keep it updated, and send people over to it for bug reports and contributions.


Ah but RSS is a standard, so where's the problem? One method to recode them all surely?

I jest; I'm the author of http://www.weegeeks.com - I know only TOO well how there is no such thing as a 'standard' RSS feed... (sigh)

So yeah, I get your point.



That theory is nonsense. This API is a tiny thing compared to YouTube and the Android app store, two services that provide publishing opportunities to millions.


Publishing on a third-party platform, with content limited to what those platforms allow, with monetization limited to what those platforms allow, with constant exposure to random nonsense DMCA takedowns.

But I guess the primary culprits of limiting people's ability to be publishers are NATs.


Even crappy home routers can port-forward; the real barrier here is the ubiquitous (in America, at least) prohibition on "hosting a server" on a residential internet connection.


You think everyone serving their own services from their residential networks is the solution? Have you thought through the security implications of that?

I can just imagine the technical support hours I'd have to log when my dad calls me up because his email server is down or his home storage is encrypted by ransomware.


Of course I have; that's why I run pfSense, a separate host for my DMZ, a reverse proxy, an IPS, and don't have DNS pointing to it (marginal benefit, but I don't need it so it's off).

I'm not arguing that people should do this, I'm arguing that the "NAT is a barrier" position espoused by the "evil ISPs are trying to keep us down, man" camp is incorrect.

NAT being necessary because of IPv4 has led to this really useful low barrier to self-hosting. People who grok security and want to do it can, but it's just arcane enough that lots of people who shouldn't self-host are scared off and go to AWS.


How does that work with WebRTC, e.g. as used in messaging apps?


You've lost me, what's the connection between "people's ability to be publishers" and WebRTC messaging apps?

If you want to publish a WebRTC service, port forwarding will work fine and then you'll be bitten by the no servers restriction.


A messaging app doesn't have to require port forwarding; it can use a central server to broker the connection between endpoints. In that scenario, the "server" is on the internet, not on the residential connection, but the data comes from the endpoints.
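To make the "broker" idea concrete, here's a toy rendezvous-server sketch (plain Python asyncio, no actual WebRTC; the room-id line protocol is made up for illustration). Two clients connect with the same room id and the server just forwards their messages to one another. In real WebRTC the forwarded messages would be the SDP offer/answer and ICE candidates, and the peers would then try to connect to each other directly, with the media flowing between the endpoints rather than through the broker:

    import asyncio

    ROOMS = {}  # room id -> (reader, writer) of the peer waiting for a partner

    async def pump(src, dst):
        # Forward opaque signaling lines one way until the sender hangs up.
        while line := await src.readline():
            dst.write(line)
            await dst.drain()
        dst.close()

    async def handle(reader, writer):
        # The first line a client sends is the room id it wants to meet in.
        room = (await reader.readline()).decode().strip()
        if room not in ROOMS:
            ROOMS[room] = (reader, writer)  # first peer waits
            return
        peer_reader, peer_writer = ROOMS.pop(room)
        # Relay in both directions until either side disconnects.
        await asyncio.gather(pump(reader, peer_writer), pump(peer_reader, writer))

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 9000)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())

The publicly reachable piece is just a dumb little matchmaker; the payload still originates from the endpoints sitting behind their residential connections.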


The epiphany the author of the article arrives at is a useful one, but one can wonder whether it is really a surprise.

Of course Facebook probably has a lot more information about us than we volunteered to their service.

Be it via other properties (WhatsApp, Instagram), like buttons, share buttons, or Facebook Connect, they have a lot of other avenues to collect info about us, and they do, even if they don't openly use it.

Much like in WarGames, the only winning move would be not to play, although we started playing way before we even understood what the game was.


I do accept it completely. But how can it suggest some other person's data for someone else? I might have wondered how they got my number, but I will be shocked if it suggests someone else's number for me.


I think they're banking on user behavior being as follows:

If the correct number is suggested, you get slightly creeped out, but confirm. If the wrong number is suggested, you get slightly outraged or puzzled, fill in the right number, and confirm.

It's sort of a natural A/B test: will people respond better to the correct number, or the wrong number? And they don't even have to control the A/B population, it's just whatever their reverse lookup API returns.

