> LinkedIn argues that imposing criminal liability for automated access of publicly available LinkedIn data would protect the privacy interests of LinkedIn users who decide to publish their information publicly, but that’s just not true
Protect them from what, your unlocked front door? [0][1]
Is it even comparable to an unlocked door, though? To me it seems a lot more like leaving something on the front of your house and trying to prosecute when someone takes a picture of it.
Nothing is removed or destroyed, and nothing was hidden or publicly unavailable.
This right here, folks. This is how I would prefer government worked. Imagine putting the liability back on the corporation for granting access, because its in-place "protocols" were what approved it.
I recall seeing in the wild an HTTP User-Agent string that included a EULA for the server, stating essentially that the server operator, not the client, was on the hook for any BS if it failed to immediately close the connection.
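Something like this, presumably; a minimal sketch only, where the pseudo-EULA wording and the httpbin.org echo target are invented for illustration, not the string described above:

```python
import requests  # third-party: pip install requests

# A hypothetical "reverse EULA" smuggled into the User-Agent header:
# terms addressed to the *server operator*, who "accepts" by not
# closing the connection. (Legally dubious, but a fun inversion.)
TERMS_UA = (
    "TermsBot/1.0 (+by serving this request past this header, the "
    "operator agrees the client bears no liability; close the "
    "connection immediately to decline)"
)

resp = requests.get("https://httpbin.org/user-agent",
                    headers={"User-Agent": TERMS_UA})
print(resp.json())  # httpbin echoes back the User-Agent it received
```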
Well, there is precedent for that, at least in the EU. You are not legally allowed to take a photo of the Eiffel Tower at night, because the arrangement of bulbs is considered a work of art and is thus copyrighted.
The linked Snopes article that they use for this viewpoint is badly worded. Although the headline claim is 'It is illegal to take photographs of the Eiffel Tower at night without explicit permission', nowhere in the text does it describe the act of taking a photograph as being illegal. It is all about publishing your photos and sharing them with others.
In that article, and the Snopes article it links to, it is implied that it is illegal to even take the picture. But I fail to see where it is actually stated that taking the picture is illegal. I can understand the copyright claim on publishing said photos, because that's actually the case for lots of things that can be viewed in public, but not on taking the photo itself.
I really wish articles would refrain from potentially untrue clickbait headlines, but oh well.
My account was probably in that password dump. LinkedIn has yet to reach out to me, but it will still spam me about people who are not on LinkedIn.
Side rant:
LinkedIn is a piece of crap in societal concept and implementation. Recently I was so frustrated trying to remove old connections that I simply deleted my account.
Warning: I am going to be crude at this point: LinkedIn is an HR circle jerk of pointlessness.
If I leave my front door to my personal residence unlocked, and someone comes to the front door, opens it, and walks inside without permission --- is that illegal?
If you don't have a legal right to be on a piece of property, in a given structure, or in a vehicle, you're trespassing.
If you used force to gain access to the property, vehicle or structure, it will often be considered breaking and entering. Typically, these laws use a very loose definition of "force" which includes opening an unlocked door.
If you leave your door ajar, it's just trespassing. If you had to open the door, it's probably B&E even if you didn't break anything to do it.
What about walking up to someone's door and knocking to see who's home? What if there is a picket fence around the yard with a latched gate that you have to open to get to the front door?
In the only jurisdiction where I've actually read the trespass law, it stated that a "legal fence" was sufficient to indicate no trespass, and that a "legal fence" was any number of acceptable structures that were at least 4 feet high.
In that context, a wall of a house being at least four feet high, would carry an implicit "No Trespassing" sign on it, but the picket fence would not. However, if the property had an obvious path to an entryway, then walking up that path to the entryway was not trespass. So walking through a picket fence with a low-latch would not be trespass, unless the pickets were four feet high, or if the latch was locked.
To the best of my knowledge, no - your place of residence is considered to be private property and thus there is an implicit "no access without authorization".
I think a better comparison would be to private property that is open to the public (such as a commercial store), where there is an implicit "access allowed until revoked".
I think that realistically, there are strong parallels to this being a customer/company dispute over who has access to the company's store. The door (HTTP protocol) has to be walked through for the customer to see the wares (LinkedIn profiles) and can be guarded by security (some form of authorization).
I think the question being asked is a valid one: should a company have the right to bar access to otherwise public information if the customer is not tampering with its systems? If so, to what extent? If undesirable robots can't be turned away, what about DDoS traffic? What forms of flow control remain legal in that case? (The web's existing, purely advisory door policy for robots is sketched below.)
I'm honestly curious what the courts decide and how that may impact other websites that have tried to combat scraping, such as Craigslist.
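On the "turning robots away" question, a minimal sketch of that advisory door policy, using only the Python standard library; the profile URL is a placeholder:

```python
from urllib import robotparser

# robots.txt is the advisory "security at the door": it tells
# well-behaved robots what is off limits, but nothing technically
# stops a scraper that chooses to ignore it -- which is the dispute.
rp = robotparser.RobotFileParser()
rp.set_url("https://www.linkedin.com/robots.txt")
rp.read()  # fetches and parses the site's robots.txt

url = "https://www.linkedin.com/in/some-public-profile"  # placeholder
if rp.can_fetch("MyScraper/1.0", url):
    print("robots.txt permits fetching:", url)
else:
    print("robots.txt disallows fetching:", url)
```

Rate limiting and IP blocking are the enforceable versions of the same policy; the legal question is which of them a site may back with the CFAA.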
Depends on the person (stranger vs close uncle vs not close uncle, etc.), but in general, yes it is. It's also illegal in some states to leave your keys in your running car. It's still illegal for someone to get in and drive off.
Not if it's a personal residence. Entering someone else's property is trespass unless you have license. When private property is open to the public, there is an implied invitation to the public to enter, so you have license to do so unless it's revoked. With a personal residence, however, there is no implied license for strangers to enter (though there might be based on the parties' relationships or prior dealings).
No, you just think it is. The intruder must be there to commit a further crime, usually a violent one.
- An intruder must be making (or have made) an attempt to unlawfully or forcibly enter an occupied residence, business, or vehicle.
- The intruder must be acting unlawfully (the castle doctrine does not allow a right to use force against officers of the law, acting in the course of their legal duties).
- The occupant(s) of the home must reasonably believe the intruder intends to inflict serious bodily harm or death upon an occupant of the home. Some states apply the Castle Doctrine if the occupant(s) of the home reasonably believe the intruder intends to commit a lesser felony such as arson or burglary.
- The occupant(s) of the home must not have provoked or instigated an intrusion; or, provoked/instigated an intruder's threat or use of deadly force.
Well, 'breaking and entering' in the US requires that something (e.g., the door) actually be broken in the process of entering the house... otherwise that charge doesn't apply.
Fun aside: breaking and entering is referred to as such in English Common Law because criminals used to bust through the wattle and daub walls to break in, thus housebreaking, or breaking and entering. [1]
> I'd also note that these companies are barely (if ever) held liable for life-compromising hacks on their platforms.
You do know it is impossible to stop all cyber attacks? It's always a matter of when, not if. Zero-day attacks are developed every day, and not even the best-funded cyber security systems can thwart them. The geniuses are on the offensive side; if they want in, they will get in.
The industry is held to no standards at all. You can keep plain-text passwords in your databases, do no testing at all, and be incompetent in a million other ways. I usually get downvotes when I say this, but by now there needs to exist certain regulation of commercial software and software-based services. It should be ensured that certain practices are followed in security and ethics: do you take the basic, well-known precautions against the well-known attacks? Do you respect your users' privacy at least as much as the law requires? Do you follow the terms and conditions you declare?

What we need is CE marking for software, and it's sad that I can ensure my cheese comes from a certain town and is produced from the milk of cows fed a certain diet, but not whether Twitter (or any other commercial website) hashes and salts my password, or actually uses basic precautions against CSRF and the like. These companies should be obliged to get their software audited by third parties, and there should be a way to tell whether they are really approved as maintaining a certain standard.

I do understand and share the hacker culture, and appreciate how it's possible to spin off a start-up website business on the internet, but business is business. You don't become exempt from regulations when all you run is a tiny B&B with 2 rooms. Similarly, as soon as you're a company selling online services, regulations and standards should kick in, because by now those online services are no less important than the food business.

You say it's impossible to stop all cyber attacks. Then, since it is also impossible to stop all burglary attempts, should banks just deposit their money in random apartments where all the security is a wooden door? Fire all the security guards because it's impossible for them to survive all the guns out there? Companies like LinkedIn are no different from banks inasmuch as they hold deposits not of our money, but of our personas. They should actually be more cautious, because while money can be replaced, nobody can get a new self.
> It should be ensured that certain practices are followed in security
Let's not legislate specific practices.
Imagine if we had security legislation from 1995 to follow when programming today. Imagine trying to explain to senators why last year's XSS protection rules need updating. Imagine Oracle lobbying to get their database enshrined as the "security-compliant" one.
The law should focus on outcomes: if a site gets hacked and people are harmed, the site should be penalized.
"Security compliance" is about how you use a given database, not which one you happen to use. You can securely (but inefficiently) store credentials in a plain text file.
WRT some defences becoming outdated over time: the law probably would not be two decades behind, but a couple of years at most. Even then, ensuring that much is better than nothing.
People need tools to judge if they can safely use some product, and that's why standards exist. Otherwise companies are going to continue to screw us until they drop the ball.
> WRT some defences becoming outdated over time: the law probably would not be two decades behind, but a couple of years at most. Even then, ensuring that much is better than nothing.
Not necessarily. What if the law mandates use of, say, an encryption algorithm that has been cracked? You can't move to a new one without breaking the law.
Larger organizations use ISO-27001 and SOC-2 to audit this kind of stuff. But even so, sometimes the devil is in the details and it's possible to comply with the letter of the regulation while still being unprepared for the kinds of attacks that your service attracts.
Thanks, I'll look into them, but are there any compulsory standards anywhere? AFAIK this is entirely optional, i.e. left to the good will of the company.
Oh thanks. I guess you're referring to GDPR. I'll take note to research this in the future and have found some resources after seeing your comment, but I'd fancy some links if anybody has them that elaborate on this topic.
Certain industries are regulated, although the regulations are not consistent. It is not uncommon for jurisdictions to legally require protections on electric-grid control equipment. For example, in some places in the US, servers that can ultimately effect a large-scale change in power-generation equipment (such as switching the configuration of a power plant) must have anti-virus software installed on them (NERC-CIP).
>You do know it is impossible to stop all cyber attacks?
This is a fallacious argument, specifically the Nirvana Fallacy. Perfection not being achievable in no way means that there can't be standard best practices that are a minimum requirement, nor that liability cannot still exist. Certain types of cyberattacks are in fact possible to stop perfectly, merely by virtue of not holding onto information at all. As a trivial example, there should be no plaintext password leaks (or even easily brute-forced password leaks) at all, ever. Adaptive hashes and key stretching have been a thing since the dawn of security: Robert Morris described crypt for Unix password storage in 1978, and bcrypt dates from 1999. There has never been a reasonable basis for plain text or raw fast-hash primitives to be used, yet they have been. In no other industry dealing with these kinds of privacy and safety concerns is that sort of practice considered acceptable, nor should it be.
Holding personal private information long term should fundamentally be considered a liability, because it's not necessary; it's a commercial choice. It can't be hacked if it doesn't exist. If businesses choose to hold it, they should also take reasonable steps to protect it and accept liability for failures. That's the natural balancing flip side of them profiting from its use. If they're allowed to turn the costs of holding it into externalities, that distorts the market.
From my random perusal of the various reports of compromises over the last few years, my impression is not that organisations tend to get hacked using the latest zero-day vulnerability, but rather that organisations get hacked because they have glaring security holes that you could drive a double-decker bus through.
For example, bcrypt has been around for how long now? And don't almost all the reports of hacks report that a database was lifted with usernames and passwords either in plaintext (for the love of all that is holy) or hashed with unsalted SHA1, or similar?
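To make that concrete, a toy sketch of why unsalted fast hashes fall so quickly (the "leaked" hashes here are just SHA1 of passwords from any wordlist):

```python
import hashlib

# A "leaked" database of unsalted SHA1 password hashes. Identical
# passwords always hash identically, so one precomputed table
# cracks every matching account in the dump at once.
leaked = {
    "bob": hashlib.sha1(b"123456").hexdigest(),
    "eve": hashlib.sha1(b"letmein").hexdigest(),
}

wordlist = [b"password", b"123456", b"letmein", b"qwerty"]  # stand-in dictionary
table = {hashlib.sha1(w).hexdigest(): w for w in wordlist}  # one-time cost

for user, digest in leaked.items():
    if digest in table:
        print(user, "->", table[digest].decode())  # bob -> 123456, eve -> letmein
```

Per-user salts force the table to be rebuilt for every account, and an adaptive hash like bcrypt makes each rebuild prohibitively slow.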
I wish there was a "web security checklist" where if you ticked all the boxes, you can be pretty sure you have the well-known holes covered. This is why web frameworks are really useful, the decent ones get you way ahead in securing your application from the most common attacks. But if you self-bake, then you have to manage the entire complexity of the web platform.
While I agree, as a CTO I would be terrified if a data breach could hold me personally liable. It'd be like a Director of Security at a bank being liable for their bank being robbed with a tank.
But at the same time there is a line. I would be for holding companies liable if, for instance, the data gets out there and you find it is entirely unencrypted and the passwords are MD5 hashed or plain text. There has to be a baseline.
Mistakes should not be punished as long as there is not also negligence.
The Director of Security at a bank should at least be fired if their bank is robbed by a guy brandishing a banana. I'd speculate that that's the nature of most data breaches: amateur attackers taking advantage of grossly incompetent security.
No one said anything about holding the CTO personally liable. The idea is to hold the company liable. This makes sense because the company is in the best position to prevent the bad outcome. If the company is always liable, it can find an optimal balance between the costs of security and the costs of breaches.
If the company is only liable when negligent, it is incentivized to minimize the cost of security to the bare non-negligent minimum. This pushes all the costs onto the people whose data are compromised. These people are not in a position to spend small amounts of money to dramatically lower the expected costs of breaches, so they just end up paying huge costs that cannot be mitigated.
The problem isn't that someone is getting IN; it's that the company throws up their hands and says "tough sht."
Or, in a worse case, when Equifax puts up a compromised site for finding out whether you were hacked, one that requires a significant amount of your SSN and other personal details.
> it's that the company throws up their hands and says "tough sht."
What exactly is your solution to the problem? You are more or less complaining without providing any insight into addressing the issue, and without knowledge of the threat landscape.
You also can't stop all failures of infrastructure, but outside of computing, anyone calling themselves an engineer is generally required to hold to various ethical and professional standards or have their work signed off by someone who is.
It's not impossible to stop most, though. Hacks like Sony, Equifax, LinkedIn, and many others are the result of what should be criminal negligence, i.e. not encrypting sensitive personally identifiable information.
Instead of investing in securing their customer data, these companies pad their bottom line. So yes, they should be held accountable for failing to follow basic, industry-standard data-protection practices.
Silly; it's impossible to stop all murders, therefore we shouldn't bother with making it a legal liability.
If the criterion is that it must be possible to stop all instances of an action before making it a legal issue, then we should just shut down all the prisons.
[0] "Hackers selling 117 million LinkedIn passwords" http://money.cnn.com/2016/05/19/technology/linkedin-hack/ind...
[1] https://en.wikipedia.org/wiki/2012_LinkedIn_hack