
> So you are defending this as ethical and not something to obviously be fired for?

Not at all - you're presuming my argument. I certainly have opinions on that subject and we could talk about them if you want - but I'm not arguing that it is or isn't ethical. What I'm saying, specifically, is that by definition it isn't malware, and it's false to claim it's "basically malware." That "basically" does a lot of work, and it serves to put any ethical argument into a losing position from the very outset by forcing the other person to first defend malware (an untenable position) and then to make their point.

> The mechanism was there, but this person abused it. Deliberately.

We could debate whether or not it was "abuse" - but the NLRB seems to disagree (the complaint is up, if you want to read it [1]). I think that carries a lot of importance.

> to do what they knew was absolutely not allowed nor what the system was designed for, abusing the system.

Again - you're claiming this was against the design of the system. That makes it hard to debate in good faith, because by all accounts the system operated exactly as designed! It was designed to show a notice to employees when visiting certain sites, and it was installed and approved by Google management before the incident happened. If you keep claiming it wasn't "what the system was designed for," then you're not arguing in good faith. You're trying to paint it as malware, as something bad by design - but on a technical level, it operated precisely the way that Google management intended. That can't be malware, unless it had some other effect in addition (which it did not).
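To make that distinction concrete, here's a purely hypothetical sketch of what a "show a notice on certain sites" mechanism can look like in a browser extension content script - the site pattern, message text, and structure are my own illustration, not what Google's actual tool did:

    // content-script.ts - hypothetical illustration only
    // Mechanism: show a banner when the current page matches a configured rule.
    // Only the `rules` data is configuration; the code path never changes.

    interface NotificationRule {
      hostPattern: RegExp; // which sites trigger the notice
      message: string;     // what the notice says
    }

    // The management-approved behaviour lives entirely in this list.
    const rules: NotificationRule[] = [
      {
        hostPattern: /(^|\.)internal-tool\.example\.com$/, // hypothetical site
        message: "Reminder: use of this tool is governed by internal policy.",
      },
    ];

    function showBanner(text: string): void {
      const banner = document.createElement("div");
      banner.textContent = text;
      banner.style.cssText =
        "position:fixed;top:0;left:0;right:0;padding:8px;background:#ffd;z-index:99999;";
      document.body.appendChild(banner);
    }

    for (const rule of rules) {
      if (rule.hostPattern.test(window.location.hostname)) {
        showBanner(rule.message);
      }
    }

The point of the sketch is that changing the message (or the sites it appears on) is a configuration change; the code path - match a site, show a notice - keeps doing exactly what it was built to do, which is why "malware" is the wrong word.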

The debate is really about "is the message she configured it to display right or wrong", but we can't have that if you keep insisting that she coded some malware and surreptitiously installed it on people's computers. That's just mud-slinging - it's honestly a lie. It isn't truthful. Nobody - not even Google - is claiming that it is malware.

I'm more than happy to debate the ethics of the message she configured or the ethics of "changing the configuration without management approval" or even "disagreeing with what your managers tell you to believe," but I can't honestly engage in debate about this if you keep claiming it's "malware."

> Who's to say next time there would not be JS reporting who goes where? Maybe there's a counter for how many times this was shown?
> What if they'd made a mistake? What if they'd broken browsing for the whole company?

What if she transformed into a werewolf and ate our babies?! We should grab some torches and run her out of town!

I say that because this is the textbook definition of a "slippery slope" argument. Some slopes truly are slippery, but I don't see any reason to answer those questions - there is no reason to believe she intended to do any of those things, and there's no reason to believe she didn't test out her change like any good engineer.

1. https://cdn.vox-cdn.com/uploads/chorus_asset/file/22140676/C...




> Again - you're claiming this was against the design of the system. That makes it hard to debate in good faith, because by all accounts the system operated exactly as designed!

That's like deliberately pulling the fire alarm when there's no fire and then saying the system worked as designed. Yes, it worked: the people got out and the fire trucks arrived.

No - the fire alarm system wasn't put in place so you could get out of taking that test.

I'm willing to bet that from now on it will require a two-person system to change anything like this again, because of the abuse. And that makes the system worse. It's as if schools required two people to independently pull the fire alarm at the same time in order for it to go off.

> The debate is really about "is the message she configured it to display right or wrong"

Absolutely not, no. Again, see my analogy about someone replacing the Google front page so that it no longer has a search box, only a copy of the US constitution.

How could this POSSIBLY be about the truthfulness of the message?

> What if she transformed into a werewolf and ate our babies?! We should grab some torches and run her out of town!

If you don't think this kind of change involves risk, then I'm sorry, you don't understand complex systems.


> That's like saying you deliberately pulled the fire alarm when there's no fire. Yes, the system worked as designed. The people got out and the fire trucks arrived.

So - this is a good point! Honestly this is what I think we should be debating: "was it right or wrong to pull the proverbial fire alarm?"

That's why I've stuck so hard to my point - if we inaccurately compare it to malware, we never even get to these interesting ethical questions (we'd instead be stuck in a shouting match about "it's bad because malware is bad!").

> I'm willing to bet that from now on it will require a two-person system to change anything like this again, because of the abuse. And that makes the system worse.

You're probably right about the effects - management has a tendency to lock things down when this happens. I think that's tragic, but I don't personally lay all the blame at her feet for it. Google management is ultimately capable of making a different decision, and they do not need to take that course of action.

My gut tells me that they will take that course, based on what I've seen management do at other companies. I guess I just have to point out that their response is a choice, and it could be different.

> It's as if schools required two people to independently pull the fire alarm at the same time in order for it to go off.

In a way, yes. But it also shows something else - schools would never actually do that, because it's absurd and makes things more dangerous. Instead, a school would focus on "why did this person pull the fire alarm in the first place" and address that instead of locking down the fire alarm.

Google management could do the same, in our analogy. But... that's a lot harder, so I imagine they'll take the easier route.

> Absolutely not, no. Again, see my analogy about someone replacing the Google front page so that it no longer has a search box, only a copy of the US constitution. How could this POSSIBLY be about the truthfulness of the message?

I apologize - I think I wasn't clear enough with my words. I meant "right or wrong" in the sense of "ethically right or ethically wrong," not in the sense of "is it factually correct or factually incorrect."

> If you don't think this kind of change involves risk

I didn't say it didn't involve risk. I was saying that it's a "slippery slope" argument because we have no evidence to suggest that she didn't take the risks into account, and we don't really have any evidence that the change she made could have realistically led to the outcomes you mentioned. We have to fill in a whole lot of blanks before those outcomes seem probable, which is why I called it out as a slippery slope. Not because it's completely impossible, but because it doesn't seem probable based on what we know.

> then I'm sorry, you don't understand complex systems.

I know sometimes comments get heated; and I know I've sometimes said things like this in the past. But that's just really unkind. You and I don't know each other, and I don't think it's appropriate to insult each other's intelligence like that. If you're interested in my career history and want to know about the complex systems I've worked on, I'm happy to share it.

But insulting me in your argument isn't appropriate. If you don't want to discuss this stuff further, that's fine. No need to be needlessly cruel.


I didn't mean to insult. I read your comment as the defense "sure, she was driving drunk, but nobody actually got hurt!". I think it's fair (and for drunk driving it generally is the case) to judge the action at least partly by the potential harm that was avoided only by luck.

I don't think it's a slippery slope to point out that it could have had unintended side effects in addition to the intended effects (which were unwanted by Google).

And unlike with drunk driving, complex systems break every day from an honest "oh wow, I was sure this couldn't cause an outage".

When someone has honest intentions there's no blame[1], but when someone doesn't, then yes, they are to blame for any outage it caused, and for the damage that would statistically happen once in every N times someone did this.

[1] including if someone bypasses an 'annoying' safety feature. Because it should be set up such that the safest way is the easiest way.


Reading the complaint (thanks for the link), the action in 15(a) is part of a description of what happened, as one piece of a larger whole.

I'm talking about 15(a). IANAL, but it's not clear to me (even reading section 16) that the NLRB is saying that any amount of sabotage and circumvention of access systems is authorized in order to spread information about workers' rights. Nor do I hope a court rules that it is (imagine the precedent - well, you don't have to, because I've already given examples), and I can't understand how anyone would think it's remotely ethical.


I'm interested to know the outcome of the case next year (if it's not sealed, or something), because I think you have a good point - the NLRB seems to be making a really broad argument.

I'm assuming that their logic to support that claim is more nuanced. I don't know, of course, but my read of the document is that it's charging Google with violations and not necessarily outlining all of the exact reasons that support them. Akin to being charged with any other crime, perhaps - all of the prosecutor's evidence and arguments aren't laid out from the very beginning.


Sections 7-15 are laid out as facts, not necessarily as individual allegations of misconduct by Google. Seems like a reasonable starting point for a negotiation.



