I created software that call center agents used to bid on “bathroom” break time slots; it kept track of who was on break and actively punished those who didn’t follow the rules. Agents with higher performance who took fewer breaks were rewarded with higher priority. If an agent didn’t come back from their break, a security guard would automatically be dispatched to find them. For the same company I also made software that reduced those same call agents to numbers and effectively automated the layoff/termination process. It would contact security with orders to have people escorted out, and had a sinister double-verification process that would check that the agent had actually been fired, or else the responsible security guard would be punished via the same point system. Everything was done via e-mail, came from “System”, and at the time used fancy HTML e-mail templates that looked official. I would frequently hear people talk, with a chill in their voice, about how they had received a “System e-mail”, not knowing I was the one responsible. Even people I ate lunch with sometimes didn’t really know. Embedded in each e-mail was a countdown timer to create a sense of urgency to do whatever was being asked before a “punishment” was applied.
After an agent had been terminated, their punishment points would decay over time until they reached zero (or another configurable threshold, depending on how desperate the company was for warm bodies), at which time an e-mail would be sent to their personal address (collected during the application process), inviting them to “re-apply”. Being an early telephony company, we would also send them a robo-call with the “good news”. This process was known as a “life-cycle”, and in certain labor markets it was common for employees to have many such life-cycles. Another way employees could stave off automated termination was to work unpaid overtime, which reduced their points per unit of overtime worked. Everything was tracked to the second, thanks to deep integration with phone switches and the adoption of the open source Asterisk CTI.
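The mechanics described above (points decaying over time, a configurable rehire threshold, unpaid overtime buying points back down) can be sketched roughly as follows. This is a minimal illustration, not the real system: every name, rate, and threshold here is an assumption.

```python
from dataclasses import dataclass

# Hypothetical sketch of the point-decay "life-cycle" described above.
# DECAY_PER_DAY, REHIRE_THRESHOLD, and POINTS_PER_OT_HOUR are invented values.

@dataclass
class Agent:
    name: str
    points: float            # accumulated punishment points
    terminated: bool = False

DECAY_PER_DAY = 1.0          # assumed linear decay rate
REHIRE_THRESHOLD = 0.0       # configurable; raised when the company needs warm bodies
POINTS_PER_OT_HOUR = 0.5     # assumed credit per hour of unpaid overtime

def decay(agent: Agent, days: int) -> None:
    """Punishment points decay over time, never dropping below zero."""
    agent.points = max(0.0, agent.points - DECAY_PER_DAY * days)

def work_overtime(agent: Agent, hours: float) -> None:
    """Unpaid overtime buys down punishment points before termination triggers."""
    agent.points = max(0.0, agent.points - POINTS_PER_OT_HOUR * hours)

def eligible_for_reapply(agent: Agent) -> bool:
    """A terminated agent whose points fall to the threshold gets the
    'good news' e-mail and robo-call inviting them to re-apply."""
    return agent.terminated and agent.points <= REHIRE_THRESHOLD

fired = Agent("agent-042", points=10.0, terminated=True)
decay(fired, days=10)               # ten days pass: 10 - 10 = 0 points
print(eligible_for_reapply(fired))  # True: the "re-apply" invitation goes out

active = Agent("agent-007", points=4.0)
work_overtime(active, hours=8)      # 8 hours of unpaid OT erases all 4 points
```

The chilling part is how ordinary this looks as code: the “life-cycle” is just a decay function and a threshold comparison.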
This Orwellian automation terrorized the poor employees who worked there for years, long after I left, before it was finally shut down by court order. I had designed it as a plug-in architecture, and by the time it was shut down there were many additional features, orders, and punishment_types.
I was just starting out in my career and was too preoccupied with the task at hand to have enough mental headspace to contemplate the full picture of what was happening. The system itself was designed by traditional software means: trial and error, seeing what worked and what didn’t, usually trying to maximize some KPI at the company. There was also the thought that punishment and reward should be applied fairly and be “data driven”; take the bias and human factor out of decisions and implement performance management in a predictable, deterministic, and transparent way.
The problems arose when people realized that the same controls and abilities meant to instill this equity and “fairness” provided the platform for wide-scale exploitation in the other direction. Want more profit? Change a single number, knowing the change would cause great stress for many people but produce the desired result. The goals and metrics would always start out reasonable but would eventually cross a certain point, and once they did, there was no going back to the way things were.
Draw your own conclusions about the similarities between this system and our modern-day technical web triumphs.
Personally, as both a writer and a programmer, I often consider implementing the systems that I’ve written dystopian fiction about. Not to create a dystopia, but for the same reason I write: to point one out, such that people can get incensed, laws can be made, etc.
To put this another way: the most efficient way we could have possibly found to get the use of nuclear arms in war globally banned was to have someone use one. The Cold War would have been far riskier if the world hadn’t seen Hiroshima and Nagasaki—it would have been a stand-off involving weapons whose consequences we would not yet have understood. It probably would have ended with the use of hundreds of bombs, rather than just two.
It’s sort of the moral equivalent of a “work to rule” strike: the best way to get through the lesson that something is bad, is to stop pushing back against it and just let it happen for once.
You’re actually suggesting being a sociopath is a good thing because it’ll make all the bad ideas eventually die by us trying them. This is simply not true; we have to make bad ideas die by reason and logic, before we do too much harm.
I suppose doubling down on fossil fuels and burning the rainforests is great; we’ll just have to adapt faster to the super-extreme weather. But we can adapt with more aircon! Yay!
Maybe we are just monkeys who can’t actually learn from things we haven’t experienced, but then nature will replace us any day now. I’d rather learn things the easy way, if at all possible.
This is an extremely uncharitable reading. Infosec is a field rife with kneejerk dismissals until people actually see the exploitation in action. How long did the (ethical) tech community keep warning people about the NSA and Faceboot before Snowden and Cambridge Analytica actually happened?
Infosec is totally different, because it’s exposing problems people have already created, in a standard way. This is using technology for maliciousness in the first place. Try telling those people who were abused by that system that they were just collateral damage in a bigger plan.
I think there's merit to his argument. People really are very stupid, especially in groups. Maybe you're smart enough to realize that burning rainforests and doubling down on fossil fuels is a stupid strategy, but there is no shortage of people in the US who actually have an education, no less (unlike many people in developing nations where the rainforests are being burned), who really think we can't hurt the environment and that laws to protect the environment are wrong. In fact, there are probably a bunch of them right here on this site; they usually call themselves "libertarians", and will scream about "private property rights" in regard to this issue.
It'd be nice if humanity was smart enough and empathetic enough to avoid the horrible mistakes we've made, but we're really not.
Ironically, this is sort of why I supported the 'Bernie or Bust' movement: I felt that in order for us to get a social democratic leader, we needed Trump to ruin things and push us more towards a dystopia. I'm in Utah anyhow, so my red-state vote doesn't matter, but as a principle it still mattered to me.
Not trying to hijack and turn things political here, but it was just an observation. Sometimes you need some chaos to bring about change for the good...sometimes the thing we fear most is what we NEED to happen in order for those in power to pivot and change their ways.
I would file this under 3. Moral Disengagement - A generalized cognitive orientation to the world that differentiates individuals' thinking in a way that powerfully affects unethical behavior.
But see, I can understand weapons building. The motivation is protective towards one's society, and that is the intended result, even if that might not be the actual case.
Whereas the intention behind building an oppressive system such as the one described above is, what, selfishness? Laziness? Programmers, because of the high-demand nature of our role, do not tend to be subject to the usual financial pressures that other communities are. I'm not sure what would motivate someone to build such a system as opposed to walking away and finding a better job offer.
>But see, I can understand weapons building. The motivation is protective towards one's society, and that is the intended result, even if that might not be the actual case.
If you work for a defense contractor, you are not 'protecting your society' - that's not even the intent. The intent is to make money by selling tools designed to kill other human beings.
Not saying this was likely, but one potential reason could be a kind of selfless penny-pinching.
Say you have a call center run as a co-op. You’ve got workers and HR people. Both the workers and HR people are shareholders. If you can eliminate the HR people, then the workers can each have a larger proportionate share for the same work. Automating HR eliminates the HR people.
Automating a job is one thing. Some people may consider it unethical. But it is going to happen.
But writing code to terrorize people is something else.
Eventually someone will come in and say the system in the OP was necessary for the company to stay afloat, pay its employees or retain value for its retired shareholders.
There is a difference between defending your nation and defending your country.
But in this particular case, some of the things described are actually illegal, while building a weapon is not necessarily illegal—ethical implications aside.
There is not really all that much difference. Weapons are used to terrorise and murder people. What this person wrote will give some humans stress, but it will never cause physical harm.
That you think one is justified and the other is not has very little to do with their relative harm, which is by no means a solved problem or a given. It has to do with your view of it, and those views will differ between different people.
It enables cooperation. Human nature is such that it's hard to keep working hard when others slack, you feel taken advantage of and a fool. This way, you know everyone's doing their part.
It creates jobs. A lot of people don't have the self-control to keep from taking longer and longer breaks, either costing the company money or getting fired. Some jobs pay more to hire people that do have this self-control, but there are only so many of those people. This creates a business model that works when supplied only with the lazier employees who are left.
Another system like this is the timeclock. It's a tyrant and getting out of bed on time every morning is the hardest thing I've ever had to do, but there's just no way to run a factory without it.
I’ve always looked at these systems as “if someone else built this, it would be worse.” Doesn’t the same moral conundrum exist anytime you build a system that dehumanizes people for profit? Ad networks? Drug trials?
Ad networks - people can just disconnect or look away if they like.
Drug trials have benefit to society.
I would argue that creating a system to fire people in an automated fashion when they take too many bathroom breaks like this is morally worse than both of those systems you mentioned.
However using that as an example, there are people who work as vermin exterminators, or people who work in labs that have to euthanise lab mice. They probably don't enjoy it, but when there's a need for something you'll find someone to do it.
It's not a straw-man, because it's using your literal point. I can't get any closer to your argument than that.
> However using that as an example, there are people who work as vermin exterminators, or people who work in labs that have to euthanise lab mice. They probably don't enjoy it, but when there's a need for something you'll find someone to do it.
Yes, but that does not serve as a reason why I should be the one to do it (instead of them), that only postulates that there are other people out there to do it, which is an irrelevant point.
Woah. That's just scary. Also: didn't know about the whole "life-cycle" thing before, is such re-use and re-termination of workers common in this industry?
I often say that telemarketing should be considered a less honourable occupation than prostitution (in the latter case, you have two people voluntarily exchanging value; in the former case, you have one person trying to scam the other). Now I'm beginning to suspect it also has worse working conditions.
Since you mentioned it in this thread, you clearly are aware that such a system is morally "challenged". How did you rationalize it to yourself? Was it something that started innocuous and evolved to become worse, or were you aware of how insidious it was from the start?
This isn't fiction anymore. Amazon uses systems essentially identical to this to manage their warehouse "pickers". Their performance is continuously tracked and they're retained or terminated based on those performance metrics.
At first I thought this was scary, but I've had many friends who told me that they play games all day at their office; once, their boss called them into his cabin and they expected to be laid off, but were given a small raise instead.
"I take lots of cigarette and bathroom breaks to fix my makeup/hair and Snapchat my friends. I can't see myself working hard. I am not a sheep". Those are the exact words.
This is a scene from a government organization in Romania.
Now I am in the US, and I've not witnessed it here, but here I am in an executive role, so I hang out with different people.
I asked them: why not do your job properly? They answered that if they did their job, they'd most likely receive a promotion and a raise, and more responsibilities would come with it. For them, more money = more problems.
They told me that money does not matter to them, and all they want is to experience different cultures, travel, etc., and work has no place in it.
This is something I had never heard before!
They somehow managed to rationalize not performing their jobs.
I don't slack off, but I feel like there's a position as a Software Engineer past which I wouldn't want to be promoted. I see folks in the higher levels and the stuff they have to deal with has zero appeal to me. My salary and bonus at my current level are already more than enough to sustain my lifestyle and save plenty for retirement - I don't need more.
A lot of people are attracted to the idea of "getting more responsibility", and they like the prestige/visibility that comes with a fancy (e.g. "Principal" or "Staff") title. Me, I just want to put in my hours solving problems and go home and do something else.
This is, I believe, a common sentiment in post-communist countries, esp. for anyone working for the state/city/municipality. It's normal for people to take a one- or two-hour break to go shop for groceries, or even leave at noon if they don't feel like working longer. There is even a common saying in ex-yu countries: “They can never pay me so little that I can't do even less work.” (Sorry for the bad translation.)
Why would you consider such a system unethical? As a customer constantly annoyed by the laziness and incompetence of low-level support employees, I can only thank you for working on such a system.