HN was never anything different than Reddit. The illusion that it is comes from people leaving Reddit looking for another online community that lines up with their interests, landing here, and saying, "ah, this place is my refuge from Reddit."
But fundamentally the established community is just as online, just as self protective, just as memey as any subreddit on a specialized topic.
Me: This software makes a box. It has nothing to do with human rights, human sexuality, or governance. I will not adopt a "Code of Conduct" as a condition for distributing it to others. Therefore, I do not distribute it any longer.
My stance on trans people is not what you characterize; but it is not relevant to this discussion. There are many places where one's stance on $(political_issue) is not relevant, and that does not automatically imply that someone is taking the side opposite yours.
> attempting to be neutral is the same as joining the Philistines.
This is a super common attitude across all areas of politics, that somehow being neutral is admirable or positive. That it implies an absence of partisanship, and is therefore preferable.
That's fine when you are talking about, say, which coffee chain you like. But in today's politics, the issues being discussed are fundamental human rights for various groups. Being neutral when there is a massive power disparity empowers those who hold the majority.
You: I'll disrupt anyone I feel like, because my pet issue is important enough that absolutely everything in the world needs to stop until it is addressed.
It's not saying both sides have good points -- it's saying we're talking about something completely different, and neither side is on topic.
First side (actually): "The term 'biological female' reflects a high school level understanding of DNA and genetics. Unfortunately, real genetics is much more complex, and many women who were born and lived their entire lives as women would be flagged by systems to detect so-called 'biological women'. Your use of the term is ill-defined and unscientific, and any attempt to enforce it is going to hurt both trans and cis women."
HN has always been a libertarian hell site with pockets of interesting people. The comments section has always had bad takes and good takes, it's always had jokes and memes.
It hasn't really changed over the past 5 years much, in my experience.
Edit: Hell site is a term of endearment. This place is the orange hell site, Twitter is the blue hell site.
Is it? Is the culture of Paris fairly homogenous? New York? London? Berlin?
I would imagine the place has some shared cultural understandings, generally, but that a big city is going to be much more heterogeneous. But maybe I'm thinking too micro scale.
There's definitely something different in Berlin vs New York culture, and I, a person who sees commonalities everywhere (i.e. a lumper), definitely see a common/dominant culture in each of these places.
On the other side you have splitters, who always find a reason to differentiate more.
There's no right and wrong in this discussion; it's subjective in essence, and you can always lump or split by your own rules. It's a classification problem.
Darwin talked about it at length, because he was annoyed by splitters denying any kind of classification, making it really hard to make a taxonomy.
If you split too much, every human is an individual, you can't say anything about anything anymore.
If you lump too much, all humans are the same, you can't say anything about anything either.
The sweet spot is somewhere in between, where you can speak about Moscow vs Berlin vs New York culture, because there's definitely something different about these places, without saying they're all the same, or that you have to look at each individual before making a judgment.
But most people can't agree where exactly this sweet spot is.
You need to realize, though, that the West has embraced multiculturalism, largely. Russia has not. It does not mean there are no diverse cultures in Russia - or even Moscow, but it does mean there is one dominant culture, and the closer you get to the places where power is accessible, the more dominant and more exclusive it becomes. There are various cultures found in Moscow (and any big city in Russia), but there is also the Culture. In the West, a lot of effort goes into ensuring the old Western culture is not the culture of Paris, New York, London, Berlin, etc. Some endorse these efforts, some decry them, but it is obvious they happen. In Russia, nothing of the sort happens; on the contrary, if you want to be in power, you will abandon whatever culture you came from and embrace the culture of power. If you do not, you'll never get to wield any power.
> What is the "culture of power", can you give some examples?
I thought I did, in my comments uptopic. If you expect me to write a doctoral thesis on modern Russian culture, sorry, I am neither qualified, nor is this the right place. While I have plenty of anecdotal data, and would be glad to share my experience to the extent I can, systematic treatment of a culture is not something one could do in a random comment on HN. I can name it, at best - Russian Imperial culture - and describe some of its qualities, but anything beyond that will have to wait for somebody who either has a PhD or wants to get one researching Russian culture.
If you're talking about your first comment in this chain (about people refusing to take responsibility), I don't see how that's related to multiculturalism. I've never been to Russia, and I don't even watch Russian media often, but from what I can see, they're trying to promote multiculturalism a lot.
It's not that I don't believe that Russia has a "culture of power" and that people at the top all share the same ideology, but this is happening everywhere.
The culture comment was an answer to the critique that you can not talk about culture in Russia because there are many cultures. There are, but one of them is dominant. They do not promote multiculturalism, at least not what is meant by that in the West. On the contrary, the staple of their official ideology is preserving Russia's uniqueness at all costs - even at the cost of rejecting humanist values that are considered "Western". Other cultures are allowed if they are subservient to the imperial culture - same story in every empire, really; take a book about any imperial culture and Russia's would have similar traits, it's not unique in that regard. The main source of conflict now is that imperial culture needs much more of an empire than Russia currently is, thus the obsession with territorial conquest, despite already having huge undeveloped and neglected territories.
The answer from the federation is unconscionable. They blamed the boy and then said they could not be held responsible. Fuck off, a kid made a reasonable kid movement. If the robot wasn't ready to be around children, it shouldn't have been deployed around children. And there should have been a big red button that immediately stops everything, within reach of every player.
Honestly, as someone that researches ML, this is my major concern. It isn't AGI that has the potential to kill us, it is current ML systems that can't handle OOD data, deployed anyway by engineers because "that's the user's fault." Same reason we have Teslas crashing. AI safety might talk about AGI a lot, but their main area of research is modern systems and concerns over them.
OOD data is really hard to deal with, FWIW. But personally, I'm not confident that just adding more matrix multiplies will generalize well enough to make OOD a non-issue.
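If anyone's curious what "handling OOD" even means operationally, the most common baseline is just thresholding the model's peak softmax confidence. A minimal sketch (numpy only; the 0.7 threshold is an illustrative assumption, and this baseline is famously weak):

    import numpy as np

    def softmax(logits):
        z = logits - logits.max()  # stabilize before exponentiating
        e = np.exp(z)
        return e / e.sum()

    def looks_out_of_distribution(logits, threshold=0.7):
        # Low peak confidence is a (weak) signal the input is unlike the training data
        return softmax(logits).max() < threshold

    print(looks_out_of_distribution(np.array([2.0, 1.9, 2.1])))  # True: nearly uniform logits

Part of why OOD is hard: nets are often confidently wrong far from the training distribution, so a confidence threshold alone doesn't save you.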
AGI alignment is a vastly bigger problem. Of course poorly built and deployed ML systems will kill and injure people - but these are tragedies of the kind humanity can endure and overcome and has overcome. Poorly aligned AGI is nothing less than the entire species at stake.
But also far less likely to happen anytime soon. A bigger danger is when someone thinks a machine is sentient or "semi-conscious" (whatever that means) and naively uses it to do tasks it shouldn't.
I don't think you nor anyone else knows when AGI is likely to happen. I also don't think that incorrectly believing a machine is sentient when it is not is a "bigger danger."
Again, an improperly aligned AGI could kill the entire human race. I'm not sure what harm incorrectly believing a machine is sentient might do, but I don't think it would be as bad as human extinction or enslavement which are both real possibilities with AGI.
You seem to be comparing only the worst-case impact and not the probability. To see why that's fallacious, consider that an asteroid could also kill the entire human race, but nobody would agree that asteroids are more dangerous than drunk driving.
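To put numbers on the probability-times-impact point (every figure below is a rough order-of-magnitude assumption, not a sourced statistic):

    # Expected annual deaths = probability x impact, roughly
    world_population = 8e9
    p_extinction_impact_per_year = 1e-8   # Chicxulub-scale impacts: ~one per 100M years
    drunk_driving_deaths_per_year = 1e4   # US alone, order of magnitude

    asteroid_expected = world_population * p_extinction_impact_per_year
    print(asteroid_expected)              # ~80 expected deaths/year
    print(drunk_driving_deaths_per_year)  # ~10,000 deaths/year

The worst case favors the asteroid by nearly six orders of magnitude; the expected harm still says to worry about drunk driving first.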
I think there's a high probability of AGI within a century. Surveys show most experts share that opinion. It's hard to know the probability that the AI will be misaligned - but currently we have no idea how to align it. It's also hard to say how likely a misaligned AI would be to cause extinction. However, we have no reason to think that either of those things are unlikely.
> I don't think you nor anyone else knows when AGI is likely to happen.
Sure. But since I am an AI researcher, I'd imagine I'd have a good leg up on the average person. I'm at least aware of the gullibility gap. Lots of people think it is closer than it is because they see machines doing tasks they thought only humans could do, but really your pets are smarter than these machines.
I think that's a reasonable stance, but only for some values of "soon". In a hundred years, we may well have AGI. At that time, we better have developed a robust science for how to control them. This is a somewhat unrelated problem to the current problem of machine/AI safety and both require more focus than they currently get.
We use the same scheme we use to control humans. The rich own all the valuable land and all the money. Ban robots from owning land. That way people can just use their shotguns or call the police to kill them for trespassing.
If I were an AGI robot I would be scared of getting swatted for lols.
This is a classic case of undefined behaviour or memory unsafety. Your mistakes can have an infinitely bad outcome, but people blame the programmer even though there are memory-safe languages. Yes, they sacrifice efficiency, but who the fuck wants to consider the billion potential ways of operating a robot in a physically unsafe way?
This means we are going to have the equivalent of GC in robots that interact with other humans.
To some degree, that doesn't matter. An underlying feature of a competent approach to safety in design is that the design must take maximal ownership of eliminating risk to all people in all scenarios that can be reasonably expected to result from the design.
The moment Tesla set expectations by proclaiming it as Autopilot, they took the corresponding responsibility to make sure it did not generate any scenarios which were unsafe. The moment they implemented features that allowed the attention of drivers to drift more than standard driving, they also took responsibility to make sure that the drifting attention of drivers did not place the system in an unsafe state.
This same issue applies to touch-screen interfaces in modern cars. Drivers could always stare down at their radio when there were tactile knobs and dials, but touch-screen interfaces now force that because they've eliminated tactile feedback. Telling drivers 'just don't look down' misses the point, because it's the responsibility of the car manufacturer not to create a system where that added safety risk goes uncontrolled.
Pretty much this. They could have called it Super Cruise Control or something and I'm pretty sure nobody would have anything to say, because it is expected that cruise control be supervised. But I think people wouldn't be quite as willing to pay a lot of money for a feature that didn't sound so remarkable.
Self-driving technology did seem on track to reach human parity within 5 years back in the 2010s, but the growth was later revealed to be logarithmic rather than exponential, and Elon doubled down on a bad bet.
It's not about whether they should have clarified the scope; the scope did include completely automatic driving. It's just that they failed to deliver (tbf, no one truly has).
I really don’t see how anything could matter more than “does it save lives, on balance”. If it saves thousands of lives annually, then why would we let tenuous marketing grievances forestall its deployment? How many lives should we sacrifice over branding concerns? Of course, if the technology doesn’t save lives on balance, then that’s reason enough to restrict deployment, but in any case marketing issues don’t seem like they should factor into the calculus.
It’s not the big mistake at the end that stands out to me but the sheer volume of mistakes it makes along the way. Edging forward at an intersection when there’s a red light for example.
I don’t doubt that self driving tech will improve and be a safer alternative to a human driver eventually. It doesn’t seem like we’re there yet though.
Yeah, I fully expect Autopilot to have different failure modes than human drivers, but what I’m interested in is the different fatality rates (deaths per hundred million miles, adjusted for different types of roads i.e., highway vs city streets). If Autopilot can save hundreds of lives annually to human-error mistakes like falling asleep at the wheel, etc but at the cost of one life annually due to obscure failure modes like driving toward a train, I maintain that we should not only allow Autopilot, but probably even mandate it on new vehicles. Sacrificing hundreds or thousands of lives annually because we don’t like the specific failure modes seems absurd. Of course, if it doesn’t save lives, then we should block its deployment on those grounds (but the particular kind of failure mode shouldn’t affect the calculus).
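The calculus I mean is nothing fancier than this (the ~1.3 baseline is the commonly cited US average; the Autopilot rate is a made-up placeholder, since the honest road-type-adjusted number is exactly what's in dispute):

    human_deaths_per_100M_miles = 1.3      # commonly cited US average
    autopilot_deaths_per_100M_miles = 0.5  # hypothetical, for illustration only
    us_vehicle_miles_per_year = 3e12       # rough US total

    net_lives_saved = ((human_deaths_per_100M_miles - autopilot_deaths_per_100M_miles)
                       * us_vehicle_miles_per_year / 1e8)
    print(f"{net_lives_saved:,.0f}")       # ~24,000/year under these assumptions

If that difference is really positive after adjusting for road type, the specific failure modes shouldn't matter; if it's negative, that's the whole argument against deployment.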
Many, if not all, publicized "Autopilot suspected" Tesla crashes were later found to have happened because the driver accelerated too hard, forgot the brake pedal exists, and lost control.
That is less possible with Autopilot, as it can't go faster than 90 mph.
Are there any other major examples of modern ML ethical issues besides some Tesla cars killing their drivers?
Are ML-driven robots in factories killing people or something? Because I haven't heard of anything else.
The only other modern AI ethics stuff I hear about is making image generators more politically correct and maybe some criminal sentencing algorithms that are being misused (which isn’t really an AI ethics problem but a judicial procedural one).
Not AI directly, but there is a talk by a coder who was asked to do triangulation targeting for mobile phones. It was an interesting problem so he went for it.
After a while he figured out that his code was used to target missiles on people using cell phones in Iraq.
> Are ML-driven robots in factories killing people or something? Because I haven't heard of anything else.
All the videos on the YouTubes I’ve seen show industrial robots with crazy amounts of safety equipment where you can’t get close enough to it while it’s running for someone to get hurt.
I have seen people experimenting with robot arms next to their CNC machine where it could easily take off someone’s head if you piss it off but these are small shops where they expect the operator to keep on the robot’s good side, no inappropriate sexual comments and biology shaming.
I was watching one video where they had to train the arm what to do and am pretty sure they (the Silicon Valley robot arm startup) gave it AI magic sauce because even toasters have AI these days.
> I’m still waiting for those GPT-3 and deep fake horror stories we were warned about to come to reality.
I'm a bit confused. There's plenty of propaganda written by ML. Here's some deep fakes with respect to Ukraine[0][1]. Manufacturing robots kill people all the time[2][3]. They are weaponizing ML. Like specifically GPT-3? Probably not but people do use these to write tweets and short form things.
> Manufacturing robots kill people all the time[2][3].
Your techrepublic article discredits your statement:
> While any death is a tragedy, it also must be put into perspective. Humans and robots have been working together in the manufacturing industry for decades with few grievous problems. According to a 2014 New York Times report, citing OSHA, at the time robots had been responsible for 33 workplace deaths over the past 30 years. According to the National Association of Manufacturing, there are 12.3 million manufacturing workers in the US, who account for roughly 9% of the country’s workforce.
It's appalling the robot was designed to ever use that much force in its grip. Even if the chess pieces were made of lead I can't see it being needed. In general, more attention needs to be paid to failing safe.
But the kid is some kind of local chess champion, I can't fully fault the decision to have him play with the experimental chess robot. Is it more dangerous than a lawn mower or a blender or any other machine that 9 year olds might begin to operate?
Correct. The fact they made a robot that could crush a human hand means they paid no attention to this hazard. Competent execution of Safety in Design concepts would demand limiting the grip force to only what's necessary to reliably move the pieces, which almost certainly wouldn't break bone. If that isn't possible, then it would imply the requirement to find some other way to resolve this hazard in the hierarchy of controls.
Relying on a human is the last option, not the default, when it comes to safety. Human adaptability is not a licence to hand-wave away design responsibility. The most glaring example is Tesla, who is unforgivably guilty of this.
This is bog-standard competent engineering in almost all domains of engineering. It is the table stakes-level expectation of a reasonable approach to safety. I'd literally end up in jail if something went wrong and I had been found to not consider these factors.
Software- and computer-related domains of engineering are a conspicuous outlier when it comes to this philosophy.
You would think that much force would lead to broken actuators and chess pieces often enough during development that someone would land on the idea of setting an upper limit on all forces just to save money.
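Right. Even a naive clamp at the point where grip commands are issued would do it. A sketch with a hypothetical controller API (the limit value is a guess; a real design would enforce this in hardware or firmware, not application code):

    MAX_GRIP_NEWTONS = 15.0  # plenty for a chess piece, far below bone-breaking force

    def command_grip(controller, requested_force_n):
        # Clamp every request at the source so no upstream bug can exceed the limit
        controller.set_grip_force(min(requested_force_n, MAX_GRIP_NEWTONS))

And per the hierarchy-of-controls point upthread, a current limit baked into the actuator beats any line of software.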
If that is the official statement of the federation, I agree it's abysmal. But I question whether Lazarev really blurted all of that in one go, or if he was asked a series of leading questions and then his answers were misleadingly pasted together to make him sound as callous as possible.
To be clear, any answer short of, "this was a failure on our part to protect the children who attended, and as the leader of the organization, responsibility falls on me. We will make this right for the family and I will tender my resignation" is not adequate.
Then you don't want a voluntary resignation, I think you want them ousted.
The kind of person to learn from their mistakes through honest self reflection (and hence would voluntarily resign) is probably the kind of person you might want to keep around, or at least not decide to make a public example of.
The kid tried to take a piece before the robot's piece was even placed, and then the robot tried to place the piece on his piece while his hands were still covering it. This could have led to a minor injury even with a human opponent. What is strange is that they insisted on using such a powerful robot arm without any compliance in the actuators.
Honestly a broken finger is pretty far down the spectrum of things to go wrong when near heavy machinery. Such a robot should have never been rolled out. The state of robotics is simply not advanced enough that I would ever trust one near me. Maybe those Boston Dynamics ones. But they are on a completely different level.
That's not what they said. They said they weren't responsible for the robot, which is true. Same as if a human player injured another player.
The robot's operator is responsible.
I disagree, if they are running the event, they are responsible for ensuring the safety of participants. You go skydiving and the chute fails in a totally predictable way, the company who takes you up shouldn't just shrug and say, "well, we aren't responsible for the chute, that was provided by another company"
> people sign a disclaimer about the risks, they can not be held responsible.
This isn’t adequate in civilized countries. You can’t run an amusement park that severs the limbs of 1% of the participants under the protection of a disclaimer.
There is a threshold where it’s acceptable though.
For example: the general rate of skiing injuries is about 1 injury per thousand skier days. Ski resorts will sell a day pass to let you experience that 0.1% injury risk as long as you sign a disclaimer.
A ski lift that breaks legs one day per thousand would never be acceptable. Risk imposed on you by someone else or a machine operated by someone else is not at all like a risk imposed on yourself by you personally skiing with your own legs off the mountain, into an unmechanized tree.
IMO there is a huge difference between something that brings mechanical force to you (robot, ski lift) vs you bringing mechanical force onto something else (crash into rock/tree while skiing).
The ski lift fatality rate is about 1/10 that of cars, with only one death per 700 million miles traveled and an average of 0.34 deaths per year.
The ski lift is one of the safest modes of transport known to man. It probably saves many lives vs someone walking up the mountain. That is, unlike the chess robot, it creates a comparative net decline in risk for achieving the task of climbing a mountain. A wildly overbuilt (for the task) industrial robot in this configuration creates a net increase in risk vs playing against a much weaker human hand.
The fatality rate from ski lifts may be very, very low. But the number of broken limbs per user is probably relatively high compared to other modes of transportation. (Especially, I think, in small pull-based lifts.)
Wow, you are incredibly clever, would you like a cookie? The best you could do is: sometime, somewhere, people get injured on a ski lift. The same could be said for people walking up a mountain. At least I provided some data, including a study that showed injuries.
Well I do have a "clue." Fatality rates for walking are imperfect, but some studies have put them around ~37 per billion kilometers walked [0]. For ski lifts, it is 0.93 per billion kilometers. That's over an order of magnitude worse for walking.
Now most people don't walk on mountains, so it's not a direct comparison. A sidewalk is often well paved, but comes with the risk of cars. Cars usually aren't on mountains, but on the other hand the conditions there are often steeply inclined with ice/snow, making for difficult walking that can lead to injuries. My educated guess is that these tradeoffs aren't enough to overcome the order-of-magnitude higher rate found in walking vs ski lifts in our imperfect comparison across studies.
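Putting the two figures side by side (both quoted from the studies above):

    walking_deaths_per_billion_km = 37.0
    ski_lift_deaths_per_billion_km = 0.93
    print(walking_deaths_per_billion_km / ski_lift_deaths_per_billion_km)  # ~40x worse for walking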
The honest answer is the word "probably" was used to denote the evidence points my mind in that direction, in a way that appears reasonable at least to me and probably many readers here. But if I want to use the bad faith accusatory tone you've presented, then I'd just say I said it to annoy persons such as yourself for my personal amusement.
You can't just 'waive' criminal responsibility. If I have a dangerous dog or industrial machinery that chops off hands, it doesn't matter what piece of paper the kid or the legal guardians sign; if I purposefully let kids play with them, I have put kids in harm's way.
In the US South there's a ton of poultry workers who have lost fingers and parts of hands to machinery. It's gotten a lot better lately, but for a long time it was a very bad problem. Since most of these workers were Black or from Latin America, very little was done to protect them.
You're welcome to challenge the "it's not race, it's poverty" assertion anytime you want by hopping on a flight to Shreveport, Alexandria, or Opelousas, and just walking around. In places like that, where the population is about 50% black, and 50% of the population is under the poverty line, you'll notice that nearly 100% of the non-managerial people working at fast food places are non-white.
I don't know what the appropriate coin-flipping analogy is, but it's like getting heads a lot of times.
It’s America and it’s about race because if you look like the people this country historically extracted free labor from, then you are much more likely to be treated poorly for profit. Being poor is different than being poor and black or latino in America. Our livelihoods ride on the manufactured failure of specific groups of people for political reasons. Then here you come calling it all poor people, erasing the specific struggles that enable the system. If you are going to argue on behalf of white people, which ones? What are the lineages of people who were affected most? Who cause the harm? Was redress made?
American Descendents of Slavery haven’t gotten their reparations yet, and LatAm is still an American-managed cluster fuck.
When it comes to children there is an additional level of responsibility that cannot be 'waived' away.
For instance, if children trespass onto your land and hurt themselves, you could be held legally responsible under the doctrine of 'attractive nuisance'.
Setting responsibility on guardians for not having foreseen or prevented that is a slippery slope, or at least leads to unintended consequences.
Imagine a world where guardians will never let a kid go somewhere or do something that they don’t have 100% knowledge of, or aren’t 100% sure it’s perfectly safe.
In this specific case:
- you wouldn’t expect that issue at first sight
- it looks fun enough to give it a try
- the kid disn’t die. It truely hurts and can have long lasting damages if not treated properly, but a finger broken is not the end of the world for the kid.
If this was any workplace in many countries, the robot owner would totally be culpable, even if it wasn't a child. There's a reason that people are not allowed near robots in many cases.
This is why in civilized societies the government mandates minimum safety standards so that a company is not even allowed to place such dangerous equipment around people.
You wouldn't be allowed to build a mangler anymore in a civilized society, waiver or no.
That's why 'civilized' societies use illegal immigrants, or put the factory in other countries that have less safety regulation and then import the goods, or have corporations so big that they are untouchable.
I'm pretty sure there is only a tiny fraction of the population that is even willing to press the emergency stop when something bad happens. Most likely they will scream or freeze in place instead.
I can imagine what it sounded like at the scene, though: "Ay Blin," and then people scurrying around looking for the robot's power plug or hopelessly trying to overpower a heavily geared joint motor.
> there should be some debate about whether this is even a significant enough incident to require an apology.
The robot behaved in an unexpected way which caused injury to a child. An apology is the absolute bare minimum they could do.
As a parent, I think it’s perfectly reasonable to expect event organisers to have put adequate safeguards in place.
We don’t need to put kids in front of a bear to teach them about the wonders of nature. Likewise, they don’t need to be in the path of a dangerous robot to discover machines.
> As a parent, I think it’s perfectly reasonable to expect event organisers to have put adequate safeguards in place.
Literally everyone thinks that. Who is going to say that they expect event organisers to put inadequate safeguards in place? The issue is that things can go wrong even with adequate safeguards. Life be risky.
> We don’t need to put kids in front of a bear to teach them about the wonders of nature. Likewise, they don’t need to be in the path of a dangerous robot to discover machines.
Attempting to kill kids does seem like a bad strategy. But if you want them to learn about nature, they will actually have to go out into nature. And that'll be a lot more risky than this robot - nature is not safe either.
"Think of the children" when used as an expression refers to the situation in which children are used as an excuse to implement rules which would be otherwise unpalatable. Not every case of protecting kids is a "think of the children" situation.
Quite the contrary, I think. Society has become "soft", for lack of better words. If we had been so risk-averse centuries ago, the industrial revolution would have never happened.
That sort of argument might work well for a time when slavery was a recent memory and children frequently worked in factories instead of going to school. That doesn't mean it works equally well for the present day, or for the kind of future most of us would probably prefer.
I'm confused. You think if we don't let robots break childrens fingers because we insist on proper safety, then the industry of physical chess playing robots might never get off the ground?
We could have had a perfectly good industrial revolution with more mechanical safeguards and less child labor.
If a machine is 20x faster than human labor, and making it safe knocks off 10%, that's fine. And it'll leave you with better employees over time. It's only a problem if you're in a race to the bottom that doesn't care about worker safety.
The industrial revolution and accompanying urbanization lowered birth rates.
Technological progress is not necessarily good for the species, although it can be (cf. the advent of new agricultural methods creating more food post-WW2).
There are tons of rules around these kinds of robots in regular work environments for precisely this reason.
Usually there needs to be either a physical barrier like a cage, or a virtual one like a light curtain that detects foreign objects in the robot's perimeter and emergency-stops it.
These rules were disregarded here.
I used to work at a company where such machines were developed, and even a very experienced engineer, working on a prototype, was once hit by one (no serious injuries, and safety was improved afterwards), because they can move very fast and in unexpected ways.
These days there are better solutions available (so-called cobots) which are designed to work together with humans in very close proximity, without physical separation. They feature very sensitive force sensors and are severely restricted in the way they are allowed to move.
So yes, "think of the humans/children" does apply here. This is a solved problem, and the operators decided to disregard established procedures and went instead for "flashy and cheap" (cobots are more expensive, and slow as molasses).
What do you mean? There's no reasonable time where you and your opponent are touching the pieces at the same time. Nor is there a reasonable time where you reach for the same piece.
Is it normal in chess to break the fingers of an opponent who breaks the rules? I think not. If it had been a human being and not a robot, he would be considered guilty. Yeah, the boy broke chess rules, and what? He probably should be punished by losing chess points or something, but not by means of broken fingers. A human who broke the fingers of his opponent would be disqualified from chess for life and would face charges. It is a robot, it cannot be guilty, so someone else is. Who? Its creators? Or the organizers of the event? Or the parents of the kid who allowed him to go face the robot? Some adults are guilty, not the boy.
It does not excuse such a reckless use of industrial equipment around people without appropriate failsafes.
A robotic arm like this is not a toy; it is a deadly machine with a lot of force and a lot of mass. If you were standing next to it for some reason it could unexpectedly swing around and break your neck in an instant. Engineers should be careful to design such systems with failsafes that account for any action the operator might take.
Dude, children get antsy sometimes. They literally have different brain development than adults do. They do not have the same controls and inhibitions.
Children are literally wired for physical experimentation and echopraxia.
Kids also want to be seen as helping. If this robot put a piece down poorly, the kid might be trying to straighten up after it.
All of which are eminently reasonable for a kid who has no concept of operating around dangerous machinery.
Sure there is, like if you move a piece, hit the clock, then realise your piece wasn't quite centered on the square. Maybe not technically correct, but reasonable.
Oh, that settles it. Robot cannot do things that are outside of the rules of chess.
Kid should have known better than to open with the classic, "break my finger" opening from the regulation standard rules.
Industrial robots are incredibly dangerous. They operate without concern for anything in their path. Robotic arms like this can be lethal, and the idea that the robot wouldn't ever do anything unusual is, quite frankly, laughably naive.
Because the thesis is that automoderation is a problem. The entire point of the article is using Twitter as an example of a problem, not the root problem itself.
Thailand has laws about how one can talk about the king, right? Maybe there's no safe harbor provisions for websites, and a site this size is too small to fight, but big enough to attract attention?
It sounds like they aren't basing it on how much regulation the jurisdiction has, but whether the jurisdiction brings in enough revenue to justify reviewing the regulation in the first place. Even if there is no regulation, you need a lawyer to tell you that.
What's the threat of not living up to their regulations when not being located there? Isn't the worst scenario that they just block your website on their end?
This is a blanket ban of multiple countries. It's most likely something very unspecific and probably unrelated to the local regulatory situation. Also, Fanbyte seems to be an influencer marketing platform. If they won't serve influencers in these countries anyway, it makes sense for them to block all users from there to stop messing up their metrics or targeting or whatever.
This isn't really of much value to most people. If you ask me, the less influencer crap that exists in this world, the better.
What? I'm not sure what you are saying... We don't need to eat beef, both here and in Brazil. We choose to, in both places. We, locally, need to regulate harder and push harder globally to protect things.