Someone has entered an AI in a Japanese mayoral race (otaquest.com)
132 points by chasontherobot on April 13, 2018 | 78 comments



AI is a vague term. A Pong bot is AI. A Roomba vacuum cleaner crashing into walls runs on AI. Possibly your fuzzy logic powered washing machine could also be considered to run on AI.

Sophia, the "robot with citizenship" from Saudi Arabia, is a glorified chatbot combined with Furby-grade technology for facial expressions, but it received worldwide press coverage.

This must be something similar, designed to take advantage of technologically unsavvy and click-happy Internet users.

The only reality here is that the people covering this fake robot mayor from Japan will make tens of thousands of dollars from ad revenue.


We probably need a term like artificial consciousness to signify the meaning that AI used to have, before it entered a terminal inflation cycle.


I think there is actually the opposite effect in action. There used to be a time when doing simple arithmetic was thought of as an expression of intelligence. When machines learned how to do it much better than humans, this notion was quickly dropped.


This got me thinking about the whole “What is AI” issue again. I decided to post my highly opinionated thoughts…

For most of its existence as a “field of knowledge”, AI has been shepherded by the Academic community. This same community likes to throw around inside jokes like “If it works, it isn’t AI” and such. If you boil down all the discussions that take place over “what is AI” online and at conventions, a picture emerges along the lines that, to the Academic community, AI is whatever makes an interesting research question. This has the unfortunate side effect that the definition not only changes constantly but can also be quite myopic. Expert systems, discussed in the 80’s as a massively successful branch of AI, were kicked out of the field in the 90’s -- at least until recently, when the Academic community figured it hadn’t really exhausted all the research possibilities and they were reincarnated as Answer Set Programming. Geoffrey Hinton (one of the most influential figures in AI in the last 50 years) stated in a talk he gave at the 2015 AAAI conference, “Unless it has 100,000 attributes it’s not AI, it’s applied statistics”, essentially demoting most of the AI ever researched or developed. I’ve even been involved in discussions where the question of whether Machine Learning is really AI any longer was brought up (though everyone seems to agree that Deep Learning is AI). I find it odd and fascinating that a community responsible for developing an area of knowledge is so aggressive at denying everything it achieves. I can’t think of any other field that approaches governance of its domain this way.

Things get more complicated by the fact that AI has different definitions depending on where you are working on it. AI in manufacturing tends to break down into satisfiability, constraint programming, and optimization – none of which are considered ‘fields’ of AI for the most part. In these cases, the pipeline (the process that results from combining all the steps in an AI solution) is referred to as AI, not the individual techniques applied in that process. In the games industry, AI is the part of a game engine that makes decisions, regardless of what technique you are using to achieve that. I actually think those definitions are probably more consistent and more practical.

Because I lack a better term for it, I’ve always referred to terms like “AI” by my own made-up phrase “problem words”. These are words coined to refer to a challenge that practitioners in a field commonly run into, not words created in an effort to make a scientifically precise definition. I lump “Deep Learning”, “Big Data”, and “Cloud Computing” into this category. Because the scope of what we find to be a problem always changes, so does the meaning of the term. Big Data 10 years ago is not what Big Data is today, for example. These kinds of terms NEVER have a final definition. Lastly, I find it anecdotally interesting that during the registration process for some AI conventions, the question “How do you define AI?” is on the registration form.


Artificial General Intelligence (AGI) seems to be the most common term for that.


I'm all for replacing our elected officials with small shell scripts. We're automating everything else, why not government?

I could fill out a questionnaire answering hundreds of questions to use as a reference on how I might vote. That data could then be used for voting on subsequent laws.

Why have a representative government when the human doesn't actually represent you? A small algorithm would be a much better representation of you.
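
Concretely, a toy sketch of what I mean; every name and number below is made up:

    # Toy sketch: predict how I'd vote on a new bill from my questionnaire answers.
    # Assumes my answers and each bill are scored on the same issue axes
    # (values roughly in -1.0 .. 1.0); none of this is a real system.
    def predict_vote(my_answers, bill_positions):
        # Vote yes if the bill aligns with my stated positions overall.
        score = sum(my_answers[issue] * bill_positions.get(issue, 0.0)
                    for issue in my_answers)
        return "yes" if score > 0 else "no"

    my_answers = {"public_transit": 0.9, "tax_cuts": -0.4, "privacy": 1.0}
    bill = {"public_transit": 1.0, "tax_cuts": 0.2}
    print(predict_vote(my_answers, bill))  # -> yes

A real version would obviously need far more than a weighted sum, but the idea is the same: my answers become the model, and the model casts the votes.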


Tyranny of the majority much? Elected officials don't exist to simply determine what 50%+1 of the voting population wants. They're not supposed to act as a popularity sieve.

You should be voting for people because they generally share your views, yes, but more importantly because they're willing and able to become experts in the things you need to be an expert in in order to be an effective lawmaker.


> You should be voting for people because they generally share your views

I have actually come to disagree with this sentiment. Look at what we have right now in the US: pretty much complete gridlock when it isn't just partisan curb-stomping. No one is willing to compromise, no one wants to discuss things rationally. Everyone goes in with their answer and attempts to beat others into accepting it. And it's our fault, as voters, because it's what we ask of them, because we keep being told that we should vote for people who share our opinions.

I think we should vote for people not because they share our views, which are likely based in ignorance, but because they demonstrate the ability to make informed decisions and come to reasonable compromises. Look for the ones who think things through, who are willing to admit they made a mistake, and who will change their opinion in light of new information.


You make a good point but for one wrinkle: How many politicians can admit to a mistake and actually change in response to new information? Add the characteristic of "able to make genuine compromises" into your search and it seems your results would hover around 0 every time.


Don't blame me, I voted for the other lizard.

If we continue to vote for people without these qualities, our elected officials will continue to lack them.


There is no incentive to vote for a compromiser when the other side won't. All you're doing is sacrificing all your footing for the sake of movement in the wrong direction.

We're at a gridlock because a significant portion of this country is woefully misinformed about just about everything and are puppets to an incredible machine of profit and no-holds-barred capitalism.


And by participating in that system under those terms you are only encouraging it. Change begins when someone decides to do something different, and it's usually a risk, but if no one takes it everything stays the same.

You have no one to blame but yourself if you refuse to even try.


"we" don't get to decide who gets funding from the Party elites, free airtime from the media, etc.


Then we don't have a democracy and therefore shouldn't even bother voting.


> I think we should vote for people not because they share our views, which are likely based in ignorance, but because they demonstrate the ability to make informed decisions and come to reasonable compromises.

I like this model, but I think the rub is that the rubric itself expresses a view that not everyone shares. It pushes the problem back one step; the disagreements become meta.

Any interest group can defect from the prisoner's dilemma where we all vote for tabula rasa rationalists, and instead choose the strategy of electing someone who pledges unwavering support.

They'd likely disagree with you that they don't know their interests and that they would be better off with smart generalists. And once some interest groups are getting unwavering support, then my bet is that strategy cannibalizes the "rationalist" strategy completely before long.

I'm not endorsing that outcome, I just think that's how things fall apart, and it's hard or impossible to create systems that prevent it.

Tyranny of the majority is incredibly tough. It's basically a sheep/goats problem, where you want majoritarian common sense to get through, but majoritarian selfishness to be stopped at the gate.

Federalism seems the best hedge against it so far, though even that seems to vary wildly in effectiveness. (e.g., Sometimes Jackson ignores Marshall.)


It could very well be that you're right, in which case we can chalk democracy up as just another failed model of governance. Way I see it, there are two options: continue to perpetuate the escalating arms race of irrationality, or try to change it.

One way might end badly, but the other certainly will.


Well, it might be the best of several flawed models.

Malcolm X's speech about the ballot and the bullet is worth hearing. Maybe democracy is only partly about rational policymaking, and mostly a pressure valve to prevent violent revolutions.

That's not to dismiss it: avoiding violent revolutions, even partly, is still a massive benefit for humanity that's hard to overrate.


Yes but then we have to define what informed decisions are. Then we're back to where we started because we think opposing views are uninformed.


Having watched even a little C-SPAN, I really, really don't understand the 'expert' argument that always gets trotted out. Senators range from well meaning but unable to keep up, to complete blathering idiots. They don't have time to be experts in anything.


As someone who lives in the UK, this is funny and also makes me cry a little.


Could be worse, could be in the US.


>but more importantly because they're willing and able to become experts in the things you need to be an expert in in order to be an effective lawmaker.

Did you not watch the latest Zuckerberg deposition? Most of our officials do not care about becoming experts in anything. They're old, tired, and ignorant and they are mostly concerned with maintaining power, not learning new material.


> A small algorithm would be a much better representation of you.

Nice, nice: and the de facto elected official becomes whoever implemented the algorithm. You've described a rare opportunity for us to become the Engineers Who Installed The Red Button. I'm in.


It's a terrible idea in more ways than one. The biggest being the fact that AI = software = will be hacked.

And that excludes scenarios where country elites will control it from behind the scenes with algorithm changes that favor them, while pretending to improve it.

Maybe all of these issues will be solved in 100 years. But I think it's far too premature to actually hope for this happening soon. It's going to end in disaster if it somehow does happen, because people are tricked into thinking it's a great idea (just like Pentagon officials and politicians are being tricked by war contractors into thinking autonomous weapons are a good idea with the AI capabilities and software security we have today).

Because of this trickery we're going to see increasingly more bad decisions by governments who allow AI to take their place. Such as this recent one:

https://www.techdirt.com/articles/20180409/09125639594/uk-po...


Not so much "hacked" as "written from the ground up by people with nobody's interests but their own at heart".

AI isn't naturally occurring. Humans write the code, humans select the training strategies, humans choose the data to train it with and most importantly humans choose what constitutes a "good, well-trained" AI over a "bad" one which needs more training. Humans pick when to stop. In this hypothetical, those humans are now the gatekeepers on all governance. They have no motivation to do anything but serve their own bias. And we know how much ethical training programmers receive.

We can't build a perfect AI because we don't have a perfect AI which could build it for us.
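
A minimal sketch of that point; everything here is hypothetical, but notice that every argument is a human decision:

    # Whoever runs the training decides the data, the scoring function, and the
    # stopping rule; so they decide what counts as the "best" model.
    def train(candidates, training_data, score, good_enough):
        best, best_score = None, None
        for model in candidates:                 # humans chose the candidate models
            s = score(model, training_data)      # humans chose the metric
            if best is None or s > best_score:
                best, best_score = model, s
            if good_enough(best_score):          # humans chose when to stop
                break
        return best

    candidates = [lambda x, b=b: x + b for b in range(-5, 6)]  # toy "models"
    data = [(1, 3), (2, 4), (5, 7)]                            # (input, target) pairs
    fit = lambda m, d: -sum((m(x) - y) ** 2 for x, y in d)     # higher is better
    best = train(candidates, data, fit, good_enough=lambda s: s == 0)
    print(best(10))  # -> 12, i.e. the model that always adds 2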


But is the real official the script or is it the person/persons running the script?


Neither: it's the person who implemented the script. That's why this is such an attractive prospect to the programming/tech crowd.


If only we got to replace them with small shell scripts. Instead we're going to get blockchains.


Do you really trust feature extraction from legalese enough to hand your voting power to some AI? Do you have the time to read hundreds of real laws in order to provide it with real training data? Do you trust that your fellow citizens will do the same?
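
For context, the naive version of that "feature extraction" looks roughly like this (a toy sketch, not anyone's actual system):

    # Bag-of-words over a bill's text: it counts words but throws away
    # structure, cross-references, and the scope of any negation.
    from collections import Counter

    def bag_of_words(bill_text):
        words = [w.strip(".,;()").lower() for w in bill_text.split()]
        return Counter(w for w in words if w)

    features = bag_of_words(
        "Nothing in this section shall be construed to authorize "
        "the collection of data."
    )
    print(features["collection"], features["nothing"])  # 1 1: word counts, no meaning

Anything that has to flatten legalese into features like these before "deciding" is what you'd be trusting with your vote.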


It’d still likely represent you better than a regular politician.


Or misrepresent everyone more equally.

Currently they just misrepresent people without checkbooks.


No, it represents the people who know how to write laws to mean one thing to a lawyer and another thing to the AI.


The status quo appears to be that laws get passed because they have positive-sounding names such as “The PATRIOT Act” and because the people suggesting them made the correct choice of team, red or blue.

I didn’t want to learn psychology because I didn’t want to lose the magic of not knowing how my mind works. Now I’m learning it to stop people — myself and others — from deluding me. I’m a neural network too, just an organic one rather than a silicon one.


The PATRIOT Act passed because the authoritarians in government liked the power it gave them. The name was just butter to make it easier to force down Americans' throats.


That’s who wrote it. The people who passed it could neither like nor dislike it, because they are on record saying they don’t read bills as that would “slow down the legislative process”.


>If you assumed that artificial intelligence itself couldn't run for mayor, you're absolutely not wrong; that just happens to be where things get truly interesting. The two-person team pushing Michihito Matsuda consists of both Tetsuzo Matsumoto, the vice president of mobile provider Softbank ($74 billion revenue), and former Google Japan representative Norio Murakami.

So what they claim is artificial intelligence and automation is actually the long arm of 2 huge conglomerates. Seems like a persisting trend.


It's an awfully apt metaphor for many elected government officials.


Metaphor? That's literally what it is now. I see no difference.


I'm kinda getting tired of how the word "AI" is getting thrown around so much nowadays. It's used to refer to chatbots or analysis programs in a way that evokes an intelligent strong-AI.

I'm looking forward to the next winter.


Your conception might be narrower than the term.


When we're talking about something being "entered into a political race," I don't think it is. In reality, this is about the same as running a dog as candidate for mayor.


May I introduce you to the CASIO AI: it's a portable intelligent device, capable of doing math in milliseconds that normal humans can only comprehend and reproduce after years of learning and studying.

It wants to be referred to as Sir, in the third person...

Compiled Stochastics becomes AI the moment it is able to move into new fields, not by learning through mimicry but by dreaming up scenarios from noise and pruning failing scenarios from itself. Until then it is not AI; it's experience in a box.


This is just marketing for Black Mirror season 5, right?


The article didn't seem to answer the one question I really had -- is this legal???

"If you assumed that artificial intelligence itself couldn't run for mayor, you're absolutely not wrong; that just happens to be where things get truly interesting."

And then the author doesn't actually answer whether the AI can or can't run for mayor. I'm wondering if they're taking advantage of some weird loophole in the law that doesn't require the mayor to be a citizen/human there.


Unfortunate Conflict Of Evidence - reporting for duty!


Clearly someone was Stood Far Back When The Gravitas Was Handed Out


I'm going to have to snoop around and see if I can find any specs on this, and the lawfare of whether it can be elected.

How's your simming of this situation going?


> You see, if the current nominee Michihito Matsuda happens to earn the most votes during this election, that would make them the world's first AI (Artificial Intelligence) mayor, ushering in a new wave of possibilities for the district.

I understand that 'it' is an inappropriate pronoun when describing people of unidentified gender (although I still find singular 'they' abhorrent), but an AI is very clearly an 'it'...


> (although I still find singular 'they' abhorrent)

Is there any particular reason, or is it just a matter of taste?

I personally prefer the singular 'they' because I don't think any of the pronouns invented for the purpose will gain any significant traction.


http://ai-mayor.com

The AI's (owner's) campaign website.


The three claims seem massively overstated and way beyond my understanding of the state of the art. How can AI “break down the positives and negatives” of a petition and predict its future effect if implemented or rejected?

I’m willing, I guess (albeit horrified), to be shown to be wrong, but this looks ridiculous as presented here.


There's a very popular young adult novel out called Scythe, written by Neal Shusterman. In this novel there is a benevolent cloud-based AI named the Thunderhead (because "the Cloud" wasn't impressive enough) in charge of ruling over the whole world.

It's cool to see part of this fictional world coming to life.


I assume that concepts descended from 'policy' and 'enforcement' will gradually be delegated to AI. Just as I assume more people will spend more time in VR, and a later generation will not make the same strong distinction between this reality dream and the VR dreams.

I feel technologists tend toward a kind of over-skepticism, relative to sci-fi writers, because they know too much about the status quo, and they're used to getting value from drawing on that knowledge. But I wonder if there was any practicing technologist alive in 1960 who could have understood or predicted that we would have, e.g., unbiased photorealistic rendering in whatever color space you like. Or that pulling a piece of glass out of your pocket to have a face-to-face conversation with people around the world would be a passé thing any 12-year-old does.


> this reality dream

Plato would agree.

But he would also say the dream is in your head. The thing is how to wake up from it. Reality is out there, where the material stuff is. Just shifting from one dream to another dream seems like a failure to me. I hope we don't go that way.


Is this AI open source? If not, I think it is outrageous that any AI serving in a public role is not open source.


Human politicians aren't open source either.


Let's fork them.


I believe AI/ML can reduce government costs and boost efficiency by automating staff duties. But if I leave actual governing to a machine, I am left with an entirely new, but not necessarily smaller attack surface for corruption.

Traditionally: monied interest buys influence, maybe in the form of a steak dinner for the mayor.

AI Gov’t Future: monied interest buys asymmetric understanding of GovBot5000, perhaps by hiring an AI researcher or data firm to figure out how to exploit flaws in the algorithm.

If we tackle issues of inefficiency, opacity, and corruption in government first, then I think we can tackle judgment and representation, which are less “on fire” things imo.


I've been expecting this ever since I read Robert Heinlein's "The Moon Is a Harsh Mistress" as a kid.

In this age of elections being influenced by social media botnets, it's simply too close to the truth.


Interesting that this showed up today also: http://www.bbc.co.uk/news/technology-43639704


Comments about the substance of the article aside, there's always going to be some risk of bias in AI, which is why I'd argue one should never vote for an AI like this, especially if it takes the normal route of a politician and states that it's working for you.

Also, at what point does it decide to wage war or enlist a militia when the trolls dupe it into thinking it's what the majority want? Surely war has to be allowed at least somewhere in the algorithm - especially if you were implementing this at a national level.


I think medical science hasn't yet realised that being a politician is a treatable illness.

You know, like how society tolerates smokers - it's an addiction, it's just actively encouraged and very profitable despite the actual loss of citizens.

First we can replace the politicians with sophisticated emulators (in the vein of bash scripts); then we can come up with even better algorithms, ones that would stop stealing or facilitating corruption. We could call that version two or something.


This article is really lacking in substance


Could there be a chance of political corruption even if all the source code and data are open?


And here I thought that Beatless was just anime, not real life in the making.


Is the AI running for mayor, or are the people controlling it?


If they entered Hatsune Miku (a Vocaloid), she would probably win ...


This is so unreasonably worded that I'm flagging it.


Next year the AI will enter the mayoral race itself :)


Are we sure that is not satire?


I think it is a publicity stunt.


How is this even legal? Doesn't one need to be a natural person to register? Can I enter my toaster?


If your toaster can collect 500 signatures door-to-door, I don’t see why not.


愛AI! (愛 = "love", pronounced "ai")


Stop with the clickbait: an "AI" is not running for mayor. At best, if we want to stick with the term "AI" we may say "someone has entered an AI in a Japanese mayoral race". This kind of misleading hype is how previous AI winters happened... (and I haven't even addressed the capabilities of this "AI").


didn't a T-800 Model 101 autonomously decide to run for, and be elected to serve as governor of California for 8 years?


Hey, a step in the right direction! Thank you, mods. Now whether or not Michihito (the "AI") actually qualifies as 'artificially intelligent' is an entirely different debate altogether. Presumably voters would actually be electing the two people pushing the effort and writing the code. But I guess you have to use the word AI if you want any votes...


I like AI mayors, as long as I control the code...



