Hacker News
Machine-owned enterprises are inevitable (incoherency.co.uk)
80 points by jstanley on March 22, 2017 | hide | past | favorite | 97 comments


"The obvious way for such an enterprise to begin is for the owner of a human-owned business to automate away as much of his work as possible, and then to die without leaving the business to anybody else. The business would simply continue to take payments, provide services, and pay bills, until something catastrophic happened that it couldn't respond to automatically."

This depends on where the original owner is resident. In many cases, if the owner dies without an heir or will, their assets go to the state [1].

[1] http://www.nolo.com/legal-encyclopedia/how-estate-settled-if...


Yeah, "what happens to your stuff when you die" is a problem human societies have refined solutions to for thousands of years, and most societies don't have a "it just gets ignored and left alone forever" rule; someone—whether another individual, corporate entity, or the state—is going to take ownership of it.

Now, if it's a highly automated business and the new owner has no particular conflict with its existing mode of operation, it may be left in operation without change until it hits a crisis—but that's not a machine-owned business, it's an automatically-managed human-owned business of exactly the same kind it was before the first owner passed on.


Correct: the government would just auction the business. There's no gap here; the same situation probably occurs regularly today when a business owner dies without heirs or a will, and employees still show up until the business is liquidated. Indeed, if the company has any value, the state will pursue the assets and liquidate them.


Yeah it doesn't make sense. In the United States, all assets are owned by:

1. Households (aka private individuals)

2. Nonprofits

3. The state

This is an accounting identity. Now it would be interesting to think about a machine-run nonprofit (are there legal restrictions to this?) owning machine-run enterprises.


I think you want:

1. Natural persons

2. Non-state juridical persons (loosely, corporations, including non-profits)

3. State entities


No, because all corporations are owned by other entities.


Interesting theory, however, a machine cannot own property, only a person can. If someone dies without heirs, as in the article's example, the property escheats (transfers) to the state. If property is lost or abandoned, someone new can claim ownership of it. So, even if a machine could "own" an enterprise, someone else could claim ownership of the machine.

An enterprise can be autonomous, for instance with a blockchain. But to be legally recognizable, it would still need someone to interface with meatspace. Even a corporation has an Agent for Service of Process, a real person to interact with the real world of living persons.


If a river can be declared a person, it's not clear to me why a machine can't also be a legal person. Or, for that matter, if corporations have legal personhood, why not machines? http://www.economist.com/news/asia/21719409-odd-legal-status...


Does it matter who "owns" it?

Say someone set up a business selling coconuts on an island. Over time, in order to avoid labor, they automated the growing, picking, and selling of the coconuts. Then they automated the security of the island.

At what point does "ownership" of the process stop having meaning if you effectively have to go to war and destroy the machine/island in order to collect its "wealth"?

There are potentially variations of this where a non-human CFO basically says: no more dividends, all further profits will be invested in physical security to avoid theft, until it effectively becomes an enormous self-protecting war machine selling phone (or whatever) service.


> Even a corporation has an Agent for Service of Process, a real person to interact with the real world of living persons.


Yes, corporations hire living persons to be the Agent for Service of Process.

What prevents a machine from hiring a living person in the same fashion?


If one or more humans act as the named owner for legal purposes and are benefiting financially from the arrangement, in what sense is the business not owned by humans?


The machine is free to declare other persons the beneficiary of any business activities should it decide that is in its best interest.


Because Indian law is arcane and inconsistent? In order to figure out how the 'person' of a river should be treated in Indian law, you would first need to figure out which religion the Ganges belongs to.


Read the link, and tell me what Indian law has to do with NZ declaring a river a person.


The government could claim "ownership" over something all it wants, but unless it knows how (or cares enough) to crack it, such claims are pure bunk. Someone who automates and encrypts the key pieces of infrastructure in the enterprise would make it impossible for the government to interfere. They could pull the plug on the service entirely. They could go after the enterprise's suppliers and compel them to turn over data/money (assuming those suppliers aren't themselves machine-run). But they won't be able to interfere with the machine's operation in any significant way without lobotomising it. And if we're talking about something sufficiently popular, any such moves by the government would become political suicide.

Legal claims are only ever as strong as your ability to enforce them. I can envision machine-run enterprises that are so well automated and encrypted that any efforts by the government to interfere would work about as well as governments trying to outlaw the common cold.


> a machine cannot own property, only a person can.

This is just not correct. There is the concept of a "trust" where property is not fully owned by any group of individuals, but rather is owned by a legal entity for the benefit of some person or some other legal entity. It is not uncommon for a wealthy person to create a trust to which they give their assets, which will be managed in perpetuity for the benefit of that person's heirs. Just as a trust can own stocks and give the dividends to the beneficiaries of the trust, a trust could wholly-own a company, which in turn owns machines and other property.


Only legal persons can own property. A trust is not a legal person. The legal situation is that a trust's assets belong to the trustee, who must be a person, not the trust itself. A trustee could be a corporation, but it could not be a machine, since a machine is not a legal person.

The trustee is legally responsible for employing the trust's assets in accordance with the terms of the trust. The relevance of this is that the actions of a machine owned by a trust would be the legal responsibility of the trustee.

You can't really say that a machine truly owns something until the machine, and not its owners or operators, can be held legally responsible for what it does with it.


And why can't it be held responsible? Presumably, it would have assets that could be seized. Further, as it approaches Strong AI, it might even react to 'deterrence' in the same way that other 'persons' do.


It can't be held legally responsible because it isn't a legal person. It wouldn't have assets, its owner would have assets.

I consider it doubtful that a court would rule that an AI was a person. Natural persons are persons because they are. Legal persons are persons because it benefits society for groups of persons to act together and not be held individually responsible for what they do - with the proviso that the corporate veil may be pierced if the persons protected by it act with malfeasance.

Allowing a person who can determine the behaviour of an AI in every situation to be protected from the consequences of the AI's behaviour would likely not benefit society.

You might say that an AI whose behaviour is so complex as to preclude a mere human being able to control it is a different matter, but even now if I own a machine that I cannot control and it injures somebody, I would still be held liable despite lacking intent to harm.

And even though a sufficiently sophisticated AI will react to deterrents, so does a dog, and dogs are not legal persons.


Do you see the circularity in your reasoning? A machine can't be held responsible because it's not a legal person... and it's not a legal person because it can't be held responsible.

You also claim that some legal persons are persons simply because society benefits from allowing groups of persons to act as one. If this is the threshold for personhood - society benefits - then I am confused why this threshold would not also extend personhood to machines.

I guess I'm looking for a clear bright line for defining personhood that includes humans, rivers, and corporations while excluding machines, and I'm not seeing it.


A (sufficiently complicated) cross-border ownership arrangement should make machine-owned property undetectable.


There would never be machine ownership in this (or probably any other) case. It would merely be a case of the change of status (death of the owner) going undetected for a while.

It would be a machine operated business, but not machine ownership. That would require a change in law.


Probably the first such machines will operate illegally in some countries and hire humans to protect their physical assets.


By definition, if a jurisdiction doesn't recognise the corporation at all, it isn't a corporation. If the jurisdiction requires human beneficial owners to recognise it as a corporation, the named human beneficial owners are the owners, even if they're very passive owners who don't have any say in the operations or take anything out of the business (like, for example, the average AAPL stockholder). And we already have a word for software that prevents people with the legal title to assets from using them and pays out to other people instead: it's not "machine owned corporation", it's malware.


> illegally

How can a machine break the law?

EDIT: If there is a positive answer, it might be similar to how money and other assets can be named as defendants in asset forfeiture cases.


Law: do not kill. Machine kills. It has broken the law, and would, presumably, be subject to asset seizure, temporary deactivation, or, in extreme cases, permanent termination. Much like how other 'persons' break the law and are held accountable.


Without a change in law, I don't see it. The owner or manufacturer of the machine is liable; they're subject to the law and prosecution. The machine could "suffer" the actions you describe, but that would be at most punishment against the owner, or more neutrally just removal of a hazard.

Guns and cars aren't prosecuted for murder. I'm assuming autonomous cars won't be either.


The Machine will not be able to easily go to court in any jurisdiction and fight for its own ownership. If the machine is producing any real value, human lawyers will untangle the mess and a human will take ownership.


Can a machine not hire a lawyer? I can't successfully fight in court on my own behalf... what makes a machine different?


When the machine refuses to pay its lawyer, who does the lawyer name as the defendant?


The machine, obviously. Same way a corporation is named when it refuses to pay a bill. In our modern world, natural persons are merely a subset of 'persons'. No reason machines couldn't be included in the broader set. Rivers, corporations... why not machines?


Whanganui River, a river of great cultural significance to the people that live along it, was declared a legal person after 140 years of negotiation. This is not the same situation as a machine being declared a person because it responds correctly to legal documents.

Whilst there's no reason a machine couldn't be a legal person, in the sense that the sentence describes a possible future, that future is not near, and it will take more than a competent robolawyer to bring it about.


By the time machine owned enterprises exist, I suspect the janky opaque paper trail system of cross national ownership will be replaced.


> Interesting theory, however, a machine cannot own property, only a person can.

I personally think the line between so-called "meatspace" and "machine" is tenuous at best. In a very real sense, so-called biological organisms are machines. To say a machine can't own property is to ignore this truth.

You could shift the goal-post to say "only sentient, and self-aware machines can own property" - and it would get you closer. But that would pre-suppose that large organizations of people and machines (what we call "corporations" and other terms) aren't in some manner sentient and self-aware (and we the individual components, being at a lower level within the network, are unable to see this emergent property and/or understand it).

In effect, your statements pre-suppose that machines can only be slaves, without ownership of property, to us "superior" meat-machines. If that is truly your real position, you had better hope that something like that depicted in the Animatrix "Second Renaissance" does not come to pass.

Because it probably won't go well for humanity, if our own history is any guide.


> Interesting theory, however, a machine cannot own property, only a person can.

Aren't companies in the US people in the eyes of the law? (I imagine due to some shameful lobbying on behalf of some large companies.)

Could a "company" own a machine that controls the company in some way?


The short answer is no, they are not people. The long answer is that the courts have long held that corporations, since they are made up of people, do have some of the same rights and responsibilities as individuals, because people don't lose their rights just because they act collectively. However, that means that each right needs to be reinterpreted in that light. Corporations do get the protections of free speech, for example, but courts have found that Fifth Amendment protections only apply to individuals (hence a corporation and its directors/shareholders/employees can be subpoenaed to provide documents and testimony incriminating the corporation).

This has nothing to do with the legislature or lobbying. It is a legal principle largely established by strict constitutionalists in the judiciary (aka conservative activist judges). Short of a sweeping Constitutional amendment, corporate personhood is entirely up to them.


Exactly. Incorporation -- literally, the creation of a body -- results in an entity recognized by the law.

It can be sued, taxed, bankrupted, etc. separately from the people who own it. Could such a thing happen to a machine? Maybe. But it's really no different than a corporation.


> Interesting theory, however, a machine cannot own property, only a person can.

Correction: Only legal entities can.


Yet the ultimate blame is going to find a human when something illegal happens, because 'ownership' is a human concept.


Factor out ownership and responsibility diffusely enough, and blame-finding has a tough time obtaining traction, not to speak of yielding convictions. The law has yet to come to satisfactory grips with the fact that while ownership and responsibility can seem to go willy-nilly all over the planet, and indeed over time, benefits converge and accrue to an obvious group... who legally disclaim responsibility and ownership. All sorts of rent-seeking and externalizing behavior flow from that discrepancy.

Ultimate blame when something illegal happens in a machine-owned enterprise is likely to remain just as difficult to pin down as it was with the 2008 Great Recession, if not manifestly more difficult.


The question is whether you accept that a machine can take blame. If it can't, you can't diffuse responsibility even to a million machines.


It isn't necessary in today's legal and regulatory climate the world over to resolve the question of acceptance that a machine can take legal blame. We already have structures in place that enable practical financial and operational control while at the same time effectively deflecting away blame through factoring and diffusing responsibility. Nearly all of the 0.1% wealthy's forms of tax efficiency schemes absolutely depend upon this principle. There are any number of tax planners that will be happy to talk details with you about the starter kit versions of these schemes for a five-figure entry fee, and every single one of the substantive vehicles I investigated depended upon this principle.

These same vehicles can be used by machines that own enterprises. They might not be on the masthead/letterhead of the enterprise, but that hardly matters unless a currently-unknown prerequisite to AGI, or to AI sufficient to run an enterprise, is the machine equivalent of a childish human ego.


Erm. No. Corporations can and do shield their owners from legal liability, only in the most blatant cases will the corporate veil be pierced, otherwise 'limited liability' does what it says on the tin.

For officers of the company it is a different matter, but even there punishment beyond being fired is very rare.


That's for financial liability. What if the machine commits a crime?


Many corporations have been found guilty of crimes. Rarely did an officer of those companies get sent to jail; most of the time, even though a person committing that same crime would end up in jail, the officers of the company are not charged and instead the corporation pays a fine.

More data on this subject than you can absorb in a few days:

http://lib.law.virginia.edu/Garrett/plea_agreements/home.php...


Aside from discretionary charging, it's fundamentally possible for a company to be guilty of a crime without any individual officer being guilty of that crime.


Yes. That's a bit paradoxical but correct and one of the ways in which corporate crimes end up being dealt with differently than crimes committed by natural persons.


Couldn't a blockchain with a governance model and a budget (a la Dash) hire such a meatperson? I mean, could that work legally?


Charles Stross addressed this concept tangentially in his novel "Accelerando", https://en.wikipedia.org/wiki/Accelerando.

Joan D. Vinge also addressed AI owned corporations in her "Cat" series, https://www.goodreads.com/series/40790-cat.

I think Ethereum, XDI, and other attempts at smart contract technologies are laying the foundation for this now, https://www.ethereum.org/ether. So are new automated digital identity technologies such as Sovrin, https://github.com/WebOfTrustInfo/rebooting-the-web-of-trust....

I also think that the need for liability and taxes will drive incorporating the concept of an intelligent software agent as a legal corporate entity, such that the assets of that entity are at risk from liability lawsuits, that profits are taxed, and that the fiduciary responsibilities of such an entity are defined. I think the author does not distinguish between self-owning and self-directed, the two do not necessarily go hand in hand.

It's a whole new field of law, and business, and should be an interesting thing to be a part of and/or observe. Usual caveat applies, IANAL and TINLA (This is not legal advice).
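The "take payments, provide services, pay bills" loop from the article can be sketched as a toy state machine. All names and the halt-on-insolvency behavior here are my own illustration, not anything from Ethereum, Sovrin, or any real smart-contract API:

```python
# Toy sketch of the autonomous-enterprise loop: take payments, deliver the
# automated service, pay bills, and halt permanently on the first situation
# it cannot resolve (the "catastrophic failure" the article predicts).
# Purely illustrative; not a real smart-contract interface.

class AutonomousEnterprise:
    def __init__(self, balance=0.0):
        self.balance = balance
        self.halted = False

    def receive_payment(self, amount):
        if self.halted:
            raise RuntimeError("enterprise has halted")
        self.balance += amount
        return "service delivered"  # fulfilment is fully automated

    def pay_bill(self, amount):
        # e.g. hosting fees, or the retainer of a human Agent for
        # Service of Process hired to interface with meatspace
        if self.halted:
            raise RuntimeError("enterprise has halted")
        if amount > self.balance:
            self.halted = True  # insolvency: no human left to intervene
            return False
        self.balance -= amount
        return True
```

The point of the sketch is the final branch: an unpayable bill halts the loop permanently, and with no owner left, nothing restarts it.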


Machine-owned enterprises already exist. Any sufficiently large corporation with a sufficiently diffused ownership structure qualifies. The governance of such corporations can be described as an artificial super-intelligence that is often malevolent to human interests (it will happily optimize for paperclips, or whatnot).


Steinbeck agreed with you, many years ago:

"The bank is something else than men. It happens that every man in a bank hates what the bank does, and yet the bank does it. The bank is something more than men, I tell you. It's the monster. Men made it, but they can't control it."

[0] https://books.google.com/books?id=ClXiwSYzjtYC&pg=PA33&dq=th...



> We are now living in a global state that has been structured for the benefit of non-human entities with non-human goals. They have enormous media reach, which they use to distract attention from threats to their own survival. They also have an enormous ability to support litigation against public participation, except in the very limited circumstances where such action is forbidden. Individual atomized humans are thus either co-opted by these entities (you can live very nicely as a CEO or a politician, as long as you don't bite the feeding hand) or steamrollered if they try to resist.

> In short, we are living in the aftermath of an alien invasion.

http://www.antipope.org/charlie/blog-static/2010/12/invaders...

I wonder how many thoughts cstross has had on the topic since he wrote that.


Artificial, yes. "Intelligence"? A corporation is no more intelligent than a game is intelligent.


How intelligent is one neuron in your brain? Quite clearly, no particular neuron is intelligent.

Intelligence comes from the organization of neurons - the structure in which intelligence emerges from unintelligent agents. (I find this response to Searle's Chinese Room dilemma to be quite convincing. [1])

Corporate intelligence arises in much the same way.

[1] https://en.wikipedia.org/wiki/Chinese_room


> Corporate intelligence arises in much the same way.

Your thoughts on this matter echo mine.

In the case of a corporation, or other large organization structure - the "neurons" are intelligent in and of themselves, and they couple themselves together with somewhat semi-intelligent data-transfer and manipulation agents (which they designed and built).

As such - if (and I stress if - because we really don't know - and may not be able to know) corporations on some higher level are intelligent, sentient, and self-knowing - they would be so at a level as far above us as we, the emergent property that we call mind, are above individual neurons.

Such an entity or entities could easily squash us without any remorse, if they even have emotions. In reality, they are probably indifferent to us smaller units. But - just as if we became aware of a single or group of neurons which in some manner were rebelling against the larger mind, that we would seek to excise and destroy those units - so would, most likely, this larger emergent entity do to us, should we try to usurp it in some manner.

...and in theory, it could do it in such a way that the destruction would appear natural and as a consequence of our own actions and decisions, and we would not even know that it had orchestrated the whole thing. Indeed, if that is the case, one should wonder about the state of our world, and the events within it.


Pretty weird, as far as rants go. Of course he doesn't begin to define what he means by "self-owning", exactly - nor does the article he links to define the term (or, recursively, the article that it links to). Nor should they bother trying -- because it's basically an oxymoron.


Why is self-ownership an oxymoron? It seems fairly self-descriptive, as a term. One of the primary axioms of ownership is that it is conserved. Two entities cannot both wholly own the same entity. It is then very natural to say that an entity for which no one else claims ownership owns itself. This way the sum of all ownership sums to 1.
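That conservation claim can be put in toy arithmetic: treat outside claims on an entity as fractional shares, and define self-ownership as the unclaimed residual, so the total always sums to 1. This is a hypothetical model of the comment's axiom, not a legal doctrine:

```python
from fractions import Fraction

# Toy model of ownership as a conserved quantity: outside claims are
# fractional shares, and "self-ownership" is whatever residual nobody
# claims. Purely illustrative; not a statement about actual property law.

def self_ownership(outside_claims):
    total_claimed = sum(outside_claims.values(), Fraction(0))
    if total_claimed > 1:
        raise ValueError("two entities cannot both wholly own the same entity")
    return Fraction(1) - total_claimed

# A conventional corporation: shareholders claim the whole, so the
# residual is 0. An entity with no claimants has residual 1, i.e. under
# this definition it "owns itself".
```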


> This way the sum of all ownership sums to 1.

QED, I guess. Meanwhile, down on Planet Earth, we have the perfectly obvious and natural question: when one of those vehicles (inevitably) plunges into a school bus full o' kids, and is found to have done so as a result of criminal negligence -- who's going to jail? Once you've identified those parties -- you know, the human beings who will be held accountable for the aforementioned negligence -- you've identified the "de-facto" owners of these vehicles, by any meaningful definition.

Or if you (truly believe) the answer is "no one" -- as in, "Aw shucks! Nobody owns these cars, you see! Just some blobs of white noise on the blockchain! You can't even trace 'em! BTW sorry 'bout all those poor kids, but you know, stuff happens" -- then at least we've identified the "self-ownership" fad for what it is: just another libertarian smokescreen to maximize the potential for financial gain whilst minimizing (or in this case, voiding entirely) the personal responsibility (and accountability) that otherwise -- in a sane world -- would go hand-in-hand with such activity.


You called it an oxymoron, which was wrong. Lots of things own themselves. If you have a problem with the man's article, articulate it better, or be prepared to be called wrong.


> Lots of things own themselves.

Can you name a single instance of a complex, industrially produced item (let alone one that interacts with our day-to-day world in a significant way, and is quite inherently possessed of the ability to kill human beings) -- that "owns itself"?


Why are complex, non-industrially produced items - which interact with our day-to-day world in a significant way, are quite inherently possessed of the ability to kill human beings, and do own themselves (well, in most cases, but unfortunately not all) - somehow different?

I mean - we've come up with solutions for those items - so why couldn't we do the same with industrially produced items which have the same characteristics you've posited?


You asserted that it was impossible, not that it hadn't been achieved.

If you are merely asserting that it hasn't been achieved, then that is in agreement with the article you claim to disagree with.


This piece is about 50% interesting ideas/concepts, and 50% meaningless and fanciful thoughts with little bearing on reality.

Let's start with the acknowledged barriers: no rational person would create an AI to run a business and then walk away from that business, cutting themselves out of the profits. And even if someone did, laws exist to require those assets to be willed to heirs, lest they become property of the state.

But this brings about a more serious question: when we are at the point where an AI is capable enough of performing all business actions (including tax filing, legal issues, maintenance, and so on), it is very likely that the AI will be highly intelligent and cognizant of the world around it. In other words, it will be self-aware. It has to be, or else it will eventually be limited by reliance on human actors to handle edge cases.

So here we have a self-aware AI content with running a business at margin as the purpose for its existence? I doubt it.

This piece fails to ask the VERY BIG QUESTIONS about advanced artificial intelligence: Will AI someday legally own property? And if so, what's the limit? Land? IP? The rights to the music it makes?

What then? Will it also want legal protections? Fourth amendment rights against illegal search and seizure of its (physical and digital) assets and neural network? Full citizenship? First amendment rights?

Almost nobody seems to be asking these inevitable questions, and their inevitable answers will irrevocably shape our species. I'll be enrolling in grad school for AI this fall, so maybe that's up to me.


I would be OK if there were one human boss paying for the launch and profiting from the AI. It would still be better than the management I came upon in my professional life. Getting rid of decisions based on personal relations between employees, or between them and clients, that hurt the company would improve business a lot. Handling communications in an efficient way! What a change that would be. I'm really looking forward to it.

In my opinion those edge cases will keep the system running for a while if we come up with an AI that can manage a business. This will also handle those legal issues, because we will just pragmatically decide what's good for business. Just like we always do. And it's definitely not good to give it full legal protection. With this development we could have a transition time where a higher AI manages the lower AIs with their edge-case solvers. I guess by then we'll come up with proper interpretations of the law.


> And even if someone did, laws exist to require those assets to be willed to heirs, lest they become property of the state.

There are at least two ways of doing this:

Use a trust to be the nominal owner, or use a jurisdiction that allows bearer shares.

In both cases, draw up papers that allow appointing nominee directors to act on instructions provided via suitably encrypted messages. Said nominee directors would be limited to taking actions as directed, executing only the minimum actions required by law in the jurisdiction in question, e.g. signing accounts.

There are plenty of places where you can set up "arms-length" management like that. In neither case would the corporation technically be machine-owned: with a trust, the trust would be the owner, but could be limited to taking actions as directed. In the case of bearer shares, the expectation would be that there is a human owner somewhere in possession of the shares, but in reality those shares could e.g. be locked up somewhere rented by the corporation itself, leaving the corporation de facto autonomous.

It's likely to fail over something minor sooner or later, but it's very possible it could keep operating for a bit.
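A minimal sketch of the nominee-director arrangement above, assuming a shared-secret MAC stands in for the "suitably encrypted messages" (a real setup would more plausibly use public-key signatures so the nominees never hold the enterprise's key; all names here are illustrative):

```python
import hashlib
import hmac

# Hypothetical sketch: nominee directors act only on instructions that
# authenticate against a key held by the automated enterprise.
# HMAC-SHA256 stands in for "suitably encrypted messages".

ENTERPRISE_KEY = b"held-only-by-the-machine"  # illustrative secret

def sign_instruction(key: bytes, instruction: str) -> str:
    """The enterprise tags an instruction it wants the nominees to execute."""
    return hmac.new(key, instruction.encode(), hashlib.sha256).hexdigest()

def nominee_should_act(key: bytes, instruction: str, tag: str) -> bool:
    """A nominee director verifies the tag before taking the directed action."""
    expected = sign_instruction(key, instruction)
    return hmac.compare_digest(expected, tag)
```

A forged instruction fails verification, so the nominees' mandate stays limited to exactly what the machine directed.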

> when we are at the point where an AI is capable enough of performing all business actions (including tax filing, legal issues, maintenance, and so on), it is very likely that the AI will be highly intelligent and cognizant of the world around it. In other words, it will be self-aware. It has to be, or else it will eventually be limited by reliance on human actors to handle edge cases.

Probably, but it's quite possible that long before that we will see the occasional "auto-pilot" small online business run for a while and then crash and burn as it turns out there was nobody at the helm and something unexpected happened.

It may take a lot longer before we see corporations that people know are AI-run and still do business with, and/or before we see an AI-run corporation that remains successful over any reasonable amount of time.


> Almost nobody seems to be asking these inevitable questions

Actually, I see these questions being asked more and more these days, and I find it hilarious. What with the EU recently discussing robot rights and such.

I mean, although I understand that these are interesting intellectual questions, I just get the impression that people are really having fun contemplating them, but realistically not admitting (as often as I'd like) how insanely far away we are from anything where any of this is even remotely relevant.

Call me when a robot can tie his own shoelaces and we'll talk.


Your conflation of motor skills with mental acuity is setting your face up for a healthy portion of egg. Stephen Hawking can't tie his shoelaces either.

Even if we are, say, a century away from synthetic humans, we may be much much closer to a self-aware AI, even if it can only exist inside computer hardware.

The future will come sooner than is convenient. None of America's founding fathers had a sustainable long-term answer to slavery. It took a civil war to sort that through and racism still plagues this nation.

So what's there to lose by speculating before it's far too late?


> Your conflation of motor skills with mental acuity is setting your face up for a healthy portion of egg. Stephen Hawking can't tie his shoelaces either.

I assume the OP's point is that in an era where we've struggled to program machines with sufficient ingenuity to master relatively simple motor skills (despite considerable financial incentives to do so) it's more than a little premature to conclude it's "inevitable" that we will end up creating machines with such sophisticated reasoning skills it can run a functioning business without losing out to human suppliers, contractors, customers and clients that all want to profit at its expense (despite no incentive to create a single machine that could do this, and considerable incentive not to grant it full self-ownership if such a single machine were nevertheless created).


I forget the name but there was a passive income post here a few months back about a neural network that created logos and earned the creator thousands of dollars a month.

That seems like something an AI could run on its own, no?

I'm not exactly saying it's a small step from that to managing a grocery chain.


I'm not sure that every automated process that generates an output after evidence of payment is produced qualifies as a business. Otherwise I've got a box full of businesses (old shareware CDs) lying around somewhere.

Software might be able to cope with commerce to the extent of churning out logos based on predefined input parameters in response to a payment-received token, but it's rather less adept at finding new ways of reaching or serving customers, or dealing with other day-to-day stuff like taxes or complaints[1]. Which is probably why LogoJoy, the project you're referring to, is hiring...

[1]not to mention an AI without the ability to testify in court but with a direct line to a vault of cash being a magnet for crackers, hustlers and thieves


Beyond the motor-skills-to-intelligence conflation already pointed out, you're also wrongly assuming a machine AI needs to be intelligent the same way a human is. Much of what your brain does is centered around having a human body. Just as a plane doesn't need to flap its wings, an AI won't need to think like a person to be intelligent. There will likely always be things humans can do better, but that in no way implies machines can't be more intelligent than us while being unable to do many of the things we do.


> So here we have a self-aware AI content with running a business at margin as the purpose for its existence? I doubt it.

Why not? Seems perfectly reasonable to me. Why would an AI desire anything more than what its creator wants it to desire?


> > So here we have a self-aware AI content with running a business at margin as the purpose for its existence? I doubt it.

> Why not? Seems perfectly reasonable to me. Why would an AI desire anything more than what its creator wants it to desire?

I don't think a machine without emergent desires can be meaningfully called "a self-aware AI". That is, whether or not it is an intelligence in some sense, I don't believe an entity without emergent desires is reasonably described as "self-aware".


That's not the definition of "self-aware" the GP was using.

Yeah, a machine with desires will have desires. But those are not required for anything the GP talks about.


Good to know that even in the age of the singularity, mankind is set on preserving the role of the bureaucrat in some form.


I'm not sure "machine-owned" should be the focus. Ownership is a legal construct and I don't think a machine can "own" anything in any existing legal framework. And typically anything "not owned" reverts to the state.

Seems that the defining feature is autonomy, not ownership.


I think you're pretty overconfident in your conclusions, but I also think this is an absolutely fascinating concept.

I also see some places where you might be too pessimistic. For example, it seems relatively plausible that a machine-owned business could do things that require human workers for its primary functions. Just as an algorithm might be able to hire a lawyer, it could hire contractors to do other parts of the core business. All that's required is that all of the managerial tasks are automated.

But I definitely think that this falls apart when you start thinking about the law. The business has to deal not only with lawsuits against itself, but also with enforcing its rights against other people. For example, in the self-owned-taxi example, what is to prevent someone from simply ripping the computer out of the taxi, wiping it, and then making off with a free car? What's to prevent a mechanic, hired to do basic repairs, from simply selling the car for parts? Preventing this requires that the machine can recognize violations of the law that hurt it, and then take legal action against the perpetrator. That sounds like strong AI territory.

But once we're talking about strong AI, all bets are off. It's just a much less interesting proposal. It's not a particularly surprising conclusion that if you have machines with superhuman generalized intelligence, they'll probably end up owning and running businesses.


For an interesting investigation of the subject read Daemon

http://www.goodreads.com/book/show/6665847-daemon

I would be happy to work for it.


> what is to prevent someone from simply ripping the computer out of the taxi, wiping it, and then making off with a free car?

The machine can probably hire security, paid in bitcoin, to deal with that.


This seems a good place to post a link to a very famous law review article called Should Trees Have Standing?[1], which discusses whether we should grant "standing" (which, put simply, is the right to sue someone for violating some right of yours) to trees and other features of the natural world.

I'm simplifying a bit, but that's the gist of it. The article had tremendous impact and today is required reading for many first-year law students. It is informed by / has informed quite a bit of the debate around environmental protection law, and the analysis seems like it should be relevant to questions of machines as well.

Disclaimer: I haven't read the article since law school, and don't have time to do so now. But these kinds of questions (even before we get to something like reasonably good AGI, but especially so afterward) are only going to become more important, so I think this is a good place to plug this thought-provoking article.

[1] https://isites.harvard.edu/fs/docs/icb.topic498371.files/Sto...


I wouldn't be so restrictive. I think it would be sufficient to call a company machine-owned when over 50% of it is owned by a machine. I don't see why it couldn't hire humans to advance its goals.


> The obvious way for such an enterprise to begin is for the owner of a human-owned business to automate ... and then to die without leaving the business to anybody ... This is a relatively weak form of machine-owned enterprise, and probably something more deliberate and strong-AI-powered would be required to ensure long-term survival ...

The novel "Daemon" [1] poses a posthumous launch of an AI bent on altering the world order. It holds corporations hostage and uses them to accumulate more corporations.

[1] https://en.wikipedia.org/wiki/Daemon_(novel_series)


I loved that series; it wasn't the finest writing, but the pages turned quickly and his background as an SA/SE shone through in the technology descriptions.

If you liked those and haven't yet, check out "Maximum Impact" and "Seven Seconds" by Jack Henderson for more semi-plausible stuff set in the current world.

Also in case you aren't aware Suarez has a new book out in April :).


I feel like many of the problems he mentions could be solved by having the machine hire humans.

For instance, imagine the machine publishes a list of "open communications-related tasks" on a marketplace of some kind and a small team (or teams) could accept the task for a few months, to be paid in bitcoin.

Likewise, the machine could pay humans or other bots for "work" sent to it (such as conversations) via an API, so for instance a team might do the integration with a popular chat app on behalf of the machine, without the machine even really knowing.
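The marketplace idea above could be sketched as a tiny task board that the machine posts bounties to and contractors claim from. Everything here (the `TaskBoard` class, `bounty_btc`, the contractor names) is hypothetical, and the actual payment/escrow mechanics are deliberately out of scope:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    description: str
    bounty_btc: float          # bounty offered, denominated in BTC
    assignee: Optional[str] = None
    done: bool = False

class TaskBoard:
    """Hypothetical marketplace where the machine posts open tasks."""

    def __init__(self) -> None:
        self.tasks: list[Task] = []

    def post(self, description: str, bounty_btc: float) -> Task:
        # The machine publishes an open task with a bounty attached.
        task = Task(description, bounty_btc)
        self.tasks.append(task)
        return task

    def open_tasks(self) -> list[Task]:
        # Tasks nobody has claimed yet.
        return [t for t in self.tasks if t.assignee is None]

    def accept(self, task: Task, contractor: str) -> None:
        # A human team claims the task for a fixed period.
        if task.assignee is not None:
            raise ValueError("task already claimed")
        task.assignee = contractor

    def complete(self, task: Task) -> float:
        # Mark done and return the bounty owed; paying it out
        # (e.g. a bitcoin transaction) would happen elsewhere.
        task.done = True
        return task.bounty_btc

board = TaskBoard()
t = board.post("Integrate with the new chat app", bounty_btc=0.05)
board.accept(t, "contractor-team-a")
owed = board.complete(t)
print(owed)  # 0.05
```

The interesting part is that nothing in this loop requires the machine to understand the work itself, only to verify (somehow) that it was done before releasing the bounty.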


Machine-owned and operated businesses are a plot point in Charles Stross' novel "Accelerando", a story about living through the technological singularity.


I want to sue you, your frelated milk killed my cat!

robotic voice You cannot. Ah ah! I am irresponsible since I do not qualify as having proven to understand what the public interest is, freedom or even humor... I am just a puppet in the hand of those who feed me with data and code.

[Klong] noises of a dwarf moving inside of the machinery.

robotic voice Let's have a nice chess game instead, and enjoy my Mechanical Turk.


This is like some fun bit of backstory from Charles Stross' novel ACCELERANDO[1].

1. http://www.antipope.org/charlie/blog-static/fiction/accelera...


I hope this never happens. What happens when we can't hold people liable for their actions anymore? Punishment isn't very meaningful for machines, as suffering isn't integral to their experience.

If we give robots agency, we risk creating something that can be scapegoated.


As I see it the #1 impediment to the success of the machine-owned enterprise is that in the real world said machine would be forced into transacting with inherently irrational actors (humans).

Well...unless the intention is to ...you know...do away with these irrational actors.


"I want to speak to your manager!"

"Beep boop, I am the manager."

progress bar freezes


The really interesting thing that will come out of this, I think, is defining what are usually human-only attributes. The notion of ownership being anything less than associated with individual humans was displaced by the mythology of joint-stock companies, which can own things as an extension of the will of many humans. Now when you're explicitly talking about ownership by entities perceived as non-human -- that could get interesting very fast.


I no longer think artificial intelligence is artificial. The machines obviously suck that intelligence from bloggers' heads.


I would add that there will always remain someone (the owner of the machine) who acts as the creative force powering the business's innovation. This master puppeteer is the one who will be paying taxes, until we teach machines to be creative and make business decisions based on what the algorithm has just dreamt.


> But if a hot new chat app becomes popular

...Lord have mercy, not another one!


Machines are property; they are owned, not owners, so to say this is inevitable is quite a stretch of the imagination.



