
Sure. Quite. But why not try developing an AI that can replace them and make you a >5x return?


Doing this would require an AGI, which is currently out of reach.

A more useful/practical question would be: what tools can we build to assist CEOs in their decision making and make them more productive? I think the answer will turn out to be "not much", since the job is so high-level and abstract. You'd have a better time targeting lower-skilled jobs or technical jobs like software engineering.


> A more useful/practical question would be: what tools can we build to assist CEOs in their decision making and make them more productive?

They're called data scientists. An AI can crunch the numbers, but someone needs to read and interpret the results. You can't automate that part yet; humans are still the best at placing data in its social and business context.


Can you explain why an AGI is needed? Because to me, invoking the need for an AGI sounds equivalent to saying you don't understand the problem space.


The job of a CEO would need AGI because it requires understanding of human social dynamics, market context, regulatory context, etc. Such cross-domain, multi-faceted expertise is exactly the kind of thing that needs broad, general intelligence to do effectively.

Hire people, fire people, pitch investors, manage direct reports across multiple business lines, understand what's going on in the world and market, set a product vision/direction, shape the culture of the org, etc etc. That's AGI territory.


I have mixed feelings between your response and the response you are replying to. On one hand, I can clearly see how what you're saying makes sense. On the other, I'm still inclined to believe that if you really distilled the actual purpose of everything you mentioned, it could be formulated in a way that doesn't require AGI.


I think one thing to consider is the non-stationary distributions in the problem space.

We may be able to formulate our current understanding of what it takes to be a CEO in every feasible future context that we can imagine, and create an agent (not an AGI) that does that.

But then what would happen when the distributions change out of sample in an unforeseeable way? Suppose two countries go to war and this drastically changes the operating environment of the business. The agent would need to learn to operate with human-level capability in a novel environment it was never specifically trained for. That's why I'm thinking it requires an AGI.
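
To make that concrete, here's a toy sketch (numpy only; the "ad spend -> revenue" relationship and the regime flip are entirely made-up assumptions on my part). A model fit under the old regime keeps confidently extrapolating after the world changes, and nothing inside it signals that it should stop:

    # Toy illustration: a model fit under one regime degrades badly
    # when the underlying relationship shifts out of sample.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pre-shift regime: revenue responds to spend with slope +3.
    X = rng.normal(1.0, 0.5, size=(1000, 1))
    y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=1000)
    A = np.hstack([X, np.ones((1000, 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares

    # Post-shift regime: the relationship itself flips to slope -1,
    # a change the agent was never trained on.
    X2 = rng.normal(1.0, 0.5, size=(1000, 1))
    y2 = -1.0 * X2[:, 0] + rng.normal(scale=0.1, size=1000)
    A2 = np.hstack([X2, np.ones((1000, 1))])

    print("in-regime MSE:  %.3f" % np.mean((A @ w - y) ** 2))    # ~0.01
    print("post-shift MSE: %.3f" % np.mean((A2 @ w - y2) ** 2))  # ~20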


Because ML (which is what people actually mean when they use the ambiguous term AI) needs data to learn from, and this - apart from the fact that experimenting on a living organism like a company can be extremely expensive - poses several problems:

1. What data do you use? Past data from the same company? Data from other companies? Which ones? Past best performers? (See the sketch at the end of this comment for why that last one is a trap.)

2. Identifying all the pieces of information relevant to decision making is a challenge in itself, as we're talking not just about internal metrics but also external events and trends. Moreover, choosing and collecting these is not a one-off action; it needs to be reevaluated regularly, i.e. you have to be actively looking for factors influencing the company's performance. Mind you, not all of them are readily measurable.

3. Part of a CEO's work is creative, i.e. not extrapolating from past events but looking for completely new avenues and opportunities. Building such a complex system would be extremely expensive, although I agree the challenge is very interesting.

Not to mention that an extremely important part of the CEO's job is dealing with people, not just data. This point alone can make the difference between a well-managed and a badly-managed company.
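
Regarding point 1, here's the toy sketch I promised (every number in it is a made-up assumption): if risky bets add variance but no real edge, a model trained only on surviving "best performers" concludes that risk pays.

    # Survivorship bias: train only on winners and risk looks like skill.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    risk = rng.uniform(0, 1, n)        # how aggressive each company's bets are
    luck = rng.normal(0, 1, n)
    # Risk adds variance to outcomes but has zero true effect on the mean.
    outcome = luck + risk * rng.normal(0, 2, n)

    survived = outcome > 1.0           # we only observe past best performers

    slope_all = np.polyfit(risk, outcome, 1)[0]
    slope_surv = np.polyfit(risk[survived], outcome[survived], 1)[0]
    print("true effect of risk:    %+.2f" % slope_all)   # ~ 0
    print("survivor-only estimate: %+.2f" % slope_surv)  # clearly positive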


If one could automate that job, one could automate every job. That's AGI territory.

Very creative jobs involving leading people will be the last jobs to fall to automation - assuming that's even possible.


They may be so expensive precisely because it's difficult to automate them.


No reason not to try. That said, I don't think it's possible. I love the idea that we could automate this, but realistically it's not viable in the next 50 years. Maybe a bot version of a CEO, but that would provide little value.


Careful what you wish for here.

I suspect that if an AI can replace the job of a CEO then all other jobs will be gone as well.


The fact that Western society finds job automation dangerous (or dystopian) is an indicator that something is severely dated at best--or dangerously corrupt at worst--about our economic system.

Such a thing would be a utopian outcome. If this kind of work can be automated in a humane way, then the vast majority of work can be automated. That means people can be freed to spend every waking moment on art and play.


How are resources allocated in this utopia?


My personal guess is - assuming that democracy is intact - redistribution will be voted in with landslide support.

In a hypothetical future where almost nobody can compete with an AGI and 90% of voters find themselves completely useless in the economy, the proportion of voters who are against redistribution will drop precipitously.


Then what?

Everyone is allocated the same by the state? A society where no one has any function in society and where the state allocates resources to individuals sounds more dystopian than utopian to me...


I was merely stating what I expect to happen, not what I want.

It sounds pretty bad to me as well. Not worse than poverty, but meaningless and boring otherwise. Some people will make their own meaning, but many won't.


Do you really think the people who wouldn't make their own meaning are finding tons of purpose in their current jobs?



Going off on a tangent here, but I'm curious what would be meaningless and boring about it. This would permanently take care of the bottom two levels of Maslow's hierarchy of needs, so people could focus on the top three.

It could be argued that the truth is actually the reverse: it would enable most people to find much deeper meaning, while only an exclusive few (probably mostly pathological people, like sociopaths and narcissists) wouldn't.


You're possibly right, but do people who inherit millions or win the lottery strike you as particularly happy after 10 years?

Admittedly, I only have anecdotal data on that.


That's not generally a good measure, because a) the people who play the lottery tend to be poorer, and b) poorer people tend to have poorer financial literacy/education.

This means that they don't know what to do with that massive windfall, they manage it poorly or are taken advantage of, and end up unhappy. And the ones who do know what to do with it tend to do very boring things (set up iron-clad structured annuities or something, I imagine—I don't claim to have enough financial education to do it well myself!), and we don't hear about them because "guy won $500m 20 years ago, is living a comfortable but unremarkable life now" doesn't sell papers.

Note that this is also very different from how a UBI would affect people, because that would be moderate amounts of money regularly for life (y'know...rather like an iron-clad structured annuity); there's no way to "blow it" and end up with nothing.

People who inherit very large sums of money, but are not themselves born into wealth, are so rare as to be essentially a myth. There certainly aren't enough of them to make a reasonably rigorous sample size for a scientific study.
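
For scale, a rough back-of-envelope sketch (the payment, rate, and horizon are all made-up assumptions, not any real UBI proposal) using the standard annuity present-value formula, PV = pmt * (1 - (1 + r)^-n) / r:

    # Present value of a lifelong stream of moderate monthly payments.
    def annuity_pv(pmt: float, annual_rate: float, years: int) -> float:
        r = annual_rate / 12           # monthly discount rate
        n = years * 12                 # number of monthly payments
        return pmt * (1 - (1 + r) ** -n) / r

    # e.g. $1,500/month for 50 years, discounted at 3%/year
    print(f"${annuity_pv(1_500, 0.03, 50):,.0f}")  # roughly $466,000

So even a modest lifelong stream is worth six figures up front, but it arrives in a form that can't be blown in one go.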


The ones who were able to keep their anonymity and didn't engage in fiscally irresponsible behavior like drug binges and $200k+ vehicles are probably very happy after 10 years.



