Hacker News

Dwarkesh had a good interview with Zuck the other week. And in it, Zuck had an interesting example (that I'm going to butcher):

FB has long wanted to have a call center for its ~3.5B users. But that call center would automatically be the largest in history and cost ~15B/yr to run. Something that is cost ineffective in the extreme. But, with FB's internal AIs, they're starting to think that a call center may be feasible. Most of the calls are going to be 'I forgot my password' and 'it's broken' anyways. So having a robot guide people along the FAQs in the 50+ languages is perfectly fine for ~90% (Zuck's number here) of the calls. Then, with the harder calls, you can actually route it to a human.
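
The triage Zuck describes could be sketched in a few lines, something like this (the intent labels, canned answers, and the bot/human split are illustrative, not anything Meta has actually published):

```python
# Toy sketch of the triage described above: a bot handles the
# high-frequency FAQ intents, and everything else is escalated to a
# human agent. Intent names and answers are made up for illustration.

FAQ_ANSWERS = {
    "forgot_password": "Use the 'Forgot password?' link on the login page.",
    "app_broken": "Try updating the app and clearing its cache.",
}

def triage(intent: str) -> tuple[str, str]:
    """Return (handler, response) for an already-classified call intent."""
    if intent in FAQ_ANSWERS:
        return "bot", FAQ_ANSWERS[intent]
    # The hard ~10%: route to a person.
    return "human", "Connecting you to an agent."
```

In practice the interesting (and hard) part is the classifier sitting in front of this routing step, plus making it work across 50+ languages.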

So, to me, this is a great example of how the interaction of new tech and labor is a fractal, not a hierarchy. With each new tech that your specific labor sector adopts, you get this fractalization of the labor in the end. Without it, Zuck would never have built a call center at all, denying that work to many people. But this new tech allows for a call center that looks a lot like the old one, just with only the hard problems. It's smaller, yes, but it looks the same and yet is slightly different (hence a fractal).

Look, I'm not going to dispute that tech is disruptive. But what I am arguing is that tech makes new jobs (most of the time); it's just that these new jobs tend to deal with much harder problems. Like, we're pushing the boundaries here, and that boundary gets more fractal-y, and it's a more niche and harder working environment for your brain. The issue, of course, is that, as with a grad student, you have to trust that the person working at the boundary is actually doing work and not just blowing smoke. That issue, the one of trust, is I think the key issue to 'solve'. Cal Newport talks a lot about this now: how these knowledge-worker tasks really don't produce much for a long time, and then they have these bursts of genius. It's a tough one, and not an intellectual enterprise but an emotional one.




I worked in automated customer support, and I agree with you. By default, we automated 40% of all requests. It gets harder after that, not because the problems of the next 40% are any different, but because they are unnecessarily complex.

A customer who wants to track the status of their order will tell you a story about how their niece is visiting from Vermont and they wanted to surprise her for her 16th birthday. It's hard because her parents don't get along as they used to after the divorce, but they are hoping that this will at the very least put a smile on her face.

The AI will classify the message as order tracking correctly, and provide all the tracking info and timeline. But because of the quick response, the customer will write back to say they'd rather talk to a human and ask for a phone number they can call.

The remaining 20% can be resolved by neither human nor robot.


Between the lines, you highlight a tangential issue: execs like Zuckerberg think the easy/automatable stuff is 90%. People with skin in the game know it is much less (40% per your estimate). This isn't unique to LLMs. Overestimating the benefit of automation is a time-honored pastime.


I’ve noticed this when trying to book a flight with American Airlines earlier this year. Their website booking was essentially broken, insisting that one of my flight segments was fully booked but giving no indication of which one and attempting alternate bookings which replaced each of the segments in turn still failed. They’d replaced most of their phone booking people with an AI system that also was nonfunctional and wanted to direct me to the website to book. After a great deal of effort, I managed to finally reach a human being who was able to place the booking in a couple minutes (and, it turned out, at a lower price than the website had been quoting).


This reminds me of how Klarna fired a large part of their customer support department to replace it with AI, only to eventually realize they couldn't do the job primarily using AI and had to rehire a ton of people.


That might have been their story, but Klarna is struggling to maintain their runway at the moment, and that may have been the bigger driver.


You're not buying toilet paper and doritos in 12 easy payments?


OT: just googled that name, info panel on the right in my language settings categorizes it as "金融の連鎖", or "cascading of finances". am not sure how to take that.


Klarna is basically loan sharking, but if you do it with an app it's legal. Opera the browser has also moved into doing that.


In fairness to Klarna, the interest rates they charge are typically low or even zero. The problem is more that they're encouraging poor people to waste money on things they probably shouldn't buy in the first place, like expensive concert tickets or consumer electronics.


I used to work at a competitor to Klarna, so take this with a grain of salt, but the zero interest rates aren't really zero. They finance either by Klarna eating into their runway, or by the business paying the interest up front. That usually leads to higher prices for everyone, regardless of whether you use Klarna or not.


That's only the case if demand is static. The parent comment mentions that "poor people buy more stuff" which is increased demand


Pretty good description tbh

Their business model is an online payment provider (like e.g. PayPal/apple pay) that splits the payment into 3, 6 or 12 monthly payments, usually at 0% interest

The idea being that for the business the loss in revenue from an interest free loan is worth it if it causes an increase in sales


But isn't it supposed to be more like "financing franchise"?


Yeah I think I do already see this happening in my work. It's clearly very beneficial, but its benefit is also overestimated. This can lead to some disenchantment and even backlash where people conclude it's all useless.

But it isn't! It's very useful. Even if it isn't eliminating 90% of work, eliminating 40% is a huge benefit!


I never call a customer service line unless the website doesn't work, but customer service robots try very hard to get me to hang up and go to the website.

It's super frustrating. These robots need to have an option like "I am technically savvy and I tried the website and it's broken."


Everyone would use that option.

Do you know why your ISP asks you to unplug your modem and plug it back in while on the call, even if you insist you already did that? A surprisingly large number of people don't realize their modem isn't plugged in at all.


Perhaps the value of believing in the 90% is the motivation it provides.

If you don’t believe in an exaggerated potential, you might never start exploiting it.


> But because of the quick response, the customer will write back to say they'd rather talk to a human

Is this implying it's because they want to wag their chins?

My experience recently with moving house was that most services I had to call had some problem that the robots didn't address. Fibre was listed as available on the website but then it crashed when I tried "I'm moving home" - turns out it's available in the general area but not available for the specific row of houses (had to talk to a human to figure it out). Water company, I had an account at house N-2, but at N-1 it was included, so the system could not move me from my N-1 address (no water bills) to house N (water bill). Pretty sure there was something about power and council tax too. With the last one I just stopped bothering, figuring that it's the one thing that they would always find me when they're ready (they got in touch eventually).


The world is imperfect and we are pretty good at spotting the actual needle in the haystack of imperfection. We are also good at utilizing a whole range of disparate signals + past experience to make reasonably accurate decisions. It'll take some working for AI to be able to successfully handle such things at a large scale - this is all still frontier days of AI.


> this is all still frontier days of AI

That's why it annoys me how much effort they put into not talking to me, when it's clear that their machine cannot solve my problem.


They don’t care about you. You are a number on a screen that happens to pay their company money sometimes. But by using recorded voices, the company hopes to tap into the empathetic part of your human brain to subconsciously make excuses for their crappy service.

When I get stellar customer service these days, I’m happy and try to call it out, but i don’t expect it anymore. My first expectation is always AI slop or a shitty phone tree. When I reframed it for myself, it was a lot easier not to get frustrated about something that I can’t control and not blame a person who doesn’t exist.


> They don’t care about you. You are a number on a screen that happens to pay their company money sometimes.

Actually that reminds me, I couldn't figure out how to cancel my old insurance online and couldn't get to a person on the phone - I just deleted the direct debit, and waited until they called me to sort it out.


> A customer who wants to track the status of their order will tell you a story about how

I build NPCs for an online game. A non-trivial percentage of people are more than happy to tell these stories to anything that will listen, including an LLM. Some people will insist on a human, but an LLM that can handle small talk is going to satisfy more people than you might think.


Zuck is just bullshitting here, like most of what he says.

There is zero chance he wants to pay even a single person to sit and take calls from users.

He would eliminate every employee at Facebook if it were technically possible to automate what they do.


So would everyone that ever created a business. Nobody grows headcount if they don't have to. Why be responsible for other people's livelihoods if you can make it work with fewer people? Just more worries and responsibilities.


> Nobody grows headcount if they don't have to.

From my experience in corporations, this is a false statement. The goal of each manager is to grow their headcount: the more people under you, the more weight you have and the higher the position you get.


There is a difference between business owners (who don't want to spend money unless they have to) and managers (who want career growth and are not necessarily worried about the company's bottom line w.r.t. headcount).


> Why be responsible for other people's livelihoods if you can make it work with less people?

Because he is the fourth richest man on the planet and that demands some responsibility, which he refuses to take.

He owns 162,000,000,000 dollars. Meta's net income in 2024 was 50,000,000,000 dollars.


I think that once you have profit & loss responsibility that changes.


A manager’s net worth is not tied to the valuation of the company. They get their salary regardless.




I don’t know about you, but for me, one of the greatest joys in life is being able to hire people and give them good jobs.


This doesn't seem true to me at all. Humans are not rational drones that analyze the business and coldly determine the required number of people. I would be surprised if CEOs didn't keep people around because it felt good to be a boss.

Facebook might be able to operate with half the headcount, but then Zuckerberg wouldn't be the boss of as many people, and I think he likes being the boss.


> I would be surprised if CEOs didn't keep people around because it felt good to be a boss.

If you had hired people (and been responsible for their salaries and benefits and HR issues), you would definitely not say that.


Money unspent is worthless.


Most major corporations have increased head count in recent years when they didn’t have to via the creation of DEI roles. These positions might look good in the current cultural moment but add nothing to a company’s bottom line and so are an unnecessary drain on resources.


Ahh getting downvoted but no one has offered a counter-argument.


He can definitely fire most people at Facebook. He just doesn't because it would be like not providing a simple defense against a pawn move on a Chess board. No point in not matching the opposition's move if you can afford it. They hire, we hire, they fire, we fire.


FB would be run into the ground on day one if he fired most people (>50%) at FB.


Why?

Other than on-call roles like Production Engineers, whose absence would make the company fail within a day?


Because things would happen on the platform that would be bad PR. Availability might even go down. Who knows what kind of automated things need to be kept in check daily.


> things would happen on the platform, that would be bad PR

They (allegedly) contributed significantly to inciting a genocide [0]. PR doesn't get much worse than that, but it seems that we as a society just don't care about these things that much any more. I really can't recall any case of an individual or organization going down because of PR issues, except for people in the entertainment industry; for some reason, we only expect good morals from our actors and comedians.

[0] https://en.wikipedia.org/wiki/Facebook_content_management_co...


The Twitter example shows it might not be true.


Twitter is dead now, right-wing echo hall. It basically ceased to exist in the way it did.

I will admit though, that it may be possible to continue existing in other ways, if he fired >50% of the people at FB.


This has more to do with Musk's policy, though. It's still up and running, so clearly the tech side wasn't as affected as people thought it would be.


You are showing your own biases here. Twitter did cease to exist the way it did. In its place is a platform mostly free of censorship and with new features added.

I’d rather see humanity in all of its good, bad, and ugly than have a feed sanitized for me by random Twitter employees who in many cases had their own agenda.


I would rather not see hate speech and incitement of violence online. If you think that Twitter in its current form doesn't have a hidden agenda... that is a very naive belief to hold. Censorship is not the only negative thing that can happen to information. We should all have learned that lesson by now.


I don't see any hate speech or incitement of violence on my X feed. If you do then you must be following the wrong accounts.

Censorship is the worst negative thing that can happen to information. We should have all learned that lesson by now.


> Censorship is the worst negative thing that can happen to information. We should have all learned that lesson by now.

On the contrary, some "information" doesn't deserve the light of day, and we should have learned that lesson in the 1930s and 1940s. The question is where to draw the line.


I want to see those things. Or rather, I want people to show me who they are.

I want to see all the dumb stuff politicians say. I want to see celebrities’ terrible opinions on things.

I’d rather know how messed up people are than have a feed sanitized for me to keep me ignorant.


> Twitter did cease to exist the way it did. In its place is a platform mostly free of censorship and with new features added.

Try blocking or criticising Musk, or saying "cis" and come back to us on "mostly free of censorship".


It’s not mostly free of censorship, you can find many examples of mild left opinions being censored. Harsh epithets against the out group are allowed, up to and including death threats, but mild epithets against the right are removed and often result in bans.

Free speech on Twitter is a joke, and you either are arguing in bad faith or you have no idea what you’re talking about.


Most major companies and politicians still use twitter for communication. It sounds like you are the one in the "echo hall"?


And exactly why would I care, what uninformed people at companies and what uninformed politicians do? And what does that have to do with me being in an "echo hall" (I think you mean echo chamber, btw..)? In what way is whatever platform politicians use indicative of that platform not being an echo chamber?


It doesn't matter whether you care or not. Your personal opinion is of no importance when it comes to mass-market social media and other horizontal platforms. The point is that a lot of politicians and business leaders will continue using X regardless of what you think of it.


Do you even realize that you are the one who used the phrase "echo hall"?

Might be time to step back and take a breath.


You still didn't answer the question though. I asked you why it matters, whether politicians make uninformed use of a bad platform, when it comes to me being in an echo chamber or not. I think there is no relation between what silly things politicians do, and whether I am in an echo chamber or not.

I am not sure why I wrote "echo hall". I must have been mentally absent or something. To my own ears it sounds weird and not like something I would usually write. It might have been weird auto correction on phone. I am not sure. Anyway, that is besides the point. I would like to know, why you think, that what politicians do has any relation to me being in an echo chamber or not. I mean, do you define the outside of echo chambers to be the place, where politicians go? Like ... Are they such a massive number of people or somehow indicative of that outside? I just don't get your idea.


If you are only getting your perspective from tertiary sources rather than primary sources, then you are more likely subject to a bias layer from intermediaries.

I really don't think I am saying anything controversial.


> right-wing echo hall

But that is the result of the new owner's new agenda, not the result of mass layoffs. I'm sure the result would be the same without layoffs.


Sounds like his own job could be automated…


The company is his property, ofc he won't fire himself


> Most of the calls are going to be 'I forgot my password' and 'it's broken' anyways. So having a robot guide people along the FAQs in the 50+ languages is perfectly fine for ~90% (Zuck's number here) of the calls.

No it isn't. Attempts to do this are why I mash 0 repeatedly and chant "talk to an agent" after being in a phone tree for longer than a minute.


I try to enunciate very clearly: "What would you like to do?" - "Speak to a fcuking human. Speak to a fcuking human. Speak to a fcuking human. Speak to a fcuking human."


Just say “fucking”


No wonder the AI couldn't understand.


And you don't think that this won't improve with better bots?


> And you don't think that this won't improve with better bots?

Actually, now that I think about it, yeah.

The whole purpose of the bots is to deflect you from talking to a human. For instance: Amazon's chatbot. It's gotten "better": now when I need assistance, it tries three times to deflect me from a person after it's already agreed to connect me to one.

Anything they'll allow the bot to do can probably be done better by a customer-facing webpage.


Maybe for you, but not for most people. Most people have problems that are answered online, but knowledge sites are hard to navigate, and they can't solve their own problems.

A high quality bot to guide people through their poorly worded questions will be hugely helpful for a lot of people. AI is quickly getting to the point that a very high quality experience is possible.

The premise is also that the bots are what enable the people to exist. The status quo is no interactive customer service at all.


This sounds to me like something that's better solved by RAG than by an AI manned call center.

Let's use Zuck's example, the lost password. Surely that's better solved with a form where you type things, such as your email address. If the problem is navigation, all we need to do is hook up a generative chat bot to the search function of the already existing knowledge site. Then you can ask it how to reset your password, and it'll send you to the form and write up instructions. The equivalent over a phone call sounds worse than this to me.
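
A minimal sketch of that hook-up (the knowledge-base articles and the keyword scoring are made up for illustration; a real system would use embeddings for retrieval and an actual LLM call for the reply):

```python
# Minimal RAG-style sketch: retrieve the best-matching knowledge-base
# article for a loosely worded question, then use it as grounding for
# the chat bot's reply. Articles and scoring are illustrative only.

KB = {
    "reset password": "To reset your password, use the 'Forgot password?' form.",
    "delete account": "To delete your account, go to Settings > Account.",
}

def retrieve(question: str) -> str:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    words = set(question.lower().split())
    best = max(KB, key=lambda title: len(words & set(title.split())))
    return KB[best]

def answer(question: str) -> str:
    context = retrieve(question)
    # A real bot would pass `context` to a generative model along with
    # the question; here we just surface the retrieved instructions.
    return context
```

The point is that the generation step only has to rephrase and link to what the knowledge site already says, which is a much easier problem than an open-ended phone agent.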

I think Zuck is wrong that 90% of the problems people would call in for can easily be solved by an AI. I was stuck in a limbo with Instagram for about 18 months, where I was banned for no clear reason, there was no obvious way to contact them about it, and once I did find a way, we proceeded with a weird dance where I provided ID verification, they unbanned me, and then they rebanned me, and this happened a total of 4 times before the unban process actually worked. I don't see any AI agent solving this; the cause was obviously process and/or technical problems at Meta. This is the only thing I ever wanted to call Meta for.

And there is another big class of issue that people want to call any consumer-facing business for, which AI can't solve: loneliness. The person is retired and lives alone and just wants to talk to someone for 20 minutes, and uses a minor customer service request as a justification. This happens all the time. Actually an AI can address this problem, but it's probably not the same agent we would build for solving customer requests, and I say address rather than solve as AI will not solve society's loneliness epidemic.


Respectfully, I think your reply assumes that I am suggesting the only AI interface must be on the phone.

It should be everywhere, as a first line of customer service. Even once talking to a person, real-time translation is necessary -- it's not possible to staff enough skilled employees in every language on earth.

I'd like to call out that "I can't log in" is the most common problem with Facebook, by a wide margin. HN user anecdotes are just not useful when assessing the scope of this problem.

I'd also like to call out that many people (usually not English speaking) nearly exclusively use voice memos and phone calls, and rarely type anything at all.

I think it is clear that AI will enable better customer service from Facebook. Without AI, a FB call center is clearly impossible. With AI, perhaps it begins to look feasible.


Zuck also said that AI is going to start replacing senior software engineers at Meta in 2025. His job isn’t to state objective facts but hype up his company’s products and share price.


Honestly I hope this is true. I recognize this is a risky thing to say, for my own employment prospects as a software engineer. But if companies like Facebook could run their operations with fewer engineers, and those people could instead start or join a larger diversity of smaller businesses, that would be a positive development.

I do think we're going to see less employment for "coding" but I remain optimistic that we're going to see more employment for "creating useful software".


Sorry for the acidity, just training my patience while waiting for the mythical FB/AI call center.


Yeah, I was a little incredulous about what Zuck said there too.

Like, if AI is so good, then it'll just eat away at those jobs and get asymptotically close to 100% of the calls. If it's not that good, then you've got to loop in the product people and figure out why everyone is having a hard time with whatever it is.

Generally, I'd say that calls are just another feedback channel for the product. One that FB has thus far been fine without consulting, so I can't imagine its contribution can be all that high. (Zuck also goes on to talk about the experiments they run on people with FB/Insta/WA, and woah, it is crazy unethical stuff he casually throws out there to Dwarkesh)

Still, to the point here: I'm still seeing AI mostly as a tool/tech, not something that takes on an agency of its own. We, the humans, are still the thing that says 'go/do/start', the prime movers (to borrow a long-held and false bit of ancient physics). The AIs aren't initiating things, and it seems, to a large extent, we're not going to want them to. Not out of a sense of doom or lack-of-greed, but simply because we're more interested in working at the edge of the fractal.


Not to discredit anything you wrote, but:

"I'm still seeing AI mostly as a tool/tech, not something that takes on an agency of its own."

I find that to be a highly ironic thing. It basically says AI is not AI. Which we all know it is not yet, but then we can simply say it: The current crop of "AI" is not actually AI. It is not intelligence. It is a kind of huge encoded, non-transparent dictionary.


As someone who has been involved with customer support (on the in-house tech side), the vast majority of contacts to a CS team will be either very inane or extremely inane. If you can automate away the lowest tier of support with LLMs, you'll improve response times not just for the simple questions but also for the hard ones.


I have had the problem with customer support that about 90% of the calls/chats I have placed should have been automated (on their side), and the remaining 10% needed escalation beyond the "customer service" escalation ladder. In America, sadly, that means one of two things: (1) you call a friend who works there or (2) you have your lawyer send a demand letter requesting something rather inane.


I agree with that common pattern but even without [current] AI there were ways to automate/improve the lowest tier: very often I don't find my basic questions in the typical corporation's FAQ.


I usually assume that it is because they do not want to answer those basic questions, or want to hide the answers. For example, some shop with no answer in the FAQ about how refunds work. Instant sus.


I like the analogy of the fractal boundaries.

But there's also consolidation happening: Not every branch that is initially explored is still meaningful a few years later.

(At least that's what I got from reading old mathematical texts: People really delved deeply into some topics that are nowadays just subsumed by more convenient - or maybe trendy - machinery)


Let's watch your mood when AI answers your call.


Weird to find out that some people still believe a thing that guy says.


Where’s my free internet to migrants, Zuck?



