This seems misplaced in a discussion about generative AI. Automated systems for handling appeals are already a thing and don't necessarily rely on generative AI like LLMs.
I agree that ban and appeal handling needs a human in the loop to work as intended, to have some way of handling the false positives. Without a human in the loop, the appeal process might as well be removed entirely, because it won't fulfill its purpose.
Whether it's misplaced or not kinda depends on how big of a deal you think recent and upcoming generative AIs are. If they're really incredible, their recognition as something that makes moderation automation cheaper and/or better is likely to inject new energy into the preexisting drive to automate tasks like moderation and policy appeals. In the same way, if they make it possible to automate things that strictly required humans before (which already seems to be the case for LLMs), new classes of tasks will be automated that weren't before. Both factors make a general right to appeal before competent, human reviewers more urgent, even if it's been a good idea for a long time.
On the other hand, if you don't see recent and upcoming generative AIs as that world-changing, they don't really change the picture as to whether or not the enshrinement of such a right into policy is warranted.
Free of cost != free open model. Free of cost means all your requests are logged for Google to use as training data and whatnot.
Llama3.2, on the other hand, runs locally and never sends data to a 3rd party, so I can freely use it to summarize all my notes, even if one of them is from my most recent therapy session and another is my thoughts on how to navigate a delicate political problem at work. I don't need to pre-classify all the input to make sure it's safe to share. Same with images: I can use Llama3.2 11B locally to interpret any photo I've taken without having to worry about getting consent from the people in the photo to share it with a 3rd party, or whether the photo is of my passport for some application I had to file, or a receipt for something I bought that I don't want Google training their next vision model's OCR on.
TL;DR - Google's free-of-cost models are irrelevant when talking about local models.
> The difference is that nobody will pay programmers to keep programming once LLMs outperform them. Programmers will simply become as obsolete as horse-drawn carriages, essentially overnight.
I don't buy this. A big part of the programmer's job is to convert vague and poorly described business requirements into something that is actually possible to implement in code and that roughly solves the business need. LLMs don't solve that part at all, since it requires back and forth with business stakeholders to clarify what they want and to educate them on how software can help. Sure, once the requirements are finally clear enough, LLMs can produce a solution. But then the tasks of testing, building, deploying, and maintaining it remain, which also typically fall to the programmer. LLMs are useful tools at each stage of the process and speed up tasks, but they don't replace the human who designs and architects the solution (the programmer).
It's good to envision what we'd actually use AGI for. Assuming it's a system you can give an objective to and it'll do whatever it needs to do to meet it, it's basically a super smart agent. So people and companies will employ it to do the tedious and labor intensive tasks they already do manually, in good old skeuomorphic ways. Like optimising advertising and marketing campaigns. And over time we'll explore more novel ways of using the super smart agent.
Agree with your premise, but the value creation math seems off. $0.50/day might become reality for some percentage of US citizens. But not for 3B people around the world.
There's also the issue of who gets the benefit of making people more efficient. A lot of that will be in the area of more efficient work, which means corporations get more work done with the same amount of employees at the same level of salary as before. It's a tough argument to make that you deserve a raise because AI is doing more work for you.
IT salaries began to go down right after AI took off with GPT-2, demonstrating not just the potential but actual evidence of a much improved learning/productivity tool, well beyond the reach of internet search.
So far beyond that you can easily turn a newbie into a junior, or a junior into something like a semi-senior, and let the senior get solutions in hours to problems that previously took days to solve.
After the salaries went down, around 2022 to the beginning of 2023, the layoffs began. Those were mostly corporate moves masked as "AI-based," but some layoffs probably did have something to do with the extra capabilities of improved AI tools.
On top of that, fewer job offers have been published since maybe mid-2023. Again, that could just be corporate moves related to inflation, US markets, you name it. But there's also a chance that some of those missing IT job offers were (and are) the outcome of better AI tools, with corporations actively betting on reducing headcount while preserving current productivity.
The whole thing is changing by the day as some tools prove themselves, others fail to meet market expectations, etc.
Agree! I remember printing each issue and reading it over and over. So inspiring; it convinced me that it's possible to figure out how everything in tech works, down to the wire.
Bingo. That's all that matters.