I'm an AI skeptic when it comes to business cases. I think AI is great at getting to average, and the whole point of a business is that you're paying them to do better than average.
But I think current AI (not where it might be in a few months or years) is absolutely amazing for disadvantaged people. Access to someone who's average is so freaking cool if you don't already have it. Used correctly it's a free math tutor, a free editor for any papers you write, a free advice nurse.
This sucks in a business setting but I could see it being incredible in a charitable setting. When businesses try to replace someone great with something average it sucks. But if you're replacing something non-existent with something average, that can be life changing.
I'm an AI skeptic and I can empathize with his AI enthusiasm given the problems he's trying to address (or at least professes to be trying to address).
> But I think current AI (not where it might be in a few months or years) is absolutely amazing for disadvantaged people. Access to someone who's average is so freaking cool if you don't already have it. Used correctly it's a free math tutor, a free editor for any papers you write, a free advice nurse.
Interestingly, I think AI, if its biggest boosters are correct, will end up being an absolute disaster for disadvantaged people.
The fact is that the vast majority of people in the current world are able to survive by selling their labor. If AI makes it so that, say, 50% of the world's population is no longer able to survive by selling their labor, that leads to massive serfdom, not some sort of Star Trek utopia.
And the thing that is shocking to me is that I haven't seen any (like, absolutely zero) credible explanation from AI boosters of how this dystopian end state is avoidable. I've heard either misdirection (e.g. "yes, AI is amazing at what it can do" — which doesn't explain how people will eat if they don't have jobs), vague hand-waviness, or "kumbaya talk" about stuff like basic income that seems to completely ignore human nature.
I would absolutely love to be convinced I'm wrong, but that would need to start with at least something approaching a rational argument as to how the benefits of AI will be more equally distributed, and I have yet to hear that.
I know a few of the leaders designing and developing Microsoft’s AI applications for the Gates Foundation.
I think you’re on the right track, and, alongside the scale of service (reaching more people and more topics with an average level of advice or recognition), there’s a second component to it: scale of analysis. The newly possible solutions that AI advances have created include more than those famous models that answer broad prompts with art, copy, or code.
They also include focused, specialized tasks that are only feasible at an impactful scale because of deep learning and advances in compute-inexpensive language understanding, computer vision, and audio analysis:
- A network of affordable, durable, solar-powered, LoRa-meshed audio sensors, analyzed by a model to diagnose changes in the biodiversity of the Amazon and other rainforests (via ambient bird and animal calls across thousands of species). Visual analysis on a cheap camera network estimates herd sizes of larger, silent animals.
- A model that analyzes satellite imagery to evaluate major shifts in the industrial use of land, including tracking the national development of solar farms to evaluate nations receiving new energy grants.
- A social analysis bot that tracks the rapid introduction of propaganda narratives or intentional agitation by foreign state actors (Russian bot farms), including building a map of associated IPs. Sadly, the social networks basically shrugged when given this data, so Microsoft gave it to law enforcement agencies instead.
These things are being done at a scale that would be incomprehensible to an organization of people.
Scale-of-analysis tasks are still, IMO, the smartest use of AI today, despite the fashionable trend of GPT and the promise of AGI. A few models to spark ideas:
- Recognition tasks with a dictionary too deep for human experts to grok when scaled up, like identifying thousands of wildlife species
- Recognition tasks on a timescale too rapid or sudden for human attention, like Amazon Prime Vision predicting a QB sack in a football game before it happens
- Recognition tasks where human vigilance or sensitivity would miss an occasional or slight occurrence, like measuring eccentricities in electrical signals, vibrations, etc. to predict the failure of industrial equipment
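To make that last category concrete, here's a minimal sketch of the idea behind vibration-based failure prediction: flag sensor readings that deviate sharply from a rolling baseline. This is a toy rolling z-score detector on synthetic data, not any real predictive-maintenance system; the function name, window size, and threshold are all illustrative assumptions.

```python
import math
import random

def rolling_anomalies(readings, window=50, threshold=4.0):
    """Flag readings that deviate sharply from the recent baseline.

    A reading is anomalous when it sits more than `threshold` standard
    deviations from the mean of the preceding `window` readings -- a
    crude stand-in for the drift/spike detection a real
    predictive-maintenance model would learn from labeled failures.
    """
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = sum(baseline) / window
        var = sum((x - mean) ** 2 for x in baseline) / window
        std = math.sqrt(var) or 1e-9  # guard against a flat baseline
        if abs(readings[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

# Synthetic vibration amplitudes: steady noise, then a simulated
# bearing fault injects a transient well outside the normal band.
random.seed(0)
signal = [1.0 + 0.05 * random.gauss(0, 1) for _ in range(200)]
signal[150] += 2.0  # simulated fault transient

print(rolling_anomalies(signal))  # indices near the injected fault
```

A real deployment would replace the hand-set threshold with a learned model, but the point stands: the machine watches every sample of every sensor forever, which is exactly the kind of vigilance humans can't sustain.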
It is good for the other use cases, but it is the worst possible source of advice on subjects where the user has no expertise, and where there are serious health or safety consequences for getting it wrong.
Call a professional for help. Are they breathing? Is their heart beating? Are they bleeding?
If you haven't called someone who can actually save the person's life, no amount of first aid will help.
Unfortunately, unless something is obviously preventing breathing, there's not a lot an untrained person can do if they aren't breathing.
No heartbeat is pretty easy: chest compressions…
Bleeding, again: pressure, and a lot of it, to try to stop the bleeding.
I would want to check what an AI's response is to some situations, but as long as it just tackles those cases, it can probably only do more good than harm.
I'd be more worried some good Samaritan would start cutting people to try to "get an airway" or some nonsense. That would significantly increase mortality rates…
My time in rescue gave me a ton of faith in good Samaritans. Trying to do something in an emergency is productive 99% of the time (IMO).
The only case I've experienced where it wasn't was when someone in our area was actively listening in on emergency channels and trying to preempt ambulances. The issue was that they had training in the basics but often went past that in the care they provided, something I believe is not covered by Good Samaritan laws.
I’m much more worried about folks like that than people who find themselves in an emergency and are trying to help.
I'd rather see a Good Samaritan being talked through CPR or whatever by a dispatcher who's trained to give that advice over the phone, rather than having a hallucinating LLM tell them to do something deadly.
I believe the situation here is more a matter of not having a dispatcher to guide them.
Say, in a rural area of Africa, a group comes across a car crash. Two people hop out and assist while a third drives off to notify someone to send emergency help.
An on device LLM might be very useful there depending on what it says…
Emergencies can freak people out, but not once in my eight years in rescue have I encountered a scenario where a random bystander attempted as drastic an intervention as a tracheotomy.
I have shown up at scenes where people have googled what to do though and, you know what, it was super helpful.
If someone is dumb enough to perform a tracheotomy because an LLM, Google, or a passerby told them to, the issue isn't any of those factors. That person is just so incredibly dumb as to be a danger to everyone around them.
I've been a firefighter for 22 years. I'm sure neither of us will ever cease to be amazed at what otherwise intelligent people will do when they're in a panic.
People also do amazingly dumb things because a piece of software with a tone of authority told them to do it, even when they're not under duress. Look at the number of people who find themselves stranded or dead because they uncritically followed the directions of a navigation app, and who weren't in a panic state when they did it.
> The whole point of a business is that you're paying them to do better than average.
...this is a really interesting idea, but I'm not sure if it's entirely true?
If we're talking about a business's core competency, I think the assertion makes sense. You need to be better than your competition.
But businesses also need a whole lot of people to work in human resources, file taxes, and so on. (Not to mention clean bathrooms, but that's less relevant to the generative AI discussion.) I can certainly imagine how having a world-class human resource team could provide a tire manufacturer with a competitive advantage. However, if those incredible HR employees are also more expensive, it might make more sense to hire below-average people to do your HR and invest more in tire R&D.
My sense is that the zeitgeist around AI (at least in business circles) is much more "the only way to ensure our continued survival is by embracing AI in all our core competencies" than "your tire company is going to have some adequate HR for a great price."
An example that springs to mind is the arms race between tech CEOs over who can have more of their code base written by LLMs.
It's amazing tech, and it seems like it's being marketed for all the wrong things based on some future promise of superintelligence.
I really liked the article posted on here a week or two back along the lines of "AI is a normal technology." IMO, it's the most sane narrative I've read about where this tech is at.
Right up until those below-average HR people break the law, or let managers break the law, and the company gets in trouble and no scientists or other R&D people want to work there.
I would really hope "not breaking the law" doesn't require an "above average" HR team. As long as it isn't bottom of the barrel you should be fine.
...if I were really cynical, I might say that one of the reasons you might want a "world class" HR team is to break the law, or come really close to the line, without getting caught, in a way that increases profits.