Basically, if it means companies can introduce automation without changing anything about the tooling/workflow/programs they already use, it's going to be MASSIVE. Just an install and a prompt and you've already automated a lengthy manual process - awesome.
Companies are going to install an AI inside their own proprietary systems full of proprietary and confidential data and PII about their customers and prospects and whatnot, and let it run around and click on random buttons and submit random forms?
Really??!? What could possibly go wrong.
I'm currently trying to do a large OCR project using the Google Vision API, and then Gemini 1.5 Pro 002 to parse and reconstruct the results (taking advantage, one hopes, of its big context window). As I'm not familiar with the Google Vision API, I asked Gemini to guide me in setting it up.
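For reference, the Vision half of that pipeline is roughly this - a minimal sketch, assuming credentials are already configured via GOOGLE_APPLICATION_CREDENTIALS; the file name is made up:

    from google.cloud import vision

    # Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service account key.
    client = vision.ImageAnnotatorClient()

    # Hypothetical input file; substitute your own scan.
    with open("scan_page_001.png", "rb") as f:
        image = vision.Image(content=f.read())

    # document_text_detection is the Vision feature intended for dense text / OCR.
    response = client.document_text_detection(image=image)
    print(response.full_text_annotation.text)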
Gemini is the latest Google model; Vision, as the name implies, is also from Google. Yet Gemini makes several egregious mistakes about Vision, gets names of fields or options wrong, etc.
Gemini 1.5 "Pro" also suggested that concatenating two JSON strings produces a valid JSON string; when told that's unlikely, it was very sorry and made lots of apologies, but it still made the mistake in the first place.
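For anyone who hasn't hit this: concatenation gives you two top-level values back to back, which a strict parser rejects. A quick Python illustration (the toy payloads are mine):

    import json

    a = json.dumps({"name": "Alice"})
    b = json.dumps({"age": 30})

    # a + b is '{"name": "Alice"}{"age": 30}' -- two top-level values, not valid JSON.
    try:
        json.loads(a + b)
    except json.JSONDecodeError as e:
        print("invalid:", e)  # fails with an "Extra data" error

    # Parse, merge, and re-serialize instead.
    merged = {**json.loads(a), **json.loads(b)}
    print(json.dumps(merged))  # {"name": "Alice", "age": 30}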
LLMs can be useful when used with caution; letting one loose in an enterprise environment doesn't feel safe, or sane.
LLMs can't reason, or can't reason logically to be precise; what they are really good at is recalling.
So if you want accurate results when writing code, you need to put all the docs into the input and THEN ask your question. Download all the docs on Vision, put them in the Gemini prompt, and then ask your question or request code on how to use Vision, and you'll get much closer to the truth.
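Something like this sketch, assuming the google-generativeai Python client; the docs file is whatever dump you downloaded, and the model name is from the grandparent comment:

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")

    # Hypothetical local dump of the Vision API docs.
    with open("vision_api_docs.txt") as f:
        docs = f.read()

    model = genai.GenerativeModel("gemini-1.5-pro-002")
    prompt = (
        "Here is the Google Cloud Vision API documentation:\n\n"
        + docs
        + "\n\nUsing ONLY the documentation above, show me Python code that "
        "runs DOCUMENT_TEXT_DETECTION on a local image."
    )
    response = model.generate_content(prompt)
    print(response.text)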
I have tried many others for many other things (via OpenRouter) but I have never compared LLMs on the exact same task; it's confusing enough with one engine... ;-)
Sonnet 3.5 for coding is fine but makes "basic" mistakes all the time. Using LLMs is at times like dealing with a senior expert suffering from dementia: it has arcane knowledge of a lot of things but suddenly misses the obvious that would not escape an intern. It's weird, really.
I've been peddling my vision of "AI automation" for the last several months to acquaintances of mine in various professional fields, in some cases even building prototypes and doing real-user testing. Invariably, none of it has really stuck.
This is not a technical problem that requires a technical solution. The problem is that it requires human behavior change.
In the context of AI automation, the promise is huge gains, but when you try to convince users / buyers, as far as they're concerned there is nothing wrong with their current solutions. I.e., there is no problem to solve. So essentially: "why are you bothering me with this AI nonsense?"
Honestly, human behavior change might be the only real blocker to a world where AI automates most of the boring busy work currently done by people.
This approach essentially sidesteps the need to effect a behavior change, at least in the short term, while AI proves and solidifies its value in the real world.
There's a huge huge gap between "coaxing what you want out of it" and "trusting it to perform flawlessly". Everybody on the planet would use #2, but #1 is just for enthusiasts.
AI is squarely #1. You can't trust it with your credit card to order groceries, or to budget and plan and book your vacation. People aren't picking up on AI because it isn't good enough yet to trust - you still have the burden of responsibility for the task.
Siri, Alexa and Amazon Dash illustrate this well. I remember everyone's excitement about, and the massive investment in, these, and we all know how that turned out. I'm not sure how many times we'll need to relearn that unless an automation works >99% of the time AND fails predictably, people don't use it for anything meaningful.
I think there is a large pool of near minimum-wage white collar workers who wouldn't care about that difference when it comes to executing on their jobs. These are the folks who are already using VBScript, AutoHotKey, Excel wizardry, etc. to automate large parts of their job regardless of any risks and will continue to use these new tools for similar purposes.
Of course, but they'll go bankrupt if they don't adapt, just like mom-and-pop corner stores disappeared, and just as with every other large-scale automation: the loom, cars, automated checkout in supermarkets, etc. There will be resistance, but the market will play it out, the same way taxi companies started making apps after Uber got successful and local restaurants reluctantly made websites and added themselves to Google Maps.
Nobody likes to change a system where they've already carved out their own comfortable little spot, figured it all out, and just want to soak in the lukewarm water until retirement. Fully understandable. But at least in the private sector this will not save them.