
Gemini flash models have the least hype, but in my experience in production have the best bang for the buck and multimodal tooling.

Google is silently winning the AI race.






100% agree. I had Gemini flash 2 chew through thousands of points of nasty unstructured client data and it did a 'better than human intern' level conversion into clean structured output for about $30 of API usage. I am sold. 2.5 pro experimental is a different league though for coding. I'm leveraging it for massive refactoring now and it is almost magical.

> thousands of points of nasty unstructured client data

What I always wonder in these kinds of cases is: What makes you confident the AI actually did a good job since presumably you haven't looked at the thousands of client data yourself?

For all you know it made up 50% of the result.


This was solved a hundred years ago.

It's the same problem factories have: they produce a lot of parts, and it's very expensive to put a full operator or more on a machine to do 100% part inspection. And the machines aren't perfect, so we can't just trust that they work.

So starting in the 1920s, Walter Shewhart and W. Edwards Deming came up with Statistical Process Control. We accept the quality of the product based on the variance we see in samples, and how they measure against upper and lower control limits.

Based on that, we can estimate a "good parts rate" (which later got used in ideas like Six Sigma to describe the probability of bad parts being passed).

The software industry was built on determinism, but now software engineers will need to learn the statistical methods created by engineers who have forever lived in the stochastic world of making physical products.
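A minimal sketch of the idea in Python, with made-up numbers: score a random audit sample of the outputs, estimate the "good parts rate", and watch later samples against control limits derived from it.

    import statistics

    # Hypothetical: 1 = output verified correct, 0 = wrong, from a random audit sample
    sample = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1]

    p = statistics.mean(sample)                 # estimated "good parts rate"
    sigma = (p * (1 - p) / len(sample)) ** 0.5  # standard error for a proportion (p-chart)

    ucl = min(1.0, p + 3 * sigma)               # Shewhart-style 3-sigma control limits
    lcl = max(0.0, p - 3 * sigma)

    print(f"good parts rate ~ {p:.0%}, control limits [{lcl:.2f}, {ucl:.2f}]")
    # If a future audit sample's rate falls outside [lcl, ucl], the process
    # (prompt + model + data) has drifted and the batch gets human review.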


I hope you're being sarcastic. SPC is necessary because mechanical parts have physical tolerances and manufacturing processes are subject to unavoidable statistical variation. It is beyond idiotic to be handed a machine that can execute deterministic, repeatable processes and then throw all that into the gutter for mere convenience, justified simply because "the time is ripe for SWEs to learn statistics".

We don't know how to implement a "deterministic, repeatable process" that can look at a bug in a repo and implement a fix end-to-end.

that is not what OP was talking about though.

LLMs are literally stochastic, so the point is the same no matter what the example application is.

Humans are literally stochastic, so the point is the same no matter what the example application is.

The deterministic, repeatable process of human (and now machine) judgement and semantic processing?

In my case I had hundreds of invoices in a not-very-consistent PDF format which I had contemporaneously tracked in spreadsheets. After data extraction (pdftotext + OpenAI API), I cross-checked against the spreadsheets, and for any discrepancies I reviewed the original PDFs and old bank statements.

The main issue I had was it was surprisingly hard to get the model to consistently strip commas from dollar values, which broke the csv output I asked for. I gave up on prompt engineering it to perfection, and just looped around it with a regex check.
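Something along these lines (a sketch, not the exact code; the sample row layout is made up):

    import re

    # Model was asked for CSV rows like: date,payee,amount
    # but sometimes emits amounts like 1,234.56, which breaks the column count.
    thousands_sep = re.compile(r'(\d),(\d{3})')

    def fix_row(row: str) -> str:
        # Strip thousands separators inside numbers: "1,234,567.00" -> "1234567.00"
        while thousands_sep.search(row):
            row = thousands_sep.sub(r'\1\2', row)
        return row

    print(fix_row("2024-03-01,Acme Corp,1,234.56"))  # 2024-03-01,Acme Corp,1234.56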

Otherwise, accuracy was extremely good and it surfaced a few errors in my spreadsheets over the years.


I hope there is a future where CSV commas don't screw up data. I know it will never happen, but it's a nightmare.

Everyone has a story of a csv formatting nightmare


For what it's worth, I did check over many hundreds of them. Formatted things for side by side comparison and ordered by some heuristics of data nastiness.

It wasn't a one shot deal at all. I found the ambiguous modalities in the data and hand corrected examples to include in the prompt. After about 10 corrections and some exposition about the cases it seemed to misunderstand, it got really good. Edit: not too different from a feedback loop with an intern ;)


Though the same logic can be applied everywhere, right? Even if it's done by human interns, you need to audit everything to be 100% confident, or just have some trust in them.

Not the same logic because interns can make meaning out of the data - that’s built-in error correction.

They also remember what they did - if you spot one misunderstanding, there’s a chance they’ll be able to check all similar scenarios.

Comparing the mechanics of an LLM to human intelligence shows deep misunderstanding of one, the other, or both - if done in good faith of course.


Not sure why you're trying to conflate intellectual capability problems into this and complicate the argument. The problem layout is the same: you delegate the work to someone, so you cannot check all the details yourself. This creates a fundamental tension between trust and confidence. The parameters might differ with intellectual capability, but whomever you delegate to, you cannot evade this trade-off.

BTW, not sure if you have experience delegating work to human interns or new grads and being rewarded with disastrous results? I've done that multiple times and don't trust anyone too much. This is why we typically develop review processes, guardrails, etc.


You can use AI to verify its own work. Last time I split a C++ header file into header + implementation file. I noticed some code got rewritten in a wrong manner, so I asked it to compare the new implementation file against the original header file, but to do so one method at a time. For each method, say whether the code is exactly the same and has the same behavior, ignoring superficial syntax changes and renames. Took me a few times to get the prompt right, though.

Many types of data have very easily checkable aggregates. Think accounting books.

It also depends on what you are using the data for; if it's for decisions that don't need precise data, then it's fine. Especially if you're looking for "vibe"-based decisions first, before dedicating time to "actually" process the data for confirmation.

$30 to get a view into data that would otherwise take at least x hours of someone's time is actually super cheap, especially if the decision coming out of that result is whether or not to invest the x hours to confirm it.


You take a sample and check

In my professional opinion they can extract data at 85-95% accuracy.

> I'm leveraging it for massive refactoring now and it is almost magical.

Can you share more about your strategy for "massive refactoring" with Gemini?

Like the steps in general for processing your codebase, and even your main goals for the refactoring.


Isn't it better to get gemini to create a tool to format the data? Or was it in such a state that that would have been impossible?

what tool are you using 2.5-pro-exp through? Cline? Or the browser directly?

For 2.5 pro exp I've been attaching files into AIStudio in the browser in some cases. In others, I have been using vscode's Gemini Code Assist which I believe recently started using 2.5 Pro. Though at one point I noticed that it was acting noticeably dumber, and over in the corner, sure enough it warned that it had reverted to 2.0 due to heavy traffic.

For the bulk data processing I just used the python API and Jupyter notebooks to build things out, since it was a one-time effort.


Copilot experimental (needs VSCode Insiders) has it. I've thought about trying aider --watch-files though; it also works with multiple files.

Absolutely agree. Granted, it is task dependent. But when it comes to classification and attribute extraction, I've been using 2.0 Flash heavily across massive datasets. It would not even be viable cost-wise with other models.

How "huge" are these datasets? Did you build your own tooling to accomplish this?

It's cheap but also lazy. It sometimes generates empty strings or empty arrays for tool calls, and then I just re-route the request to a stronger model for the tool call.
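A sketch of that kind of routing; cheap_llm and strong_llm here are placeholders for whatever client wrappers you use, not a real SDK call:

    def get_tool_call(messages, tools, cheap_llm, strong_llm):
        # Try the cheap model first; escalate only on a degenerate tool call.
        result = cheap_llm(messages, tools)
        args = result.get("arguments") or {}

        # "Lazy" failure mode: required arguments come back as "" or []
        if not args or any(v in ("", [], None) for v in args.values()):
            result = strong_llm(messages, tools)   # re-route to the stronger model
        return result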

I've spent a lot of time on prompts and tool-calls to get Flash models to reason and execute well. When I give the same context to stronger models like 4o or Gemini 2.5 Pro, they get to the same answers in fewer steps but at higher token cost.

Which is to be expected: more guardrails for smaller, weaker models. But then it's a tradeoff; no easy way to pick which models to use.

Instead of SQL optimization, it's now model optimization.


i have a high volume task i wrote an eval for and was pleasantly surprised at 2.0 flash's cost to value ratio especially compared to gpt4.1-mini/nano

accuracy | input price | output price (per 1M tokens)

Gemini Flash 2.0 Lite: 67% | $0.075 | $0.30

Gemini Flash 2.0: 93% | $0.10 | $0.40

GPT-4.1-mini: 93% | $0.40 | $1.60

GPT-4.1-nano: 43% | $0.10 | $0.40

excited to try out 2.5 flash


Can I ask a serious question: what task are you doing where it's OK to get a 7% error rate? I can't get my head around how this can be used.

There are tons of AI/ML use-cases where 7% is acceptable.

Historically speaking, if you had a 15% word error rate in speech recognition, it would generally be considered useful. 7% would be performing well, and <5% would be near the top of the market.

Typically, your error rate just needs to be below the usefulness threshold and in many cases the cost of errors is pretty small.


In my case, I have workloads like this where it’s possible to verify the correctness of the result after inference, so any success rate is better than 0 as it’s possible to identify the “good ones”.

Aren’t you basically just saying you are able to measure the error rate? I mean, that's good, but it's already a given in this scenario where he's reporting the 7% error rate.

No. If you're able to verify correctness of individual items of work, you can accept the 93% of verified items as-is and send the remaining 7% to some more expensive slow path.

That's very different from just knowing the aggregate error rate.
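A sketch of that split; verify and slow_path stand in for whatever domain-specific check and expensive fallback you have:

    def process(items, fast_model, verify, slow_path):
        accepted, escalated = [], []
        for item in items:
            result = fast_model(item)
            if verify(item, result):               # cheap, item-level correctness check
                accepted.append(result)            # the ~93% that pass cost almost nothing
            else:
                escalated.append(slow_path(item))  # the ~7% go to the expensive path
        return accepted + escalated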


No, it's anything that's harder to write than verify. A simple example is a logic puzzle; it's hard to come up with a solution, but once you have a possible answer it's really easy to check it. In fact, it can be easier to vet multiple answers and tell the machine to try again than solve it once manually.

Low-stakes text classification, but it's something that needs to be done and couldn't be done in reasonable time frames or at reasonable price points by humans.

I expect some manual correction after the work is done. I actually mentally counted all the times I pressed backspace while writing this paragraph, and it comes down to 45. I'm not counting the next paragraph or changing the number.

Humans make a ton of errors as well. I didn't even notice how many I was making here until I started counting. AI is super useful for just getting a first draft out, not for the final work.


You could be OCRing a page that includes a summation line, then add up all the numbers and check against the sum.
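A tiny sketch of that check, assuming the extraction step hands you the line items and the printed total as strings:

    from decimal import Decimal

    def totals_match(line_items, stated_total):
        # Accept the extraction only if the numbers reconcile with the printed total
        summed = sum(Decimal(x.replace(",", "")) for x in line_items)
        return summed == Decimal(stated_total.replace(",", ""))

    print(totals_match(["1,200.00", "350.50", "49.50"], "1,600.00"))  # True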

[flagged]


Yeah, general propaganda and psyops are actually more effective around 12% - 15%, we find it is more accurate to the user base, thus is questioned less for standing out more /s

I know it's a single data point, but yesterday I showed it a diagram of my fairly complex micropython program (including RP2-specific features, DMA and PIO), and it was able to describe in detail not just the structure of the program, but also exactly what it does and how it does it. This is before seeing a single line of code, just going by boxes and arrows.

The other AIs I have shown the same diagram to, have all struggled to make sense of it.


>”Google is silently winning the AI race.”

It’s not surprising. What was surprising honestly was how they were caught off guard by OpenAI. It feels like in 2022 just about all the big players had a GPT-3 level system in the works internally, but SamA and co. knew they had a winning hand at the time, and just showed their cards first.


True and their first mover advantage still works pretty well. Despite "ChatGPT" being a really uncool name in terms of marketing. People remember it because they were the first to wow them.

How is ChatGPT bad in terms of marketing? It's recognizable and rolls off the tongue in many many many languages.

Gemini is what sucks from a marketing perspective. Generic-ass name.


Generative Pre-trained Transformer is a horrible term to have an acronym for.

Do you think the mass market thinks GPT is an acronym? It's just a name. Currently synonymous with AI.

Ask anyone outside the tech bubble about "Gemini" though. You'll get astrology.


True I guess they treat it just like SMS.

I still think they'd have taken off more if they'd given it a catchy name from the start and made the interface a bit more consumer friendly.


It feels more authentically engineer-coded.

> Google is silently winning the AI race

Yep, I agree! This convinced me: https://news.ycombinator.com/item?id=43661235


Absolutely. So many use cases for it, and it's so cheap/fast/reliable

And stellar OCR performance. Flash 2.0 is cheaper and more accurate than AWS Textract, Google Document AI, etc.

Not only in benchmarks[0], but in my own production usage.

[0] https://getomni.ai/ocr-benchmark


I want to use these almost too cheap to meter models like Flash more, what are some interesting use cases for those?

Google has been winning the AI race ever since DeepMind was properly put to use developing their AI models, instead of the team that built Bard (the Google AI team).

I have to say, I never doubted it would happen. They've been at the forefront of AI and ML for well over a decade. Their scientists were the authors of the "Attention is all you need" paper, among thousands of others. A Google Scholar search produces endless results. There just seemed to be a disconnect between the research and product areas of the company. I think they've got that worked out now.

They're getting their ass kicked in court though, which might be making them much less aggressive than they would be otherwise, or at least quieter about it.


I remember everyone saying it's a two-horse race between Google and OpenAI, then DeepSeek happened.

Never count out the possibility of a dark horse competitor ripping the sod right out from under them.


How is DeepSeek doing though? It seemed like they probably just ingested ChatGPT. https://www.forbes.com/sites/torconstantino/2025/03/03/deeps...

Still impressive but would really put a cap on expectations for them.


Everybody else also trains on ChatGPT data; have you never heard of public ChatGPT conversation data sets? Yes, they trained on ChatGPT data. No, it's not "just".

They supposedly have a new R2 model coming within a month.

The API is free, and it's great for everyday tasks. So yes there is no better bang for the buck.

Wait, the API is free? I thought you had to use their web interface for it to be free. How do you use the API for free?

You can get an API key and they don't bill you. Free tier rate limits for some models (even decent ones like Gemini 2.0 Flash) are quite high.

https://ai.google.dev/gemini-api/docs/pricing

https://ai.google.dev/gemini-api/docs/rate-limits#free-tier
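A minimal example with the google-generativeai Python package; the key comes from AI Studio with no billing set up (model name and prompt are just examples):

    import google.generativeai as genai

    genai.configure(api_key="YOUR_FREE_AI_STUDIO_KEY")   # no billing account attached
    model = genai.GenerativeModel("gemini-2.0-flash")

    resp = model.generate_content("Summarize this in one sentence: ...")
    print(resp.text)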


The rate limits I've encountered with free API keys have been way lower than the limits advertised.

I agree. I found it unusable for anything but casual usage due to the rate limiting. I wonder if I am just missing something?

I think it's the small TPM limits. I'll be way under the 10-30 requests per minute while using Cline, but it appears that the input tokens count towards the rate limit so I'll find myself limited to one message a minute if I let the conversation go on for too long, ironically due to Gemini's long context window. AFAIK Cline doesn't currently offer an option to limit the context explosion to lower than model capacity.

I'm pretty sure that's a Google Maps level of free, where once they're in control they will massively bill for it.

There is no reason to expect the other entrants in the market to drop out and give them monopoly power. The paid tier is also among the cheapest. People say it’s because they built their own their inference hardware and are genuinely able to serve it cheaper.

Create an API key and don't set up billing. Pretty low rate limits, and they use your data.

I use Gemini 2.5 pro experimental via openrouter in my openwebui for free. Was using sonnet 3.7 but I don't notice much difference so just default to the free thing now.

using aistudio.google.com

Flash models are really good even for an end user because of how fast they are and how well they perform.

Shhhh. You're going to give away the secret weapon!

> Google is silently winning the AI race.

It’s not clear to me what either the “race” or “winning” is.

I use ChatGPT for 99% of my personal and professional use. I’ve just gotten used to the interface and quirks. It’s a good consumer product that I like to pay $20/month for and use. My work doesn’t require much in the way of monthly tokens but I just pay for the OpenAI API and use that.

Is that winning? Becoming the de facto “AI” tool for consumers?

Or is the race to become what’s used by developers inside of apps and software?

The race isn’t to have the best model (I don’t think) because it seems like the 3rd best model is very very good for many people’s uses.


> Google is silently winning the AI race.

That is what we keep hearing here... I cancelled my account on the last Gemini, and can't help noticing the new one they are offering for free...


Sorry I was talking of B2B APIs for my YC startup. Gemini is still far behind for consumers indeed.

I use Gemini almost exclusively as a normal user. What am I missing out on that they are far behind on?

It seems shockingly good and I've watched it get much better up to 2.5 Pro.


Mostly brand recognition and the earlier Geminis had more refusals.

As a consumer, I also really miss the Advanced voice mode of ChatGPT, which is the most transformative tech in my daily life. It's the only frontier model with true audio-to-audio.


> and the earlier Geminis had more refusals.

It's more so that almost every company is running a classifier on their web chat's output.

It isn't actually the model refusing; rather, if the classifier hits a threshold, it'll swap the model's output with "Sorry, let's talk about something else."

This is most apparent with DeepSeek. If you use their web chat with V3 and then jailbreak it, you'll get uncensored output, but it is then swapped with "Let's talk about something else" halfway through. And if you ask the model, it has no idea its previous output got swapped, and you can even ask it to build on its previous answer. But if you use the API, you can push it pretty far with a simple jailbreak.

These classifiers are virtually always run on a separate track, meaning you cannot jailbreak them.
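Conceptually it's just a separate moderation pass wrapped around the model; a sketch (the classifier and threshold are stand-ins):

    CANNED = "Sorry, let's talk about something else."

    def serve(prompt, model, classifier, threshold=0.8):
        reply = model(prompt)
        # The classifier runs outside the model, on the finished (or streaming) text.
        # The model never sees that its reply was replaced, which is why it will
        # happily "continue" an answer the user never actually received.
        if classifier(reply) >= threshold:
            return CANNED
        return reply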

If you use an API, you only have to deal with the inherent training data bias, neutering by tuning and neutering by pre-prompt. The last two are, depending on the model, fairly trivial to overcome.

I still think the first big AI company that has the guts to say "our LLM is like a pen and brush, what you write or draw with it is on you" and publishes a completely unneutered model will be the one to take a huge slice of marketshare. If I had to bet on anyone doing that, it would be xAI with Grok. And by not neutering it, the model will perform better in SFW tasks too.


> and the earlier Geminis had more refusals.

You can turn those off; Google lets you decide how much it censors, and you can turn it off completely.

It has separate sliders for sexually explicit, hate, dangerous and harassment. It is by far the best at this, since sometimes you want those refusals/filters.
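With the google-generativeai Python package, the same knobs look roughly like this (the threshold values shown are just illustrative choices):

    import google.generativeai as genai

    model = genai.GenerativeModel(
        "gemini-2.0-flash",
        safety_settings=[
            {"category": "HARM_CATEGORY_HARASSMENT",        "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_HATE_SPEECH",       "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_ONLY_HIGH"},
            {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        ],
    )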


Have you tried the Gemini Live audio-to-audio in the free Gemini iOS app? I find it feels far more natural than ChatGPT Advanced Voice Mode.

What do you mean miss? You don't have the budget to keep something you truly miss for $20? What am I missing here? I don't mean to criticize, I am just curious is all. I would reword but I have to go.

What is true audio-to-audio in this case?

They used to be, but not anymore, not since Gemini Pro 2.5. Their "deep research" offering is the best available on the market right now, IMO - better than both ChatGPT and Claude.

Sorry, but no. Gemini isn't the fastest horse yet. And its use within their ecosystem means it isn't geared to the masses outside of their bubble. They are not leading the race, but they are a contender.

In my experience they are as dumb as a bag of bricks. The other day I asked "can you edit a picture if I upload one"

And it replied "sure, here is a picture of a photo editing prompt:"

https://g.co/gemini/share/5e298e7d7613

It's like "baby's first AI". The only good thing about it is that it's free.


> in my experience they are as dumb as a bag of bricks

In my experience, anyone that describes LLMs using terms of actual human intelligence is bound to struggle using the tool.

Sometimes I wonder if these people enjoy feeling "smarter" when the LLM fails to give them what they want.


If those people are a subset of those who demand actual intelligence, they will very often feel frustrated.

Prompt engineering is a thing.

Learning how to "speak llm" will give you great results. There's loads of online resources that will teach you. Think of it like learning a new API.


This was using Gemini on my phone - which both Samsung and Google advertise as "just talk to it".

For now. One would hope that this is a transitory moment in LLMs and that we can just use intuition in the future.

LLM's whole thing is language. They make great translators and perform all kinds of other language tasks well, but somehow they can't interpret my English language prompts unless I go to school to learn how to speak LLM-flavored English?

WTF?


You have the right perspective. All of these people hand-waving away the core issue here don't realize their own biases. The best of these things tout as much as 97% accuracy on tasks, but if a person were randomly wrong in 3% of what they said, you'd call an ambulance, and no doctor would be able to diagnose their condition (the kinds of errors people make with brain injuries are a major diagnostic tool, and the kinds of errors are known for the major types of common injuries... Conversely, there is no way to tell within an LLM system whether any specific token is actually correct, and its incorrectness is not even categorizable.)

I like to think of my interactions with an LLM like I'm explaining a request to a junior engineer or non engineering person. You have to be more verbose to someone who has zero context in order for them to execute a task correctly. The LLM only has the context you provided so they fail hard like a junior engineer would at a complicated task with no experience.


It's a natural language processor, yes. It's not AGI. It has numerous limitations that have to be recognized and worked around to make use of it. Doesn't mean that it's not useful, though.

They are not humans - so yeah I can totally see having to "go to school" to learn how to interact with them.

It's because Google hasn't realized the value of training the model on information about its own capabilities and metadata. My biggest pet peeve about Google and the way they train these models.


