At Pulse, we put the models to the test with complex financial statements and nested tables – the results were underwhelming to say the least, and suffered from many of the same issues we see when simply dumping documents into GPT or Claude.
It seems like you missed the point. Andrew Ng is not there to give you production-grade models. He exists to deliver a proof of concept that needs refinement.
>Here's an idea that could use some polish, but I think as an esteemed AI researcher that it could improve your models. -- Andrew Ng
>OH MY GOSH! IT ISN'T PRODUCTION READY OUT OF THE BOX, LOOK AT HOW DUMB THIS STUFFED SHIRT HAPPENS TO BE!!! -- You
Nobody appreciates a grandstander. You're really treading on thin ice by attacking someone who has given so much to the AI community and asked for so little in return. Andrew Ng clearly does this because he enjoys it. You are here to self-promote, and it reflects badly on you.
Except it's a video introducing the concept and trying to create buzz around it and inviting people to try it (for free), and providing a link to the page where you can do so. (at least as far as I could tell).
So yes, but not really. This is more like when Google released the initial Android and offered it to people to try, to get feedback. Yes, it's not offered as an obfuscated academic paper in a paywalled journal, but implying the video is promoting a half-baked product as production-ready for quick profit, just because it's hosted on a proper landing page, is a bit of an extreme take, I think.
we respect andrew a lot, as we mentioned in our blog! he's an absolute legend in the field: founded google brain and coursera, worked heavily on baidu ai. this is more to inform everyone not to blindly trust new document extraction tools without really giving them challenges!
> That's the standard tier of competence you expect from Ng. Academia is always close but no cigar.
Academics do research. You should not expect an academic paper to be turned into a business or production overnight.
The first neural network, the Mark 1 Perceptron, was invented during WWII for OCR. It took 70 years of non-commercial research to bring us to the very useful multimodal LLMs of today.
> The first neural network, the Mark 1 Perceptron, was invented during WWII for OCR.
You're about a decade off: the Mark 1 Perceptron was created in 1958 [0]. The original paper that introduced the idea (A Logical Calculus of the Ideas Immanent in Nervous Activity), however, was written during WWII (1943) [1].
It's more they had to wait for processing power to catch up.
One of my slightly older friends got an AI doctorate in the 00s, and would always lament that a business would never bother reading his thesis; they'd just end up recreating what he did in a few weeks themselves.
It's easy to forget now that in the 90s/00s/10s AI research was mainly viewed as a waste of time. The recurring joke was that general AI was just 20 years away, and had been for the last few decades.
And on the other side, there's companies like Theranos, where you think the world will never be the same again, until you actually try the thing they're selling. Full cigar promised, but not even close.
Not saying this is the case with the OP company, but if you're ready to make sweeping generalizations about cigars like that on the basis of a commercial blog selling a product, you might as well invoke some healthy skepticism and consider how the generalization works on both sides of the spectrum.
The whole corporation-glorifying, academia-bashing gaslighting narrative is getting very tiring lately.
https://x.com/AndrewYNg/status/1895183929977843970