Hacker News | peter_d_sherman's comments

>"If AI makes every engineer 50% more productive, you don't get 50% more output. You get 50% more pull requests. 50% more documentation. 50% more design proposals. And someone, somewhere, still has to review all of it.

When two or three early adopters start generating more PRs than before, the team absorbs it. No big deal.

When everyone does it, review becomes the constraint.

The bottleneck doesn't vanish. It moves upstream, to the parts of the job that are irreducibly human: deciding what to build, defining "done," understanding the domain, making judgment calls about risk.

I've written about this pattern before:

the work didn't disappear, it moved.

What's new here is that it moved specifically into verification - and most teams haven't consciously staffed or structured for that yet.

[...]

The question isn't "how do we produce more code?" anymore. The question is "how do we verify more code?" And I don't think most teams have a real answer to that yet."

Excellent article!

It's a great question... how do we verify AI produced code? We could use AI to do that too, but then:

Who verifies the verifier?

Related:

Quis custodiet ipsos custodes? (Alternatively known as: "Who watches the watchmen?" / "Who oversees the overseers?" / "Who manages the managers?" / "Who guards the guardians?" / "Who reviews the reviewers?", etc., etc.):

https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%...


IMHO: Of all of the videos showing Pi's relationship with the Arithmetic-Geometric Mean (AGM), this is the best one.

Related:

Gauss–Legendre algorithm:

https://en.wikipedia.org/wiki/Gauss%E2%80%93Legendre_algorit...

Arithmetic–Geometric Mean (AGM):

https://en.wikipedia.org/wiki/Arithmetic%E2%80%93geometric_m...

Chudnovsky algorithm:

https://en.wikipedia.org/wiki/Chudnovsky_algorithm

Landen's transformation:

https://en.wikipedia.org/wiki/Landen%27s_transformation

Binary splitting:

https://en.wikipedia.org/wiki/Binary_splitting
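The AGM connection can be sketched in a few lines. This is my own illustration of the Gauss–Legendre iteration (not from the video), using plain double-precision floats:

```python
# Gauss–Legendre / AGM iteration for pi: each pass roughly doubles
# the number of correct digits (quadratic convergence).
a, b, t, p = 1.0, 2 ** -0.5, 0.25, 1.0
for _ in range(4):                  # 3-4 passes exhaust double precision
    a_next = (a + b) / 2            # arithmetic mean
    b = (a * b) ** 0.5              # geometric mean
    t -= p * (a - a_next) ** 2
    p *= 2
    a = a_next
pi_approx = (a + b) ** 2 / (4 * t)  # agrees with pi to ~15 digits
```

Serious implementations swap the floats for a big-float library and raise the working precision as the iterations proceed, which is where techniques like binary splitting (and series like Chudnovsky's) come into play.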


Brilliant idea!

Heck, a brilliant potential bootstrapped-from-virtually-nothing-except-a-cell-phone business idea!


(Comedy writing mode ON):

SNL's Stefon character: "This one has it all... Waffle House, FEMA, breakfast foods, federal emergencies, waffles, emergency preparedness, eggs, teleportation, bacon, black helicopters, hash browns, angry men in combat fatigues talking to God over 2-way radios, George Carlin, grits, syrup for the grits, toast, military communications, orange juice, armageddon/end-of-the-world apocalypse themes, milk, coffee and other breakfast items... all for a very reasonable price!"

:-)

(Comedy writing mode OFF)


What a brilliant idea!

Split an "it needs to run in a datacenter because its hardware requirements are so large" AI/LLM across multiple people who each want shared access to that particular model.

Sort of like the Real Estate equivalent of subletting, or splitting a larger space into smaller spaces and subletting each one...

Or, like the Web Host equivalent of splitting a single server into multiple virtual machines for shared hosting by multiple other parties, or what-have-you...

I could definitely see marketplaces similar to this, popping up in the future!

It seems like it should make AI cheaper for everyone... that is, "democratize AI"... in a "more/better/faster/cheaper" way than AI has been democratized to date...

Anyway, it's a brilliant idea!

Wishing you a lot of luck with this endeavor!


>"ASGs are more complex and concise than ASTs

because they may contain shared subterms (also known as "common subexpressions").[1]

Abstract semantic graphs are often used as an intermediate representation by compilers to store the results of performing common subexpression elimination upon abstract syntax trees."

Related (Transpiler IR Related):

Intermediate Representations:

https://cs.lmu.edu/~ray/notes/ir/

https://en.wikipedia.org/wiki/Intermediate_representation

QBE Intermediate Language:

https://c9x.me/compile/

https://c9x.me/compile/doc/il.html

Can Large Language Models Understand Intermediate Representations in Compilers?:

https://arxiv.org/abs/2502.06854


There's an interesting set of ideas here!

If we look at the history of programming languages, we see the idea of Templating occurring over and over again, in different contexts, e.g., C's macros, C++ Templates, embedding PHP code snippets into an otherwise mostly HTML file, etc., etc.

Templating can involve aspects of meta-code (code about the code), interpretation proxying (which engine/compiler/system/parser/program/subsystem/? is responsible for interpreting a given section of text), etc., etc.
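A minimal sketch of that "interpretation proxying" idea (the `<?py ... ?>` delimiters are hypothetical, chosen to echo PHP's): text outside the markers passes through verbatim, while text inside is handed off to a different interpreter:

```python
import re

# Toy PHP-style templating: re.split with a capture group alternates
# literal text (even indices) and captured code snippets (odd indices);
# the snippets are proxied to Python's eval, everything else is verbatim.
def render(template, env):
    parts = re.split(r"<\?py (.*?) \?>", template)
    return "".join(str(eval(p, {}, env)) if i % 2 else p
                   for i, p in enumerate(parts))

html = render("<p>Hello, <?py name.upper() ?>!</p>", {"name": "world"})
# html == "<p>Hello, WORLD!</p>"
```

(Using eval here is purely for brevity; a real engine would parse and sandbox the embedded snippets.)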

Here we see this idea as another level of proxied/layered abstraction/indirection, in this case between an AI/LLM and the underlying source code...

Is this a good idea?

Will all code be written like this, using this pattern or a similar one, in the future?

I for one don't know (it's too early to tell!) but one thing is for sure, and that's that this new "layer" certainly contains an interesting set of ideas!

I will definitely be watching to see more about how this pattern plays out in future software development...



>"The quacking that catches my ear is when something develops a dependency graph: your package depends on a package that depends on a package, and now you need resolution algorithms, lockfiles, integrity verification, and some way to answer “what am I actually running and how did it get here?”

Several tools that started as plugin systems, CI runners, and chart templating tools have quietly grown transitive dependency trees. Now they walk like a package manager, quack like a package manager, and have all the problems that npm and Cargo and Bundler have spent years learning to manage, though most of them haven’t caught up on the solutions."

https://edolstra.github.io/pubs/phd-thesis.pdf
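The "what am I actually running?" question from the quote is, at minimum, a transitive-closure query over the dependency graph. A toy sketch, with an entirely made-up registry:

```python
from collections import deque

# Hypothetical registry: package -> direct dependencies.
deps = {
    "app": ["web", "orm"],
    "web": ["http"],
    "orm": ["http", "sql"],
    "http": [],
    "sql": [],
}

def transitive_deps(pkg):
    """BFS over the graph: everything pkg pulls in, directly or not."""
    seen, queue = set(), deque(deps[pkg])
    while queue:
        d = queue.popleft()
        if d not in seen:
            seen.add(d)
            queue.extend(deps[d])
    return seen

print(sorted(transitive_deps("app")))  # → ['http', 'orm', 'sql', 'web']
```

Note that "http" is reached twice but appears once; deciding which *version* of such a shared dependency wins is exactly the resolution problem that lockfiles and SAT-style solvers exist to answer.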


Very interesting!

Yes, in this day and age, I could definitely see web pages being harder for search engines (and SEO companies and other users of automated web crawling technologies (AI agents?)) to crawl than they were in the early days of the Internet, due to many possible causes -- many of which you've excellently described!

In other words, there's more to be aware of for anyone writing a search engine (or search-engine-like piece of software -- SEO, AI Agent, etc., etc.) than there was in the early days of the Internet, where everything was straight unencrypted http and most URLs were easily accessible without having to jump through additional hoops...

Which leads me to wonder... on the one hand, a website owner may not want bots and other automated software agents spidering their site (we have robots.txt for this), but on the flip side, most business owners DO want publicity and easy accessibility for sales and marketing purposes, thus, they'd never want to issue a 403 (or other error code) for any public-facing product webpage...

Thus there may be a market for testing public facing business/product websites against faulty "I can't give you that web page for whatever reason" error codes from a wide variety of clients, from a wide variety of locations around the world.

That market is related to the market for testing if a website is up and functioning properly (the "uptime market"), again, from a wide variety of locations around the world, using a wide variety of browsers...
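A bare-bones sketch of such a check (my own illustration; the URLs and User-Agent string are placeholders, and a real service would probe from many locations with many browser fingerprints):

```python
from urllib import request, error

# Probe a list of public pages and record the HTTP status each one
# actually returns -- surfacing any spurious 403s or other refusals.
def probe(urls, user_agent="Mozilla/5.0 (availability-check)"):
    results = {}
    for url in urls:
        req = request.Request(url, headers={"User-Agent": user_agent})
        try:
            with request.urlopen(req, timeout=10) as resp:
                results[url] = resp.status
        except error.HTTPError as e:
            results[url] = e.code   # e.g. an unwanted 403 on a product page
        except error.URLError:
            results[url] = None     # unreachable entirely
    return results
```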

So, a very interesting post!

Also (for future historians!) compare all of the restrictive factors which may prevent access to a public-facing web page today vs. Tim Berners-Lee's original vision for the web, which was basically to let scientists (and other academic types!) SHARE their data PUBLICLY with one another!

(Things have changed... a bit! :-) )

Anyway, a very interesting post, and a very interesting article -- for both present and future Search Engine programmers!

