
We've entered the voodoo witch doctor phase of LLM usage: "Enter thee this arcane incantation along with thy question into the idol and, lo, the ineffable machine spirits will be appeased and deign to grant thee the information thou hast asked for."


This has been part of LLM usage since day 1, and I say that as an ardent fan of the tech. Let's not forget how much ink has been spilled over the fact that "think through this step by step" measurably improves performance.


> "think through this step by step"

Has always made sense to me, if you think about how these models were trained.

In my experience, great Stack Overflow answers and detailed blog posts often contain "think through this step by step" or something very similar.

Intuitively, adding that phrase should help the model narrow down the response content and formatting.
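
It's easy to see the effect for yourself. A minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name is illustrative:

    # Compare the same question with and without the "step by step" phrase.
    from openai import OpenAI

    client = OpenAI()

    question = ("A bat and a ball cost $1.10 in total. The bat costs "
                "$1.00 more than the ball. How much does the ball cost?")

    for suffix in ("", "\n\nThink through this step by step."):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative choice
            messages=[{"role": "user", "content": question + suffix}],
        )
        print("---", suffix.strip() or "(plain prompt)")
        print(resp.choices[0].message.content)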


Then why don't they hard-code the interface to the model to pretend you included that in the prompt?


...they do. It's called "o1".
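
For an ordinary chat model you can also bake it in yourself. A sketch of that kind of wrapper, again assuming the OpenAI Python SDK with an illustrative model name (o1 goes further: it's trained to produce its own reasoning tokens rather than relying on an injected instruction):

    # Every user prompt silently gets the incantation via a system message.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any chat model works
            messages=[
                {"role": "system",
                 "content": "Think through every problem step by step "
                            "before giving your final answer."},
                {"role": "user", "content": prompt},
            ],
        )
        return resp.choices[0].message.content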


It is because the chance of the right answer goes down exponentially as the complexity of what is being asked goes up: if each reasoning step the model has to chain succeeds with probability p, a task requiring n such steps succeeds with probability roughly p^n.

Asking a simpler question is not voodoo.

On the other hand, I think many people are trying various rain dances and believing it was a specific dance that was the cause when it happened to rain.


We use the approach of feeding mistakes from LLM-generated code back to the LLM until it produces working code [1].

I might have to try some more aggressive prompting :).

[1] https://withlattice.com
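
In case it's useful: the core loop is roughly the following. This is a simplified sketch, not our actual pipeline; the helper names and model choice are made up for illustration, and real code would also strip markdown fences from the model's reply.

    import subprocess, sys, tempfile
    from openai import OpenAI

    client = OpenAI()

    def generate(messages):
        resp = client.chat.completions.create(model="gpt-4o-mini",
                                              messages=messages)
        return resp.choices[0].message.content

    def run(code):
        # Execute the candidate script and capture its stderr.
        with tempfile.NamedTemporaryFile("w", suffix=".py",
                                         delete=False) as f:
            f.write(code)
            path = f.name
        return subprocess.run([sys.executable, path],
                              capture_output=True, text=True)

    def code_until_it_works(task, max_attempts=5):
        messages = [{"role": "user",
                     "content": f"Write a Python script: {task}. "
                                "Reply with code only."}]
        for _ in range(max_attempts):
            code = generate(messages)
            result = run(code)
            if result.returncode == 0:
                return code
            # Feed the error back and ask for a corrected version.
            messages += [
                {"role": "assistant", "content": code},
                {"role": "user",
                 "content": f"That failed with:\n{result.stderr}\n"
                            "Please fix it."},
            ]
        raise RuntimeError("no working code after retries")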


The Tech-Priests of Mars are calling


Praise the Omnissiah



