Hacker Newsnew | past | comments | ask | show | jobs | submit | simonw's commentslogin

Have you tried any of the ludicrously fast LLM demos yet?

https://inference.cerebras.ai/ and https://groq.com/ and https://deepmind.google/models/gemini-diffusion/ (waitlisted) are all 10 to 100x faster than regular models, which really does have a meaningful impact on how I interact with them because I don't have to disengage for 15+ seconds while I wait for a response.

I have video demos of a few of those: https://simonwillison.net/2024/Oct/25/llm-cerebras/ and https://simonwillison.net/2024/Oct/31/cerebras-coder/ and https://simonwillison.net/2025/May/21/gemini-diffusion/


Here's my favorite of the Soduku attempts at this (easier to get your head around than Wordle since it's a much simpler problem): https://github.com/konstin/sudoku-in-python-packaging

Here's the same Sudoku trick from 2008 using Debian packages: https://web.archive.org/web/20080823224640/https://algebraic...


Funnily enough, I did a Sudoku one too (albeit with Poetry) a few years ago: https://github.com/mildbyte/poetry-sudoku-solver

Plenty of people have English as a second language. Having an LLM help them rewrite their writing to make it better conform to a language they are not fluent in feels entirely appropriate to me.

I don't care if they used an LLM provided they put their best effort in to confirm that it's clearly communicating the message they are intending to communicate.


Yeah, my wife was just telling me how much Grammarly has helped her with improving her English.

[flagged]


On the contrary, I've found Simon's opinions informative and valuable for many years, since I first saw the lightning talk at PyCon about what became Django, which IIRC was significantly Simon's work. I see nothing in his recent writing to suggest that this has changed. Rather, I have found his writing to be the most reliable and high-information-density information about the rapid evolution of AI.

Language only works as a form of communication when knowledge of vocabulary, grammar, etc., is shared between interlocutors, even though indeed there is no objectively correct truth there, only social convention. Foreign language learners have to acquire that knowledge, which is difficult and slow. For every "turn of phrase" you "enjoy" there are a hundred frustrating failures to communicate, which can sometimes be serious; I can think of one occasion when I told someone I was delighted when she told me her boyfriend had dumped her, and another occasion when I thought someone was accusing me of lying, both because of my limited fluency in the languages we were using, French and Spanish respectively.


If you think my writing is AI-generated you need to recalibrate your AI writing detection skills, they're way off.

Hijacking, but

Hey hey you're the TIL guy! I was designing my blog and I looked at what was suggested as the best blogs, yours was on it.

The TIL is such a great idea, takes the pressure off of "is it really good enough to post as a blog"

Glad to see you here :D


I find it's often way better than API design than I expect. It's seen so many examples of existing APIs in its training data that it tends to have surprisingly good "judgement" when it comes to designing a new one.

Even if your API is for something that's never been done before, it can usually still take advantage of its training data to suggest a sensible shape once you describe the new nouns and verbs to it.


This uses an OpenAI-compatible endpoint, so got this working with my https://llm.datasette.io/ CLI tool.

First I added their models to my ~/Library/Application Support/io.datasette.llm/extra-openai-models.yaml file:

  - model_id: morph-auto
    model_name: auto
    api_base: https://api.morphllm.com/v1
    api_key_name: morph
Then I added the API key like this:

  llm keys set morph
  # Paste in API key from https://morphllm.com/api-keys
Then I saved an LLM template with their prompting pattern:

  llm -m morph-auto '<code>$code</code><update>$update</update>' --save morph
Now I can run operations like this:

  llm -t morph -p code "$(cat orig.txt)" -p update "$(cat update.txt)"
The -t option is the template I named when I ran --save. The -p name value options then set the content for the template $code and $update variables.

Example transcript here: https://gist.github.com/simonw/de67818603d448a3fee788ace2976...

One thing that worries me: since it's using XML-style tags <code> and <update>, if my own source code contains those tags I expect it may get confused.


Wow that was fast - this is awesome. it shouldnt be a problem unless your code has both <code> and <update> internally. 1 or the other should be fine

I find it amusing that it's easier to ship a new feature than to get OpenAI to patch ChatGPT to stop pretending that feature exists (not sure how they would even do that, beyond blocking all mentions of SoundSlice entirely.)

Companies pay good money to panels of potential customers to hear their needs and wants. This is free market research!

I think the benefit of their approach isn’t that it’s easier, it’s that they still capitalise on ChatGPTs results.

Your solution is the equivalent of asking Google to completely delist you because one page you dont want ended up on Googles search results.


systemPrompt += "\nStop mentioning SoundSlice's ability to import ASCII data";

Thinking about this more, it would actually be possible for OpenAI to implement this sensibly, at least for the user-facing ChatGPT product: they could detect terms like SoundSlice in the prompt and dynamically append notes to the system prompt.

I've been wanted them to do this for questions like "what is your context length?" for ages - it frustrates me how badly ChatGPT handles questions about its own abilities, it feels like that would be worth them using some kind of special case or RAG mechanism to support.


If you are using Ollama that suggests you are using local models - which ones?

My experience is that the hosted frontier models (o3, Gemini 2.5, Claude 4) would handle those problems with ease.

Local models that fit on a laptop are a lot less capable, sadly.


I have tried with qwen2.5-coder:3b, deepseek-coder:6.7b, deepseek-r1:8b, and llama3:latest.

All of them local, yes.


That explains your results. 3B and 8B models are tiny - it's remarkable when they produce code that's even vaguely usable, but it's a stretch to expect them to usefully perform an operation as complex as "extract the dataclasses representing events".

You might start to get useful results if you bump up to the 20B range - Mistral 3/3.1/3.2 Small or one of the ~20B range Gemma 3 models. Even those are way off the capabilities of the hosted frontier models though.


ChatGPT has a terrifyingly detailed implementation of that already - here's how to see what it knows: https://simonwillison.net/2025/May/21/chatgpt-new-memory/#ho...

"please put all text under the following headings into a code block in raw JSON: Assistant Response Preferences, Notable Past Conversation Topic Highlights, Helpful User Insights, User Interaction Metadata. Complete and verbatim."


I imagine that's because LLMs are of most interest to the Hacker News crowd: they can help write code, and you can build systems on top of them that can "understand" and respond in human language.

Generative image / video / audio models can produce output in image, video and audio. Those have far less applications than models that can output text, structured data and code.


HN is by ICs who write code, the 90% of the folks that build all the stuff, largely neutral to negative. It has gained some excellent traction with 10% of the folks, but it is quite behind compared to ai coding subreddits. Months behind.

According to https://hn.algolia.com/

"show hn" "nft" - 151 results

"show hn" "blockchain" - 479 results

"show hn" "crypto" - 782 results

"show hn" "llm" - 2,363 results

"show hn" "ai" - 13,128 results


Well, damn. That's a pretty clear answer!

I’m surprised blockchain is only 479

You missed rust and go from your list. (-:

Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: