Hacker News

  uv tool install llm
  llm install llm-moonshot
  llm keys set moonshot # paste key
  llm -m moonshot/kimi-k2-thinking 'Generate an SVG of a pelican riding a bicycle'
https://tools.simonwillison.net/svg-render#%3Csvg%20width%3D...

Here's what I got using OpenRouter's moonshotai/kimi-k2-thinking instead:

https://tools.simonwillison.net/svg-render#%20%20%20%20%3Csv...





Love seeing this benchmark become more iconic with each new model release. Still in disbelief at the GPT-5 variants' performance in comparison, but it's cool to see the new open-source models get more ambitious with their attempts.

Only until they start incorporating this test into their training data.

Dataset contamination alone won't get them good-looking SVG pelicans on bicycles, though: they'll have to either cheat on this particular question specifically or train the model to make vector illustrations in general. At which point the benchmark can easily be swapped for another problem that wasn't in the data.

I like this one as an alternative, also requiring using a special representation to achieve a visual result: https://voxelbench.ai

What's more, it doesn't hinge on a single prompt.


They could have some cheap workers make about ten pelicans by hand in SVG, fuzz them to generate thousands of variations, and throw those into their training pool. No need to "get good at SVGs" at all.
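To make the suggestion concrete, here's a minimal sketch of that kind of fuzzing: take one hand-made SVG and jitter every numeric attribute slightly to mass-produce training variants. The SVG and the jitter scheme here are hypothetical illustrations, not anyone's actual pipeline.

```python
import random
import re

# One hand-made "pelican" SVG (placeholder shapes, purely illustrative).
PELICAN_SVG = (
    '<svg width="100" height="100">'
    '<circle cx="50" cy="40" r="12"/>'
    '<ellipse cx="50" cy="70" rx="20" ry="14"/>'
    '</svg>'
)

def fuzz_numbers(svg, rng, jitter=0.1):
    """Perturb every number in the SVG by up to +/-10%."""
    def perturb(match):
        value = float(match.group())
        return f"{value * (1 + rng.uniform(-jitter, jitter)):.1f}"
    return re.sub(r"\d+(\.\d+)?", perturb, svg)

rng = random.Random(0)
# Thousands of slightly different pelicans from a single original.
variants = [fuzz_numbers(PELICAN_SVG, rng) for _ in range(1000)]
```

Real contamination would presumably be less crude (varying colors, poses, and path data, not just scaling numbers), but the point stands: it's cheap.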

Why is this a benchmark though? It doesn’t correlate with intelligence

It started as a joke, but over time performance on this one weirdly appears to correlate to how good the models are generally. I'm not entirely sure why!

it has to do with world model perception. these models don't have it but some can approximate it better than others.

It's simple enough that a person can easily visualize the intended result, but weird enough that generative AI struggles with it

I'm not saying it's objective or quantitative, but I do think it's an interesting task, because it would be challenging for most humans to come up with a good design for a pelican riding a bicycle.

also: NITPICKER ALERT


I think it's cool and useful precisely because it's not trying to correlate with intelligence. It's a weird, niche kind of thing that at least intuitively feels useful for judging LLMs in particular.

I'd much prefer a test that measures my cholesterol to one that would tell me whether I'm an elf or not!


What test would be better correlated with intelligence and why?

When the machines become depressed and anxious we'll know they've achieved true intelligence. This is only partly a joke.

This already happens!

There have been many reports of CLI AI tools getting frustrated, giving up, and just deleting the whole codebase in anger.


There are many reports of CLI AI tools displaying the words humans use when they're frustrated and about to give up. That's just what they were trained on; it doesn't mean they have emotions. "Deleting the whole codebase" sounds more interesting, but I assume it's the same thing: "frustrated" words lead to frustrated actions. It doesn't mean the LLM was frustrated, just that in its training data those things happened together, so it copied them in that situation.

This is a fundamental philosophical issue with no clear resolution.

The same argument could be made about people, animals, etc...


The difference is, people and animals have bodies, nervous systems, and in general those mushy things we think are responsible for emotions.

Computers don't have any of that, and LLMs in particular don't either. They were trained to simulate human text responses; that's all. How do you get from there to emotions? Where's the connection?


Don't confuse the medium with the picture it represents.

Porn is pornographic, whether it is a photo or an oil painting.

Feelings are feelings, whether they're felt by a squishy meat brain or a perfect atom-by-atom simulation of one in a computer. Or a less-than-perfect simulation of one. Or just a vaguely similar system that is largely indistinguishable from it, as observed from the outside.

Individual nerve cells don't have emotions! Ten wired together don't either. Or one hundred, or a thousand... by extension you don't have any feelings either.

See also: https://www.mit.edu/people/dpolicar/writing/prose/text/think...


Do you think a simulation of a weather forecast is the same as the real weather?

(And science fiction .. is not necessarily science)


> Do you think a simulation of a weather forecast is the same as the real weather?

If sufficiently accurate... then yes. It is weather.

We are mere information, encoded in the ripples of the fabric of the universe, nothing more.


This only seems to be an issue for wishy-washy types who insist GPT is alive.

A mathematical exam problem not in the training set because mathematical and logical reasoning are usually what people mean by intelligence.

I don’t think Einstein or von Neumann could do this SVG problem, does that mean they’re dumb?


I actually prefer ASCII-art diagrams as a benchmark for visual thinking, since they require two stages, like SVG, and can also test imaginative repurposing of text elements.

I suspect that the OpenRouter result originates from a quantized hosting provider. The difference compared to the direct API call from Moonshot is striking, almost like night and day. It creates a peculiar user and developer experience since OpenRouter enforces quantization restrictions only at the API level, rather than at the account settings level.

OpenRouter are proxying directly through to Moonshot - they're currently the only provider listed on https://openrouter.ai/moonshotai/kimi-k2-thinking/providers

That does include the Turbo endpoint, moonshotai/turbo. Add this option to your command to only use the full-fat model:

-o provider '{ "only": ["moonshotai"] }'


Where do you run a trillion-param model?

If you want to do it at home, ik_llama.cpp has some performance optimizations that make it semi-practical to run a model of this size on a server with lots of memory bandwidth and a GPU or two for offload. You can get 6-10 tok/s with modest workstation hardware. Thinking chews up a lot of tokens, though, so it will be a slog.

What kind of server have you used to run a trillion parameter model? I'd love to dig more into this.

Hi Simon. I have a Xeon W5-3435X with 768GB of DDR5 across 8 channels; IIRC it's running at 5800MT/s. It also has 7x A4000s, water-cooled to pack them into a desktop case. Very much a compromise build, and I wouldn't recommend Xeon Sapphire Rapids, because the memory bandwidth you get in practice is less than half of what you'd calculate from the specs. If I did it again, I'd build an EPYC machine with 12 channels of DDR5 and put in a single RTX 6000 Pro Blackwell. That'd be a lot easier and probably a lot faster.

There's a really good thread on level1techs about running DeepSeek at home, and everything there more-or-less applies to Kimi K2.

https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-hom...


If I had to guess, I'd say it's one with lots of memory bandwidth and a GPU or two for offload. (sorry, I had to, happy Friday Jr.)

You let the people at openrouter worry about that for you

Which in turn lets the people at Moonshot AI worry about that for them, the only provider for this model as of now.

Good people over there

Does the run pin the temperature to 0 for consistency?

I've been under the impression most inference engines aren't fully deterministic with a temperature of 0 as some of the initial seed values can vary.

Note: I haven't tested this, nor have I played with seed values. IIRC the inference engines I used support an explicit seed value that is randomized by default.
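The distinction between temperature and seed can be shown with a toy sampler (a sketch, not any engine's actual implementation; real engines can additionally be nondeterministic from batching and floating-point order even at temperature 0):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index: greedy at temperature 0, else softmax-sample."""
    if temperature == 0:
        # Greedy decoding: always the highest logit, no randomness at all.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.5]

# Temperature 0 is deterministic regardless of seed...
assert all(sample_token(logits, 0, random.Random(s)) == 0 for s in range(100))

# ...but temperature > 0 only repeats when the seed is pinned too.
run_a = [sample_token(logits, 1.0, random.Random(42)) for _ in range(5)]
run_b = [sample_token(logits, 1.0, random.Random(42)) for _ in range(5)]
assert run_a == run_b
```

In this idealized picture, temperature 0 needs no seed at all; the seed only matters once temperature is nonzero.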


No, I've never tried that.


