versteegen's comments | Hacker News

I agree except: this is creative work. Creativity can be and is being mechanised. True originality is extremely rare. Most novelty is the repurposing of one idea or concept elsewhere in a way we all find surprising, but the choice to apply A to B could have been made for any reason, including a mechanical one: very many inventions are accidents. In-depth knowledge / conceptual understanding of something is built on abstraction, and abstractions are portable.

If you had a list of N concepts and M ways to apply them, you could try all N*M combinations and get some very interesting results. For a real example, see the amusing "40 principles of invention" from the theory of inventive problem solving (TRIZ) by Soviet inventor Genrich Altshuller. https://en.wikipedia.org/wiki/TRIZ
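
As a toy sketch of that brute-force enumeration (the concept and application lists here are invented for illustration):

    from itertools import product

    # Hypothetical lists standing in for N concepts and M application domains.
    concepts = ["segmentation", "inversion", "periodic action"]  # TRIZ-flavoured principles
    domains = ["packaging", "soil drilling", "UI design"]        # arbitrary targets

    # Enumerate all N*M pairings; most are nonsense, a few might be surprising.
    for concept, domain in product(concepts, domains):
        print(f"Apply '{concept}' to '{domain}'")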


I'm going to find out. I've been meaning for years to port the OHRRPGCE back to DOS, where it came from.

I'm very surprised to see SDL3 re-gain DOS support, since they've aggressively dropped support for almost every port/OS they had in the SDL 1.2 days.


Very cool. I'd never heard of OHRRPGCE (Official Hamster Republic Role Playing Game Construction Engine) before. I was going to say it feels like an early predecessor to something like RPG Maker, but I think RPG Maker originally came out in the early ’90s for the Japanese PC-98 computers.

From the Wikipedia entry [1] for OHRRPGCE:

> It runs at an 8-bit color depth, by default creates games that run at a 320 × 200 resolution.

It's funny, but I bet anyone else in here who also grew up with the QBASIC interpreter as a kid instantly thinks SCREEN 13 when they read something like this.

[1] - https://en.wikipedia.org/wiki/Official_Hamster_Republic_Role...


:) SCREEN 13 (VGA Mode 13h) is almost correct, but it actually originally used a 320x200 VGA Mode X assembly graphics library. I believe it used 320x200 instead of 320x240 to stay compatible with earlier pure-QB code for SCREEN 13 reused in the engine. (Mode X isn't a single mode; it has some adjustable parameters.)

Which model's best depends on how you use it. There's a huge difference in behaviour between Claude, GPT, and other models, which makes some of them poor substitutes for others in certain use cases. I think the GPT models are a bad substitute for the Claude ones for tasks such as pair-programming (where you want to see the CoT and have immediate responses) and writing code that you actually want to read and edit yourself, as opposed to just letting GPT run in the background to produce working code that you won't inspect. Yes, GPT-5.4 is cheap and brilliant but very black-box and often very slow IME. GPT-5.4 still seems to behave the same as 5.1, which includes problems like: doesn't show useful thoughts, can think for half an hour, says "Preparing the patch now" then thinks for another 20 min, gives no impression of what it's doing, reads microscopic parts of source files and misses context, will do anything to pass the tests including patching libraries...

Interesting (would like to hear more), but solving a Rubik's Cube would appear to be a poor way to measure spatial understanding or reasoning. Ordinary human spatial intuition lets you think about how to move a tile to a certain location, but not really how to make consistent progress towards a solution; what's needed is knowledge of solution techniques. I'd say what you're measuring is 'perception' rather than reasoning.

> what's needed is knowledge of solution techniques

That's definitely in the training data


> how to make consistent progress towards a solution

A 7-year-old child can learn six sequences of a few moves each and, over a weekend, solve the Rubik's Cube. It is a solved algorithm, something an LLM should be very, very good at. What it can't do is reason about spatial relationships.


The Anthropic Pro plan cost double and gave you, I don't know, a tenth the usage, depending on how efficiently you used Copilot requests, and no access to a large set of models including GPT and Gemini and free ones.

Yes, Github's per-request pricing was insane; anyone suggesting using CC instead, or asking if any other provider is as cheap, just doesn't understand the insanity. They were clearly losing a lot of money on the people making good use of it.

I was actually hoping they would change it to something that more closely tracks their actual costs, so that they wouldn't have to rug-pull this badly. What was particularly bad about it was that sending prompts to agents while they were working (to give them corrections) cost extra, so I stopped doing that (initially OpenCode didn't trigger billing for those, until it became official).


Yes, language design is a hugely important determinant of interpreter or JIT speed. There are many highly optimised VMs for dynamic languages, but LuaJIT is king because Lua is such a small and suitable language, and although it does have a couple of difficult-to-optimise features, they are few enough that you can expend the effort. It's nothing like Python. It's not much of an exaggeration to say Python is designed to minimise the possibility of a fast JIT, with compounding layers of dynamism. After years of work, the CPython 3.15 JIT finally managed to run ~5% faster than the stock interpreter on x86_64.
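
A toy illustration of what I mean by compounding dynamism; every line below is legal Python that forces a JIT to plant guards:

    import builtins

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

    p = Point(1, 2)
    p.z = 3  # instance attributes can appear at any time, changing the layout

    # Methods can be swapped on the class mid-run, invalidating inlined calls.
    Point.norm = lambda self: abs(self.x) + abs(self.y)

    # Even builtins can be rebound, so a JIT can't assume len() means len().
    builtins.len = lambda obj: 42
    print(len([1, 2, 3]))  # prints 42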

CPython's current state is more a reflection of resources spent than of what is possible.

See the experience with Smalltalk and Self, where everything is dynamic dispatch, everything is an object, and it all lives in an image that can be monkey-patched at any given second.

PyPy and GraalPy, and the oldie IronPython, are much better experiences than where CPython currently stands.


The problem is that AI has been dominating the conversation for so many years, and they'll get more improvements from removing the GIL than they would from adopting the PyPy JIT.

The JIT would help everyone else more than removing the GIL will; I wish PyPy had become the reference implementation back in the 2.7 days.


Actually, it is because AI has been driving the conversation that CPython JIT efforts are finally happening and being upstreamed.

It is also because of AI that Intel, AMD and NVidia are now getting serious about Python GPU JITs that allow writing kernels in a Python subset.

To the point that I bet Mojo will be too late to matter.
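
To illustrate the "kernels in a Python subset" point, here's roughly what a trivial elementwise-add kernel looks like in OpenAI's Triton, one existing Python GPU JIT (the vendor efforts mentioned above have their own, broadly similar, APIs):

    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
        # Each program instance handles one BLOCK-sized chunk of the arrays.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK + tl.arange(0, BLOCK)
        mask = offsets < n_elements  # guard the ragged final block
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)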


Python is worse, but not by all that much. After all, PyPy has been several times faster for many years.

That is an incorrect analysis. CPython is difficult to JIT because of the lack of thought given to the native bindings / extensions, not because of the language itself (as others point out, PyPy was way faster long ago).

You're correct. I neglected that; extension API compatibility is a big (the most important?) difference between PyPy and CPython's JIT. Amongst language features that affect optimisation potential, an extension API can be the worst.

Edit: I think what you're alluding to is that tracing JITs can overcome a lot of dynamic language features which make things hopeless for method JITs. Where LuaJIT really shines vs PyPy is outside of JITed loops (also in memory and compile-time overheads). I realise this is a bit of a motte and bailey.


Von Neumann may possibly have been the smartest man to ever live, but giving him credit for all of this is too much, brushing aside many other inventors (oft independent, to his credit).

They're definitely not subsidizing API pricing; I can't believe how prevalent that fallacy is on HN of all places. The question is how profitable Claude Code is. Your example 2 is real and major, but your example 1 is ridiculous: almost any new model from any company is better at the same price. And how is increasing the price an example of decreasing prices?

BTW, Github Copilot is pricing Opus 4.7 at 2.5x the cost of Opus 4.6 at promotional pricing (so maybe it'll be 4-5x). But Github's request-based pricing is insane, completely divorced from their actual costs (you can achieve 1M+ tokens for $0.10 if you give it a large request), so I'd assume they're losing a lot of money.
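
Back-of-envelope with the numbers above (both figures come from this thread, not official pricing):

    # ~$0.10 per Copilot premium request, and a single large request can
    # push over a million tokens through the model (per the claim above).
    price_per_request = 0.10
    tokens_per_request = 1_000_000

    effective = price_per_request / tokens_per_request * 1_000_000
    print(f"${effective:.2f} per million tokens")  # $0.10/M, far below typical API rates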


> They're definitely not subsidizing API pricing

The cost of a thing is relative to its source costs. They are subsidizing API pricing if you consider all the costs of providing the service, including all model creation, training, etc.

But that doesn't mean they will be more expensive, longer term. The cost of compute will go down as time goes on. Each year it will get cheaper. Same for power requirements, computing density, cooling, and so on.

I remember trying to store and play mp3 files on older computers. I could typically hold a few on a disk, and if I wasn't doing anything else I could play one. Barely. Now you'll be hard pressed to play an mp3 and see it register any load in top or whatnot.

The same will be true of AI in 20 years.


If the cost of compute is going down, then eventually it will go down enough that we will run our LLMs locally and Anthropic will go out of business.

> then eventually it will go down enough that we will run our LLMs locally and Anthropic will go out of business.

I want robust local LLMs as much as the next person; Gemma E2B (3.2GB) does my word completions as I type. It's gotten to the point where it knows what I'm going to type before I do!
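
For anyone curious, a minimal sketch of that kind of local completion using llama-cpp-python (the model filename and settings are hypothetical; any small GGUF model works similarly):

    from llama_cpp import Llama

    # Hypothetical path to a small quantised model such as Gemma E2B.
    llm = Llama(model_path="gemma-e2b.Q4_K_M.gguf", n_ctx=2048)

    # Ask for a handful of likely next tokens for the text typed so far.
    out = llm("I've been meaning for years to", max_tokens=8)
    print(out["choices"][0]["text"])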

But I don't see Anthropic going out of business anytime soon. As good as some of the open source LLMs are, we're still a long way from being able to run frontier models at home.


The industry will shift, yes. At some point, remote LLM compute will be like AWS.

Everyone can run bare metal at home, or VMs, or containers. Many don't.

However, you'll still want the best model and toolset. So there is some place for them to pivot to. Something for them to sell or licence.

It will be interesting to see where it all lands a decade from now. Who will be left?


If you are using LLMs for tool use locally, then in a decade it will not make sense anymore to pay for hosted solutions. Your device will have compute power to run powerful LLMs trivially.

If you need LLMs at scale to serve many customers, then hosted solutions make sense for the availability aspect. But by this point models can be offered by any generic services provider, like AWS or Cloudflare. Pure AI companies that just offer hosted models and nothing else will go extinct if they don’t expand to offer more services.


> If you are using LLMs for tool use locally, then in a decade it will not make sense anymore to pay for hosted solutions. Your device will have compute power to run powerful LLMs trivially.

LLMs that would have been impossible to run on consumer hardware a couple of years ago are now running on consumer hardware. I'm less concerned about compute power; it's more about memory.

It could be several years before new RAM capacity comes online. Even then, it won't be cheap.

I expect that in the future, hosted frontier models will be a utility like electricity or cable TV: part of a package most people will subscribe to.


> can't believe how prevalent that fallacy is on HN of all places

AI is very emotional for a lot of people, leading to biased takes in both directions. We like to think HN is more rational than average, but we're all human.


Even more detail in the DW article:

""" Fortunately, the boy was very precise and showed me exactly where he found it on a map. Then we went into our findings registration and found that this agricultural site was actually a well-known place," Henker explained.

Berlin's Museum for Pre- and Early History has been systematically conducting surveys on empty land in Berlin since the 1950s to determine where possible excavation sites might be.

In this particular spot, explains Henker, the upper layers of the soil were surveyed in the 1950s and 70s and again later. "Every time, they discovered a few distinct finds that made them say 'ok, there's probably more in the ground here'."

Over the years, fragments of ceramics, Slavonic-era knives and a bronze button have been unearthed on the site, as well as burnt human bones, leading researchers to conclude that this area was used as a burial ground dating as far back as the early Iron Age — and has been in use throughout the centuries. """

https://www.dw.com/en/teen-discovers-first-ancient-greek-art...

