Hacker News | jaakl's comments

Not just energy cost, but also licensing cost of all this content…


ASCII tablature was not invented by ChatGPT; it is a decades-old thing. It is easier to write with basic computer capabilities, and also easier to read for ChatGPT (and for humans with no formal music education), so it is probably even more prevalent on the Internet than "standard graphical notation". It is quite expected that LLMs have learned a lot of it.
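For readers who haven't seen it, a typical ASCII tab looks something like this (a made-up fragment; the six lines are guitar strings from high e down to low E, and the numbers are fret positions):

```
e|-------------------|
B|-------------------|
G|-------------------|
D|---------5---------|
A|---5-7-------7-5---|
E|-0-----------------|
```

Plain characters in a fixed-width layout: trivial to type, store, and tokenize, which is exactly why so much of it ended up on the Internet and in training data.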


Note that being an Estonian OÜ (LLC) brings the convenience of fully electronic communication for all state affairs, and the yearly financial reports are super easy to get (not even registration needed). Actually, more or less the only touchpoint with the state is the yearly report: no taxes until you pay salaries, apply for VAT, deal with a licensed area, or actually cash out the profits. Also, you can be a foreigner and use such an OÜ as an "e-resident".

The official company reporting source is https://ariregister.rik.ee/eng/company/16225385/Organic-Maps... . The yearly PDF reports are in Estonian, but your favorite AI should help. The numbers are in actual EUR (not thousands), so they seem to have about 33K EUR in profits; IMHO no huge pile of money to worry too much about.


I hate it. I used it to attach carefully curated metadata (sources etc.) to my collection of tens of tables, and someone else did a backup/restore of the database and all of it was lost.


It seems to be based on the very common naive belief that things which are named the same or similarly in different domains are conceptually the same, so "let's deduplicate"? There can be rare moments when they really are, but then the moment passes and you are left with only trouble.


To me the motivation seems more along the lines of "we build lots of different systems that deal in the same domains" (because they are deep in microservice land, have apps for all kinds of platforms, ...) "let's make sure they all use the same definition of things". Do you think that doesn't make sense (because each of those should be considered its own domain?), or does something else give you that impression?
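As a toy sketch of what "one shared definition" can mean in practice (all names here are invented for illustration): a canonical type is published as a shared library, and each service imports it instead of declaring its own near-duplicate variant.

```python
from dataclasses import dataclass

# Hypothetical "canonical" domain definition, published as a shared
# library so every microservice uses the same shape of Customer.
@dataclass(frozen=True)
class Customer:
    id: str
    email: str
    display_name: str

# A billing service and a notifications service would both import this
# type rather than each defining their own Customer with drifting fields.
c = Customer(id="c-1", email="ada@example.com", display_name="Ada")
print(c.email)
```

Whether that is a good idea is exactly the question above: it removes drift between services, at the cost of coupling every service to one definition that may not fit all of their contexts.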


Did you try putting all this (complex and external) context into the context (claude.md or whatever), with instructions on how to do proper TDD, before asking for the tests? I know that may be more work than actually coding it yourself, since you know it all by heart and the external world is always bigger than the internal one. But in the long term, and with teams/codebases that have no good TDD practices, that might end up producing useful test iterations. Of course the developer committing the code is responsible for it anyway, so what I would ban is putting "AI did it" in the commits: it may mentally work as a "get out of jail" card attempt for some.
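As a hedged illustration (the file contents and rules here are invented for this example, not a documented standard), such a project-context file might look like:

```markdown
# CLAUDE.md (hypothetical example)

## Testing
- Test runner: `ward`, NOT pytest. Never convert tests to pytest style.
- Run the suite with `ward` from the repo root.

## TDD workflow
1. Write ONE failing test that describes the desired behaviour.
2. Run it and confirm it fails for the expected reason.
3. Write the minimal code to make it pass, then refactor.
4. Repeat; do not batch-write tests after the implementation.

## Domain context
- External services are rate-limited; always mock them in tests.
```

The point is front-loading the context the model cannot infer from the code alone, so the test iterations start from your conventions instead of the training data's defaults.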


we tried a few different variations but tbh had universally bad results. for example, we use the `ward` test runner in our python codebase, and claude sonnet (both 3.7 and 4) keeps trying to force-switch it to pytest lol. every. single. time.

maybe we could either try this with opus 4 and hope that cheaper models catch up, or just drink the kool-aid and switch to pytest...


Some models do have the US 2024 presidential election results explicitly given in the system prompt, to fool everyone who uses them as a cutoff check.


My main takeaway here is that the models cannot know how they really work, and asking them just returns whatever the training dataset would suggest: how a human would explain it. So it does not have self-consciousness, which is of course obvious, and we get fooled just like the crowd running away from the arriving train at the Lumière screening. The LLM just fails the famous old test "cogito ergo sum". It has no cognition, ergo they are not agents in more than a metaphorical sense. Ergo we are pretty safe from the AI singularity.


Nearly everything we know about the human body and brain is the result of centuries of trial, error, and experimentation, not of any 'intuitive understanding' of our inner workings. Humans cannot tell how they really work either.


Do you know how you work?


Well, “building” is also troubleshooting, fixing a problem, just at a somewhat more general level: ideally it is not fixing a “small” well-defined problem in software, but a bigger and fuzzier problem in the real world; the thinking process and tooling are quite the same. Of course many devs don't think of it like that; they just try to fulfill given requirements without understanding the real problem they are troubleshooting. Actually, a lot of software “builds” are really troubleshooting attempts on top of other software too, which makes that border even fuzzier.


Not funny anymore, after 1/20/2025

