Show HN: Lingo – A linguistic database in Rust with nanosecond-level performance
42 points by peerlesscasual 4 days ago | 23 comments
Hi HN, I made Lingo - the SQLite of semantic search.

I'm a self-taught developer and researcher who left school at 16, and I've spent some time exploring a first-principles approach to system design for various frontier problems. In this case, that problem is AI, and the goal is to challenge the 'bigger is better' transformer paradigm.

Lingo is the first piece of that research, a high-performance linguistic database designed to run on-device.

The full technical overview and manifesto is here: https://medium.com/@robm.antunes/bcd1e9752af6

The paper has been archived on Zenodo with a DOI: https://doi.org/10.5281/zenodo.17196613

The code is open source at https://github.com/RobAntunes/lingodb. It's currently broken and feature-incomplete, but I'm working on it - I just wanted to start getting some feedback.

All benchmarks are reproducible from the repo and can also be found in the various texts.

As an independent without academic affiliation, I'd be incredibly grateful for your feedback! I'm here to answer any questions.

Cheers!


Already the title of your submission does not check out. Do you know how many clock cycles a 1 GHz CPU executes in one nanosecond? One. Just reading the input argument of a function takes a "nanosecond-scale" amount of time.
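For a sense of scale, here's a minimal Rust sketch (mine, not from the repo) that measures the cost of merely reading the clock; on typical hardware it lands at tens of nanoseconds per call, i.e. dozens of cycles before any real work has happened:

    use std::time::Instant;

    fn main() {
        const N: u32 = 1_000_000;
        let start = Instant::now();
        for _ in 0..N {
            // black_box keeps the compiler from optimizing the call away.
            std::hint::black_box(Instant::now());
        }
        // Typically prints tens of nanoseconds per call, i.e. dozens of
        // clock cycles just to read a timestamp.
        println!("avg per Instant::now(): {:?}", start.elapsed() / N);
    }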

> I'm a self-taught developer and researcher who left school at 16, and I've spent some time exploring a first-principles approach to system design for various frontier problems.

As much as I appreciate new ways of thinking, whenever I read "first-principles approach", my alarm bells go off. More often than not it just means "I chose to ignore (or am too impatient to learn about) all insights that generations of research in this field have made". The "left school at 16" and "self-taught" parts also indicate that. This may explain the hyperbole of the title as well, as it does not pass the smell test.

If you are looking for advice, here is mine: try to not ignore those that came before you. Giants' shoulders are very wide, very high up and pretty solid. There is no shame in standing on them, but it takes effort to climb up.


What an amazing comment: criticism of the title without going into any of the content, with a side of character judgement.

Ok, since you're looking for sincere feedback.

Great vision. Challenging the "scale" of current AI solutions is super valid, if only because humans don't learn like that.

Architecture: despite other comments, I am not so bothered by mmap (if read-only) but rather by the performance claims. If your total DB is 13 KB, you should be answering queries at amazing speeds, because at that point you're just running code on in-cache data. The performance claim therefore means nothing, because what you're doing is not performance-intensive.
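To put a number on that, here's a hypothetical micro-benchmark (my own sketch, nothing to do with the repo's code): a full linear scan of a 13 KB buffer sitting in L1 cache takes on the order of a microsecond, so fast per-query numbers at this size are expected, not remarkable:

    use std::time::Instant;

    fn main() {
        // ~13 KB of data, comfortably inside a typical 32 KB L1 data cache.
        let db: Vec<u8> = (0..13 * 1024).map(|i| (i % 251) as u8).collect();

        const QUERIES: u32 = 100_000;
        let start = Instant::now();
        let mut hits = 0u64;
        for q in 0..QUERIES {
            // A "query" here is just a full scan counting matching bytes.
            let needle = (q % 256) as u8;
            hits += db.iter().filter(|&&b| b == needle).count() as u64;
        }
        println!("{hits} hits, avg per scan: {:?}", start.elapsed() / QUERIES);
    }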

Claims: A frontal attack on the current paradigm would at least have to include real semantic queries, which I don't think is what you're currently doing; you're doing language analytics, i.e. NLP. Maybe that is how you intend to solve semantic queries later, but since it's not what you're doing now, that should be clear from the get-go. Especially because the "scale" of the current AI paradigm has nothing to do with how tokenization happens, but rather with how the statistical model is trained to answer semantic queries.

Finally, the example of "Find all Greek-origin technical terms" is a poor one because it is exactly the kind of "knowledge graph" question that was answerable before the current AI hype.

Nevertheless, love the effort, good luck!

(oh and btw: I'm not an expert, so if any of this is wrong, please correct me)


Summary from my side:

Outstanding features:

- a far better, very information-dense representation of basic language properties, expressed directly as a property of the storage layout (which seems entirely achievable to me; see the sketch after this list)

- attention (signal) as resonance: analog wave-signal-processing methods can be used, so far less compute is needed
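To illustrate the first point, here is my own sketch of what "properties in the storage layout" could look like (field widths and codes are made up, not Lingo's actual format); a query like the "Greek-origin terms" example mentioned in another comment reduces to a mask-and-compare scan:

    /// Hypothetical packed record: basic language properties live in the
    /// bit layout itself rather than in separate columns. Field widths
    /// are illustrative only.
    #[derive(Clone, Copy)]
    struct PackedTerm(u32);

    impl PackedTerm {
        fn new(morpheme_id: u32, origin: u8, pos: u8, register: u8) -> Self {
            debug_assert!(morpheme_id < (1 << 20));
            PackedTerm(
                (morpheme_id & 0xF_FFFF)              // bits 0..20: morpheme id
                    | ((origin as u32 & 0xF) << 20)   // bits 20..24: etymological origin
                    | ((pos as u32 & 0xF) << 24)      // bits 24..28: part of speech
                    | ((register as u32 & 0xF) << 28) // bits 28..32: register/domain
            )
        }

        fn origin(self) -> u8 {
            ((self.0 >> 20) & 0xF) as u8
        }
    }

    fn main() {
        const ORIGIN_GREEK: u8 = 3; // illustrative code, not a real Lingo constant
        let terms = [PackedTerm::new(42, ORIGIN_GREEK, 1, 2), PackedTerm::new(7, 0, 1, 2)];
        // "Find all Greek-origin terms" becomes a linear mask-and-compare scan.
        let greek = terms.iter().filter(|t| t.origin() == ORIGIN_GREEK).count();
        println!("{greek} Greek-origin term(s)");
    }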

Analysis: It will have the same fundamental limitations in terms of "understanding" and "thinking" as traditional LLMs, as its "knowledge" is still based on language itself. I believe it would be deployed in combination with other models that supply the nuances of actual content, namely traditional LLMs, which are focused on written text as it appears. Nevertheless, it should add a high-quality, highly efficient building block for language processing to the LLM landscape. Furthermore, it may also be a nice starting point for a general movement towards rethinking architecture patterns in favor of lower resource consumption and high quality for any kind of information.


> • Memory-Mapping (mmap): We treat the database file as if it’s already in memory, eliminating the distinction between disk and RAM.

Ugh, not another one...


Yep, another developer enthusiastically proposing mmap as an "easy win" for database design, when in reality it often causes hard-to-debug correctness and performance problems.

To be fair, I use it to share financial time series between multiple processes, and as long as there is a single writer it works well. It has been in production for several years.

Creating a shared memory buffer by mapping it as a file is not the same as mapping files on disk. The latter has weird and subtle problems, whereas the former just works.

To be clear, I am indeed doing mmap on the same file on disk, not using shared memory. But there is only one thread in one process writing to it, and the readers are tolerant of millisecond delays.
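For reference, the reader side of that pattern is only a few lines with the memmap2 crate (file name and record layout here are made up); all of the safety rests on the single-writer, append-only discipline:

    use memmap2::Mmap;
    use std::fs::File;

    fn main() -> std::io::Result<()> {
        let file = File::open("timeseries.bin")?;
        // SAFETY: sound only under an out-of-band contract: exactly one
        // writer process appends to this file, and readers tolerate
        // slightly stale or partially written tail data.
        let map = unsafe { Mmap::map(&file)? };

        // Interpret the mapping as little-endian f64 samples (hypothetical
        // layout). A torn read at the tail is possible, which is why the
        // readers accept millisecond-level staleness.
        let samples = map.len() / 8;
        if samples > 0 {
            let tail = &map[(samples - 1) * 8..samples * 8];
            let value = f64::from_le_bytes(tail.try_into().unwrap());
            println!("latest of {samples} samples: {value}");
        }
        Ok(())
    }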

> millisecond delays

I thought you said financial time series!

But yeah, this is a case where mmap works great - convenience, not super fast, single writer and not necessarily super durable.


> I thought you said financial time series!

Yeah it is just your average normal financial time series.


Why not, though? From what I can see in the docs, these databases are supposed to be static and read-only, at least when used on-device.

Page cache reclamation is mostly single-threaded. It's much simpler than anything you could build in user space, but it has no weighting for specific pages, etc.

And crossing into the kernel flushes the branch predictor caches and the TLB, so it's not free at all.


No issue if you know what you are doing. Not sure about the author, but I've seen very high-perf mmap systems run for decades without corruption or issues (in HFT/finance/payments).

Ctrl-F'd my way here the moment I saw that in the article.

The repo is 100% AI slop.

Advice to OP: lay off the Claude Code if your goal is to become an “independent researcher”. Claude doesn’t know what it’s doing, but it’s happy to lead you into a false sense of achievement because it’ll never tell you when you’re wrong, or when it’s wrong.


Bizarre, because a quick look at the code and commit log shows it was likely 100% coded by AI, so the author is not trying too hard to hide it; yet they also seem to have forgotten to mention it anywhere in the README or the blog post.

Out of interest: can you elaborate how you analyzed the repo to come to this conclusion?

All of the code was imported in one commit. The rest of the commits delete the specs that, I guess, were used to generate the code. There's one commit adding code that explicitly says it was generated by Claude Code. There's basically no chance the whole codebase is not AI slop.


The specs themselves seem to have been generated with LLMs too, as in https://github.com/RobAntunes/lingodb/blob/5e3834de648debf08... – overuse of emojis, excitement, etc.

Not so sure; reading with mmap is OK, but simultaneous read/write operations are a bit tricky.

Really impressive work :)


