Hacker News

At 0.5GB that's 5k per entry -- what are you storing for each?

For comparison, my (non-work) history since 2012 (plain text) is 181k entries, and takes 25MB. I store each command along with when and where it was run. (https://www.jefftk.com/p/logging-shell-history-in-zsh)



Ah, a fellow packrat! I have every command I've ever typed into a shell since around 2005, and my history weighs in at one CD's worth, 650MB (as of a couple of years ago).

I'm probably being wasteful of space because I store each session in a separate file. I used to do a lot of data analysis at the shell back in the day, and found it useful to audit sequences of commands afterwards for mistakes, or to turn them into scripts.


This is so insane that I love it. Do you also save your belly button lint since 2005? Or nail clippings? :)


I'm only a digital packrat. Bits are so much cheaper to hoard; even deciding to throw something away is often more work than keeping it.


This is more like saving your old notebooks and drafts, except that they don't take up any meaningful space. Or like having a revision control system.

Do you rebase your git repos regularly to delete commits older than 6 months?


Do you regularly back up your history?


Oh yes. It gets backed up along with everything else.


As a lot of people mentioned, it's an FTS index, so it's definitely much larger than the raw text. I also store a lot of additional information with each entry: pwd, session id, shell used, exit code, and of course the whole command. And to support iCloud sync, there's also an iCloud entity id per entry. Now that you point it out, 5k per entry is a lot of data, but I'm OK with that. This information is really important to me.
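A rough sketch of what storing that per-entry metadata might look like in SQLite. The table and column names here are my own guesses for illustration, not the tool's actual schema:

```python
import sqlite3

# Hypothetical schema for a shell-history store; the real tool's
# schema is not shown in this thread.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE history (
        id INTEGER PRIMARY KEY,
        command TEXT NOT NULL,       -- the whole command line
        pwd TEXT,                    -- working directory
        session_id TEXT,             -- shell session it ran in
        shell TEXT,                  -- e.g. zsh, bash
        exit_code INTEGER,           -- command's exit status
        icloud_entity_id TEXT,       -- sync bookkeeping
        ran_at REAL                  -- timestamp
    )
""")
conn.execute(
    "INSERT INTO history (command, pwd, shell, exit_code) VALUES (?, ?, ?, ?)",
    ("git status", "/home/me/project", "zsh", 0),
)
row = conn.execute("SELECT command, exit_code FROM history").fetchone()
print(row)  # ('git status', 0)
```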


Not the op, but I'd guess it's the full text search index.


For 100k entries you can grep them instantaneously; there's no need to maintain an index.
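The claim is easy to check: a linear scan over 100k short lines is effectively instantaneous. A quick sketch with synthetic history (the entries are made up for illustration):

```python
import time

# Build a fake history of 100,000 short entries.
history = [f"git commit -m 'change {i}'" for i in range(100_000)]

start = time.perf_counter()
hits = [line for line in history if "change 99999" in line]
elapsed = time.perf_counter() - start

# A full scan of 100k lines finishes in a few milliseconds.
print(len(hits), f"{elapsed:.3f}s")
```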


Grep only works if you want an exact string match. If you want to find words out of order or support features like stemming, FTS is necessary.


Maybe I have some sort of disease, but while reading "find words out of order or support features like stemming" the regexes for that immediately flashed before my eyes, so I think "necessary" is a little strong there.
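For the record, both cases the parent mentions can be expressed as regexes: lookaheads handle words in any order, and an alternation of suffixes approximates stemming. The patterns and sample history below are my own examples:

```python
import re

history = [
    "docker build -t app . && docker push app",
    "deployed the app with kubectl apply",
    "ls -la",
]

# Words in any order: each lookahead must match somewhere in the line.
out_of_order = re.compile(r"(?=.*push)(?=.*build)")
print([h for h in history if out_of_order.search(h)])

# Poor man's stemming: match deploy/deploys/deployed/deploying.
stem = re.compile(r"\bdeploy(?:s|ed|ing)?\b")
print([h for h in history if stem.search(h)])
```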


FTS is not the same as regex.


I don't think I said it was. I was addressing the specific use cases mentioned. If there's another use case you think is important in searching command line history, feel free to describe it.


> feel free to describe it

Didn't they already? eg stemming


Most stemming use cases are trivially solved with a regex; that's the point he was making. The difference in what a beginner and an expert can do with regexes is quite large.


Ahhh, interesting point.

"We could learn advanced regexes... or we could just use FTS5".

Hard call. :)


Maybe! Full-text search is great for text. Command lines have some things in common with text, but they definitely aren't normal text. E.g., punctuation is much more significant. Stemming may not be appropriate. Case matters. Word boundaries are different, and many of the significant lumps aren't really words.


Well, I suppose what's trivial for me might be advanced for you :)


For regexes, definitely. ;)


With a small enough corpus, full text search does not require an index to be instantaneous, and 100k entries is easily small enough for that.

Additionally, everything you describe can be phrased as a regular expression.


Sometimes it's nice to not manually write a regexp to find all of the variants of every word or deal with arbitrary ordering of substrings. And if you're using SQLite and fts5 is installed, why not just create a virtual full text search table with one command and use that? With a small enough corpus, it's a meaningless distinction to bikeshed about the implementation: the easiest solution to build is the best. 500MB of disk space for a pet project that gives you convenience is a terrifically small amount of storage. I have videos that I recorded on my phone that take up more than double that.
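The "one command" really is one command. A minimal sketch using Python's bundled SQLite, assuming fts5 is compiled in (it is in most stock builds); the sample commands are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One command creates the full-text index (requires fts5 support).
conn.execute("CREATE VIRTUAL TABLE hist USING fts5(cmd)")
conn.executemany(
    "INSERT INTO hist (cmd) VALUES (?)",
    [("git push origin main",), ("push the git tag",), ("ls -la",)],
)
# FTS5 MATCH treats multiple terms as an implicit AND,
# so word order in the query doesn't matter.
rows = conn.execute(
    "SELECT cmd FROM hist WHERE hist MATCH 'git push'"
).fetchall()
print(rows)  # both lines containing 'git' and 'push', in either order
```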


Not defending the idea of a database-backed history, but no db schema is going to beat plain text's one-byte 0x0A per-line delimiter.
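The numbers upthread bear this out. At 25MB for 181k plain-text entries, the average entry (newline included) is around 145 bytes, versus roughly 5kB per entry in the indexed database:

```python
# Figures from earlier in the thread.
plain_text_bytes = 25 * 1024 * 1024   # 25MB plain-text history
entries = 181_000

per_entry = plain_text_bytes / entries
print(f"{per_entry:.0f} bytes/entry")      # ~145 bytes, newline included

db_per_entry = 0.5 * 1024**3 / 100_000    # 0.5GB for ~100k entries
print(f"{db_per_entry:.0f} bytes/entry")  # ~5kB: dozens of times larger
```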


*cough* compressed row data *cough*


*cough* log rotate and gzip *cough* :D
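And gzip really does claw most of it back, since shell history is highly repetitive. A sketch with synthetic data, so the exact ratio is illustrative only:

```python
import gzip

# Synthetic, repetitive history, the way real shell logs tend to be.
history = "\n".join(
    f"git commit -m 'change {i}'" for i in range(10_000)
).encode()

compressed = gzip.compress(history)
ratio = len(history) / len(compressed)
# Repetitive text compresses several-fold or better.
print(f"{len(history)} -> {len(compressed)} bytes ({ratio:.1f}x)")
```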


Touché. :)



