Sounds like a fun project. However, from the readme:

Efficient file listing: Optimized for speed, even in large directories

What exactly is it doing differently to optimize for speed? Isn't it just using the regular fs lib?
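
For anyone wondering what that baseline looks like: a plain name-only listing with the standard library is only a few lines. This is a sketch of the generic std::fs approach the question refers to, not lla's actual code:

    use std::fs;
    use std::io;

    fn main() -> io::Result<()> {
        // One readdir pass: file_name() comes straight from the directory
        // entry, so a plain listing needs no extra syscall per file.
        for entry in fs::read_dir(".")? {
            println!("{}", entry?.file_name().to_string_lossy());
        }
        Ok(())
    }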



On my system it uses twice as much CPU as plain old ls in a directory with just 13k files. To recursively list a directory with 500k leaf files, lla needs > 10x as much CPU. Apparently it is both slower and scales worse with directory size.
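
If anyone wants to reproduce this, here is a rough sketch of one way to compare CPU time of the two commands. The `libc` crate dependency and calling lla with just a directory argument are assumptions on my part:

    // Cargo.toml needs: libc = "0.2"
    use std::process::{Command, Stdio};

    // Total user + system CPU time of all waited-for children so far.
    fn child_cpu_seconds() -> f64 {
        let mut ru: libc::rusage = unsafe { std::mem::zeroed() };
        unsafe { libc::getrusage(libc::RUSAGE_CHILDREN, &mut ru) };
        let secs = |tv: libc::timeval| tv.tv_sec as f64 + tv.tv_usec as f64 / 1e6;
        secs(ru.ru_utime) + secs(ru.ru_stime)
    }

    // Run one command on a directory and return the CPU time it consumed.
    fn measure(cmd: &str, dir: &str) -> f64 {
        let before = child_cpu_seconds();
        Command::new(cmd)
            .arg(dir)
            .stdout(Stdio::null()) // discard output so terminal rendering doesn't skew things
            .status()
            .expect("failed to run command");
        child_cpu_seconds() - before
    }

    fn main() {
        let dir = std::env::args().nth(1).unwrap_or_else(|| ".".into());
        for cmd in ["ls", "lla"] {
            println!("{cmd}: {:.3}s CPU", measure(cmd, &dir));
        }
    }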


On the latest release it can list a tree 100 levels deep with over 100k files in less than 100ms, and in about 40ms if cached.


Will definitely prioritize optimization in the next releases. Planning to benchmark against ls on various systems and file counts to get this properly sorted.


Not trying to “gotcha” you, but I would imagine that 10x the CPU of ls is still very little, or am I wrong?


In the case of the 500k tree, `lla` needs 2.5 seconds, so it's pretty substantial.


Is listing a lot of files really CPU-limited? Isn’t the problem IO speed?


What exactly makes ls faster?
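
One likely answer (hedged, since I haven't profiled lla): a default ls only needs the directory entries themselves, while anything that shows per-file details has to stat every entry, and on a 500k-file tree those extra syscalls add up. Roughly the difference between these two loops:

    use std::fs;
    use std::io;
    use std::path::Path;

    // Cheap: names only, one readdir pass (what a default `ls` does).
    fn count_entries(dir: &Path) -> io::Result<usize> {
        Ok(fs::read_dir(dir)?.count())
    }

    // Expensive: one extra stat syscall per entry (what detailed listings need).
    fn total_size(dir: &Path) -> io::Result<u64> {
        let mut bytes = 0;
        for entry in fs::read_dir(dir)? {
            bytes += entry?.metadata()?.len();
        }
        Ok(bytes)
    }

    fn main() -> io::Result<()> {
        let dir = Path::new(".");
        println!("entries: {}", count_entries(dir)?);
        println!("bytes:   {}", total_size(dir)?);
        Ok(())
    }

Sorting by time or size, and per-file metadata for columns or icons, are the usual features that force those extra stat calls even when you only asked for names.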


But it’s written in rust so it’s super fast. Did you take that into account when running your benchmarks? /s



