
I've just run a benchmark on my code.

It did 13875457.34 bytes per second (about 13.2 MiB/s), which means my program would have processed the 82 MiB file he had in about 6.20 seconds, faster than hexdump, GCC, and Clang.

It used a max of 3223848 Kbytes, which is about the size of the file it was processing. (The file was 3300000020 bytes exactly.)

I also tried with a file as close to 82 MiB as I could get. It used 85168 Kbytes max, and it took 6.77 seconds.

My code could probably be optimized too. It tries to skip a header comment, it reads the entire input file at once when it could stream it on demand, and it checks for stuff to exclude, which takes time.

If I take out the if statement that begins with:

    if (!strncmp(in + i, bc_gen_ex_start, strlen(bc_gen_ex_start)))

that should remove most of the work that doesn't matter.
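The check itself is just a strncmp at each scan position. In sketch form, something like the following; the end-marker name and the exact skip semantics here are my illustration, not the real code:

```c
#include <string.h>

/* Sketch: given a scan position `i` into `in` (length `n`), if an
 * exclusion-start marker begins here, advance past the matching end
 * marker; otherwise return `i` unchanged. Marker names illustrative. */
static size_t skip_excluded(const char *in, size_t n, size_t i,
                            const char *start, const char *end)
{
    size_t slen = strlen(start), elen = strlen(end);
    if (i + slen <= n && !strncmp(in + i, start, slen)) {
        i += slen;
        /* Scan forward until the end marker (or end of input). */
        while (i + elen <= n && strncmp(in + i, end, elen))
            ++i;
        if (i + elen <= n)
            i += elen;
    }
    return i;
}
```

Doing a strncmp at every byte is exactly the kind of per-byte overhead that removing the if statement avoids.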

If I run that version on the 82 MiB file I made, it uses 85124 Kbytes and takes 5.40 seconds.

This is still far slower than objcopy and incbin, but maybe it's good enough, right?

I think I could optimize it more, but I still think that's a pretty good showing against the competition, especially for portability.

Edit: I forgot to mention that I did these tests while running a fuzzer (AFL++) on 15 of my 16 cores and while watching YouTube. I didn't want to stop the fuzzer just for this (it's been running for more than 24 hours).




But that’s just to generate the C file, right? You still need to compile that C file, which is the biggest bottleneck.


Argh! I feel dumb. You're absolutely correct. My bad.



