Don't think I've benchmarked that case, since that's not generally what you'll run into with a human data format. Would love to see numbers if you have them, since I have no idea what they'd be.
Personally, performance isn't my biggest concern and I only focus on it for the cargo case, but I want to switch cargo to storing packaged `.crate` files in a format meant for machines.
(maintainer of `toml` and `toml_edit` packages for Rust)
The file it generates is not valid TOML, though; even if you fix the obvious syntax issues you run into issues like:
```
% tomlv ./big_file.ini
Error in './big_file.ini': toml: line 35: Key 'section' has already been defined.
```
Perhaps tomlc99 doesn't check for this; I didn't try it.
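For anyone who wants to reproduce this without tomlv, here's a minimal sketch using the Rust `toml` crate (the input string is made up to mirror the error above); it rejects the duplicate table definition at parse time:

```rust
fn main() {
    // Two `[section]` headers: the TOML spec forbids defining a table twice.
    let input = "[section]\nkey = 1\n\n[section]\nkey = 2\n";

    // The `toml` crate reports this as a parse error rather than
    // silently letting the second definition win.
    match input.parse::<toml::Table>() {
        Ok(_) => println!("parsed fine (unexpected)"),
        Err(e) => println!("rejected: {e}"),
    }
}
```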
Maybe I should add something to toml-test for this. While I agree that 50M files are somewhat rare[1] and that performance isn't a huge consideration, it's not entirely unimportant either, and this could give people a baseline for answering "is the performance of my parser horribly atrocious or roughly fine?"
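Until something lands in toml-test, a rough way to get that kind of baseline yourself is a one-shot wall-clock timing. A sketch, again using the Rust `toml` crate (`big_file.toml` is a placeholder for whatever large document you generate):

```rust
use std::time::Instant;

fn main() {
    // Placeholder file name; substitute your own large TOML document.
    let input = std::fs::read_to_string("big_file.toml").expect("read file");

    let start = Instant::now();
    let table: toml::Table = input.parse().expect("valid TOML");
    println!(
        "parsed {} bytes ({} top-level keys) in {:?}",
        input.len(),
        table.len(),
        start.elapsed()
    );
}
```

A single measurement like this only answers "atrocious or roughly fine"; for anything finer-grained you'd want a proper benchmark harness.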
[1]: One use-case I have is that I use TOML for translation files (like gettext .po, except in, well, TOML) and in a large application with a whole bunch of translations I can see that adding up to 50M.
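To sketch what I mean (this layout is made up, not any real project's format):

```toml
# Hypothetical translation file: source strings as keys, translations as values.
[messages]
"Save changes?" = "Wijzigingen opslaan?"
"Quit without saving" = "Afsluiten zonder op te slaan"
```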