As others have pointed out, the regex engine is the same so the benefits would trickle downstream. For example, VSCode also uses ripgrep and therefore the rust-lang/regex engine.
PCRE2 supports only bounded-length lookbehinds. It's true that having unbounded ones in rust-lang/regex would not be a big improvement, but it still feels like something.
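To make the distinction concrete, a rough sketch (illustrative patterns only; exact limits and error messages vary by PCRE2 version):

```
(?<=USD )\d+        # fixed-length lookbehind: fine in PCRE2
(?<=USD {1,3})\d+   # variable but bounded length: accepted by recent PCRE2
(?<=USD +)\d+       # unbounded length: PCRE2 rejects this at compile time
```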
Roux, while still very much an underdog, is gaining popularity as the method for one-handed (OH) solves. The 2023 world champion in OH used Roux to win the title. I've even heard of cases where people who use CFOP are now learning Roux to use it exclusively for OH solves. The main advantages of Roux for OH are that it requires fewer moves, it requires no cube rotations (which are awkward to do with one hand), and you can use the table as a second hand when performing middle-layer slices.
I'm never going to be particularly fast (~50s is my best measured time, though my actual best may be a bit better), but I really love Roux. It's meditative, like cube tai chi.
Originally, when teased at Google I/O, this product was called Project Tailwind, but the URL was thoughtful.sandbox.google.com (it now seems to redirect to the NotebookLM URL). "Thoughtful sandbox" feels like a much more fitting name.
> In Go, I found that using an interface was not free: it can make the code slower.
The Go version that was presented isn't equivalent, though. In Go you are accepting an interface directly, which hides the value behind a fat pointer and dispatches methods dynamically; in C++ you are using templates to monomorphise the function for specific types. If you want to compare the implementations fairly, you should've used Go generics.
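Something along these lines (a minimal sketch; the `Shape`/`Circle` names are made up for illustration, not the article's actual benchmark code):

```go
package main

import "fmt"

// Shape is a stand-in for whatever interface the original code accepted.
type Shape interface {
	Area() float64
}

type Circle struct{ R float64 }

func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }

// Interface version: each element is boxed in an interface value
// (data pointer + itab pointer), and Area() is a dynamic call.
func totalAreaIface(shapes []Shape) float64 {
	sum := 0.0
	for _, s := range shapes {
		sum += s.Area()
	}
	return sum
}

// Generic version: the slice holds concrete Circle values and the
// compiler sees the concrete type at the call site, which is the
// closer analogue of the monomorphised C++ template.
func totalAreaGeneric[T Shape](shapes []T) float64 {
	sum := 0.0
	for _, s := range shapes {
		sum += s.Area()
	}
	return sum
}

func main() {
	circles := []Circle{{R: 1}, {R: 2}}
	boxed := []Shape{Circle{R: 1}, Circle{R: 2}}

	fmt.Println(totalAreaGeneric(circles)) // concrete type known statically
	fmt.Println(totalAreaIface(boxed))     // dynamic dispatch
}
```

With the generic version the compiler at least has the option of specialising the call; the interface version rules that out entirely.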
Fair criticism, though I do wonder if it'd really make that much of a difference. Go doesn't really monomorphize generics either, and would end up with an equally if not more expensive lookup for the correct generic function at runtime.
That's true at the moment, but it's still an implementation detail. I think I remember early versions of C++ compilers doing the same thing with templates.
Considering the progress the Go compiler has gone through, I think it's reasonable to expect an optimized implementation to come a few versions down the road.
Not the OP, but I have programmed in C++ since 1987 across many different operating systems and hardware platforms, and I've literally never heard of a compiler that implements templates using runtime dispatch. Cfront 3, which I think was the first real template implementation most people used, certainly never did it that way; neither did any version of GCC, Visual Studio, or Sun Workshop, which are the compilers I used the most from that period. I dug out my old copy of Coplien [1], which is from the early 90s; it discusses runtime dispatch in depth in the context of vtables, virtual function pointers, and the cost of these things, so the concept was well understood, but it was not a cost anyone was paying with templates.
[1] https://archive.org/details/advancedcbsprogr00copl "Advanced C++ Programming Styles and Idioms" aka the first programming book that genuinely kicked my ass when I first read it and made me realise how good it was possible to be at computer science.
Right. For starters, from the very beginning C++ has supported function templates that take native types, so you don't necessarily even have any kind of pointer you could attach a vtable to, even if you wanted to. Then add to that the guarantee [1] about POD types being directly compatible with C; I don't see how that could be honoured with runtime dispatch.
[1] which has always been strong even before there was an actual ISO/ANSI standard
Templates don't exist after the front end. There is no ABI that allows them to exist in any object file, and no object file format they could be embedded in, short of a string representation of the source they came from.
Are there any benefits for users from tree-sitter being used under the hood? Can we benefit from tree-sitter's killer features, namely incremental parsing, fallible parsing, lossless syntax trees, or being embeddable in editors that support tree-sitter syntax highlighting?
Yes! Right now, the main benefits are the ability to write grammar definitions that are quite close to the ideal AST structure (made possible by Tree Sitter's grammar format), and being able to embed the parser in many different applications (including WASM via https://github.com/shadaj/tree-sitter-c2rust). Rust Sitter also gives quite nice error diagnostics with spans thanks to Tree Sitter's recovery logic.
Fallible parsing is something I plan to implement in the very near future, by letting users wrap types in `Result` to mark them as an error boundary. Incremental parsing is a bit more difficult, since we'll need to add logic to know when an existing AST struct can be reused, but it's on the roadmap.
I would like to delve into the compatibility with tree-sitter, since for the other features, tree-sitter being under the hood is mostly an implementation detail:
If I were to write my parser using rust-sitter, would I still be able to generate the final standalone tree-sitter parser as a `.so`? That way I could integrate with tools that support tree-sitter parsers (for instance https://github.com/nvim-treesitter/nvim-treesitter#language-...) without having to write the `.js` grammar.
In principle, yes: you can use the `rust-sitter-tool` crate to generate the Tree Sitter JSON definition and then compile it to a standalone parser. The grammar is auto-generated, though, so it may be a bit trickier to integrate into other tooling? The general problem of exporting just the grammar is something that's been on my radar, but I haven't had a chance to think it through too deeply yet.