The only real question I have with this is: did the program have to meet any specific performance metric?
I could write a small utility in Python that would be completely acceptable for use but at the same time be 15x slower than an implementation in another language.
So how do you compare code across languages that wasn't written for performance, given one may have some set of functions that happens to favour one language in that particular app?
I think to compare you have to at least have the goal of performance for both when testing. If he had needed his app to be 30% faster he would have made it so, but it didn't need to be, so he didn't. Which doesn't make it great for comparison.
Edit: I also see that your reply was specifically about the point that the libs by themselves can help performance with no extra work, and I do agree with you, as you were with the guy above.
Honestly I'm not quite sure what point you're making.
> If he needed his app to be 30% faster he would have made it so
Would he have? Improving performance by 30% usually isn't so easy. Especially not in a codebase which (according to Cantrill) was pretty well optimized already.
The performance boost came to him as a surprise. As I remember the story, he had already made the C code pretty fast and didn't realise his C hash table implementation could be improved that much. The fact that Rust gave him a better map implementation out of the box is great, because it means he didn't need to be clever enough to figure those optimizations out himself.
It's not an apples-to-apples comparison. But I don't think comparing the world's fastest C code to the world's fastest Rust code is a good comparison either, since most programmers don't write code like that. It's usually incidental, low-effort performance differences that make a programming language "fast" in the real world, like a good btree implementation just shipping with the language.
I did feel my post was a bit unneeded when I added my edit :)
My point about the 30% was that you mentioned the gain he got in Rust and attributed it to, essentially, better algorithms in the Rust lib he used. Once he knew that, it's hard to say that Rust itself is 'faster', but the point is valid and I accept that he gained performance by using the Rust library.
My other point was that the speed of his code probably didn't matter at the time. If it had been a problem in the past, he probably would have taken the time to profile and gain some more speed. Sure, you can't gain speed that isn't there to be had, but as you pointed out, it wasn't a language issue, it was a library implementation issue.
He could have arbitrarily picked a different program that used a good library and had the results reversed.
I also agree that most devs are not working down at that level of optimisation, so the default libraries can help, but at the same time it mostly doesn't matter if something takes 30% longer when the overall time is not a problem. If you are working on something where speed really matters and you are trying to shave off milliseconds, then you have to be the kind of developer who can work C or Rust at that level.
What I think it illustrates more is how much classic languages could gain from a serious overhaul of their standard libraries, and maybe even a rebrand, if that's the expected baseline of a conformant implementation.
>If he needed his app to be 30% faster he would have made it so
That still validates what the parent wrote: "In short, the maximum possible speed is the same (+/- some nitpicks), but there can be significant differences in typical code".
Yes, I copied it into my scratch buffer to read it; it's not readable in the browser at all with a dark background. That did then make all the elisp nice to look at.
I thought the same thing, and then I went a little smaller! I went to a large split, then to a 58-key split, then to a 42-key split. At 42 I saw no advantage in going smaller, other than it being smaller if you liked the look of it.
Then I wanted to try a small dactyl, and that led me to an already-designed 36-key split, and I love it. I lost some more keys and found that I can easily handle that. I would not say the move from 42 to 36 made it more ergonomic, but it was not worse either.
While I went from 42 to 36 without thinking there were downsides, I think going any smaller does start to compromise functionality for the sake of form.
At 36 keys, I think that even on a bigger keyboard I would emulate the layout I have now, as it is so easy.
I don't think Miryoku is a good layout for many either; it will depend on your usage.
A strange thing is that many come into the small split keyboard world and then don't have the motivation to come up with something that works for them. You can make anything work, so a lot of people make Miryoku work, but I doubt it would be the best layout for many of them.
I code a lot and find that its layout would not suit me. I have 99% of what I need on the base layer, plus one more layer for doing development work, on a 36-key board. I could not imagine wanting to switch layers as much as I would have to for a continuous stream of alphabet, symbols, and numbers.
I think Miryoku would be fine if you were an average computer user editing documents, emails, etc., and I do sometimes forget that there are a lot of guys out there using Miryoku for only that.
I recently downloaded a number of magazines from a now-defunct publication from the late eighties. No one sells them, and Anna's Archive was the only place I found them. It's not exclusively pirating; it's a source for a lot of out-of-print material.
Thank you for the suggestion. The news I was after was Usenet feeds. I have archives of some groups, but thought it would be interesting, rather than going and reading it all at once, to have it deliver a new day's worth of posts every day from the past.
As the messages are all in plain text format with headers and timestamps, I should just be able to extract the messages from a particular day and add them to my newsreader; worst case, I'd change the timestamp to today so the posts appear in a sensible place, and put the original timestamp back somewhere it can be seen when reading.
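A rough sketch of what that could look like, assuming the archive is in standard mbox format (the filename and target date are just illustrative) and using only Python's stdlib `mailbox` and `email` modules:

```python
# Sketch: pull one past day's posts from an mbox archive and restamp them
# so a newsreader files them under today. Assumes a standard mbox file;
# the path and target date below are hypothetical.
import mailbox
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime, format_datetime

def messages_for_day(mbox_path, day):
    """Yield (message, original datetime) pairs whose Date header falls on `day`."""
    for msg in mailbox.mbox(mbox_path):
        try:
            sent = parsedate_to_datetime(msg["Date"])
        except (TypeError, ValueError):
            continue  # missing or unparseable Date header
        if sent.date() == day:
            yield msg, sent

def restamp(msg, original):
    """Move Date to now, keeping the original timestamp visible in a custom header."""
    msg["X-Original-Date"] = format_datetime(original)
    del msg["Date"]
    msg["Date"] = format_datetime(datetime.now(timezone.utc))
    return msg
```

From there it would just be a matter of appending the restamped messages to whatever spool or mbox the newsreader watches.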
If it would otherwise be OK for you, there is usually a timeout setting that can be applied before the key is recognised as a hold.
For example, any press under 200ms registers as Esc, anything over registers as Ctrl. The timeout can be adjusted so it works for you and doesn't capture the case where you press the key but then decide you don't want it.
I am not sure people really optimise for speed; they do it for comfort. When I moved to Dvorak my speed dropped by quite a bit and I never regained it, but that was done for comfort, and it is more comfortable, no doubt.
I lost the speed because I wasn't doing as much documentation as I used to with QWERTY. For development, that loss of speed hasn't made any difference, because I type just as fast for development purposes.
But my main point is the people I know have not optimised things for speed, they have done it for comfort. A ten hour trip in a plane takes the same time in business class as it does in economy but one is a nicer way to travel.
I wouldn't run out and tell everyone to change their setup; people are usually driven to it for one reason or another, either just curious or having an issue they need to address. But on the topic of ergonomics, that's exactly what ergo keyboards are about.
I think your reply is still based on a standard keyboard. When you get to more ergonomic keyboards you can stay on the homerow and also reach modifiers without moving.
While most people do not have an ergonomic keyboard, what you say is likely true and most ergonomists would agree, but I think they would also agree that even better is not having to move at all, on an ergonomic keyboard. That comes with cost and time, which is why most still don't use them.