I sometimes write C# in my day job. But I think I don't know much about how to write really fast C#. Do you have any recommendations for learning resources on that topic?
LINQ is fine (but enterprise style never is, yes); it’s a matter of scale and what kind of domain the code is targeted at. C# needs to be approached a little like C++ and Rust in this regard. Having standard performance optimization knowledge helps greatly.
I can also recommend reading all the performance-improvement blog posts by Stephen Toub, as well as learning to read disassembly at a basic level; .NET offers a few convenient tools for getting at it.
Thank you. I once read a bit about Span<T>, but some of this reference stuff is very new to me. Interesting, definitely. C# really is a big language nowadays...
Spans are just a slice type, but one that (usually) any type backed by contiguous memory can be coerced to. I’m sure you’re already using them somewhere without realizing it. Their main use case in regular code is zero-cost slicing, e.g. text.AsSpan(2..8).
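For illustration, a minimal sketch of that zero-cost slicing (made-up strings, nothing from this thread):

    string text = "performance";

    // Substring allocates a new string on the heap:
    string sub = text.Substring(2, 6);            // "rforma"

    // AsSpan just creates a view over the original string's memory;
    // no allocation, and the range syntax mirrors the slice above:
    ReadOnlySpan<char> view = text.AsSpan(2..8);  // same six chars, zero-cost

    // Many BCL APIs accept spans directly, so slices never need to materialize:
    int n = int.Parse("12345".AsSpan(1..4));      // parses "234"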
It's a heavily gamed benchmark, but TechEmpower Fortunes is pretty good at revealing the max throughput of a language runtime for "specially tuned" code (instead of idiomatic code).
I judge more idiomatic / typical code complexity by the length of stack traces in production web app crashes. Enterprise Java apps can produce monstrous traces that are tens of pages long.
ASP.NET Core 9 is a bit worse in this respect than ASP.NET Web Forms used to be, because of the increased flexibility and async capability, but it's still nowhere near as bad as a typical Java app.
What matters in practical scenarios is that ASP.NET Core is significantly faster than Spring Boot. If you have a team willing to use ActiveJ or Vert.x, you are just as likely to have a team willing to customize their C# implementation to produce numbers just as good at web application tasks and much better at something lower level.

There are also issues with TechEmpower that make it highly sensitive to the specific HW/kernel/libraries combination, in ways that alter the rankings significantly. The .NET team hosts a farm to do their own TechEmpower runs, and the results just keep regressing with each new version of the Linux kernel (for all entries), despite CPU% going down and throughput improving in separate, more isolated ASP.NET Core evaluations.

Mind you, the architecture of ASP.NET Core + Kestrel, in my opinion, leaves some performance on the table, and I think TechEmpower is a decent demonstration of where you can expect average framework performance to sit once you start looking at the specific popular options most teams use.
No, records are a reduction in boilerplate for regular classes (the result also happens to be read-only — not deeply immutable, mind you). Value types are in the works:
Hmm, looking at that, it seems like being a struct type is a non-goal; they seem to explicitly call out C# value types as a different thing...
Smaller objects from dropping identity are nice, but it really doesn't seem like this gives you the more explicit memory layout, lifecycle, C interop, etc. that C# has with its structs. Maybe I'm missing something.
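To illustrate the earlier point that records are (shallowly) read-only boilerplate reduction for classes rather than value types, a small C# sketch (invented types):

    var p = new Person("Ada", new List<string> { "math" });

    // Positional record properties are init-only, so reassignment is a
    // compile error:
    // p.Name = "Grace";         // error CS8852

    // ...but the read-only-ness is shallow: referenced objects stay mutable.
    p.Tags.Add("computing");     // compiles and runs fine

    // A record is a reference type; the compiler just generates the
    // constructor, value-based equality, ToString(), with-support, etc.
    public record Person(string Name, List<string> Tags);

    // Structs (including record structs) are the separate value-type concept,
    // with inline memory layout and no object identity:
    public readonly record struct Point(double X, double Y);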
Trying to write it as if it were a different language, or, for whatever reason, copying the worst style a team could come up with, does happen and must be avoided, but that's user error, not a language issue. Also, the tooling, especially the CLI, is excellent and on par with what you find in Rust, far ahead of Java and C++.
If you link an example snippet of the type of code that gave you pause, I’m sure there is a better and more idiomatic way to write it.
A few important ones:
- Avoid memory allocations as much as you can. That's the primary thing. For example, case-insensitive string comparisons using "a.ToUpper() == b.ToUpper()" in a tight loop are a performance disaster, when "string.Equals(a, b, StringComparison.CurrentCultureIgnoreCase)" is readily available (there's a short sketch of these string points at the end of this comment).
- Do not use string concatenation (which allocates); prefer StringBuilder instead,
- Generally, remember that any string operation (such as extracting a substring) means allocating a new string. Instead, use methods that return a Span over the original string; in the case of mystr.Substring(4, 6) it can be mystr.AsSpan(4, 6),
- Beware of certain combinations of LINQ methods: for example, "collection.Where(condition).First()" has historically been faster than "collection.First(condition)", etc.
Apart from that (which concerns only strings, as they're a major source of performance issues), all the generic best practices applicable to any language should be followed.
There are plenty of resources on the net; just search for them.
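To make the string points above concrete, a short illustrative sketch (toy data):

    using System.Text;

    string a = "Hello", b = "HELLO";

    // Case-insensitive comparison without allocating temporary strings:
    bool same = string.Equals(a, b, StringComparison.CurrentCultureIgnoreCase);

    // Building a string in a loop: StringBuilder instead of repeated "+":
    var sb = new StringBuilder();
    for (int i = 0; i < 100; i++)
        sb.Append(i).Append(',');
    string joined = sb.ToString();                 // one final allocation

    // Slicing without allocating a new string:
    string mystr = "performance matters";
    ReadOnlySpan<char> slice = mystr.AsSpan(4, 6); // a view, not a copy

    // LINQ caveat from above: measure on your runtime version, since the
    // relative speed of Where(cond).First() vs First(cond) has changed
    // across .NET releases.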
Anecdotal: I recently found a little trick that works for me to overcome the horrors of the blank page: I turn my phone (with my preferred note-taking app open) to landscape mode. The keyboard gets wider, making it nicer to type, and on my medium-sized phone it covers enough of the UI that I don't actually see what I type into the textbox until I close the keyboard again. So I just happily type away and hit save at the end.
Nice. I also have an implementation in Rust (no public repository, private note-taking app).
One low-hanging fruit (IMO) for improving on base SM-2 is to pick the initial ease more smartly. I just took the average ease of "similar" mature items. Since, for my use case, spaced repetition items are embedded in notes, "similar" items meant items in the same note, or in notes that were tagged similarly.
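Roughly, that heuristic could look like this (an illustrative C# sketch, not the actual Rust implementation; all names are invented):

    using System.Collections.Generic;
    using System.Linq;

    // Invented data model: in the real app, items are embedded in notes.
    record Item(int NoteId, HashSet<string> Tags, double Ease, bool IsMature);

    static class EaseHeuristic
    {
        const double DefaultEase = 2.5;  // SM-2's standard starting ease

        public static double PickInitialEase(Item newItem, IEnumerable<Item> allItems)
        {
            // "Similar" = same note, or sharing a tag; only mature items count,
            // since their ease has had time to stabilize.
            var similar = allItems
                .Where(i => i.IsMature)
                .Where(i => i.NoteId == newItem.NoteId || i.Tags.Overlaps(newItem.Tags))
                .ToList();

            return similar.Count > 0 ? similar.Average(i => i.Ease) : DefaultEase;
        }
    }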
These days I often wonder if I should just switch to FSRS [1], which Anki also switched to. It delivers better results. However, I am hesitating, since I understand SM-2 and its code is easy to read, while FSRS is complex and feels kinda black-boxy, which wouldn't feel right to me.
A quick note about the implementation above: I wonder if having that many answer options is worth it. It probably increases the cognitive effort needed for grading, and I wonder whether the added precision in some cases is worth that. But who knows?
While being more opaque & difficult to self-correct for. How much more work are we talking about? A theoretical couple of minutes in a year? Not worth it.
Using the scheduler estimates from the FSRS simulator [1], with desired retention held equal at 85%, I saw approximately a 20-30% reduction in workload upon switching from SM-2 to FSRS. Even disregarding the "internal" improvements, the number of parameters that require modification (and that present risk to performant scheduling) is heavily reduced: you only set desired retention explicitly (a benefit in and of itself), plus minor decisions (e.g. inclusion of suspended cards). Interpretability really is far less of an issue than efficiency, and frankly the achievements of the team behind FSRS (including their decision to make it publicly available) should be lauded.
Say you have concepts/items/cards A, B and C, with
A -> B -> C (C encompasses B, B encompasses A, keeping the notation from the article).
As I understand it, the article advocates for showing C first; then you can assume that you also know B and A to at least some extent, and save yourself the repetitions for those.
Intuitively, I would have guessed the opposite approach to be the best: Show A first, suspend B until A is learned (by some measure), then show B, etc.
That means no repetitions to skip, but you also get fewer failures (and thus additional repetitions) of the kind that occur as follows: you are shown C, but don't know B anymore, and thus cannot answer and have to repeat C.
If you are shown C before B, you kinda make C less atomic (you might have to actively recall both B and C to answer it); showing B before C makes C more atomic, as you will have B more mentally present/internalized and can focus on what C adds to B.
1. First want to clarify that the learner is first introduced to the topics through mastery learning (i.e., not given a topic until they've seen and mastered the prereqs). So, they would explicitly learn A before learning B, and explicitly learn B before learning C. It's only in the review phase when we do all this stuff with "knocking out" repetitions implicitly.
2. When you say "then you can assume that you also know B and A to at least some part," I want to emphasize that if C encompasses B and B encompasses A in the sense of a full encompassing that would account for a full repetition, then doing C fully exercises B and A as component skills. Not just exercises them "to some part." For instance, topic C might be solving equations of the form "ax+b=cx+d," topic B might be solving equations "ax+b=c," and topic A might be solving equations "ax=b."
3. This scenario should never happen: "you are shown C, but don't know B anymore, and thus cannot answer and have to repeat C." There are both theoretical and practical safeguards.
3a-- Theoretical: if you are at risk of forgetting B in the near future, then you'll have a repetition due on B right now, which means you're going to review it right now (by "knocking it out" with some more advanced topic if possible, but if that's not possible, we're going to give you an explicit review of B itself). In general, if a repetition is due, we're not going to wait for an "implicit knock-out" opportunity to open up and let you forget it while we wait. We'll just say "okay, guess we can't knock this one out implicitly, so we'll give it to you explicitly." (There's a sketch of this decision rule after point 3b.)
3b-- Practical: suppose that for whatever reason, the review timing is a little miscalibrated and a student ends up having forgotten more of B than we'd like when they're shown C. Even then, they haven't forgotten B completely, and they can refresh on B pretty easily. Often, that refresher is within C itself: for instance, if you're learning to solve equations of the form "ax+b=cx+d," then the explanation is going to include a thorough reminder of how to solve "ax+b=c." And even in other cases where that reminder might not be as thorough, if you're too fuzzy on B to follow the explanation in C, then you can just refer back to the content where you learned B and freshen up: "Huh, that thing in C is familiar but it involves B and I forgot how you do some part of B... okay, look back at B's lesson... ah yeah, that's right, that's how you do it. Okay, back to C." And then the act of solving problems in C solidifies your refreshed memory on B.
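To make 3a concrete, the decision rule is roughly the following (a deliberately simplified C# sketch, not our actual implementation; the interface is invented):

    using System.Collections.Generic;

    record Topic(string Name);

    // Invented interface capturing just the behavior described in 3a.
    interface IScheduler
    {
        IEnumerable<Topic> TopicsDueToday();
        Topic? FindAssignableEncompassingTopic(Topic t); // null if no knock-out opportunity
        void Assign(Topic t);
    }

    static class DailyReview
    {
        public static void Run(IScheduler scheduler)
        {
            foreach (var topic in scheduler.TopicsDueToday())
            {
                // Prefer an implicit knock-out via a more advanced topic...
                var advanced = scheduler.FindAssignableEncompassingTopic(topic);

                // ...but never wait for one: if none is available right now,
                // the topic gets an explicit review instead.
                scheduler.Assign(advanced ?? topic);
            }
        }
    }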
Anyway, I think I've clarified all your questions? But please do let me know if you have any follow-up questions or I've misinterpreted anything about what you're asking. Happy to discuss further.
I guess math is uniquely suited for this kind of strategy, but would you say it translates to learning concepts in other domains too?
I was thinking about whether something like "what is X?" -> "What field is X used in?", which seems to form a hierarchy for me, would benefit from this technique. Personally, I found that for something like the preceding example, I could answer the second question without thinking about what X is at all, just by rote memorization of the wording. That happened to me quite a lot when I was using Anki. And actually, I guess this is even acceptable in some way, since the question is not about activating "what X is", but "what X is used in".

What I am trying to express: I feel like I would not necessarily activate a parent concept by answering a child concept, and I think that might be true for a lot of questions outside math problems, although they form a hierarchy. So I am wondering what you think about the general applicability of this technique...
Please don't take all of this questioning the wrong way; I think you are doing pretty cool stuff, and I am grateful for everyone trying to push the boundaries of current SRS approaches :-)!
Yeah, you're right that the power of this strategy comes from leveraging the hierarchical / highly-encompassed nature of the structure of mathematical knowledge. If you have a knowledge domain that lacks a serious density of encompassings, there's just a hard limit to how much review you can "knock out" implicitly.
> I feel like I would not necessarily activate a parent concept by answering a child concept, and I think that might be true for a lot of questions outside math problems, although they form a hierarchy.
This is where it's really important to distinguish between "prerequisite" vs "encompassing." Admittedly I probably should have explained this better in the article, but you are right, prerequisites are not necessarily activated. If you do FIRe on a prerequisite graph, pretending prerequisites are the same as encompassings, then you're going to get a lot of incorrect repetition credit trickling down.
We actually faced that issue early on, and the solution was that I just had to go through and manually construct an "encompassing graph" by encoding my domain-expert knowledge, which was a ton of work, just like manually constructing the prerequisite graph. You can kind of think of the prerequisite graph as a "forwards" graph, showing what you're ready to learn next, and the encompassing graph as a "backwards" graph, showing you how your work on later topics should trickle back to award credit to earlier topics.
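As a rough illustration of the two graphs (a miniature invented example reusing the A -> B -> C topics from upthread, not our actual data model):

    using System;
    using System.Collections.Generic;

    // "Forwards" prerequisite graph: what must be learned before each topic.
    var prerequisites = new Dictionary<string, string[]>
    {
        ["B"] = new[] { "A" },
        ["C"] = new[] { "B" },
    };

    // "Backwards" encompassing graph: which earlier topics a repetition fully
    // exercises. In practice this is often a strict subset of the prerequisite
    // edges, because not every prerequisite is a true component skill.
    var encompasses = new Dictionary<string, string[]>
    {
        ["B"] = new[] { "A" },
        ["C"] = new[] { "B" },
    };

    // Completing a repetition on a topic trickles credit back along the
    // encompassing edges (ignoring fractional credit for simplicity).
    void TrickleCredit(string topic)
    {
        if (!encompasses.TryGetValue(topic, out var covered)) return;
        foreach (var earlier in covered)
        {
            Console.WriteLine($"credit {earlier} (via {topic})");
            TrickleCredit(earlier);
        }
    }

    TrickleCredit("C");  // credits B, then A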
Manually constructing the encompassing graph was a real pain in the butt and I spent lots of time just looking at topics asking myself "if a student solves problems in the 'post'-requisite topic, does that mean we can be reasonably sure they truly know the prerequisite topic? Like, sure, it makes sense that a student needs to learn the prerequisite beforehand in order for the learning experience to be smooth, but is the prerequisite really a component skill here that we're sure the student is practicing?" Turns out there are many cases where the answer is "no" -- but there are also many cases where the answer is "yes," and there are enough of those cases to make a huge impact on learning efficiency if you leverage them.
I still have to make updates to the encompassing graph every time we roll out a new topic, or tweak an existing topic. Having domain expertise about the knowledge represented in the graph is absolutely vital to pull this off. (In general, our curriculum director manages the prerequisite graph, and I manage the encompassing graph.)
Happy to answer any more questions if you've got any! :)
From an ergonomics perspective, I write less code with Vapor than with the equivalent Rust frameworks. It is slower today, though still plenty fast for my needs, and some speed-ups are coming with Vapor refactors down the pipeline.
I find that I need to spend less time managing shared resources with concurrency, and my code is clearer while being less verbose in general. Features like the trailing closure syntax are much easier to read for me.
I find it closer to how I’d write my Flask servers in terms of the amount of code I need to write myself.
Did anyone here do the eye surgery where they make a cut and slide in a permanent contact lens? My eyes are too bad for lasering, but this method would apparently still work. The doc tried to sell it to me as the better method irrespective of how bad your eyes are, since it is reversible (you can remove the lens).
Yes, I did. The result is perfect as far as I can tell. Not everyone is eligible for that method: you need to have enough space in your eye, so checking for that is the first step. The only serious risk is infection, which is why each eye was done in a different operating room. I was also told that the result is better than LASIK; it is more costly, though.
I had EVO ICL done ~10 months ago. My contact prescription was -7.25 left / -6.75 right with thin corneas so I wasn't a great candidate for LASIK. Weighed costs of ICL vs PRK and opted for the former (mostly recovery time concerns + as you mention ICL can be removed). Dollar cost was ~9.2k USD, so it was significantly higher than other options.
Recovery was very quick. I was 20/40 a few hours after surgery and 20/15 the following day (could have worked at computer, but took the day off). Pretty intense dry eyes initially, but eye drops helped and this improved over next couple months.
10 months later, vision is still great (though I expect my eyes to continue their normal progression). Halo-ing effects at night are stronger than I expected (I believe they're related to pupil size + the hole in the center of the lens), but I've mostly learned to ignore them (I also now prefer not to drive at night). Minor dry eyes, but I might try the NAC supplement as suggested elsewhere in the discussion.
I'd do it again in a heartbeat (hated dealing with contacts on camping trips) but wish I'd been more informed of halo-ing risks.
Both my wife and I did. She went first, and had bifocal lenses inserted. I waited a couple of years and got trifocals instead. I think I've had mine for five years or more by now; I don't remember.
My wife went from "can't read the big E on the eye chart" to only using readers for very tiny print. She had different prescriptions put in each eye, for different distances. It took her brain a little while to sort all that out, but she doesn't notice it anymore.
We both see concentric halos around light points (a side effect of the bifocal and trifocal lenses, I think), but eventually your brain edits those out and you don't notice them unless you try.
I'm in the U.S. and insurance did not cover them. I think we each paid around $7K.
All in all, they were a good buy and I would recommend the procedure. They may not be cost-effective, but the quality-of-life change is amazing.