What am I paying for if it's running on my system?
Not asking to deride, but could you elaborate on what value this adds over simply installing Python on my system?
What does it matter which toolchain they use?
I can understand if the difference is that it's notably slower, or that they dropped some features because they couldn't get it done in Rust. But purely as an end user, why would it matter whether it's written in Rust or C++?
This!
I prefer tags over folders for this reason. All notes go into a single folder, no subdirectories. Because a note can have multiple classifications, a tree structure is not a natural way to organize them. Add tags; if you have a note-taking program that will show you all existing tags, it makes this easier.
I love tags until I actually use them. I always wind up using them inconsistently, or not at all for a specific file, and then bam, I can't find anything at all.
The benefit of file structures is that things have to have a place, you can't not put something in a folder, so for car insurance, it might be in "insurance" or "cars" but it's definitely one or the other. With tags, it could be "insurance", "finance", "cars", "automobiles", "vehicles", "veihcles", etc.
Any tips on how to funnel some strictness into tags so that they're actually usable?
Sometimes autocomplete works for me, so I avoid the "auto" vs. "automobile" split, but it falls apart as soon as I realize I have "autombile" suggested, and then I wonder how to go about re-tagging the affected files.
TS can be slow in some situations.
https://github.com/tree-sitter/tree-sitter-julia/raw/refs/he...
Open this file with and without tree-sitter: Neovim will slow to a crawl with TS on, but the traditional regex highlighter can handle it fine. (A file like the one linked above is typically never meant to be opened, since it's machine-generated; this is just to show that TS can be slow on large files.)
TS is faster in other situations.
For example, with TS highlighting enabled, entering "(" in the buffer is definitely faster in a Julia file. (You can test this by holding down "(" in a .jl file and seeing the difference between TS enabled and regex-only highlighting.)
I might be misunderstanding, but it seems easy:
If you want a vector orthogonal to A, generate a random vector B that is not collinear with A and take AxB (the cross product). AxB is orthogonal to A.
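A minimal C++ sketch of that construction (my own illustration; instead of a literal random B, it deterministically picks the coordinate axis least aligned with A, a common stand-in that guarantees B is not collinear with A):

    #include <array>
    #include <cmath>

    using Vec3 = std::array<double, 3>;

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a[1]*b[2] - a[2]*b[1],
                 a[2]*b[0] - a[0]*b[2],
                 a[0]*b[1] - a[1]*b[0] };
    }

    // Returns a vector orthogonal to a non-zero vector a.
    // B is the basis axis along a's smallest |component|, so
    // A and B can never be collinear and AxB is never zero.
    Vec3 anyOrthogonal(const Vec3& a) {
        double ax = std::fabs(a[0]), ay = std::fabs(a[1]), az = std::fabs(a[2]);
        Vec3 b = (ax <= ay && ax <= az) ? Vec3{1, 0, 0}
               : (ay <= az)             ? Vec3{0, 1, 0}
                                        : Vec3{0, 0, 1};
        return cross(a, b);
    }

Note the branches: as the replies below point out, any such construction has to switch cases somewhere, and that is exactly where the discontinuity lives.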
Right: and that's not a single, continuous function that works for all inputs—specifically, it fails on the input B itself. BxB = 0.
Any solution will have a discontinuity in its output vector angles. I don't know where this problem comes up in computer graphics, but you probably want to avoid rendering objects in the vicinity of a discontinuity: you'd get some kind of flickering artifact when you cross it, with small ɛ-displacements being amplified into something much larger.
This does not work on all non-zero vectors, hence your "not collinear" comment. If the vectors are collinear, the cross product is zero, and you have an extra degree of freedom when choosing your "up" vector.
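To make the impossibility above precise: this is the hairy ball theorem. A continuous, everywhere non-zero choice of orthogonal vector would be a continuous map v : S^2 -> R^3 with v(A)·A = 0 and v(A) ≠ 0 for every unit vector A, i.e. a non-vanishing continuous tangent vector field on the sphere, and the theorem says no such field exists. So any formula must either output zero or jump somewhere.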
> There's no "pass-by-reference", never_has_been.jpg. With pass-by-reference what you're actually passing is a value, of a pointer. So you're "duplicating" a value: of the pointer.
dyingkneepad is talking about how the CPU actually works (x86-64 and the cache hierarchy). It doesn't really matter what constructs a particular programming language does or doesn't have -- we can reason about performance in terms of what x86-64 instructions the CPU ends up executing.
Suppose we want to compute the sum of n scalars, represented as n 32-bit floats, say, and we're weighing up whether to store them in a linked list or an array. The "introduction to algorithms" analysis might be that both approaches require O(n) storage and O(n) running time, so they're indistinguishable, up to some constants that get absorbed into the big-O notation.
The basic big-O analysis is true as a first cut for a theoretical simplification of a machine. But with our performance hats on, we care about the constants being small so our code actually runs fast on a real machine -- which means understanding how a real machine works in a little more detail, in particular having some crude mental model of the memory hierarchy.
On real hardware, if we use a linked list to store our data then each node in the list may end up in an unpredictable region of memory. The CPU will need to load these fragmented chunks of memory into L1 cache memory. Depending on the whims of the memory allocator, each time we read a node into cache we may get a single useful scalar value surrounded by a bunch of other junk -- our precious expensive L1 cache may be 90% filled with junk and only contain 10% or less useful data. (this is the big downside of "pointer-walking" AKA "pointer-chasing"). If our calculation is bottlenecked by memory bandwidth, we're wasting 90% of our machine's memory bandwidth to read useless data because we chose to store our data in an unpredictable, fragmented arrangement throughout memory.
In contrast, if we store our data in an array in a contiguous block of memory that we linearly traverse, each time we read a chunk of values into L1 cache, all the neighboring values are ones we're going to need to read next as we compute our sum. So we're filling our precious tiny L1 cache with 100% useful data and 0% junk. Because we load multiple useful data values each time we load a chunk of our array into cache, we reduce the number of times we need to load into cache, and also unblock more useful compute-bound work (adding scalar values) that the cpu can keep busy with before we need to load main memory into cache again.
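To make that concrete, here's a toy C++ sketch of the two layouts (illustrative only; the names are mine, and the cache-line arithmetic assumes typical 64-byte lines):

    #include <vector>

    // Linked-list layout: the allocator can scatter nodes anywhere,
    // so each step of the traversal may touch a fresh cache line that
    // carries one useful float plus a next pointer and unrelated junk.
    struct Node {
        float value;
        Node* next;
    };

    float sumList(const Node* head) {
        float total = 0.0f;
        for (const Node* n = head; n != nullptr; n = n->next)
            total += n->value;  // each hop is a potential cache miss
        return total;
    }

    // Array layout: one contiguous block. A 64-byte cache line holds
    // 16 consecutive floats, all of which this loop is about to use,
    // and the linear access pattern lets the hardware prefetcher help.
    float sumArray(const std::vector<float>& xs) {
        float total = 0.0f;
        for (float x : xs)
            total += x;  // sequential, cache-friendly reads
        return total;
    }

Both loops are O(n); the difference described above lives entirely in the constants, i.e. in how many of those loads actually hit cache.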