Hey, thanks for sharing your thoughts! I appreciate you putting this out there.
One bit of hopefully constructive feedback: your previous post ran about 60 printed pages, this one's closer to 40 (just using that as a rough proxy for time-to-read). I’ve only skimmed both for now, but I found it hard to pin down the main purpose or takeaway. An abstract-style opening and a clear conclusion would go a long way, like in academic papers. I think that makes dense material way more digestible.
I don't think I can compress it further. Generally speaking, I'm counting on other people carrying useful things out of my posts and finding more concise formats for those.
From my perspective, the article seems primarily focused on promoting React Server Components, so you could mention that at the very top. If that’s not the case, then a clearer outline of the article’s objectives would help. In technical writing, it’s generally better to make your argument explicit rather than leave it open to reader interpretation or including a "twist" at the end.
An outline doesn't have to be a compressed version; I think of it more like a map of the content, which tells me what to expect as I make progress through the article. You might consider using a structure like SCQA [1] or similar.
I appreciate the suggestions, but that’s just not how I like to write. There are plenty of people who do, so you might find their writing more enjoyable. I’m hoping some of them will pick up something useful from my writing too, which would help it reach a wider audience.
I looked into eBPF-based observability tools for k8s some time ago and found at least four tools that look incredibly similar: Pixie, Parca, Coroot, and Odigos. There are probably others I missed too. Do you have any thoughts about this?
From a user perspective, having several tools that overlap heavily but differ in subtle ways makes evaluation and adoption harder. It feels like if any two of these projects consolidated, they’d have a good shot at becoming the "default" eBPF observability solution.
From a user’s perspective, it doesn’t really matter how the data is collected. What actually matters is whether the tool helps you answer questions about your system and figure out what’s going wrong.
At Coroot, we use eBPF for a couple of reasons:
1. To get the data we actually need, not just whatever happens to be exposed by the app or OS.
2. To make integration fast and automatic for users.
And let’s be real, if all the right data were already available, we wouldn’t be writing all this complicated eBPF code in the first place :)
Speaking for Odigos (disclosure: I’m the creator), here are two significant differences between us and the other mentioned players:
- Accurate distributed traces with eBPF, including context propagation. Without going into other tools, I highly recommend trying to generate distributed traces using any other eBPF solution and observing the results firsthand.
- We are agent-only. Our data is produced in OpenTelemetry format, allowing you to integrate it seamlessly with your existing observability system.
For those who have not implemented toposort or don't remember it: 1) only directed graphs without cycles (DAGs) can be topologically sorted, 2) there can be more than one valid topological order, and 3) the reverse post-order of a depth-first traversal started from every unvisited node yields a topological order.
In JavaScript:
function toposort(graph = { a: ['b', 'c'], b: ['d'] }) {
  const order = [];
  const visited = new Map();

  function dfs(node) {
    const status = visited.get(node);
    if (status === 1) throw new Error("Cycle found.");
    if (status === 2) return;
    visited.set(node, 1); // In progress.
    const adjacent = graph[node] ?? [];
    for (const neighbor of adjacent) dfs(neighbor);
    visited.set(node, 2); // Done.
    order.unshift(node); // Reverse post order.
  }

  for (const node in graph) {
    if (!visited.has(node)) dfs(node);
  }
  return order;
}
I spent way too long figuring this one out, so this is what I got:
An improvement on [1], which I vaguely remember using with pen and paper to find minimums of differentiable functions. The original algorithm runs "on a loop" (iteratively) and uses the first- and second-order derivatives of a function (f', f''). From the article:
> Newton did it for degree 2. He did that because nobody knew how to minimize higher-order polynomials
The improved version looks a lot more complex; it seems to trade simplicity for faster convergence to the minimum when implemented as a program.
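For reference, the classic iteration is short. Here's a sketch of the original Newton step for minimization (x ← x − f'(x)/f''(x)); the function and its derivatives below are my own toy example, not from the article:

```javascript
// Newton's method for minimizing a twice-differentiable function.
// Takes the first and second derivatives, not the function itself.
function newtonMinimize(df, d2f, x0, { tol = 1e-10, maxIter = 100 } = {}) {
  let x = x0;
  for (let i = 0; i < maxIter; i++) {
    const step = df(x) / d2f(x);
    if (Math.abs(step) < tol) break; // Converged: derivative is ~0.
    x -= step;
  }
  return x;
}

// Toy example: f(x) = x^4 - 3x^3 + 2 has a local minimum at x = 9/4.
// f'(x) = 4x^3 - 9x^2, f''(x) = 12x^2 - 18x.
const xMin = newtonMinimize(
  x => 4 * x ** 3 - 9 * x ** 2,
  x => 12 * x ** 2 - 18 * x,
  3 // Starting guess.
);
```

Note it converges to wherever f' is zero near the starting point, so a bad guess can land on a maximum or saddle instead; that's part of what the fancier variants address.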
Trivia: a "single loop" of the Newthon method is famously used in Quake's Fast InvSqrt() implementation [2].
I don't think they're quite synonyms; in math they denote two different things. The reciprocal of f(x) is 1/f(x), while the inverse of f is the function g such that g(f(x)) = x.
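A throwaway example to make the distinction concrete (my own, with f(x) = 2x):

```javascript
const f = x => 2 * x;
const reciprocal = x => 1 / f(x); // Multiplicative inverse of the *value*.
const inverse = y => y / 2;       // f^{-1}: undoes f, recovering the input.

console.log(f(4));          // 8
console.log(reciprocal(4)); // 0.125 (i.e. 1/8)
console.log(inverse(8));    // 4 (back to the original input)
```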
...and it's a hunk of steel that must weigh a few tons and is built to withstand an extinction-level disaster. I mean, even for the '30s, it seemed a little bit excessive. I understand it had to be somewhat heavy for stability and all, but I suspect it could have been made more lightweight.
> I suspect it could have been made more lightweight.
Perfection is the enemy of done.
Those are the kind of improvements that happen when many items are made. I suspect Disney only made a few, and thus what was more important was creating a working multiplane camera than lowering its weight.
I also suspect that the weight added a lot of stability which prevented shaking between frames.
I wonder about the angle of the article, which starts with "B-Trees stand the test of time" and ends with "everything else has seriously diminishing returns".
I never ask myself why we still use hashmaps or heaps or whatnot, so it makes me wonder whether this is really an article about why Cedar does not use something else (LSM-trees being the elephant in the room?).
I found myself putting my finger directly on the paddle many times. Instead of the message "Push here to play", you could create a more obvious UI element to press (something like the grip part of a slider). Maybe even make the grip glow when you push it, for fun :-).