I'm familiar with zod.infer but I'm not sure how to use it to produce an interface that would be compatible with RpcStub and RpcTarget, like MyApi in the example in your post:
// Shared interface declaration:
interface MyApi {
  hello(name: string): Promise<string>;
}

// On the client:
let api: RpcStub<MyApi> = newWebSocketRpcSession("wss://example.com/api");

// On the server:
class MyApiServer extends RpcTarget implements MyApi {
  hello(name) {
    return `Hello, ${name}!`
  }
}
But my expectation is you'd use Zod to define all your parameter types. Then you'd define your RpcTarget in plain TypeScript, but for the parameters on each method, reference the Zod-derived types.
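Roughly like this, as a minimal sketch (I'm reusing the RpcStub/RpcTarget/newWebSocketRpcSession names from the post; the HelloParams schema is just a made-up example, and the import for the RPC library itself is elided since I'm not sure of the package name):

import { z } from "zod";
// RpcStub, RpcTarget and newWebSocketRpcSession come from the RPC library in the post.

// Define the parameter shape once with Zod.
const HelloParams = z.object({ name: z.string() });
type HelloParams = z.infer<typeof HelloParams>;

// Shared interface, plain TypeScript, but referencing the Zod-derived type:
interface MyApi {
  hello(params: HelloParams): Promise<string>;
}

// On the server, validate at the boundary, then work with the inferred type:
class MyApiServer extends RpcTarget implements MyApi {
  async hello(params: HelloParams): Promise<string> {
    const { name } = HelloParams.parse(params); // runtime validation
    return `Hello, ${name}!`;
  }
}

// On the client, the stub is typed against the same shared interface:
let api: RpcStub<MyApi> = newWebSocketRpcSession("wss://example.com/api");

That way the runtime validation and the static parameter types come from the same schema, and the interface stays compatible with both the stub and the target.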
I learned about Luau via my 13-year-old, who is looking into Roblox Studio. That's how I ended up visiting luau.org, and I'm quite impressed by Roblox's engineering on this.
Arseny Kapoulkine is an amazing engineer. Highly recommend following his blog or social media. Besides working on Luau and the rendering engine at Roblox, he's also responsible for meshoptimizer, which you've most definitely heard of if you're in graphics, and volk, which now comes packaged with the Vulkan SDK.
I’ve been thinking about that in the context of hiring. Some companies require a cognitive test, which is often some kind of Raven’s Advanced Progressive Matrices. Unfortunately, most companies giving these tests apply a time limit, thus measuring speed and neglecting other dimensions. Some people think fast, others think deep. We need both.
In college I interviewed at a commodity trading firm, and they made us play a group trading game. I bombed because I wanted to make sure every trade I made was profitable/certain; the people who made it to the next round moved quickly, even if they made somewhat questionable trades. That's when I realised I was not meant to be a commodity trader, where decisions need to be made in a matter of minutes. It was just so different from the business case interviews I was also doing at the same time, where you build a thesis, stress test it, and look for areas you missed.
I'm also a slow thinker, but I'm doing well on these types of tests, because they are very contained and small, plus I have partly learned to sort of relax my brain and let it work more intuitively.
For me at least, the slow thinking comes in part from being overly reflective. A thought can pull me into a seemingly zoned-out state where I drill down into it or try to find related connections. I also have trouble going from thought and abstraction to language, because it's always hard to find the right expression that _really_ fits.
So the larger and more disconnected the space one could explore, the slower I naturally get before expressing myself.
But these matrices and puzzles are very contained and observable in their entirety. Even though solving these isn't my main forte, they also don't necessarily trigger my slowness to the same degree as other tasks.
If someone uses them in a job interview, they would likely make me a bit nervous, because it feels bad to be stuck or unsure when you're solving a puzzle and that can be draining and distracting for long enough to matter if the test is timed.
But I think if they are used with care, the minimum required score is reasonable, and they are not the only test, then they might be fine or even useful.
What is the reason for saving the end-to-end encrypted backup files on Signal backup servers instead of iCloud or Google backup service, as most of us are already paying for this storage?
The "Signal should exist" part of me is happy to donate $2/mo to help them keep the lights on, but I really did expect that to be an option alongside Drive/Dropbox/et al, not the only option.
Besides the obvious (they want/need the revenue from selling their own solution), many people using Signal do so in an effort to move away from Big Tech and/or on devices with custom ROMs.
I so much agree with this. When I started using Zig, memories from writing Pascal started coming back, and I realized I was fine writing Pascal with manual memory management, unlike C and C++ where segfaults were a more frequent problem.
I was lucky to have gone through a few BASICs (including compiled ones), Z80 and 68000 Assembly, and Turbo Pascal (3.0 - 6.0) before learning C with Turbo C 2.0. I also already had a few books on C, which made me aware that C wasn't the be-all and end-all, contrary to the urban myth that many who learned C after C89 kind of propagated a few decades later.
Not only was I aware that there were other ways to do systems programming, I was also educated that there were safer ways to achieve the same goals.
My transition to C++ came thanks to Borland having the TP/C++ symbiosis, which later became Delphi/C++.
If I remember correctly, the biggest difference between Pascal and C was that Pascal, unlike C, offered spatial memory safety (thanks to strings and arrays being represented as "fat" pointers that include the length), but neither offered temporal memory safety (heap-use-after-free and stack-use-after-return)?
Much more than that. First, let's take into account that no one really used plain ISO Pascal on the PC; it was always dialects derived from UCSD Pascal, or eventually Object Pascal, created by Apple and adopted by Borland, TMT and others.
Strings were indeed fat pointers, and as for the usual criticism of being limited to 255 characters, you could eventually use fat pointers to unbounded arrays, with functions to get the lower and upper bounds.
Since Pascal supports pass by reference (var parameters), there was no need to mess with pointers for in/out or out parameters, thus one less scenario where pointers could go wrong.
Memory allocation had the special functions New() and Dispose(), which are type aware, so there was no need for sizeof() calculations that might go wrong. And there was Mark()/Release() for arena-style programming as well.
Numeric ranges, with bounds checking, were another one.
Enumerations without implicit conversions.
Bounds checking, checked arithmetic (this one depended on the build mode), IO checking, and a few other checked modes; naturally, they could be disabled if needed.
Mapped arrays, so you could get at a memory address directly while still having bounds checking.
Unbounded arrays provided a way to do slices, even if a bit verbose.
Pointers were typed, with no implicit decay from arrays, although you could do exactly the same clever tricks as in C by calling specific functions like Adr, Succ, Pred, ...
Record variants required a specific tag, so while they weren't tracked, it was easier to spot if "unions" were being used properly.
Units provided much better encapsulation than header files + implementation files, with selective type visibility.
Yes, use-after-free, doing the wrong thing with mapped memory, or inline Assembly were the main ways something could go wrong, but much less often than with plain C.
Which is why I keep making these remarks about Zig: you will notice quite a few similarities with 1980s safety from the likes of Pascal and Modula-2 (the latter even better than those Pascal dialects), although Turbo Pascal eventually grew most of the Modula-2 features, plus plenty of C++-inspired ones as well.
Hence, with the prices Ada compilers were being sold at, the "cheaper" Pascal dialects and Modula-2 were yet another reason Ada did not take off back then.
Ironically GNU Pascal is no longer relevant, while GCC now supports Ada and Modula-2 as official frontends.
I forgot Pascal var parameters, which probably inspired ref parameters in C# and inout in Swift. Makes me wonder if a language like Zig would benefit from something similar, reducing the need for pointers even more.
I also have some vague memory that Turbo Pascal, unlike my Watcom C++ compiler, had a default segfault handler printing stack traces, which was a nice quality-of-life improvement.
Interesting that you mentioned header files, as that was indeed the thing I missed the least when I moved from C++ to Java.
I'm wondering how Oxide Computer racks can look so clean (for example in this picture: https://bsky.app/profile/oxide.computer/post/3lvdw7mdwms2i) compared to Dropbox's racks, and if there is a technical advantage beyond the aesthetic advantage.
> Anyone accustomed to a datacenter will note the missing mass of cold-aisle cabling that one typically sees at the front of a rack. But moving to the back of the rack reveals only a DC busbar and a tight, cabled backplane. This represents one of the bigger bets we made: we blindmated networking. This was mechanically tricky, but the payoff is huge: capacity can be added to the Oxide cloud computer simply by snapping in a new compute sled — nothing to be cabled whatsoever! This is a domain in which we have leapfrogged the hyperscalers, who (for their own legacy reasons) don’t do it this way. This can be jarring to veteran technologists. As one exclaimed upon seeing the rack last week, "I am both surprised and delighted!" (Or rather: a very profane variant of that sentiment.)
Yes, Zig’s syntax is a bit noisier, but it enables things such as using if/for/while/switch as expressions, or using anonymous struct literals to emulate named and default parameters in functions.