How this was discovered is incredible. An amateur satellite was launched, its operators collected data, and at some point they had the idea to plot it. That revealed a big stain over the South Atlantic.
I'm not into Zig, but for some reason I monitor the new issues that pop up in the bug trackers of "new" / "raisins" languages. Zig has clearly reached the next level, say if you compare the issues from two years ago (lots of comptime/type system things) vs. now.
I cannot reply on the blog, but to answer the author about other languages, here is the D version, using a single template:
auto ref T max(T)(auto ref T u, auto ref T v) => u > v ? u : v;

void main()
{
    int a = 1;
    int b = 0;
    int x;

    // Both arguments are lvalues: the call returns by ref, so its
    // address can be taken and compared against &a.
    static assert(__traits(compiles, &max(a, b) == &a),
        "should have selected the lvalue version");

    // Both arguments are rvalues: the call returns by value.
    static assert(__traits(compiles, x = max(1, 0)),
        "should have selected the rvalue version");

    // Mixed lvalue/rvalue arguments also fall back to the value version.
    static assert(__traits(compiles, x = max(a, 0)),
        "should have selected the rvalue version");
}
Bounds checks are usually conditionally compiled. They're more a kind of "contract" you verify during testing; in the end, the software that actually ships checks nothing.
#ifdef CONTRACTS
if (i >= array_length) panic("index out of bounds");
#endif
A hygienic way to handle that is often "assert", which can be a macro or a built-in statement. The main problem with assertions is side effects: the checked expression must be pure...
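A minimal C sketch of that pitfall (function names are mine): with NDEBUG defined, the standard assert macro expands to nothing, so the check costs nothing in release builds, but any side effect buried in the expression vanishes with it.

#include <assert.h>
#include <stddef.h>

/* Release builds define NDEBUG, which turns assert(x) into a no-op,
   so the bounds check disappears from shipped code. */
int get(const int *array, size_t array_length, size_t i)
{
    assert(i < array_length);   /* pure check: safe to compile out */
    return array[i];
}

/* Pitfall: the checked expression must be pure, because it is
   removed entirely when NDEBUG is defined. */
void pitfall(int *counter)
{
    assert(++*counter > 0);     /* BUG: the increment vanishes in release */
}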
What is really mind-blowing is that, if I understood correctly, bots are being used to check the availability of a product. That sounds like such a hacky method, like "seriously, people are doing that in 2025?"
Yeah, their list of recommendations could use another point: expose the public data in a simple, structured way.
I'm working right now on an inventory management system for a clinic which would really benefit from pulling prices and availability from a very specialised online shop. I wish I could just get one large, fully cached dump of all items in JSON/CSV/whatever format. But they're extremely not interested, so I'm scraping the HTML of 50 separate categories instead. They'll get a few daily bot hits and neither of us will be happy about it.
If people are scraping data that you're not selling, they're not going to stop - just make it trivially accessible instead in a way that doesn't waste resources and destroy metrics.
The counterpoint is: 'Why hand your competitors data on a silver platter?'
Sure, you might be willing to build the bot to scrape it... but some competitors won't go to that effort, so it still preserves a bit of information asymmetry and stops some of your competitors from poaching customers or employing various marketing tactics to exploit short-term shortages, pricing changes, etc.
I really don't believe we're in a situation where a company can exploit product availability and pricing data, is pushing enough volume to make it worth it, can process that information effectively, yet cannot hire someone on Fiverr to write a scraper in a few hours.
> 'Why hand your competitors data on a silver platter?'
To lessen the issue from the article and free up server resources for actual customers.
That depends: scrapers are currently annoying and temperamental, and you have to maintain them. Also, the idea of letting some random person from Fiverr write code that will run in your infra with access to your webshop, your ERP, and the open internet isn't usually that palatable to most IT teams.
I wonder if LLM agents will know to go for APIs and structured data, or if they'll keep naively scraping in the future. A lot of traffic could eventually come down to "find me x product online" chats.
Basically any transfer function used as an interpolator can also be used as an "easing" function, e.g. a quadratic Bézier (say, with empirically determined coefficients). A lesser-known one I used to like a lot is the superellipse, although it's very costly.
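A rough C sketch of both, under my own parameterization (the control point and exponent are illustrative, not anyone's canonical constants). The quadratic Bézier has to be inverted from its curve parameter s back to the time axis before it can serve as y = f(t), and the superellipse needs two pow calls per sample, which is where the cost comes from.

#include <math.h>

/* Quadratic Bezier easing from (0,0) to (1,1) through control point
   (cx, cy). Solve x(s) = 2s(1-s)cx + s^2 = t for s, then evaluate y(s). */
double bezier2_ease(double t, double cx, double cy)
{
    double a = 1.0 - 2.0 * cx;              /* x(s) = a*s^2 + 2*cx*s */
    double s = (fabs(a) < 1e-9)
        ? t                                 /* cx = 0.5: x(s) is linear */
        : (sqrt(cx * cx + a * t) - cx) / a; /* positive root of the quadratic */
    return 2.0 * s * (1.0 - s) * cy + s * s;
}

/* Superellipse easing: the arc (1-t)^n + y^n = 1 mapped onto [0,1].
   n = 2 gives a circular ease-out; larger n flattens the knee. */
double superellipse_ease(double t, double n)
{
    return pow(1.0 - pow(1.0 - t, n), 1.0 / n);
}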