First of all, great job! I think inference will become more and more important.
That being said, I have a question regarding ease of use. How difficult is it for someone with a Python/C++ background to get used to Zig and (re)write a model to use with ZML?
Hi, co-author here. Zig is way simpler than C++. Simple enough that in an afternoon I was able to onboard onto the language, rewrite the core of a C++ algorithm, and see speed gains (fastBPE, for reference).
Coming from Python, the hardest part is learning memory management. What helps with ZML is that the model code is mostly metaprogramming, so we can be a bit flexible there.
We have a high-level API that should feel familiar to PyTorch users (I am one myself), and it improves on PyTorch in a few ways.
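For readers who haven't used PyTorch, this is the kind of model-definition style being referred to. The snippet below is plain PyTorch, not ZML code, and the module here (TinyMLP) is just an illustrative example of the pattern that the ZML high-level API is said to mirror:

```python
# Plain PyTorch, shown only as a reference point for what a
# "PyTorch-style" high-level model API looks like. This is NOT ZML code.
import torch
import torch.nn as nn

class TinyMLP(nn.Module):
    def __init__(self, d_in: int, d_hidden: int, d_out: int):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Layers compose declaratively; the framework handles execution details.
        return self.fc2(torch.relu(self.fc1(x)))

model = TinyMLP(784, 128, 10)
logits = model(torch.randn(1, 784))
```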
On a side note, if you are running something for days or weeks you should implement checkpointing anyway. I have no idea if MATLAB allows that for internal operations.
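As a rough illustration of what checkpointing means here, a minimal sketch in plain Python is below. All names and the shape of the saved state are hypothetical, and this says nothing about what MATLAB itself supports:

```python
# Minimal sketch of periodic checkpointing for a long-running job.
# State layout and file names are made up; adapt to whatever your job carries.
import os
import pickle

CKPT_PATH = "job_state.pkl"

def load_checkpoint(default_state):
    # Resume from the last saved state if one exists, otherwise start fresh.
    if os.path.exists(CKPT_PATH):
        with open(CKPT_PATH, "rb") as f:
            return pickle.load(f)
    return default_state

def save_checkpoint(state):
    # Write to a temp file first so a crash mid-write can't corrupt the checkpoint.
    tmp = CKPT_PATH + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT_PATH)

state = load_checkpoint({"step": 0, "accumulator": 0.0})
for step in range(state["step"], 1_000_000):
    state["accumulator"] += step * 1e-6  # stand-in for the real work
    state["step"] = step + 1
    if state["step"] % 10_000 == 0:
        save_checkpoint(state)
```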
These CRUD apps need complex business rules, which require domain expertise, and those rules have to be configurable at the application level per customer, all while trying to keep the app from bloating.
Scaling is not the only challenge engineers face, but somehow it's the one that gets praised the most.
They also need to respond to customer requirements, which IG never needed to do while they had no actual customers. And as soon as the fun was over and IG had actual customers (spoiler alert: advertisers), what a surprise, 3 devs were not enough.
They also need to respond quickly to downtime, because unlike with IG, if one of those CRUD apps goes down in the B2B world, you are often losing customers' actual money, not just ad views.
Sure, if you are a full-on marketer you can say "not seeing an ad for the service" is as bad as "the service does not work".
But anyway, there was never a period where Instagram had a tiny 3-dev team and handled ads at the same time. 3 devs only worked back when there were no customers, no ads, no profits, and no real responsibilities.
To be accurate, IE was not hated by "end users". It was hated by web developers (understandably) reluctant to support non-standard APIs, and by informed people.
Realistically, with the title you suggest nobody would have read the post.
A title is not "the most informative and complete sentence summarizing the article"; it also aims to stimulate curiosity. I understand that we don't want misleading titles, but this obsession with titles is not very helpful. I am participating in this useless conversation, but I couldn't help myself. Now every single HN post has a comment on how the title is wrong...
>A title is not "the most informative and complete sentence summarizing the article"; it also aims to stimulate curiosity.
It shouldn't have to do the latter. That's clickbait. People are either organically curious about what actually happened or they are not. If they are not, they shouldn't be "stimulated" with BS. That "media/advertising" attitude is the cause of many issues with science in society today.
Ehh. We’ve been here long enough to both know that there’s a balance. $10 says the mods won’t change the title to the suggested one, precisely because of this balance.
It’s not about clickbaiting. The whole reason the research is getting funded in the first place is to try to find the link.
Grants are ALWAYS tiered based on biased political factors. That's why they are "grants": someone heavily invested in finding answers is willing to give money away to achieve that end. In this case, the government is heavily invested in ideas that can stimulate the economy.
I don't even know what can be done here. Take money out of academia? Maybe in a post-scarcity world.
The Artifact news reader app on iOS uses this premise to provide one of the truly useful implementations of LLMs I've seen in the wild so far. You can mark a title as clickbait, and the app will use AI to generate a new title, usually one that is more usefully descriptive.
It's not perfect; sometimes the generated title is still totally clickbait, and I've seen a couple of instances of it being completely wrong or hallucinating a detail. But generally it's pretty neat.
The app also uses the same tech to summarize articles into a few bullets, which I don't use as often, but it's also neat.
>It shouldn't have to do the latter. That's clickbait.
Clickbait generally has an implication of misrepresenting the content. Romanticized titles aren't always clickbait, and I wouldn't say this is exactly a sensationalist title, the way others might say "We're on the first road to curing Alzheimer's disease".
>People are either organically curious about what actually happened or they are not.
It's a prisoner's dilemma. It's not necessarily zero-sum, but attention is somewhat finite. And for some sites, the result isn't a more comfy, high-quality community. The site just dies out, and that benefits no one.
Why did you italicize a word? Why speak with any emotion ever? Why not focus exclusively on the optimal method of transferring information from one being to another?
In this case, I believe that using italics to convey emphasis and a bit of emotion is the optimal method of transferring information (here, expressing surprise and questioning someone apparently favoring a practice that is generally correctly viewed as a pure negative).
Came to say the same. Peter was a QuickTime engineer.
A coworker and I came across a stack of these machines in an abandoned lab at Apple's Infinite Loop campus decades ago. We didn't know what they were, although we were sure they were some kind of set-top box that Apple had abandoned. I feel like my coworker knew the code name, but if he did, it escapes me now.