As a disenchanted Spotify user, this sounds like a breath of fresh air. My only hesitation is that I’ve lost touch with collecting music. I used to rip CDs, download music, curate a library, etc., but I’ve lost my collection and my collecting habits since adopting streaming. How do people collect music nowadays? Is there a legit way (one that fairly compensates artists) to do it?
For most APIs that doesn’t deliver any value you can’t get from the API docs, so it’s hard to justify. These days, though, it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.
I think you throw away a useful description of an API by lumping them all under RPC. If you tell me your API is RPC instead of REST, then I'll assume that:
* If the API is available over HTTP, then the only verb used is POST.
* The API is exposed on a single URL and the `method` is encoded in the body of the request (roughly as sketched below).
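For illustration only (the endpoint, method name, and payload here are made up), the contrast looks roughly like this in Python:

```python
import requests

# RPC style: one endpoint, always POST, the operation named inside the body
# (JSON-RPC 2.0 shape shown here)
rpc_resp = requests.post(
    "https://api.example.com/rpc",
    json={"jsonrpc": "2.0", "method": "getUser", "params": {"id": 42}, "id": 1},
)

# "REST" as most people use the term: the HTTP verb plus a resource-shaped URL
# carry the meaning instead
rest_resp = requests.get("https://api.example.com/users/42")
```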
It's true that if you say "RPC" I'm more likely to assume gRPC or something like that. If you say "REST", I'm 95% confident it's a standard, familiar OpenAPI-style JSON-over-HTTP API, but I'll reserve a 5% probability that it's actually HATEOAS and I'll have to deal with that. I'd say if you are doing Roy Fielding-certified REST/HATEOAS, it's non-standard and you should call it out specifically by using the term "HATEOAS" to describe it.
You're asking people in the real world to refer to "REST" APIs, the kind that use HTTP verbs and have routes like /resource/id, as RPC APIs. As it stands, in the world outside of this thread, nobody does that.
At some level language is outside of your control as an individual even if you think it's literally wrong--you sometimes have to choose between being 'correct' and communicating clearly.
Wow, this looks awesome. Been using Temporal, but this fits so perfectly into my stack (Postgres, Pydantic), and the first-class support for DAG workflows is chef's kiss. Going to take a stab at porting over some of my workflows.
Seems like you should be correct. A shadcn button is just react, tailwind, and @radix/react-slot. But if you simply create a new shadcn Next.js template (i.e. pnpm dlx shadcn@latest init) and add a button, the "First Load JS" is ~100kB. Maybe you could blame that on Next.js bloat and we should also compare it to a Vite setup, but it's still surprising.
Yeah, but my point is that you download the runtime and core of React/Tailwind just once for the whole web page, so those should be removed from the test, or at least there should be a comparison that includes both cases.
You only need a couple of images on your webpage and that runtime size soon becomes irrelevant.
So the question is: how much overhead do React/Tailwind CSS add beyond that initial runtime size? If I have 100 different buttons, is it suddenly 10,000 kilobytes? I think it is not. This is the most fundamental issue with all the modern web benchmarking results. They benchmark sites that don't reflect reality in any sense.
These frameworks are designed for content-heavy websites, where performance means something completely different. If every button added that much overhead, of course it would be a big deal. But I don't think they add that much overhead.
> Yeah, but my point is that you download the runtime and core of React/Tailwind just once for the whole web page, so those should be removed from the test, or at least there should be a comparison that includes both cases.
You think a test that is comparing the size of apps that use various frameworks should exclude the frameworks from the test? Then what is even being tested?
Actual overhead when the site is used in reality? How much overhead are those 100 different buttons creating? What is the performance of state management? What is the rendering performance on complex sites? How much size overhead do modular files add? Does .jsx contribute more to page size than raw HTML? The library runtime bundle size is mostly meaningless, unless you want to serve a static website with just text. And in that case you should not use any of these frameworks.
With OpenAI models, my understanding is that token output is constrained so that each next token must conform to the specified grammar (i.e. JSON schema), so you’re guaranteed to get either a valid function call or an error.
Edit: per simonw’s sibling comment, ollama also has this feature.
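A minimal sketch of what that looks like with a recent openai-python version's structured-output helper; the model name and the schema here are just placeholders:

```python
from openai import OpenAI
from pydantic import BaseModel

# Hypothetical schema for illustration; any Pydantic model works here.
class Weather(BaseModel):
    city: str
    temperature_c: float

client = OpenAI()

# With strict structured outputs, decoding is constrained to tokens that keep
# the response valid against the schema, so the result either parses or the
# request errors/refuses.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    response_format=Weather,
)
print(completion.choices[0].message.parsed)
```

As I understand it, function/tool calls use the same mechanism when `strict: true` is set on the tool definition, which is where the "function call or an error" guarantee comes from.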
Ah, there's a distinction here between the model and the model framework. The ollama inference framework supports token output restriction. Gemma in AI Studio also does, as does Gemini; there's a toggle in the right-hand panel, but that's because both of those models are being served through an API where the functionality is present in the server.
The Gemma model by itself does not though, nor does any "raw" model, but many open libraries exist for you to plug into whatever local framework you decide to use.
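As a sketch of the framework doing the constraining (assuming a recent ollama release and its Python client, which accept a JSON schema in the `format` field; the model name and schema are placeholders):

```python
import ollama
from pydantic import BaseModel

# Placeholder schema; the server-side sampler constrains tokens to match it.
class Person(BaseModel):
    name: str
    age: int

resp = ollama.chat(
    model="gemma2",  # placeholder local model
    messages=[{"role": "user", "content": "Give me a made-up person as JSON."}],
    format=Person.model_json_schema(),  # the constraint lives in the framework, not the model
)
print(Person.model_validate_json(resp["message"]["content"]))
```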
I’m missing something. If WebAuthn is “ssh for the web” then why would it matter if Bob was phished and logged into the fake crypto portal running on the raspberry pi? It’s not like the attacker now knows his private key. Is the danger that Bob also would share his crypto wallet keys with the fake site or something?
The attacker is now logged in on the real crypto portal as Bob. The SSH equivalent would be connecting to a malicious server with SSH agent forwarding enabled.
I suppose you can completely skip dummy sites when phishing for passkeys since the user doesn't know the password and therefore you don't need him to enter said password anywhere (which is why you needed a dummy site in the first place).
The attacker has access to whatever the passkey was protecting. It's like asking who cares about password phishing. And FWIW a crypto portal in front of something like Coinbase can obviously do a lot of damage since most people do not keep their crypto in their own personal cold storage.
For one, ffmpeg is 9 years older than Go. Plus, when dealing with video files, a garbage-collected language probably isn't going to cut it. C++ and Obj-C also feel like overkill for ffmpeg.
CoreVideo and CoreAudio are both implemented in C on Apple systems. There are higher level APIs like AVFoundation implemented in Obj-C/Swift, but the codecs themselves are written in C. Even the mid-level frameworks like AudioToolbox and VideoToolbox are written in C. I’m not as familiar with Microsoft but imagine it’s similar.
Also the article doesn’t actually mention OOP. You can use polymorphism without fully buying into OOP (like Go does).
The great thing about C is its interoperability, which is why it’s the go-to language for things like codecs, device drivers, kernel modules, etc.