This. I wouldn't be surprised if, after a while, people give up on searching for and migrating to alternatives for every service that starts using or selling data for AI training.
Scope hoisting is the technique used there (it's in the title of your link) to simplify tree shaking, a.k.a. dead code elimination. Also yeah, it's unlikely to be what's happening in that bug, since Google uses the Closure Compiler.
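Rough illustration of what scope hoisting buys you (plain JS sketch, not literal output from any particular bundler):

```js
// Before bundling, two separate ES modules:
//
//   // math.js
//   export const add = (a, b) => a + b;
//   export const square = (x) => x * x; // never imported anywhere
//
//   // main.js
//   import { add } from './math.js';
//   console.log(add(1, 2));

// After scope hoisting (what Rollup, Parcel 2, or webpack's
// ModuleConcatenationPlugin roughly emit), both modules are concatenated
// into a single scope, so `square` becomes an ordinary unused binding and
// dead code elimination can drop it without any cross-module analysis:
const add = (a, b) => a + b;
console.log(add(1, 2));
```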
Agree, it's like they made no effort to make hydration easier. Even the router doesn't have the route info on the first client render; I had to pass it through getServerSideProps, which is ridiculous.
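For anyone hitting the same thing, this is roughly the workaround I mean (a sketch, the page and prop names are made up):

```jsx
// pages/posts/[slug].js — hypothetical page, just to show the idea:
// pass the route info down as props so the first client render already
// has it, instead of relying on useRouter(), which can be empty until
// hydration finishes.
export async function getServerSideProps(context) {
  return {
    props: {
      slug: context.params.slug,        // dynamic route param, known on the server
      currentPath: context.resolvedUrl, // full requested path
    },
  };
}

export default function Post({ slug, currentPath }) {
  // Rendering from props keeps the server and client markup identical,
  // which avoids the hydration mismatch / content shift.
  return <h1>Post: {slug} ({currentPath})</h1>;
}
```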
If you have to re-engineer all your code to avoid content shift, why bother using a mega-framework like Next in the first place?
Also, the point of SSR and hydration is to improve performance and page loads; if you have a ton of work to redo on the client, the actual performance gains are very small. That's not much of a surprise, though, considering Next is one of the heaviest frameworks out there (in bundle size).
This. The same happened with npm, which started offering new features (like lockfiles) only after Yarn had them. History has shown competition is the only way to make them move.
> This video debunks 4 dangerously misleading scientific papers, all with a Google co-author, that denigrate the urgent need for massive audits and investments in security and ethics. Terrifyingly, the scientific community has scientifically validated and massively propagated what appears to be intentional disinformation.
French researcher and YouTuber Lê Nguyên Hoang tries in this video essay to demonstrate that the ML research community is plagued by unfounded claims that algorithms are privacy-preserving, and by a lack of accountability over ethical concerns.
This video seems to be a continuation of the papers co-authored by Lê:
Basically, because MDX mixes JSX and Markdown, you need knowledge of JSX/JS (which non-devs might not have), plus dedicated tooling to build and parse it, and so on.
Markdoc takes more of a "separation of concerns" approach.
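To make that concrete, here's roughly what the split looks like with Markdoc (going from memory of @markdoc/markdoc's parse/transform API, so treat it as a sketch; the `callout` tag is made up):

```js
import Markdoc from '@markdoc/markdoc';

// The content side: plain markdown-ish text with {% %} tags.
// No imports, no JSX — a non-dev can edit this safely.
const source = `
# Getting started

{% callout type="warning" %}
Back up your data first.
{% /callout %}
`;

// The dev side: tags are declared in config, mapping them to whatever
// component or element the app renders them with.
const config = {
  tags: {
    callout: {
      render: 'Callout',
      attributes: { type: { type: String } },
    },
  },
};

const ast = Markdoc.parse(source);              // content -> AST
const content = Markdoc.transform(ast, config); // AST -> renderable tree
console.log(Markdoc.renderers.html(content));   // or a React renderer instead
```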