I use createPages and I get hot reloading when running gatsby develop.
I also just figured out how to trigger hot reloading for sources that normally don’t trigger it (markdown files which I was manually processing).
Basically, I discovered that gatsby will hot-reload when it sees a change in code files (.tsx). So I created an essentially empty _rebuild-trigger.tsx file and imported that in my main page template.
Now, to change the page content and trigger a hot reload (see the sketch after this list):
- I have an observable subscribed in createPagesStatefully that calls deletePage and createPage
- Then it writes to the _rebuild-trigger.tsx file
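Roughly, in gatsby-node it looks like this (a sketch only - contentChanges$, the slugs, and the paths are made up; createPagesStatefully, createPage, and deletePage are the real Gatsby APIs):

```ts
// gatsby-node.ts - sketch of the hot-reload trigger trick
import * as fs from "fs"
import * as path from "path"
import type { GatsbyNode } from "gatsby"
import { contentChanges$ } from "./src/content-source" // hypothetical observable

const TRIGGER = path.resolve(__dirname, "src/_rebuild-trigger.tsx")

export const createPagesStatefully: GatsbyNode["createPagesStatefully"] = ({
  actions: { createPage, deletePage },
}) => {
  contentChanges$.subscribe((doc: { slug: string; html: string }) => {
    const page = {
      path: `/posts/${doc.slug}/`,
      component: path.resolve("./src/templates/post.tsx"),
      context: { html: doc.html },
    }
    deletePage(page) // drop the stale page
    createPage(page) // recreate it with fresh context
    // Touching an imported .tsx file is what makes `gatsby develop` hot-reload:
    fs.writeFileSync(TRIGGER, `// rebuilt ${Date.now()}\nexport {}\n`)
  })
}
```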
I need to make a blog post because it works great (and I could tie it to any change source - like a file system watcher - during development).
Hmmm, I seem to recall that hot reload didn’t reload the content, but I’ll check again! In any case, it’s not as smooth for letting content editors preview content as Next.js’s preview mode, unless I’ve missed something - with Next.js it will reload the content on every page load if you have the preview cookie set
When I found gatsby, I immediately stripped everything out and hooked into the dynamic createPages so I could just give it a list of components to turn into pages. (I thought the GraphQL parts were horribly bloated boilerplate, so I didn’t want to even touch them.)
It also works well with async import for runtime loading of components.
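For example (a sketch - the component name and path are invented, and it assumes a default export):

```tsx
// Runtime loading of a component via dynamic import()
import * as React from "react"

const AdventureGame = React.lazy(() => import("./components/AdventureGame"))

export const Header = () => (
  <React.Suspense fallback={null}>
    <AdventureGame />
  </React.Suspense>
)
```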
Other than that, I don’t use any of its features. (I found a markdown React component on npm, and I manually move the images to the public folder and use normal image tags.)
I’m quite happy with the runtime results, and develop re-renders work pretty quickly. However, the production build takes about 30 secs, which is pretty slow, and I also ran into “out of heap memory” on node during the build and had to increase node’s heap size (via --max-old-space-size).
So I’m sure I’ll end up needing to build my own SSR eventually, but it should be easy to do.
Gatsby's one saving-grace feature is the createPages API - it lets you programmatically create pages at a specific URL with a specific template (React component), with context passed to the template that's not present in the URL. The benefit is that you're able to render multiple different templates at the same URL pattern.
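A minimal sketch of the idea (the URLs, templates, and context fields are invented):

```ts
// gatsby-node.ts - two different templates behind the same /docs/* URL pattern
import * as path from "path"
import type { GatsbyNode } from "gatsby"

export const createPages: GatsbyNode["createPages"] = ({ actions: { createPage } }) => {
  createPage({
    path: "/docs/getting-started/",
    component: path.resolve("./src/templates/article.tsx"),
    context: { articleId: "getting-started" }, // reaches the template without appearing in the URL
  })
  createPage({
    path: "/docs/api/", // same URL pattern, different template
    component: path.resolve("./src/templates/api-reference.tsx"),
    context: { moduleName: "core" },
  })
}
```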
Gatsby does, however, suffer from huge scale problems, I think mostly due to the data type inference it needs for GraphQL. Once you start ramping up the number of models and records, it will churn forever ingesting them. There are hacks and ways to partially mitigate some of this, but Gatsby still has problems once you're working with a medium amount of content.
Yes, I see the scale issues and I don’t have much yet (I assume because I am using createPages it doesn’t use any caching).
At runtime it’s excellent - perfect performance scores - and the site feels great to use, so I’m thankful that gatsby let me get started quickly and see how good runtime performance can be.
I’m planning to replace it with React’s built-in ReactDOMServer (and I can easily add simple change detection so only changed content gets rebuilt).
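Something like this sketch, assuming a pages list shaped like what I’d otherwise feed to createPages (all names are invented):

```tsx
// build.tsx - hand-rolled static rendering with ReactDOMServer
import * as fs from "fs"
import * as path from "path"
import * as React from "react"
import { renderToStaticMarkup } from "react-dom/server"
// hypothetical: Array<{ path: string; Component: React.ComponentType<any>; context: any }>
import { pages } from "./pages"

for (const { path: pagePath, Component, context } of pages) {
  const html = renderToStaticMarkup(<Component {...context} />)
  const outFile = path.join("public", pagePath, "index.html")
  fs.mkdirSync(path.dirname(outFile), { recursive: true }) // ensure the folder exists
  fs.writeFileSync(outFile, `<!DOCTYPE html>${html}`)
}
```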
Honestly, it will probably be easier than all the hacking I had to do to get gatsby to work the way I wanted.
> I assume because I am using createPages it doesn’t use any caching
It's not that - we only used createPages and still ran into these issues. It's ingesting content from Contentful, or any database that'll continue to grow, where gatsby lets you down.
I have 2 TCL Roku TVs. One I use as a large monitor for my desktop. I’m very happy with them. I mostly watch Netflix, YouTube, and Amazon Prime Video and have never noticed the ads on the navigation screen (since they’re just TV shows).
I’m planning to get YouTube premium to remove ads there and vote with my money for an ad-free Internet.
I never imagined that they would be spying on the actual video output, but I would consider that not only a major violation of privacy but corporate espionage (because I use it for work).
I don’t believe that is the case, but I’ll be researching if that is being done.
MVC - because spending 50% of your time looking for the other file with the almost-identical name is fun... wait, now we need the view... nope, back to the controller again... why can’t these files be next to each other...
It’s ironic that this is done in the name of “separation of concerns” -
You know what actually separates concerns: having everything that affects a component in one place. That’s true separation of concerns.
I’ve worked in multiple frameworks in multiple languages and nothing does it better than modern React (especially with hooks).
My website is built entirely in React (and uses Gatsby).
It takes 4kb to do a full page render (1 file, static only, JavaScript disabled).
That’s smaller than almost any HTML+CSS page, especially one using a CSS library.
Of course, once that’s loaded and if JavaScript is enabled, it will start to load the interactive stuff and prefetch (around 100kb - but I have a lot of interactive stuff, including a Zork-style adventure Easter egg in the site header).
In Lighthouse, I see a performance score of 100 and a time to interactive of 1.8 secs.
This is comparable to the Hacker News home page (which is purely static), but my first contentful paint was slightly faster (probably because my site is slightly smaller).
Both sites were about 4 times faster (TTI) than the Google home page and the TypeScript home page.
I’d be interested in checking out your site. I normally only have a 2G connection on my phone, and I’d love to be able to read something that’s NOT HN. This is the only site I know of that takes less than 5-10 mins to load, if a site loads at all.
It took 30s to 60s to fully load, but the content was available very quickly. The most entertaining thing was the Stripe example. I didn’t realize it was supposed to load the card form, so I hit subscribe and caused an error (because the card form wasn’t loaded yet), but the app handled it appropriately. It would be better if the subscribe button were disabled until the component loaded (which took at least a minute), but it’s still pretty good.
I just ran Blacklight against my own site and got a perfect score.
My site costs me essentially nothing* to host (Netlify and AWS serverless technologies that are mostly under the free tier).
*My highest cost so far was when I was debugging serverless websockets and had a bug in my code that caused constant messages between the browser clients I was testing (which I left open for a day when I started work). That cost me $7.
I have my own service-hosted playground using little more than git and a few cli tools.
I suspected (perhaps like you) that this article was going to be about free hosting providers (Netlify and the likes). When I saw it was Disqus/Facebook/Twitter I wasn't too surprised. I also host a lot of projects "for free" (TM) on Netlify, Firebase, etc. and don't tend to include any 3rd party scripts.
It makes me wonder if some form of data (anonymous or not) scraped from folks like Netlify is slated to be sold off to advertisers or SaaS products looking to find customers. As they do things like process your HTML, they could pull out textual content looking for signals.
Yeah I was wondering if this was about maybe Cloudflare or Vercel getting lots of user / usage analytics off of freely hosted sites... doesn't seem like that's what they're doing, but who knows?
wordpress.com is often used by internet marketers because it has loads of tools for that, as well as content management.
But that doesn't mean it has to include all the tracking plugins.
Also, there are plenty of site builders like wix.com, squarespace, etc. that can launch a site in minutes.
Using those tools doesn't necessarily imply ads or tracking. (But I would agree it would be impossible for most users to know if a site did have tracking in the background.)
And yes, I wrote my first web page in the 90s, but that doesn't mean we don't have nicer tools that anyone can use now. (Wow, I'm trying to remember how I figured that out as a teenager. I think my dial-up ISP had a tool on their website and instructions for creating a path and uploading your own web files.)
If one has something to say, one can host it. Hosting a small website yourself costs peanuts, and simple websites should cover most non-commercial use.
The walled gardens are killing independent small-scale websites. Facebook and Reddit are essentially eating every community site, and those don't make much ad money anyway.
I’ll pay to host my own content - but if it’s just static content - that’s essentially zero.
Also, I’ll make tools to make it easy for anyone to create their own self-hosted website.
No ads needed.
And if I make a product or service of value, I’ll sell it - without ads.
Word of mouth recommendations are great. Somebody can make a review or a blog post about it because they like it.
Others can find it when they search for it - because they need it.
Think about it - how many things/services do you buy because you saw an ad, versus because you had a need and sought out the best product that could fulfill it?
In fact, I think it would be awesome if a new search engine could step up - ad-free web search (excluding any website that uses ads).
The big problem, I believe, is discoverability. People use Google's algorithmically culled search, which favors big sites and SEO spam. Web rings and link lists used to fill that role; they're pretty much gone. Since the phone book where I live isn't printed anymore, I can't look up which companies offer plumbing or pizza in my area, because no such lists exist - I can only find a few on Google. Google pretty much put the information-collecting-and-arranging business out of business and didn't replace it.
"The information age" should really be called "the age of colored noise".
Amazing how Blacklight gives it a perfect score when, in reality, IP addresses, domains, SSL certificates and other traffic metadata are clearly visible to AWS.
Not only can the data be subpoenaed, it's also being intercepted by the usual three-letter agencies.
If you care about user privacy, find a smaller hosting provider in a country with good data protection.
Even better, host the site in somebody's home in that country.
My site is just a blog with a bunch of prototypes and games.
My goal is an ad-free Internet.
If the client is worried about surveillance, then it would be up to them to use an appropriate VPN - but if a three-letter agency cares about who reads my blog, I‘ll probably have a lot more problems than a visitor.
That's a really interesting observation. Really the site/service could just make the ceremony of objectivity part of the entire style and UX, that might be enough.
There are other things you could do too, like tagging every statement with a source and letting the community mark each source as primary/secondary, full/partial context, etc. Those statements could rise based on those tags instead of upvotes. It'd be Wikipedia-for-news-like. Has this been done?
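Roughly the data model I'm imagining (every name and weight here is invented):

```ts
// Toy model: statements rise by source quality instead of upvotes
type SourceTag = {
  url: string
  kind: "primary" | "secondary"
  context: "full" | "partial"
}

type Statement = {
  text: string
  sources: SourceTag[]
}

// Primary, full-context sources weigh more than secondary, partial ones
const score = (s: Statement): number =>
  s.sources.reduce(
    (sum, t) =>
      sum + (t.kind === "primary" ? 2 : 1) * (t.context === "full" ? 2 : 1),
    0,
  )
```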
So instead of an ad blocker, we could have background bots in our browser visiting random URLs and clicking on every ad in sight (of course, they would need to mimic human UI input).
The only legitimate ad blocker that has been banned from the Chrome store was AdNauseam. It was a thin wrapper over uBlock that sent a click signal to every single ad. You could adjust the intensity (no clicks, some clicks, all), but that was where Google drew the line.
Besides that, the TypeScript type system is insane. It’s cool to hear about discriminated unions, though.
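For anyone who hasn't seen them, a minimal example - the literal kind field is what lets the compiler narrow each branch:

```ts
// A discriminated union: `kind` tells the compiler which variant it has
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number }

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2 // s is narrowed to the circle variant
    case "rect":
      return s.width * s.height // s is narrowed to the rect variant
  }
}
```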