The best recipe I know is to start from a modular monolith [1] and split it when and if you need to scale way past a few dozen nodes.
Event sourcing is a logical structure; you can implement it with SQLite or even flat files, locally, if your problem domain is well served by it. Adding Kafka as the first step is most likely costly overkill.
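To make that concrete, here's a minimal sketch of the idea with nothing but SQLite: an append-only events table as the source of truth, and state rebuilt by replaying events. Schema and names (`append`, `replay`, the bank-account example) are made up for illustration.

```python
# Minimal event-sourcing sketch: append-only events table, state
# derived by folding events in order. Schema/names are hypothetical.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE events (
        seq     INTEGER PRIMARY KEY AUTOINCREMENT,
        stream  TEXT NOT NULL,
        type    TEXT NOT NULL,
        payload TEXT NOT NULL
    )
""")

def append(stream, type_, payload):
    # Writes only ever insert; nothing is updated or deleted.
    db.execute("INSERT INTO events (stream, type, payload) VALUES (?, ?, ?)",
               (stream, type_, json.dumps(payload)))

def replay(stream):
    # Current state = fold over the event log in insertion order.
    balance = 0
    for type_, payload in db.execute(
            "SELECT type, payload FROM events WHERE stream = ? ORDER BY seq",
            (stream,)):
        amount = json.loads(payload)["amount"]
        balance += amount if type_ == "deposited" else -amount
    return balance

append("account-1", "deposited", {"amount": 100})
append("account-1", "withdrawn", {"amount": 30})
print(replay("account-1"))  # 70
```

Same logical structure as the Kafka version; the "safety valve" is that the append-only log can later be shipped to a broker if you actually outgrow a single node.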
What you're speaking of is a need/usability-based design and extension where you design the solution with certain "safety valves" that let you scale it up when needed.
This is in contrast to the fad-driven design and over-engineering I'm speaking of (ES was just an example), which is usually introduced because someone in power saw a blog post or a one-hour talk and it looked cool. And Kafka gets used because it is the most "scalable" and shiny solution; there is no pros-vs-cons analysis.
[1]: https://awesome-architecture.com/modular-monolith/