It's hard to answer this in general. Most out-of-the-box scaling solutions have to be generic, so they lean on distribution/clustering (i.e., more than one node plus coordination), which makes them expensive.
Consider something like an Amazon product page. It's mostly static. You can cache the "product", calculate most of the "dynamic" parts in the background periodically (e.g., recommendations, suggestions), and serve it all up as static content. The truly dynamic/personalized parts (e.g., previously purchased) you can load separately (either as a separate call from the client, or have the server piece all the parts together for the client). This personalized stuff is user specific, so [very naively]:
conn = connections[hash(user_id) % number_of_db_servers]
conn.row("select last_bought from user_purchases where user_id = $1 and product_id = $2", user_id, product_id)
Note that this is also a denormalization compared to:
select max(o.purchase_date)
from order o
join order_items oi on o.id = oi.order_id
where o.user_id = $1 and oi.product_id = $2
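The denormalized table has to be kept current somewhere, e.g. with an upsert whenever an order is placed. A minimal sketch using SQLite (the `user_purchases` schema follows the queries above; the `record_purchase` helper and column types are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user_purchases (
        user_id INTEGER,
        product_id INTEGER,
        last_bought TEXT,
        PRIMARY KEY (user_id, product_id)
    )
""")

def record_purchase(user_id, product_id, purchase_date):
    # Upsert: keep only the most recent purchase date per (user, product).
    conn.execute("""
        INSERT INTO user_purchases (user_id, product_id, last_bought)
        VALUES (?, ?, ?)
        ON CONFLICT (user_id, product_id)
        DO UPDATE SET last_bought = max(last_bought, excluded.last_bought)
    """, (user_id, product_id, purchase_date))

record_purchase(1, 42, "2023-01-05")
record_purchase(1, 42, "2023-03-10")
row = conn.execute(
    "SELECT last_bought FROM user_purchases WHERE user_id = ? AND product_id = ?",
    (1, 42),
).fetchone()
```

The trade-off is the usual one: reads become a single-row lookup on the user's shard, at the cost of doing this extra write on every order.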
Anyways, I'd start with #7. I'd add RabbitMQ to your stack and start using it as a job queue (e.g., sending forgot-password emails). Then I'd expand it to track changes in your data: write to "v1.user.create" with the user object in the payload (or just the user id; both approaches are popular) when a user is created. It should let you decouple some of the logic that's currently executed sequentially on the HTTP request, making it easier to test, change and expand. It does add a lot of operational complexity and things that can go wrong, though, so I wouldn't do it unless you need it or want to play with it. If nothing else, you'll get more comfortable with at-least-once delivery, idempotency and poison messages, which are pretty important concepts. (To make the write to the DB transactionally safe with the write to the queue, look up the "transactional outbox pattern".)
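For reference, a minimal sketch of the transactional outbox pattern, using SQLite in place of a real DB and a callback in place of a real RabbitMQ publish (all table and function names here are illustrative, not from the original):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("""
    CREATE TABLE outbox (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        routing_key TEXT,
        payload TEXT,
        published INTEGER DEFAULT 0
    )
""")

def create_user(email):
    # The business write and the outbox write share one transaction:
    # either both commit or neither does, so no event is ever lost
    # or emitted for a row that was rolled back.
    with conn:
        cur = conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        conn.execute(
            "INSERT INTO outbox (routing_key, payload) VALUES (?, ?)",
            ("v1.user.create", json.dumps({"user_id": cur.lastrowid})),
        )

def relay(publish):
    # A separate process polls the outbox and publishes to the broker.
    # Marking rows published only after a successful publish gives
    # at-least-once delivery, which is why consumers must be idempotent.
    for row_id, key, payload in conn.execute(
        "SELECT id, routing_key, payload FROM outbox WHERE published = 0"
    ).fetchall():
        publish(key, payload)
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))

create_user("a@example.com")
sent = []
relay(lambda key, payload: sent.append((key, payload)))
```

If the relay crashes between publishing and marking the row, the message is sent again on the next poll; that's the at-least-once behavior mentioned above.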