We're also doing this at Thumbtack. We run all of our Spark jobs on job-scoped Cloud Dataproc clusters. We wrote a custom Airflow operator that launches a cluster, schedules a job on that cluster, and shuts the cluster down when the job completes. Since Google can bring up a Spark cluster in under 90 seconds and bills by the minute, this works really well for us: it simplifies our infrastructure and eliminates resource contention between jobs.
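For anyone curious what this pattern looks like: our operator is custom, but here's a minimal sketch of the same create → submit → delete lifecycle using the stock Dataproc operators from the Airflow Google provider (assuming a recent Airflow 2.x). The project ID, region, machine types, and the GCS path to the job script are all placeholders, not our actual configuration.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocCreateClusterOperator,
    DataprocDeleteClusterOperator,
    DataprocSubmitJobOperator,
)
from airflow.utils.trigger_rule import TriggerRule

PROJECT_ID = "my-project"                     # placeholder
REGION = "us-central1"                        # placeholder
CLUSTER_NAME = "job-scoped-{{ ds_nodash }}"   # one cluster per DAG run

with DAG(
    dag_id="ephemeral_dataproc_spark",
    start_date=datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    # Bring up a small, short-lived cluster just for this job.
    create_cluster = DataprocCreateClusterOperator(
        task_id="create_cluster",
        project_id=PROJECT_ID,
        region=REGION,
        cluster_name=CLUSTER_NAME,
        cluster_config={
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
        },
    )

    # Run the Spark job on the cluster we just created.
    submit_job = DataprocSubmitJobOperator(
        task_id="submit_spark_job",
        project_id=PROJECT_ID,
        region=REGION,
        job={
            "placement": {"cluster_name": CLUSTER_NAME},
            "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/etl.py"},  # placeholder
        },
    )

    # ALL_DONE runs the teardown even if the Spark job fails, so billing
    # stays scoped to the job's actual runtime.
    delete_cluster = DataprocDeleteClusterOperator(
        task_id="delete_cluster",
        project_id=PROJECT_ID,
        region=REGION,
        cluster_name=CLUSTER_NAME,
        trigger_rule=TriggerRule.ALL_DONE,
    )

    create_cluster >> submit_job >> delete_cluster
```

The trigger rule on the delete task is the important detail: without it, a failed job would leave an orphaned cluster running (and billing) until someone noticed.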
Awesome stuff, glad to see folks taking advantage of this! Perhaps as a follow-up you could write a guest blog post on how this works for you. Feel free to ping me offline.