It's very good. Postgres by itself can handle a very high volume of inserts (I did over 100,000 rows/s on very modest hardware). But Timescale makes it easier to deal with that data. It's not strictly necessary, but it's very time-series friendly (good compression, good indexing and partitioning, etc.). Nothing a pg expert can't accomplish with vanilla Postgres, but very, very handy.
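For a sense of what that looks like in practice, here's a rough sketch of turning a regular table into a compressed hypertable (the table and column names are made up, and the 7-day compression window is just an example):

    -- hypothetical raw-readings table
    CREATE TABLE sensor_data (
      time      timestamptz      NOT NULL,
      device_id int              NOT NULL,
      value     double precision
    );

    -- let TimescaleDB partition it into time-based chunks
    SELECT create_hypertable('sensor_data', 'time');

    -- enable native compression, segmented per device
    ALTER TABLE sensor_data
      SET (timescaledb.compress,
           timescaledb.compress_segmentby = 'device_id');

    -- compress chunks once they're older than a week
    SELECT add_compression_policy('sensor_data', INTERVAL '7 days');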
I haven’t tried timescale, but I have found postgres with time-based partitions works very well for timeseries data. Unless you’ve got really heavy indexes, the insert speed is phenomenal, like you said, and you’ve got the freedom to split your partitions up into whatever size buckets makes the most sense for your ingestion and query patterns.
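For anyone who hasn't used it, native time-based partitioning is just declarative DDL. A minimal sketch (hypothetical table, monthly buckets, adjust to taste):

    CREATE TABLE metrics (
      time      timestamptz      NOT NULL,
      device_id int              NOT NULL,
      value     double precision
    ) PARTITION BY RANGE (time);

    -- one partition per month; indexes live on each partition
    CREATE TABLE metrics_2024_01 PARTITION OF metrics
      FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
    CREATE INDEX ON metrics_2024_01 (device_id, time DESC);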
A really nice pattern has been to use change data capture and kafka to ship data off to clickhouse for long-term storage and analytics, which allows us to simply drop old partitions in postgres after some time.
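The retention side of that really is just partition DDL, roughly (hypothetical partition name):

    -- detach first so the lock on the parent table is short,
    -- then drop the old partition outright
    ALTER TABLE metrics DETACH PARTITION metrics_2023_01;
    DROP TABLE metrics_2023_01;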
I think Timescale will compress that old data heavily on a schedule you set, so if that's acceptable for your use case you might be able to do away with ClickHouse. Hard to say, of course, without knowing the details of your insertion and query patterns, retention requirements, and the aggregations you need. But Timescale can do a lot of that with pretty straightforward syntax.
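To make "pretty straightforward syntax" concrete, the aggregation and retention policies look roughly like this (a sketch against a hypothetical sensor_data hypertable; all the intervals are arbitrary):

    -- hourly rollup kept up to date in the background
    CREATE MATERIALIZED VIEW sensor_data_hourly
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 hour', time) AS bucket,
           device_id,
           avg(value) AS avg_value
    FROM sensor_data
    GROUP BY bucket, device_id;

    SELECT add_continuous_aggregate_policy('sensor_data_hourly',
      start_offset      => INTERVAL '3 hours',
      end_offset        => INTERVAL '1 hour',
      schedule_interval => INTERVAL '1 hour');

    -- drop raw chunks after 90 days; the materialized rollup is kept
    SELECT add_retention_policy('sensor_data', INTERVAL '90 days');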
I used TimescaleDB at my last workplace. We needed an easy way to store and visualize 500 Hz sensor data from a few tens of devices. We used it with Grafana to build an internal R&D tool, and it worked way better than I imagined. Before I left, I think the DB was using ~200 GB on a compressed btrfs volume on a DigitalOcean droplet and still performed fine for interactive Grafana usage.