Be careful when you google this, or the results might get NSFW-ish: for me the top result for "Amazon Athena" was a page on Amazon's shopping site - I clicked a bit too quickly, and it was a listing of women's lingerie. :-|
Very similar to BigQuery, but in practice much more expensive, despite the nominal per-TB-scanned pricing parity between the two services.
BigQuery automatically optimizes data as it's ingested - whatever its original format, it's converted to a columnar store, compressed, and (iirc) automatically reorganized based on your workload.
If you load an uncompressed CSV file into BigQuery and query a single column, you're billed only for scanning that single column in its compressed form. If you have the same CSV file in S3 and query it with Athena, you're billed for scanning the entire uncompressed file - every column - even if your query only references one. It gets cheaper if the CSV file is gzipped, and cheaper still if the data is partitioned into several CSV files (and the partitioning scheme is registered with Athena).
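To make the difference concrete, here's a back-of-the-envelope sketch of how per-byte-scanned billing plays out for a single-column query. The price, compression ratio, and column count are illustrative assumptions, not published figures from either service.

```python
# Rough model of "pay per byte scanned" pricing for a single-column query.
# All numbers below are illustrative assumptions, not AWS/Google figures.

PRICE_PER_TB = 5.00  # hypothetical $/TB scanned

def query_cost(bytes_scanned, price_per_tb=PRICE_PER_TB):
    """Cost of a query that scans `bytes_scanned` bytes."""
    return bytes_scanned / 1e12 * price_per_tb

# Assume a 1 TB uncompressed CSV with 20 equal-width columns, ~4x gzip
# compression, and a columnar format that lets the engine read just the
# one compressed column the query actually references.
raw_csv = 1e12
gzip_csv = raw_csv / 4
one_column_columnar = (raw_csv / 20) / 4

for label, size in [("raw CSV, all columns scanned", raw_csv),
                    ("gzipped CSV, all columns scanned", gzip_csv),
                    ("one compressed column (columnar)", one_column_columnar)]:
    print(f"{label:35s} ${query_cost(size):.4f}")
```

Under these assumptions the same logical query costs $5.00 against raw CSV, $1.25 against gzipped CSV, and about $0.06 against a compressed columnar layout - the gap the rest of this comment is about.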
But all of those are still more expensive than BigQuery, because Athena is still scanning the unnecessary columns to get to the one you want. Only once you convert the CSV to a supported columnar format like Parquet does the cost start to approach that of an equivalent BigQuery query.
It's still incredibly nifty, and I'm champing at the bit to use it. But even at the same per-TB price as BigQuery, your queries will end up costing far more on Athena unless you also invest continual effort into tuning the physical layout of your data to match both Athena's execution model and your query patterns - work BigQuery does for free in the background.