Even if a 6 terabyte CSV file does fit in RAM, the only thing you should do with it is convert it to another format (even if that's just the in-memory representation of some program). CSV stops working well at billions of records. There is no way to seek to an arbitrary record, because records are lines and lines are not fixed-size. You can sort it one way and use binary search to find something in semi-reasonable time, but re-sorting it a different way will take hours, and you can't insert into it while preserving the sort without rewriting half the file on average. You don't need Hadoop for 6 TB, but assuming this is live data that changes and needs regular analysis, you do need something that actually works at that size.
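To put numbers on the "semi-reasonable time" part, the sort-once-then-binary-search trick looks roughly like this. It's a sketch, not a recipe: it assumes the file was sorted bytewise on its first column (LC_ALL=C sort), that fields never contain embedded newlines, and the function name and comma delimiter are just made up for the example.

```python
import os

def find_record(path, key):
    """Return one line whose first CSV field equals `key`, or None."""
    key_b = key.encode()
    with open(path, "rb") as f:
        lo, hi = 0, os.path.getsize(path)
        # Bisect on byte offsets: after seeking to an offset we skip the
        # partial line we landed in and compare the key of the next line.
        while hi - lo > 1:
            mid = (lo + hi) // 2
            f.seek(mid)
            f.readline()                      # discard the partial line
            line = f.readline()
            first = line.split(b",", 1)[0].rstrip(b"\r\n")
            if not line or first > key_b:
                hi = mid
            else:
                lo = mid
        # Only a couple of candidate lines remain; scan them linearly.
        f.seek(lo)
        if lo:
            f.readline()                      # partial line, unless at byte 0
        for line in f:
            first = line.split(b",", 1)[0].rstrip(b"\r\n")
            if first == key_b:
                return line.decode().rstrip("\r\n")
            if first > key_b:
                break
    return None
```

It works, but only for the one sort order you paid for up front, which is exactly the problem with re-sorting and inserts.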
Yeah, but it may very well not be data that needs random access or live insertion. A lot of data is basically one big chunk of time series that just needs some number crunching run over the whole lot every six months.
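For that kind of workload a single streaming pass covers it, and RAM never enters the picture. A rough sketch of what I mean, with the column names invented for the example:

```python
import csv
from collections import defaultdict

def monthly_totals(path):
    """One pass over a huge time-series CSV, holding one row in memory at a time."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            month = row["timestamp"][:7]      # "YYYY-MM" from an ISO timestamp
            totals[month] += float(row["value"])
    return dict(totals)
```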