It's easy if the fields are all numbers and you have a good handle on whether any of them will be negative, in scientific notation, etc.
Once strings are in play, it quickly gets very hairy though, with quoting and escaping that's all over the place.
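For example (quick Python sketch, assuming RFC 4180-style quoting with doubled quotes as escapes): a naive split on commas mangles exactly the cases a real CSV parser handles.

```python
import csv
import io

# A quoted field containing a comma, plus an escaped (doubled) quote.
line = '42,"Smith, John","said ""hello"""\n'

# Naive splitting breaks the field boundaries and leaves quote debris.
naive = line.rstrip("\n").split(",")
print(naive)   # ['42', '"Smith', ' John"', '"said ""hello"""']

# A proper CSV parser understands the quoting and escaping rules.
parsed = next(csv.reader(io.StringIO(line)))
print(parsed)  # ['42', 'Smith, John', 'said "hello"']
```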
Badly formed, damaged, or truncated files are another caution area: are you allowed to bail, or required to? Is it up to your parser to flag when something looks hinky so a human can check it out? Or to make a judgment call about how hinky is hinky enough that the whole process needs to abort?
Regardless of the format, if you're parsing something and encounter an error, there are very few circumstances where the correct action is to return mangled data.
Maybe? If the dataset is large and the stakes are low, maybe you just drop the affected records, or mark them as incomplete somehow. Or generate a failures spool on the side for manual review after the fact. Certainly in a lot of research settings it could be enough to just call out that 3% of your input records had to be excluded due to data validation issues, and then move on with whatever the analysis is.
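Something like this, roughly (Python sketch; the file names, the three-numeric-column schema, and the 5% abort cutoff are placeholders, not a recommendation):

```python
import csv

EXPECTED_FIELDS = 3      # assumed schema: three numeric columns
MAX_FAILURE_RATE = 0.05  # made-up tolerance before the whole run aborts

good_rows, failures = [], []

with open("input.csv", newline="") as src:            # hypothetical input file
    for recno, row in enumerate(csv.reader(src), start=1):
        try:
            if len(row) != EXPECTED_FIELDS:
                raise ValueError(f"expected {EXPECTED_FIELDS} fields, got {len(row)}")
            # float() copes with negatives and scientific notation.
            good_rows.append([float(v) for v in row])
        except ValueError as exc:
            failures.append((recno, row, str(exc)))

total = len(good_rows) + len(failures)

# Spool the rejects on the side for manual review instead of mangling them.
with open("failures.csv", "w", newline="") as spool:  # hypothetical spool file
    writer = csv.writer(spool)
    for recno, row, reason in failures:
        writer.writerow([recno, reason, *row])

# Report the exclusion rate, and bail entirely if it looks too hinky.
rate = len(failures) / total if total else 0.0
print(f"excluded {len(failures)} of {total} records ({rate:.1%}) for validation issues")
if rate > MAX_FAILURE_RATE:
    raise SystemExit("failure rate too high; aborting rather than analyzing garbage")
```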
It's not usually realistic to force your data source into compliance, and manually fixing the files along the way is rarely a worthwhile pursuit either.