
awk is my go-to tool for simple command-line tokenizing. Hard to beat:

awk '{ print $1; }'

Other than that... not really. Maybe the advantage would be ubiquity, if you really, really want to avoid Perl.
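For illustration (made-up input), the pattern on whitespace-separated text looks like:

    $ echo 'foo bar baz' | awk '{ print $1 }'
    foo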



>Hard to beat: awk '{ print $1; }'

How about: cut -f 1 -d ' '

You don't even need the -d flag if you happen to be able to use the default delimiter (a tab).
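A quick sketch of that, assuming tab-separated input, since cut's default delimiter is a tab:

    $ printf 'foo\t1\nbar\t2\n' | cut -f 1
    foo
    bar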


I used to regularly do ad-hoc text processing, typically on a 24-hour log of GPS data (on an early-2000s-era computer). Surprisingly enough, awk is many times faster than cut for any data set big enough for you to notice time passing.


Awk delimits on whitespace by default; cut cannot do that, AFAICT. So if you have something like this, cut won't work:

    apples  1
    bananas 2
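
A quick demonstration over that sample data (a sketch): the doubled space after "apples" becomes an empty second field for cut, so it prints a blank line for that row, while awk collapses the run of spaces:

    $ printf 'apples  1\nbananas 2\n' | cut -d ' ' -f 2

    2
    $ printf 'apples  1\nbananas 2\n' | awk '{ print $2 }'
    1
    2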


Exactly, which makes cut kinda shit for 99% of common "split-on-whitespace" tasks IMO.


You can always use awk with FS/OFS values to clean up the delimiters so you can pass the data off to cut ;-)
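A minimal sketch of that round trip, reusing the sample data above: assigning $1 to itself forces awk to rebuild the record with its single-space OFS, and then cut is happy:

    $ printf 'apples  1\nbananas 2\n' | awk '{ $1 = $1; print }' | cut -d ' ' -f 2
    1
    2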


There's always plain old bash:

while read -r a _; do echo "$a"; done
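For example, against the same sample data (a sketch):

    $ printf 'apples  1\nbananas 2\n' | while read -r a _; do echo "$a"; done
    apples
    bananas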



