Hacker News
A command-line power tool for Twitter (github.com/sferik)
48 points by adpreese on May 3, 2014 | hide | past | favorite | 13 comments



I think the king of command line clients is TTYtter - I love it.

http://www.floodgap.com/software/ttytter/


To my great sadness, TTYtter is currently unmaintained (I sent in a bug report and that's what the dude told me). So I expect it to stop working at some point. Until then, best desktop Twitter client for me by a mile.

I wish Twitter, Inc. weren't the absolute stinking worst maker of Twitter clients. :(


This has many cool features and is easy to use in scripts, but if you just want Twitter in your terminal, check out Twirssi[1]. It plugs into irssi, which many people have running already anyway.

[1] http://twirssi.com/


Using bitlbee for this is awesome as well. Then you don't need to run several scripts for extra functionality (G+, Facebook, Jabber, etc.)


The problem with Bitlbee is that it does not support twitter lists. I had a project to add that support, but my C is very rusty :(


Yes, please! I've needed a tool to help with unfollowing some folks for a while. I'm always at the follow limit, and the follow list in the UI starts with the people who tweet the most and whom you interact with the most, AFAICT, so I end up scrolling for ages trying to find anyone uninteresting. People seem to show up multiple times in the infinite scroll, too, so it's like digging a fucking hole in sand.


"Multi-threaded: Whenever possible, Twitter API requests are made in parallel, resulting in faster performance for bulk operations."

That's not what threads are about.


t is written in Ruby, and Ruby's standard library doesn't include a non-blocking HTTP client. You'd have to pull in EventMachine, which would cost far more than spinning up a thread. Additionally, Ruby threads release the GIL while blocked on I/O, so the overhead of having a handful of them wait on network calls is small.

I see what you're saying here, but given that t had little other choice, I don't see how this comment adds any value other than to attempt to point out anything that could possibly be wrong with t.

Instead, I think t is fantastic. The usage of threads is good enough, and I thank Erik for spending so much time on a sweet little utility.
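As a rough sketch of that thread-per-request pattern (illustrative code, not t's actual implementation; `fetch_page` is a stand-in for a real Twitter API call):

```ruby
# Fire off several "API requests" in parallel, one thread each.
# fetch_page is a stand-in for a blocking HTTP call.
def fetch_page(id)
  sleep 0.1                     # pretend this is ~100 ms of network latency
  "page #{id}"
end

start   = Time.now
results = (1..5).map { |i| Thread.new { fetch_page(i) } }.map(&:value)
elapsed = Time.now - start

puts results.inspect                       # five results, collected in order
puts "took #{(elapsed * 1000).round} ms"   # roughly 100 ms, not 500 ms
```

Because each thread spends almost all its time blocked on I/O, five requests finish in about the time of one.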


Why not? Thread pools are great for bulk operations, and long API requests are exactly the kind of thing to do in parallel when possible.


There's a runtime overhead of spinning up a bunch of threads that are just going to sit idle waiting for an API response. There are better asynchronous models for that sort of thing.


Even if starting a new thread takes 1ms, that pales in comparison to the 300ms network connection.

On my laptop, thread creation (pthread_create() followed by pthread_detach()) takes ~17 microseconds.

When one end of a socket is on the Internet somewhere, connection time is ~80ms. If I start a thread and then have it create a network connection, eliminating the thread-spawn overhead couldn't save more than 0.02% (that's two percent of a percent) of the running time, by Amdahl's law.

The effect of thread creation just isn't significant here. If this utility is going to spawn 1,000 simultaneous API connections to twitter, the threads could all be started by the time the first connection succeeds.

(My numbers come from a class assignment where I benchmarked some unrelated stuff, but if you're interested, I can send you the report)
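The same kind of measurement is easy to reproduce from Ruby itself (numbers vary by machine and interpreter; this only shows that spawn cost is tiny next to an ~80 ms network round trip):

```ruby
require 'benchmark'

# Measure the cost of spawning (and joining) short-lived, empty threads.
n = 1_000
spawn_time = Benchmark.realtime do
  n.times { Thread.new {}.join }
end

per_thread_us = spawn_time / n * 1_000_000
puts "thread spawn+join: ~#{per_thread_us.round} microseconds each"
```

Even if a Ruby thread costs an order of magnitude more than the ~17 µs pthread figure above, it is still a small fraction of a percent of an 80 ms connection.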


Unless you plan to open 10K connections to Twitter, for a small utility like this it hardly matters.


No, long parallel requests are great to do concurrently on a single thread. A thread is a computation primitive: you spin off a thread when you want to compute many things at once, not just wait for many things at once.

If you spin up threads just for this, you're wasting memory and slowing down startup.

And if you destroy the threads and spin them up again for every batch of API calls, the result may, counterintuitively, be a slower app due to the overhead of creating the threads themselves.

Single-threaded concurrency, on the other hand, is nearly free: issuing a call asynchronously adds almost no overhead.
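One middle ground between spawn-per-call and full async is a small reusable pool, so the spawn cost is paid once rather than per batch. A minimal sketch using Ruby's built-in Queue (the doubling "job" is a stand-in for an API call):

```ruby
require 'thread'

# Reusable worker pool: threads are created once and fed jobs through a
# queue, so per-batch spawn cost is paid only at startup.
jobs    = Queue.new
results = Queue.new

workers = 4.times.map do
  Thread.new do
    while (job = jobs.pop) != :stop
      results << job * 2      # stand-in for doing an API call
    end
  end
end

10.times { |i| jobs << i }
4.times { jobs << :stop }     # one stop marker per worker
workers.each(&:join)

puts results.size             # 10 results from only 4 threads
```

The same four threads can be fed every later batch of calls, avoiding both the idle-thread waste and the repeated-spawn slowdown described above.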



