> I like the awkwardness of the default prefix key.
I am 100% in agreement with you. It takes all of 5 seconds to add:
unbind-key -T prefix C-b
set-option -g prefix C-s
To your .tmux.conf on your local laptop (where I use tmux 99.99% of the time) - without worrying about conflicts on that once-a-year occasion where you start up tmux remotely.
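If you do remap it, the usual companion line is something like:
bind-key C-s send-prefix
so that pressing the prefix twice still passes the key through to whatever is running inside (handy for nested tmux sessions).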
The only (deal-breaker for me) weakness of Zellij: it doesn't support copy/paste from the keyboard (from the screen/scrollback) and doesn't support multiple copy/paste buffers.
I do that roughly every 60-90 seconds with tmux - so, until the Zellij developers relent (they suggest the "proper" way of copy/paste is to pipe the data into a text editor and use that - which has the downside of not supporting the system clipboard), there's no option other than to stick with tmux (or fork Zellij - but that seems a bit much...).
It does - and we spent an hour or so reading through the code and affordances to see if there might be a possible path.
The general response is that this user behavior - selecting/copying/saving-in-a-named-buffer - is a very "tmux"-like usage pattern that the Zellij authors don't want to encourage in Zellij. Instead, they suggest bringing your preferred text editor (emacs, vim, etc.) and doing the select/copy/paste in that.
The problems for me are: (A) I know how to select/copy/paste very well in tmux and don't have the faintest clue how it's done in a text editor, and (B) there's no (easy) way to have multiple named buffers if you use a text editor, etc.
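For anyone who hasn't seen it, a rough sketch of the tmux workflow I mean (using the default emacs-style copy-mode keys; the buffer name and text below are just examples):
# prefix-[ enters copy-mode; Space starts a selection, Enter copies it into a new buffer
# prefix-] pastes the most recent buffer
set-buffer -b build-cmd "make -j8 check"   # stash text under a named buffer
paste-buffer -b build-cmd                  # paste that specific buffer
choose-buffer                              # prefix-= : pick interactively from all buffers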
This is also the dealbreaker for me, I use copy / paste from the scrollback buffer all the time. And also quick search through the scrollback buffer. I don't want to first pipe all the output to an editor or pager or something; that messes with the terminal colors, indentation, and it's an extra step, making the whole process slower and more tedious.
But I guess Zellij people don't use the keyboard so much for copy pasting. A lot of people just use the mouse.
Would any of the open-weight models from smaller labs even exist if they couldn't distill from the SoTA models that are throwing billions of dollars of compute into pretraining?
I’ve been wondering the same. And I think pretty much all the impressive small-lab models were guilty of it, right? At least there are still larger players like DeepSeek and Mistral to provide a bit of diversity in the market.
“Very likely yes”, I reply to an account that is <1yr old with mostly comments on AI topics, many of which violate the HN guidelines (including the one I’m responding to).
Strange gatekeeping response. Yep, I comment on topics I'm interested in. Forgive me for not being on the platform for more than a year yet. That's a cute attitude.
Another thing that multiple generations of MacBook Airs used to do is constantly run (sometimes quite painful) amounts of electricity through your wrists if they accidentally touched the metal.
Not sure if the Apple Silicon devices have the same issue - but it was consistent through at least 3 different generations.
"Last year, our 1,500 posts earned roughly 13 million impressions for the entire year."
13 million impressions? And how much did they pay to reach that audience? I'm absolutely gobsmacked that any organization is willing to walk away from 13 million impressions a year, and very interested to know how many impressions/year they get on their top-ten outreach platforms if 13 million impressions/year (presumably for free???) is something not worth the effort of dropping onto X.
I'm a lifetime EFF member and have given them money multiple times, but this article is also clearly missing:
1. Are they spending less to get content promoted?
2. Are they posting links outside of twitter back to twitter less often?
3. Are they including links to twitter in all their site traffic like they used to?
4. Is their site traffic in general the same as it used to be?
There is no analysis - just flat contextless numbers clearly designed to make it sound like "X is dying, we're taking our ball and going home" in a sour grapes sort of way.
disclaimer: anti elon, very pro-LGBT+, pro-EFF aside from weird political snipes
> disclaimer: anti elon, very pro-LGBT+, pro-EFF aside from weird political snipes
I'm actually with you on basic philosophy, but the weird political snipes undercut everything they're doing, and I can't support any nonprofit that stonewalls questions about what they're doing with my money.
> We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago.
Given that social media posts are not free, in the sense that someone or something has to put some effort in to format the message for that particular site, I can see how a simple cost calculation would show that it is no longer worth it.
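Back-of-the-envelope, taking their own figures at face value: 50-100 million impressions a month across 5-10 tweets a day works out to somewhere in the hundreds of thousands of impressions per tweet in 2018, while 13 million over 1,500 posts is under 9,000 per post last year - which is roughly where their "less than 3%" figure comes from.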
They are posting the same content in virtually identical format to other twitter clones.
The whole process can be automated; the marginal cost is nothing.
I hope they ran the numbers and did some cold surveying/analysis/postmortem before deciding that.
What's worse is that those aren't shitty ad impressions. Interested people will be following, maybe even expecting to see them. In addition, and ironically, other interested people will be algorithmed into their orbit.
E.g. I read more of a blogger I like because I follow him on LinkedIn rather than via his RSS feed.
> Interested people will be following maybe even expecting to see them.
But they won't. That isn't how modern social networks work, and X definitely isn't an exception. The chronological feed of people you follow is long gone.
I was recently at a brown bag at work - regarding enablement of AI in the workplace (it was awesome - all over the roadmap) - and a member of the audience asked the speakers (a very diverse group of people) how on earth they keep up with all the developments in AI.
All six of the speakers immediately said Twitter was realistically the only place you can keep up with the conversation. Having an extensively curated list means that anytime anything breaks (and often a few hours before) you are going to hear about it on X/Twitter.
I would love to know if there is anything even close to the reach of X. It has a lot of problems - but if you want to track breaking news, I can't think of anything else close to it.
You can still stay pretty up to date (at least in AI) without even being on X, since everything distills out to every other platform anyway. Between /r/LocalLlama and the ThursdAI and Latent Space newsletters, I'm at most only a few days away from whatever the latest hype is.
I absolutely agree with your sentiment - but it is often the case that you will get into the office at 9:00 AM - and everyone is talking about the biggest release/development that morning - and by lunch it's kind of old news and people have moved on to the new thing - and so by the time you are interested in talking about the thing that happened last week - implications, use, whether it's legit or just hype - people have all moved on to the next thing.
HN is a nice consolidated view - and I pull up the home page 2-3 times a day (and have done so every day for 10+ years) - but there is a firehose of information coming in on X - particularly if you have a very highly curated list - and some people are insanely high signal - Karpathy, for instance, always seems to zoom in on the important things.
> but it is often the case that you will get into the office at 9:00 AM - and everyone is talking about the biggest release/development that morning - and by lunch it's kind of old news and people have moved on to the new thing
That's literally just gossip. The same dynamic existed with episodes of Friends and Game of Thrones.
Everyone gathers around the water-cooler and discusses the newest happenings, but that's not science and it's not engineering. You're not passing around serious white papers and looking over peer reviewed publications and datasets, it's just... gossip. It has the same value as gossip and is completely optional.
The big issue with this approach is that it will destroy your sanity over things that are often a big bag of hype with nothing underneath. I often find HN to be better, because things that get on the front page are vetted beyond 'someone on twitter hyped up a thing'.
HN is still great, but it’s in decline; I still hear about AI developments on r/LocalLlama and X sometimes weeks before they make it here, if they make it at all.
And all the commentary here is negative, skeptical, and mean. It’s like Slashdot when Apple started ascending and everyone was complaining that iPods would never catch on.
> things that get on the [HN] front page are vetted beyond 'someone on twitter hyped up a thing'
Interesting take. I'm not aware that anyone is doing vote rings or vote buying very successfully (considering that my own blog also makes it at an expected rate, and I know there isn't a group of friends voting that up), but I kinda assume that this is a thing for some of the bigger launches where they are hoping for conversions. Beyond a defined group coordinating their posts or votes, though, surely HN's front page can't be seen as vetted beyond "oh this looks trendy/hype"? People don't vote only after trying out the product or reading the full article. In many cases that would mean voting after it has already disappeared off the front page for good.
I had to reluctantly create an account on twitter after years, for exactly the same reason. AI research discussion is more active there than anywhere else. I've tried using nitter's RSS feed to stay off the platform, but it was limiting.
Well, Twitter has a lot of separate spheres. It's pretty easy to curate just tpot (the part that concerns itself with the Bay area, venture capital, and so forth) by following the right people and then engaging with posts that are on-topic.
Even when it was Twitter, drinking from the firehose didn't really make your life better. I don't need a two-sentence breaking update from a Miyazaki baby to stay on top of this stuff, and quite frankly, if they can't be bothered to make a blog post or press release it's probably just noise anyway.
I very much appreciate the sentiment - and agree about the random crap (particularly some of the insane dependency chains that you get from NPM, but also Rust) where you go to install a simple (or so you believe) package - and the Rust/NPM package manager downloads several hundred dependencies.
But the problem with only using the OS package manager is that you then lock yourself out of the entire ecosystem of node, python, rust packages that have never been migrated to whatever operating system you are using - which might be very significant.
How do you feel about Nix? It feels like a nice half-way measure: reliable/reproducible builds, but without all of the free-for-all where you are downloading who-knows-what from who-knows-where onto your OS.
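For concreteness, and purely as a sketch (the packages named here are arbitrary examples), the ad-hoc flavour of that looks something like:
# a throwaway shell with just these tools on PATH, nothing installed globally
nix-shell -p python3 nodejs
# or pin the same set per project in a shell.nix / flake.nix and have direnv load it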
Solved with direnv. Also - in my .bashrc in all of my (many) clients:
$ type uvi uvl uvv
uvi is a function
uvi ()
{
    uv pip install "$@"
}
uvl is a function
uvl ()
{
    uv pip list
}
uvv is a function
uvv ()
{
    uv venv;
    cat > .envrc <<EOF
source .venv/bin/activate
EOF
    direnv allow
}
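With direnv's shell hook already installed, the day-to-day flow with these helpers is then roughly (the project and package names below are just examples):
mkdir demo && cd demo
uvv               # create .venv, write the .envrc, direnv allow - the venv now activates on cd
uvi requests      # uv pip install into that venv
uvl               # confirm what's installed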
I've used python for roughly 15 years, and 10 of those years I was paid to primarily write and maintain projects written in Python.
Things got bearable with virtualenv/virtualenvwrapper, but it was never what I would call great. Pip was always painful, and slow. I never looked forward to using them - and every time I worked on a new system, the amount of finagling I had to do to avoid problems, and the amount of time I spent supporting other people who had problems, was significant.
The day I first used uv is about as memorable to me as the day I first started using Python (roughly 2004) - everything changed.
I've used uv pretty much every single day since then and the joy has never left. Every operation is twitch-fast. There has never once been an issue. Combined with direnv, I can create projects/venvs on the fly so quickly that I don't even bother using its various affordances to run projects without a venv.
To put it succinctly - uv gives me two things.
One - zero messing around with virtualenvwrapper and friends. For whatever reason, I've never once run into an error like "virtualenvwrapper.sh: There was a problem running the initialization hooks."
Two - fast. It may be the fastest software I've ever used. Everything is instant - so you never experience any type of cognitive distraction when creating a python project and diving into anything - you think it - and it's done. I genuinely look forward to uv pip install - even when it's not already in cache - the parallel download is epically fast - always a joy.
All of them (well - no HPUX in 15+ years, and I've never used uv on Solaris or AIX) - but the two major client-side environments that I use uv in would be WSL2+Ubuntu/ext4 (work) and macOS/APFS at home.
But neither the speed nor the constant issues with pip/virtualenvwrapper are really a function of the OS/file system.
A frequent theme in this thread (probably most clearly described in https://news.ycombinator.com/item?id=47444936) is that relying on your Python environment to manage your Python environment always ends up in pain. Poetry had this issue as well.
One of the key architectural decisions Astral made was to write the Python environment management tooling in Rust - so that it could never footgun itself.
I'm thoroughly enjoying this thread, by the way - between someone who is clearly informed and educated in platform research, and pretty enthusiastic and interested in the field, and yourself - a deeply experienced engineer with truly novel contributions to the conversation that we don't often see.
Very much looking forward to more of your insights/comments. Hopefully your NDA has expired on some topics that you can share in detail!
Thank you for your comment. I started this thread just as a simple "job well done" to the authors. I didn't expect to be told that my work doesn't exist. ;-)
No one ever notices plastic surgery when it is done well. The same can be true for obfuscation. But, as I indicated, no amount of obfuscation is foolproof when dealing with experienced, well-funded attackers. The best you can do is make their task annoying.