
Seriously confused: the article first screws up royally by saying it's not open source, which they correct, but then complains about it being "shoved down their throats" ... a feature that exists but is not enabled by default and doesn't even work if you don't provide a key.

It's absolutely ridiculous that supporting something that's opt-in, not opt-out, is causing a ruckus lol, people really have too much fuckin time on their hands to bitch about things.

This is like bitching that Firefox allows us to enable a proxy server, and throwing a fit that a feature exists even as an option to be enabled.




You're right across the board. My only issue with it is general-purpose AI fatigue. Everywhere I look, blog posts use the same generic-looking AI art (I can't quite put it into words, but you probably know the look I'm talking about), social media posts are written up using genAI (e.g. Linkedin will now ask you if you want AI to write your post for you -- though let's be honest, original thoughts are few and far between on there to begin with), and while interviewing recently I received multiple warnings about disabling any AI assistants in my editor (to me, it's kind of a bummer that that's a big enough issue to mention at all).

I have, in principle, nothing against an opt-in feature that requires unmistakable user consent and a specific sequence of actions to enable. I'm just kinda tired of AI in general, and I'm also worried about potential licensing issues that may arise if you use genAI in your terminal to write scripts that weren't permissively licensed before being used as part of a training set. That's nothing new though, I had, and have, the same concerns with Github Copilot.

I also recognize that my complaint is pretty personal (not counting the licensing thing). My low-level annoyance with genAI isn't something the AI industry at large should seek to resolve. Maybe I'm more set in my ways than I should be at this point in my life. Either way, it's a factor for me and a few other tech people I know.


> Everywhere I look, blog posts use the same generic-looking AI art (I can't quite put it into words, but you probably know the look I'm talking about)

They got that AI grease on them


Oh, that's easy, you just add the words "but don't make it look greasy", and as a bonus you're now a fully accredited Prompt Engineer! :p


Realistically, you use models that make it easier to prompt away from greasiness:

Positive prompt: 1girl, hacker, laptop, terminal, coffee, green hair, green eyes, ponytail, hoodie

Negative prompt: worst quality, low quality, medium quality, deleted, lowres, comic, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry, looking at viewer

Steps: 25, Sampler: DPM++ 3M SDE Karras, CFG scale: 7, Seed: 1313606321, Size: 768x384, Model hash: 51a0c178b7, Model: kohaku-xl, VAE hash: 235745af8d, VAE: kohaku-xl.safetensors, Denoising strength: 0.7, Hires resize: 2048x1024, Hires upscaler: Latent (bicubic), Version: v1.6.0-2-g4afaaf8a
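
If you want to reproduce that outside the webui, here's a rough sketch against a local AUTOMATIC1111-style txt2img API (assuming the webui was started with --api on its default port; paste the full negative prompt from above where it's trimmed here):

    curl http://localhost:7860/sdapi/v1/txt2img \
      -H "Content-Type: application/json" \
      -d '{"prompt": "1girl, hacker, laptop, terminal, coffee, green hair, green eyes, ponytail, hoodie",
           "negative_prompt": "worst quality, low quality, medium quality, blurry, watermark",
           "steps": 25, "cfg_scale": 7, "seed": 1313606321,
           "width": 768, "height": 384,
           "sampler_name": "DPM++ 3M SDE Karras"}'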


Now you can be like every other obnoxious tech blogger putting ai art crap in their header.


Hah! That's actually a pretty good way of describing it.


The best is going to boingboing, and seeing that shit art everywhere, while they have posts whining about AI art. Be Consistent. You're just being part of the problem if you can't even abide by the basics.

I never thought I would end up hating the whole tech world so much, and I thought "Crypto" was peak - but that was just a bunch of scammy dudes and rugpulls for suckers. This? Everyone is suckers. In theory there's a case to be made for it, but I trust none of the entities involved in pushing this out.

For about 5 years I thought MS was going to do something good. WSL2 was actually good tech and they seemed to uh... "embrace" open source. But since 2020 I feel like things are just going downhill.

My inner old man yells at the lawnmowermanchild: GET OFF MY LAWN.


> while interviewing recently I received multiple warnings about disabling any AI assistants in my editor

Weird. Does the company forbid its staff to use AI assistants?

I get that they want to find out what you know. If you know how to solve problems using an AI, isn't that what they are going to (increasingly) expect you to do on the job?

In fact, demonstrating that you can effectively use AI to develop would seem to me to be something they'd want to see.


The stated reason was that they wanted to understand my ability to solve a problem and my thought process when faced with their problem. Having run technical interviews in the past, I completely agree with the reasoning.

While I don't use it, I'll grant you that AI is good at solving small, repetitive, tedious problems; stuff that's maybe a bit too domain-specific to be widely available in a library, but that's consistently subtly different enough that you have to sink time into either making different implementations or trying to generalize.

AI is generally going to be poor at solving novel problems, and while that's something that you can never really use in an interview for a variety of reasons, I can send you a problem that's likely new to you and see how you tackle it.

I'll also admit that there's not a single good way to do that as it is. Technical interviewing is more of an art than a science, and it's difficult to get really good signal from an interview, generally speaking.


If you're experienced with how to use an LLM, they can be very good at helping with novel problems and much more than boilerplate.

I've been able to tackle problems that would have otherwise taken up too much time and effort between my jobs as both an expert witness for software-related federal lawsuits and a father of two young children.

Here's just a sampling of what I've accomplished since I started using these tools:

A convolutional neural network trained using PyTorch to predict the next chord (as tablature) on a guitar based on image data. Includes labeling software for the image data as well as an iOS app for hosting and running the model.

A web framework written in C using clang blocks for the DSL, a per-request memory arena, a complete JSON-API compatible ORM, JWT auth, and enough sanitizers and leak detection to make your head spin.

A custom CLI DSL modeled somewhat on AWK syntax written with Python Lex Yacc for turning CSV data into matplotlib graphics.

A custom web framework written in F# that compiles to JS with Fable.

LLMs helped with looking up documentation, higher level architecture, fixing errors, and yes, plenty of boilerplate!

All of this being said, I brought with me almost 30 years of experience writing software and a pretty good high-level idea of how to go about building these things. The devil is in the details and that's where the LLM was most useful.


Sure, but none of what you mentioned actually solves novel problems. It's a helpful tool, sure, but that was never up for debate.

Beyond that, if it helps you spend more time with your family while still being good at your job, that's always a win.


I'll agree that I was the one ultimately responsible for solving the novel problems!


Most people don’t actually work on novel problems though. They build CRUD web apps. Copilot shines there.


And leetcode interview problems are not novel either. They are formulaic, you just have to know the formulas.


The ability to solve novel problems is tested like a marathon, not like a 100m race.


> If you know how to solve problems using an AI, isn't that what they are going to (increasingly) expect you to do on the job?

Why would you work somewhere that prescribed your workflow?

You're a professional. Do your job as you see fit, whether that involves AI assistance or not.

> demonstrating that you can effectively use AI to develop, would seem to me to be something they'd want to see.

Irrelevant. They want you to meet business needs more than play with today's shiny technology.


The trouble is that AI assistants have seen all of the contrived algorithmic problems before. I once interviewed a candidate for whom Copilot spat the answer out as soon as he typed the function signature. Whether these problems are a good idea in the first place is a separate discussion, but as it stands the AI seems to just sidestep the whole thing.


I currently refuse interviews that include shit like leetcode, as they are a waste of time, and I'm glad that LLMs are ruining them.


I hate them too, but I'll put up with them out of necessity if the product or company mission's interesting to me.

Ironically, the most interesting positions I've held have almost universally been at companies that don't have leetcode-style questions as part of the hiring process.


The interview (test) isn't about how well you know your IDE or the compiler tools; the test is: can you work hard (study leetcode) and do you have the mental ability to achieve your goals (a job offer)?

IMHO, this is the same as disabling linting or compiling. Or why companies (used to) do coding challenges on a whiteboard.


You'll think I'm kidding, but at least a third of Google isn't sure if AIs can code, and another third thinks that if they can, the code is too bad to bother with. Things like "gee idk, VS Code autocomplete is pretty cool" are either non sequiturs or a battleground.


I don't think it's uncommon for a feature to start as opt-in, then turn to opt-out, then finally turn to built-in, so it makes sense to me to be wary if the feature is something you don't want in something as typically light and not overburdened by features as a terminal emulator.


Yes, but you have to take into account what kind of product it is, who the developer is, and their track record.

It's different if a paid or ad-supported product wants to increase the number of people who use a feature vs. an open source developer adding a feature for people to use, with no benefit to them from people using it. iTerm already has a lot of advanced options and this is just one more of them.


Relevant track record: the developer thought this would be a worthwhile "feature" to add. Frankly, that says enough. Even if they do eventually give in and remove it, the stain will still be there.


You're talking about someone who has been publishing high quality open source software, beloved by millions of users, for over a decade. Have some respect, man.

You might disagree with the way a specific feature works, but this is absolutely no reason to talk about someone like Mr. Nachman this way.


What is disrespectful about gp's comment? It's okay to disagree with decisions made by even the most dedicated maintainers; I've switched to software (forks, or similar projects) many times on principle - I respect JetBrains, but disagreed with an awful decision they made that was a stain and made me question their judgement and avoid their otherwise excellent IDEs whenever possible. I haven't bought any personal licenses from them since - is that disrespectful in your books?

I hope iTerm moves this functionality into a plugin with the next version.


Read their remark again. Tell me in earnest if that is something you would like to hear about yourself, being told to your wife or boss.

We’re not talking about a faceless company like JetBrains here, but a single human being, for chrissakes.


It's not disrespectful.


I for one am horrified at the sheer amount of ill will and sense of entitlement directed towards a developer who has been working tirelessly on high quality free and open source software in his spare time. It's a thankless job he has continued for more than a decade.

These people have benefitted from iTerm2, possibly for years. Instead of giving back or thanking the person behind it, they chose to flood HN and the issue tracker with insults and lies. They literally pretend as if he's trying to sneak in spyware that steals each and every keystroke. They berate, mock, and question the morality of him and anyone else who dares to push back.

I see this kind of bullying behavior in some open source communities and am taken aback every time I see it. The empathy on display here is ironically less than an LLM's, and that's saying something.


It appears your issue is not with the comment at the head of this thread specifically, but generally with other comments elsewhere (on HN and Github); that doesn't make the comment disrespectful or untrue by association.

AI is a lightning rod issue; while insults are not ok, disagreement is inevitable, and disagreeing with a decision a maintainer made is orthogonal to dis/respect.

Open Source is fickle - you can diligently work on a project for years and have it forked for mundane reasons, such as some people thinking you're not doing things fast enough for their liking, or because you included a controversial feature - and they won't have to pay you or even send you a "thank you" note. That does not change the value of the contributions.


100% this, it's honestly a bit embarrassing reading through some of these comments or GitLab issues ("I would donate if you remove this feature") as a fellow developer. I expected a bit more of my peers on a developer-focused tool.


"Just for the future, I'd like to have a discussion before introducing highly controversial 'features' like this" is another one. Not only does it come off as entitled and condescending, it also shows they haven't bothered to even look at the existing development process. iTerm2 already has test releases with detailed release notes.

https://iterm2.com/downloads.html

And oh, they're now demanding an apology ... for a gift they willingly grabbed without a single thank you.


> It's absolutely ridiculous that supporting something that's opt-in, not opt-out, is causing a ruckus

iTerm is pretty extensible, and there are other ways of making the AI bloat (IMO) opt-in, without including it in the core software.

The biggest issue for me is that it increases the attack surface of iTerm2 with no tangible benefit (to me). I'd be similarly upset if they added an opt-in "Share on Facebook/StackOverflow" feature. I'd seriously consider switching to a purist fork that doesn't integrate social-media sharing as a core feature of a terminal app.


> The biggest issue for me is that it increases the attack surface

What's your threat model?


> What's your threat model?

Why do you ask, will you help with designing a mitigation plan?

I'll humor you: It's a turnkey gadget for sniffing/exfiltrating the output of any open iTerm2 shell.


Because you’re already using other software that has LLM integration. What specifically about this iterm2 impl makes the threat more real??


> Because you’re already using other software that has LLM integration

Oh really, which software would that be? And which other LLM-enabled software connects production environments or has access to auth credentials/tokens?


How do you know what other software they are using?


Is someone not using GitHub these days? Or web search? Or macOS? Or Windows?


I use GitHub, I don't use its copilot.

I use web search, I don't use LLM websites.

I use MacOS, I don't use Siri.

I use Windows, I don't use Cortana/Copilot.

------------------------------------------

I don't want LLMs to parrot back code from other projects without understanding what that code does and what my code does. I don't want it to parrot back irrelevant slop.

And I especially don't want it to parrot:

rm -rf $BUILDDIR/ && ./build-project.sh

and just hallucinate the assumption that $BUILDDIR is already defined.
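
The safer pattern (just a sketch, assuming a POSIX-ish shell) is to fail fast when the variable is unset, instead of letting it expand to "rm -rf /":

    # abort with an error if BUILDDIR is unset or empty
    rm -rf "${BUILDDIR:?BUILDDIR is not set}/" && ./build-project.sh

But that's exactly the kind of guard I don't trust an LLM to include on its own.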


But GitHub doesn't ship Copilot as a separate binary. So the threat vector of "AI has no place in my VCS, get it out, it increases the surface area" is there. So it's okay for GitHub to have Copilot but not for iTerm2 to have Codecierge? Doesn't add up.


Github isn't a binary, it's a repo host. Github can hallucinate whatever it wants, it's not going to brick my computer.

A terminal on the other hand...


The point here is about compliance. I agree it'd be stupid to pipe the output of an LLM to a terminal's command line. But people are saying they can't use iTerm2 now because compliance says no AI, and having an MDM-secure way to disable the functionality is not enough because _there could be a bug_ or something. Yet they're checking commits, in presumably the same compliance regime, into other software with AI features.


Github doesn't come with Copilot, even on the enterprise plan.

You have to explicitly pay for it and add it to your repo.


> people really have too much fuckin time on their hands to bitch about things

I always find it so puzzling when people say this. If this author has "too much time" what would the ideal society look like to you? Would you prefer to live in a society where every person is worked to the bone for optimal productivity, such that there is not even a spare 30 minutes in their week to write something that isn't generating economic output? I really want to know what you mean when you say this.


I want to put my hand on people's shoulders and calmly say 'take it easy'. So many times when I'm reading hn.


Brave’s crypto reward features might be a better analogy, but I like the comparison.

Fortunately I get paid to throw shade. Here is a free sample:

“Unsupervised use of LLMs can increase the risk of exposing sensitive information. Mitigation strategies and capabilities are not mature.

“Exploiting AI requires rapidly increasing the capital expenditures (input) per worker while the horizon for productivity gains (output) is uncharted and unknown.”


Except LLMs have real value today, unlike crypto, which is mostly used for scams and speculative trading. That's what makes this different from Brave. Also, I don't think the CEO has any integrity, whereas the iTerm2 developer has been making an incredible product for free for over a decade.


> It's absolutely ridiculous that supporting something that's opt-in, not opt-out, is causing a ruckus lol, people really have too much fuckin time on their hands to bitch about things.

People have preferences. They find their preferences meaningful to them. This means there will always be a healthy competitive market of alternatives to choose from in order to serve those preferences. This is not a bad outcome. Why do you find it "worthy of ridicule?"


> there will always be a healthy competitive market of alternatives

I'm not complaining about iTerm2, but this statement on its own is not true.

For example - a healthy competitive market for phones that respect your privacy. Cars without touchscreen controls for everything. Televisions that are not "smart".

Honestly, I hope that iterm2 gets a "local AI" feature.

Running a local LLM or stable diffusion is fun.


That was specific to the world of open source.

I can't do anything about our government failing to enforce antitrust laws that have existed for more than a century. If they did enforce them then perhaps this statement would have wider truth to it.


preferences aren't above criticism. i see many people on the internet with the misapprehension that just because something is "an opinion", that somehow ought to shield them from people being able to say "that's a dumb opinion"


> preferences aren't above criticism

Sure. I just wouldn't expect to get particularly far with that strategy.

> ought to shield you from people being able to say "that's a dumb opinion"

We're talking about a preference of which terminal emulator to use. Is there some larger social consequence that should be addressed this way?

Openly genocidal, racist attitudes are "dumb opinions" that are worthy of public challenge. Whether or not I want to use iTerm2 because they added LLM interfaces probably is not.


i feel like you are treating 'dumb' as synonymous with 'reprehensible'.

worthy of public challenge? we're commenting on an internet forum. not using a terminal because it has added a feature that is opt-in seems silly to me; it doesn't have to be racist or genocidal to reach the threshold of 'worthy to comment on'


you’re being silly on this just as much as you’re being silly about iterm

yes, in the context of an internet discussion about iterm we can of course say that you’re being silly, even if genocide exists. Don’t be silly.

it's not exactly like the front page of the NYT is discussing akira2501's iterm stance instead of genocide; here, you're just having an emotional tantrum over some random computer thing.

people have just gotten silly over this whole thing; 'visceral emotional reaction' is about right.


Because they’re going beyond their personal preference - which could be fulfilled by never using this feature - by trying to press their legitimate preferences onto others by blackballing companies that even touch LLMs.


> beyond their personal preference

My preference is to not use software which has these features in it. I don't need my terminal software having a network request layer which can exfiltrate my data to unknown third parties.

You might feel comfortable with a labeled "off switch" but many of us do not. Is that an allowed preference? Or should I be ridiculed?

> press their legitimate preferences onto others

You mean describe their preferences openly and allow others to come to their own conclusion? Or are you suggesting that they're bullying other people into this position against their will?

To me it seems the opposite. Whenever these criticisms come up there is a contingent dedicated to minimizing them to the point of suggesting that they should be openly ridiculed. Or that they've misunderstood their own preferences. Or that they've taken them "too far" somehow.

> by blackballing companies that even touch LLMs.

Yes, preferences even extend to economic decisions. People often forget that this is the basis of all economy.


iTerm has had a font fetcher, a crash reporter, and an automatic updater that all hit the internet for a while now. Did anyone care? Is there anybody who turned off the crash reporter but started a fairly mean-spirited thread because the crash reporter was still in the binary? Is there anybody who didn't bother reading the code for what the crash reporter did and expressed that they're not "comfortable with a labeled 'off switch'"?

Seems like this preference only comes up when the endpoint is an LLM. It's an isolated demand for rigor rooted in the visceral emotions LLMs seem to inspire.


iTerm did have at least one rather serious data leaking bug in the past. It was introduced in a new feature and enabled by default.

People (or power users at least) tend to remember incidents like that. I don't think it's entirely "just because LLMs".


I would find this persuasive if I could find a single mention of this bug in either of the GitLab threads or the original blog post.

Also, what was the bug?


This has been fixed now. The article's main point still stands IMO, i.e. that iTerm2 focused on OpenAI and not some local workflow by default.

Optional or not, I’d like core features to be privacy friendly and provider agnostic. Otherwise a plugin might be a better fit.

> I think that one of the greatest errors that was made with putting this in iTerm2 was making a big show of it, and by not letting you use local models (such as with Ollama) instead of having OpenAI be the only option.


I think you can hardly call it "focused on" when it's one feature out of many, many updates made to the software. Also, I believe you can retarget the calls to a local instance of a model behind an OpenAI-compatible API, and it will happily use that.

Seems like lots of knee jerking going on.


I meant "focused on" in the sense that the AI features specifically are focused on OpenAI, to the point of mentioning only OpenAI in the changelogs.

I haven't seen an official demo of using the AI features with a local instance. I believe it should be the other way round: the focus should be on local (because, again, that is provider-agnostic and privacy-friendly).


You can change the endpoint, which anyone could learn from reading the comments on the release yesterday or reading the wiki.

> I think that one of the greatest errors that was made with putting this in iTerm2 was making a big show of it, and by not letting you use local models (such as with Ollama) instead of having OpenAI be the only option.

There is not a single line in this that is true. A big show was not made, and you can use Ollama with it. The "big show" was made by other people, not the developer behind iTerm2.
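
For what it's worth, Ollama exposes an OpenAI-compatible endpoint on its default port, so pointing the base URL at it is enough. A rough sketch (the model name is just whatever you've pulled locally; llama3 here as an example):

    curl http://localhost:11434/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "llama3", "messages": [{"role": "user", "content": "explain: tar -xzvf archive.tgz"}]}'

Anything that speaks the OpenAI chat completions API can be aimed at that URL instead of api.openai.com.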


Having a proxy setting actually is a problem, which is why, for example, Microsoft Windows Server lets you create a policy that prevents users from configuring it.

Codecierge allows terminal scrollback, and its presumably unredacted data, to be sent to a third party. It is a conduit for data exfiltration and may violate a whole bunch of compliance policies, so if it cannot be disabled, it may put companies in violation of those compliance certifications.


Features you don’t use still add complexity, bugs and potential security and privacy issues.


You can always fork it if you disagree with the direction of development.

Everyone of course knows perfectly well that it's not anywhere near that level of concern in reality; one might say it's just concern trolling, in fact.

But if you and enough other people really do feel strongly enough, you can maintain a security-focused fork with… removing an optional LLM thing that requires manually entering a key. Sure.


“If you don’t want to maintain your own terminal app, don’t criticize”

The “love it or leave it” of the open source world. Sorry, I don’t buy that framing. No one should be immune from criticism.


Sure, but the GitLab thread is not respectful or constructive criticism. It's a bunch of knee-jerk users saying they can't use iTerm2 anymore and overtly threatening and/or bribing the devs because they don't like any product that includes a nice UX to interface with an "AI" language model.


Looking at the GitLab issue, I have much more trust in the current developer than in whatever (pitch)forks might appear out of there.


A proxy server is not a controversial feature.

The proxy server was not created through petabyte-scale plagiarism.

A proxy server does not use half a million kilowatt hours of electricity a day.

This is nothing like complaining that Firefox allows you to enable a proxy server.

I use ChatGPT but I also think the AI detractors have some good points...


Fortunately, the iTerm feature is not mandatory, nor are they now sponsored by OpenAI or neglecting other "duties" (it's a free and open-source project) as far as I can tell.

iTerm has always been an "everything and the kitchen sink" type of project/software. If you want minimalism, especially in the interest of security, it's definitely not the terminal emulator for you – its attack surface must be massive.


Yeah, I'm not one of the people mad at iTerm right now, I'm simply saying those people aren't absolutely deranged and there are good reasons to balk at the inclusion of AI features where they don't need to exist.

I happily use iTerm2 and will continue to do so.


Looking at the issue tracker, they're abusive bullies acting like they own the project though. They could've raised it nicely instead of balking, and stuck to facts instead of making wild claims about bait-and-switch spyware. Their behavior is anything but normal.


Looks like they got what they wanted.


If you think AI is bad, wait til you hear about humans...

I'm being facetious, but my point is that raw power/data usage isn't by itself a bad thing, as long as it is providing commensurate value. Now you can argue they don't do that yet, but that would require a lot more nuance than "using resources bad".


I don't think "AI bad" but I think some of the people saying "AI bad" have interesting reasons for saying so.

For the record I believe a lot of the "AI Bad" discourse is a direct carryover from "NFT/Crypto Bad" discourse. A lot of the annoying voices that were loudly promoting NFTs and dodgy web3 companies two years ago are now LOUDLY promoting dodgy AI companies...

Some of it still rings true; a lot of it seems like "twitter is still mad"


The parallels even extend further: A lot of the people (largely correctly!) crying "scam" back then for crypto are now crying scam for AI.

If I had to draw a historical analogy, it'd be to the dot com bubble: Yes, it was a bubble – yet the Internet turns out to have been real. It just was almost impossible to guess correctly as to which company would still be around after the bubble burst (to say nothing of the now-giants that didn't even exist back then).

But that kind of bubble is very different from crypto, which so far has yet to prove that there was any substance to it, despite having gone through at least one global hype cycle and bubble burst two years ago. I don't recall there being two years of frantic search of an application for the web/Internet back in the early 2000s.


Just wait until the enshittification hits. The power and compute to train and run AI models aren't free and all of these products integrating AI are going to get... interesting when the AI companies start trying to get to profitability.


Did you actually read the article? The conclusion is exactly what you state. No big deal. It's just a meandering and fun read. YMMV.


I got the sense that he’s speaking of the same people the author of the post is speaking of in the article, not the author himself


> Please call me (order of preference): Xe/xer, They/them or She/her please.

https://github.com/Xe


I don’t want to make more of it than it is, but reader mode cut that part out for me.



