Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.
All this IDE churn makes me glad to have settled on Emacs a decade ago. I have adopted LLMs into my workflow via the excellent gptel, which stays out of my way but is there when I need it. I couldn't imagine switching to another editor because of some fancy LLM integration I have no control over. I have tried Cursor and VSCodium with extensions, and wasn't impressed. I'd rather use an "inferior" editor that's going to continue to work exactly how I want 50 years from now.
Emacs and Vim are editors for a lifetime. Very few software projects have that longevity and reliability. If a tool is instrumental to the work that you do, those qualities should be your highest priority, not whether it works well with the latest tech trends.
Ironically, LLMs have made Emacs even more relevant. The medium LLMs work in (text) happens to match up with how Emacs represents everything (text in buffers). This opens up Emacs to becoming the agentic editor par excellence. Just imagine: some macro magic around a defcommand and voila, the agent can do exactly what a user can. If only such a project could have funding like Cursor does...
Nothing could be worse for the modern Emacs ecosystem than for the tech industry finance vampires ("VCs," "LPs") to decide there's blood enough there to suck.
Fortunately, alien space magic seems immune, so far at least. I assume they do not like the taste, and no wonder.
Why should the Emacs community care whether someone decides to build a custom editor with AI features? If anything this would bring more interest and development into the ecosystem, which everyone would benefit from. Anyone not interested can simply ignore it, as we do for any other feature someone implements into their workflow.
Elnode should make this very easy, given the triviality of the MCP "protocol."
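Since the protocol really is just JSON-RPC 2.0 with a handful of methods, here's a sketch of the shape of a tools/call round trip, in TypeScript for illustration (the tool name and echo behavior are made up, and a real server also handles initialize and tools/list):

    // Minimal shape of an MCP tools/call handler: plain JSON-RPC 2.0.
    // Tool name and behavior below are invented for illustration.
    type JsonRpcRequest = {
      jsonrpc: "2.0";
      id: number;
      method: string;
      params?: Record<string, unknown>;
    };

    function handle(req: JsonRpcRequest) {
      if (req.method !== "tools/call") {
        return { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "Method not found" } };
      }
      const { name, arguments: args } = req.params as { name: string; arguments: unknown };
      // A real server would dispatch to the named tool; we echo as a stand-in.
      return {
        jsonrpc: "2.0",
        id: req.id,
        result: { content: [{ type: "text", text: `ran ${name} with ${JSON.stringify(args)}` }] },
      };
    }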
I would take care. Emacs has no internal boundaries by design and it comes with the ability to access files and execute commands on remote systems using your configured SSH credentials. Handing the keys to an enthusiastically helpy and somewhat cracked robot might prove so bad an idea you barely even have time to put your feet up on the dash before you go sailing through the windshield.
Yeah... I guess it's too niche. Scratch your own itch and FOSS it so the low hundreds of us can have fun or something.
I was exploring using andyk/ht (discussed on HN a few months back) to sit as a proxy that my LLM can call while I control the same terminal via xterm.js. I still need to figure out how to train the LLM to output keybindings/special keys and so on, but it's a promising start: I can indeed parse a lot of extra info beyond just a command. Just imagine if AI could use all of the shell's autocomplete features and feed on that...
Maybe I should revisit/clean up that repo and make it public. It feels like with just some training data on special key bindings etc., an LLM should be able to type, even if char by char, at a faster speed than a human, to control TUIs.
I'm not sure why you were downvoted. You're right that buffers and everything being programmable make Emacs an ideal choice for building an AI-first editor. Whether that's something a typical Emacs user wants is a separate issue, but someone could certainly build a polished experience if they had the resources and motivation. Essentially every Emacs setup is someone's custom editor, and AI features are no different from any other customization.
Emacs's diff tools alone are a reason to use the editor. I switch between macOS, Linux, and Windows frequently, so I settled on Emacs and am happy with that choice as well.
I’ve been using Aidermacs to access Aider in Emacs and it works quite well and makes lots of LLMs available. Claude Sonnet 3.7 has been reasonable for code generation, though there are certainly tasks that it seems to struggle on.
Cursor/Windsurf and similar IDEs and plugins are more than autocomplete on steroids.
Sure, you might not like it and think you as a human should write all the code, but the frequent experience across the industry in the past months is that productivity in teams using tools like this has greatly increased.
It is not unreasonable to think that someone deciding not to use tools like this will not be competitive in the market in the near future.
I think you’re right, and perhaps it’s time for the “autocomplete on steroids” tag to be retired, even if something approximating that is happening behind the scenes.
I was converting a bash script to Bun/TypeScript the other day. I was doing it the way I am used to… working on one file at a time, only bringing in the AI when helpful, reviewing every diff, and staying in overall control.
Out of curiosity, I threw the whole task over to Gemini 2.5 Pro in agentic mode, and it was able to refine its way to a working solution. The point I'm trying to make here is that it uses MCP to interact with the TS compiler and linters in order to automatically iterate until it has eliminated all errors and warnings. The MCP integrations go further: I am able to use tools like Console Ninja to give the model visibility into the contents of any data structure at any line of code at runtime too. The combination of these makes me think that TypeScript and its available tooling are particularly suitable for agentic LLM-assisted development.
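The loop itself is conceptually simple. A rough sketch of the compile-lint-fix cycle (askModel is a stand-in for the actual Gemini/MCP plumbing, which is considerably more involved):

    // Hedged sketch of the iterate-until-clean cycle described above.
    import { execSync } from "node:child_process";
    import { readFileSync, writeFileSync } from "node:fs";

    // Stand-in for the real model call (hypothetical; wire up your own).
    async function askModel(prompt: string): Promise<string> {
      throw new Error("connect this to your LLM of choice");
    }

    // Run tsc and eslint; return combined diagnostics, or "" if clean.
    function check(file: string): string {
      try {
        execSync(`npx tsc --noEmit && npx eslint ${file}`, { stdio: "pipe" });
        return "";
      } catch (e: any) {
        return `${e.stdout ?? ""}${e.stderr ?? ""}`;
      }
    }

    async function iterate(file: string, maxRounds = 5): Promise<void> {
      for (let round = 0; round < maxRounds; round++) {
        const diagnostics = check(file);
        if (!diagnostics) return; // all errors and warnings eliminated
        const fixed = await askModel(
          `Fix these diagnostics:\n${diagnostics}\n\nFile:\n${readFileSync(file, "utf8")}`
        );
        writeFileSync(file, fixed);
      }
    }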
Quite unsettling times, and I suppose it's natural to feel disconcerted about how our roles will change, and how we will participate in the development process. The only thing I'm absolutely sure about is that these things won't be uninvented; the genie is not going back in the bottle.
That wasn’t really the point I was getting at, but as you asked…
The reading doesn’t involve much more than a cursory (no pun intended) glance, and I didn’t test more than I would have tested something I had written manually.
Maybe it wasn't your point. But cost of development is a very important factor, considering some of the thinking models burn tokens like no tomorrow. Accuracy is another. Maybe your script is kind of trivial/inconsequential so it doesn't matter if the output has some bugs as long as it seems to work. There are a lot of throwaway scripts we write, for which LLMs are an excellent tool to use.
I use Rider with some built in AI auto-complete. I'd say its hit rate is pretty low!
Sometimes it auto-completes nonsense, but sometimes I think I'm about to tab-complete a method like FooABC and it actually completes to FoodACD; both return the same type but are completely wrong.
I have to really be paying attention to catch it selecting the wrong one. I really, really hate this. When it works it's great, but every day I'm closer to just turning it off out of frustration.
Arguing that ActiveX or Silverlight are comparable to AI, given the changes it has already brought and is still bringing, is definitely a weak argument.
A lot of people are against change because it endangers their routine, way of working, or livelihood, which might be a normal reaction. But just as accountants switched to calculators and Excel sheets, we will also switch to new tools.
Where is this 2x, 10x, or even 1.5x increase in output? I don't see more products, more features, fewer bugs, or anything related to that since this "AI revolution".
I keep seeing this being repeated ad nauseam without any real backing of hard evidence. It's all copium.
Surely if everyone is so much more productive, a single-person startup is now equivalent to 1 + X people, right?
Please enlighten me as I'm very eager to see this impact in the real world.
> is that productivity in the teams using tools like this has greatly increased
In the short term. Have fun debugging that mess in a year while your customers are yelling at you! I'll be available for hire to fix the mess you made which you clearly don't have the capability to understand :-)
Debugging any system is not easy; it is not like technical debt didn't exist before AI. People will be writing shitcode in the future as they were in the past, probably more, but there are also more tools that help with debugging.
Additionally, what you are failing to realise is that not everyone is just vibe coding and accepting blindly what the LLM is suggesting and deploying it to prod. There are actually people with decade+ of experience who do use these tools and who found it to be an accelerator in many areas, from writing boilerplate code, to assisting with styling changes.
In any case, thanks for the heads up, definitely will not be hiring you with that snarky attitude. Your assumption that I have no capability to understand something without any context tells more about you than me, and unfortunately there is no AI to assist you with that.
To be fair, I think the most value is added by Agent modes, not autocomplete. And I agree that AI-autocomplete is really quite annoying, personally I disable it too.
Coding agents can indeed save some time writing well-defined code and be of great help when debugging. But then again, when they don't work on the first prompt, I would likely just write the thing in Vim myself instead of trying to convince the agent.
My point being: I find agent coding quite helpful really, if you don't go overzealous with it.
Are you using these in your day job to complete real world tasks or in greenfield projects?
I simply cannot see how I can tell an agent to implement anything I have to do in a real day job unless it's a feature so simple I could do it in a few minutes. Even then, the AI will likely screw it up since it sucks at dealing with existing code, best practices, library versions, etc.
I've found it useful for doing simple things in parallel. For instance, I'm working on a large TypeScript project and one file doesn't have types yet. So I tell the AI to add typing to it with a description while I go work on other things. I check back 5-10 minutes later and either commit the changes or correct it.
Or if I'm working on a full stack feature, and I need some boilerplate to process a new endpoint or new resource type on the frontend, I have the AI build the api call that's similar to the other calls and process the data while I work on business logic in the backend. Then when I'm done, the frontend API call is mostly set up already
I found this works rather well, because it's a list of things in my head that are "todo, in progress" but parallelizable, so I can easily verify what it's doing.
SOTA LLMs are broadly much better at autonomous coding than they were even a few months ago. But also, it really depends on what it is exactly you're working on, and what tech is involved. Things are great if you're writing Python or TypeScript, less so with C++, and even less so with Rust and other emerging technologies.
The few times I've tried to use an agent for anything slightly complex or on a moderately large code base, it just proceeds to smear poop all over the floor, eventually backing itself into a corner.
I shortcut the "cursor tab" and enable or disable it as needed. If only AI was smart enough to learn when I do and don't want it (like Clippy in the MS days); when you are manually toggling it on/off, clear patterns emerge (to me at least) as to when I do and don't want it.
Bottom right says "cursor tab" you can manually manipulate it there (and snooze for X minutes - interesting feature). For binding shortcuts - Command/Ctrl + Shift + P, then look for "Enable|Disable|Whatever Cursor Tab" and set shortcuts there.
Old-fashioned variable name / function name autocomplete is not affected.
I considered a small macropad to enable / disable with a status light - but honestly don't do enough work to justify avoiding work by finding / building / configuring / rebuilding such a solution. If the future is this sort of extreme autocomplete in everything I do on a computer, I would probably go to the effort.
I can't even get simple code generation to work for VHDL. It just gives me garbage that does not compile. I have to assume this is not the case for the majority of people using more popular languages? Is this because the training data for VHDL is far more limited? Are these "AIs" not able to consume the VHDL language spec and give me actual legal syntax at least?! Or is this because I'm being cheap and lazy by only trying free chatGPT and I should be using something else?
It's all of that, to some extent or another. LLMs don't update overnight and as such lag behind innovations in major frameworks, even in web development. No matter what is said about augmenting their capabilities, their performance with techniques like RAG seems to be lacking. They don't work well with new frameworks either.
Any library that breaks backwards compatibility in major version releases will likely befuddle these models. That's why I have seen them pin dependencies to older versions, and more egregiously, default to using the same stack to generate any basic frontend code. This ignores innovations and improvements made in other frameworks.
For example, in TypeScript there is now a new(ish) validation library called arktype. Gemini 2.5 Pro straight up produces garbage code for it. The type generation function accepts an object/value, but Gemini keeps insisting that it consumes a type.
So Gemini defines an optional property as `a?: string`, which is similar to what you see in TypeScript. But this will fail in arktype, because it needs the input as `'a?': 'string'`. Asking Gemini to check again is a waste of time, and you will need enough familiarity with JS/TS to understand the error and move ahead.
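For anyone who hasn't hit this, the difference looks like the following (a minimal sketch; arktype's docs are the authority on the syntax):

    import { type } from "arktype";

    // What Gemini keeps producing: TypeScript interface syntax.
    // Not valid as a JS object literal, and not what arktype wants:
    //   const user = type({ a?: string });

    // What arktype actually expects: the "?" lives inside the key string,
    // and the type itself is a string value.
    const user = type({ "a?": "string" });

    const ok = user({});          // fine: "a" is optional
    const bad = user({ a: 42 });  // returns type.errors describing the mismatch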
Forcing development into an AI friendly paradigm seems to me a regressive move that will curb innovation in return for boosts in junior/1x engineer productivity.
Yep, management dreams of being able to make every programmer a 10x programmer by handing them an LLM, but the 10x programmers are laughing because they know how far off the rails the LLM will go. Debugging skills are the next frontier.
It's fun watching the AI bros try to spin justifications for building (sorry, vibing) new apps using Ruby for no reason other than that the model has so much content back to 2004 to train on.
They are probably really good at React. And because that ecosystem has been in a constant cycle of reinventing the wheel, they can easily pump out boilerplate code because there is just so much of it to train from.
The amount of training data available certainly is a big factor. If you’re programming in Python or JavaScript, I think the AIs do a lot better. I write in Clojure, so I have the same problem as you do. There is a lot less HDL code publicly available, so it doesn’t surprise me that it would struggle with VHDL. That said, from everything I’ve read, free ChatGPT doesn’t do as well on coding. OpenAI’s paid models are better. I’ve been using Anthropic’s Claude Sonnet 3.7. It’s paid but it’s very cost effective. I’m also playing around with the Gemini Pro preview.
It's very helpful for low-level chores. The bane of my existence is frontend, and generating UI elements on the fly for testing backend work rocks. I like the analogy of it being a junior dev, perhaps even an intern: you should check their work constantly and give them extremely pedantic instructions.
Yeah, I use IntelliJ with the chat sidebar. I don't use autocomplete, except in trivial cases where I need to write boilerplate code. Other than that, when I need help, I ask the LLM and then write the code based on its response.
I'm sure it's initially slower than vibe-coding the whole thing, but at least I end up with a maintainable code base, and I know how it works and how to extend it in the future.
Same here. It's extremely distracting to see the random garbage that the autocomplete keeps trying to do.
I said this in another comment but I'll repeat the question: where are these 2x, 10x, or even 1.5x increases in output? I don't see more products, more features, fewer bugs, or anything related to that since this "AI revolution".
I keep seeing this being repeated ad nauseam without any real backing of hard evidence.
If this was true and every developer had even a measly 30% increase in productivity, it would be like a team of 10 is now 13. The amount of code being produced would be substantially more and as a result we should see an absolute boom in new... everything.
New startups, new products, new features, bugs fixed and so much more. But I see absolutely nothing but more bullshit startups that use APIs to talk to these models with a few instructions.
Please someone show me how I'm wrong because I'd absolutely love to magically become way more productive.
I am but a small humble minority voice here but perhaps I represent a larger non-HN group:
I am not a professional SWE; I am not fluent in C or Rust or bash (or even Typescript) and I don't use Emacs as my editor or tmux in the terminal;
I am just a nerdy product guy who knows enough to code dangerously. I run my own small business and the software that I've written powers the entire business (and our website).
I have probably gotten AT LEAST a 500-1000% speedup in my personal software productivity over the past year that I've really leaned into using Claude/Gemini (amazing that GPT isn't on that list anymore, but that's another topic...). I am able to spec out new features and get them live in production in hours vs. days, and for bigger stuff, days vs. weeks (or even months). It has changed the pace and way in which I'm able to build stuff. I literally wrote an entire image editing workflow to go from RAW camera shot to fully processed product image on our ecommerce store; it has cut out dozens of actual, real hours of time spent previously.
Is the code I'm producing perfect? Absolutely not. Do I have 100% test coverage? Nope. Would it pass muster if I were a software engineer at Google? Probably not.
Is it working, getting to production faster, and helping my business perform better and insanely more efficiently? Absolutely.
I think that tracks with what I see: LLMs enable non-experts to do something really fast.
If I want to, let's say, write some code in a language I've never worked in, an LLM will definitely make me more "productive" by spewing out code for me way faster than I could write it. Same if I try to quickly learn about a topic I'm not familiar with, especially if you don't care too much about quality, maintainability, etc.
But if I'm already a software developer with 15 years of experience dealing with technology I use every day, it's not going to increase my productivity in any meaningful way.
This is the dissonance I see with AI talk here. If you're not a software developer, the things LLMs enable you to do are game-changers. But if you are a good software developer, on its best days it's a smarter autocomplete, a rubber-duck substitute (when you can't talk to a smart person), or a mildly faster Google search that can be very inaccurate.
If you go from 0 to 1 that's literally infinitely better but if you go from 100 to 105, it's barely noticeable. Maybe everyone with these absurd productivity gains are all coming from zero or very little knowledge but for someone that's been past that point I can't believe these claims.
Your comment is about two years late. Autocomplete is not the focus of AI IDEs anymore, even though it has gotten really good with "next edit prediction". People these days use AI for its agentic mode.
Absolutely hate the agent mode, but I find autocomplete plus chat asks to be the best for me. I like to at least know what I'm putting in my codebase, and it genuinely makes me faster due to:
1) Stops me overthinking the solution
2) Being able to ask it the pros and cons of different solutions
3) Multi-x speedup means less worry about throwing away a solution/code I don't like and rewriting/refactoring
4) Really good at completing certain kinds of "boilerplate-y" code
5) Removes the need to know the specific language implementation, as long as I know the principle (for example pointers, structs, types, mutexes, generics, etc.). My go-to rule now is that I won't use it if I'm unfamiliar with the principle itself, as opposed to the language's implementation of it
6) Absolute beast when it comes to debugging simple-to-medium-complexity bugs
I'm past the honeymoon stage for LLM autocomplete.
I just noticed CLion moved to a community license, so I re-installed it and set up Copilot integration.
It's really noisy, and somehow the same binding (tab complete) for built-in autocomplete "collides" with LLM suggestions (with varying latency). It's totally unusable in this state; you'll attempt to populate a single local variable or something and end up with 12 lines of unrelated code.
I've had much better success with VSCode in this area, but the completion suggestions via LLM in either are usually pretty poor. Not sure if it's related to the model choice differing for autocomplete or what, but it's not very useful and often distracting, although it looks cool.
This is where I landed too. Used Cursor for a while before realizing that it was actually slowing me down because the PR cycle took so much longer, due to all the subtle bugs in generated code.
Went back to VSCode with a tuned-down Copilot and use the chat or inline prompt for generating specific bits of code.
Well yes, but I personally would never submit a PR where I could use the excuse, "sorry, AI wrote those parts, that's why this PR has more bugs than usual".
All that to say that the base of your argument is still correct: AI really isn't saving all that much time since everyone has to proof-read it so much in order to not increase the number of PR bugs from using it in the first place.
AI autocomplete can be infuriating if like me, you like to browse the public methods and properties by dotting the type. The AI autocomplete sometimes kicks in and starts writing broken code using suggestions that don't exist and that prevents quickly exploring the actual methods available.
I have largely disabled it now, which is a shame, because there are also times it feels like magic, and I can see how it could be a massive productivity lever if it had a tighter confidence threshold before kicking in.
I always forget syntax for things like ssh port forwarding. Now just describe it at the shell:
$ ssh (take my local port 80 and forward it to 8080 on the machine betsy) user@betsy
or maybe:
$ ffmpeg -ss 0:10:00 -i somevideo.mp4 -t 1:00 (speed it up 2x) out.webm
I press ctrl+x x and it will replace the English with a suggested command. It's been a total game changer for git, jq, rsync, ffmpeg, regex...
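The plumbing behind a binding like that can be tiny: the keybinding pipes the current line to a script and replaces it with whatever comes back. A sketch in TypeScript, assuming any OpenAI-compatible endpoint (OpenRouter shown here, and the model is just an example):

    // Reads the shell line on stdin, asks the model to replace the
    // parenthesized English with real flags, prints the command to stdout.
    const chunks: Buffer[] = [];
    for await (const chunk of process.stdin) chunks.push(chunk as Buffer);
    const line = Buffer.concat(chunks).toString().trim();

    const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "google/gemma-3-27b-it:free",
        messages: [
          {
            role: "system",
            content:
              "Rewrite this shell command, replacing any parenthesized English " +
              "with the real flags and arguments. Output only the command.",
          },
          { role: "user", content: line },
        ],
      }),
    });

    const data = await res.json();
    process.stdout.write(data.choices[0].message.content.trim());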
For more involved stuff there's screen-query: for confusing crashes, strange terminal errors, and weird config scripts, it allows a joint investigation, whereas aider and friends just feel like I'm asking the AI to fuck around.
This never accesses any extra data and works only when explicitly asked? I consider the terminal the most important part from a privacy perspective, and I haven't tried any LLM integration yet…
I also realized this morning that shell-hook is good enough to correct typos. I have that turned on at the shell level (setopt correct), but sometimes it doesn't work, like here:
git cloen blahalalhah
I did a ctrl+x x and it fixed it. I'm using openrouter/google/gemma-3-27b-it:free via chutes. Not a frontier model in the slightest.
I was 100% in agreement with you when I tried out Copilot. So annoying and distracting. But Cursor’s autocomplete is nothing like that. It’s much less intrusive and mostly limits itself to suggesting changes you’ve already done. It’s a game changer for repetitive refactors where you need to do 50 nearly identical but slightly different changes.
I had turned autocomplete off as well. Way too many times it was just plain wrong and distracting. I'd like it to be turned on for method documentation only, though, where it worked well once the method was completed, but so far I haven't been able to customize it this way.
Having it on tab was a mistake. Tab-complete for snippets is fine because it happens at the end of a line; tab-complete in empty text space means you always have to be aware of whether you're in autocomplete context before setting an indent.
We have an internal ban policy on Copilot for IP reasons, and while I was... missing it initially, now just using Neovim without any AI feels fine. Maybe I'll add avante.nvim for a built-in chat box, though.
What folks don't understand, or maybe don't keep in mind, is that in order for that autocomplete to work, all your code goes up to a third party as you write it or open files. This is one of the reasons I disable it. I want to control what I send via the chat side panel by explicitly giving it context. It's also pretty useless most of the time, generating nonsense, and not even consistently either.
This is different from haveibeenpwned leaks. These infostealer dumps mean the data comes directly from spyware/malware on a victim's computer. For example: https://hackerone.com/reports/3091909
It means the people in the leak had malware on their computer in the past, and maybe present.
Just eww... you were an expert at Rails for 10+ years, failed to become an equivalent expert at Next.js, so you went back to what you're used to. You just didn't dive in deep enough.
I was the same expert level with Python, now I'm using trpc, nextjs, drizzle, wakaq-ts, hosted on DO App Platform and you couldn't pay me enough to go back to Python, let alone the shitstorm mess that's every Rails app I've ever worked on.
I've also not seen the 1s Next.js page loads you had, but I'm confident I could figure out a fix if that becomes a problem.
I like a similar trick: serving very large files hosted on external servers to malicious visitors coming through proxies. Those proxies usually charge by bandwidth, so it increases their costs.
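A minimal sketch of the idea (the detection logic and file URL are placeholders; how you flag proxies is the hard part):

    import { createServer } from "node:http";

    const flagged = new Set(["203.0.113.7"]); // however you identify proxy IPs
    const BIG_FILE = "https://example.com/10gb.bin"; // hosted elsewhere, placeholder

    createServer((req, res) => {
      const ip = req.socket.remoteAddress ?? "";
      if (flagged.has(ip)) {
        // The proxy hauls the whole file back to the scraper and eats the bandwidth bill.
        res.writeHead(302, { Location: BIG_FILE });
        res.end();
        return;
      }
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("normal page");
    }).listen(8080);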
Similar to what I'm doing with https://wonderful.dev (example dev profile: https://wonderful.dev/alan), where profiles can't be edited; devs can only connect their GitHub, StackOverflow, etc., and we fill in the profile for them with real data. No more fake LinkedIn skills.
The difference here is that wonderful.dev adds points to skills based on repo stars. We take a dev's contributions to a repo, multiply by that repo's stars, then assign those points to the repo's languages on the dev's profile. It's a proxy for impact by language.
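In rough pseudocode terms (names are illustrative, not our actual implementation):

    // Illustrative sketch of the scoring: contributions × stars,
    // credited to each of the repo's languages on the profile.
    interface RepoContribution {
      commits: number;      // the dev's contributions to this repo
      stars: number;        // the repo's star count
      languages: string[];  // languages the repo is written in
    }

    function skillPoints(contribs: RepoContribution[]): Map<string, number> {
      const points = new Map<string, number>();
      for (const { commits, stars, languages } of contribs) {
        const score = commits * stars; // proxy for impact
        for (const lang of languages) {
          points.set(lang, (points.get(lang) ?? 0) + score);
        }
      }
      return points;
    }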
One problem with this is my day job work (99.9% of my experience) isn't publicly posted and wouldn't be captured in the profile. The nature of my personal projects are significantly different than my job and result in different technologies being used to better fit the use case. Also, most of my personal projects are private and would be left out.
I guess this is just one more thing I feel is a barrier to equitable evaluation and hiring practices.
Same, always have copilot autocomplete turned off. I do use the chat, but rarely.
I was using it as docs but had to stop because it gives straight up wrong answers while sounding so confident. It's just faster to go directly to the docs or use Dash.app