Hacker News | karthink's comments

Removing the interpreter lock for a few specialized tasks (without sweeping runtime changes to Emacs) would be enough to fix most of these issues -- parsing JSON from process output into lisp data in a background thread is one candidate. [1]

Installing packages does not need to block either; there is no architectural limitation here. The Elpaca package manager for Emacs provides async, parallel package updates. Loading packages into the Lisp image will block, though; there's no way around that.

The other big source of input lag is garbage collection, and there are some ongoing efforts to use the MPS library in Emacs for a copying, concurrent GC. This is a big change and I don't know if this experiment will go anywhere, but Eli Zaretskii and co are trying.

[1]: https://github.com/emacs-lsp/emacs


Someone made an alternative way of optimizing LSP by converting the JSON to elisp bytecode on the fly: https://github.com/blahgeek/emacs-lsp-booster


> Something this old shouldn't have this property.

It's not an accident -- reading through the emacs-devel mailing list, it's easy to see how much effort the maintainers pour into backward compatibility. It's one of Emacs' unspoken guiding principles[1].

At the same time, it's not that surprising either. Emacs does not have other objectives that more modern languages/ecosystems do: no revenue or growth targets, corporations or VCs breathing down its neck, or a mandate to be "modern". Its most vocal and experienced users, who are also its volunteer maintainers, decide what its priorities should be. Since they've been using it for decades, backward compatibility is high on the list.

[1]: Its "spoken" guiding principle is to further the goals of the GNU project.


For a REPL-like interface, you could try the chatgpt-shell package. It can execute code generated by the LLM. It does this through org-babel too; it just calls org-babel functions under the hood. It's also OpenAI-only right now, although the author plans to add support for the other major APIs.

gptel has a buffer-centric design because it tries to get out of your way and integrate with your regular Emacs usage. (For example, it's even available _in_ the minibuffer, in that you can call it in the middle of calling another command, and fill the minibuffer prompt itself with the text from an LLM response.)


I think the web chat history is separate from API use, so you can't combine them. OpenAI claims not to retain a history of your API queries and responses.

For organizing LLM chat logs in Emacs, there are many solutions. Here are a few:

As a basic solution, chats are just text buffers/files, so you can simply store your conversations in files in a single directory. You can then browse them in dired, search them with ripgrep, and integrate them into Org-roam or your choice of knowledge-management system.

If you use Org mode, you can have branching conversations in gptel where each path through the document's outline tree is a separate conversation branch. This way you can explore tangential topics while retaining the lineage of the conversation that led to them, while excluding the other branches. This keeps the context window from blowing up and your API inference costs (if any) down.

If you use Org mode, you can limit the scope of the conversation to the current heading by assigning a topic (gptel-set-topic). This way you can have multiple independent conversations in one file/buffer instead of one per buffer. (This works in tandem with the previous solution.)
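Both Org-mode behaviors need only a couple of lines of setup. A minimal sketch, using the names mentioned above (`gptel-org-branching-context`, documented in gptel, and `gptel-set-topic`):

```elisp
;; Treat each path through the Org outline as a separate conversation
;; branch (see C-h v gptel-org-branching-context for details):
(setq gptel-org-branching-context t)

;; To scope a conversation to the current Org heading, place point
;; under the heading and run:
;;   M-x gptel-set-topic
;; The two combine: topics limit the scope, branching picks the lineage.
```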

-----

Tools tend to compose very well in Emacs. So there are probably many other solutions folks have come up with to organize their LLM chat history. For instance, any feature to handle collections of files or give you an outline/ToC view of your Markdown/Org documents should work well with the above -- and there are dozens of extensions like these.


> If you use Org mode, you can have branching conversations in gptel where each path through the document's outline tree is a separate conversation branch. This way you can explore tangential topics while retaining the lineage of the conversation that led to them, while excluding the other branches. This keeps the context window from blowing up and your API inference costs (if any) down.

Can you give an example of how this looks? I see it's mentioned in https://github.com/karthink/gptel/?tab=readme-ov-file#extra-... but I feel like I need an example. It sounds quite interesting and useful, I've often done this "manually" by saving to a new buffer when I go on a tangent.

EDIT: Nevermind, C-h v gptel-org-branching-context gives:

    Use the lineage of the current heading as the context for gptel in Org buffers.
    
    This makes each same level heading a separate conversation
    branch.
    
    By default, gptel uses a linear context: all the text up to the
    cursor is sent to the LLM.  Enabling this option makes the
    context the hierarchical lineage of the current Org heading.  In
    this example:
    
    -----
    Top level text
    
    * Heading 1
    heading 1 text
    
    * Heading 2
    heading 2 text
    
    ** Heading 2.1
    heading 2.1 text
    ** Heading 2.2
    heading 2.2 text
    -----
    
    With the cursor at the end of the buffer, the text sent to the
    LLM will be limited to
    
    -----
    Top level text
    
    * Heading 2
    heading 2 text
    
    ** Heading 2.2
    heading 2.2 text
    -----
    
    This makes it feasible to have multiple conversation branches.
Cool :-D


You can add a prefix arg to replace the region with the result. (Any numeric prefix works, so I usually do M-0 M-| since that's easy to type with the meta key held down, and 0 and | are close together.)
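In Lisp terms, the keybinding above is roughly equivalent to the following sketch, with `sort` standing in for whatever filter you pipe the region through:

```elisp
;; What M-0 M-| sort RET does, approximately: filter the region
;; through a shell command and put the output back in place of the
;; region.  The non-nil OUTPUT-BUFFER and REPLACE arguments produce
;; the in-place replacement that the numeric prefix triggers
;; interactively.
(shell-command-on-region (region-beginning) (region-end)
                         "sort" t t)
```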


Emacs can do that quite easily[1]. But this code is not merged into the trunk yet; it should land some time this year.

[1]: https://share.karthinks.com/prog-preview-3.mp4


That's exactly what I had in mind. Is it just a modified org-latex-preview that works anywhere?


> The killer app I'm trying to apply this to is LaTeX, so that I can write math notes in Emacs, incrementally, without visible latency.

See texpresso [1] for one solution that does something like this with the LaTeX process.

Another, more conservative solution is the upcoming changes to Org mode's LaTeX previews [2] which can preview live as you type, with no Emacs input lag (Demos [3,4]).

[1] https://github.com/let-def/texpresso

[2] https://abode.karthinks.com/org-latex-preview/

[3] http://tinyurl.com/olp-auto-1

[4] https://tinyurl.com/ms2ksthc


I wasn't aware of either of those, so thank you very much for the referrals; I shall take a close look at them :)

(Are you by chance the author of org-latex-preview, or is it a coincidence of usernames?)


I am one of the authors; I should have mentioned that. It's not part of Org yet, but should be some time this year.


Emacs' keymap system is fully modal by design. This state machine is why it's possible to fully implement evil-mode (and many other modal editing UIs, like boon, lispy etc) quite easily on top.
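As a toy illustration of how little machinery a modal layer needs (all names here are hypothetical, not taken from evil or boon):

```elisp
;; A minimal modal navigation layer: while the minor mode is enabled,
;; its keymap shadows the global one, so plain keys become commands.
(defvar my-nav-mode-map
  (let ((map (make-sparse-keymap)))
    (define-key map "j" #'next-line)
    (define-key map "k" #'previous-line)
    (define-key map "q" #'my-nav-mode) ; toggle the layer off
    map)
  "Keymap for `my-nav-mode'.")

(define-minor-mode my-nav-mode
  "Toy modal navigation layer."
  :keymap my-nav-mode-map)
```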


Taking "quite easily" on a walk there


This feature is built into Emacs, no Magit needed. It's the vc-region-history command, bound to `C-x v h` by default. It works across all version control systems Emacs supports, not just git.
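If you want it from Lisp rather than via the keybinding, the command takes the region bounds directly; a sketch:

```elisp
;; Equivalent to C-x v h on the active region: show the log of
;; changes that touched just these lines.
(vc-region-history (region-beginning) (region-end))
```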


Just added it to gptel.

