I don't understand the reasoning for persisting LLM output that can be generated at any point. If I want to use an LLM to understand someone else's commits, I can use the LLM best suited for that task at the time I need the information, which will likely be more accurate than what was available at the time of the commit and will have access to more context.
I also believe that commit messages should focus on information the code doesn't already convey. Whatever the LLM can generate from looking at your code is likely not the info I'll seek when I read your commit message.
As far as I can tell from a quick skim, it's just based on the git diff and status…
Hypothetically, a tool like this could ingest the bug report you were fixing, related emails, and so on. It could also read the whole project to get more context than just the diff. In principle, in some extreme form, there's no reason it couldn't convey more information than the diff alone…
Also, it could be seen as producing a starting point. When a person picks which AI generated text to keep, that is enough to add a bit of human spark into the system, right?
Maybe I am a bit old-fashioned but I think the commit message should convey intent and not content of the diffs. Perhaps the real utility of this is to describe existing commits in a repository.
Fully agree. Also, using LLMs for things like this can have bad side-effects, too, simply because it raises the noise-floor:
By spelling out things that are not noteworthy enough for a human, you make it more difficult to find comments that are (and were). Injecting a lot of irrelevant information can hamper understanding even if it is technically completely correct.
The commit message is supposed to contain the details that you can't just glean from the code: why a certain decision was made, the pros and cons of that decision, a link to a relevant GitHub/Jira issue, etc.
I use the following script to allow copilot vim plugin to help me.
```bash name=../../bin/assisted-commit
#!/bin/bash
# Run git commit with --verbose --dry-run and save the output
git commit --verbose --dry-run > ./commit.message
# Prepend # to every line and add "conventional commit message:" at the end
sed -i 's/^/# /' ./commit.message
echo "# uncommented conventional commit message using feat, fix or doc flags. !breakingchange iff change breaks backward compatibility:" >> ./commit.message
echo "" >> ./commit.message
# Open the file in vim for editing, with cursor on a new line at the end and in insert mode
vim +':normal Go' +startinsert ./commit.message
# Filter out commented lines and save to a temporary file
grep -v '^#' ./commit.message > ./commit.message.filtered
# Commit using the filtered file
git commit -F ./commit.message.filtered
# Delete the files
rm ./commit.message ./commit.message.filtered
```
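For illustration, the core round-trip the script relies on — comment out the dry-run context, let a message be written below it, then strip the comments before `git commit -F` sees the file — can be sketched in isolation (the diff text and commit message here are made-up placeholders):

```shell
# Sketch of the comment/filter round-trip, using a fake diff
# instead of real `git commit --verbose --dry-run` output.
input="$(printf 'diff --git a/foo b/foo\n+added line\n')"

# Step 1: prefix every context line with "# ", as the script's sed does
commented=$(printf '%s\n' "$input" | sed 's/^/# /')

# Step 2: the human (or Copilot) writes the message on uncommented lines
message=$(printf '%s\n%s\n' "$commented" "feat: add a line to foo")

# Step 3: drop commented lines; only the message survives for git commit -F
printf '%s\n' "$message" | grep -v '^#'
```

Because everything `grep -v '^#'` drops is exactly what sed prefixed, the diff context never ends up in the commit message itself.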
Looks like the OpenRouter API can be self-hosted, which means you should be able to run this locally. If anyone manages to run this with ollama, please do post how you did it! :)
The OpenRouter API is the same as the OpenAI API, so you should be able to use the OpenAI API compatibility built into ollama after updating the URL in /src/acmsg/constants.py
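For anyone trying that route, here's a rough sketch of what a request against ollama's OpenAI-compatible chat endpoint would look like. This assumes ollama's default port 11434; the model name and prompt are placeholders, not anything from acmsg itself:

```shell
# Hypothetical request against a local ollama server's
# OpenAI-compatible endpoint (default port 11434).
OLLAMA_URL="http://localhost:11434/v1/chat/completions"

# Placeholder payload; swap in whatever model you've pulled locally.
PAYLOAD=$(cat <<'EOF'
{
  "model": "llama3.2",
  "messages": [
    {"role": "user", "content": "Write a conventional commit message for this diff: ..."}
  ]
}
EOF
)
printf '%s\n' "$PAYLOAD"

# To actually send it (requires a running ollama server):
# curl -s "$OLLAMA_URL" -H "Content-Type: application/json" -d "$PAYLOAD"
```

The point is just that the request shape is plain OpenAI chat-completions JSON, so redirecting the base URL should be the only change needed.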