It's minimal, but I'm posting things.
I needed LLM completions in Vim9 without running local models.
llama.vim is an excellent Vim plugin for LLM autocompletion.
But it expects a local model to be running.
My computers aren't powerful enough to run local models, and I'm happy with the models provided by third-party vendors anyway.
So I implemented vim-llama-adapter, a small Python server that makes llama.vim think it's talking to a local model; in reality, the server forwards the requests to remote LLM providers.
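Conceptually, the adapter just has to speak the local server's completion protocol on one side and a provider's API on the other. Here is a minimal sketch of that idea, not the actual implementation: the /infill route, the request and response field names, and the Mistral FIM endpoint and payload shape are all assumptions, so check the repository for the real code.

```python
# Sketch: accept llama.cpp-style /infill requests locally and forward them
# to Mistral's Codestral FIM API. Field names, routes, and the upstream URL
# are assumptions; the real vim-llama-adapter may differ.
import json
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

MISTRAL_URL = "https://api.mistral.ai/v1/fim/completions"  # assumed FIM endpoint
API_KEY = os.environ["MISTRAL_API_KEY"]  # raises if the key is not set


class InfillProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the request llama.vim sends to what it thinks is a local model.
        length = int(self.headers.get("Content-Length", 0))
        req = json.loads(self.rfile.read(length) or b"{}")

        # Forward prefix/suffix as a fill-in-the-middle request to Codestral.
        payload = json.dumps({
            "model": "codestral-latest",
            "prompt": req.get("input_prefix", ""),
            "suffix": req.get("input_suffix", ""),
            "max_tokens": 128,
        }).encode()
        upstream = urllib.request.Request(
            MISTRAL_URL,
            data=payload,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {API_KEY}",
            },
        )
        with urllib.request.urlopen(upstream) as resp:
            completion = json.loads(resp.read())

        # Reply in the shape llama.vim expects from a local server.
        text = completion["choices"][0]["message"]["content"]
        body = json.dumps({"content": text}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # 8012 is assumed here as the port llama.vim is pointed at.
    HTTPServer(("127.0.0.1", 8012), InfillProxy).serve_forever()
```

The real server needs more than this (streaming, error handling, multiple providers), but the shape is the same: a tiny local HTTP endpoint that translates between the two APIs.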
For instructions on running the application, configuring your vimrc, and getting autocompletions, just check the README.md.
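If you want to sanity-check a running adapter outside of Vim, a request like the one below should return a completion. The port, route, and field names are again assumptions borrowed from the sketch above, so adjust them to match the README.

```python
# Quick smoke test against a locally running adapter (assumed port and fields).
import json
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:8012/infill",
    data=json.dumps({
        "input_prefix": "def add(a, b):\n    return ",
        "input_suffix": "\n",
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))
```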
The only provider supported right now is Codestral. You can get a free API key at the Mistral console.

Cheers,