Install the llm package in your environment:
conda install conda-forge::llm
To list the Anaconda AI models, run:
llm models list -q anaconda
When you invoke a model, llm first ensures that the model has been downloaded, then starts the server using Anaconda AI. Standard OpenAI parameters and server configuration options are supported. For example:
llm -m 'anaconda:meta-llama/llama-2-7b-chat-hf_Q4_K_M.gguf' -o temperature 0.1 'what is pi?'
To view server parameters, run:
llm models list -q anaconda --options
For more information on using the llm package, see the official documentation at llm.datasette.io.