Install the `llm` package in your environment:
`llm` first ensures that the model has been downloaded, then starts the server using Anaconda AI. Standard OpenAI parameters and server configuration options are supported. For example:
For more information about the `llm` package, see the official documentation.