Anaconda Desktop is currently available through a limited early access program. Anaconda AI Navigator provides many of the same capabilities and is available to all users.
The Chat feature of Anaconda Desktop enables you to interact with a locally downloaded model. The chat interface is intended primarily as a space to evaluate how different models handle specific tasks and scenarios. If the model you are using isn’t responding to your test prompts in a helpful way, try a different model. Once you see some success with a particular model, you can load it into an API server and test your own applications against it to see how the model handles your users’ input.
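When you move from the chat interface to testing against an API server, a small script can exercise the model the same way the chat does. The sketch below assumes an OpenAI-compatible chat endpoint at `http://localhost:8080/v1/chat/completions`; the host, port, and route are assumptions, so check your server's settings before using it:

```python
import json
import urllib.request

# Assumed endpoint: local model servers commonly expose an OpenAI-compatible
# /v1/chat/completions route. Verify the host and port in your server config.
API_URL = "http://localhost:8080/v1/chat/completions"


def build_chat_request(prompt, system_prompt="You are a helpful assistant."):
    """Build an OpenAI-style chat-completion payload for a local model server."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
    }


def send_chat_request(prompt):
    """POST a prompt to the local server and return the assistant's reply."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # In OpenAI-style responses, the reply is in the first choice's message.
    return body["choices"][0]["message"]["content"]


# Example (requires the local server to be running):
# print(send_chat_request("Summarize the benefits of local inference."))
```

Because the request shape is separated from the network call, you can swap prompts and system prompts freely while testing different models behind the same endpoint.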

Loading a model

Load a model you’ve downloaded into the chat interface to interact with it. To load a model:
  1. Navigate to the AI Models page.
  2. Hover over the model you’d like to chat with and select Chat.
    Model chat button displays on hover
Alternatively, you can open the chat interface at any time by selecting New in the left navigation menu, then selecting Chat.

Changing a model

To load a different model to interact with, select the new model from the Model dropdown on the Model Chat page.
Loading a new model into the chat interface

Starting a new chat

New chats are initially named New Chat <#> and are automatically renamed based on your first prompt to the model. To start a new chat:
  1. From the Model Chat page, select Chat.
  2. If necessary, load a model into the chat interface.
  3. Enter a prompt to chat with the model.
You can access your previous chats from your Dashboard.

Stopping a response

Sometimes a model can start generating a lengthy and off-topic response to a prompt. To stop the model from generating its response, select Stop in the prompt field.

Editing a prompt

You can edit your most recent prompt to change the model’s response:
  1. From the Model Chat page, select the actions dropdown in your most recent prompt.
  2. Select Edit.
  3. Edit the prompt, then press Enter (Windows)/Return (Mac).
The model generates a new response to the edited prompt.

Regenerating a response

To ask the model to generate a new response to your most recent prompt:
  1. From the Model Chat page, select the actions dropdown in your most recent prompt.
  2. Select Regenerate.
The model generates a new response to your original prompt.

Renaming a chat

You can provide a specific name to a chat at any time. To rename a chat:
  1. From the Model Chat page, open a chat’s actions dropdown and select Rename.
  2. Enter a name for your chat.
    Chat names over 21 characters are truncated.

Deleting a chat

To delete a chat:
  1. From the Model Chat page, open a chat’s actions dropdown and select Delete.
  2. Select Delete Chat.
You can also delete a chat from your Dashboard.

Chat settings

Chat settings allow you to fine-tune how the model responds during user interactions. Once you have loaded a model, select Settings to open the chat settings.
Chat settings menu
Customize the responses from your model by adjusting the following settings:
System Prompt
Sets the overall tone and behavior for the model before any user input is provided as a prompt. The system prompt is a hidden instruction that establishes the model’s persona. For example, you could instruct the model to act as a data analyst or a Python developer, or to adhere to a formal tone. For more information, see Crafting effective system prompts.
Maximum Response Length
Adjusts how long the model’s responses can be. Short responses are best suited for succinct answers or summaries of information, while longer responses are better for detailed guidance.
Context Length
Defines how much of the ongoing conversation the model can recall. A higher context length allows the model to keep track of more conversation history, which is beneficial for extended discussions. However, this can increase RAM usage and slow inference, because the model must process a growing history with each turn. A lower context length provides faster responses and reduces memory usage by considering less history.
Temperature
Controls the level of randomness in the model’s responses. This can help the model to feel more or less creative in its responses. A lower temperature makes the model’s replies more deterministic and consistent, while a higher temperature introduces variability, allowing the model to produce varied answers to the same prompt.
Different models support different maximum context lengths.
Select the Let the model decide checkbox to allow the model to determine when it has fulfilled the prompt request adequately.
Select Clear at the bottom of the Settings panel to return to system defaults.
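The same settings carry over when you test a model behind an API server: in OpenAI-style requests, the system prompt becomes the first message, and the response length and temperature become request parameters. The sketch below uses the conventional parameter names (`max_tokens`, `temperature`); whether your local server honors each one is an assumption to verify. Context length, by contrast, is typically fixed when the model is loaded rather than set per request:

```python
import json

def chat_settings_payload(prompt,
                          system_prompt="You are a concise data analyst.",
                          max_tokens=512,    # Maximum Response Length
                          temperature=0.7):  # lower = more deterministic
    """Map chat settings onto an OpenAI-style request body (names assumed)."""
    return {
        "messages": [
            # The system prompt sets tone and persona before any user input.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


# Inspect the request body that would be sent to a local model server:
print(json.dumps(
    chat_settings_payload("Explain context length in one sentence."),
    indent=2,
))
```

Lowering `temperature` toward 0 makes repeated runs of the same prompt more consistent, which is useful when comparing models on identical tasks.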