
Anaconda AI Navigator

Download and execute curated open-source AI models locally.

Download AI Navigator for Free

Simplify and Safeguard AI App Creation

AI Navigator gives you easy access to a variety of large language models with various parameter counts, sizes, and accuracy levels so you can find the right model for your specific device — in a secure desktop environment.

Secure

Protect your on-premises AI solutions by running models locally. Control models’ behavior and maintain data security.

Private

Interact with local LLMs without sending private information to unknown cloud services or infrastructure providers.

Low Risk

End reliance on commercial services to fuel your AI efforts. 

A Proving Ground for GenAI

Transform your approach to data science with access to LLMs curated, hosted, and verified by Anaconda. AI Navigator lets you experiment with genAI in the secure environment of your desktop, download curated LLMs, then interact with them through an API server or a chatbot — working locally so your data stays secure.

Download AI Navigator for Free

Curated Models

Choose from Anaconda’s trusted library of more than 200 pre-trained LLM files: 50+ models, each available at four quantization levels that let you balance model efficiency and accuracy.

API Inference Server

Test and deploy models without the need for external cloud services. Enhance security by replacing calls to OpenAI with calls to the local server.

Built-in AI Assistant

Simplify complex processes with an easy-to-use, built-in chatbot agent. Complete common genAI tasks like question answering, summarization, and ideation, all with the security and privacy of local chat.


Local Models

Work locally and securely, keep control of proprietary information, and eliminate the need for an internet connection and external servers.

Intuitive User Interface

Dismantle the barrier to AI app development. Made for users of all technical skill levels, the interface simplifies browsing, downloading, and experimenting with LLMs.

Learn More about AI Navigator

Download Info Sheet

Additional Resources

Gratitude and Growth: Reflecting on 2023 and Embracing the Promise of 2024

Learn More

How to Build AI Chatbots with Mistral and Llama2

Learn More

How to Build a Retrieval-Augmented Generation Chatbot

Learn More

Download Anaconda AI Navigator

For the best experience, we recommend 16GB of memory or more.
Choose your platform installer below.

Linux

Coming soon

Frequently Asked Questions

What is AI Navigator?

Designed to simplify and safeguard AI app creation, AI Navigator includes over 200 curated LLMs plus an API server and a chatbot for interacting with those LLMs. AI Navigator lets you work locally so your data stays secure.

The goal of AI Navigator is to bring state-of-the-art models to Anaconda’s users and to support diverse hardware configurations. The LLM repository contains Llama 3, Gemma, and Mistral models, to name just a few. In addition, four quantization levels are available for each model to ensure compatibility across a wide range of devices.

After a user downloads a curated model to their PC, all subsequent interaction with the model runs entirely on the user’s local computing resources, whether through the chat agent or the API server. No content of the interaction leaves the user’s machine, and the local model does not learn from the interaction. In fact, once the models of interest have been downloaded, a user can interact with them through AI Navigator without any network access.

The Anaconda team verifies that models come directly from their publishers and then builds the quantizations (compressed versions of each model). Without AI Navigator, a user typically must rely on unvetted third parties or individuals to provide quantized models, because publishers tend to publish only full models, which can be very large and expensive to work with.

AI Navigator comes with an API server that runs the selected local LLM and exposes it to incoming API requests, returning responses. This lets developers build intelligence into their own agents and apps.

You can easily replace OpenAI API calls with calls to the server in AI Navigator, because the local server’s API is compatible with OpenAI’s. From the API Server page, you can configure the local server’s address, port, and the model file you’d like to serve.
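
If you already use the OpenAI Python client, pointing it at the local server is typically a small change. The sketch below is illustrative rather than official Anaconda documentation: the address, port, and model name are assumptions, so substitute whatever you configure on the API Server page.

from openai import OpenAI  # pip install openai

# Assumed local server address and port; use the values shown on the API Server page.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")  # local server; no real key required

response = client.chat.completions.create(
    model="local-model",  # placeholder; the server runs whichever model file you loaded
    messages=[{"role": "user", "content": "Summarize retrieval-augmented generation in one sentence."}],
)
print(response.choices[0].message.content)

Because the calls are compatible, existing code written against the OpenAI chat completions API can usually be redirected to the local server by changing only the base URL and API key.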

AI Navigator is currently available for Windows 11 and macOS 13+. Requirements for AI Navigator itself are minimal; instead, requirements are determined by the model you want to use. Each model has two main requirements: (1) You must have enough disk space to download and store the model. Most models fall between 2 and 10GB, but some are much larger, requiring up to 150GB of storage space. (2) You must have enough memory to run the model. AI Navigator uses a combination of main system memory (RAM) and any available GPU memory (VRAM) to run the model. We recommend a minimum of 16GB of RAM to run most models without issue, but some models require more memory to operate.

The embedded chatbot agent within AI Navigator is the most intuitive way to interact with an LLM, thanks to its user-friendly interface. The LLMs you download to your own machine power the chat, and all computation is local, meaning that no proprietary or private data leaves your device.

The intuitive user interface lets users conveniently select their LLM of choice to power the chat, perform common genAI tasks, regenerate a response, and review previous chat histories. Through the UI, more sophisticated users can also change the system prompt and configure key settings to shape the chat’s responses.

Once AI app and agent developers achieve success with a particular model for their specific tasks and scenarios, they can load the LLM into the API server to build and test their own AI apps and agents.