Anaconda Platform 7.0.0 is available through a limited early access program. Contact your Anaconda Technical Account Manager (TAM) if you’re interested in adopting the latest version.
The Model Catalog is where you can explore Anaconda’s collection of curated open-source models. From here, you can review model details and performance metrics, compare models based on benchmark scoring, and start servers using the models you have access to.

Exploring models

To browse models, select Model Catalog from the left-hand navigation. The Models page displays all of the models available in Anaconda Platform. You can search for a model by name, filter and sort the displayed models, and switch between catalog views.
Models with a lock icon beside their name have been restricted from use by your administrator. Select the Hide restricted models checkbox to view only the models you have access to.

Model catalog views

The model catalog can be displayed as a tile grid, a table, or a comparative chart. Use the icons in the upper-right of the Models page to switch between views.
The Tile view displays models in a grid. Each tile shows the model’s name, publisher, type, and the disk space and RAM required to run the currently selected quantization.

Model types and tags

A model’s type reflects its training objective and architecture. Anaconda Platform supports a fixed set of model types.
Tags reflect what a model can do beyond its primary purpose. For example, a Text Generation model might also carry a Code Generation tag, indicating that it can generate code.

Filtering and sorting models

Apply filters and sort the results to help you locate models.
  1. Select the Filter icon to open the filter panel.
  2. Apply filters as necessary to narrow the list of displayed models.
  3. Close the panel to see the model list with filters applied.
Filters apply to all views.
You can sort listed models by date or file size using the dropdown beside the Filter icon.
Select the icon beside any applied filter to remove it, or select Clear at the top of the filter panel to remove all filters.
Administrators can also filter models by group access.

Model filters

  • Hide restricted models: Filter out models that you do not have permission to use.
  • Publisher: Filter models by the organization that built them.
  • Date Published: Filter models based on the date they were published.
  • Purpose: Filter models based on their associated model type.
  • Language: Filter models by the natural languages they understand.
  • Tags: Filter models by tags associated with their characteristics, capabilities, or use cases.
  • License: Filter models based on their usage, modification, and distribution terms.
  • Quantization: Filter models by the quantization method used to build them.
  • File Size: Adjust the slider to filter models by the amount of disk space they require.
  • RAM: Adjust the slider to filter models by the amount of RAM they require.
  • HellaSwag: Filter models by their HellaSwag benchmark score.
  • WinoGrande: Filter models by their WinoGrande benchmark score.
  • TruthfulQA: Filter models by their TruthfulQA benchmark score.

Viewing model details

Select a model to view its details:
The Overview tab offers general information about the model, including its description, publisher, intended use cases, and license terms, as well as relevant links and a downloadable AI Bill Of Materials (AIBOM).

Downloading the model AIBOM

The AIBOM provides a comprehensive record of the model’s composition and provenance. Select Download AIBOM to download the report as a .json file in CycloneDX format.
A model’s AIBOM might include:
  • Model metadata: The publisher, version, architecture, license terms, and intended use cases
  • File variants: Details for each available quantization, including file format, disk space, and RAM requirements
  • Cryptographic hashes: SHA-256 checksums you can use to verify that downloaded model files are intact and unmodified
  • Performance metrics: Benchmark scores across available quantizations
  • Ethical considerations: Documented limitations, bias risks, and recommended mitigations
  • Software dependencies: Libraries required to run the model
The information available in an AIBOM varies depending on what the model publisher has disclosed and which benchmarks have been run.
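Because the AIBOM is CycloneDX JSON, you can read its hashes programmatically and check a downloaded model file against them. The sketch below uses a hypothetical, minimal AIBOM fragment (real field layouts depend on what the publisher disclosed) and Python's standard `hashlib` to verify a SHA-256 checksum:

```python
import hashlib
import json

# Hypothetical, minimal CycloneDX-style AIBOM fragment; the actual field
# names and nesting depend on what the model publisher has disclosed.
aibom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {
      "name": "example-model-q4.gguf",
      "hashes": [
        {"alg": "SHA-256",
         "content": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}
      ]
    }
  ]
}
"""

def verify_sha256(file_bytes: bytes, expected_hex: str) -> bool:
    """Compare the SHA-256 digest of the downloaded bytes to the AIBOM value."""
    return hashlib.sha256(file_bytes).hexdigest() == expected_hex

bom = json.loads(aibom_json)
expected = bom["components"][0]["hashes"][0]["content"]

# b"test" stands in for the downloaded model file's contents; in practice
# you would read the file from disk with open(path, "rb").read().
print(verify_sha256(b"test", expected))        # True: digest matches
print(verify_sha256(b"tampered", expected))    # False: file was modified
```

A mismatch means the file was corrupted in transit or altered after publication, so you should re-download it before creating a server.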

Creating a server

Creating a server loads the selected model quantization into a dedicated instance that exposes API endpoints for inference and embedding. When requests are sent to these endpoints, the model processes the input and returns the output. You can connect your applications to the server’s IP address and use it in your AI workflows. For more information about creating and using servers, see Model servers.
  1. From a model’s details page, select Create Server.
  2. If the model is already in use, a dropdown lists active servers using the model. Select a server from this list to view the server’s details.
  3. Newly created servers appear on the Model Servers page.
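Once a server is running, your application talks to it over HTTP. The exact host, port, and request schema come from the server's details page; the sketch below assumes a hypothetical address and a JSON completion-style payload purely for illustration, using only Python's standard library:

```python
import json
from urllib import request

# Hypothetical server address and endpoint path; substitute the actual
# host, port, and API schema shown on your server's details page.
SERVER_URL = "http://192.0.2.10:8080/v1/completions"

def build_inference_request(prompt: str, max_tokens: int = 64) -> request.Request:
    """Build an HTTP POST carrying a JSON inference payload."""
    body = json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode("utf-8")
    return request.Request(
        SERVER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_inference_request("Summarize the Model Catalog in one sentence.")
print(req.full_url, req.method)

# To actually send the request once the server is reachable:
#   with request.urlopen(req) as resp:
#       result = json.load(resp)
```

The request is constructed but not sent here, since the address is a placeholder; any HTTP client (requests, curl, a frontend fetch call) works the same way against the server's real endpoint.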

Opening a model in Desktop

If you have Anaconda Desktop installed, you can select Open in Desktop to open the model’s details page in Anaconda Desktop. From there, you can review the model’s details, chat with the model, and download the model to create a local server. See Getting started with Anaconda Desktop for a quick start guide on chatting with models and creating local servers.