Anaconda Platform 7.0.0 is available through a limited early access program. Contact your Anaconda Technical Account Manager (TAM) if you’re interested in adopting the latest version.
The Model Catalog is where you can explore Anaconda’s collection of curated open-source models. From here, you can review model details and performance metrics, compare models based on benchmark scoring, and start servers using the models you have access to.

Exploring models

To browse models, select Model Catalog from the left-hand navigation. The Models page displays all of the models available in Anaconda Platform. You can search for a model by name, filter and sort the displayed models, and switch between catalog views.
(Screenshot: the Models page with the view and filter controls highlighted at the top of the page)
Models with a lock icon beside their name have been restricted from use by your administrator. Select the Hide restricted models checkbox to view only the models you have access to.

Model catalog views

The model catalog can be displayed as a tile grid, a table, or a comparative chart. Use the icons in the upper-right corner of the Models page to switch between views.
The Tile view displays models in a grid. Each tile shows the model’s name, publisher, type, and the disk space and RAM required to run the currently selected quantization.
(Screenshot: the Models page in Tile view)

Model types

Anaconda Platform currently supports the following model types:
Text-generation
Designed to produce coherent, contextually relevant natural language based on user input. Common use cases include:
  • Content creation - Drafting articles, outlines, summaries, or creative pieces.
  • Coding assistance - Generating or autocompleting code and troubleshooting issues.
  • Data extraction - Summarizing and interpreting large datasets to uncover insights.
  • Conversational AI - Powering chatbots and virtual assistants to generate natural dialogue.
Sentence-similarity
Encodes text into numerical vectors (embeddings) that capture semantic meaning. These embeddings enable efficient comparison and analysis of text based on contextual relationships; a brief comparison example follows the list below. Common use cases include:
  • Semantic search - Finding documents or items contextually similar to a query, beyond keyword matching.
  • Recommendation systems - Suggesting relevant items by comparing semantic similarity to user preferences.
  • Text classification - Categorizing text (for example, spam detection or sentiment analysis) based on meaning.
  • Clustering analysis - Grouping similar text data to uncover patterns or organize information.
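To make the comparison concrete, here is a minimal sketch of scoring two embeddings with cosine similarity. The vectors shown are made-up placeholders standing in for the output of a sentence-similarity model; real embeddings typically have hundreds of dimensions.

import numpy as np

def cosine_similarity(a, b):
    """Return the cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder embeddings -- in practice these would come from a
# sentence-similarity model, not be written by hand.
query_embedding = [0.12, 0.98, 0.05, 0.33]
doc_embedding = [0.10, 0.95, 0.07, 0.30]

print(f"Similarity: {cosine_similarity(query_embedding, doc_embedding):.3f}")

Scores closer to 1.0 indicate that two pieces of text are semantically similar, which is the basis for the semantic search and recommendation use cases listed above.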

Filtering and sorting models

Apply filters and sort the results to help you locate models.
  1. Select the Filter icon to open the filter panel.
    (Screenshot: the Models page with the Filter icon called out)
  2. Apply filters as necessary to narrow the list of displayed models.
  3. Close the panel to see the model list with filters applied.
Filters apply to all views.
You can sort listed models by date or file size using the dropdown beside the Filter icon.
Select the icon beside any established filter to remove it, or select Clear at the top of the filter panel to remove all filters.
Administrators can also filter models by group access.

Model filters

  • Publisher - Filter models by the organization that built them.
  • Quantization - Filter models by the quantization method used to build them.
  • File Size - Adjust the slider to filter models by the amount of disk space they require.
  • RAM - Adjust the slider to filter models by the amount of RAM they require.
  • License - Filter models based on their usage, modification, and distribution terms.
  • Date Published - Filter models based on the date they were published.
  • Purpose - Filter models based on their associated model type.
  • Language - Filter models by the natural languages they support.
  • HellaSwag - Filter models by their HellaSwag benchmark score.
  • WinoGrande - Filter models by their WinoGrande benchmark score.
  • TruthfulQA - Filter models by their TruthfulQA benchmark score.
  • Hide restricted models - Hide models that you do not have permission to use.

Viewing model details

Select a model to view detailed information for the currently selected quantization, including its supported languages, parameter count, context window size, and more. The details page also includes links to related documentation, license information, and acceptable use policies, and it displays evaluation metrics for the model's various quantizations.
(Screenshot: the model details view)

Creating a server

Creating a server loads the selected model quantization into a dedicated instance that exposes API endpoints for inference and embedding. When requests are sent to these endpoints, the model processes the input and returns the output. You can connect your applications to the server’s IP address and use it in your AI workflows. For more information about creating and using servers, see Model servers.
  1. From a model’s details page, select Create Server.
  2. If the model is already in use, a dropdown lists active servers using the model. Select a server from this list to view the server’s details.
  3. Newly created servers appear on the Model Servers page.
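Once a server is running, your application can send requests to its endpoints over HTTP. The sketch below is an illustration only: the address, port, endpoint path, and request schema are assumptions made for the example, so check your server's details on the Model Servers page for the actual connection information and API format.

import requests

# Illustrative values only -- replace with the address and endpoint shown
# for your server on the Model Servers page; the path and request schema
# used by your server may differ from this example.
SERVER_URL = "http://192.0.2.10:8080/v1/chat/completions"

payload = {
    "messages": [
        {"role": "user", "content": "Summarize the benefits of model quantization."}
    ],
    "max_tokens": 200,
}

response = requests.post(SERVER_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json())

The same pattern applies to embedding endpoints for sentence-similarity models: send the text to the server's embedding endpoint and use the returned vectors in your application.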