If you’ve been supplementing main with conda-forge or PyPI to get your AI stack working, this post is for you.
Over the past two quarters, Anaconda has significantly expanded main channel coverage for AI development—faster PyTorch releases, broader GPU support, and new packages that complete the AI workflow stack.
Here’s what’s new and why it matters.
Closing the PyTorch Release Gap
Historically, Anaconda’s main channel lagged conda-forge and PyPI by months on new PyTorch releases. This gap forced teams into an uncomfortable choice: mix channels to stay current, or stick with main and miss the latest capabilities.
That’s changed. PyTorch 2.10 is available on main now. We’re targeting 2.11 and 2.12 for May, and going forward, our goal is to get new PyTorch versions on main within 30 days of upstream availability.
We’ve also expanded hardware support. Starting with PyTorch 2.7, builds on main include Windows CUDA support, so data science teams running Windows workstations get the same GPU acceleration as Linux teams. Starting with PyTorch 2.9, we support ARM-based NVIDIA GPU systems, so if you’re running DGX infrastructure or other ARM GPU platforms, PyTorch with CUDA now works directly through main.
What this means practically: fewer exceptions in your conda configuration files, fewer mixed-source environments to debug when something breaks, and one source of truth for your team’s PyTorch dependency.
Completing the AI Workflow Stack
Here’s what’s now possible across the AI development lifecycle:
Building RAG and agent applications
You’re creating a retrieval-augmented generation system—maybe a documentation assistant or an internal knowledge tool. You need vector search, agent orchestration, and reliable structured output from language models.
The full stack is now on main:
- FAISS and Qdrant handle vector search at any scale.
- LanceDB covers multi-modal storage when you’re working with text, images, and audio together.
- LangGraph and pyautogen provide agent orchestration: LangGraph for stateful workflows, pyautogen for conversational agents.
- dspy lets you optimize prompts systematically, treating them as parameters you compile rather than strings you hand-tune.
- llama-index makes it straightforward to build document-centric RAG pipelines.
- outlines constrains language models to generate valid JSON, which matters a lot when you’re building production APIs that consume LLM output.
Together with SentenceTransformers (which was already available), you can build a complete RAG pipeline, from embedding generation through retrieval and orchestration, sourced entirely from main.
Fine-tuning on your own data
Pre-trained models are powerful, but they don’t know your domain out of the box. PEFT and bitsandbytes bring memory-efficient fine-tuning techniques to main, making it practical to customize a 7-billion-parameter model on a single GPU instead of a cluster.
Computer vision, speech, and local inference
Ultralytics (YOLO) covers most real-time object detection use cases and runs on PyTorch for automatic GPU acceleration. Whisper.cpp and faster-whisper provide efficient speech-to-text processing. Llama-cpp-python gives you Python bindings for local inference with C++-level performance, opening up thousands of open models without the overhead of a dedicated serving stack. Looking ahead, we’ll be adding vLLM, which handles high-throughput model serving for production-scale deployment.
Production monitoring
Until now, you could train and serve a model from main, but you couldn’t monitor it from main. Evidently AI fills this gap with drift detection, performance tracking, and LLM-specific observability.
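Drift reports like Evidently’s build on classical statistical tests. As a rough illustration of the underlying idea (not Evidently’s API), a two-sample Kolmogorov–Smirnov test flags a shifted feature distribution:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=1000)  # feature at training time
current = rng.normal(loc=0.5, scale=1.0, size=1000)    # production data, mean shifted

# KS test compares the two empirical distributions; a small p-value means drift
stat, p_value = ks_2samp(reference, current)
drift_detected = bool(p_value < 0.05)
print(f"KS statistic={stat:.3f}, p={p_value:.2e}, drift={drift_detected}")
```

A monitoring tool runs checks like this per feature on a schedule and surfaces the results in dashboards and alerts.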
Development tools
Gradio makes it easy to demo ML models to stakeholders without building full web applications. Polars provides DataFrame processing that’s significantly faster than pandas when you’re working with large datasets. Both were frequent requests from customers.
How to Get Started
If you installed Anaconda Distribution or Miniconda with default settings, you’re already set up: these packages are available now with conda install. If you’ve customized your channel configuration over time and aren’t sure what you’re using, Anaconda’s channel configuration guide walks through how to check your setup.
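For reference, `conda config --show channels` prints the channels you’re currently using, and a defaults-only setup looks like this in your .condarc:

```yaml
# ~/.condarc — sources everything from Anaconda's default channels (main)
channels:
  - defaults
```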
New to these tools? We’ve put together some tutorials to help you get started:
- Building AI Workflows with Human Oversight using LangGraph
- Time-Series Analysis with Polars: Exoplanet Transit Detection
Why Coordinated Packaging Matters
When your RAG pipeline depends on PyTorch, FAISS, LangGraph, and dozens of shared libraries underneath, those pieces need to actually work together. A RAG application might pull in 50+ dependencies once you account for everything in the stack. When packages come from different sources with different build systems, version mismatches can create problems that only surface at runtime—the kind that take hours to debug.
Packages on main are built and tested as a coordinated distribution. Dependency conflicts get caught before installation. When a critical security vulnerability appears in a library like libwebp (as happened in 2023), we can update it once and every dependent package continues working. That’s significantly harder to manage when packages come from multiple sources.
If you’re interested in how this coordination works under the hood, this blog post goes into greater detail.
Lastly, tell us what you need! We’re continuing to expand package coverage based on what teams actually need. If there are AI packages your team depends on that aren’t available on main yet, we want to hear about it. Visit our product feedback page and submit a request under Packages and Installers. Your feedback directly influences our roadmap.