You’ve mastered creating specialized Jupyter environments and using quick start environments for rapid development. Now it’s time to take the next step: building a complete project workflow that goes from initial development with quick start environments to creating production-ready code you can confidently share and deploy.
This guide walks through a complete real-world example, showing how to customize the NLP quick start environment, develop a project, and create bulletproof requirements files for production deployment.
Foundation Check: Your Navigator Workflow
Before diving in, ensure you have your intermediate Navigator setup ready:
- Base environment: Clean and minimal
- Jupyter environments: Specialized setups (jupyter-ds, jupyter-ai)
- Anaconda Toolbox: Installed and ready to access quick start environments
- Multi-IDE setup: VS Code or PyCharm configured to use environments in Navigator
Real-World Project: Building a News Sentiment Analyzer
Let’s walk through a complete workflow that shows how to go from quick start environment to production-ready requirements files, using an NLP project as our example.
In this example, we will see how to customize your quick start environment as your project matures and produce a reusable environment file that captures those changes.
Step 1: Start with the NLP Quick Start Environment
The NLP quick start environment provides everything you need for text analysis.
Pre-installed core tools:
- nltk, spacy: Traditional NLP processing
- transformers: Modern transformer models
- torch, tensorflow: Deep learning frameworks
- pandas, numpy: Data manipulation
- requests, beautifulsoup4: Web scraping
- sqlite: Local data storage
- plotly, seaborn: Visualization
Install with the Anaconda Toolbox:
- Open Anaconda Navigator
- Click on Anaconda Toolbox
- Open the Toolbox, and click Create a New Environment
- Install the “Anaconda-NLP” environment (wait 5-10 minutes)
- Create a new notebook and select the Anaconda-NLP environment as its kernel
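Once the notebook is running, a quick sanity check confirms the kernel is using the new environment (the exact version numbers you see will depend on the environment build):

# Confirm the Anaconda-NLP kernel is active and core packages import cleanly
import transformers, torch, pandas
print("transformers", transformers.__version__)
print("torch", torch.__version__)
print("pandas", pandas.__version__)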
Step 2: Develop and Discover Missing Packages
As you build your sentiment analysis project, you’ll likely discover you need additional functionality beyond what’s in the quick start environment.
Initial Development: Start with basic sentiment analysis using the pre-installed transformers library (see the sketch after this list). Your quick start environment has everything needed for:
- Loading pre-trained sentiment models
- Processing text data with pandas
- Web scraping with requests and beautifulsoup4
- Storing results in sqlite databases
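For instance, a first pass at the analyzer might look like the following. This is a minimal sketch: the headlines are made up, headlines.db is a hypothetical local database file, and pipeline() downloads a default sentiment model on first use.

from transformers import pipeline
import pandas as pd
import sqlite3

# Load a pre-trained sentiment model (a default model is downloaded on first use)
classifier = pipeline("sentiment-analysis")

# Score a few example headlines
headlines = [
    "Markets rally after strong earnings reports",
    "Storm causes widespread power outages",
]
results = classifier(headlines)

# Collect the scores into a DataFrame and persist them to SQLite
df = pd.DataFrame({
    "headline": headlines,
    "label": [r["label"] for r in results],
    "score": [r["score"] for r in results],
})
conn = sqlite3.connect("headlines.db")
df.to_sql("sentiment", conn, if_exists="append", index=False)
conn.close()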
Discovering Additional Needs: As your project evolves, you realize you need:
import feedparser # RSS feed parsing - NOT in the Anaconda-NLP environment
import apscheduler # Task scheduling - NOT in the Anaconda-NLP environment
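To see why these matter, here is a minimal sketch of how they would fit the project: feedparser pulls headlines from an RSS feed, and APScheduler re-runs the fetch on a schedule. The feed URL and the one-hour interval are illustrative assumptions.

import feedparser
from apscheduler.schedulers.blocking import BlockingScheduler

FEED_URL = "https://example.com/news/rss"  # illustrative feed URL

def fetch_headlines():
    # Parse the RSS feed and print each entry's title
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        print(entry.title)

# Re-run the fetch every hour until interrupted (Ctrl+C stops it)
scheduler = BlockingScheduler()
scheduler.add_job(fetch_headlines, "interval", hours=1)
scheduler.start()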
Step 3: Add the Missing Packages
Method 1: Through Navigator’s built-in terminal
- Go to the Environments tab in Navigator
- Select your NLP environment
- Click the play button beside the environment’s name
- Click “Open Terminal”
- Run the following command to install the packages into your NLP environment:
conda install feedparser apscheduler
Alternative Method 1: Using Navigator’s package manager
- Go to the Environments tab in Navigator
- Select your NLP environment
- Change the package list filter to “Not installed”
- Use the search box to search for feedparser, apscheduler
- Check the boxes beside the package names, then click Apply at the bottom of the screen to install them
Method 2: Through the conda command line
Open Anaconda Prompt (Terminal on macOS/Linux) and run the following:
conda activate your-nlp-environment-name
conda install feedparser apscheduler
Verify Installation:
In your notebook, run the following in a cell to verify that the installation succeeded:
# Quick test to ensure packages are available
import feedparser
import apscheduler
print("✓ All additional packages installed successfully")
Step 4: Create Production-Ready Requirements
Now comes the critical step: creating environment files that others can use to recreate your working setup. You have two main strategies to accomplish this.
Strategy 1: Exact Export (Maximum Reproducibility)
Export your exact environment with version-locked packages:
# Export everything exactly as you have it
conda activate your-nlp-environment
conda env export --no-builds > news-sentiment-exact.yml
What you get:
name: news-sentiment-exact
channels:
- defaults
dependencies:
- python=3.11.5
- pandas=2.1.0
- numpy=1.24.3
- transformers=4.33.2
- torch=2.0.1
- nltk=3.8.1
- spacy=3.6.1
- beautifulsoup4=4.12.2
- requests=2.31.0
- feedparser=6.0.10
- apscheduler=3.10.4
- plotly=5.15.0
- sqlite=3.41.2
# ... all other packages with exact versions
Best for:
- Research projects requiring exact reproducibility
- Production systems that can’t risk any changes
- Situations where you spent time debugging specific version conflicts
Strategy 2: Strategic Requirements (Future-Proof)
Create a manual specification that focuses on essential dependencies and allows flexible version ranges. To do this, relax the pins on packages you don’t mind being updated; for example, change pandas=2.0.0 to pandas>=2.0.0. This approach lets you automatically benefit from future improvements and bug fixes, but it also introduces the risk of breaking changes from newer versions. Only use flexible versioning for packages where you’re confident you can handle potential compatibility issues.
Export the environment:
To export the above environment to a shareable .yml file, run the following command:
conda env export > news-sentiment-strategic.yml
Once you have the file, editing the version specifications is a manual job: loosen the pins on the packages you’re comfortable updating. The following is an example environment file edited to use flexible version ranges:
# news-sentiment-strategic.yml
name: news-sentiment-strategic
channels:
- defaults
dependencies:
# Core Python
- python=3.11
# NLP quick start environment essentials
- transformers>=4.30.0
- torch>=2.0.0
- nltk>=3.8
- spacy>=3.6
- pandas>=2.0.0
- numpy>=1.24.0
- requests>=2.28.0
- beautifulsoup4>=4.12.0
- plotly>=5.15.0
- sqlite>=3.40.0
# Your custom additions
- feedparser>=6.0.0
- apscheduler>=3.10.0
# Development tools
- jupyter
- ipykernel
Best for:
- Long-term projects you’ll maintain over years
- Open-source projects for community use
- Cross-platform compatibility needs
Step 5: Validate Your Requirements File
Always test your requirements files with a fresh environment:
# Test the requirements file
conda env create -f news-sentiment-strategic.yml -n test-production
conda activate test-production
# Run your validation script
python validate_environment.py
Create a validation script to verify everything works:
# validate_environment.py
def test_nlp_environment():
    """Test that all required packages are available and working."""
    # Test NLP quick start packages
    import transformers, nltk, spacy, torch
    print("✓ Core NLP packages")

    # Test data processing
    import pandas, numpy, sqlite3
    print("✓ Data processing packages")

    # Test web tools
    import requests
    from bs4 import BeautifulSoup
    print("✓ Web scraping packages")

    # Test visualization
    import plotly.express as px
    print("✓ Visualization packages")

    # Test your additions
    import feedparser, apscheduler
    print("✓ Additional packages")

    print("🎉 Environment validation successful!")

if __name__ == "__main__":
    test_nlp_environment()
Step 6: Document Your Environment Strategy
Create clear documentation for your environment choices:
README.md example:
# News Sentiment Analyzer
## Environment Setup
This project uses conda for environment management.
### Quick Start
```bash
# Create the environment
conda env create -f news-sentiment-strategic.yml
# Activate the environment
conda activate news-sentiment-strategic
# Validate the setup
python validate_environment.py
```