With free, open source tools like Anaconda Distribution, it has never been easier for individual data scientists to analyze data and build machine learning models on their laptops.

So why does deriving actual business value from machine learning remain elusive for many organizations?

Because while it’s easy for data scientists to build powerful models on their laptops with tools like conda and TensorFlow, business value comes from deploying machine learning models into production. Only in production can a deployed model actually serve the business. And unfortunately, the path to production remains difficult for many companies.

What’s So Hard About Deploying an ML Model?

Does this sound like your organization? Your data science team works on laptops using open source tools installed from the open Internet. They train their models with a subset of production data that fits in memory on their local machines. They share results with stakeholders via PowerPoint slides and emails, not deployed REST APIs.

Then, with management approval, the data science team will pass a model over to the software engineering team, who will then rewrite the model in a “production language” like Java. Sometimes this causes the model outcomes to change and it always takes a significant amount of time.

But there is no other option—your data scientists do not have the tools or expertise to deploy their own models into production. And while this approach is great for getting started, it will not generate business value. Let’s assume the data scientist has successfully loaded and analyzed data; trained, scored, and serialized a model; and then written a predictor function for the model. This represents a significant amount of work requiring expertise in statistics, software development, and the business domain. Unfortunately, for the data scientist to then deploy this model, he or she needs to take many additional, time-consuming steps.

The Larger Your Organization, the Greater the Challenge

First, the data scientist needs to wrap the model within a web framework; Flask, Tornado, and Shiny are popular choices. From there, the data scientist must package the model, data, and packages/dependencies in a format that will run both on the data scientist’s laptop and on the production servers. For large organizations, this is a lengthy, complex process. Does the data scientist have experience building web applications that gracefully handle load balancing and surges in network traffic? If not, the model will likely crash the first time it faces serious load after deployment.
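The packaging step alone typically means writing and maintaining an environment specification that pins the model’s dependencies so the same environment can be rebuilt on the production servers. A conda-based project might carry a spec like the sketch below (the project name, pins, and package list are purely illustrative):

```yaml
# Illustrative environment.yml -- names and version pins are examples only
name: churn-model
channels:
  - defaults
dependencies:
  - python=3.6
  - flask=1.0
  - scikit-learn
  - pandas
```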

Many organizations think containerization tools like Docker and cloud providers are all they need to solve this problem. But while containers and elastic compute environments are crucial parts of the solution, they alone are not enough. Model serving, routing, and management at scale are significant challenges that require supporting infrastructure.

The graphic below highlights the numerous steps required to build, train, and deploy a machine learning model in an enterprise environment.

Model Deployment Graphic

That’s…a lot of steps, right? This is why tech giants like Google, Facebook, and Uber have built their own internal platforms to automate model deployment. But good news—you don’t need to be a big tech company to achieve similar results.

Anaconda Enterprise to the Rescue with One-Click Deployments

With Anaconda Enterprise, your organization can develop, govern, and automate machine learning pipelines while scaling with ease. Anaconda Enterprise supports one data scientist or thousands, working on anything from laptops to GPU training clusters to production clusters running thousands of nodes.

For your data scientists and the IT administrators that support them, Anaconda Enterprise provides powerful tools for building, training, and deploying models. Data scientists can deploy models with the single click of a button, without having to worry about containerization, URL generation, model routing, or security. Anaconda Enterprise takes care of all that behind the scenes.

With Anaconda Enterprise, data scientists can easily deploy:

  • Machine learning models as REST APIs
  • Dashboards with Bokeh, Plotly, and other viz libraries
  • Web applications with Flask, Tornado, and Shiny
  • Live, read-only notebooks

Unconvinced? Let’s look at a few examples…

Deploying a Machine Learning Model as a REST API with One Line of Code

Our first example will demonstrate how to turn any function into a REST API in Anaconda Enterprise with just one line of code. And if that doesn’t sound enticing enough, this method also works inside of Jupyter notebooks, so you don’t have to move any of your existing code. If you can write a function, you can create a REST API. Here’s how to do it:

Let’s define a basic Python function called foo:

# Increments an input by 1
def foo(my_input):
    return my_input + 1

With the anaconda-enterprise-web-publisher package, available to all Anaconda Enterprise users, data scientists need only add a wrapper to their function.

# publish is provided by the anaconda-enterprise-web-publisher package
from anaconda_enterprise import publish

@publish()
def foo(my_input):
    return my_input + 1

Now a user can deploy this function as a REST API using our GUI-based deployment workflow. Anaconda Enterprise will automatically generate an endpoint and a token for secure access. Data scientists (or other external users) can access the deployed model by appending the function name to a URL dynamically generated by Anaconda Enterprise and making an HTTP GET request. For example, the function above is named “foo”. A user can consume this API with a tool like curl via:

curl random.url.anaconda.com/foo
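To sketch what authenticated access might look like from Python, consider the snippet below. The endpoint URL and token are placeholders (Anaconda Enterprise generates the real ones at deploy time), and passing the function argument as a query parameter is an assumption about the generated API, not documented behavior:

```python
import urllib.request

# Placeholder values -- Anaconda Enterprise generates the real URL and token
ENDPOINT = "https://random.url.anaconda.com/foo"
TOKEN = "example-deployment-token"

def build_request(my_input):
    """Build an authenticated GET request for the deployed foo endpoint."""
    # Assumption: the argument is passed as a query parameter
    url = "{}?my_input={}".format(ENDPOINT, my_input)
    # Token-based access: send the deployment token as a bearer header
    return urllib.request.Request(url, headers={"Authorization": "Bearer " + TOKEN})

req = build_request(41)
print(req.full_url)  # https://random.url.anaconda.com/foo?my_input=41
```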

Now let’s look at a more realistic example. Below is a function that calls a Scotch whisky recommendation engine. End users hit the API via an HTTP POST request, providing their favorite Scotch, and the API returns the top five recommendations as determined by the engine. Come thirsty!

import json

@publish(methods=['GET', 'POST'])
def top_scotch_picks(my_scotch):
    # my_scotch arrives as a list of request values; use the first entry
    results = recommend_scotch(my_scotch[0])
    top_picks = [(x[1], x[2]) for x in results]
    return json.dumps({"Top 5 Picks For {}".format(my_scotch[0]): top_picks})

Note that we have added an argument to @publish. Here we specify the HTTP methods this endpoint will accept: simply by adding (methods=['GET', 'POST']), the function will accept both GET and POST requests.

It is also worth pointing out that the function returns JSON, a common format for data on the web. This is implemented just as a Python user would expect, using the standard json library that ships with Python.
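To see the return format in isolation, here is a runnable sketch with a stub standing in for the real recommendation engine. The stub’s (score, name, region) tuple layout is an assumption inferred from how top_scotch_picks indexes its results, and the whiskies listed are just sample data:

```python
import json

# Stub in place of the real engine -- the (score, name, region) tuple layout
# is an assumption inferred from the indexing in top_scotch_picks
def recommend_scotch(scotch):
    return [(0.97, "Lagavulin 16", "Islay"),
            (0.95, "Ardbeg 10", "Islay")]

def top_scotch_picks(my_scotch):
    results = recommend_scotch(my_scotch[0])
    top_picks = [(x[1], x[2]) for x in results]
    return json.dumps({"Top 5 Picks For {}".format(my_scotch[0]): top_picks})

print(top_scotch_picks(["Laphroaig 10"]))
```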

At this point we hope you see how easy it is to turn any function into a REST API with Anaconda Enterprise. Users can quickly build, train, and deploy models as APIs, consumable by software developers and third-party applications. By making deployment fast and easy, Anaconda Enterprise enables data scientists to rapidly drive business value for their organizations.

Now, we recognize that many data science users are comfortable with web frameworks like Flask, Tornado, Django, and Shiny. These libraries are fully supported in Anaconda Enterprise. In the next example, we’ll walk through a deep learning model deployed as a REST API using Flask.

Deploying a Deep Learning Model as REST API with Flask

The basic machine learning model above is a good starting point, but we should provide a more robust example. In this example, we’ll build a deep learning model using Keras, a popular API for TensorFlow. We’ll use Keras to construct a model that classifies text into distinct categories. We’ll then serve our model using Flask, a lightweight web framework favored by data scientists for its flexibility and ease of use.

The code for this example comes from Hamza Harkous and his blog post “A Guide to Scaling Machine Learning Models in Production”. We won’t go through the code line by line, as Hamza provides a great write-up in the link above. What we do need to make clear is just how easy it is to run this Flask example inside of Anaconda Enterprise.

We can run any web application by simply specifying the host as “0.0.0.0” and the port as 8086. Once we do that, we can run our application locally or deploy it into production. Let’s look at the code snippet below:

if __name__ == "__main__":
    print("Loading model")
    model = load_model(os.path.join(MODEL_DIR, 'reuters_model.hdf5'))
    # we need the word index to map words to indices
    word_index = reuters.get_word_index()
    tokenizer = Tokenizer(num_words=max_words)
    app.run(host='0.0.0.0', port=8086)

A data scientist can create an API with a framework like Flask, specify the host and port, and that’s it. Anaconda Enterprise provides everything else for local testing as well as deployment.
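As a concrete sketch, a minimal Flask service following this host/port pattern might look like the snippet below. The /predict route and the keyword-based stand-in for a real classifier are hypothetical, purely to show the shape of such an app:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json()["text"]
    # Stand-in for a real model.predict call -- purely illustrative
    label = "business" if "market" in text else "other"
    return jsonify({"category": label})

if __name__ == "__main__":
    # Anaconda Enterprise expects the app to listen on 0.0.0.0:8086
    app.run(host="0.0.0.0", port=8086)
```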

To test locally, simply open a terminal and start the server. In this example, the server file is named app.py, so run python app.py. Within a project session, Anaconda Enterprise serves the model on http://localhost:8086.

Deploying the model into production is just as easy. Simply select the Deploy button and choose from a simple drop-down what you want to deploy.

Anaconda Enterprise Deployment

Users choose the model version, deployment command, and resource profile for each deployment. Each of these options is a significant boon to the data scientist, so let’s discuss them in turn.

  • Model version:
    • The model version allows data scientists to choose any version of their model or project. This means if a data scientist wants to roll back to a model deployed six months ago, it can be done without any hassle. No searching for old files or trying to remember environment variables—it’s all right there ready for you.
  • Deployment command:
    • Within a single project, a data scientist can build any number of models. For example, a user might build a model in Keras with Python 3.6, a dashboard with Python 2.7, and a Shiny app in R. Each of these models can coexist in a single Anaconda Enterprise project. The user can deploy each simply by specifying the deployment command specific to the given project. This provides maximum flexibility and fosters collaboration. Data scientists can share their projects with team members and take advantage of each team member’s strength.
  • Resource profile:
    • Each deployment has different computational needs. Anaconda Enterprise lets users choose the compute they need for each project. Deploying a model with a small number of input parameters for inference? Choose a lightweight container that suits the task. Need a heavy-duty GPU for a powerful deep learning model? Pick the resource you need and Anaconda Enterprise takes care of the rest.

Final Thoughts

For those managing and supporting data science teams, Anaconda Enterprise makes your life easy. Anaconda Enterprise offers full reproducibility, including governance around the organization’s open source supply chain. This means no package conflicts, environment issues, or model drift due to package updates. Data scientists have secure access to packages and dependencies behind their corporate firewall in a highly available environment.

IT administrators don’t need to worry either, as Anaconda Enterprise automates the process of spawning containers, installing the appropriate runtime and dependencies, and serving models with load balancing and full authentication. Deployed models are shared securely with automatic TLS 1.2 encryption and token-based access, allowing secure access from any RESTful client.

Next Steps

Interested in learning more about what machine learning can do for your organization? Check out our webinar, Getting Started with Machine Learning with scikit-learn. Anaconda Data Science Trainer David Mertz will walk you through the scikit-learn estimator and transformer APIs, integration of data preparation techniques for machine learning, and a simple workflow for setting up and evaluating machine learning systems.