Faster Machine Learning: Deep Learning with GPUs


The Big Deal With GPUs

If you’ve been following data science and machine learning, you’ve probably heard the term GPU. But what exactly is a GPU? And why have GPUs suddenly become so popular?

What A GPU Is

A Graphics Processing Unit (GPU) is a computer chip that performs massively parallel computations exceedingly fast. Throughout the 2000s, companies like NVIDIA and AMD invested in GPUs to improve performance for video gaming and 3D modeling. As developers designed increasingly realistic-looking games, they needed more powerful hardware to render the game images. Researchers have long experimented with using GPUs for more than just games, and the last ten years have seen GPUs expand into many data science applications, including deep learning.

Why You Need A GPU

The popularity of GPUs in AI hinges on the recent explosion in the use of deep learning. So let’s start with a quick introduction to deep learning.

The unreasonable effectiveness of AI is largely due to a form of machine learning called deep learning. Deep learning, as opposed to traditional machine learning techniques, builds a model from many simple computational layers stacked up, forming a deep network. These deep networks enable many of the advances in AI we see today such as self-driving cars, smart speakers, and superhuman Go playing.
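To make the idea of “simple computational layers stacked up” concrete, here is a minimal sketch of a deep network’s forward pass in plain Python. The layer sizes and random weights are hypothetical toy values; real networks learn their weights from data using a library like TensorFlow or PyTorch.

```python
import random

random.seed(0)

def dense_layer(inputs, weights, biases):
    """One simple layer: weighted sums of the inputs, then a ReLU nonlinearity."""
    return [
        max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

# A "deep" network is just several such layers stacked: each layer's output
# becomes the next layer's input. (Toy sizes; weights chosen at random.)
layer_sizes = [4, 8, 8, 2]  # input -> two hidden layers -> output
params = [
    (
        [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
        [0.0] * n_out,
    )
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
]

def forward(x):
    for weights, biases in params:
        x = dense_layer(x, weights, biases)
    return x

output = forward([0.5, -0.2, 0.1, 0.9])
```

Each layer here is a batch of independent multiply-adds, which is exactly the kind of work a GPU can run thousands of at once.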

Deep learning is a new name for an old idea: artificial neural networks (ANNs). ANNs are quite old, first invented in the 1950s, but until recently they were not a popular choice for machine learning. To be powerful, an ANN needs to be very deep and trained on a large amount of data. The challenge was that, until recently, such large networks were too computationally expensive to train and run; it took several advances in computing to make working with them practical.

Once researchers recognized that the GPU’s fast parallel computation could be applied to ANNs, deep learning took off. GPUs cut model training times from days to hours, enabling researchers and data scientists to iterate quickly and build powerful models.

Today, hardware manufacturers build GPUs designed specifically for deep learning. Google has gone so far as to build their own chip, the TPU (Tensor Processing Unit), designed from scratch for working with Google’s open source deep learning library, TensorFlow.

Also, the open source software community has supplied a number of libraries that make deep learning accessible to anyone willing to write a few lines of code. These powerful deep learning libraries—including TensorFlow, Keras, PyTorch, and CNTK—all run effectively on both CPUs and GPUs with little to no code changes on the part of the data scientist.
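As one illustration of “little to no code changes,” here is the common PyTorch pattern for device-agnostic code: the model and data are defined once, and a single `device` variable decides whether they run on a GPU or fall back to the CPU. The layer sizes are arbitrary toy values.

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU.
# Everything below is identical either way.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 10),
).to(device)

batch = torch.randn(8, 64, device=device)  # a toy batch of 8 inputs
logits = model(batch)
```

Moving the same workload between CPU and GPU is just a matter of changing `device`; the model definition and training loop stay untouched.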

How To Get A GPU

Individuals who want to get started with deep learning typically buy a physical GPU and install it in a desktop computer. For many enterprises interested in deep learning, this solution is not ideal: purchasing individual GPUs for data scientists is expensive and makes collaboration and deployment difficult to manage. It makes more sense to use on-premise GPU servers or cloud-based GPUs from providers like AWS, Google Cloud, or Microsoft Azure.

How Anaconda Enterprise Can Help

With Anaconda Enterprise (AE), IT organizations can manage GPU servers in their AE cluster. AE will make these servers available to every user without requiring them to install anything on their computer. Users can simply check out a GPU when needed (e.g., for training a deep learning model), and then AE will automatically return them to the cluster when the job completes. This approach makes sharing GPUs across an organization cost-effective while also ensuring availability for users.

If you’d like to learn more about how Anaconda Enterprise can help your organization build powerful deep learning models with GPUs, drop us a line at [email protected].

Talk to an Expert

Talk to one of our industry experts to find solutions for your AI journey.