6 Considerations for Developing AI for Manufacturing


AI techniques like machine learning (ML) and deep learning (DL) offer new ways to optimize processes, automate workflows, detect anomalies, and reduce costs in manufacturing industries, with a potential value estimated at $2 trillion. However, these techniques can be tricky to implement and apply, with many potential pitfalls for the unwary. Let’s explore some use cases for AI in manufacturing, then consider what data science and machine learning practitioners need to keep in mind when applying AI to this field.

AI Use Cases in Manufacturing

Modern factories are now loaded with sensors of all types. AI tools can use this data to improve production, efficiency, and communication across the manufacturing process, from product design and development to final delivery. AI approaches are already used in the factory to optimize production processes, help identify defects or issues with products, enable predictive maintenance of equipment, and expedite and track shipping, amongst many other applications.
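To make the anomaly-detection use case a little more concrete, here is a minimal sketch using scikit-learn's IsolationForest on simulated sensor readings. The sensor names, ranges, and contamination rate are illustrative assumptions standing in for real plant data, not recommendations:

```python
# Illustrative sketch only: flags unusual vibration/temperature readings
# from a hypothetical sensor feed using an unsupervised anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" operating data: vibration (mm/s) and temperature (°C)
normal = rng.normal(loc=[2.0, 60.0], scale=[0.3, 2.0], size=(1000, 2))

# A few injected faults with elevated vibration and temperature
faults = rng.normal(loc=[6.0, 85.0], scale=[0.5, 3.0], size=(10, 2))

readings = np.vstack([normal, faults])

# Fit on historical data assumed to be mostly normal; the contamination rate
# is a guess that should come from domain knowledge of expected fault rates.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(readings)   # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(labels == -1)} of {len(readings)} readings as anomalous")
```

In practice you would train on curated historical data and tune the expected fault rate together with the process engineers who know what "normal" looks like on that line.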

They can also be applied to digital twins, or detailed simulations of entire products and systems, to perform virtual experiments before committing to specific (and expensive!) real-world implementations. By providing better tools to improve virtually the entire manufacturing process, AI can help reduce the risk of errors and delays and save money. Of course, arbitrarily detailed simulations can get arbitrarily expensive to set up, and so it is crucial to have realistic goals throughout the process to ensure that AI techniques provide appropriate levels of value for the investment involved.
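As a toy illustration of the kind of virtual experiment a digital twin enables, the following sketch uses the open-source SimPy library to compare how quickly a simulated line clears its workload with one, two, or three inspection stations, before anyone commits to real equipment. The arrival and inspection times are made-up assumptions, not measured values:

```python
# Toy "virtual experiment": vary inspection capacity on a simulated line.
# All timings and station counts are illustrative assumptions, not real data.
import random
import simpy

PARTS = 200
ARRIVAL_TIME = 2.0   # mean minutes between parts leaving machining (assumed)
INSPECT_TIME = 3.0   # mean minutes per inspection (assumed)

def inspect_part(env, inspector, done_times):
    with inspector.request() as req:   # queue for a free inspection station
        yield req
        yield env.timeout(random.expovariate(1 / INSPECT_TIME))
    done_times.append(env.now)

def source(env, inspector, done_times):
    for _ in range(PARTS):
        yield env.timeout(random.expovariate(1 / ARRIVAL_TIME))
        env.process(inspect_part(env, inspector, done_times))

def run_line(n_inspectors):
    random.seed(0)
    env = simpy.Environment()
    inspector = simpy.Resource(env, capacity=n_inspectors)
    done_times = []
    env.process(source(env, inspector, done_times))
    env.run()
    return max(done_times)   # time at which the last part is finished

for n in (1, 2, 3):
    print(f"{n} inspection station(s): last part done at {run_line(n):.0f} simulated minutes")
```

Even a toy model like this makes it cheap to ask "what if" questions that would be disruptive or expensive to test on the real line.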

For a more comprehensive list of manufacturing use cases, check out the manufacturing section in Anaconda’s guide: AI Use Cases for the Enterprise.

6 Critical Considerations When Developing AI for Manufacturing

When developing AI for manufacturing, there are a few things to consider before you get too far down the road. The list below walks through the basic steps, with key considerations for each one:

1. Identify the specific problem or challenge you want to address. AI techniques depend on data that is expensive to collect, curate, and process, so it is crucial to be realistic about the outcomes you want. Just adding sensors everywhere will rapidly fill up your hard drives without necessarily leading to better decision making or better products! To set the scope for your AI project, start by brainstorming a list of outcomes your organization’s leadership would like to see. Setting scope includes determining what data will need to be collected, what types of software and algorithms could be applied, and what metrics will be used to evaluate success. Explicitly defining the scope helps ensure that all stakeholders, developers, and users are aligned on the project’s objectives, and it provides the basis for measuring and demonstrating the value of the AI system to your organization. Keep goals focused, measurable, and aligned to the needs of your business.

2. Take a close look at your data before committing to a big project. Make sure you have access to suitably high-quality data that has been (or can be) cleaned and structured for AI, and that it is representative of the real-world manufacturing process. Humans are good at ignoring implausible values, but bad data can easily lead an AI model astray. Curating your data takes a lot of human time and effort, but without it your data will just be a black box, with unknown biases that will determine your results.

Visualizing all of your data is key here. Often, simply visualizing it will already tell you how to proceed, without any other algorithm needed! Once a human understands the existing data and its quality, you can decide whether your AI project can proceed and estimate how much time and effort it will take to gather the rest of the data you need. When you do go forward, treat your data as a strategic resource, with data in the right place, at the right time, in the right format. Data management includes the collection, storage, security, quality, dissemination, and changes that occur in your data, from creation to archiving. The short sketch below shows the kind of quick first pass that often surfaces data problems before any modeling begins.
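This is a minimal sketch of that first pass, assuming a hypothetical CSV of sensor logs (the file name and column names are made up, and the remaining columns are assumed to be numeric sensor channels). Summary statistics, missing-value counts, and a quick plot per sensor will often reveal stuck sensors, dropouts, and implausible readings straight away:

```python
# Quick data-quality pass on a hypothetical sensor log before any modeling.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("line3_sensors.csv", parse_dates=["timestamp"])

# Summary statistics often surface problems immediately: impossible ranges,
# constant (stuck) sensors, or huge gaps in coverage.
print(df.describe())
print("Missing values per column:\n", df.isna().sum())

# Flag physically implausible readings rather than silently ignoring them.
implausible = df[(df["temperature_c"] < -40) | (df["temperature_c"] > 200)]
print(f"{len(implausible)} implausible temperature readings")

# Plot each sensor over time; dropouts, spikes, and drift are easy to spot by eye.
df.set_index("timestamp").plot(subplots=True, figsize=(10, 6))
plt.tight_layout()
plt.savefig("sensor_overview.png")
```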

3. Plan your infrastructure and deployment. Develop a clear picture of your existing infrastructure and of where your leadership sees the organization heading. Specifically, how will you deploy your models: in the cloud, on premises, or air-gapped? Working with data in the cloud makes life easier for your analysts and simplifies provisioning computing power, but it can open up types of risk that may not be acceptable in a manufacturing context.

Also consider how you plan to monitor AI models once they’re deployed into production. The AI model lifecycle includes making sure that models continue to perform as expected and addressing any issues that may arise after deployment.
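Here is a minimal sketch of one such post-deployment check: comparing the distribution of a live sensor feature against the data the model was trained on, using a two-sample Kolmogorov-Smirnov test from SciPy. The feature, window sizes, and threshold are illustrative assumptions; real monitoring would also track accuracy, latency, and data volumes.

```python
# Minimal post-deployment monitoring sketch: compare live feature distributions
# against the training data to catch drift before predictions quietly degrade.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_vibration = rng.normal(2.0, 0.3, size=5000)   # reference (training) window
live_vibration = rng.normal(2.4, 0.3, size=1000)    # recent production data (drifted)

stat, p_value = ks_2samp(train_vibration, live_vibration)   # two-sample KS test
if p_value < 0.01:
    print(f"Possible drift detected (KS statistic={stat:.3f}); "
          "schedule a review or retraining of the model.")
else:
    print("Live data still matches the training distribution.")
```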

4. Choose an AI platform and/or a combination of open-source software for building. Sensor data can quickly grow larger than a single machine can handle, so it’s important to have platforms, tools, and repositories that can process large volumes of data in real time (one open-source approach is sketched at the end of this step).

For building AI, look for a multi-coding-language AI platform that makes it easy for your building teams to use disparate types of data, build helpful visualizations, share their work, and deploy models into production faster. Your building teams typically include data scientists, data engineers, ML engineers, software engineers, and developers. These are the people who understand the open-source environment and the best tools for your organization’s use cases.
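As one example of the kind of open-source tooling those teams might reach for, here is a sketch using Dask to aggregate a directory of Parquet telemetry files that is too large to fit in memory. The paths and column names are hypothetical, and Dask is just one of several reasonable choices for this job:

```python
# Sketch of scaling beyond a single machine's memory with Dask.
import dask.dataframe as dd

# Lazily read a directory of Parquet files that is too large for RAM.
df = dd.read_parquet("telemetry/2024-*.parquet")

# Per-machine hourly averages, computed in parallel across partitions.
hourly = (
    df.assign(hour=df["timestamp"].dt.floor("h"))
      .groupby(["machine_id", "hour"])["vibration_mm_s"]
      .mean()
      .compute()
)
print(hourly.head())
```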

5. Protect compliance, security, and governance. Security is a key consideration overall, and while open-source software brings the innovation of the crowd, it can also bring security risks in the form of vulnerabilities inside software packages. Look for an AI platform that catches vulnerabilities without interrupting your builders’ workflows. Key capabilities here include user access controls, common vulnerabilities and exposures (CVE) reporting, and, most importantly, the ability to associate CVEs with the software packages your teams are building with or your deployed systems are using.
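As a simple illustration of the kind of check such a platform automates, the sketch below queries the public OSV vulnerability database (https://osv.dev) for a small sample of installed packages. A real governance workflow would run this continuously and tie findings to specific environments, deployments, and remediation policies:

```python
# Illustrative check of installed packages against the public OSV database.
from importlib.metadata import distributions
import requests

OSV_URL = "https://api.osv.dev/v1/query"

for dist in list(distributions())[:20]:          # sample only, for brevity
    name, version = dist.metadata["Name"], dist.version
    resp = requests.post(OSV_URL, json={
        "package": {"name": name, "ecosystem": "PyPI"},
        "version": version,
    }, timeout=10)
    vulns = resp.json().get("vulns", [])
    if vulns:
        ids = ", ".join(v["id"] for v in vulns)
        print(f"{name} {version}: {ids}")
```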

6. One size does not fit all! “Manufacturing” covers an enormous range of activities, from one-off custom work, to mass customization using computer-controlled machines, to high-precision but low-volume production for exacting applications, to high-volume, cost-sensitive mass production, and every possible combination of these. Using AI to shave a few cents off the unit cost of something you produce in the millions is very different from using AI to find defects in a safety-critical subassembly, and how you apply it differs in each case. Expect to think carefully about the needs and goals of your particular type of manufacturing in order to use AI effectively.

AI is quickly becoming a powerful tool for manufacturing operations. When evaluating AI as an option, it is important to consider the various ways in which it can be leveraged to increase efficiency and reduce costs. By understanding the six critical considerations outlined above, you will be better equipped to make informed decisions about AI implementation and to reap the benefits of automation in your own manufacturing operations.

To learn more about AI use cases for the enterprise, download our guide: AI Use Cases for the Enterprise. To find out how enterprise data science and machine learning practitioners are using Anaconda to build and deploy secure AI solutions faster and more collaboratively across their organizations, contact us today.

About the Authors

James A. “Jim” Bednar works with commercial and government clients to improve Python software for visualizing and analyzing large and complex datasets. He holds an M.A. and Ph.D. in Computer Science from the University of Texas, along with degrees in Electrical Engineering and Philosophy. He has published more than 50 papers and books about the visual system and software development. Before joining Anaconda back in 2015, Jim was a faculty member in Informatics for 10 years at the University of Edinburgh in Scotland.

Andrew Sherlock is the Director of Data-Driven Manufacturing at the National Manufacturing Institute Scotland and Professor of Practice at the University of Strathclyde. His career, both in academia and industry, has focused on the application of AI, data science, and search techniques to design and manufacturing. In 2006 he founded ShapeSpace Ltd, a spin-out from the University of Edinburgh, to commercialize 3D search-by-shape technology, initially developed as a 3D search engine for components and subsequently enhanced to allow analysis of assemblies. This technology has been deployed at a number of large manufacturers in the automotive, aerospace, and industrial equipment industries, where the ability to undertake analytics on component portfolios and large numbers of bills of materials has uncovered significant cost savings within the supply chain. Between 2016 and 2019 Andrew was the Royal Academy of Engineering (RAEng) Visiting Professor in Design for Product Profitability at the University of Edinburgh.

Talk to an Expert

Talk to one of our manufacturing industry experts to find solutions for your AI journey.
