The impact of AI and data on the manufacturing industry

Technological advances are driving the manufacturing industry into the next phase of its development: Industry 4.0. Innovative AI, machine learning and IoT solutions can now predict when machine maintenance will be needed, improve organizational efficiency and minimize operational risks and costs. The sheer amount of data now available makes all of this ever more attainable. Yet research shows that many companies are unsure how to explore the possibilities of AI.

In this series of articles, we look at the impact of these trends on IT organizations. We also explain how organizations in the manufacturing industry can make their processes faster, better and smarter by bringing their data to life. What can AI and machine learning mean for businesses? What are the requirements for making AI a success within a company? In this article, the first in this series, we answer these questions and introduce our solutions: Applied AI and Data Hub.

Computer programs are getting smarter and can support people and companies in their work ever more effectively. They do this with the help of large amounts of data. Whereas until recently we mainly processed this data into reports and dashboards in order to look back and draw our own insights, computers can now use that same data to present insights to us or even to make decisions based on them.

Predicting with algorithms

These computer programs not only look back through the data, but also help us look ahead. We can now use smart algorithms to predict complex trends: in the financial domain, for example, or in supply, demand and inventory. We can anticipate unplanned maintenance by having built-in sensors automatically analyze machinery long before maintenance is needed. We can also analyze various business processes, such as the production line, supply chain and procurement. And we can use algorithms to automate various activities that were previously primarily human work.
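
To make the maintenance example concrete, the sketch below trains a classifier on historical sensor readings to flag machines likely to fail soon. It is a minimal illustration, not a prescribed setup: the CSV file, column names and label are assumptions for the example.

```python
# Minimal predictive-maintenance sketch (illustrative only).
# Assumes a hypothetical CSV of historical sensor readings with a label
# indicating whether the machine failed within the following 30 days.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("sensor_history.csv")  # hypothetical file
X = df[["vibration_rms", "temperature_c", "runtime_hours"]]  # assumed columns
y = df["failed_within_30d"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Report precision and recall: in maintenance, a missed failure (low recall)
# is usually more costly than a false alarm.
print(classification_report(y_test, model.predict(X_test)))
```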

Examples of AI applications

For the first time ever, we now have enough storage capacity for all the data we generate and enough computing power to perform complex calculations. This means we have everything we need to build powerful AI applications. Some examples of applications:

  • Computer vision. The YOLO (You Only Look Once) algorithm can recognize where an object is and what it is in a single frame of a video, and it can recognize thousands of different objects. It consists of a deep neural network that was trained on an enormous number of photos. Suppose that in a factory you need to check, quickly and repeatedly, whether all the parts on a production line are present: this technology would be handy (see the detection sketch after this list).
  • Virtual agents. These are virtual characters that do something, for example play a game, without knowing the rules beforehand. Using an algorithm called reinforcement learning, they master the game by playing it millions of times against each other. Over time, they even learn complex strategies. Using reinforcement learning or other algorithms, you can implement complex optimizations to make your production process more efficient or even automate it (a minimal learning loop follows this list).
  • Natural language processing (NLP) combined with image generation. An example: you describe in natural language an image that you want a program to draw. A chair shaped like an avocado? An elephant made of cucumbers? An artificial network can generate such images straightaway, based on the user’s input.
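
To give an impression of the computer-vision example: with a pretrained detector, checking a frame for objects comes down to a few lines. The sketch below uses the open-source ultralytics package; the model file and the frame path are placeholders for illustration.

```python
# Object detection on a single frame with a pretrained YOLO model
# (via the open-source "ultralytics" package; paths are illustrative).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")               # small pretrained model
results = model("production_frame.jpg")  # hypothetical frame from the line

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]  # predicted class label
    conf = float(box.conf)                # detection confidence
    print(f"{cls_name}: {conf:.2f} at {box.xyxy.tolist()}")
```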
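
And as a minimal impression of reinforcement learning: the loop below lets an agent that knows nothing at the start learn, purely from reward, which action pays off. The toy environment here is entirely an assumption standing in for a real process.

```python
# Tabular Q-learning in its simplest form (toy example).
# The agent starts with no knowledge and learns from reward alone.
import random

N_STATES, N_ACTIONS = 5, 2
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # Q-table, starts at zero

def step(state, action):
    """Toy environment: action 1 moves forward; reaching the last state pays off."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate
for _ in range(10_000):  # "millions of plays", scaled down for illustration
    state = 0
    for _ in range(20):
        if random.random() < epsilon:  # explore occasionally
            action = random.randrange(N_ACTIONS)
        else:                          # otherwise take the best known action
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])
        nxt, reward = step(state, action)
        # Standard Q-learning update rule.
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

print(q)  # action 1 ("forward") should score highest in every state
```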

AI is not magic

With all these wonderful examples, you might be quick to assume that there’s something almost magical about AI. However, AI as we know it today is nothing more than a learning algorithm that learns, from a huge amount of data, how to get from a certain input to a certain output. The more data, the more accurate and capable the AI. But AI is not always the best solution to your problem: it can be costly, and it can take a long time before your AI solution finally runs in production. Sometimes you can also solve a problem the old-fashioned way, by calling on a clever programmer.

With that in mind, it’s good to know that anyone can get started with AI now that the right tools and resources are available. The big question is: where do you start, and how do you make sure your first AI project isn’t a miserable failure? Things often go wrong because the chosen problem doesn’t add any value. Which problem to tackle first should be a topic of conversation between business professionals and experienced data professionals, so that added value and feasibility are weighed together. Things also frequently go wrong with the data itself: the right data isn’t there, there’s not enough of it, it’s of insufficient quality or it’s not accessible. A specific problem we often see in the industrial sector is that the data is trapped inside the machines and can only be freed by the suppliers.

Finally, we regularly see that data scientists produce a good machine learning model, but that insufficient consideration is given, when setting up the model and the pipelines, to how the model can be put into production. For example, the model takes too long to make a prediction, or it only runs on a data scientist’s laptop with all kinds of exotic dependencies. And a model that’s trained today may no longer be current in six months’ time. If a model no longer works well, the business will stop relying on it.
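
A lightweight safeguard against such staleness is to track the model’s live accuracy against the level it had at release and raise a flag when it degrades. The sketch below is one possible shape for this; the threshold, window size and release accuracy are arbitrary illustrative values.

```python
# Minimal staleness check: compare recent accuracy to the accuracy the
# model had at release. Threshold and window size are illustrative.
from collections import deque

RELEASE_ACCURACY = 0.90     # measured at deployment time (assumed)
ALERT_THRESHOLD = 0.80      # flag retraining below this level (assumed)
recent = deque(maxlen=500)  # rolling window over the last 500 predictions

def record_outcome(predicted, actual):
    recent.append(predicted == actual)
    if len(recent) == recent.maxlen:
        accuracy = sum(recent) / len(recent)
        if accuracy < ALERT_THRESHOLD:
            print(f"Accuracy fell to {accuracy:.2f} "
                  f"(release level {RELEASE_ACCURACY:.2f}): schedule retraining")
```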

Our approach: Applied AI

From our experience with data science projects and AI projects, we have distilled an approach that significantly increases our likelihood of success. We call it “Applied AI”. The underlying idea is very simple: start small and fail fast. Our starting point is a company’s or an operational unit’s interest in AI. There is a prevailing idea or feeling that AI can mean something, but what and how is still unclear. In a number of steps, we get from that point to the end point: an AI solution in production that delivers value. These are the steps.

  1. Find use cases. Use cases are specific problems that can be tackled with AI or data: for example, increasing production efficiency, predicting maintenance needs or properly reconciling inventory. The use cases are generated by the business (for example, in a workshop) in the presence of a data scientist. The business then ranks the use cases by expected value, and the data scientist ranks them by feasibility. The trick is to find the optimal use case: of the technically feasible use cases, which one delivers maximum value if everything runs smoothly?
  2. Define the problem. We get to work on the first use case with a quick scan. Using workshops and data analysis, we try to answer the questions of what the problem is, why it needs to be solved, and what its resolution will deliver.
  3. Identify the prerequisites for a good solution to the problem. Questions that come up include: what exactly needs to be predicted? What is the minimum accuracy that the model must have to deliver value? How will the model be used in production? This step provides clear requirements with which a data scientist can get started.
  4. Data assessment. How does the available data compare to the desired data? Can any gap between the two be filled, or should we go back a step and adjust our ambitions? Is there enough data, can it be accessed, and what about its quality? (A minimal profiling sketch follows this list.)
  5. Find out if the problem is learnable. Can the data scientist create a super-simple model that outperforms a naive baseline, such as always predicting the most common outcome? If so, this shows that the problem can potentially be solved using machine learning. If not, we need to see whether more data is required or the criteria should be revised. (See the baseline check after this list.)
  6. Proof of value. In this step, a team tries to create the best possible model in six to eight weeks. We look at how far we can push the accuracy with different algorithms, by adding additional data, and so on. The end result is a prototype model that is ready to be put into production, and we determine its objective performance. If it at least meets the performance requirements established in the quick scan, we close the cycle and move on to the integration phase. We run through the use cases in this way until we see good performance and decide to put a model into production.
  7. Integration/production. In this final step, we put the model into production and incorporate it into existing or new systems.
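
Step 4 often starts with a quick profile of volume, completeness and basic quality. A minimal pandas sketch, in which the file name, and thus the data, is entirely assumed:

```python
# Quick data profile for step 4: volume, completeness, basic quality.
import pandas as pd

df = pd.read_csv("use_case_data.csv")  # hypothetical extract

print(f"rows: {len(df)}, columns: {len(df.columns)}")
print("missing values per column (fraction):")
print(df.isna().mean().sort_values(ascending=False))
print(f"duplicate rows: {df.duplicated().sum()}")
```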
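
Step 5 can be made concrete with a baseline comparison: a deliberately simple model against a dummy that always predicts the most common outcome. A clear gap in favor of the simple model suggests the problem is learnable. The data below is a synthetic stand-in for a real use case.

```python
# Learnability check (step 5): does a simple model beat a naive baseline?
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, random_state=0)  # stand-in data

baseline = DummyClassifier(strategy="most_frequent")  # always the majority class
simple_model = LogisticRegression(max_iter=1000)      # deliberately simple

baseline_score = cross_val_score(baseline, X, y, cv=5).mean()
model_score = cross_val_score(simple_model, X, y, cv=5).mean()

print(f"baseline: {baseline_score:.2f}, simple model: {model_score:.2f}")
# A clear gap in favor of the simple model suggests the problem is learnable.
```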

Centralized data platform

It’s important to note that you need different roles in each step: for example, a consultant, a data scientist and a data engineer. Also, don’t underestimate the work involved in bringing all the necessary data together and transitioning from one step to the next. Centralizing all the data within an enterprise and making it easily accessible is invaluable when it comes to accelerating this approach, and its value extends well beyond that. We know from our experience with clients that building such a centralized platform is costly and time consuming: you have to think about the architecture, security, scalability, which standards to use, and so forth. It’s complicated, especially if you don’t yet know how the platform will ultimately be used in the context of data science and AI.

Based on these experiences, we have built such a centralized data platform: our Data Hub solution. It can be deployed in the cloud in a matter of weeks. Data Hub follows the cloud provider’s reference architecture and is secure by design. Using the cloud has many advantages: it is easy to scale up, and, importantly for the industrial sector, the transition from innovation to production and integration is easier in the cloud. Data Hub uses standard elements from the cloud provider’s toolkit, so we know the components are optimized and well tested. In addition, it’s often cost-effective, because you only pay for what you use.