Numerous AI techniques are being used as part of the ongoing Industry 4.0 revolution, enabling new capabilities such as predictive maintenance, improved supply chain management, predictive analytics, and more.
In predictive analytics, a production line can use different classification algorithms to detect—and even predict—which unit is likely to fail at the end-of-line (EOL) tester before it reaches that stage.
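As a minimal sketch of this idea, the snippet below trains a binary classifier on synthetic in-process measurements and flags a unit as a likely EOL failure before it reaches the tester. The feature names, nominal values, and thresholds are all illustrative assumptions, not measurements from any real line.

```python
# Minimal sketch of EOL failure prediction as binary classification.
# Feature names and thresholds are hypothetical stand-ins for real
# in-process test measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic in-process measurements for 1,000 units:
# a voltage reading and a solder temperature (both assumed).
X = rng.normal(loc=[3.3, 245.0], scale=[0.1, 5.0], size=(1000, 2))

# Units whose measurements drift far from nominal are labeled EOL failures.
y = ((np.abs(X[:, 0] - 3.3) > 0.15) | (np.abs(X[:, 1] - 245.0) > 8.0)).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Flag a new unit *before* it reaches the EOL tester.
new_unit = np.array([[3.1, 246.0]])  # voltage well off nominal
print("predicted failure:", bool(model.predict(new_unit)[0]))
```

In a real deployment the labels would come from historical EOL test results rather than a synthetic rule, but the shape of the problem is the same: measurements taken early in the line, a pass/fail label from the end of it.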
And the results speak for themselves: reduced time, less money invested in producing faulty units, and more time spent manufacturing high-quality units. This technique directly lowers your “cost of failure.”
Sounds pretty good, right? Shouldn't every production line have AI-based algorithms integrated into it, predicting faulty units at a very early stage? While the answer is yes, it's not that simple. A recent study conducted by McKinsey & Company shows that about 70% of the semiconductor companies that have tried harnessing AI capabilities for their needs have failed to extract significant value from it. Moreover, less than a third of the companies McKinsey surveyed have managed to roll out a large-scale AI-based solution.
Why is that? And how can you make sure your next AI endeavour will be fruitful?
Focusing on the previous example of predictive analytics, the idea of directly integrating a prediction model into the production cycle may seem simple enough, but in reality it takes a lot of time. Meanwhile, there's a growing need to deploy these prediction models quickly, to keep pace with shrinking development cycles and increasing product complexity in the real world.
Slow deployment of these algorithms prevents a manufacturer from scaling a prediction model to numerous production lines, making the entire solution unsustainable.
But why does it take so long to build a model? And what can we do about it?
First, data scientists need lots of data to build a reliable classification model that can predict faulty units.
Gathering enough data takes time and depends on manufacturing volume (higher production volume means less time needed to gather data). Obtaining the necessary data to train the model can take a couple of months. Moreover, a good data scientist will tell you they need a sufficient number of faulty units to train the model properly, which can delay the process even further, since faulty units are, by definition, scarce on any functioning production line.
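A quick back-of-envelope calculation shows why failure scarcity dominates the data-gathering timeline. All the numbers below are illustrative assumptions, not figures from any real line.

```python
# Back-of-envelope estimate of how long it takes to accumulate enough
# failing units for training. Every number here is an assumption.
import math

units_per_day = 2_000    # production volume (assumed)
failure_rate = 0.01      # 1% of units fail at EOL (assumed)
failures_needed = 500    # minimum failing examples to train on (assumed)

failures_per_day = units_per_day * failure_rate
days_needed = math.ceil(failures_needed / failures_per_day)
print(f"~{days_needed} production days to gather enough failures")
```

Note that healthy units accumulate far faster than the rare failures, so the failure count, not the total unit count, sets the schedule.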
Second, even once you have enough data, the data scientist needs time to create a model from it. Creating a model requires cleaning and processing the data, configuring the model's parameters to fit your use case, training the model, and testing its expected performance in real time.
This process is iterative since it requires retraining the model repeatedly until reaching a sufficient result, and that can take weeks (if not more) to complete.
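The iterative loop described above can be sketched as a simple train/evaluate/retrain cycle against a held-out set. The data, the model, the candidate configurations, and the 0.95 accuracy target are all illustrative assumptions.

```python
# Sketch of the iterative train/evaluate/retrain cycle: keep trying
# configurations until the held-out metric clears a target.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic pass/fail label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

target, best = 0.95, 0.0
# Each iteration stands in for one "retrain with a new configuration"
# round; here we only vary the regularization strength C.
for C in (0.01, 0.1, 1.0, 10.0):
    model = LogisticRegression(C=C).fit(X_train, y_train)
    best = max(best, accuracy_score(y_test, model.predict(X_test)))
    if best >= target:
        break  # good enough: stop iterating

print(f"best held-out accuracy: {best:.3f}")
```

On real manufacturing data each round can involve re-collecting or re-labeling data, which is what stretches this loop from minutes of compute into weeks of calendar time.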
Third, AI models perform best on data that "behaves" similarly to the data they were trained on. Production lines, however, can differ significantly from one another: they may run different tests, use slightly different materials or even different machines, and that's only a partial list. These differences, among others, produce data that behaves differently (for example, one line may have a different yield rate simply because it uses a different material). In practice, this means that if you want good prediction results, you need a separate model for each production line, since each line behaves differently. Multiply the process we've discussed so far by the number of production lines and you get a never-ending project.
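The per-line effort can be sketched as training one model per production line, where each line's data has a shifted distribution. The line names and the "shift" standing in for material or machine differences are hypothetical.

```python
# Sketch: one trained model per production line, because each line's
# data distribution differs. Line names and shifts are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_line_data(shift):
    """Synthetic data for one line; `shift` mimics a material/machine difference."""
    X = rng.normal(loc=shift, size=(500, 3))
    y = (X[:, 0] > shift).astype(int)  # failure boundary moves with the line
    return X, y

lines = {"line_A": 0.0, "line_B": 1.5, "line_C": -0.7}

# One model per line -- the per-line effort the text multiplies out.
models = {name: LogisticRegression().fit(*make_line_data(shift))
          for name, shift in lines.items()}
print(sorted(models))
```

A model trained on one line's data would misplace the decision boundary on another line, which is exactly why the whole data-gathering and training process repeats per line.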
Creating a real-time prediction model is possible, but it can take an unreasonable amount of time if you want to deploy this solution across dozens of different production lines.
What is time-to-model? Time-to-model is a crucial KPI that measures how long it takes a vendor to deploy a production-grade model, and it can help you better assess how long it would take to scale your next AI-based solution. It covers the time required to gather the data, build the model itself, customize its configuration, train the model, review its expected performance, retrain it to improve that performance, and integrate it into your production line.
Technically speaking, time-to-model is the total elapsed time across all of these stages, from data gathering through integration.
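As a simple illustration, time-to-model can be computed as the sum of the stage durations listed above. The stage names follow the text; the week counts are purely illustrative assumptions.

```python
# Time-to-model as the sum of the stages listed above.
# The week counts are illustrative assumptions, not real benchmarks.
stages_weeks = {
    "gather_data": 8,
    "build_model": 2,
    "customize_configuration": 1,
    "train_model": 1,
    "review_performance": 1,
    "retrain_to_improve": 3,
    "integrate_into_line": 2,
}
time_to_model = sum(stages_weeks.values())
print(f"time-to-model: {time_to_model} weeks")
```

Framing it this way makes the scaling problem concrete: every stage you shorten (or automate away) comes straight off the total, and the total repeats for every production line.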
Reducing your time-to-model shortens the overall time to a production-ready, scalable solution for your manufacturing process.
As mentioned, the data-gathering stage depends heavily on your production volume and on your IT teams' ability to collect this data. While this can be a tedious process, it is a mandatory step for creating good prediction models.
The remaining steps are vendor-specific and vary significantly from vendor to vendor. Creating and training a model could take several days to complete if the vendor you choose is not accustomed to creating manufacturing AI models, or if your data science team needs to build one from scratch.
One of Vanti's main goals is to reduce time-to-model, allowing customers to not only create and use real-time prediction models in their production line but also to be able to scale this solution across all lines, fast.
We have put together a model creation process that is easy and intuitive, so intuitive that it doesn't even require a data scientist. But most importantly, it's fast.
Our time-to-model is much shorter than that of other solutions delivering the same outcome. We have also made sure that the process of reviewing and optimizing the model until it satisfies your needs is baked into the solution itself, making the process even faster.
Are you interested in learning more about our model creation process? Get in touch with us to see how our low time-to-model can create incredible value for your business.