FAQ

It’s very easy! The best way to start is to schedule a demo with us.

Vanti analyzes your data to understand the underlying model and produce actionable insights, including:

  • Root cause analysis
  • Opportunities to improve performance
  • The most important parameters to zero in on
  • Ways to reduce testing scope and time
  • Accurate, data-driven predictive models
  • Interactions and relationships between parameters and features within your data

Vanti is designed specifically for operations professionals in digitized manufacturing facilities.

No!
Vanti is your personal automated data scientist, ready to answer any question you have about your data and to provide actionable insights. Just upload the data and Vanti will do the rest.

A model is the relationship between one parameter (the “KPI”) and all other parameters.

Data types of all kinds (numbers, text, binary, etc.) are fully supported.

For a deployable model, we usually recommend a dataset from a repeatable, established process or manufacturing line.

For quick analysis and insights, you can use even small datasets from batch testing.

Vanti is a full SaaS solution.

Data coming from production lines changes frequently, due to changes in the tests being conducted, changes in the material itself, and other external factors. To address this, Vanti offers its customers “model resiliency”: every time the data on the production line changes, Vanti tests the deployed model to make sure it can still provide accurate predictions and that its performance has not deteriorated. If so, Vanti keeps making real-time predictions; otherwise, it halts real-time predictions and suggests creating a new model that fits the new data structure.

Vanti models not only make real-time predictions; they also include built-in fail-safe mechanisms that ensure model performance stays at its best. Vanti constantly monitors each model’s performance and proactively defends against degradation by detecting it, alerting the user, and halting predictions if necessary.

Models are created through a self-service platform and require no coding. They can be deployed as supervised or unsupervised models. All models are optimized and reviewed before deployment.

For tabular data, you should upload a file where each row represents a single unit and each column represents either a test result or a measurement that was recorded for every unit. 
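As a sketch of that layout, a tabular upload might look like the following (the column names and values here are purely hypothetical, not a required schema; any tests or measurements recorded per unit will do):

```csv
unit_id,test_voltage,test_temp_c,assembly_time_s,pass_fail
U-0001,3.31,41.2,118,pass
U-0002,3.28,44.9,121,pass
U-0003,3.05,51.3,134,fail
```

Each row is one unit, and each column after the identifier is a test result or measurement recorded for every unit.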

For image-based data, you will need to upload a set of images for the model to learn from so that it can detect anomalies.

Generally speaking, you would need to have at least 1000 units in the dataset, both for tabular and image-based data.

For tabular data, the rule of thumb is to have 10 units recorded for each column. For example, if you have 100 columns in the dataset, you should have at least 1,000 units (i.e. rows) recorded.
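The two sizing guidelines above (at least 1,000 units overall, and roughly 10 units per column) can be combined into a quick back-of-the-envelope check. This is only an illustration of the stated rules of thumb; the function name is ours, not part of Vanti:

```python
def min_recommended_units(n_columns: int) -> int:
    """Rough minimum number of units (rows) for a tabular dataset.

    Combines the two rules of thumb stated above:
    - at least 1,000 units overall
    - roughly 10 recorded units per column
    """
    return max(1000, 10 * n_columns)

# 100 columns: the 10-per-column rule lands exactly on the 1,000-unit floor
print(min_recommended_units(100))  # 1000
# 250 columns: the per-column rule dominates
print(min_recommended_units(250))  # 2500
```

For narrow datasets (under 100 columns) the 1,000-unit floor is the binding constraint; for wider ones, the per-column rule takes over.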

All of the data, insights, and configurations for your prediction models are accessible through our SaaS platform.