Eitan Netzer

Why "Plug & Play" AI Solutions Fail to Deliver High-quality Results

Updated: Oct 24, 2021


Many AI suppliers offer plug & play solutions that, in theory, only need to be connected to a company's database to immediately deliver business insights from the data, without any model training or data labeling. But is there really a solution out there that is "one size fits all"? What about the variance in data structures, prediction models, and even organizational cultures that characterizes each company? The notion that one AI solution can be integrated into your organization and work like magic without any adjustment is tempting, but far from realistic.


There are two types of products that make this kind of "plug & play" promise:


  • Some providers promise an AI-based plug & play product for a particular domain, for example, a product that predicts system malfunctions or forecasts customer churn. But assuming that simply connecting the AI product to the database will do the job is a mistake, and users are in for disappointment. Every organization operates differently, and each business has different data and a different database structure. Although the solution's concept and design might fit various organizations, what worked well for one organization will not necessarily suit another without training on that organization's specific data. Because data varies across organizations, the solution must be tested and adjusted against each organization's data.


  • Other AI suppliers promise customers that they will be able to build their own AI solution with AutoML. The customers provide all their data, and the computer independently creates a vast number of model permutations or variations. The computer then examines all the candidate models and chooses the one that best fits the customer's needs (a simplified sketch of this idea appears right after this list).
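
To make the promise concrete, here is a minimal sketch of what such an automated model sweep can look like, using scikit-learn. The candidate models, the ROC-AUC metric, and the synthetic dataset are assumptions chosen for illustration, not any vendor's actual search strategy.

```python
# Minimal sketch of an AutoML-style model sweep (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for "the customer's data".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# A handful of candidate models; a real AutoML tool would generate far more.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score every candidate automatically and keep the best one.
scores = {
    name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    for name, model in candidates.items()
}
best_name = max(scores, key=scores.get)
print(scores, "-> selected:", best_name)
```

The catch is that the selection is only as good as the data and the metric: with noisy labels or leakage, the automatically chosen "winner" can win for the wrong reasons.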


The problem with this approach is that it is not much better than tossing a coin. Even when the mechanism works perfectly, it can be applied only to simple problems or completely "clean" data – an unlikely situation. Since model verification is automatic and the data is never entirely "clean", incorrect results during the model verification and evaluation stages are inevitable.



In addition, AutoML requires tremendous computing power, resulting in high electricity consumption that contributes to carbon emissions, pollution, and global warming. In an article published in MIT Technology Review, Karen Hao reported that training a single AI model can produce carbon emissions equivalent to those of five cars over their entire lifetimes.



Integrating the Human Aspect into the Process Significantly Improves the Results

In both types of products, domain-specific plug & play and AutoML, a domain expert needs to be part of the decision-making process - a Human in the Loop (HITL). The moment an actual human being is part of the process, it becomes much easier to catch obvious mistakes made by the machine. The domain expert usually examines the model and tests how well it works during the labeling process or in production.


To fully enjoy the benefits of AI, it is recommended to use Interactive HITL tools. This way, the domain expert examines the results during model development and verifies that the model works. In an Interactive HITL mode of operation, the domain expert is involved in each data change, and the manipulations they make significantly improve the output; a simplified sketch of such a loop is shown below.
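
As an illustration only, the sketch below shows one common way to wire a domain expert into the loop: the model's least confident predictions are routed to the expert for labeling, and the model is retrained on the growing labeled set. The expert_review stub, the batch sizes, and the synthetic data are assumptions, not a description of any specific product.

```python
# Minimal sketch of an interactive human-in-the-loop review cycle (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y_true = make_classification(n_samples=5000, n_features=15, random_state=1)

labels = np.full(len(X), -1)      # -1 means "not yet labeled"
labels[:200] = y_true[:200]       # small seed set labeled up front

def expert_review(indices):
    """Stand-in for a domain expert labeling the selected rows."""
    return y_true[indices]        # in this simulation we simply reveal the true labels

model = LogisticRegression(max_iter=1000)
for round_ in range(5):
    known = labels != -1
    model.fit(X[known], labels[known])
    proba = model.predict_proba(X[~known])[:, 1]
    # Route the rows the model is least sure about to the expert for review.
    uncertain = np.argsort(np.abs(proba - 0.5))[:100]
    to_label = np.flatnonzero(~known)[uncertain]
    labels[to_label] = expert_review(to_label)
    print(f"round {round_}: labeled rows = {(labels != -1).sum()}")
```

Because the expert only sees the rows the model is uncertain about, obvious machine mistakes surface quickly instead of being discovered after deployment.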


CRISP-DM Processes Can Be Long and Frustrating

According to the CRISP-DM methodology (Cross-Industry Standard Process for Data Mining), each stage in creating an analytical prediction model is performed separately:

  • Understanding the problem

  • Understanding the data

  • Feature engineering

  • Model development

  • Model evaluation

But in reality, model creation is an iterative process that goes back and forth between these stages, leading to very long procedures. Since data cleaning and model building depend on computing time and have an iterative structure, they cannot easily be sped up. As a result, CRISP-DM processes take a very long time to complete, and only after completion is it possible to evaluate how much the model has improved. If a mistake is found, the whole process must be started from scratch, which is even more time-consuming, expensive, and frustrating; the sketch below illustrates this structure.
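
To make the structural point visible, here is a toy sketch of such a sequential run, with placeholder stage functions standing in for the real work: evaluation happens only at the very end, and a failure there means repeating the whole pass.

```python
# Schematic sketch of a sequential CRISP-DM-style run (placeholder stages only).
import random

def understand_problem():
    print("understanding the problem")

def understand_data():
    print("understanding the data")
    return "raw data"

def engineer_features(data):
    print("engineering features from", data)
    return "features"

def develop_model(features):
    print("training a model on", features)
    return "model"

def evaluate_model(model):
    print("evaluating", model)
    return random.random() > 0.5   # pretend evaluation sometimes finds a mistake

attempts = 0
while True:
    attempts += 1
    understand_problem()
    data = understand_data()
    features = engineer_features(data)
    model = develop_model(features)
    if evaluate_model(model):
        break                      # success is known only after a full pass
print(f"finished after {attempts} full pass(es)")
```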


CoreISP-DM – Extreme HITL for Real-time Model Updates

At CoreAI, we have developed an efficient, fast, and low-cost method for building prediction models. With the CoreISP-DM methodology, we update the model throughout the entire development cycle upon every action or manipulation on a row of data, such as cleaning, deleting, or adding data. We change the conventional paradigm and apply Extreme Human in the Loop: all the parameters in the cycle are updated, and the model is updated with them in real time. There is no need to start all over again if a mistake is discovered, and at any moment we can get an up-to-date output. This methodology lets us know immediately whether the model development is on track.
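
The sketch below illustrates the general idea of updating a model incrementally as data is edited, using scikit-learn's partial_fit; it is a simplified illustration under assumed data and parameters, not CoreISP-DM's actual implementation.

```python
# Minimal sketch of incremental (real-time) model updates on data edits.
# Illustrative only: the data stream and callback are assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])

def on_data_edit(rows, labels):
    """Called whenever rows are cleaned, corrected, or added; the model updates immediately."""
    model.partial_fit(rows, labels, classes=classes)

# Simulate a stream of edits: each batch of changed rows updates the model.
for step in range(10):
    rows = rng.normal(size=(20, 5))
    labels = (rows[:, 0] + rows[:, 1] > 0).astype(int)
    on_data_edit(rows, labels)
    # At any moment we can ask for an up-to-date output.
    print(f"step {step}: accuracy on this batch = {model.score(rows, labels):.2f}")
```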


Familiar Environment and Faster Results

Recently, a company approached us with the following challenge: its engineers can detect errors in the data and graphs generated by a measurement system but have no way of updating the data and the model when they find an error. We offered to build them a platform with the same look and feel as their existing work environment, including the same graphs, where they can also apply corrections to the model. This solution provides smart data tagging capabilities, and the prediction model is constantly updated.


Typically, the domain expert would have to go over the data and tag each row in a costly and lengthy process prone to mistakes. But when the model is constantly reviewed and updated by our system, the system knows how to pick the most relevant samples for statistical inference and draw accurate conclusions. Instead of working with millions of records, the model can make predictions based on only 50,000 records. The system recognizes when the model has reached the desired quality and is no longer changing, and then completes the model development process, saving time and costs; a simplified sketch of this idea follows.
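
As a rough illustration of training on a selected subset and stopping once the model stops changing, the sketch below grows the training sample with the rows the current model is least confident about and halts when the coefficients converge. The sampling rule, the convergence threshold, and the synthetic data are assumptions, not CoreAI's actual selection logic.

```python
# Simplified sketch of subset sampling with a convergence-based stop (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200_000, n_features=20, random_state=2)

model = LogisticRegression(max_iter=1000)
prev_coef = None
rng = np.random.default_rng(2)
sample_idx = rng.choice(len(X), size=5_000, replace=False)  # small initial sample

for round_ in range(20):
    model.fit(X[sample_idx], y[sample_idx])
    if prev_coef is not None:
        change = np.linalg.norm(model.coef_ - prev_coef)
        print(f"round {round_}: coefficient change = {change:.4f}, samples = {len(sample_idx)}")
        if change < 1e-2:          # model has stopped changing: finish early
            break
    prev_coef = model.coef_.copy()
    # Add the rows the current model is least confident about ("most relevant" samples).
    remaining = np.setdiff1d(np.arange(len(X)), sample_idx)
    proba = model.predict_proba(X[remaining])[:, 1]
    sample_idx = np.concatenate([sample_idx, remaining[np.argsort(np.abs(proba - 0.5))[:5_000]]])
```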


Organizations should exercise discretion when it comes to plug & play AI products. Combining the human aspect with real-time machine learning and model updates will likely provide more accurate prediction models and better data analytics results.


To learn more about the CoreISP-DM method and how it can help your organization, please contact us via our home page contact form.

