August 5, 2022

The Corporate Risk of Artificial Intelligence


Michael Gonzales

Implementing Artificial Intelligence (AI) has become a necessity for many organizations that want to remain competitive. However, as companies rush to integrate AI into their processes, many do so poorly or without sufficient foresight.

High failure rates for deploying AI solutions are reported by several sources, for example:

  • Up to 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them.
  • 90% of machine learning solutions are never put into production.

These statistics are alarming, even when you consider that many AI projects are research efforts, hypothesis tests, or one-off projects that were never expected to be put into production.

This bias in data and algorithms not only results in erroneous outcomes; it is also driving new legislative agendas such as the Artificial Intelligence and Data Act (AIDA). AIDA gives individuals the right to inquire about the reasoning behind any forecast, suggestion, or decision that may impact them.

Common models leveraged in many AI solutions are built, trained, and validated using historical data where we already know the outcome we wish to predict. Leveraging this historical data exposes additional challenges and associated risks for the organization, such as bias.

Illustrated in Figure 1 is a common, high-level execution flow of an AI model. A model is built using historical data, deployed, executed using current and relevant data, and the outcome of the model is leveraged for, hopefully, positive value. However, the yellow highlights show some of the risks of using the company’s data with regard to bias, including:

  • Risk associated with the training data set. There are two distinct risks:

    - Risk of Sampling Bias – the analysts who select the data used to build the model must properly sample the company’s database(s). If the training data set is not representative of the company’s relevant total population, the model will underperform and/or be trained with bias because of a poorly conducted sample. This is a self-inflicted error that can compromise model outcomes and even create legal exposure for the organization under legislation like AIDA. (A simple representativeness check is sketched after this list.)

    - Risk of Data Poisoning – a current threat to organizations comes from nefarious individuals or agencies who deliberately poison the training data set in order to influence a model’s outcome. This can have a devastating impact on an organization that relies on an automated process incorporating the outcome of an AI model executed thousands of times a day, or more.
  • Risk associated with Population Bias. Unfortunately, the entire database population may expose an AI model to bias simply because of organizational processes, culture, and other business practice tendencies. In this case, even a well-executed sample will have bias in the training data set simply because that bias is real in the total population.
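
To make the sampling-bias risk concrete, the following is one minimal way to check whether a training sample is representative of the full population on a single categorical feature, using a chi-square goodness-of-fit test. The column name, data frames, and significance level are hypothetical; this is an illustrative sketch, not a prescribed method.

```python
# Minimal sketch: check that a training sample's category frequencies are
# consistent with the full population's, using a chi-square goodness-of-fit
# test. The "region" column and 0.05 significance level are hypothetical.
import pandas as pd
from scipy.stats import chisquare

def sample_is_representative(population: pd.Series, sample: pd.Series,
                             alpha: float = 0.05) -> bool:
    """Return False when the sample's distribution deviates significantly
    from the population's (a sign of sampling bias)."""
    pop_freq = population.value_counts(normalize=True)             # expected proportions
    observed = sample.value_counts().reindex(pop_freq.index, fill_value=0)
    expected = pop_freq * len(sample)                               # expected counts
    _, p_value = chisquare(f_obs=observed, f_exp=expected)
    return p_value >= alpha

# Hypothetical usage:
# if not sample_is_representative(customers_df["region"], training_df["region"]):
#     print("Training sample may not represent the population; resample before training.")
```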

To help mitigate these data risks, the Machine Learning Operations (MLOps) approach is to monitor both the data input consumed by the model and the model’s output. This monitoring is represented by the green circles in Figure 1. If either fails its respective validation process, the execution is canceled and a rebuild/retrain process is triggered automatically.
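
A minimal sketch of that gated flow is shown below. The function and check names are placeholders rather than any particular MLOps framework’s API; the point is simply that scores never reach downstream consumers unless both the input and the output pass validation.

```python
# Sketch of the Figure 1 flow: validate input, score, validate output, and
# trigger a rebuild/retrain when either gate fails. All names are placeholders.

def run_scoring(batch, model, input_checks, output_checks, trigger_rebuild):
    # Gate 1: input validation (data drift, poisoning thresholds, etc.)
    if not all(check(batch) for check in input_checks):
        trigger_rebuild(reason="input validation failed")
        return None  # execution canceled; downstream keeps its prior results

    scores = model.predict(batch)

    # Gate 2: output validation (score distribution, record counts, etc.)
    if not all(check(scores) for check in output_checks):
        trigger_rebuild(reason="output validation failed")
        return None

    return scores  # only validated scores reach the calling application
```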

MONITOR DATA

AI models often experience performance decline over time. This is frequently caused by data drift: unexpected changes to the structure, scope, or meaning of the data. Drift can break data pipelines and cause the model’s accuracy to degrade. Consequently, the data required by an AI solution is examined first so that data drift can be detected and, if necessary, an automatic optimization process (essentially rebuilding, retraining, and redeploying the model) can be triggered. Poisoned data can also trip the expected input thresholds and cause a rebuild to initiate.

Figure 1. High-Level Model Execution Flow

Some techniques used to monitor data input include (a simple profiling sketch follows the list):

  • Mean (continuous features)
  • Median (continuous features)
  • Mode
  • Minimum
  • Maximum
  • Percentage of null, blank, or missing values (continuous and categorical features)
  • Number of distinct values (categorical features)
  • Occurrence count of distinct values (categorical features)
  • Feature containing all missing values
  • Feature containing all the same value
  • Categorical feature containing completely distinct values across the whole population
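
One possible way to script the checks above is to profile each incoming batch and compare a few of its statistics against a baseline captured from the training data. The sketch below is illustrative only: the feature lists, the baseline dictionary, and the 10% relative tolerance on the mean are assumptions, not fixed rules.

```python
# Illustrative input-data monitoring: profile a batch of features and flag
# drift when key statistics move too far from a training-time baseline.
# Feature names, the baseline structure, and the 10% tolerance are hypothetical.
import pandas as pd

def profile_batch(df: pd.DataFrame, continuous: list, categorical: list) -> dict:
    stats = {}
    for col in continuous:
        stats[col] = {
            "mean": df[col].mean(),
            "median": df[col].median(),
            "min": df[col].min(),
            "max": df[col].max(),
            "pct_missing": df[col].isna().mean(),
        }
    for col in categorical:
        stats[col] = {
            "n_distinct": df[col].nunique(dropna=True),
            "pct_missing": df[col].isna().mean(),
            "all_missing": df[col].isna().all(),
            "constant": df[col].nunique(dropna=True) <= 1,
        }
    return stats

def input_drift_detected(current: dict, baseline: dict, tolerance: float = 0.10) -> bool:
    """Flag drift when any continuous feature's mean shifts more than
    `tolerance` (relative) from its baseline value."""
    for col, base in baseline.items():
        if "mean" in base and base["mean"]:
            shift = abs(current[col]["mean"] - base["mean"]) / abs(base["mean"])
            if shift > tolerance:
                return True
    return False
```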

MONITOR OUTPUT

Monitoring the output of an AI solution is done at two distinct levels; Figure 1 illustrates only the first, real-time monitoring.

  • Real-time – When an AI solution is executed, the results are returned to the calling application. The outcomes are evaluated by a monitoring process before they are made available to the business application or downstream processes. If the results do not meet specific standards, they are not made available; the process is simply canceled and an automatic optimization is triggered. The rationale is simple: it is better for the customer to work with outdated data than with bad data.[5]
  • Periodically – On a scheduled basis, the organization should examine the overall performance of an AI solution’s output. The purpose of this analysis is to evaluate the results over time in order to mitigate, for example, model bias.

Some scripts used for monitoring an AI solution’s output include (a sketch of the distribution comparison follows the list):

  • Record count in the feature table(s)
  • Record count in the trigger population
  • Record count of the generated scores
  • Mean value of the scores
  • Median value of the scores
  • Mode value of the scores
  • Score minimum
  • Score maximum
  • Score standard deviation
  • Bucketed counts of scores
  • Kolmogorov-Smirnov test (KS-test) comparing the latest scores with those of the training set
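
As one way to run the last check in this list, the sketch below summarizes the latest scores and compares their distribution with the training-set scores using a two-sample Kolmogorov-Smirnov test from scipy.stats. The ten score buckets and the 0.05 significance level are assumptions for illustration.

```python
# Illustrative output monitoring: summarize the latest scores and compare
# their distribution with the training-set scores via a two-sample KS test.
# The bucket count and 0.05 significance level are assumed, not prescribed.
import numpy as np
from scipy.stats import ks_2samp

def monitor_scores(latest_scores, training_scores, alpha: float = 0.05) -> dict:
    latest = np.asarray(latest_scores, dtype=float)
    summary = {
        "count": latest.size,
        "mean": latest.mean(),
        "median": np.median(latest),
        "min": latest.min(),
        "max": latest.max(),
        "std": latest.std(),
        "bucketed_counts": np.histogram(latest, bins=10)[0].tolist(),
    }
    ks_stat, p_value = ks_2samp(latest, np.asarray(training_scores, dtype=float))
    summary["ks_statistic"] = ks_stat
    summary["score_drift_detected"] = bool(p_value < alpha)
    return summary

# A True "score_drift_detected" flag would typically feed the periodic review
# or trigger the automatic optimization (rebuild/retrain) described above.
```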

So, why this high level of failure? There are at least two common challenges:

  • The required skills extend beyond data science to include, for example, data, software, and systems engineering.
  • AI solutions are susceptible to degradation over time.

Many AI initiatives focus on the model itself, built by the data scientist. The attention, however, should be on the skill and experience necessary to properly craft models, effectively integrate them into complex solution pipelines, and monitor their performance to ensure the expected value and impact are being realized. This is not a trivial task; it requires a team of professionals whose skills extend well beyond data science.

