Several years ago, I was involved in defining an artificial intelligence (AI) system to improve help desk ticket resolution for a large IT provider. The provider received hundreds of tickets per hour across a global customer base. Leadership identified a key question for the AI system to answer: given a new IT problem reported by a user, what is the first resolution the help desk should attempt?
Initially, the IT provider wanted us to generate a recommended set of actions to resolve new problems by mining previous cases and solutions. After several interviews and discussions, we identified a new set of related challenges. Despite having a global network of employees solving similar tickets at the same time, employees were having difficulty properly labeling new tickets—and their successful solutions—consistently. This labeling challenge made those tickets not only difficult to find, but sometimes invisible to other help desk employees at the time when they would be most helpful—solving a nearly identical ticket occurring in the same time period.
We also discovered it was difficult to measure outcomes of recommendations the employees made. Sometimes tickets were not closed correctly or lacked important metadata to understand if all the recommended actions were necessary and correct. This experience taught us a vital lesson about what dictates the success of AI projects.
Develop a System of Measurement
Before joining Maana as the chief scientist, I led the Knowledge Discovery Lab at the General Electric Global Research Center, focusing on knowledge graphs and machine learning. As a leader in AI research and development, I am often asked to explain AI and the ways it provides value to large industrial companies. Despite many differences between companies, the common goal of this type of AI remains the same: to improve productivity within an organization by allowing employees to make better, more informed decisions.
Conversations about AI often focus on the potential and certainty of an outcome the AI solution can deliver, such as "decrease customer IT tickets by 10%" or "decrease the time it takes to close IT tickets by 20%." However, these overall business goals may not align directly with the kind of measurement the AI system needs to be successful, such as "what specific actions did a customer follow to resolve an IT issue, and which of those were successful?"
The goal was an AI system that improved the IT ticket resolution process and raised both customer and employee satisfaction. The initial design recommended steps to resolve an issue, but the success of that system depended in part on measuring the outcomes of those recommendations. We could not find an immediate path to measuring those outcomes directly, which left the AI system, as designed, unable to improve the ticketing process.
Being successful in AI applications requires solving this joint problem of finding the most effective outcome (for the business and customers) and of obtaining the data and measurement capability needed by an AI approach. Without the ability to measure the recommendation outcomes, an AI system will fail in the long term or become very costly to maintain.
Combine Subject Matter Expertise with Data
To create a better AI solution, the domain expertise of IT employees must be combined with the data the system collects. In the IT provider situation, we realized that employees held the experience and technical knowledge needed to resolve tickets, but struggled to label new tickets, and their successful solutions, consistently.
The solution was an AI system that could recommend a label, intermittently gather feedback from the employee about whether the label was valid, and use the feedback to continually improve the labeling AI system. This solution was met with positive feedback from the IT employees since it allowed them to still apply their experience and technical knowledge, while assisting with the tedious task of selecting the best label for every new ticket. It also had a proactive advantage of allowing IT engineers to see trending issues better in order to head them off, and enabling customers to search for solutions themselves.
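The recommend-then-confirm loop described above can be sketched in a few lines of code. The following is a toy illustration, not the actual system built for the IT provider: the keyword scoring, the label names, and the feedback weights are all assumptions made for the sake of the example.

```python
from collections import defaultdict

class TicketLabeler:
    """Toy label recommender: scores candidate labels by keyword
    evidence, then adjusts those weights from employee feedback."""

    def __init__(self):
        # weights[label][keyword] accumulates feedback-adjusted evidence
        self.weights = defaultdict(lambda: defaultdict(float))

    def train(self, text, label):
        # Seed the model from historical tickets with known labels
        for word in text.lower().split():
            self.weights[label][word] += 1.0

    def recommend(self, text):
        # Recommend the label whose keywords best match the new ticket
        scores = {
            label: sum(kw.get(w, 0.0) for w in text.lower().split())
            for label, kw in self.weights.items()
        }
        return max(scores, key=scores.get) if scores else None

    def feedback(self, text, label, accepted):
        # The employee confirms or rejects the recommended label;
        # that signal nudges the keyword weights up or down.
        delta = 0.5 if accepted else -0.5
        for word in text.lower().split():
            self.weights[label][word] += delta

# Seed with two historical tickets (hypothetical labels)
labeler = TicketLabeler()
labeler.train("vpn connection drops", "network")
labeler.train("password reset locked account", "access")

# Recommend a label for a new ticket, then record the employee's verdict
suggestion = labeler.recommend("user vpn drops every hour")
labeler.feedback("user vpn drops every hour", suggestion, accepted=True)
```

A production system would use a proper text classifier rather than keyword counts, but the shape of the loop, recommend, collect an accept/reject signal, update, is the same.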
Tell the AI System the Score
Imagine that instead of optimizing IT tickets, the task is to develop an application for field engineers at a large energy company. Their goal is to keep critical infrastructure working, in part by prioritizing which pieces of equipment, at specific customer locations, require inspection or repair based on maintenance schedules or predictive maintenance algorithms. In this task, the number of possible objectives and measurements increases significantly to encompass the health of equipment, the efficiency of servicing support contracts, the satisfaction and profitability of the customer, and the overall productivity and satisfaction of the field engineer.
For this scenario, like many other industrial scenarios, success is often not as easily measurable for an AI system. Overall, a company tries to make an objective decision about the success of its maintenance actions and the productivity of its employees, though the decision can still come across as subjective. Businesses develop competitive business models and processes that benefit directly from the domain knowledge and experience of employees. That experience-based knowledge allows businesses to differentiate themselves and provide ever-improving capability to customers. Thus, when developing an intelligent solution using AI, organizations must begin by considering how those experience-based decisions are actually made and measured.
Pinning those subjective and experience-based decisions down into concrete data and measurements that an AI algorithm is able to utilize can be very difficult. Therefore, the ability to tell an AI system the “score” based on a measurement capability—so it knows if it is winning or losing—is the first step toward achieving value from AI. Combining that score with feedback from users based on their experience and domain knowledge allows the AI system to improve over time.
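One way to tell an AI system the "score" is to fold hard outcome measurements and expert feedback into a single numeric signal. The sketch below is purely illustrative: the inputs, the 0–5 rating scale, and the 70/30 blend between measured outcome and expert judgment are assumptions for the example, not anything prescribed here.

```python
def resolution_score(closed, reopened, expert_rating, hours_to_close,
                     target_hours=24.0):
    """Toy score for one recommended resolution.

    Blends measurable outcomes (was the ticket closed? reopened?
    how fast?) with an employee's 0-5 rating of the recommendation.
    All weights are illustrative.
    """
    # Hard measurement: credit for closing, penalty for reopening
    base = 1.0 if closed else 0.0
    if reopened:
        base -= 0.5
    # Speed component: closing faster than target earns up to +0.5
    speed = max(0.0, min(0.5, 0.5 * (target_hours - hours_to_close) / target_hours))
    # Blend hard measurement with expert feedback (domain knowledge)
    return 0.7 * (base + speed) + 0.3 * (expert_rating / 5.0)

# Ticket closed in 12 hours, never reopened, employee rated the
# recommendation 4 out of 5
s = resolution_score(closed=True, reopened=False,
                     expert_rating=4, hours_to_close=12.0)
```

However the score is defined, the essential property is that the system can compute it from data the organization actually captures, so it knows whether it is winning or losing on every recommendation.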