Augmented analytics: the human-AI hybrid
Artificial intelligence (AI), robotics, big data and machine learning are terms that have quickly become part of the lexicon used to describe contemporary approaches to machine-enabled decision making. While these approaches have begun to have an impact on many business processes, they continue to stop well short of the more extreme claims that have been made for AI.
When Fukoku Mutual Life Insurance in Japan decided to replace 34 claims handlers with IBM’s AI, Watson, many assumed wholesale disruption of the claims process was around the corner. Every image offered up by software vendors of android-like machines preparing legal documents helped perpetuate the idea that robots were about to take over.
Instead, as the hype around AI has given way to a growing sense of realism, approaches such as augmented analytics have risen to prominence in their place. Rather than replacing human intelligence with machine intelligence, augmented analytics augments human intelligence to produce something altogether more immediate and powerful.
Augmented analytics means using machine learning and natural language processing (NLP) to automate data preparation and to distil complex data into a form that is significantly easier to interpret, allowing for greater confidence in the decision-making process. That confidence comes from the interface between the computing power of complex algorithms and the natural pattern-spotting and inference abilities of the human brain.
Traditional business intelligence work involves manually exploring and preparing data, manually testing models, finding patterns in data and sharing those patterns with business leaders. The preparation aspect of the work can take upwards of 80% of a data scientist’s time. Those who do the analysis retain the power to decide what’s important, and anything outside of their purview is left unexamined.
Compared with traditional business intelligence work, augmented analytics improves both the speed and the accuracy of analysis, and, because more data can be analysed more readily, it can also potentially reduce data bias. Data discovery and preparation are faster because algorithms automatically search for patterns, while features, models and code are selected automatically.
Several products currently exist on the market from established data insight providers such as Qlik, DataRobot, and Smarten, among others. Augmented analytics platforms remain relatively immature, especially in the legal space where analytics can provide valuable insights into the vast array of data that is available and so speed up existing workflows.
At Kennedys, we are working on a number of products within Kennedys Intelligence (KI) that fall under the umbrella of augmented analytics, all designed to help our clients use lawyers less.
One important example concerns high-volume insurance claims from road traffic accidents. Where these result in personal injury, supporting evidence is required in the form of a written report from a medical professional. To help augment human decision making, we have developed a methodology for automatically providing an estimate of the personal injury or loss value arising in respect of such claims.
This is performed by analysing the grammatical structures surrounding mentions of injuries. In the majority of cases, there are numerous mentions of injuries and recovery periods and so the problem of extracting the correct data can be difficult. However, using the principles of language structure, we have been able to create a method that with a high degree of precision can extract the correct values. We are then able to correlate these values with historical data and the ranges of values given in standard legal texts for valuing injuries. We then display this information back to the user, showing the predicted value, the extracted information, and reference to supporting legal texts.
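By way of illustration only, the extraction and valuation steps can be sketched in a few lines of Python. The report text, the pattern and the value bands below are all invented for this example; the production method analyses grammatical structure rather than relying on simple patterns like this, and real valuations come from historical data and standard legal texts.

```python
import re

# Hypothetical medical report text (invented for illustration).
report = (
    "The claimant sustained a whiplash injury to the neck with a "
    "prognosis of 9 months, and sustained a soft tissue injury to "
    "the lower back with a prognosis of 6 months."
)

# Pair each injury mention with its recovery period in months.
pattern = re.compile(
    r"sustained a ([\w ]+? injury to the [\w ]+?) "
    r"with a prognosis of (\d+) months"
)
findings = [(m.group(1), int(m.group(2))) for m in pattern.finditer(report)]

def band(months):
    """Illustrative value bands (invented, not from any legal text)."""
    if months <= 3:
        return (1000, 4000)
    if months <= 12:
        return (4000, 8000)
    return (8000, 15000)

# Display the extracted information alongside an estimated range.
for injury, months in findings:
    low, high = band(months)
    print(f"{injury}: {months}-month prognosis -> estimated £{low:,}-£{high:,}")
```

The value of this kind of pipeline is that the user sees not just a number but the extracted evidence behind it, mirroring the transparency described above.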
Our second KI product is called Maximum Likelihood Evidential Reasoning, or rather more simply, the MAKER framework. This framework is used to identify fraud. Previously, Kennedys employed a rules-based method, developed by a team of specialist fraud analysts, with each rule contributing to an overall score; when the score crossed a predetermined threshold, the claim was tagged as fraudulent.
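The rules-based approach can be sketched as follows. The rule names, weights and threshold here are entirely invented; they stand in for the rules developed by the specialist fraud analysts.

```python
# Each expert rule contributes a weight to an overall score; a claim is
# flagged when the total crosses a threshold. All values are invented.
RULES = {
    "claim_soon_after_policy_start": 30,
    "no_independent_witness": 10,
    "multiple_recent_claims": 25,
    "inconsistent_statements": 35,
}

THRESHOLD = 50

def score_claim(observations):
    """Sum the weights of every rule the claim triggers."""
    return sum(w for rule, w in RULES.items() if observations.get(rule))

claim = {"claim_soon_after_policy_start": True, "multiple_recent_claims": True}
total = score_claim(claim)
print(total, total >= THRESHOLD)  # 55 True
```

A scheme like this is simple and auditable, but every weight must be set and maintained by hand, which is the gap a statistical approach can help close.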
The MAKER framework instead makes use of statistical methods to help our fraud analysts work more efficiently by augmenting our expert rules. The approach calculates the maximum likelihood of fraud given a set of observable features and has been developed to be as transparent as possible: at each step, the observables and their respective contributions to the overall fraud likelihood are explicit.
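To give a flavour of this kind of transparent statistical reasoning (a simplified illustration, not the MAKER framework itself), each observable feature can carry a likelihood under "fraud" and under "genuine", with its contribution to the final assessment expressed as a log-likelihood ratio so that every feature's effect on the result stays explicit. All probabilities below are invented.

```python
import math

# feature: (P(feature | fraud), P(feature | genuine)) -- invented values.
FEATURE_LIKELIHOODS = {
    "late_notification": (0.60, 0.20),
    "cash_settlement_requested": (0.50, 0.25),
    "prior_claims": (0.40, 0.30),
}

PRIOR_FRAUD = 0.05  # assumed base rate of fraudulent claims

def fraud_posterior(observed):
    """Combine feature evidence in log-odds space, showing each step."""
    log_odds = math.log(PRIOR_FRAUD / (1 - PRIOR_FRAUD))
    for feature in observed:
        p_fraud, p_genuine = FEATURE_LIKELIHOODS[feature]
        contribution = math.log(p_fraud / p_genuine)
        log_odds += contribution
        print(f"{feature}: log-likelihood ratio {contribution:+.2f}")
    return 1 / (1 + math.exp(-log_odds))  # convert log-odds to probability

p = fraud_posterior(["late_notification", "cash_settlement_requested"])
print(f"posterior P(fraud) = {p:.2f}")  # 0.24
```

Because every feature's contribution is printed alongside the final probability, an analyst can see exactly why a claim scored the way it did, which is the transparency property described above.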
Making the shift from thinking of machine intelligence as something that will replace humans tomorrow to one that can enhance humans today opens up a much more immediate, realistic and powerful set of opportunities for creating value and reducing cost. Not so much the rise of the robot and subservience to the machine, just the next stage in our technology evolution with humans firmly at the helm.
This article was first published by Claims Media.