Sector Specific AI Examples

Insurance

Fraud Detection

Fraud prevention is an important application of prediction techniques. With an increasing number of transactions, it is no longer feasible to review all of them manually. As a remedy, one may capture the experience of human experts and encode it in an expert system.

The traditional approach has the disadvantage that the expert’s knowledge, even when it can be made explicit, quickly becomes outdated as new kinds of organized attacks and fraud patterns emerge.

To stay profitable, relying on traditional methods alone is no longer viable. The answer is to use machine learning algorithms that learn fraud patterns directly from the data.

Let 2021.AI be a catalyst to start your company’s journey with these new technologies, such as machine learning.

Product and Coverage optimization & Recommendation

Implementing a recommendation engine to suggest relevant content to clients is not only key to increasing client engagement, but also a first vital step towards a setup where the clients themselves help tag the content.

 

Financial

Client Lifetime value

The core data in a churn analysis are the historical lifetimes of the clients and an indicator (the censoring variable) of whether each client is still “alive”. These two variables together form the “spells” of the client.

Two methods can now be used in modelling the data; each has its advantages and drawbacks. Method 1 takes a survival analysis approach, which has been used extensively in churn analysis. It is a framework that explicitly takes the censoring (dead/alive indicator) into account; this feature makes it valuable in understanding churn and the drivers behind it. The basic idea is to model the hazard of the client, where hazard refers to the risk of churn within the next time-step.
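
As an illustration of Method 1, here is a minimal sketch in Python assuming the open-source lifelines library; the file name and column names (lifetime_months, churned, age, n_products) are hypothetical placeholders, not from the source.

    # Minimal sketch of Method 1 (survival analysis), assuming the `lifelines` library.
    # The file and column names below are hypothetical placeholders.
    import pandas as pd
    from lifelines import CoxPHFitter

    spells = pd.read_csv("client_spells.csv")   # one row per client: lifetime + censoring indicator

    cph = CoxPHFitter()
    cph.fit(
        spells[["lifetime_months", "churned", "age", "n_products"]],
        duration_col="lifetime_months",   # observed lifetime of the client
        event_col="churned",              # 1 = churned, 0 = still "alive" (censored)
    )

    cph.print_summary()                          # hazard ratios hint at the drivers of churn
    risk = cph.predict_partial_hazard(spells)    # relative churn risk per client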

Method 2 is simple and efficient but relies heavily on the choice of a time-span, h. The idea here is simply to look at the probability of churn within the next predefined interval of length h. The length of this interval is typically decided based on the objective, using domain knowledge about the company together with an analysis of the data, for instance using Method 1. Once the time-length is settled, the probability is modelled using background data and individual client paths as input. Popular choices of models include (among many others) logistic regression, (boosted) decision trees and neural nets.
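
As a sketch of Method 2, the following assumes scikit-learn and a hypothetical client table with a pre-computed label for churn within the chosen horizon h; it is illustrative, not a production model.

    # Minimal sketch of Method 2: probability of churn within a fixed horizon h.
    # The horizon, file and column names are hypothetical placeholders.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    h = 3  # horizon (e.g. months), chosen from domain knowledge and/or a survival analysis

    clients = pd.read_csv("client_history.csv")
    X = clients[["age", "n_products", "months_active", "support_tickets"]]
    y = clients["churned_within_h"]              # 1 if the client churned within h months

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    churn_probability = model.predict_proba(X_test)[:, 1]   # P(churn within h) per client
    print("test accuracy:", model.score(X_test, y_test))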

Anomaly Detection

AI can be used to identify patterns in voluminous historical data and to detect behaviour that deviates from those patterns. Examples of such aberrant behaviour include money laundering, illicit transactions and security threats. Upon detection, the AI system issues an alert that can either trigger automated tasks, such as freezing the account(s) involved, or notify a person for evaluation.
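
A minimal sketch of such a detector, assuming scikit-learn; the transaction file, feature names and the expected share of anomalies are hypothetical.

    # Minimal sketch of anomaly detection on transaction data with an Isolation Forest.
    # File name, feature names and contamination level are hypothetical placeholders.
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    transactions = pd.read_csv("transactions.csv")
    features = transactions[["amount", "hour_of_day", "transfers_last_24h"]]

    detector = IsolationForest(contamination=0.01, random_state=0)  # expect roughly 1% anomalies
    labels = detector.fit_predict(features)                         # -1 = anomaly, 1 = normal

    suspicious = transactions[labels == -1]
    print(f"{len(suspicious)} transactions flagged for manual review")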

Fraud Detection

Fraud prevention is an important application of prediction techniques. With an increasing number of transactions, it is no longer feasible to review all of them manually. As a remedy, one may capture the experience of human experts and encode it in an expert system.

Pharmaceutical

As AI technology advances, predictive models are becoming increasingly intuitive and embedded in a wide array of industries. For the pharmaceutical industry, AI provides a cost-effective and flexible platform to gain insight into the future while simultaneously taking a retrospective view of the past to develop new and improved medicines.

In terms of new product development, pharmaceutical companies channel significant resources into screening compounds for preclinical trials. To accelerate this process, many who operate in this sphere are using predictive models to search vast virtual databases of molecular and clinical data. Data scientists, clinicians and senior management then zoom in on likely molecular candidates that have been screened by AI for appropriate chemical structure, known interactions and cost.

 

Big Data Analysis

The opportunity presented by improving big-data analytical capabilities is especially compelling in complex business environments such as the pharmaceutical industry. In the quest to develop new treatments, ethically sensitive and time critical information is generated from multiple sources including the R&D process, retailers, patients and caregivers. Effective synchronization of this data will help pharmaceutical companies identify new molecules (or treatments) and develop them into effective, approved and reimbursed medicines more quickly.

 

Big data analytical processes can be used to create value across various sectors of the pharmaceutical industry. These include:

 

Predictive Modeling: Predictive modeling of biological processes and drug interactions is the domain of big data. By leveraging the diversity of available molecular and clinical data, predictive modelling helps identify new molecules that have the highest probability of being developed into drugs that act on biological targets safely and effectively.
Data Integration: Intelligent integration of information, from the discovery of a molecule and regulatory approval through to real-world use, provides opportunities for growth. Further, smart algorithms linking laboratory data to clinical experience could rapidly detect safety or efficacy issues.
Safety and Risk Management: Genetic profiling, molecule combinations and multi-layered adverse event profiling are some of the traditional areas where big data analysis is being deployed in the pharmaceutical industry. However, the potential reach of big data is far greater. What if the effectiveness and reputation of a medicine were to be quantified? Then signals could be detected and a profile generated from monitoring online physician communities, consumer-generated media and patient enquiries on medical websites.
Sensors and Devices: Miniaturized biosensors and the evolution of smartphones and their apps are resulting in increasingly sophisticated health measurement devices. Pharmaceutical companies can deploy smart devices to gather large quantities of real-world data not previously available to scientists. This influx of information, if managed via a fluid data exchange, will enable improvements in clinical trial design and increase response time to adverse events in the smallest cohorts.

 

“Every patient experience now generates rivers of data which, if pooled intelligently, can trace a detailed portrait of a patient’s health and, when aggregated with other patient data streams, can coalesce into deep reservoirs of knowledge about entire disease states and patient populations.”

Ref: PwC

 

Patient Data analysis

Clinical Trials

Artificial intelligence can be deployed throughout all phases of the clinical trial process.

Compound Screening: Artificial intelligence can be used to test the most promising compounds before pre-clinical experiments. Intelligent simulations can predict dose-effect relationships for a given number of molecule combinations and, in some cases, can uncover new therapeutic potential for existing compounds.
Optimised Clinical Trial Design: Identifying the most promising combination molecules through compound screening can uncover markers of resistance or sensitivity early in the trial planning phase. Incorporating this knowledge into the design of a clinical trial could:

  • Fast track project approval
  • Simulate optimal dosage
  • Screen for most suitable subjects
Data Mining: By screening entire libraries of historical data, artificial intelligence can uncover drug repurposing opportunities. Compounds that failed previous trials can be resurrected and paired with new molecules within a virtual space. Such trials can test the best combinations and rapidly pinpoint potential candidates for re-use.
Precision Medicine: Precision medicine is an approach to disease treatment and prevention that seeks to maximize effectiveness by taking into account individual variability in genetics, environment and lifestyle. These precision medicine principles, when supported by an artificial intelligence engine, provide an opportunity for pharmaceutical companies to redefine our understanding of disease onset and progression, treatment response and health outcomes. This insight could lead to more accurate diagnoses, more rational disease prevention strategies, better treatment selection, and the development of novel therapies.

Clinical Imaging – X-ray and MRI/CT analysis

Artificial intelligence image-processing algorithms can be trained to interpret MRI and CT scans. By teaching computers to read and diagnose medical images, the risk profile of each patient becomes clearer, and early detection of disease can assist in the formulation of timely preventative care programs.

 

Some of the areas where AI can support medical imaging specialists and back office operations include:

 

Medical imaging/radiology benefits and the AI foundation behind each:

  • Tumor Monitoring: the patient’s library of scans is loaded into an appropriate AI engine and a chronological time-course analysis is applied.
  • Second Opinion – Diagnostic Validation: consensus opinions of other experts are entered into the AI engine.
  • Automated Medical Reports: the AI engine merges the clinical evaluation with a text library.

 

Manufacturing and Industry

Power optimization

Some manufacturing and industrial companies have a large power consumption that adds up to substantial costs every day. Clever algorithms that can automatically control power utilization could potentially save these companies huge amounts; for some, even a cost reduction of 1-2% would have a big impact. Artificial intelligence can combine and analyse power-utilization data from all of the components and optimize the workflows accordingly to minimize the cost.

 

Processes (Everything that humans can do machines can do)

In recent years, machines have steadily increased their presence in many areas of manufacturing and industry. The vast growth of available technologies and ever-present competitiveness drive companies to use machines instead of people wherever possible. One of the main components of such a machine is its operating system, which can be described as its brain. As the areas of application become more advanced, so do the systems, which in turn require more advanced algorithms. Implementing AI in this area is necessary to optimize and speed up processes, and its use will only grow in the coming years, as we already see happening with, for example, self-driving cars.

 

Inventory optimization

Keeping inventory, such as the different parts for the end products as well as spare parts for the machines that produce them, at levels that avoid shortages without overstocking is a challenge for many industries. Clever algorithms that combine data across different segments of the business can help optimize this through the use of machine learning.

 

Fault detection

Not knowing when something is going to fail, and not having spare parts on hand, can be very costly. The right data and algorithms can help predict potential faults of the system and failures of individual components, and allow you to act faster.

 

Supply and demand

Variations in demand can sometimes come as a surprise and result in a shortage or overproduction of a product. AI and machine learning can process the latest demand data and automatically adjust the production of that product to the actual need.

 

Marketing & Retail

Product Pricing

Optimizing your profits goes hand in hand with optimizing product pricing. Aspects such as time, client type and product type all play a big role in the optimization. Combining data from different sources to describe and predict product prices can be accomplished with different machine learning algorithms. Some of the areas where product pricing optimization is of high value are hotel and housing bookings, transport tickets, insurance policies and advertising.

 

Customer Segmentation

People are different and have different needs, so why not use AI to help you segment your customers and tailor your business and offering accordingly? Identifying differences among your customer segments can not only help customers find what they need faster and make them happy, but also improve your revenue. By segmenting customers, it is possible to offer relevant products to relevant target groups, customize marketing channels and content, and tailor the information shown on your website to improve your business and customer experience.
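
As a minimal sketch of such a segmentation, the following assumes scikit-learn and a hypothetical customer table with recency/frequency/monetary-style features; the number of segments is an arbitrary illustrative choice.

    # Minimal sketch of customer segmentation with k-means clustering.
    # File name, feature names and the number of segments are hypothetical placeholders.
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    customers = pd.read_csv("customers.csv")
    rfm = customers[["recency_days", "frequency", "monetary_value"]]

    X = StandardScaler().fit_transform(rfm)                 # put features on a comparable scale
    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
    customers["segment"] = kmeans.fit_predict(X)

    # Average profile per segment, e.g. for targeting marketing channels and content
    print(customers.groupby("segment")[["recency_days", "frequency", "monetary_value"]].mean())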

 

Recommender engine

Similar to how Netflix and Amazon recommend new content and products to their users, other companies can do the same, whether it is for the sale of books, shoes, travel or numerous other products that reflect a person’s behaviour online. Using machine learning algorithms, models can be trained to identify visitors and personalize the web content to best suit them. There are many possibilities with the use of recommender engines: upselling, cross-selling, improved conversion rates, or simply enabling visitors to get relevant information and thereby improving their experience.

 

Forecasting

Forecasting is the process of making predictions about the future based on past and present data. Forecasting with regard to marketing and retail can be done on many different parameters, depending on the nature of the business. Price is one possibility, but client inflow, lead acquisition and customer lifetime value are other metrics that can be predicted with forecasting techniques. Many methods are used for forecasting; some of the common ones include the different forms of regression analysis. An important aspect of forecasting is whether the data is time-dependent or not: time series analysis is often the way to go if it is.
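
As a minimal sketch of a time series forecast, the following assumes the statsmodels library and a hypothetical monthly sales file; the ARIMA order is an arbitrary illustrative choice.

    # Minimal sketch of time series forecasting with an ARIMA model.
    # File name, column names and the (p, d, q) order are hypothetical placeholders.
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    sales = pd.read_csv("monthly_sales.csv", index_col="month", parse_dates=True)["revenue"]

    model = ARIMA(sales, order=(1, 1, 1))    # simple order; in practice tuned on hold-out data
    fitted = model.fit()

    forecast = fitted.forecast(steps=12)     # forecast the next 12 periods
    print(forecast)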

 

Public Sector

Text classification and recognition

Automated classification of text, whether articles or documents, could save time and money while still getting the job done. Examples include improving how emails are channelled to the right recipients, or sending automated replies based on the classification of the email; such tools are already being used in automated chat generators. Identifying the needed content among vast masses of information on the internet could also be automated to gather the right input on a subject. Machine learning algorithms can also be trained to recognize handwritten text and digitize it, a task quite similar to image recognition. The applications of these techniques are many and could benefit a wide range of areas.
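
A minimal sketch of such an email classifier, assuming scikit-learn; the training emails and department labels are made-up toy data.

    # Minimal sketch of automated text classification, e.g. routing emails to the right team.
    # The example emails and labels are hypothetical toy data.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = [
        "My invoice seems to be wrong, please correct it",
        "I cannot log in to the self-service portal",
        "Where do I find the application form for a building permit?",
        "The payment page crashes when I press submit",
    ]
    departments = ["billing", "it_support", "permits", "it_support"]

    classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
    classifier.fit(emails, departments)

    print(classifier.predict(["The portal crashes when I log in"]))   # -> 'it_support'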

 

Predictive Diagnostics and Treatment

Predictive diagnostics and treatment are not widely used in healthcare today, but they have great potential for predicting a patient’s health development as well as customizing the right treatment approach. With the help of AI, predictive diagnostics and treatment can assist doctors in predicting a disease based on symptoms or in identifying preventive treatment based on a patient’s biological data. The possibilities are many, but in order for the models to be successful, large amounts of structured data are required to train them.

 

Predictive Traffic and Delays

One often hears about traffic jams and delays only after they have happened. Predictive modelling of traffic based on historical and current data could potentially predict traffic jams and delays in advance. Features like these have already been implemented with success in trip planners that estimate travel time, while elsewhere they have been integrated directly into the streets to prevent large traffic jams from building up. The key to achieving valuable predictions is having solid historical data to support the training of the models. Traffic prediction could be used not only to warn drivers of potential jams and accidents, but also to regulate traffic and traffic lights so that bottlenecks are never created, and to adjust public transportation systems to cope with these problems.

Horizontals

CLIENT HEALTH SCORE

Used for customer retention and churn understanding and prevention.

Data is needed in order to understand, model and predict churn. The core data in a churn analysis are the historical lifetimes of the clients and an indicator (censoring variable) of the client being “alive” or not. These two variables together form the “spells” of the client.

These spells are interesting in themselves when trying to understand the lifecycle of a client at a top level, but alone they often give low prediction accuracy and thereby low value on the individual level. The key is of course to identify and characterize high-risk clients. In order to do so, we need (lots of) individual information on the clients.

Two methods can be used in modeling the data:

  1. The first method takes a survival analysis approach, which has been used extensively in churn analysis. It is a framework that explicitly takes the censoring (dead/alive indicator) into account; this feature makes it valuable in understanding churn and the drivers behind it. The basic idea is to model the hazard of the client, where hazard refers to the risk of churn within the next time-step.
  2. The second method is simple and efficient, but relies heavily on the choice of a time-span, h. The idea here is simply to look at the probability of churn within the next predefined interval of length h. The length of this interval is typically decided based on the objective, using domain knowledge about the company together with an analysis of the data, for instance using the first method. Once the time-length is settled, the probability is modelled using background data and individual paths as input. Popular choices of models include (among many others) logistic regression, (boosted) decision trees and neural nets. Here a trade-off between interpretability and accuracy (often) plays a role.

PRICING OPTIMIZATION

Pricing optimization and revenue management will have a very different face by 2020. Digital developments in areas such as big data, the Internet of Things (IoT) and artificial intelligence will ultimately reshape the way airline pricing analytics works.

  • Visualize price trends.
  • Gain visual insight into your competitive position.
  • Explore the current and past market price structure.
  • Real-time monitoring – get immediate alerts of problems.

 

ADAPTIVE WEBSITES

Imagine presenting your clients with a welcoming website that is tailored to their needs.

With an adaptive website, the structure, content or presentation of information adapts in response to measured user interaction with the site, with the objective of optimizing future user interactions.

Adaptive websites “are websites that automatically improve their organization and presentation by learning from their user access patterns.”  – much like a recommender engine.

User interaction patterns may be collected directly on the website or mined from web server logs. One or more models of user interaction are then created using artificial intelligence and statistical methods, and these models are used as the basis for tailoring the website to known and specific patterns of user interaction.

 

RECOMMENDER ENGINE

Implementing a recommendation engine to suggest relevant content to clients is not only key to increasing client engagement, but also a first vital step towards a setup where the clients help tag the content.

The main goal of a recommender engine is to help the client navigate a potentially huge content space, i.e. personalization is key. The engine should, when properly implemented, help each client find what they are looking for when actively searching, and also serve them indirectly by delivering relevant suggestions.

Ratings: In order to fulfill this task, the system needs to know whether the delivered content was actually of interest and how much. This kind of data is normally referred to as ratings, and we distinguish between two types:

  1. Explicit ratings: This could be as simple as asking the client, at the bottom of the webpage, to rate the content on a predefined scale, which could be yes/no or perhaps 1 to 5.
  2. Implicit ratings: These kinds of ratings are at least as important. Often explicit ratings are not available, and one might prefer not to bother the clients all the time. Many important implicit ratings are on a binary scale, e.g. whether a mail was opened or a button on the webpage was clicked. Others are numeric, like the amount of time spent on a page (reading an article). A minimal sketch of a recommender built on implicit ratings of this kind follows below.
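
As a minimal sketch, the item-based approach below uses only NumPy and a made-up binary interaction matrix (rows are clients, columns are content items); real systems would of course use far larger data and more refined models.

    # Minimal sketch of an item-based recommender built on implicit (binary) ratings.
    # The interaction matrix is hypothetical toy data: 1 = opened/clicked, 0 = no interaction.
    import numpy as np

    interactions = np.array([
        [1, 1, 0, 0, 1],
        [1, 0, 0, 1, 0],
        [0, 1, 1, 0, 1],
        [1, 1, 0, 0, 0],
    ], dtype=float)

    # Cosine similarity between items, based on which clients interacted with them
    norms = np.linalg.norm(interactions, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    item_similarity = (interactions.T @ interactions) / (norms.T @ norms)

    def recommend(client_idx, top_n=2):
        """Score unseen items by similarity to items the client already interacted with."""
        seen = interactions[client_idx]
        scores = item_similarity @ seen
        scores[seen > 0] = -1.0            # never recommend items the client has already seen
        return np.argsort(scores)[::-1][:top_n]

    print(recommend(client_idx=1))         # indices of suggested content items for client 1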

SENTIMENT ANALYSIS

Sentiment analysis – otherwise known as opinion mining – is a much bandied about but often misunderstood term.

In essence, it is the process of determining the emotional tone behind a series of words, used to gain an understanding of the attitudes, opinions and emotions expressed within an online mention.

Shifts in sentiment on social media have been shown to correlate with shifts in the stock market.
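
A minimal sketch of sentiment scoring, assuming NLTK’s VADER lexicon (one of several possible tools); the example mentions are made up.

    # Minimal sketch of sentiment analysis using NLTK's VADER sentiment lexicon.
    # The example social media mentions are hypothetical.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)     # one-off download of the lexicon

    analyzer = SentimentIntensityAnalyzer()
    mentions = [
        "Love the new campaign, booked my trip right away!",
        "That violin music in the ad is driving me crazy.",
    ]

    for text in mentions:
        scores = analyzer.polarity_scores(text)    # neg / neu / pos / compound in [-1, 1]
        print(scores["compound"], text)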

 

Case: Expedia Canada

The ability to quickly understand consumer attitudes and react accordingly is something that Expedia Canada took advantage of when they noticed that there was a steady increase in negative feedback to the music used in one of their television adverts.


Sentiment analysis conducted by the brand revealed that the music played on the commercial had become incredibly irritating after multiple airings, and consumers were flocking to social media to vent their frustrations.

A couple of weeks after the advert first aired, over half of online conversation about the campaign was negative.

Rather than chalking up the advert as a failure, Expedia was able to address the negative sentiment in a playful and self-knowing way by airing a new version of the advert which featured the offending violin being smashed.

Source: marketingmag.ca

Models and Algorithms

Decision Trees


Decision tree learning, as the name suggests, uses a decision tree as a predictive model which maps observations about an item to conclusions about the item’s target value. A decision tree is a simple representation for classifying examples. For this section, assume that all of the input features have finite discrete domains, and there is a single target feature called the classification.

Each element of the domain of the classification is called a class. A decision tree or classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of that feature, or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes.
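
A minimal sketch of decision tree learning, assuming scikit-learn and its bundled iris dataset; the printed tree shows internal nodes testing input features and leaves labeled with classes, as described above.

    # Minimal sketch of decision tree learning with scikit-learn's bundled iris data.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

    # Each internal node tests an input feature; each leaf is labeled with a class.
    print(export_text(tree, feature_names=data.feature_names))
    print(tree.predict(data.data[:5]))             # predicted classes for the first five samples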

Regression analysis


Regression analysis is a method for estimating the relationships among variables. It is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Regression analysis is also used to understand which of the independent variables are related to the dependent variable, and to explore the forms of these relationships. There are many different regression techniques, some of which are: Linear Regression, Logistic Regression, Lasso Regression and ElasticNet.
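
A minimal sketch contrasting two of the techniques listed above (linear regression and lasso), assuming scikit-learn and synthetic data.

    # Minimal sketch of regression analysis on synthetic data with three explanatory variables,
    # only two of which actually influence the response.
    import numpy as np
    from sklearn.linear_model import LinearRegression, Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

    linear = LinearRegression().fit(X, y)
    lasso = Lasso(alpha=0.1).fit(X, y)             # the L1 penalty shrinks irrelevant coefficients

    print("linear coefficients:", linear.coef_)
    print("lasso coefficients: ", lasso.coef_)     # coefficient of the irrelevant variable near zero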

 

Neural Network

A Neural Network, or Artificial Neural Network (ANN) as it is called in data science, is an information processing system inspired by the way biological nervous systems work. It is composed of a large number of highly interconnected processing elements working together to solve specific problems.

ANNs learn by example and are trained on historic data. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Neural networks are used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques.

Neural networks are typically organized in layers. Layers are made up of a number of interconnected ‘nodes’, each containing an ‘activation function’. Patterns are presented to the network via the ‘input layer’, which communicates with one or more ‘hidden layers’ where the actual processing is done via a system of weighted ‘connections’. The hidden layers then link to an ‘output layer’ where the answer is produced.
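
The layered structure can be illustrated with a tiny forward pass in plain NumPy; the layer sizes, random weights and sigmoid activation are arbitrary illustrative choices (in a real network the weights are learned from historic data).

    # Minimal sketch of one forward pass: input layer -> hidden layer -> output layer.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):                        # the 'activation function' inside each node
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.5, -1.2, 3.0])         # one input pattern with three features

    W_hidden = rng.normal(size=(4, 3))     # weighted 'connections' from input to 4 hidden nodes
    b_hidden = np.zeros(4)
    W_output = rng.normal(size=(1, 4))     # connections from the hidden layer to one output node
    b_output = np.zeros(1)

    hidden = sigmoid(W_hidden @ x + b_hidden)        # processing done in the hidden layer
    output = sigmoid(W_output @ hidden + b_output)   # the answer produced by the output layer

    print("network output:", output)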

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an “expert” in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer “what if” questions.
Other advantages include:

  1. Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
  2. Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time.
  3. Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
  4. Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

 

Deep Learning

Deep learning is part of a broader family of machine learning methods based on learning representations of data. It is inspired by the way biological nervous systems work. In a deep network there are many layers between the input and the output, allowing the algorithm to use multiple processing layers composed of multiple linear and non-linear transformations.

Some of the most successful deep learning methods involve Artificial Neural Networks (ANNs). An ANN is an information processing system composed of a large number of highly interconnected processing elements working together to solve specific problems. ANNs learn by example and are trained on historic data; an ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Neural networks are used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. The areas of deep learning implementation are numerous, stretching from the medical industry to robotics.
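
A minimal sketch of a small deep network, assuming TensorFlow/Keras and synthetic data; the architecture and hyperparameters are arbitrary illustrative choices.

    # Minimal sketch of a deep (multi-layer) network trained on synthetic data.
    import numpy as np
    from tensorflow import keras

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20)).astype("float32")
    y = (X[:, 0] + X[:, 1] ** 2 > 1).astype("float32")       # synthetic binary target

    model = keras.Sequential([
        keras.layers.Input(shape=(20,)),
        keras.layers.Dense(64, activation="relu"),            # several layers between input and output
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)

    print(model.evaluate(X, y, verbose=0))                    # [loss, accuracy] on the training data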

 

Cluster analysis

Cluster analysis, or clustering, is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). Whether for understanding or utility, cluster analysis has played a big role in a wide variety of fields: social science, biology, statistics, pattern recognition, information retrieval, machine learning and data mining.

Cluster analysis itself is not one specific algorithm, but the general task to be solved. It can be achieved by various algorithms that differ significantly in their notion of what constitutes a cluster and how to efficiently find clusters. Some of the more widely known algorithms include k-means clustering and hierarchical clustering.
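
A minimal sketch of the two algorithms mentioned, k-means and hierarchical clustering, assuming scikit-learn and SciPy and using synthetic two-dimensional data.

    # Minimal sketch of k-means and hierarchical clustering on synthetic data.
    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=0.0, size=(50, 2)),
                   rng.normal(loc=5.0, size=(50, 2))])        # two well-separated groups

    kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    Z = linkage(X, method="ward")                             # agglomerative (hierarchical) clustering
    hierarchical_labels = fcluster(Z, t=2, criterion="maxclust")

    print(np.bincount(kmeans_labels), np.bincount(hierarchical_labels))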

 

Bayesian network

A Bayesian network is a type of statistical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms.

The nodes in a Bayesian network represent variables and the arcs represent direct connections between them. Each arc has a probability attached, and these direct connections are often causal connections. In addition, the network models the quantitative strength of the connections between variables, allowing probabilistic beliefs about them to be updated automatically as new information becomes available.

Bayesian networks can be a useful tool for determining relationships between variables and their probability of affecting each other.
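
The disease/symptom example can be made concrete with a two-node network and Bayes’ rule; the probabilities below are made-up illustrative numbers.

    # Minimal sketch of a two-node Bayesian network: Disease -> Symptom.
    # All probabilities are hypothetical, purely for illustration.
    p_disease = 0.01                        # prior: P(Disease = true)
    p_symptom_given_disease = 0.90          # P(Symptom | Disease)
    p_symptom_given_healthy = 0.05          # P(Symptom | no Disease)

    # Total probability of observing the symptom
    p_symptom = (p_symptom_given_disease * p_disease
                 + p_symptom_given_healthy * (1 - p_disease))

    # Bayes' rule: update the belief about the disease once the symptom is observed
    p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom

    print(f"P(disease | symptom) = {p_disease_given_symptom:.3f}")    # about 0.154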

 

Support Vector Machine

Support Vector Machines (SVMs) are a class of supervised machine learning models with associated learning algorithms that analyze data for classification and regression. Given a set of labeled training data, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier.

A support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class.

SVMs are helpful in text categorization, image classification and other areas such as biology where classification is widely used.
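
A minimal sketch of an SVM classifier, assuming scikit-learn and its bundled breast cancer dataset; the kernel and regularization settings are illustrative defaults.

    # Minimal sketch of a support vector machine for binary classification.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # An RBF kernel lets the separating hyperplane live in a high-dimensional feature space
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    svm.fit(X_train, y_train)

    print("test accuracy:", svm.score(X_test, y_test))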

 

Reinforcement Learning

Reinforcement learning is a type of machine learning that allows machines and software agents to automatically determine the ideal behaviour within a specific context in order to maximize their performance. Simple reward feedback, known as the reinforcement signal, is all that is required for the model to learn its behaviour.

This feedback from the environment allows the machine or software agent to learn and become smarter over time. The behaviour can be learnt once and for all, or it can keep adapting as time goes by.

In a few words, reinforcement learning can be described as a trial-and-error process.
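
A minimal sketch of the trial-and-error idea: tabular Q-learning on a made-up five-state corridor where the only reward is given for reaching the rightmost state; all parameters are illustrative.

    # Minimal sketch of reinforcement learning: tabular Q-learning on a 1-D corridor.
    # The environment, rewards and hyperparameters are hypothetical toy choices.
    import numpy as np

    n_states, n_actions = 5, 2              # actions: 0 = move left, 1 = move right
    Q = np.zeros((n_states, n_actions))     # learned value of each (state, action) pair
    alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount factor, exploration rate
    rng = np.random.default_rng(0)

    for episode in range(500):
        state = 0
        while state != n_states - 1:
            # Trial and error: mostly exploit the best known action, sometimes explore
            if rng.random() < epsilon:
                action = int(rng.integers(n_actions))
            else:
                action = int(np.argmax(Q[state]))
            next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
            reward = 1.0 if next_state == n_states - 1 else 0.0   # the reinforcement signal

            # Q-learning update based on the reward feedback from the environment
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

    print(np.argmax(Q, axis=1))             # learned policy: move right (1) in the non-terminal states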