Article by Sean Hunter, Chief Information Officer at OakNorth
Artificial intelligence (AI), machine learning (ML) and large-scale data analysis (sometimes known as ‘Big Data’) are a set of technologies and techniques that are having an ever-increasing impact across industries.
Computers excel at high-speed repetition of tasks that can be explicitly encoded as algorithms to process data. However, many tasks that humans find trivial to perform are extremely difficult to decompose in this way, and traditional approaches to computer science have therefore been unsatisfactory in tackling them. For example, although the game Go has an extremely simple rule set that a child can learn in less than an hour, the number of possible moves is so staggering that even the fastest computer could never find good moves purely by brute-force evaluation of every possibility. For a computer to be effective at this problem, it must have a method of first pruning irrelevant moves from the search, thereby reducing the complexity of the challenge.
AI is a branch of computer science concerned with approaches to making computers tackle problems not by encoding the solutions in an algorithm but instead by creating simplified models of cognitive processes and then training these models to solve the problem. In this limited sense, the computer can be seen to be tackling the problem more similarly to how a human might think and is therefore ‘artificially intelligent’ and not merely an extremely efficient calculating machine (although the field has always raised interesting questions about what constitutes ‘intelligence’ as opposed to mere calculation).
One of the most notable triumphs of AI occurred in recent years when AlphaGo (an AI system from DeepMind) defeated the very best human Go players, a task that seemed impossible given that only a few years ago even an amateur Go player could beat all but the strongest computer opponents with ease.
ML is a subset of AI concerned with training computer models to progressively improve their performance at a given task without being explicitly programmed to solve it. It can be subdivided into supervised learning (where a human provides classifications known as ‘labels’ for the data used to train the model), unsupervised learning (where the model has no labelled training data but is guided by statistical properties of the input data itself) and semi-supervised learning (a hybrid of the two in which some labelled training data is provided but much or most of the training data is not). These can be coupled with reinforcement learning, where the system attempts to maximise a ‘reward’ function: the operator specifies the characteristics of a desired solution, and the system discovers candidate solutions and evaluates them relative to one another. Semi-supervised and reinforcement learning techniques in particular allow skilled human operators to guide the training process and thereby accelerate learning of some aspects of the task. These techniques have been used successfully in a wide variety of applications, in finance and elsewhere.
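To make the idea of supervised learning concrete, here is a minimal illustrative sketch (not any production system): a perceptron, one of the simplest trainable models, learns the logical AND function purely from human-provided labels. The function names and data are invented for the example.

```python
# Minimal supervised-learning sketch: a perceptron learns the logical AND
# function from labelled examples. Illustrative only; real systems use
# libraries such as scikit-learn or PyTorch.

def train_perceptron(samples, labels, lr=0.1, epochs=50):
    """Learn weights and a bias from (input, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                 # the supervision signal: label vs prediction
            w[0] += lr * err * x1          # nudge weights toward the labelled answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]                      # human-provided 'labels' for AND
w, b = train_perceptron(samples, labels)
print([predict(w, b, x1, x2) for x1, x2 in samples])  # → [0, 0, 0, 1]
```

The model is never told the rule for AND; it is nudged toward it one labelled example at a time, which is the essence of supervised training.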
AlphaGo and its successor AlphaZero used reinforcement learning: candidate models played vast numbers of games against each other, with training guided by the outcomes of those games.
A common model used in AI is an ‘artificial neural net’ that simulates neurons connected together by synapses. A neuron will ‘fire’, producing a value (or not) on its output synapses based on an activation function acting on the weighted values of its input synapses. If there are hidden layers in this net between the input and the output layers, it is called ‘deep’, and ‘deep learning’ is the process by which the weights and activation functions of a deep neural net (or similar multilayer architecture) are trained such that it can be used to solve a given problem.
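The mechanics described above can be sketched in a few lines. This is a toy forward pass through a ‘deep’ net (one hidden layer between input and output) with hand-picked weights; in deep learning those weights would instead be tuned from data. All weights and values here are arbitrary illustrations.

```python
import math

# Toy 'deep' neural net: 2 inputs -> hidden layer of 2 neurons -> 1 output.
# Weights are fixed for illustration; deep learning is the process of
# adjusting such weights so the net solves a given problem.

def sigmoid(z):
    """Activation function: squashes a weighted input sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """A neuron 'fires' according to an activation over its weighted inputs."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def forward(x):
    # Hidden layer: each neuron reads both inputs through its own weights.
    h1 = neuron(x, [2.0, -1.0], 0.5)
    h2 = neuron(x, [-1.5, 1.0], 0.0)
    # Output layer: one neuron reads the hidden activations.
    return neuron([h1, h2], [1.0, -2.0], 0.25)

print(forward([0.8, 0.2]))  # a single value in (0, 1)
```

Stacking more hidden layers, and training the weights by gradient descent rather than fixing them by hand, is what turns this sketch into the deep learning of the main text.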
This is an area that has attracted high levels of research interest in the last few years as deep learning has proven capable of producing models that are able to tackle applications across a wide variety of fields. One of these applications, natural language processing (or NLP), is an area of AI that focusses specifically on machine understanding and generation of speech and written languages. This can allow a computer to perform tasks such as analysing the sentiment of passages of human-written text, extracting topics or searching for text that may relate to a given subject.
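As an intentionally naive illustration of sentiment analysis, the sketch below scores text against hand-picked word lists. Real NLP systems use trained statistical or neural models rather than fixed lists; the word sets here are invented for the example.

```python
# Toy sentiment analysis: count hand-picked positive and negative words.
# Illustrative only; production NLP uses trained models, not fixed lists.

POSITIVE = {"growth", "strong", "improve", "success"}
NEGATIVE = {"decline", "weak", "risk", "loss"}

def sentiment(text):
    """Classify text as positive, negative or neutral by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Strong growth this quarter"))  # → positive
print(sentiment("Rising risk of loss"))         # → negative
```

A trained model replaces the fixed word lists with weights learned from labelled examples, letting it handle negation, context and vocabulary far beyond any hand-built list.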
AI and ML technologies can help drive business growth, as demonstrated by OakNorth’s journey to date. OakNorth’s enterprise software combines a deep understanding of credit, dynamic data sets, auto-analysis capabilities, cloud computing and state-of-the-art machine learning to create a better borrowing experience for businesses. Our credit and monitoring tool is informed by industry benchmarks, peer analysis and scenario analysis to improve the quality, consistency and speed of commercial lending decision-making. By analysing each borrower’s data in the context of its geography and sector, and monitoring a borrower against its peers, the software is able to alert lenders when a loan or borrower needs attention. In five years, the technology has enabled OakNorth Bank to become the fastest-growing business in Europe (Source: FT 1000), achieving performance metrics that place us amongst the top 1% of banks globally.