Royal Society, London
Please meet in reception.
The Rt Hon. Lord Willetts
Professor Marta Kwiatkowska (University of Oxford) and Professor David Hand (Imperial College London)
We are rushing headlong into a world in which great reliance is placed on machine learning and AI systems. And yet there is uncertainty about how these systems will behave when confronted by unexpected situations, events of a kind they have not previously encountered, changing conditions, or faulty data. Illustrating with real examples, this talk shows the need for robust and reliable systems.
Professor Michael Wooldridge (The Alan Turing Institute and University of Oxford)
The field of multi-agent systems is concerned with AI systems (agents) that interact with each other. Multi-agent systems raise a host of challenges for AI, because we need to equip AI systems with social skills -- the ability to cooperate, coordinate, and negotiate with each other. The most important single application of multi-agent systems is in the global financial markets: automated (high-frequency) trading programs are agents, autonomously buying and selling on timescales that are beyond the ability of humans to monitor. Unfortunately, multi-agent systems are prone to unpredictable and dramatic dynamics, as was demonstrated in the Flash Crash of 6 May 2010, when the US financial markets collapsed over a 20-minute period, suffering the largest one-day point drop in history -- only to recover within a similar timescale. The widespread use of multi-agent systems demands theory and tools to understand and manage their dynamics. In this talk, I will begin by briefly motivating the work, and then discuss two approaches to this problem. In the first, we use ideas from game theory and program correctness to analyse rational behaviours in multi-agent systems. In the second, we use large-scale agent-based simulation to directly model flash crash scenarios.
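The agent-based simulation approach mentioned in the abstract can be illustrated with a deliberately minimal sketch (not the models used in the talk): a market of fundamental traders, who pull the price towards fair value, and momentum traders, who amplify the most recent move. A single exogenous sell shock then produces a flash-crash-like dip followed by recovery. All parameter values and agent rules here are illustrative assumptions.

```python
import random

def simulate_market(n_momentum=50, n_fundamental=50, steps=200,
                    fair_value=100.0, shock_step=100, shock_size=-5.0,
                    seed=42):
    """Toy agent-based market. Momentum agents buy rising markets and
    sell falling ones; fundamental agents trade against deviations from
    fair value. Returns the price history as a list."""
    random.seed(seed)
    price, last_price = fair_value, fair_value
    history = [price]
    for t in range(steps):
        move = price - last_price
        demand = 0.0
        # Momentum agents: amplify the last price move.
        for _ in range(n_momentum):
            demand += 0.8 * move + random.gauss(0, 0.01)
        # Fundamental agents: pull the price back towards fair value.
        for _ in range(n_fundamental):
            demand += 0.3 * (fair_value - price) + random.gauss(0, 0.01)
        last_price = price
        price += demand / (n_momentum + n_fundamental)
        if t == shock_step:      # one large exogenous sell order
            price += shock_size
        history.append(price)
    return history
```

With these (assumed) coefficients the price dynamics are a damped oscillation: the momentum agents deepen the initial shock into a sharper dip, and the fundamental agents then restore the price -- a crash-and-recovery pattern qualitatively like the one described above.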
Dr Aldo Faisal (Imperial College London), opened by
Professor Yike Guo (郭毅可) (Imperial College London)
Artificial intelligence in healthcare has seen many recent successes in diagnostics, often achieving above-human-expert levels of performance and promising more precise, faster and cheaper diagnosis. These AI diagnostic capabilities mimic a clinician's perceptual ability to assess a situation. However, diagnosis is only where medicine starts: the next big step is understanding how to tackle treatment, i.e. capturing the ability of experts to plan, manage and adjust treatments and therapies to steer patients towards a healthy outcome. These cognitive abilities of clinicians require us to build technology that can capture cognitive processes and deliver them in such a way that experts and patients can understand and adopt. This requires the development of appropriate methods for measuring, predicting and building trust whenever an AI and a human are in one interaction loop.
Shakeel Khan (Her Majesty's Revenue and Customs), Frankie Kay (Office for National Statistics) and Dr Jasmine Grimsley (Office for National Statistics)
A review of machine learning maintenance in HMRC inspired by financial services best practice.
Dan Kellett (Capital One) and Professor Jonathan Crook (University of Edinburgh Business School)
Within Financial Services the need for a structured governance framework is clear. This becomes more important with the use of more opaque Machine Learning techniques. This talk will discuss the frameworks used at Capital One to ensure the build, deployment and ongoing assessment of algorithms are understood and the risks well managed.
Stan Boland (FiveAI) and Dr Iain Whiteside (FiveAI)
It’s clear that building self-driving technology requires companies to untangle several issues: complexity, uncertainty, infinity and human safety. That makes it the most challenging science problem of our era. The complex systems being built comprise many hardware and software components, including AI components with uncertain outputs. They are required to go out into the infinite state space we call the real world, with its many confounders which, by definition, they cannot have seen before, and we expect this complicated, fragile and error-prone integrated software and hardware stack to perform safely in virtually all decisions, to at least the level of human performance. That means an error rate of about 1 in 10^7 decisions.
This talk will describe how we go about framing this challenge, what tools we need to start to solve it, how they will work and what we can expect to see over the next 5-10 years.
Professor Michael Bronstein (Twitter)
In the past decade, deep learning methods have achieved unprecedented performance in various fields, from computer vision to speech recognition. In the majority of these applications, the data has an underlying Euclidean grid-like structure. Recently there has been increasing interest in developing deep learning methods for graph- and manifold-structured data. Such data arises in applications ranging from social networks and recommender systems to particle physics, computational biology, and drug design. In this talk, I will give an overview of the new field of geometric ML, encompassing deep neural models for graphs, and its promises, limitations, risks and challenges.
Pushmeet Kohli (Google Deepmind)
Deep learning has led to rapid progress in the field of machine learning and artificial intelligence, leading to dramatically improved solutions to many challenging problems such as image understanding, speech recognition, and control systems. Despite these remarkable successes, researchers have observed some intriguing and troubling aspects of the behaviour of these models. A case in point is the presence of adversarial examples, which make learning-based systems fail in unexpected ways. Such behaviour, and the difficulty of interpreting the behaviour of neural networks, is a serious hindrance to the deployment of these models in safety-critical applications. In this talk, I will review the challenges in developing models that are robust and explainable and discuss the opportunities for collaboration between the formal methods and machine learning communities.
Dr Stephanie Hare (Researcher and Broadcaster in Technology and Politics) and Giles Herdale (Independent Digital Ethics Panel for Policing)
The legitimacy of policing has been contested ever since the creation of the Metropolitan Police as the world’s first professional police force 190 years ago. As such, the Met’s founder, Sir Robert Peel, famously set out principles of policing by consent that have shaped British policing to this day.
We are now in a period of unprecedented flux, where demands and expectations of policing are changing at a faster pace than ever before. The traditional, locally organised and accountable model of policing envisaged at the time of Peel is challenged by the globalisation of digital communications and by increasing mobility and connectivity. Rapid rises in the prevalence of online offending and the growth of digital evidence have placed existing systems and processes for crime prevention and investigation under considerable stress.
It is therefore vital for the continued relevance and effectiveness of policing that it is able to engage with these changing requirements and adapt accordingly. This need for innovation has been highlighted by Sir Tom Winsor, Chief Inspector of Constabulary, the independent regulator of policing in England and Wales: “It is essential that the police are given the means to [invest in new technology]. For example, body-worn video, fully-functional hand-held mobile devices, facial recognition and artificial intelligence, and the connected systems and infrastructure to support them, are all things in which police forces must invest for the long term. If they don’t, they are left playing catch-up as offenders intensify and increase their abuse of modern technology to cause harm.”
New methods of crime prevention, investigation and evidence gathering are being developed in response to these changing demands, often involving privacy intrusive technology, including (but not limited to) facial recognition. Other applications include downloading data from devices for investigative purposes, machine learning algorithms searching data sets against matching criteria, and predictive policing applications including hotspot mapping and offender profiling. All depend on the collection and analysis of significant amounts of data, including sensitive personal data.
How to engage with the potential of technology, and at the same time maintain the principles of policing by consent, is the key challenge for policing the digital age.
Panel session chaired by Dr Zeynep Engin (University College London).
Carly Kind (Ada Lovelace Institute),
Professor Marta Kwiatkowska (University of Oxford)
Dr Martin Goodson (Evolution AI)
Tom Smith (Office for National Statistics)
Professor David Hand