“Towards the assurance of AI-based systems”

May 14, 2024 @ 1:30 pm – 2:30 pm (America/New_York time zone)
QinetiQ UK
78 St. James’ Street
London, SW1A 1JB

IEEE Aerospace and Electronic Systems Society – co-sponsored by AESS Boston (Chair Dr. Francesca Scire-Scappuzzo) and AESS London (Chair Dr. Julien Le Kernec)

Seminar: “Towards the assurance of AI-based systems via formal verification”

Registration is required.

Agenda (times below are local London time; since this event takes place in London, for USA remote attendees the WebEx presentation will be a lunchtime event, 1:30 PM – 2:30 PM Eastern Time):
6:00–6:30 PM – In-Person Networking
6:30–7:30 PM – WebEx Presentation
7:30–8:00 PM – In-Person Networking

Speaker: Prof. Alessio Lomuscio, PhD, Imperial College London, UK; Safe AI Lab; Royal Academy of Engineering Chair in Emerging Technologies.

Bio: Dr. Lomuscio is Professor of AI Safety and Director of the Safe AI Lab, Department of Computing, Imperial College London, UK. He is founding co-director of the UKRI Doctoral Training Centre in Safe and Trusted Artificial Intelligence. Alessio’s research interests concern the development of verification methods for artificial intelligence. Since 2000 he has pioneered the development of formal methods for the verification of autonomous systems and multi-agent systems, both symbolic and ML-based. He has published over 200 papers in leading AI and formal methods conferences and journals. He is an ACM Distinguished Member, a Fellow of the European Association for Artificial Intelligence, and currently holds a Royal Academy of Engineering Chair in Emerging Technologies. Prof. Lomuscio is the founder and CEO of Safe Intelligence, a VC-backed Imperial College London spinout helping users build and assure robust ML systems.

Abstract: A major challenge in deploying ML-based systems, such as ML-based computer vision, is the inherent difficulty of ensuring their performance in the operational design domain. The standard approach consists of extensively testing models on sampled inputs. However, testing is inherently limited in coverage, and it is expensive in several domains. Novel verification methods provide guarantees that a neural model meets its specifications in dense neighborhoods of selected inputs. For example, by using verification methods we can establish whether a model is robust with respect to infinitely many lighting perturbations, or particular noise patterns in the vicinity of an input. Verification methods can also be tailored to specifications in the latent space, establishing the robustness of models against semantic perturbations not definable in the input space (3D pose changes, background changes, etc.). Additionally, verification methods can be paired with learning to obtain robust learning methods capable of generating models inherently more robust than those derived with standard methods. In this presentation I will succinctly cover the key theoretical results leading to some of the present ML verification technology, illustrate the resulting toolsets and capabilities, and describe some of the use cases developed with our colleagues at Boeing Research, including centerline distance estimation, object detection, and runway detection. I will argue that verification and robust learning can be used to obtain models that are inherently more robust and better understood than those produced by present learning and testing approaches, thereby unlocking the deployment of applications in industry.
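To make the "guarantees in dense neighborhoods of selected inputs" idea concrete, here is a minimal sketch of checking local robustness via interval bound propagation, one common family of neural-network verification techniques. This is an illustrative toy, not the speaker's tooling: the two-layer ReLU network, the epsilon, and all function names below are hypothetical assumptions for the example.

```python
# Toy local-robustness check via interval bound propagation (IBP).
# Given an input x, we soundly bound the network's outputs over the
# entire L-infinity ball of radius eps around x; if the lower bound
# of the target logit exceeds the upper bound of every other logit,
# the model is provably robust for ALL (infinitely many) inputs in
# that neighborhood -- something sampling-based testing cannot show.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius          # worst-case spread of the box
    return c - r, c + r

def verify_robust(W1, b1, W2, b2, x, eps, label):
    """True if every input within L-inf distance eps of x is
    provably classified as `label` by the 2-layer ReLU net."""
    lo, hi = x - eps, x + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    others = np.delete(hi, label)
    return bool(lo[label] > others.max())
```

Because interval bounds are sound but not exact, a `True` answer is a proof of robustness, while a `False` answer may be either a genuine counterexample region or over-approximation looseness; practical verifiers tighten the bounds with more precise relaxations.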
