Friday 3rd February, 2pm-3pm UK time
‘Militarizing Artificial Intelligence: Theory, Technology, and Regulation’ with Professor Nik Hynek and Dr Anzhelika Solovyeva, Charles University.
Security Lancaster Seminar.
This book examines the military characteristics and potential of Artificial Intelligence (AI) in the new global revolution in military affairs.
Offering an original perspective on the utilization, imagination, and politics of AI in the context of military development and weapons regulation, the work provides a comprehensive response to the question of how we might reflect on the AI revolution in warfare and what can be said about the ways in which this has been handled. In the first part of the book, AI is accommodated, both theoretically and empirically, in the strategic context of the ‘Revolution in Military Affairs’ (RMA). The book offers a novel understanding of autonomous weapons as multi-layered composite systems, pointing to a complex, non-linear interplay between evolutionary and revolutionary dynamics. In the second part, the book provides an impartial analysis of the related politics and operations of power, whereby increases in the military budgets and R&D of the great powers are met and countered by advocacy networks and scientists campaigning for a ban on lethal autonomous weapons. As such, it moves beyond popular caricatures of ‘killer robots’ and points out some of the problems which result from over-reliance on such imagery.
This book will be of much interest to students of strategic studies, critical security studies, arms control and disarmament, science and technology studies and general International Relations.
Nik Hynek is a professor specializing in security studies at the Department of Security Studies, Faculty of Social Sciences at Charles University. He leads the interdisciplinary Charles University Research Centre of Excellence dedicated to the topic of ‘Human-Machine Nexus and the Implications for the International Order’.
Dr Anzhelika Solovyeva is an Assistant Professor specializing in strategic and media studies at the Department of Security Studies, Faculty of Social Sciences, Charles University. Her latest monograph, co-authored with Nik Hynek, is The Logic of Humanitarian Arms Control and Disarmament (2020).
Join this Microsoft Teams seminar.
Friday, 10th February 2023
‘Assuring systems security: Addressing engineering, monitoring, and perception’ with Professor Siraj Ahmed Shaikh, Swansea University.
Security Lancaster Seminar.
While cyber-physical systems security poses technical challenges of design and verification, problems of in-life monitoring and risk perception for effective secure operation cannot be ignored. We argue for novel methods and tools to overcome such challenges, including a complex mix of different abstraction levels to address secure design and engineering. Such systems, be they handheld devices or on-road vehicles, need in-life monitoring to detect threats using means that are sound and early. We set out our approach towards this, and ultimately argue that such security risks also need to be effectively communicated. We show how the art and science of language and storytelling could be leveraged for this purpose.
Professor Siraj Ahmed Shaikh is a Professor in Systems Security at Swansea University. He is also Co-Founder and Chief Scientist at CyberOwl, an alumnus of the 1st cohort of the NCSC Cyber Accelerator, addressing maritime security. His research has been funded by EPSRC, MoD, NCSC and LRF. In 2017, he was commissioned by the Government Office for Science to author an evidence review of cyber security threats faced by the UK’s maritime sector, as part of the Foresight: Future of the Sea programme. In 2015, he was funded by the Royal Academy of Engineering for an Industrial Fellowship at HORIBA MIRA, addressing automotive cybersecurity testing.
Join this Microsoft Teams seminar.
Friday, 9th December, 2pm-3pm UK time
‘Discovering Unknowns on Visual Data’ with Yang Zhou (Loughborough University).
Unknown discovery refers to the seeking of novel information, which plays a key role in autonomous robotic exploration in extreme environments, especially underwater and in space. Novel features can raise the interest and attention of an agent, achieving human-like curiosity behaviour, which benefits many applications such as unusual object discovery, landmarking and rare data collection. Earlier methods, like saliency detection, require human labels and complicated procedures. With the development of deep learning for out-of-distribution feature detection, end-to-end unsupervised methods are potential solutions for detecting unknown features and providing real-time navigation guidance to robots as they interact with the environment.
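The out-of-distribution idea behind unsupervised unknown discovery can be illustrated with a minimal sketch: score each incoming feature vector by its distance to previously seen (in-distribution) features, and treat high scorers as candidate ‘unknowns’. This is a generic k-nearest-neighbour novelty score in plain NumPy, not the speaker’s method; the function names and toy data are purely illustrative.

```python
import numpy as np

def novelty_scores(known_feats, new_feats, k=3):
    """Score each new feature vector by its mean distance to the k
    nearest vectors in the known (in-distribution) set: larger
    scores suggest an out-of-distribution / 'unknown' feature."""
    scores = []
    for f in new_feats:
        dists = np.linalg.norm(known_feats - f, axis=1)
        scores.append(np.sort(dists)[:k].mean())
    return np.array(scores)

# Toy demo: features clustered near the origin are "known";
# a far-away query should receive a clearly higher novelty score.
rng = np.random.default_rng(0)
known = rng.normal(0.0, 0.1, size=(50, 4))
queries = np.vstack([rng.normal(0.0, 0.1, size=(1, 4)),  # in-distribution
                     np.full((1, 4), 3.0)])              # far outlier
s = novelty_scores(known, queries)
print(s[1] > s[0])  # the outlier scores higher
```

In a robotic setting, the score would be thresholded online so that only sufficiently novel observations trigger curiosity-driven behaviour.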
Yang received his MSc from the Department of Computer Science at Loughborough University in 2018 and is about to receive his PhD with the EPSRC CDT-EI at Loughborough University. He is now a KTP Research Associate with Loughborough University and MOA Technology Ltd, Oxford. His research interests include computer vision, pattern recognition and image processing. He has published papers at conferences such as ACCV and IEEE SMC, and in journals such as Pattern Recognition.
‘An ML Boosted Software Engine for Next Generation Drones’ with Dr. Carl Sequeira (Flarebright).
Friday, 25th November 2022, 2pm-3pm UK time. #TASSTalk
Market studies on the commercial use of drones repeatedly and consistently indicate values in the order of billions of dollars by 2030. Yet drones are still too unreliable to fly ‘beyond visual line of sight’ (BVLOS), and consequently regulations to date continue to hold this growth back. This perception of unreliability has been reinforced by a recent UK Air Accident Investigation Branch (AAIB) report, which indicated that unmanned aircraft systems (UAS) accounted for a quarter of all occurrences received by the AAIB in 2021. Furthermore, this report highlighted that the predominant cause of UAS accidents was loss of control in flight.
In this presentation, the capabilities offered by Flare Bright’s advanced Machine Learning (ML) driven software engine will be introduced. Using flight-verified use-case descriptions, we will show how using Flare Bright’s software as the ‘brain’ of a drone enables it to fly by itself when communications and satellite positioning signals are lost, and improves its ability to automatically handle adverse weather conditions and land safely. In this way, Flare Bright’s software engine is a route towards realising truly reliable BVLOS flights, thereby unleashing the potential of the unmanned aerial system market. The potential for collaboration with academia en route to Flare Bright realising its ambitions will also be discussed.
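As a purely illustrative sketch of the underlying problem, the simplest way a drone can keep flying without satellite positioning is dead reckoning: integrating its own heading and speed estimates to maintain a position estimate until GNSS returns. This is not Flare Bright’s (ML-driven) engine; all names and numbers below are hypothetical.

```python
import math

def dead_reckon(pos, heading_deg, speed_mps, dt_s):
    """One dead-reckoning step: advance the estimated 2-D position
    (east, north) from onboard heading and speed when GNSS is lost.
    Heading is measured in degrees clockwise from north."""
    theta = math.radians(heading_deg)
    x, y = pos
    return (x + speed_mps * math.sin(theta) * dt_s,
            y + speed_mps * math.cos(theta) * dt_s)

# Fly due north (heading 0 degrees) at 10 m/s for 5 one-second steps.
pos = (0.0, 0.0)
for _ in range(5):
    pos = dead_reckon(pos, 0.0, 10.0, 1.0)
print(pos)  # → (0.0, 50.0)
```

In practice, sensor noise makes the position estimate drift over time, which is exactly why learned models that fuse additional onboard data are attractive for reliable BVLOS flight.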
Carl Sequeira is Flare Bright’s Engineering Manager and Aerodynamics Specialist, where he leads an engineering team pushing the boundaries of embedded AI and miniaturisation in the unmanned aircraft sector. A Chartered Engineer with the IMechE since 2015, Carl is a multi-disciplinary aeronautical engineer with a PhD in unsteady fluid mechanics from the University of Cambridge and commercial and academic research expertise spanning aerodynamics, modelling and simulation, design and test, and device performance. This expertise has been reflected in awards throughout his career, including Cambridge’s Morien Morgan prize for the ‘greatest distinction in Aeronautical Engineering’ as an undergraduate and more recently the 2016 ASME Turbo Expo Best Paper Award (Education category).
Previously, as the Senior Flight Sciences Engineer, Carl led the flight simulation team at Hybrid Air Vehicles (HAV), developed the in-house aerodynamic design capability and was a key member of the flight test team that took the innovative Airlander to the skies above Bedford, UK. He has a firm understanding of aircraft regulatory processes, having served as an EASA-approved Compliance Verification Engineer for aerodynamics, performance and loads while at HAV. He was a Research Associate at Cambridge’s Whittle Lab, where he was the founding member of the Cambridge Tidal Group, and has worked on a range of aerospace-related subjects with Rolls-Royce in Derby, UK, ONERA in Meudon, France, and the VKI in Rhode-St-Genèse, Belgium.
Friday, 11th November, 2pm-3pm UK time. ‘Hierarchical Potential-based Reward Shaping from Specifications’ with Dr. Dejan Ničković (Austrian Institute of Technology). #TASSTalk
The automatic synthesis of policies for robotic-control tasks, including autonomous driving, through reinforcement learning relies on a reward signal that simultaneously captures many possibly conflicting requirements. We introduce a novel, hierarchical, potential-based reward-shaping approach (HPRS) for defining effective, multivariate rewards for a large family of such control tasks. We formalize a task as a partially-ordered set of safety, target, and comfort requirements, and define an automated methodology to enforce a natural order among requirements and shape the associated reward. Building upon potential-based reward shaping, we show that HPRS preserves policy optimality. Our experimental evaluation demonstrates HPRS’s superior ability in capturing the intended behavior, resulting in task-satisfying policies with improved comfort, and converging to optimal behavior faster than other state-of-the-art approaches. We demonstrate the practical usability of HPRS on several robotics applications and the smooth sim2real transition on two autonomous-driving scenarios for F1TENTH race cars.
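HPRS builds on classical potential-based reward shaping (Ng et al., 1999), whose core mechanism can be sketched in a few lines: the shaped reward adds the difference of a potential function over a transition, r' = r + γΦ(s') − Φ(s), which telescopes along any trajectory and therefore preserves the optimal policy. The sketch below shows only this base mechanism, not HPRS's hierarchical composition of safety, target and comfort requirements; the potential function is a toy assumption.

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99, done=False):
    """Potential-based reward shaping (Ng et al., 1999):
    r' = r + gamma * phi(s') - phi(s).  The shaping term telescopes
    along trajectories, so the optimal policy of the original task
    is preserved; at terminal states the next potential is zero."""
    next_potential = 0.0 if done else gamma * phi(s_next)
    return r + next_potential - phi(s)

# Toy example: potential = negative distance to a goal at x = 10,
# so moving toward the goal yields a positive shaping bonus.
phi = lambda x: -abs(10 - x)
print(shaped_reward(0.0, s=4, s_next=5, phi=phi, gamma=1.0))  # → 1.0
```

The design choice is that Φ encodes progress toward the specification, giving dense guidance during learning without altering which policies are optimal.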
Priv.-Doz. Dr. Dejan Nickovic is a Senior Scientist at the AIT Austrian Institute of Technology GmbH, in the Center for Digital Safety and Security. He has been with AIT since December 2011 and focuses on research into verification and testing of complex cyber-physical systems, including autonomous systems. He is the coordinator of the national FFG project “Autonomous Driving Examiner” (ADEX) and serves as the Technical Manager of the H2020 project “Foundations for Continuous Engineering of Trustworthy Autonomy” (FOCETA). He has published more than 70 scientific papers in peer-reviewed international journals and conference and workshop proceedings.
‘Trusted Data Sharing (TDS); sharing data based on trust in dynamic (eco)system life cycles’. Dr. Arthur van der Wees, Arthur’s Legal
TAS-S Seminar. Friday, 18th March 2022
Arthur van der Wees is managing director of Arthur’s Legal, Strategies & Systems, an international research-based strategic and legal organization covering the combination of technology, strategy, impact, ethics and law, with a focus on (inter)national, regional and global strategy and policy aspects of the Digital Age. It has a global practice with multiple relevant programs and projects in the UK, EU and US, in the public, private and public-private sectors, on security, sovereignty, safety, the internet and identity of things, data sharing, human-centricity, nuances of trust, trustworthy cyber-physical ecosystems of ecosystems, dynamic assurance and accountability.
He is (co-)author of various publications about innovation, digital transformation, data, computing, IoT, robotics, AI, manufacturing, autonomous systems, security, safety, privacy and trust. He has contributed to several regulations, standards and policy instruments for the Digital Age. Furthermore, he is an advisory board member or partner in more than 15 European projects.
‘Trust and Governance for Autonomous Vehicle Deployment’, Dr. Phil Koopman, Carnegie Mellon University. Tuesday, 1st March 2022.
‘Industrial Perspectives of Artificial Intelligence’ with Prof. Nick Colosimo, BAE Systems.
This talk addressed the following:
- A discussion of key trends in AI and autonomous systems from a defence industrial perspective.
- Application areas and solutions in terms of products, services, and process improvement.
- Outstanding challenges from a defence industrial perspective relevant to safety and security.
- Views on how those challenges could be addressed.
- Future catalysts and “game changers”.
Prof. Nick Colosimo started with BAE Systems (then British Aerospace) in 1990 as a technical apprentice. In his current role he defines technology strategy and planning, and provides innovative solutions to hard technical problems in the context of the future combat air system (FCAS) project. He is also the Principal Technologist for Disruptive Technologies and a visiting Professor at Cranfield University.
If you would like to find out more about this talk or request a copy of the slides, please email Prof. Colosimo directly.
Invited Industrial Talk from Spirent on Trustworthy Autonomy.
Spirent is a British FTSE 250 company in telecommunications, navigation, and autonomy. They are industrial sponsors of many MSc and research projects at Cranfield University.
‘Security challenges for collaborative autonomous aircraft systems’ with Dr. Cora Perner, Airbus.
Increasing autonomy is an emerging topic for both civil and military aircraft systems. However, the increased connectivity of previously isolated services, in combination with legacy technology, leaves such systems vulnerable to cyber attacks. This talk covered challenges related to securing autonomous aircraft systems operating in the same airspace as crewed aircraft. The focus was on establishing trust with potential collaborators as well as on investigating the impact of a propagating attack on the success of a collaborative mission.
Dr. Perner is a Cybersecurity Aeronautics Architect with Airbus Cybersecurity and leads several research projects. She completed her PhD in Computer Science from the Technical University of Munich and a degree in Aerospace Vehicle Design from Cranfield University.
‘Safe Autonomous Systems: Challenges and Potential Solutions’ with Wilfried Steiner, TTTech Labs.
Over the last decades we have managed to build quite sophisticated dependable systems, like airplanes or power plants. However, the complexity of autonomous systems like self-driving cars is unprecedented, and so is their safety assurance. This talk discussed a conceptual architecture as the foundation for safe autonomous systems, followed by practical design considerations and challenges. Formal verification studies were also presented, along with possible strategies to achieve dependability in systems that incorporate ML components.
Wilfried Steiner is the Director of TTTech Labs, which acts as the center for strategic research as well as the center for IPR management within the TTTech Group. He holds the degree of Doctor of Technical Sciences and the Venia Docendi in Computer Science, both from the Vienna University of Technology, Austria.
His research is focused on dependable cyber-physical systems for which he designs algorithms and network protocols with real-time, dependability, and security requirements.
‘Secure-by-Design – the challenge of moving beyond Cyber Risk Management to Cyber Resiliency’ with Dr. Alex Tarter, Thales
One of the challenges for companies building modern critical infrastructure is the need to design highly complex, interconnected systems that can withstand a changing cyber attack landscape over the long term. Traditional risk management approaches do not account for emergent properties, critical interdependencies or the ability to change continuously.
This talk described some of the approaches and ideas Thales is employing to try and integrate continuous assurance and resilience into our critical systems – so that we can trust that they will work as expected.
Dr. Tarter has been working in the fields of Defence and Critical National Infrastructure cyber security for over 15 years. As the CTO-Cyber, he is responsible for shaping the technical strategy and cyber capabilities of Thales UK. This includes leading the Thales UK Cyber Competence Centre, cyber-related R&D, and global product line management for Thales’s cyber security consulting offerings.
‘Security by Design for IOT and Automotive using THREATGET’ with Dr. Willibald Krenn, Austrian Institute of Technology (AIT).
AIT is developing a security-by-design tool called THREATGET. In this talk, Dr. Krenn presented the current state of the tool and how it can be applied in the IoT and automotive domains using some examples. After discussing the basics of the tool and the threat modelling approach, Dr. Krenn discussed current challenges and research, like automated security attribute selection.
Dr. Krenn is the Thematic Coordinator of AIT’s Dependable Systems Engineering group and holds a PhD in computer science from Graz University of Technology.
“Towards Safe, Trustworthy and Efficient Autonomous Vehicles” (19th August 2021)
Dr. Dezong Zhao EPSRC Innovation Fellow, University of Glasgow.
The main challenge in autonomous driving is to handle uncertainties. This imposes rigorous requirements that autonomous vehicles need to meet to guarantee safe and trustworthy decision making. To make this realistic, autonomous vehicles have to be interpretable, adaptable, verifiable and robust. These goals would be achieved by developing transparent and reliable tools in perception, planning, modelling and control. Moreover, current autonomous vehicles are power-hungry, so green driving solutions are expected. To this end, developing ecological driving strategies and event-camera-based perception falls within our research interests.