On 6th May, Lancaster-based colleagues from TAS-S Research Strand 3 (RS3) and National Highways colleagues returned to cyberspace to meet for the second workshop in our collaboration. After opening many exciting avenues of discussion in the first workshop back in January, this second meeting gave us a chance to dive deeper into some key themes and areas of work which National Highways are currently managing. While this work raises many issues of interest to us as researchers, enough to fuel endless discussions, we organised this workshop around two core themes: Organisational Adaptation and Public Engagement.
In the workshop we also proposed our plan for further collaborative research on these themes, which would focus on three case studies. Each is an area where National Highways is engaged in exploring the potentials and challenges of autonomous systems. We are exploring each of these case studies in depth with National Highways, with a particular focus on issues of security and ethics.
Case study 1: Connected and Autonomous Vehicles
Our first area of work will focus on a key opportunity for National Highways, in its role as a UK leader in road management and development: the proliferation of connected and autonomous vehicles (CAVs). National Highways' priorities of customer protection, road safety, and streamlining road efficiency connect with RS3's interests in data ethics, networked communications, and more-than-human agency. An interesting dynamic in the shift to more autonomy in transport is the development of new cultural norms and expectations. As drivers negotiate increasingly "hands-off" driving behaviours, what new rules of the road will emerge, and how will this feed into keeping publics informed? And how can we think about security in relation to CAVs in a way that includes both technical security and the security of the diverse publics we would expect to form around a world with greater CAV use?
Case study 2: Autonomous Plant and Construction
In our second case study, we will focus on how new technologies for road maintenance, manufacturing, and transporting goods could or should integrate into existing UK infrastructures. An opportunity for National Highways, and an area of interest for RS3, is how such technologies might interact with the social contexts into which they are deployed, as well as the challenges of adapting to unexpected events and security concerns. In the workshop, we began exploring some wider ethical issues connected to the possibility of increasingly automated road maintenance infrastructures, such as the potential consequences of automation for job markets, and what bearing this might have on workers' rights and responsibilities. From our perspective, we see opportunities to feed in insights from sociological debates around cultures of work, as well as futures with more distributed security. With respect to the latter, as more work is carried out via networked communications, what vulnerabilities may emerge that need to be accounted for? How might the security of road construction technologies and practices be achieved and imagined in an increasingly networked, automated age?
Case study 3: Supply Chain Challenges
Our final case study explores an area that has bearing on the design and deployment of both CAVs and Autonomous Plant, but also extends beyond each. Supply chains are increasingly vast, knitting together the local and the global in increasingly complex configurations. Autonomous systems already play a role in such processes, something that will likely intensify in the coming years. What role, for National Highways, might autonomous systems play in the supply chains they are involved with? What are the social, organisational, and material consequences of increasingly autonomous, distributed supply chains? How might issues of sustainability and climate change feed into an ethical assessment of particular supply chains? And crucially, what security questions arise – whether concerning materials or the data flows of both users and National Highways itself – in an increasingly networked context?
isITethical? Card Game
In the second half of the workshop, we transported discussions around our case studies into a game of cards (see the results here). This wasn’t your everyday Texas hold‘em though. This was the latest iteration of the isITethical? card game, where players are presented with a series of value cards, each describing ethical, legal and social (ELSI) values in the context of autonomous systems.
The aim of the game is for participants to reach consensus on a set of three value cards that form the foundation for an "ELSI vision": an iterative, contextual framework of values that informs and negotiates with future practices. Not all values on the cards necessarily align – some even conflict. Each player receives a value card visible only to them, and must make the case either to replace one of the three communal value cards with their own, to discard their card, or to argue for adding another space.
The discussion through each round was both fascinating and challenging, as different perspectives converged and diverged. In the end, we agreed upon three values, with a fourth rallying for its own position. These values were:
Security
Security can mean many different things, depending on whom you ask. While within computer science security tends to refer to a system's vulnerability to attack, a social scientific perspective on security opens up to include a wide set of issues, including the security of researchers, users, and data subjects, as well as the balance between security and civil liberties.
Two-Spirit
Inspired by indigenous protocols for artificial intelligence, Two-Spirit is a value that looks beyond the binary differentiation inherent in many autonomous and AI systems. It recognises the deep co-dependences and entanglements between humans, things, and worlds. In our discussion, there was much debate about whether to keep this card or replace it with Accessibility. While both seemed important, it was the embracing of chaotic, uncertain, and contradictory dimensions of autonomous systems – including their potential to generate challenges, controversies, and security concerns – that ultimately kept Two-Spirit in place.
Accountability
The easiest pick for our round was accountability. There was strong agreement that identifying who is accountable, and when, is essential for our futures with autonomous systems. One area of uncertainty is what happens to accountability as more automation enters the scene. For example, can a CAV be accountable, and if so, when?
Trust
As time was running out, a last-minute addition added another layer of nuance. A range of questions around trust emerged, including: does accountability scaffold trust, or does trust scaffold accountability?
By the end of the workshop, despite having spent three hours together, it felt like we were barely getting started. There’s lots of work still to be done, and we look forward to sharing more of it soon.