Research Strand 3:
Securing the AS “User” Environment
The activities of RS3 are conducted across three themes. These are detailed below, along with an overview of our methods for engagement and collaboration.
As Autonomous System design and application progress, how will people adapt their behaviour in relation to them? And how might behavioural adaptations weaken AS security? Little is known about how critical aspects of a security breach may go unnoticed when operators are out of the loop. These are some of the questions we are addressing in the TAS-S behavioural adaptation research led by Lisa Dorn at Cranfield.
To start with, we are focusing on autonomous vehicles and identifying security issues that may apply to other AS. Previous studies evaluating behavioural adaptation in response to assisted and automated vehicles have shown how unintended consequences can mean that safety benefits are not realised and may even be put at risk. These studies have been short in duration and lab-based, with very few conducted in real-world fully autonomous vehicles. Longitudinal field-based studies across a range of platforms, with younger and older people, will investigate behavioural adaptation to inform interface design that captures the operator's attention when the security of the system is compromised.
As public-private collaborations become more prevalent, there is a need to clarify the liabilities and duties of private companies working in a public capacity, because private and public organisations operate under different legal and incentive frameworks (e.g. is the contractor company legally bound to serve the public? Are decisions based on public care or private profit?). While these collaborations open new possibilities for inclusion, networking, information exchange, knowledge transfer and resource mobilisation, they also bring forth a range of ethical, legal and social issues which warrant careful consideration. In a collaborative information management setting, it is important to support and encourage reflection on such issues by making more visible the ethical and legal implications of outsourcing, subcontracting, and privatisation in general.
As the pace of technological innovation accelerates, the ways that organisations manage their data, their business, and their ethics must adapt. This adaptation is not simply a case of ‘keeping up’ with the technology, but of creating synergies, affordances, and spaces for response. Given the extent and diversity of contexts in which A/S do and will operate, organisational adaptation needs to happen ‘all the way through’, from policy and protocol to everyday practice.
Ethical, legal and social issues (ELSI) form a framework for understanding and shaping technological innovation. It has been used as a central methodological framework by isITethical? Exchange to examine rapidly evolving innovations in technologies for Disaster and Risk Management (DRM). ELSI are contextual: they reside neither ‘in’ the technology nor in its use, but are distributed across an assemblage that includes networked devices, human and more-than-human actors, social norms, regulations, and much more besides. The ‘solutions’ to ELSI are socio-technical, which means they require reflexivity, adaptability and collaboration.
The isITethical? Exchange starts from the premise that attention to ethical, legal and social issues is not a constraint on technological innovation, but, on the contrary, the key to creating technology that supports better futures for all. It works to test and challenge responses to the frameworks within which technology is developed, to determine the interplay of legal, ethical and regulatory constraints.
Autonomous and/or intelligent systems
- How do ethical, legal and social issues (ELSI) impact on the security of Autonomous and/or Intelligent Systems (A/IS)?
- What are the ethical, legal and social implications for A/IS internal security and the security of individuals and societies?
- How can the regulatory environment support the embedding of ethical approaches in order to develop security that operates for the good of society?
A/IS yield new capabilities: the capacity to operate in dangerous terrain with less risk to human life, delivery drones that could cut emissions by reducing the need for carbon-heavy vehicles, and safer transport on the roads, for example. But unintended consequences and unforeseen threats are also emerging. In the context of such tensions, simple ideas of trade-off are coming under challenge.
Alternatives, such as ‘positive sum’ approaches and accountable computing are more alert to the socio-technical, transformative nature of A/IS. Our ELSI research begins from the premise that gains must be balanced transparently against losses and risks, based on a more broad-based understanding of what is at stake.
A key part of the TAS-S Node is the formation of a multi-disciplinary team (details below) of academics at Lancaster University whose role is to collaborate with stakeholders to explore the Ethical, Legal and Social Issues (ELSI) of Autonomous Systems (AS) Security, both in organisational and public contexts.
A key focus of our research is to engage and collaborate with diverse stakeholders across industry, policy, and public sectors, with the aim of enabling these groups to reflect on the issues of ethics and security that relate to their specific areas of activity. The ultimate aim of our work is to develop a series of broadly applicable resources and toolkits for stakeholders who are designing and deploying Autonomous Systems within the UK.
The ELSI framework we are drawing on is one of many cross-disciplinary approaches to technological innovation that seek to examine, address, and shed new light on the wider implications of new technologies implemented in different social and organisational contexts. In RS3, we draw on this framework to inform our research, our engagement with others within the TAS-S project, and our collaboration with external partners.
How we work with stakeholders
One of our key principles is to foster genuine forms of collaboration with our stakeholders. As academics, we are interested in understanding how diverse stakeholders, and organisations in particular, are navigating the challenges of designing and deploying AS. In practice this means that:
- Collaborations should result in actionable insights that stakeholder partners can use in their strategies and/or their direct work with AS.
- Collaborations should involve co-produced outputs. Our role is not to act as ethical or security auditors, but to foster different forms of critical reflection, emerging out of dialogue between practitioners and academics.
- Before collaborations commence, all parties should be clear about the aims of the collaboration, its scope, and its boundaries.
The precise nature of the collaboration between TAS-S RS3 and stakeholders can, therefore, vary, but often includes some or all of the following elements:
- A series of online workshops with different groups of stakeholders, both within and external to a particular organisation, using creative methods to map the issues of ethics and security that are relevant to different interest groups.
- One-to-one interviews with key members of the partner organisation, to enable the team to understand in more detail the particular challenges they are working with.
- A transitions report, delivered to the partner organisation, providing actionable insights for them to use going forward, as well as strategies for continuing to reflect in different ways on relevant issues of ethics and security.
As mentioned above, the ELSI framework is a key reference point for our work. This involves us working with stakeholders to reflect on the following issues:
Ethical: While traditional ethical theories tend to focus on individual conduct and individual technological devices, the ELSI framework approaches ethics as an interconnected, complex process of negotiation, appraisal, and reflection. Who benefits from technologies being designed and used, and who is harmed as a result? Often an ELSI approach will use tools such as co-produced Ethical Impact Assessments to help developers in industry audit their new technologies according to these values of benefit and harm.
Legal: Standardisation and best practice go hand in hand with a robust understanding of the legal landscape, as well as the capacity to change it. This requires opening channels between tech developers, operators, and policy advocates, so that legal practices can help forecast better AS futures, as well as respond to existing challenges.
Social: AS do not exist in a vacuum. They operate in, engage with, and respond to, pre-existing social structures and protocols. A key component of ELSI involves making space for communities to voice their thoughts, apprehensions, and desires for how AS work with and for them.
In addition to this framework, RS3 also integrates insights from additional methods into its work. These include:
Controversy analysis: This is an approach with a rich history in social science, as a method for understanding and articulating complex arrangements of beliefs and arguments around a given topic. This method is particularly useful in the AS domain, firstly because AS spark controversies in so many ways, and secondly because it allows us to track and discuss these controversies as they emerge and evolve.
Backcasting: This is a speculative method that heavily informs our workshop design, as it creates spaces for imagining not just likely or predictable scenarios, but desired ones. Backcasting asks collaborators questions such as: what outcomes do you wish to see realised at a given point in the future? What forms of action need to be put in place to realise these outcomes?
Design Justice: RS3 uses Design Justice principles, in combination with ELSI, to look at ways of expanding the positives of AS according to community assets, and limiting the harms imposed by techno-solutionism. Please follow the link to find out more about Design Justice.
The RS3 team
Prof. Corinne May-Chahal (Department of Sociology): TAS-S project Co-Investigator. Co-Director of Security Lancaster, with a longstanding focus on the socio-technical aspects of human security through developing and applying new technologies.
Dr. Joe Deville (Department of Organisation, Work & Technology / Department of Sociology): Leading the collaborations with organisations within RS3, with a research interest in the use of big data and machine learning within organisations.
Dr. Catherine Easton (Law School): Head of Department at the Law School, leading RS3’s exploration of legal issues relevant to stakeholders collaborating with RS3. Has a track record of using the ELSI framework to examine the legal and especially liability issues involved in organisations’ use of new digital technologies.
Dr. Lisa Dorn (Cranfield University): Associate Professor of Driver Behaviour and Director of the Driving Research Group. Her main interests are the design and evaluation of interventions to improve driver safety.
Dr. Luke Moffat (Department of Sociology): Conducting original research and designing participatory engagements for RS3; has a background in philosophy and a strong interest in the ethics of technology and innovation for social justice.