International Center for Ethics in the Sciences and Humanities (IZEW)

The Society That We Want To Be: Four Conversations and a Disclaimer

by Roger Brownsword

13.01.2024 · I am sure that everyone associated with the IZEW will have paused when they heard that Regina Ammicht Quinn would be stepping down from her present responsibilities. Certainly, as a member of the IZEW’s International Advisory Board (IAB), I can say that it has been an unqualified pleasure to work with Regina. Even though we come from very different disciplinary backgrounds, I always felt that we belonged to the same community of interest. Moreover, while Regina might no longer be a Speaker for the IZEW, I am confident that her voice in today’s ethical debates will continue to be both influential and inspirational.

In this spirit, I want to offer a few reflections on what I take to be the central questions posed by Regina’s recent article on AI and ethics (Quinn, 2021). These questions are fundamental. When we talk about or debate ‘ethics’, what exactly is it that we are discussing and debating? What is the function of ethical interventions in relation to emerging science and technology? How should we understand the role of ethics in the landscape of the governance of AI?

Responding to these questions, Regina is clear that we should not view ethics simply as an attempted brake on science and technology. Given the momentum behind some technologies, this would probably be futile but, even if ethics has difficulty in matching the pace of technological development, simply slowing things down is not the point. Rather, the role of ethics is to maintain a critical discourse about whether the options brought forward by science and technology really are ‘progress’ and it should ask ‘how principles and values that are important for a democratic society can be translated into a digital democratic society’ (at 75). Summing this up, Regina underlines what this entails in the following evocative way: ‘We have to develop a new culture of thinking about ethics—not as a bicycle brake, but as an intelligent way of thinking about the society we want to live in. Ethicists are not only worrying about but reflecting on and caring for the whys and wherefores of extensive technological change. AI researchers should, too’ (at 77).

So, while AI researchers, like all researchers in science and technology, will be focusing on what can and cannot be done, and while ethicists will be focusing on what ought and ought not to be done, Regina’s point is that we all share an interest in engaging in intelligent thinking about the kind of society that we aspire to be. In the spirit of this common enterprise, let me sketch four conversations that we should have.

The first conversation is one that we might think is not for ethics. This is not about brakes for the sake of it; rather, it is about the speed and pace of change. Most of our law and ethics was crafted in a pre-digital time and I am pretty sure that we will soon be saying that even our digital law and ethics was articulated in a pre-AI time. The context for our governance today is one of transition—transition that is driven by emerging technologies that have taken us rapidly from a pre-digital world to a digital world and now towards a digital AI world. Before long, we will have moved on from the information society to the AI society.

These transformative technologies put a huge stress on law’s governance, on its ability to maintain order and to manage a smooth transition from an old order to a new order. On the one hand, those who have vested interests in the old order will want to stand on their established rights; on the other, those who value the benefits of a new order will press for change. But ethics, too, is challenged by technologies that throw up new questions faster than we can respond to the existing questions.

For both law and ethics, the pace of change is such that governance becomes reactive, responding to rather than controlling these technologies; it is not the case that we have our governance framework and principles in place before the transition begins; and it is not the case that we can expect the process of technological transformation to stop any time soon (compare Rosa, 2015).

None of this is to suggest that we should simply apply the regulatory or ethical brakes (even if we could), because there are occasions when we want the science to go as fast as possible (e.g., in order to develop vaccines for a virus that is sweeping the world); but we should not underestimate the significance of the speed at which we transition from one kind of technological society to another. As a community, we certainly want to have a conversation not only about our direction of travel but also about how quickly we want to go.

Secondly, societies will want to have a conversation about the balance of benefits and harms promised by a new technology as well as about the likely distribution of benefits and harms. In general, the regulatory packages that are agreed for new technologies are presented as striking the optimal balance between supporting beneficial innovation and minimising the risks (to citizens, consumers, data subjects, and so on). In principle, a regulatory regime might be all-out proactive in its approach, being entirely focused on benefit and not at all concerned about risk reduction; conversely, its approach might be all-out precautionary, entirely focused on minimising risks, and not at all concerned about sacrificing potential benefits. However, in practice, regulatory regimes will situate themselves somewhere between these two extremes. Such, it seems to me, is the case with both the EU’s draft AI Regulation (European Commission, 2021) and the UK’s pro-innovation approach (Department for Science, Innovation and Technology, 2023)—that is to say, they are both taking a position in the middle ground. That said, the positions taken are not identical: while the tilt of the EU’s proposed regime is towards precaution, in the UK the tilt is more towards proaction.

So much for the governance positions being negotiated in Brussels and proposed in London. But what about the conversations that have preceded the taking of these respective positions? Can we say that, in both the EU and the UK, the positions taken have resulted from an inclusive community-wide conversation about an acceptable balance of benefit and risk? Can we say that the positions taken are compatible with broader conversations about the kind of community that the people in the EU and, similarly, in the UK want to be? I cannot adequately answer these questions. However, my impression is that the regulatory process in the EU has more of these conversational features than we find in the UK. In particular, in the EU, the process started with the guidance from the High-Level Expert Group on AI and the spirit of that approach has been sustained by the Parliament; by contrast, in the UK, the process started with a traditional top-down consultation followed by a White Paper (and it is not yet clear whether the guiding principles articulated in the White Paper will be put on a statutory footing).

The third conversation is the one that I think many would treat as distinctively ethical. It is a conversation that starts with something that can be done—such as using AI to profile persons in the criminal justice system—and that seems, prima facie, to be beneficial. However, it does not stop there. Rather, this is a conversation that proceeds to ask whether a particular use or application of technology would be compatible with the fundamental values that are constitutive of the community (that is, with the values that reflect the kind of society that the people want to be). In many cases, the society’s aspiration will be to apply technologies only in ways that are compatible with human rights and respect for human dignity. However, it is striking that, in the EU’s deliberations about AI, while there have been frequent assurances that human rights will be respected, there has also been a heavy emphasis on the importance of AI being applied only in ways that are ‘human-centric’.

I say that this is striking for three reasons. The first reason is that this is not something that seems to feature in the UK’s approach. Although the White Paper echoes the EU’s ambition to put in place an ecosystem of trust for the use of AI, it makes no explicit mention of human-centricity. The second reason is that I do not recall hearing human-centricity being invoked with the same urgency in relation to other emerging technologies. Certainly, modern biotechnologies have re-awakened concerns about human dignity, but the appeal has been to human dignity, not to human-centricity as such. The third reason, and perhaps the most interesting, is that the whole thrust of AI is to take on functions that it can perform more efficiently and, often, more safely than humans. Once we are satisfied that AI, in its own way, ‘outperforms’ humans, humans are no longer central to the actual performance. In this sense, humans are de-centred; and yet we insist on human-centricity.

Picking up this last point, if human-centricity means more than respecting human rights and human dignity, we urgently need to have a conversation about what it does mean and why it would be fundamentally wrong to rely on AI in a way that is not human-centric. For example, if AI tools are much more efficient and accurate than humans in diagnosing certain medical conditions, or if disputes can be more efficiently resolved by AI than by humans, why is it fundamentally wrong to apply AI in this way? Is our society one that places special value on human diagnosis in health care and human decision-makers (and human justice) in law? If we are not sure about this, then we need to have a conversation about how we understand human-centricity and its value for our community.

This takes me to the fourth conversation. Although I regard this as the most important of the four conversations, I can be brief because I have only recently outlined the nature of this conversation in my contribution to the Festschrift for Thomas Potthast (Brownsword, 2023). What I have in mind is a conversation about respect for the conditions that make it possible for humans to exist and co-exist on planet Earth, that enable communities of humans to form and articulate their own particular vision of the society that they want to be, and that allow individuals to develop their capacity for self-directing agency and autonomy. When AI experts caution that large language models could lead to the extinction of humanity, it is this fourth conversation that we should be having; and when we fear that AI-enabled super-surveillance might compromise our agency, this is also the conversation that we should be having.

Finally, let me conclude with a concession and a short note about authorial declarations and disclaimers.

The concession is that it might not always be possible to maintain clear lines between the conversations that I have outlined. In particular, when ethics turns to process and public engagement, it might be difficult to separate out responses that are based on self-serving prudential reason from those that are based on moral reason—for example, this might be a problem where responses speak to what respondents see as being in ‘the public interest’. However, to the extent that we have to live with this, so be it, because the alternative is much more problematic. That alternative is that ethicists, politicians, lawyers, theologians, scientists and technologists have their own conversations in their own silos without working together, in just the intelligent way that Regina advocates, to debate good governance in their communities and to join in conversations about the kind of society that they want to live in.

Turning to authorial declarations and disclaimers, in the case of this piece, I can simply declare that I have made no use of AI tools to generate any of the text. However, my laptop is already nudging me towards predictive text and search engines that are enabled with generative AI. Before too long, I imagine that some use of text-generating AI tools will be commonplace and that authors will employ a form of words declaring that these tools have been used, that their use conforms to both legal requirements and accepted practice, and that the standard disclaimers (about authorial responsibility for the work) apply. What precisely this will signify, what the legal requirements will specify, and what will count as acceptable practice all remain to be seen; and they remain to be discussed and settled through our governance conversations.

-------------------------------------------------------

REFERENCES

Brownsword, Roger (2023): ‘Good Governance, Ethics and Science’ in Cordula Brand, Simon Meisch, Daniel Franks, and Regina Ammicht Quinn (eds), Ich Lehne Mich Jetzt Mal Ganz Konkret Aus Dem Fenster… (Festschrift for Thomas Potthast) (Tübingen: Tübingen Library Publishing) 77-86

Department for Science, Innovation and Technology (2023): A pro-innovation approach to AI regulation (CP 815)

European Commission (2021): Explanatory Memorandum to the proposed Regulation on AI, Brussels, COM(2021) 206 final

Quinn, Regina Ammicht (2021): ‘Artificial intelligence and the role of ethics’ 37 Statistical Journal of the IAOS 75-77

Rosa, Hartmut (2015): Social Acceleration (New York: Columbia University Press)

--------------------------------------------------------

About the author:

Roger Brownsword is a member of the International Advisory Board at the IZEW. He has professorial positions in Law at King’s College London (where he was the founding Director of TELOS) and at Bournemouth University. His many books and articles are known throughout the English-speaking world; and he also has publications in Chinese, French, German, Italian, and Portuguese. His most recent books are: Law, Technology and Society: Reimagining the Regulatory Environment (2019), Law 3.0: Rules, Regulation and Technology (2020), Rethinking Law, Regulation and Technology (2022), Technology, Governance and Respect for Law: Pictures at an Exhibition (2022), Law, Regulation and Governance in the Information Society (co-edited with Maurizio Borghi) (2023), and Technology, Humans, and Discontent with the Law: The Quest for Better Governance (2024).