Link to the original article: https://www.man.com/maninstitute/ri-podcast-andrew-strait
How is AI being regulated? Listen to Jason Mitchell discuss with Andrew Strait why the development of strong AI governance systems is in everyone's interest.
NOVEMBER 2023
Listen to Jason Mitchell discuss with Andrew Strait, Ada Lovelace Institute, about how to think through the typology of AI harms, what to make of the different national and supranational efforts to regulate AI, and why the development of strong AI governance systems is in everyone’s interest.
Recording date: 18 September 2023
Andrew Strait
Andrew Strait is an Associate Director at the Ada Lovelace Institute where he is responsible for Ada’s work addressing emerging technology and industry practice. He’s spent the last decade working at the intersection of technology, law and society. Prior to joining Ada, he was an Ethics & Policy Researcher at DeepMind, where he managed internal AI ethics initiatives and oversaw the company’s network of external partnerships. Previously, Andrew worked as a Legal Operations Specialist at Google where he developed and implemented platform moderation policies on areas such as data protection, hate speech, terrorist content and child safety.
Episode Transcript
Note: This transcription was generated using a combination of speech recognition software and human transcribers and may contain errors. As a part of this process, this transcript has also been edited for clarity.
Jason Mitchell
Welcome to the podcast, Andrew Strait. It's great to have you here, and thank you for taking the time today.
Andrew Strait
Thanks so much for having me, Jason. It's great to be here.
Jason Mitchell
Absolutely. So let's start with some scene setting. I've come to appreciate that semantics seem particularly important in AI. And I guess what I mean is that the notion of externalities seems to abstract away what we really mean: the underlying harms. So can you start off by talking about how we organise these AI harms? And to that point, what does a typology of AI harms really look like?
Andrew Strait
It's a great question. I think it's helpful to start off by saying that AI presents numerous opportunities for businesses and for public service practitioners, but it also raises a range of risks of potential harms arising. And these are not hypothetical harms; these are not potential harms that we have no evidence of. Many of the harms that we see from AI are well documented.
There's a whole body of evidence around these. Broadly, we think about four different types of harms when we talk about AI. One is accidental harms. These are from systems that fail or systems that act in unanticipated ways. It can include things like self-driving car crashes or some forms of discrimination from CV-sifting systems that are going through job applications. A second type of harm is misuse harms. These are harms that involve AI systems being used by bad actors, generating misinformation, or bad actors who are trying in some way to cause harm to other people. So it could include, for example, the use of AI systems like ChatGPT or Midjourney to generate fraudulent or incorrect medical information. A third type of harm is what we call supply chain harms.
And those are from the processes and inputs used to develop AI. These you can think of as a broad range of harms that apply all the way across the supply chain, from the way in which AI systems are trained to what kinds of data go into them. In some cases, that can include really alarming labour practices around the individuals, the humans, who are responsible for annotating and cleaning data.
There's been some research about how, in the case of ChatGPT, a lot of that labour was outsourced to workers in Africa who were paid $2 an hour to undergo extremely traumatising evaluations of toxic content in these datasets. And on the other hand, it can also include the environmental impacts of developing, training and deploying an AI system.
The kinds of environmental harms that arise range from the compute and the amount of energy used in training a system to the environmental impacts that arise from how that system is deployed. And a fourth and final kind of harm is what we call structural harms from AI systems. These involve harms that alter political, social and economic systems, including the creation of unequal power dynamics or the aggregate effects of misinformation on democratic institutions. It can include the ways in which AI can automate certain kinds of roles or devalue certain kinds of labour in a market ecosystem.
Jason Mitchell
How would you grade our response to these harms? Are all harms created equal, or how are we responding? And by 'we' you can dissect that, or divide it into public and/or private sector responses, but how are they addressing these harms and prioritising them?
Andrew Strait
I think this is the problem: there's no consistent, clear way of addressing or mitigating these types of harms in the lifecycle of an AI system. Each of these types of harms can originate and proliferate at different stages of an AI lifecycle. And the challenge of AI is that many of these applications, many of these AI-driven services, are dynamic. What's safe in lab settings may be incredibly dangerous when deployed in a real-life environment like a hospital or a classroom. And so it creates a challenge: how do you create a regulatory ecosystem and norms of practice, within the vast array of industries and service sectors where AI is prevalent, that evaluates, assesses and mitigates those harms and monitors those impacts over time? We don't have any kind of holistic system like that anywhere in the world. It's the grand challenge that many of the current regulatory proposals in Europe, the US and the UK are seeking to address.
Jason Mitchell
I'm looking forward to talking a lot about that, but that was a great backdrop, and let's move to what's about to happen over the next couple of months. The UK is set to host the AI summit in London next month, November, with Rishi Sunak focused on positioning the UK as a global AI leader. And I feel like it's kind of a lightning rod in some respects for what we're going to talk about today. So first, what do you expect to come out of this summit in terms of topics and maybe agenda?
Andrew Strait
I think what's really clear from the messaging around the summit is that it's going to be very narrowly focused on two things. One, it's going to be focused on frontier AI systems, which is a very poorly defined concept and term that really stems from a few small but significant players in the ecosystem. These are your Google DeepMinds or OpenAIs or Microsofts, who've really pushed that term into the regulatory consciousness. It's a term that refers to cutting-edge AI systems that do not yet exist. These are systems that create new modalities, like potentially some form of self-replication feature.
But again, I really want to stress these are not systems that are here today. The second way in which the summit is going to be narrowly focused is that it's very much going to focus on AI safety, a very contested term within regulatory circles. Some organisations, including ours, take a much broader conception of AI safety: we consider all the different kinds of harms AI can raise. But increasingly, from the messages that we're seeing from the summit organisers, it looks like the summit is going to primarily focus on extreme risks or catastrophic risk issues, really focused more on that national security concept. So those would be concerns around misinformation.
The prevalence or capability of systems to generate information hazards, which is a term referring to the ability to create, for example, instructions on how to carjack a car or how to make a Molotov cocktail. So what's unfortunate, in our view, is that the summit is taking this very narrow approach. It's missing a whole suite of AI systems that are already in use, that have been used for a very long time and that are the most high impact on our day-to-day lives. These are the kinds of systems that are determining what job benefits you will be eligible for, what kinds of jobs you will be able to apply for, or whether you will be able to get a job interview.
It's the kind of system that determines what medical procedures you may be eligible for. It's the kind of system that will be determining what kind of grades your children will be receiving in their classrooms. Those kinds of systems are currently, as we see it, out of scope for this summit. So the hope is that the summit can at least provide some international clarity on how to govern these systems in an international context. But there's a lot more work that needs to be done in terms of addressing the vast array of AI systems that are impacting us on a day-to-day basis, and thinking about the ways in which AI systems can cause all kinds of harms, not just this very narrow subset.
Jason Mitchell
There is a lot to unpack here, and so I’m really excited. I’ve got a set of questions coming off of this, but I think if you step out of the corporate development world, what’s your read about the national interests at play? There seems to be clear friction between the U.S., the EU and Japan who are pushing back, it seems to me, on an invitation from the UK for China to join. Can anything really be negotiated on a good faith, essentially non-binding basis at a summit like this? Or does it need to be more formally negotiated through something like the G7 or G20?
Andrew Strait
Well, this is the challenge of any kind of international agreement. It's not a one-and-done job. It requires an ongoing relational dynamic between different international players. And at the moment, we're in a geopolitical environment in which we're seeing a great untangling, a great decoupling, between the US and China, and increased global tension between Russia, Europe and the US. That is creating a challenge for these kinds of international governance conversations, in which you're trying to create agreements that are non-binding but that are actionable, reasonable and seen as viable by all parties.
This is probably harder now than it was four or five years ago. That said, the decision of whether or not to include China in this kind of summit is a really tricky one. You know, I would compare it in some ways to trying to have an international conversation on nuclear safety and proliferation while excluding Russia and China, two of the largest nuclear powers, from that conversation. It's the case that China is, whether we like it or not, a global power when it comes to AI. They are one of the largest providers of the natural resources required to produce chips and all the hardware behind these systems. They have three of the largest AI and tech companies in the world, servicing likely billions of users on a daily basis.
So if we want to have a conversation around the kinds of non-binding international commitments on issues like national security, misinformation and environmental risks, it seems difficult to do that meaningfully if they are not somehow, some way involved, either in this conversation or in ones continuing forward. Again, I want to stress the summit itself is just one of the first steps to try to build that consensus. It's building on previous initiatives like the Global Partnership on AI, which explicitly excluded China from its discussions. But it's an opportunity, as we see it, to try to undo some of this fragmentation of international governance approaches. Whether China is willing to be a partner in these conversations is yet to be seen. But it's a major challenge, I think, for the summit organisers to grapple with. I don't see an easy answer here.
Jason Mitchell
Yeah, super interesting. You mentioned the focus on AI safety in a previous comment. This term 'trustworthy AI' seems like the new semantic buzzword after the European Commission defined it as AI that is lawful, meaning complying with all applicable laws and regulations, that is ethical, and that is robust, in terms of a system that's resilient, secure and has some sort of preventative approach to risk. Is there an opportunity to develop this idea, this notion, into an international framework with more definition? And let me add to that, too: could a model like the one that OpenAI proposed, this global governance of superintelligence that borrows from the design of the International Atomic Energy Agency, work?
Andrew Strait
So I think it's helpful to start by looking at what the end goal of trustworthy AI is as a concept. It is to try to create systems that maximise benefits to people and society while reducing and mitigating the harms that arise. And that creates a challenge for any kind of governance framework, because at the end of the day you need to think of governance at an international, national and local level. When we talk about AI harms, those harms can arise in a variety of different ways. They can arise at different stages of a supply chain. So some of these harms may be internationally focused.
It may be that, for example, a provider of a dataset who's outside of your national jurisdiction needs to be abiding by some kind of data quality and data ethics standards to ensure your compliance with, you know, a national regime. Other kinds of risks, like the risks that arise in a specific deployment of an AI system, for example using a medical diagnostics tool in a very specific hospital that serves a particular demographic of people in a very particular region of the UK, those kinds of risks will not be evident without a localised, contextual framework for assessing and mitigating them. We need all of those. That's the challenge with it at the moment.
That's the challenge we're facing in this current regulatory framework. So models that are being proposed, like the International Atomic Energy Agency framework, and I've also seen proposals for the IPCC, the Intergovernmental Panel on Climate Change, are potentially ways to address some of these challenges at the more international level, but they're not themselves enough, right? We need those in addition to local and national frameworks.
The other thing to say is that both of those international governance models are products of their time. I think that's a fair challenge that Roberts at Oxford has posed. The IAEA was created in response to the rise of nuclear superpowers, in an era where creating legally binding global institutions was a lot easier than it is today, and it was for a set of technologies that pose catastrophic risks. The IPCC is an interesting model as well, but it was created after almost two decades of multilateral scientific assessments looking at climate change, which we just do not have for AI systems today.
A significant challenge that I'm getting at here with AI is that any kind of international governance framework is not addressing a single policy issue, like the environmental impacts of AI or the potential for a nuclear catastrophe caused by an AI system. It's looking at a whole suite of different kinds of risks: bias, misinformation, misuse harms, harms to labour. Those are all interconnected and interlinked risks. So it's a massive challenge for any one single centralised international agency to address. I think the solution here, as we see it, is probably going to be some combination of an international framework, whether that's done through multilateral agreements, but crucially, crucially, it starts with national frameworks at home. I think that's what we're seeing at the moment, with a lot of the action and activity around the US, the UK and the EU developing their own approaches to regulation.
Jason Mitchell
Yeah, I want to put a pin in that because I've got a lot of questions, but I want to come back to one reference you made around carjacking. And the fact is, some risks are inherent to the development of AI which we aren't yet aware of. And I mean, information hazards are generally defined as the harm caused by the spread of true information. Think, for instance, of, as you said, the instructions for creating a thermonuclear weapon. In fact, I remember the uneasiness decades ago around the distribution of, I don't know if you remember, The Anarchist Cookbook in paperback. So how do we build AI systems in a way that disallows this kind of information hazard? And specifically, how do we protect the public sector bodies that use these foundation models?
Andrew Strait
It's interesting to think about that question in the context of a similar technology that raised information hazards, which was the internet, and specifically search engines. Search engines were a way to search across a vast array of information on the public web to quickly pull information relevant to your query. And you could argue in many ways that creates a way in which you can very quickly find information hazards.
The Anarchist Cookbook example being a perfect one. If someone posted The Anarchist Cookbook to some random channel with, you know, one or two viewers, suddenly you had a technology that enabled that to be discovered much more quickly. The ways that we addressed that challenge in the internet governance and digital platform governance context were to look at different modes of addressing where risk originates and how it proliferates. So one model, for example, was looking at how you can create accountability requirements for entities hosting that kind of content, so they can be held accountable for any misuse of it. In this model, you can think about how we apply it to foundation models by looking at who's making these models accessible. It's a combination of platforms like OpenAI, Microsoft and Google that are enabling access to their foundation models, and then you have open source models that are made accessible through platforms like GitHub or Hugging Face.
So one question is what kinds of controls you can put on those platforms to ensure that they are assessing the kinds of applications they host that might enable that kind of misinformation or those kinds of information hazards to spread. A second is to look at the developers who are actually creating these models themselves; that might be the same group as those who are hosting them, but it might be different. What practices are they putting in place to identify if their system enables information hazards to exist? One way that you can assess for those kinds of issues is the use of standardised evaluation metrics, which provide a consistent way to test whether your model can create outputs that are potential information hazards.
You can imagine, for example, a shared database of potential risks, like instructions for how to steal a car, instructions for how to make a Molotov cocktail, instructions for how to create a particular bioweapon, and that database being shared between the different companies who are developing these systems. We have a similar precedent for this, which is the PhotoDNA database of child sexual abuse images that Microsoft and many large platforms host, which is a shared database used to identify these images when they come up on their platforms.
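Editor's note: to make the shared-database idea above concrete, here is a minimal illustrative sketch, not drawn from the conversation, of how a developer might screen generated outputs against a jointly maintained registry of known hazardous content, in the spirit of the PhotoDNA analogy. The registry contents, hash choice and function names are hypothetical; real systems such as PhotoDNA use perceptual rather than exact hashing.

```python
import hashlib

# Hypothetical shared registry of hashes of known information-hazard content,
# maintained jointly by model developers (loosely analogous to the PhotoDNA
# database of known child sexual abuse imagery).
SHARED_HAZARD_HASHES = {
    # Placeholder entry; a real registry would hold hashes of vetted hazardous texts.
    "0" * 64,
}

def normalise(text: str) -> str:
    """Crude canonicalisation so trivial whitespace/case edits don't defeat matching."""
    return " ".join(text.lower().split())

def is_known_hazard(generated_text: str) -> bool:
    """Return True if the model output exactly matches a registered hazard."""
    digest = hashlib.sha256(normalise(generated_text).encode("utf-8")).hexdigest()
    return digest in SHARED_HAZARD_HASHES

def release_output(generated_text: str) -> str:
    """Withhold an output that matches the shared registry; otherwise pass it through."""
    if is_known_hazard(generated_text):
        return "[output withheld: matches shared information-hazard registry]"
    return generated_text
```

Exact hashing is easily defeated by paraphrase, which is why the conversation also points to standardised evaluations and design-level restrictions rather than matching alone.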
Finally, though, I think it's a question of how you can address the kinds of issues that arise from information hazards through a model of liability. So can you create some kind of incentive against these companies enabling that kind of content to even be producible, or 'output-able' for lack of a better term, through their systems? This is where the conversation around liability comes in, about what kinds of models of liability you can hold developers to. If it's a strict liability model, then whether a developer has tested the system or not, whether they have evaluated it or not, if it still produces instructions for a bioweapon, they could be held liable and face severe consequences. And that, I think, touches on the final model for addressing information hazards, which is design features.
And this is a question I think is important to ask now: do you need a GPT-type product, something that can in theory enable access to and generation of content for any type of question you might ask, for all the applications, all the use cases that we intend to use these systems for? The answer is probably no. If you're thinking about, for example, chatbots in medical contexts, or customer service chatbots, you probably don't need that chatbot to be able to answer questions about how to create a thermonuclear weapon. You need it to be able to answer questions very specific to your context and to your use. So the more that we ground these systems in particular use cases, that might also be a way to think about design features as a way to mitigate the proliferation of some of these information hazard risks.
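Editor's note: a minimal sketch, not from the conversation, of the 'design features' point above: a narrowly scoped assistant that refuses queries outside its intended use case rather than answering anything a general-purpose model could. The topic list and function names are hypothetical; a real deployment would use a trained intent classifier and a domain-constrained model.

```python
ALLOWED_TOPICS = {"appointments", "billing", "opening hours", "prescriptions"}

def classify_topic(query: str) -> str:
    """Toy keyword-based topic check; stands in for a real intent classifier."""
    q = query.lower()
    for topic in ALLOWED_TOPICS:
        if topic in q:
            return topic
    return "out_of_scope"

def scoped_assistant(query: str) -> str:
    """Answer only questions within the narrow deployment context (e.g. a clinic)."""
    topic = classify_topic(query)
    if topic == "out_of_scope":
        return ("Sorry, I can only help with appointments, billing, "
                "opening hours and prescriptions.")
    # In a real system, the query would go to a model grounded in the clinic's
    # own documents (for example via retrieval), not to an open-ended general model.
    return f"Routing your {topic} question to the clinic knowledge base..."

if __name__ == "__main__":
    print(scoped_assistant("What are your opening hours on Saturday?"))
    print(scoped_assistant("How do I build a thermonuclear weapon?"))
```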
Jason Mitchell
I wanted to take a step back a little bit and ask, how do you see the formation of AI regulation broadly evolving? And I'm thinking specifically: what happens when private companies who see themselves essentially as standard setters drive ethical standards that run counter to something like the EU AI Act, which will ultimately be enforced in 2024? Where are the gaps, especially with countries who are taking a more flexible, call it pro-innovation, approach to AI? Will the US develop a national approach or default to state-level laws? I'm not trying to stack this question, but I think the layers just keep coming on. How is China balancing pro-innovation laws with personal interests?
Andrew Strait
I think it's important to start with perhaps just a bit of history of how we've gotten to this moment. So it's safe to say that 2016 was a real watershed moment in AI and in the tech industry. You had the response to the Cambridge Analytica scandal and misinformation concerns around the US election and Brexit that really projected social media issues into the mainstream political consciousness.
And it was at that time, in response to scandal, that you saw the development of these kinds of ethical principles, coming from a combination of government agencies, private sector actors like Google, DeepMind and OpenAI, and in some cases civil society organisations, all aimed at trying to create some kind of common set of values that these companies or these developers should be aiming towards. By 2019, you had, I believe, 92 different sets of ethical principles out there, principles which break down to things like transparency, accountability and privacy, but without much clarity about what specific practices underlie them.
And that's moving us into the challenge that we've been facing for the last three years, which is how do you then turn those broad conceptions of the ethical issues that AI raises into hard regulation, hard rules that establish very clear practices for addressing and meeting those values? That's where, at the moment, different countries are trying to approach the challenge from slightly different angles. In the US, we're seeing what Alex Engler, who's a wonderful researcher at Brookings, calls an approach to AI risk management that can broadly be characterised as risk-based, sectorally specific and highly distributed across federal agencies. There is, very much like in the EU, a risk-based approach here to assessing AI systems depending on what risk to society they pose. That's a fundamental component of the risk management framework, which then, depending on that level of risk, determines what kinds of practices different AI developers and deployers should be following.
I think the difference between that and the EU is that the EU is calling for a much more centralised, much more robust and specific form of risk management for AI systems. It's creating a series of national market surveillance regulators who will be in charge of implementing and enforcing these requirements. It has much more specific requirements around data quality, around discrimination, around auditing and independent evaluation of these systems. And it also creates a more centralised AI office that's meant to oversee all these regulators, whereas the US approach tends to be more aimed at empowering an existing regulator like the FTC. To add a third one into the mix, you have something like the UK approach, which is very distinct in that, according to its AI regulation white paper, it's not creating any new statutory powers, not necessarily creating a new regulator, but is instead aimed at sector-based regulation, trying to empower sectoral regulators to address the kinds of issues AI might raise in their own specific domains.
It uses a principles-based approach, so it's aimed at creating some set of principles, from that ethical-principles era, that can then guide regulators in how to think about the issues that arise in their domains. But it doesn't come with any kind of statutory requirements or any new statutory binding rules; it would leave that up to individual regulators to apply.
Jason Mitchell
I mean, given all this, do you think the establishment of an international AI regulatory agency is realistic, given the national interests at stake? Given the EU spent more than three years developing the Act, what lessons, I'm thinking templates, does it provide for other national regulatory efforts? And do you think the AI Act carries enough weight on the global stage to act as a catalyst for similar regulations across the globe?
Andrew Strait
I think what we're seeing are some consistent practices that are coming out of these regulatory efforts. So things, for example, like pre-market auditing and evaluation of an AI system, which draws on existing regulatory frameworks we have in other sectors; you know, in the way that, in car safety, we test our cars for safety before they are allowed to be sold on the market. Something similar could be done for AI.
And that's essentially what the European AI Act is proposing: some pre-market safety evaluation that includes independent testing before systems are sold. Another aspect is, I think, a similar notion of risk-based assessments, or risk-based criteria for different kinds of systems depending on how risky that system may be. This is the notion that, rather than a one-size-fits-all approach for all AI systems, which could be overly burdensome, you should have some way of distinguishing which systems pose a particular risk to people and society, and then make the stringency or the extent of those requirements dependent on that risk level. The EU certainly led with this, and we've seen that reflected in the risk management framework in the US.
I think you also see that reflected in some of the state proposals in the US, which are at the moment supplanting federal legislation. Then third, the other thing that we're seeing emerge across these national regulatory approaches is this notion that you need to have evaluations and testing processes throughout the lifecycle of an AI system, including after it's deployed.
The EU approach really centres on this notion of different kinds of evaluations and assessments throughout the course of a system's lifecycle. It's also something that we see in some of the US proposals, and in the UK with the CDEI assurance framework, which is a framework that the Centre for Data Ethics and Innovation has proposed for different kinds of evaluations and assessments throughout the lifecycle of an AI system.
So, I suppose what I'm saying is there are certainly international similarities coming forward, commonalities between these different proposals, which is a promising sign. That said, that doesn't in itself create an internationally agreed, shared approach or some kind of international regulatory institution. That again begs the question of what that institution would be for, what kinds of international risks or issues it might be well placed to address, and whether that agency would have the right level of authority, capacity and power to actually address and mitigate those risks. At the moment, a lot of the proposals that we're seeing don't seem to have a clear idea of what they would be for. They seem to be building on some existing models we have around climate change or atomic energy, but without a very clear idea of precisely what that body would be well placed to address.
Jason Mitchell
Interesting. You mentioned the UK, and I saw that the UK House of Commons Science, Innovation and Technology Committee recently released The Governance of Artificial Intelligence: Interim Report, which describes basically 12 challenges of AI governance. What's your read on the urgency of regulation relative to the technological development of AI? I found this report interesting in that it calls out two, in my mind, very different examples. The successful example is human fertilisation and embryology, where regulation closely followed the Warnock report in 1984; the unsuccessful model is social media regulation, where Parliament is only now debating the UK Online Safety Bill after 20 years of inaction.
Andrew Strait
I think the committee is completely correct in this, that the issue is one of urgency. I would argue it's not just about the technical capacity of AI systems; it's about the scope and breadth of ways in which AI is already impacting everyday people across society in very high-impact ways. And that's been the case for the last decade, if not longer. AI systems are being deployed in almost every facet of your day-to-day life.
Some people are aware of some of these uses. But as our public attitudes research shows, many people are unaware of the most high-impact applications of AI that determine things around their medical healthcare, their education, their employment, and what kinds of benefits or types of loans from a bank they may be eligible to receive. So it's very clear from this report, and we completely agree with the perspective, that the time for regulation is now.
There is an urgent need to address these kinds of issues, because it's not just a case of the technical capacity getting more significant or more powerful. It's the reach, breadth and scope of the uses of AI becoming more entrenched in our day-to-day lives without the right safety oversight and regulation in place. I think in the case of the two examples that Parliament is referring to here, the Warnock report is a fantastic example of the use of a committee's findings to quickly turn around some kind of practices around life sciences
and, I believe, genomic editing. That is the kind of urgency and pace that we need to see from government now. And oftentimes, when we think about the actions that the government needs to take, the biggest one is to get a bill before Parliament to begin to discuss what the appropriate approach will be. At the moment, that is looking like a much longer timeline in the UK. I mean, the white paper consultation responses have not even been fully analysed yet. I believe we're looking at a situation in which a bill wouldn't be before Parliament until well past the next election if action isn't taken in the next few months.
Jason Mitchell
Hmm. Interesting. I guess by extension, and you've mentioned the precedent of medical devices just now, what regulatory models or precedents exist in other fields, where there are higher than normal probabilities of harm, that could be applied to AI? For instance, does the way that the US Food and Drug Administration oversees medical technologies and therapies represent a realistic way for regulators to regulate AI? After all, the controls for Class III medical devices are much more rigorous than what exists for generative AI models, even despite AI founders, as we know, highlighting the much bigger societal risks.
Andrew Strait
I think one of the challenges in AI development has been this notion that AI is a unique, distinct thing that is unlike any previous technology, and that we need to come up with some bespoke, completely first-principles approach to regulating it. I would argue that that's an incorrect assumption, or an incorrect way of thinking about it.
If we look at how other technologies that pose challenges or risks to people and society are regulated, it offers a very good blueprint for how we might think about regulating some kinds of AI systems. It's interesting that you mentioned the Food and Drug Administration model. We are currently writing a paper on that exact topic, applying that regulatory approach to foundation models, and one of the interesting things is how the Food and Drug Administration in the US regulates software as a medical device. This is any kind of technology that is providing medical diagnostics, for example, any kind of medical diagnosis through AI or through software, and the FDA regulates it in a very particular way.
It looks first at a risk classification, from Class I to Class III. Class III, the most risky technologies, tend to be ones that are complex, that are novel, that have a high potential to impact a wide group of people in society, and that are often technologies where identifying and mitigating risk is quite difficult. This is, again, very analogous, I think, to foundation models.
You know, these are very powerful, very complex, very novel technologies without a lot of precedent for how risks may originate and proliferate. Another feature the FDA model provides is this notion of gated access or gated approvals before a drug or software device is approved for market entry. That takes place all the way from the early design stages. There's a relational dynamic between the FDA and a developer, in which the developer has to meet particular burdens of proof around the efficacy of their AI system or their software or device and the potential risks that it may pose. Is it leading to an adverse impact on particular demographics? Is it causing harm? Does it have any kind of side effects that might be causing particular issues? As the FDA process goes along, it requires this continuous testing and continuous learning approach to the impacts of these systems before
it is approved for market entry. And even, in certain cases with particularly risky software devices, there's a post-market monitoring requirement in which the developer has to undertake some kind of ongoing assessment of the impacts of that system when it's deployed in different contexts. So these are all very analogous to proposals that already exist when we talk about the governance of foundation models, proposals that you see within some of the emerging legislation in the UK and the US, just approached from a slightly different angle. I think one major difference is the notion of where the burden of proof lies. In the FDA model, the burden of proof for market approval lies with the developer: they have to prove efficacy and they have to prove safety. A second is that notion of efficacy. Crucially, you can't just release something; you have to actually prove that it works and that it addresses the problem that you're facing. And that's a major issue with AI, because a lot of AI doesn't work.
A lot of AI is released mainly just to see if it may fix a problem. It may make for an overall beneficial outcome, but there's no meaningful way to actually assess or test for that. And thirdly, it offers a way of doing continuous post-market-entry testing of that system, to really understand what kinds of risks might arise after it's deployed. For foundation models, that's particularly important, because many of the risks of foundation models, and of AI systems generally, will only arise when they're applied in a particular context: when they're deployed, for example, in a classroom, or in a hospital, or to augment or make decisions around banking and loan applications.
It's not until you're in that very context-specific application that you can do a lot of the testing and evaluation required to determine the efficacy and safety of these systems. So in a way, the FDA model provides a lot of unique approaches that you could see applied to foundation model governance, if those were to be replicated in some of the emerging regulatory proposals.
Jason Mitchell
Can we change lanes in terms of harms for a second? Can you frame the environmental impact that AI could represent? Crypto's impact has obviously been well covered, but there's comparatively less transparency around what AI models pose. So first, what are your working assumptions behind AI's energy intensity? I've seen early estimates that OpenAI's GPT-3 emitted more than 500 metric tons of CO2 during training, which doesn't include any of the downstream electronic waste or impact on natural ecosystems. The training for current models, in my mind, doesn't seem egregious. But how do we think about this, I guess, in two dimensions: first, the increase in the number of models, and second, the explosion in parameters? For instance, GPT-2 in 2019 had 1.5 billion parameters, versus Google's PaLM, which had, I think, 540 billion parameters.
Andrew Strait
I see a collision course at the moment between AI and the climate emergency. And I say that because, at the moment, the short answer to your question is we don't know what the environmental impact of AI is, and that's because this information is not reliably tracked or measured, and there are absolutely zero incentives, in fact, if anything, there are disincentives, for companies, developers and deployers to track and measure that carbon footprint. And when we talk about how to track and measure it, it is crucial to think about the different ways in which emissions might arise from these applications and technologies.
One is embodied emissions, which go into the creation of the computing hardware, the chips, the raw materials that are behind these kinds of systems. You can then think about dynamic consumption, which is the amount of energy required just to run or train the model. And then lastly there's idle consumption, which is the amount of energy needed to maintain the infrastructure when it's not in use, or to keep the production systems live. At the moment, many of the projections of the carbon footprint of some of these large models, like GPT-3 and GPT-4, or BLOOM from Hugging Face, are just focusing on the training of the model.
And it's a significant amount. I believe in a paper from Dr Sasha Luccioni at Hugging Face, the prediction of the amount of carbon involved in just training GPT-3 was the equivalent of 111 cars' worth of CO2 emissions over the course of an entire year. At the moment, some of the predictions that we see for ChatGPT suggest that the emissions are about 25 tonnes of CO2 per day, approximately the same as the daily emissions of 600 US people.
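Editor's note: a rough back-of-the-envelope check of the comparison above, under the assumption (not from the conversation) that annual per-capita CO2 emissions in the US are roughly 15 tonnes.

```latex
% Assume roughly 15 t CO2 per person per year in the US (approximate figure).
\[
\frac{15\ \text{t}}{365\ \text{days}} \approx 0.041\ \text{t per person per day},
\qquad
\frac{25\ \text{t/day}}{0.041\ \text{t per person per day}} \approx 600\ \text{people}.
\]
```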
That's all to say that those are just assumptions. We really don't know what the environmental footprint is. The best estimates we have are quite outdated; they're from 2019 reports that suggest the total emissions of data centres alone, as a proportion of total emissions, are between 0.6 and 1%. But it's unlikely that's still the case now, as you've seen bigger models being trained and the motivation that bigger is better really coming into play.
And, as I mentioned earlier, there's this surge in the use of AI across different sectors of society. So if we look at how you move forward from this, one of the main ways of addressing the carbon footprint of AI systems that's been put forward by industry is this notion of improving energy efficiency and using cleaner sources of energy in the training of these models.
That's really focusing on renewable sources, and on training systems in data centres that are closer in proximity to where you are, so that you're not increasing the carbon footprint by relying on data centres that are further away. But that then runs into an issue that is well documented in the transport space, which is something called the Jevons Paradox. My colleague Emily Cluff has just published a paper on this topic which looks at the Jevons Paradox.
This is the notion that an increase in efficiency can lead to a decrease in price and therefore an increase in demand. Just because you're improving the efficiency of energy consumption for AI systems doesn't mean that you'll have the same number of systems or AI training runs being made; you might actually end up justifying even more. So it's a really thorny problem. I think it starts first with requirements for how you even measure and assess the environmental impact of your AI system; having some consistent, standardised ways of doing that would go a long way. But that needs to be accompanied by hard requirements to actually conduct those assessments and to make them publicly available. Until we know what kind of impact this is having, it's impossible to make any sort of policy determinations or assessments.
I think it's very similar in some ways to the lack of tracking of many gun crimes in the United States. It's that lack of data that makes policy discussions very difficult to have, and finding those answers will only happen once we have the right data.
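Editor's note: a simple hypothetical illustration of the Jevons Paradox described in this answer; the figures are invented purely for illustration.

```latex
% Rebound effect: per-run efficiency doubles, but demand grows even faster.
\[
\text{Before: } 1{,}000 \text{ runs} \times 100\ \text{MWh/run} = 100{,}000\ \text{MWh}.
\]
\[
\text{After a 2x efficiency gain: } 3{,}000 \text{ runs} \times 50\ \text{MWh/run} = 150{,}000\ \text{MWh}.
\]
```

Total energy use rises even though each run is twice as efficient, which is the rebound dynamic the discussion above is concerned with.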
Jason Mitchell
Interesting. I'm very aware of the Jevons Paradox. William Stanley Jevons has popped up more than once in a number of episodes, whether it's the EIA or other folks in the energy complex. So yeah, that's interesting, and I would imagine it would certainly apply here. I want to come back to the legal aspect that we talked about in the introduction. I feel like, frankly, this is a bit of a Pandora's box, but I can't help but ask it. Where do you see the bright lines forming for AI liabilities? Do we address this via tort law and a broad sense of duty of care by all manner of stakeholders, meaning creators, distributors, marketers, users, etc., who could be found liable or even negligent for AI issues like hallucinations?
Andrew Strait
So I'll put my hand up and say I'm not a liability expert, I'm not a lawyer. But what I can say reflects some of the research that we've put out around liability, particularly a blog post from Kristian Windhorst published on our website. Essentially, I think the challenge with AI liability is in part around the opaqueness of these systems and how difficult it is to understand what actually happened in the fault or the misuse of a system. When you think about traditional forms of product liability, they often rest on some ability to very clearly identify fault: whether it was a poor design, a faulty piece of a system, or whether it was user error, something the user specifically did.
You can look at how that's played out in the way some of the self-driving car crashes that Tesla has incurred have played out in terms of liability claims. It's so difficult to understand who's at fault. Is it the car making a mistake? Is it personal liability, the driver making a mistake? That is much more opaque and therefore much harder for someone who's trying to bring a liability or tort claim to make a case. The burden of proof that they must meet is very difficult already, and the evidence they would need is probably too difficult to identify.
This is the challenge that the EU is currently taking up in their AI liability directive. They're looking at some of these questions, such as: can you have strict liability, which doesn't require any evidence of fault, just causation of harm, meaning that there's liability? That is something they're suggesting should be applied to high-risk systems, like any kind of system that could influence voters in political campaigns, or recommendation systems used by social media platforms.
Another way that the EU is considering this is the notion of lowering the burden of proof for demonstrating fault. That would require, for example, operators of AI systems to prove they did not breach the duty of care that they had. If it's an autonomous lawnmower, for example, that causes an accident, or a cleaning robot, you would have to prove that it was produced by a reliable provider, deployed for a task it was designed for, and that an adequately trained staff member monitored and maintained it according to the instructions it was given. That's all to say that those are some models of liability. I should flag one other that is coming up, which is this notion of liability within a supply chain.
One forthcoming paper that Jeni Tennison at Connected by Data is coming out with looks at food safety legislation and liability in that domain, where liability often sits with the provider of the food all the way at the downstream end of the supply chain; they hold the liability for its safety. So that's your restaurant, that's your grocery store.
If they have an outbreak of any kind of E. coli or any kind of food poisoning, it's on them, essentially. And that then creates this dynamic in which there is a lot of pressure on that downstream provider to look upstream and ensure that their providers of food are following appropriate safety standards and safeguards. It's a way to create a race to the top around food safety. But the problem is that when you apply that to AI, it's a bit of a different landscape. In the case of foundation models, for example, we're talking about a very small number of providers of these services, really four or five, and that gives them enormous bargaining power over downstream developers.
They can offer these take-it-or-leave-it contracts that might just push all the risk onto you as a user, without any obligation on them to actually do due diligence. Now, it's not clear if they would actually do that or not; I'm not saying that's what OpenAI and Google are currently doing. But it's just to say that when we talk about liability in AI, it's really critical to look first at those issues of information asymmetries, how difficult it is to assess fault and accident in these kinds of systems, particularly when we talk about harms that are occurring at great scale, and secondly at the challenges of supply chains, where you have all kinds of issues around where to place liability within these complex supply chains, which might exacerbate some of the power dynamics that we see in the current market ecosystem.
Jason Mitchell
So by extension, around this legal conversation, what does a strong intellectual property and copyright framework look like for AI in the EU and the UK? The bar for creating new intellectual property from existing works appears pretty low. Does that need to be reset? Can I, for instance, purchase a database of photographs, alter them and redistribute them as original content? I mean, what are the limitations that actually need to be programmed in?
Andrew Strait
Yeah, this is a real thorny question. It touches on, I think, a deeper question, which is: are current IP protections fit for purpose in an age of generative AI, where generation has been made far more accessible to day-to-day users and you can generate images and text that are essentially exact replications of the style of a particular person's work or artwork? It also raises some really messy questions about the contractual basis behind some of these generative applications, about the ways in which many contracts, for example in the performing arts space, have you give up any intellectual property rights over your likeness, your voice or your image. And now many of those contracts are being enforced, or interpreted by studios in a way that enables the complete generation of someone's voice or likeness in video, images, film and media without their consent, attribution or even awareness.
I suppose it gets to a couple of issues. One is about text and data mining. This is, I think, crucial for text-based models like ChatGPT, where the way in which those models have been trained tends to be through the scraping of publicly accessible content. But 'publicly accessible' is a bit of a tricky one, because it's not just any content that's publicly accessible; it's oftentimes content that has copyright restrictions but is available on the web.
One of the biggest training datasets for ChatGPT and for many of these large language models is a database called Books2, and a separate one called Books3. These are corpora of thousands of books that were originally pirated, and books are a particularly high-quality form of dataset for these kinds of applications, precisely because they have a very high quality of syntax and of writing, and oftentimes very creative styles, a bit more valuable than, say, something like Reddit comments.
So this is the challenge I think we're facing with our traditional protections for artists and for those who are creating content or hold copyright. And the big question here is that some of the text and data mining exceptions to copyright law, as we see in Japan and as proposed here in the UK, seem to enable large companies to take that data without consent, without awareness, and to then use it to generate new content. That is, I think, a slightly different issue from some of the right-to-likeness and right-to-dignity issues that we're now seeing play out. So that's an area where I think copyright is not a very good tool.
These are the issues in which you can imagine your voice being completely replicated without your consent, or images or video of you being replicated without your consent. There are very few legal protections right now to prevent someone from doing that without your awareness or consent. So we might need some new kind of legal regime. This is a big question that's currently in the debate.
The other thing I'll mention about copyright is that you have to think about who's benefiting from copyright law and who's losing out. At the moment, with the way these models are being trained, it's not like the benefits of Midjourney are being felt by all artists. It's not like they're receiving any kind of payment or benefit for their hard-earned labour being used to generate images in their style, images which devalue their future work. So if you think about the goal of copyright law, if it really is about trying to provide an ecosystem, a world in which artists, performing artists and writers have their labour properly valued, in which they're encouraged or incentivised to create and they're being fairly paid and rewarded for that labour,
then at the moment it doesn't seem like existing legislation is fit for purpose. So that opens up a whole big question: what should a future copyright regime look like? What changes might need to be made? This is the hardest question. I could not answer this for you, and I guess…
Jason Mitchell
I had no expectations about it.
Andrew Strait
But to me, it's, I think, one of the thorniest issues that we'll see play out over the next few years, because it fundamentally gets to the business model of many of these large generative model providers. If there is an obligation that you need consent or some kind of contractual agreement to use someone's data in those models, then suddenly that massively restricts your ability to collect and scrape this data. It's an incredibly thorny issue, and I think we'll have to see how it goes in the next few months ahead.
But it is notable that one of the big drivers of that debate are writers and screen actors. You know, the SAG-AFTRA strikes, the Writers Guild of America strikes. They are really pushing that notion of AI being a potential threat to the future of their industry. It's interesting to see how this will play out in the months ahead.
Jason Mitchell
Again, switching lanes a little bit, the question about the coexistence of commercial AI models and open source models often comes up. Anton Korinek, a previous guest who studies the economic impact of AI, said that there's a strong economic argument for open source LLMs. But I guess I'm wondering, what's the counterargument to that? What are the larger societal or even national security considerations that we should be thinking about?
Andrew Strait
This is a major topic in regulatory conversations right now, at the EU and the UK level: should open source be exempt from any of the regulatory requirements that are trying to address and mitigate some of the harms we've talked about? It gets, I think, to a thorny question. One part is: what do you mean by open source? This is a contested term. It's not clear precisely what we mean by an open source application, but broadly how it's being understood is models or software that are openly released and made accessible for anyone to use and to tinker with. They're being released on platforms like GitHub or Hugging Face, and they're made accessible to other members of the software community to then iterate or innovate from.
That raises, I think, a significant trade-off that we've seen play out. When you talk about mitigating some of the risks, it removes one of the gatekeeping functions that we talked about earlier in that FDA model of regulation, for trying to restrict who has access to a powerful model and how they might use it or fine-tune it for another purpose. A great example of this played out in the last few months with a 'hate GPT', essentially an open source version of ChatGPT that was explicitly developed for hate and toxic content purposes, which would just spew very hateful things. You can imagine, you know, 10 million of your racist uncles at a Thanksgiving dinner.
That's basically what this model was aiming to replicate, and it was removed by many of the major platforms like Hugging Face and GitHub because the intended purpose behind that model was to generate hate content. But what happens when you have a model that doesn't have the intended purpose of generating hate content but has that as a capability? You know, ChatGPT is released in a way where it has a range of trust and safety restrictions on what kinds of prompts you can send to it and what kinds of outputs it's allowed to send back. And some of these open source models are akin to ChatGPT, but they're just lacking those kinds of toxicity controls.
So it gets to that question of, you know, with open source applications you're kind of relying on the good graces of developers to appropriately release and use them. That said, it also comes with another trade-off, which is the ability to easily access, audit and evaluate a model for performance issues or toxicity issues. One of the values of openly released models is that it makes it far easier for anyone in the community to audit a model for particular issues. And if you think about one of the challenges when we talk about model evaluations right now, a big one is how you evaluate models for risks or issues that are very context- and locale-dependent. I'll give one example, which is evaluating chatbot performance for non-English languages.
It's very difficult to do this, in part because for some languages there are only, you know, a couple of thousand speakers globally. If you allow a model to be open sourced, in theory you allow a wider range of people to assess, evaluate and test that model for particular safety or performance issues. So in some ways it reduces the potential for some kinds of risks to occur. So that's the trade-off I think we see. You know, there is an economic argument, which is that, again, many of these models are being developed by the largest, most powerful select few tech companies, which are almost all US-based.
From a competition angle, that concentration raises a challenge, and an open source community might be a way to challenge those existing power holders. But as Sarah Myers West and Meredith Whittaker have pointed out in a wonderful paper about the limitations of open source, that's a bit of a simplistic narrative. Open source has in many cases been co-opted by large platforms and large tech companies like OpenAI and Google as a way to make their products better and essentially rely on a form of free labour. It's a really tricky dynamic when we look at how open source has played out in some contexts. So I say all this just to flag that it's a bit more complex than 'is open source good or bad'. I don't know if anyone can say that. It comes with certain trade-offs, certain benefits and certain tensions. But I think what is safe to say is that when you think about how to regulate for some of these national security issues, exempting open source completely raises a bit of a challenge, in that you're removing one of the best levers for mitigating these risks from the table. It essentially relies on the willingness of individual developers to comply voluntarily with those regulations.
Jason Mitchell
I wanted to ask you, just given your experience in this space: how do you design an AI ethics board that works? What are the essential ingredients, since, frankly, they seem so prone to failure? I mean, Meta's Oversight Board looks promising at this point, but famously, Google's ATEAC was shut down only one week after its announcement.
Andrew Strait
Yeah, the history of ethics boards in the tech industry is not one of great successes, I would say. If anything, it's one of monumental failures. I think there are a few issues here. One is the purpose of an ethics board, which is a big, big issue. Is it to ensure a meaningful form of accountability from an external body, or is it just to provide an alternative view on what you should do, which you can then discount and ignore? I can't remember precisely with Google's ATEAC, but if I remember correctly, I believe it didn't have any binding authority. And the same is true of Meta's Oversight Board: there's no binding authority for the Oversight Board to require Facebook or Meta to follow any of the findings that it makes.
So that's one issue when it's seen as an accountability measure. It's a way to do a sort of soft accountability, to offer a public critique of a particular decision the company has taken, but it's not necessarily binding in the way that a hard regulatory body's decision might be. A second issue is one of scope: what is in scope for these kinds of ethics boards? In the case of the Oversight Board, it's looking at specific content moderation decisions that Meta has made. It's not necessarily looking at the overall policies Meta has in place governing those decisions, or even at features of Meta's products that might enable certain kinds of harms to arise. For example, Facebook Live, which enables live broadcasting at the user level: there's been a whole slew of really horrific violent content shared that way, along with copyrighted content. The board has no power to critique or challenge those design choices or business choices that Meta has made.
The third thing that comes up, and this is definitely true of the Google board, is who's on it. A board is in theory meant to be this kind of voice of the external public over the decisions that a corporation is making. But if you choose people from one particular region or with a very particular political slant, people who don't represent the average person or the groups who are highly impacted by your technology and who are very often excluded from decisions about how the product is designed or deployed, then you're creating a bit of a false sense of accountability. And I think that was one of the challenges ATEAC faced when it was first launched.
One of the final things I'd say on this is that while oversight boards or ethics boards are not necessarily the strongest form of accountability, they can offer a valuable tool in the toolbox. And one of the things I find very depressing is that there have been a couple of really disastrous attempts at these boards, which has really killed any motivation to try them meaningfully again. In my mind, it's more a question of what these boards can accomplish and what they're good for. I think forums of public engagement and participation run by tech companies are a way in which you can better understand how different publics, or the people who are impacted by a technology, are experiencing it, and what kinds of risks or issues they might face. We've done a paper recently on participation in AI labs. It's something you'd like to see more of, but unfortunately, many of these ethics boards have not been an effective way of doing it.
Jason Mitchell
I want to finish off on the labour element, which you alluded to earlier in the episode. I'm curious, particularly from a Global South perspective: there have been some troubling signs that generative AI is creating an underclass, essentially a subaltern of Mechanical Turk-like crowd workers and annotators who label the data used to train LLMs. And I remember in one of our first conversations, a presentation you made, you made the observation that AI regulation is essentially labour regulation.
Can you talk more about that?
Andrew Strait
Absolutely. Many of the harms that arise from AI systems are not necessarily harms distinct to AI; they're harms that relate to a lack of institutional robustness or protections within society that go far beyond AI. In the case of labour harms, many of the harms arise from the training process, which involves the collection of large amounts of data and then, crucially, the annotation and cleaning of that data to ensure it's of a high quality. For some of these datasets, that involves humans evaluating and assessing vast swaths of toxic, terrible, racist, offensive content or really gruesome images in order to pull those out of the dataset, or at least to flag and annotate them as toxic content.
It's a very painful, very emotionally traumatizing form of labour. And I speak as someone who used to have to do that: for many years, part of my job at Google was dealing with child safety concerns and annotating those images before they were removed. It is incredibly difficult. That is made far worse by the fact that companies are trying to do this at reduced cost, and many of the companies offering data annotation services are based in Southeast Asia or in parts of Africa.
They are offering services for a fraction of what the same work would cost in the United States, and usually in jurisdictions that have very weak labour protections: very weak limits on the amount of time annotators spend on this very traumatizing content, and very low pay. In the case of ChatGPT, that's precisely what happened.
It came out from some brilliant reporting by Billy Perrigo at Time magazine that Sama, a content moderation company that operates in Africa, was behind a vast amount of the annotation of toxic, harmful content. In speaking with many of those same moderators, his reporting demonstrated that they have had ongoing mental health issues from that labour and are now essentially left completely unsupported. It's a real issue. When we talk about the ways in which you remedy that harm, it is not one that is going to be solved solely through AI regulation. It's a matter of labour protection and, crucially, international labour protections: requirements on developers to follow labour practices of a high quality. In the same way that we talk about labour, it's also an issue with some of the kinds of harms that arise downstream. I mentioned right-to-dignity issues; that's not really an AI issue.
That's an issue of there being, in most regions, no protection to prevent someone from misusing your likeness or your voice; it's a gap in our current regulatory framework. So as AI creates these new capabilities and makes them more accessible, putting them more in people's hands, it also needs to come with a much wider suite of regulatory practices and protections that ensure the use of AI is beneficial for all members of society, all people across the world, not just those who are fortunate enough to have access to the technology itself.
Jason Mitchell
Got it. So, it's been fascinating to discuss how to think through the typology of AI harms, what to make of the different national and supranational efforts to regulate AI, and why the development of strong AI governance systems is in everyone's interest. I'd really like to thank you for your time and insights. I'm Jason Mitchell, Head of Responsible Investment Research at Man Group, here today with Andrew Strait, Associate Director at the Ada Lovelace Institute. Many thanks for joining us on A Sustainable Future, and I hope you'll join us on our next podcast episode. Thank you so much, Andrew.
Andrew Strait
Thank you so much, Jason. Great to be here.
This information herein is being provided by GAMA Investimentos (“Distributor”), as the distributor of the website. The content of this document contains proprietary information about Man Investments AG (“Man”). No part of this document nor the proprietary information of Man herein may be (i) copied, photocopied or duplicated in any way by any means or (ii) distributed without Man’s prior written consent. Important disclosures are included throughout this document and should be used for analysis. This document is not intended to be comprehensive or to contain all the information that the recipient may wish to have when analyzing Man and/or their respective managed or future managed products. This material cannot be used as the basis for any investment decision. The recipient must rely exclusively on the constitutive documents of any product and its own independent analysis. Although Gama and their affiliates believe that all information contained herein is accurate, neither makes any representations or guarantees as to the conclusions or uses of this information.
This information may contain forecast statements that involve risks and uncertainties; actual results may differ materially from any expectations, projections or forecasts made or inferred in such forecast statements. Therefore, recipients are cautioned not to place undue reliance on these forecast statements. Projections and/or future values of unrealized investments will depend, among other factors, on future operating results, the value of assets and market conditions at the time of disposal, legal and contractual restrictions on transfer that may limit liquidity, any transaction costs and the timing and form of sale, which may differ from the assumptions and circumstances on which current perspectives are based, and many of which are difficult to predict. Past performance is not indicative of future results.