Daniel Emmerson 00:02
Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI infused world. Today's guest is Suzy Madigan. Suzy is a human rights specialist and aid worker who drives critical conversations about AI's impact on society, communities and humanitarian action. In collaboration with Accenture, she co-authored the groundbreaking report AI and the Global South, exploring the role of civil society in AI decision making. The research amplifies Global South voices in 12 countries, supplemented with perspectives from multinational technology companies and international aid agencies. It provides insights into how we can increase civil society participation in AI development and governance to make AI more inclusive, including how to create equitable connections between technology companies and communities to make AI more ethical and simultaneously more effective. With 15 years of humanitarian experience in programming and policy across NGOs, governments and the United Nations, Suzy has worked in nearly 20 crisis contexts such as Northern Iraq, Haiti, Ukraine, Uganda, Colombia and Lebanon. Previously, she worked in strategic communications in the private sector. This combined experience has shaped her global and cross sectoral perspectives on AI. Through her thought leadership and regular blog post series on AI and society, The Machine Race, Suzy brings a vital perspective to ensuring advances in AI benefit those most often left behind by technological progress. Suzy, it's amazing to have you with us today.
First of all, Suzy, it's wonderful to have you with us on Foundational Impact. Thank you so very much for being here. I'm mindful that you're an incredibly busy person. I'd love to know a little bit more and I'm sure our audience would as well. At the outset, just a bit about your work with CARE International UK and your interest in AI generally speaking.
Suzy Madigan 00:49
Absolutely. Well, thank you so much, Daniel. And it's brilliant to be here. Yes, busy, but probably not as busy as anybody working in the education sector, for whom I have enormous regard. So I'm responsible AI lead at CARE International, which is an international humanitarian relief and international development organization. We're in 109 countries worldwide, and we're almost 80 years old. My background is that I'm also an aid worker, previously (well, still, but previously not looking at AI), so deploying to lots of emergencies and conflict contexts and working with communities to support them in response to crisis. My interest in AI really came about three or four years ago now. I started realizing that there was this incredible movement happening, and this was pre-ChatGPT, that was actually impacting quite serious areas of our lives, but not really so much in the mainstream, or people being too aware of what the implications of that were, and particularly in the humanitarian and development sector. Then when ChatGPT launched and it all went mainstream, I set up my blog series, The Machine Race, looking at the implications of AI development and the governance of AI with regards to communities in the Global South majority, and thinking that we always as aid workers need to understand the political, social and economic context of where we work. AI is completely integrated into all of that, so what are the impacts of that for communities? What's the impact on inclusion? And if international organizations in particular are thinking about rolling out AI systems within humanitarian and international development response, what are the implications of that, and how do we make sure that communities are involved in all the decision making around that?
Daniel Emmerson 03:06
It sounds fascinating, Suzy, and just picking up on some of the things that you mentioned there. First of all, I'd love to know a little bit more about your deployments, if that's okay. I know that might not specifically revolve around AI, however, throughout the course of this series, we've been talking to folks who work in so many different sectors and we're finding that AI is encroaching, of course, not just in education, but in so many different parts of life, learning and work. I think to contextualize that from the perspective of CARE International would be great to know about what does a deployment actually look like? Where have you been and what have you done there?
Suzy Madigan 03:46
Well, many, many places. Across Africa, the Middle East, Latin America, from South Sudan to Haiti to Iraq to Mozambique to Guatemala, lots of different places. So working a lot in conflict contexts. Previously I was working on the reintegration of armed groups in Colombia, working with demobilized guerrillas and paramilitaries and looking at integration back into society to create greater peace building within societies. And similarly in Haiti with the UN, working with former armed group members; community violence reduction is what we refer to it as there.
Daniel Emmerson 04:36
Restorative justice. Is that what we're talking about here?
Suzy Madigan 04:39
No, it's more about how you try to create stability. It was actually the UN stabilization mission, creating stability within a violence context so that there are opportunities for people rather than going into an armed group, and also working with people who have been victims of violence to make sure that there are opportunities and protection in place for them as well. So it's a holistic response, also with community policing and things like that. But from a humanitarian perspective, that's more on the life saving side, the humanitarian life saving responses in crises. That's the work that I've been doing when I've been with international NGOs, and a lot of that is about what happens when a disaster strikes. For example, let's take the Beirut port explosion in 2020, compounded on top of the COVID crisis. CARE International is very much focused on the gendered implications of crises, particularly having support for women and girls and different kinds of vulnerable groups, and having an intersectional lens: understanding how a crisis impacts different groups differently and that people are starting from different points. So there, amongst other things, I was working on a process that CARE and others deploy, which is called a rapid gender analysis. That means going out and having focus group discussions and interviews and so on with different groups, from older female refugees to members of the LGBTQ community, and understanding the very different needs, and also skills and needs for different outcomes, amongst those groups, so that we can then create recommendations for the humanitarian response and humanitarian actors can tailor their response accordingly.
Daniel Emmerson 06:51
So are you organizing those groups, or is that something that happens locally? And then you go and conduct research with those groups? What does that process look like?
Suzy Madigan 07:01
Yeah, I mean, I think the localization piece is absolutely critical, and this is actually the perspective with which I come to all conversations about artificial intelligence. We know that historically, and still ongoing, when we think about humanitarian assistance, there has been a real colonial model, the white man's burden and so on: Westerners who look very much like me flying over to crises and thinking that they might have all the solutions and, with good intentions, imposing their ways of operating onto communities who are, in fact, the best placed to know exactly what is needed. So that approach to decolonising aid, as we refer to it in the sector, how we increase that decision making, how we try to increase funding going to local actors and so on, is a really critical part. And so when we go into a crisis context, we work with local partners in order for them to be able to say to us, these are the kind of people that we need to be speaking to.
Daniel Emmerson 08:26
Right. Okay. I mean, this, this gives us some context for what we're about to talk about on the AI side. But this is going to be quite far removed from some of the previous conversations we've had. If we take you mentioned South Sudan as an example, what happens while you're there on the ground? How much time are you spending there? Who are you speaking with? What are the outcomes that you're looking for? And what does a completed project, I suppose, in that regard look like?
Suzy Madigan 08:53
Yeah, that's difficult to answer because every single context is different. Every crisis is different. So what any individual is doing or what any group will be doing will be totally different in every context. And there's a big difference, for example, between different kinds of humanitarian response. For example, we're in Gaza and the West Bank, and that's very much life saving support, working through very brave partners and our local staff at the moment to provide food, hygiene facilities, shelter and so on. In other contexts we might be working on climate change resilience programming. And in all contexts that CARE works in, it's protection of women and girls. That's also very broad, but we do a lot, for example, on prevention of and response to gender based violence. So it really depends on what the context is and what the needs are. But in answer to your question, the critical thing is always to embed any programming with that locally led approach so that we're working with local partners. One thing that we advocate for a lot is working with women led organisations and women's rights organizations, because obviously women and girls are half the population and their needs are so different to men and young boys, who also have important needs as well. But historically, unfortunately, women and girls do tend to fare much worse for various different reasons in crises, whether it's in food insecurity, where they're eating least and last, or gender based violence, which traditionally impacts women and girls much, much more. So there are these particular needs.
If you don't have the right people speaking up and saying, on behalf of this particular constituency, these are the kind of needs that are really important, then we're not going to tailor our response appropriately.
Daniel Emmerson 11:13
I'm interested in the connection between the work that you were doing in your deployments and the report. So your report focuses on the role of civil society within the context of AI and the Global South. I'm keen to know about how that happened. Right, so you mentioned earlier that you became aware as AI technology was starting to become more mainstream in lots of different sectors, that it was important to address this from a humanitarian perspective. So how did you go about doing that?
Suzy Madigan 11:46
Yeah, absolutely. So as a humanitarian worker, centering civil society and thinking about the implications of things for communities is everything; that is the driving force and objective. I guess three or four years ago, when I started becoming aware of AI, and then, as you say, with ChatGPT going mainstream, that's when I thought, okay, this is a real opportunity now to start encouraging international NGOs to center this as a conversation and to start using the privileged position that we have, with those international connections and that international stage, to step up and start amplifying the voices of Global South civil society actors in relation to artificial intelligence. This is an ongoing issue: when it comes to very big questions, taking AI governance for example, there are lots of conversations going on in international fora, and consistently we don't see Global South civil society, sometimes not even Global South governments, in the room making the decisions or having their say. Now, within the Global South majority, there are of course plenty of people trying to say really important things and doing some incredible work around AI, but they're just not always given the space or the decision making power to be able to have the impact that they deserve and should be having. So with all of that in mind, I thought, okay, international NGOs have an opportunity, and I would say a responsibility, to use the privileged position that we've got to start amplifying those voices. And a start point for that is doing some research: reaching out to civil society organizations in several countries and asking them, what do you think are the risks, opportunities and gaps in relation to AI?
What do you think are the pathways to more equitable inclusion of civil society through the AI life cycle, I mean the development life cycle, and also in governance fora? And I thought a start point for that is actually for an international NGO like CARE International to partner with a big tech multinational like Accenture, because bringing those two kinds of entities together means that we have to bring business into this conversation. It's about bridging the conversation between the lived experiences of Global South civil society players and those who are predominantly creating AI at the moment in the West, in the Global North, and of course centering within that paper the voices of Global South civil society.
Daniel Emmerson 15:03
So the paper focuses on four strands, and we had a brief conversation about this earlier on. We've got AI literacy, increasing local decision making, strengthening advocacy, and improving digital infrastructure. Each of those, of course, is its own unique and fascinating strand. Just on that last point, this is something that has come up in many conversations we've had with partners in many different parts of the world: this conversation around access and what tools might be available, particularly when thinking about digital infrastructure. I'd love to know from your perspective, what are the main barriers there, and how might these be addressed, both from a humanitarian and also from a commercial perspective?
Suzy Madigan 15:57
Yeah, I mean, this is such a hard one, right? For all the conversations we can have about the softer skills, for example around how you improve AI literacy or how you increase advocacy, the things that might be slightly cheaper, the really tricky one is how we improve digital infrastructure and also equitable data governance, both absolutely huge topics. How do we get to the point where artificial intelligence can actually overcome digital divides rather than widen them? That is potentially a real danger. At the moment, a third of the global population is not online. When we were speaking to civil society organizations, I remember people from Colombia telling us, well, a lot of the people that we speak to in rural areas are struggling to even have electricity, and women in particular won't even have mobile phones. And then there's the data side of it. We spoke to a local humanitarian organization in Yemen, and that person was telling us, well, if you think about the data side of this, when basic needs are going unmet, getting data from vulnerable communities is going to be really hard, and he couldn't see how their inputs were going to be incorporated into systems. And then we have an ongoing issue of the data that AI is being trained on not being representative. And that's before we even get to issues of data sovereignty and privacy and that kind of thing. So it's huge.
When we spoke to several tech multinationals for this research, the challenge as to why there isn't greater investment sometimes is around perceived risks like economic instability, regulatory uncertainty and high R&D costs, which can deter big tech from investing in tailoring AI models, for example, to specific markets and communities. With that, there's going to be a shortfall in models that address the specific needs of Global South majority users to improve social outcomes there. So there's a bit of a vicious cycle here. Unfortunately not my words, but one of the quotes from one of our technology participants was that the business case on AI impact in the Global South is just not present yet. So without going into all of the nitty gritty about cloud and AI adoption and things like that, clearly there needs to be a rethinking of how investment is made. I come from the humanitarian and development sector, so certainly we would be recommending to donors that there needs to be some support for national efforts to establish basic connectivity. Part of that could be through equitable public-private partnerships, but also working with the relevant authorities and so on to support capacity building around responsible, equitable, interoperable data governance. Your listeners might be interested in checking out what's in the UN Global Digital Compact that was recently adopted, which is a really helpful document.
It's non-binding, unfortunately, but it has a real call to action, not just for states but for stakeholders across businesses and civil society as well, to come together to try and address some of these issues. But obviously that infrastructure part is a real piece for governments and businesses.
Daniel Emmerson 20:26
So when thinking about civil society organizations and how they're using AI tools in different parts of the world, is there anything on that you might be able to speak to just in terms of giving us a handle on what best practice might look like?
Suzy Madigan 20:41
So a lot of the civil society organizations that we spoke to are already using chatbots and things like that. That is probably the main thing that the Global South majority CSOs were talking about. More widely, there are international NGOs who are trialing various tools; chatbots are probably the most common. AI is mainly being used in the back office by organizations, for productivity gains and things like that. In some places there are use cases of predictive analytics, and I think that's one where it can be a lot more potentially problematic. When AI systems are being used and they're community facing, obviously the risks related to that multiply hugely. So in terms of best practice, this is a huge question, right? How do you make sure that any AI tools being rolled out amongst vulnerable communities are safe? There's a huge amount of work that still needs to be done on that, and many organizations have not yet done it, and I'm talking particularly about international NGOs here. There is a sense of FOMO, fear of missing out, so a rush to pilot cool looking, innovative AI tools. But first of all, the really important thing is to get governance structures in place, and an understanding through the organization of how you actually operationalize your governance and ethics framework and your basic do no harm framework when you come to deploying AI in these contexts. And that requires a lot of analysis. It requires multidisciplinary teams to really understand what the implications could potentially be for communities, for women, for people with disabilities; they're very different.
And that work is not really being done yet and needs to urgently be put in place before these tools get rolled out. Even with chatbots, you might think, well, how much harm can that do? But imagine a survivor of gender based violence asking a chatbot whether or not they are to blame for what has happened to them; the answer could lead somebody to extreme distress and potential danger. Or imagine people seeking sanctuary and safety being told where to go to find certain support, and that information is wrong. The implications of this are serious. So I think the whole sector is at that moment where people really need to be empowered, and that's where the AI literacy piece comes in. It's not just for communities in different places to have some AI literacy so they can make decisions on whether or not something is safe, or be more critical about certain answers they're receiving; that also goes for international NGOs. And the other literacy piece is for technology companies to have greater literacy about the lived experiences of people in very different contexts as they're developing AI systems, so that those systems are not just ethical but safe, and also effective.
Daniel Emmerson 24:39
Suzy, an absolutely fascinating perspective on this subject. So very grateful for your reflections and for sharing your experience. For our listeners, AI and the Global South: the role of civil society in AI decision making is an absolutely mind blowing read, and I encourage everyone to check it out. Suzy, thank you so much once again for your time and your insights. It's been an absolute pleasure having you on Foundational Impact.
Suzy Madigan 25:09
Oh, it's been an absolute pleasure to be here. Thanks so much for inviting me, Daniel, to you and the team. Thank you.
Daniel Emmerson 25:15
That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe; it just means you won't miss it. But in the meantime, thank you for being here, and we'll see you next time.