Equality and Ethics in AI

May 20, 2024

Daniel 00:00

Welcome to Foundational Impact. In this episode, Nebs Arslan and I discuss equality and ethics in AI, and find out what it takes to build a successful nonprofit in this space. This is going to be fascinating. I'm so glad you can join us.

Voiceover 00:18

This is Foundational Impact, the podcast from Goodnotes for the Good Future Foundation, a new nonprofit that aims to empower teachers and schools to thrive. Let's get into this episode right now.

Daniel 00:32

Hello everyone, and welcome to Foundational Impact. My name is Daniel Emmerson and I'm both the academic affairs lead at Goodnotes and the executive director of Good Future Foundation, a brand-new nonprofit that aims to empower teachers and schools to thrive in an AI-infused world. Foundational Impact sets out to explore the trials and tribulations of building a nonprofit from the ground up, while also investigating the changing world of technology in life, learning, and work. I'm delighted to be joined here in our second episode by Nebs Arslan, who serves as legal director for the Global Partnership Office at Women in AI. Women in AI is a nonprofit organization that works towards gender-inclusive technology and equal opportunities and has over 40,000 members. Nebs is also general counsel at Goodnotes and has previously worked for companies listed on the London Stock Exchange, leading their compliance, ethics, and AI functions by incorporating different technology solutions and artificial intelligence. Nebs, welcome to Foundational Impact. It's wonderful to have you here.

Nebs 01:42

Thank you so much, Daniel, and thank you so much for the introduction.

Daniel 01:45

I'd like to start, Nebs, if I may, by just asking you a little bit about your background, particularly concerning technology and artificial intelligence. Where did this interest come from and what direction did it initially take you in?

Nebs 01:58

So I started my career in a traditional environment as a lawyer. I took the conventional route to qualify, which meant I did my training contract in private practice and did all the traditional seats: corporate, commercial, probate. When I was training to become a lawyer in England and Wales, we didn't have the option of a seat within a technology department. We did have commercial, and commercial meant you could work with entrepreneurs or companies investing in different jurisdictions, so I was involved in preparing commercial contracts, commercial deals, and technology transfers. I was always interested in technology, but I didn't know you could combine that with law. I have a disabled brother who uses technology to meet his needs, and I was fascinated and really wanted to learn more about how technology can change someone's life. Through my brother, we looked into various tools and technologies he could use so that he didn't have to rely on support from third parties and could use technology to have a quality life. And it really did help him, so that's where my fascination began. I wanted to work in an industry where I knew I could make a difference. So I started my career in the in-house world as a lawyer, moved from private practice to in-house in 2015, and never looked back. It has always been in technology companies, or companies with a division where I would sit and work with product, design, and software engineers.

Daniel 03:51

So let's talk a little bit then, about artificial intelligence specifically, because this is a field that has really gained a lot more prominence over the course of the last year or two, not only in industry, but also in education. I was wondering if you could tell us a little bit about how that is impacting the world of law and ethics and regulation, just to give our listeners a little bit of context around some of the broader aspects of artificial intelligence.

Nebs 04:18

Of course, AI has been around for years; it has been part of our lives for a long time, often quietly in the background. Generative AI is the new buzzword, it's in our dictionary now, and we are all aware of what it can do; it's this very clever technology. Since November 2022, with OpenAI's launch of ChatGPT, everyone has become familiar with what it does, what it can do, and how it can change people's lives. In terms of the legal and regulatory side, lawmakers and policymakers haven't caught up with what they can do to protect users, and they're still trying to understand and regulate it. For lawyers at the moment it's the same: lawyers in private practice or traditional environments are still struggling to understand what they can do and how they can work with their clients to use the technology in a better, safer way. Policymakers will need to work with engineers and leaders in the AI industry to really understand how to make it more ethical, make the use cases safer for the end user, and understand how it can affect someone's life. Lawyers tend to be very traditional, and they haven't really started using AI much in the legal industry. Where they do, it's mainly to fill gaps in administrative work rather than for drafting, writing emails, or templates; or, where firms have a large knowledge pool, it's used to pull all that data into one place so they can find the information they're looking for, rather than reinventing the wheel. In the education space, I think we're in a very interesting moment. I feel AI can really bridge the gap for underprivileged students who don't have access to the same resources as someone in private education. Those in state schools can be in their own room, in their own house, studying the same subjects without having to have extra lessons after school, using a very simple, cheap, interactive AI tool to educate them. At Goodnotes we're working on something amazing which we think can bridge that gap, so that students from underprivileged backgrounds can have access to the same education and the same tools as someone in a private setting. So I think it can really work in the education sector. However, we should make sure that everyone in the AI and technology industry is working together with the policymakers, so that anything that is distributed is done in an ethical and safe way, and we try to eliminate any biases.

Daniel 08:00

Thanks ever so much, Nebs. You mentioned that generative AI is a bit of a buzzword; gen AI is something that gets bandied about quite a bit, and so is the word ethical. In a lot of cases you'll see new companies, new tech companies in particular, popping up claiming to be an ethical provider or offering an ethical solution. What does ethical AI really mean, and why does that matter?

Nebs 08:28

In layman's terms, ethical AI means that the developers and decision makers building these tools think about not discriminating against someone from a different background, someone of a different color, or someone of a different faith, whether they are Muslim, Jewish, or anything else. You build your tools in an ethical way and you have guardrails in place to ensure that. So we talk about ethical development, but that starts from the top. You have to think about the ethical side when you're building your tools and include everyone you think should be included, so you're making it very inclusive at the design stage, and you think about ethics from the outset, not once you've built the tool. One of the things you have to think about is safety. How do you make AI safe? If we're talking about an education setting, how do you make it safe for students? If an AI is going to be used to evaluate someone's grades, and someone has already built that, how do we ensure that the developers working on that tool are eliminating biases, building things ethically, and using quality datasets to get there? By datasets we mean the data that's available to us, and it has to be quality data. For that, you have to do a lot of research and have users from all walks of life, not just one group; you don't only use data from students in a privately educated setting. You bring in students, you bring in researchers, you bring in professionals, you bring in policymakers to work on an ethical way of building an AI tool. Another example would be when facial recognition technology was introduced: it wasn't built in an ethical way, so you had a lot of people who were wrongly identified, and it had to be pulled back. That happened in the UK, and we realized the police weren't using it properly and it wasn't built in a way that was ethical. The same applies to an education setting.

Daniel 11:00

I think there's a lot of concern in the education sector about compliance and safeguarding, and understandably so. If schools are bringing these tools and this technology into their teaching and learning environments, they want to make sure that the people using them are well looked after, and that also, I would have thought, includes things like data privacy and intellectual property. Is there anything you think schools, or indeed any other organizations, might do to ensure that the technology they're using has been built in this ethical way? Is there anything they can do to check that?

Nebs 11:36

Of course, as a lawyer, one thing I would advise all businesses to do when they deploy AI in their settings, and that includes education providers and institutions, is to look at the licenses. The license is an important document to review: who owns the IP, the intellectual property rights? Do you pass that over to the provider, the deployer of the AI tool? What happens to your data? Data at the moment is gold for most companies. They need that data to train the model; without the data, they can't develop the tool. But you shouldn't be freely giving away your data. You should be mindful, especially in an education setting, of where that data goes. Where does it sit? Who is going to have access to it? One other important topic to review in that license is retention: how long do they keep the information, the prompt and the output? Most companies have now added a term to say it's 30 days, and after 30 days it's completely removed, although it depends on the tier you go with. Consumers are becoming smarter about the use of AI and these tools, especially on the privacy side, so they tend to look for companies that offer a retention period where data is deleted immediately. Companies have realized they need to do that; they need an ethical and safe way of building these AIs to sell them, and safety and ethics have to be at the core of implementation and design. That's the only way they can really get the buy-in of consumers. So yes, before deployment, that would be my first piece of advice to businesses or education institutions.

Daniel 13:43

Let's talk a little bit now, Nebs, about Women in AI, which is a nonprofit organization that you do a lot of work for and have been involved with for a fair bit of time. Could you tell us a bit about what that organization does and why the mission for Women in AI is important?

Nebs 14:00

I became involved with Women in AI in 2021, and that was through my work at another company, Babcock International, a listed company in the UK. They work in the defence sector and also provide emergency services and rescue missions, for example for people stranded at sea, or in the unfortunate incidents of people making crossings, especially asylum seekers; you wouldn't believe it, but you would have a rescue mission to rescue these individuals. I was working on that mission when I became aware of Women in AI. I remember being in a room of developers where there was no women's representation, no women really participating in how to develop these tools to make them more inclusive and ethical and to eliminate biases. We were talking about health and how to build tools for doctors, and again there was no women's representation. For me it felt like, well, if we're making decisions about all sexes and what healthcare they should have access to, should we not have women developers working on this as well? Then I became aware of Women in AI, was invited to join them, and have been with them since 2021. Since then, what I have been involved in is finding the right partners for Women in AI, partners with whom we could really drive that message. When Women in AI launched in 2017, the core of their mission was to eliminate biases and to have more women's representation in the AI world, including at the development stage. Women in AI conduct activities around the world: they partner with the likes of Goodnotes, they have ambassadors around the world giving a voice to those in disadvantaged societies, which was part of the core mission as well, and they work on various AI projects, with the EU Commission for example, partnering with governments around the world to shape policies on AI and ethics and to make AI more inclusive. Although the mission started with having more women's voices, I think we've now come to a stage where we don't discriminate: membership is growing in terms of having more male voices around the table as well, and I think that's great. And yeah, that's the mission of Women in AI.

Daniel 16:48

We've been lucky enough to have done some work with Women in AI in the past. At Goodnotes, for example, we hosted a virtual roundtable on the difficulties that young girls in particular might face in accessing STEM-focused curricula, as well as employment opportunities for women in the tech space and indeed in AI. I was wondering, Nebs, if you could tell us a bit about your experiences with that, and perhaps what you might say to women who are looking for a career in this space. Any hints or tips that you might want to give?

Nebs 17:25

Obviously I started off in law, which is not quite the same, but I made my transition into a world that is mainly dominated by men. I was of the view that I needed to be in that environment to make a change and have a voice, to really push and influence more women to come in and study STEM subjects. One of the things Women in AI does, through our education department, is push for women in different parts of the world, especially Africa, to take up STEM subjects. We have events where we go into primary and secondary schools and try to show girls why STEM subjects aren't reserved just for boys and that girls can do them as well. We have some amazing ambassadors who come in to talk about their experiences, what they have done, and how they've achieved it, and we also offer sponsorships and scholarships to inspire. My sister was doing data science at university in the nineties, and I remember she was the only female in her classroom. We went to her graduation, and she was the only female graduating in that science department. I was surprised to see that, because it felt like it was reserved only for men, but when she went into it, she realized it was not. At that point it was really hard to break barriers, and she's been an inspiration to me, showing how you can make changes; she has made changes in her own small world. One piece of advice I would give to those looking to do it: don't be afraid, go for it. There are so many different opportunities, and the world is so different at the moment; we're trying to make it more inclusive, and it actually is better than it was before. So hopefully we'll have more representation.

Daniel 19:43

Just one last question then, Nebs, if I may. Thinking about this from the perspective of women and girls looking for opportunities either in STEM or in careers in tech and AI, the responsibility, of course, is on the education system to ensure that these subjects are inclusive enough for everybody and that they're easy to approach and adapt to. And the same goes for employment: it's up to companies to ensure that their working environments are as inclusive and welcoming as they possibly can be. What might you say to companies who are looking to be more inclusive and to do more work, particularly when it comes to encouraging women who have an interest in this space?

Nebs 20:31

I think we're not there yet, if I can be frank about it, but I'm optimistic things will change. Companies at the moment are looking for talent, and unless there's a policy change where they are forced to make sure there's equal representation, you'll probably find that, especially in the private sector, it's very hard to make the change. I think most companies are trying to make a change and a difference in their recruitment systems, but again it comes back to education: educating the decision makers within the company on why it's important to have different voices and representation from people from all walks of life. It's not just women; we're talking about everyone, not just people from one particular or well-known university. People can go to a university that's not in the top league and still do really well. So it's going to take time. It's really hard, because companies can't simply restrict recruitment; then it becomes positive discrimination, where you say, I'm only going to recruit 50% female and 50% male, and you're not really targeting the right talent. On the other hand, there has to be a way where there's no discrimination, where we're open and we're basing our recruitment on talent and skills. That will take time, and I think it will come down to education, to policymakers probably forcing companies, and to having more female leaders who push for change. That's the only way I think we can make a change.

Daniel 22:35

Women in AI are doing some incredible work, and Nebs, you are also doing some amazing work in this space. It's been a real pleasure speaking with you today. Thank you so much for being with us on Foundational Impact.

Nebs 22:48

Thank you so much. Thank you for having me.

Voiceover 22:50

That's it for this episode. Don't forget the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.

About this Episode

Equality and Ethics in AI

In this episode, Nebahat Arslan discusses equality and ethics in AI and the mission of Women in AI. She and Daniel Emmerson also explore the impact of AI on law, education, and inclusivity.

Nebahat Arslan

Legal Director, Global Partnership Office at Women in AI
